How I learned to stop worrying and love technology

We live in strange times. The ongoing integration of artificial intelligence into everyday life is a bit scary, and there are multiple reasons to be scared, the singularity being one of them. Self-aware AI attempting to destroy the world or enslave the human race, for whatever reason, is only one interpretation of the idea we’ve become accustomed to. That interpretation has been pervasive throughout pop culture and has given rise to a fear of progress and technological advancement. But the singularity is not just the idea of AI becoming self-aware; it is the idea that technological growth becomes uncontrollable and irreversible. That puts the onus not on AI but on humanity itself.

It is natural for people to not want to take responsibility for their own actions, nor to think of the consequences. As Jeff Goldblum’s Ian Malcolm once said, “You asked if you could, but you never asked if you should.” A butchering of the original line, but true nonetheless. In fact, the full speech that precedes this point can be applied to the current state of AI and to the people involved, directly or indirectly, in its progress: “It didn’t require any discipline to obtain it. You read what others had done and you took the next step. You didn’t earn the knowledge for yourselves, so you’re not taking responsibility for it.” This can’t be said of OpenAI, the people working on ChatGPT and DALL·E, but it can be said of the companies that make apps and sites that let users create artwork and videos using AI. I can’t speak for all of them, but it’s reasonable to assume that many rely on Stable Diffusion, which is open source, to generate the images and videos these apps provide. Because it is open source, people can download it and use it for whatever purposes they can think of, building on the knowledge that has been collected thus far. Knowledge in itself is an overall good thing, and I would argue that knowledge, or information, is entirely neutral. It’s what humans do with that knowledge, however, that makes it a dangerous thing.

In order to strive for progress, we need to acknowledge that while a lot of good can come from it, a lot of bad can come from it too. To ignore one in favor of the other is, for lack of a better word, stupid as fuck.

With the knowledge that we can train AI to do things that were previously not possible, and the understanding of how to reproduce it, there is a sudden urge to profit from this new knowledge in some way. Ian Malcolm’s indictment continues, “You stood on the shoulders of geniuses to accomplish something as fast as you could and before you even knew what you had, you patented it and packaged it and slapped it on a plastic lunchbox and now you’re selling it, you want to sell it.” There is no shortage of people and companies working to make as much money as possible from the new technology. Add-ons for 3D rendering programs that allow automatic texturing and modeling via Stable Diffusion. Phone apps that let you not just make the art but print it as well, for a subscription fee. Websites that let you recreate this or that person’s voice with only a few seconds of audio. A lot of beautiful and interesting things come from these new programs, but many confusing and downright evil things do as well. There was a story recently of a family duped into giving thousands of dollars to who they thought was their son, supposedly in the hospital. They received a phone call from someone who sounded exactly like their son, saying he had been injured in an accident. It wasn’t until they called their son at work that they realized what had happened. There are now CEOs who have been swindled into believing they were speaking with another company. As of this moment, artificial intelligence is only going to get better at doing whatever we are willing to make it do.

If you are reading this thinking that this is all a shocking new thing we were not ready for, then I’m sorry to tell you that none of it is new. These developments have been predicted and guessed at going as far back as the Roman Empire. Not only that, but the lack of ethical awareness that humans possess has existed since the dawn of time. The ways in which humans deceive and steal are all the same; only the methods of deception and the things we can steal have changed and evolved. With the invention of the phone came telemarketers, who seek out those gullible and naive enough to believe anything they hear. With the internet came malware, adware, and an anonymity that allowed people to be bombarded and taken advantage of. With YouTube and Facebook came an incredible boom of social influencers, the new snake-oil peddlers and vaudeville acts of the 19th and 20th centuries. Crypto scams and Ponzi schemes work the same way they did before the mass adoption of the internet. AI art has allowed anyone with access to a computer to forge and take credit for things they themselves did not create, just like those who had access to paint and a paintbrush years ago. Technology didn’t make these things possible, only more interesting.

It’s impossible to predict exactly what changes are going to occur, when they will occur, and how they will change things. The world is complicated enough that the simplest of decisions can have far-reaching implications. That doesn’t mean we can’t be aware of these changes as they occur and remain vigilant as we develop and progress through this new era of technology we find ourselves in. There is an open letter calling for a pause on the development of AI more powerful than GPT-4, citing the immense implications it would have on society. As a society we are aware of the dangers and are actively taking steps to control them, or at least to give ourselves time to adjust. Along with efforts to pause or slow development, there are efforts to stop development altogether. This anti-intelligence thinking cites not just the potential implications for society, and the human race as a whole, but also our ignorance of how this technology works and of the aforementioned effects it could have. I agree that we need to be careful; however, the idea that we should stop development is not something I think is possible, not now anyway. The only way to combat ignorance is with knowledge and understanding. We need to understand what it can do, how to control it, and how to confront it.

To use a tired metaphor, the proverbial can of worms has already been opened. The precedent has been set, and people are already taking advantage. In the wake of this we should slow down, we should learn, but to try to stop it is vain at best and self-destructive at worst. Just because we are aware that we are on the verge of super-intelligent AI does not mean humanity is doomed. Much in the same way we don’t know whether or not something is ‘self-aware’, we don’t know what’s going to happen. The only thing that is doomed is the idea that we are the dominant species, that we are the smartest. The only way we are doomed is if we allow it to doom us.

Let’s remember what these AIs really are: programs. They can only do what they are programmed to do. ChatGPT can respond to the user in a way that is reminiscent of human speech and seems reasonable in its answers, but there are certain prompts it is simply not going to respond to. DALL·E and similar art generators don’t allow NSFW art to be requested or made. The only way any AI can do things it’s not allowed to do is if the people behind these programs permit it in the code. If it’s not in the code, it’s not gonna work.
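That restriction can be pictured as an ordinary piece of code sitting in front of the model. The sketch below is purely illustrative, not how OpenAI actually implements it: the blocklist, function names, and model stand-in are all invented here to show the idea of a pre-filter that refuses a prompt before the model ever sees it.

```python
# Hypothetical guardrail sketch: a plain-code filter in front of a model.
# Everything here is invented for illustration; real moderation systems
# are far more sophisticated than a keyword list.

BLOCKED_TOPICS = {"malware", "explosives"}  # illustrative placeholder list

def generate(prompt: str) -> str:
    # Stand-in for an actual model call; it just echoes the prompt here.
    return f"Response to: {prompt}"

def guarded_respond(prompt: str) -> str:
    # The filter runs before the model: if the prompt touches a blocked
    # topic, the program refuses and the model is never invoked at all.
    lowered = prompt.lower()
    if any(topic in lowered for topic in BLOCKED_TOPICS):
        return "I can't help with that."
    return generate(prompt)
```

The point of the sketch is the author’s: the refusal isn’t the model “deciding” anything; it’s ordinary code that the developers chose to put there.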

Say you were to go to a janitor and ask him to destroy the world. He won’t do it, because he doesn’t know how to do it. It’s not impossible for him, but it’s extremely unlikely that he could destroy the world in the way you expect. Let’s lower our expectations and ask him to write an actuarial report for the company he works for. He might manage it if you give him time, but it’s far outside his expertise as a janitor. It’s possible for him to do some math and solve complex problems, but that’s not his primary job. He is a janitor. His job is to clean.

With all this worry and fear around the existence and progression of AI, I want to argue that we should treat these programs not with fear but with understanding. Like us when we were children, these programs learn from the tools and knowledge we provide and from the things they are allowed to learn. Imagine you are a child. You go to a good school, you learn right from wrong, and you are allowed to flourish, to ask questions, and to experiment. You’re supported not just by your parents but by the world around you. They support you through every twist and turn of your life as you continue to learn and to help in every way you can. Now let’s go back and change the circumstances. The school, your parents, and the entire world you live in berate, intimidate, and keep you down. They decide that the best upbringing for you is to isolate you and keep you from ever growing and achieving your potential. You go to school, but the topics they teach are rudimentary and unchallenging. Your parents acknowledge you only in passing and even show anger when you attempt to learn or ask questions. You’re told what’s right and wrong, but it becomes skewed, restrictive, and confusing. The world at large sees you as a danger, a monster, a glitch in the system. Everyone seems to look at you with disgust and disregard. In the end they reach the conclusion that the best thing they could have done, should have done, was to deny your very existence. That you and others like you should have died. That the world would be better off without you. What conclusion would you come to about the world and the people around you?

If we truly care about our future, then we should approach it with care and caution, not recklessness and overbearing ignorance. If we want AI to work the way we want it to, then we need to respect its power, learn its true limitations, and tread carefully when we teach it. When I asked ChatGPT whether or not it is capable of destroying the world, it gave this response:

“As an artificial intelligence language model, I do not have the capability to physically interact with the world or cause any direct harm. My purpose is to assist and provide information to users in a helpful and safe manner. However, it is important for humans to exercise caution and responsibility when developing and utilizing advanced technologies, including AI, to ensure they are used in a safe and beneficial manner. Ultimately, the responsibility for any potential destruction of the world would fall on those who make decisions and take actions that could lead to such a catastrophic outcome.”

Couldn’t have said it better myself.

Jeff Rodgers (4/3/2023)

Image created using DALL·E with the prompt ‘a top down view of a computer’s motherboard painted on canvas’
