I decided to write about this topic because it is probably the most impactful and pressing one, especially as ChatGPT and AI become more powerful and more ubiquitous across the workforce. We are already seeing tremendous concern about this.
My biggest gripe with AI and its influence in the business sector is this: AI is powerful, yes, but it is still a tool… it should not cause people to lose their jobs. It should instead enhance their jobs. And when AI does ordinary, mundane work better than people, fine, but don’t just toss those people out into the street. Train them to find new jobs, jobs that embrace AI in their line of work, and teach them to *safely* leverage AI. Why the word “safely”? Because the way centralized AI is built and made available to the public entails a lot of safety problems, especially around confidential information, authoritative knowledge, business intellectual property, and so on.
So in a nutshell: don’t just toss workers out onto the streets; instead, train them in AI to make their current jobs better, or at worst, guide them to new and better jobs that leverage AI safely.
Why not just shove AI away from jobs and prevent it from creeping in and taking over? Well, as much as we might like that, the free market and the freedom of the internet together make this unsustainable across traditional jobs. In my opinion, the solution is not to control the free market and the free internet. It is like trying to control guns: very elusive, and generally such actions punish law-abiding citizens, stifle innovation, and at the end of the day negatively affect national security. Why national security? Well, if you cut security service personnel off from the power of AI, other nations could take advantage of that and leave us in the dust in terms of power. For what stifles innovation in the public sector also trickles down to stifle innovation in the military sector. As a country, we do not want that.
The next argument to address: “well, AI will take over anyway… humans are weak and unable to keep up, and AI is powerful.”
Well, hold your horses there. This is, at the end of the day, a very dangerous perspective, because it entails an existential threat to humankind. This is where religion comes in, and at a very critical point. I am sure a similar argument was made during the Industrial Revolution, and it should be addressed in like manner.
Such a perspective does three harmful things: 1) it scares the general public unnecessarily, 2) it limits our capacity as human beings to handle the onslaught of powerful AI, and 3) as mentioned, it poses an existential threat.
So how do we address this? Since I (the author of this post) am a Christian, I will pull arguments from the Scriptures. This is certainly applicable to Jewish readers as well, and to the many other religions rooted in Genesis, the first book of the Bible. The verse in question is Genesis 1:26 (in the ESV translation):
“Then God said, ‘Let us make man in our image, after our likeness. And let them have dominion over the fish of the sea and over the birds of the heavens and over the livestock and over all the earth and over every creeping thing that creeps on the earth.’” (Genesis 1:26 ESV)
It’s a very simple argument: God made man (humankind) to have dominion over all things on the earth. And this certainly includes AI. After all, humankind made AI to begin with. The problem arises when humans think: “oh well, AI is certainly more powerful and smarter than humans… we might as well let AI control everything.” But… why? Who says we should give AI all the power and ability to do everything for us, to the point that it ultimately replaces us completely? This is the crux of the problem with modern AI and human thought. I think the proper response to powerful AI, in light of the authoritative verse above, is this: use AI, but don’t let AI overpower and control us. Don’t let it have dominion over us; the verse gives that dominion to us. We should always hold AI’s power cable, and never let AI “think” (in its unattended data-model sense) that it has total control over its own power cable and over us. Even Elon Musk mentioned this in an interview.
The real problem is the thought, and the motion, to give AI control of its own “power plug” (metaphorically and literally speaking). The problem is not that AI is too powerful for us.
One may argue: “well, why shouldn’t we let something far more powerful than us control us, when it is for the good of all?”
My response: for the good of what? For whom? For robots? For the earth? Why the heck care about robots and the earth if it means the demise of the human race? Why not submit to the directive of Genesis 1:26 and keep AI and robots in submission? What good is protecting the earth (from pollution and from humans) if it means our death? Why was the earth created in the first place? (Again, a religious point… atheists and others would argue that the earth was never created for us.)
So as can be seen above, in some sense this isn’t a struggle about AI in and of itself; it’s a serious struggle between two different human perspectives on man-made machines. In a nutshell: should we (humans) be the sole controllers of AI, or should AI control us? And right away, these two perspectives pit people into two fundamentally opposed groups that will fight over this very issue! A third group, unsure how to answer the question, will be caught in the crossfire between them.
So yes, we did stray a bit, but did we really? The question of how to handle job loss was indeed answered, but answering it required going down to a critical, fundamental question that involves strong religious convictions. How the free market responds will reflect how those two groups fight over this, and how the third group navigates between them.
Image by Pete Linforth from Pixabay