Note:
The Mysteria is my monthly series on political theology. I’m also currently researching for a book on these topics.
As with many of my essays, this one’s a bit too long for viewing by email and will be clipped. It’s best viewed either on the website (click the post title in the email) or in the Substack app.
It is well-known that an automaton once existed, which was so constructed that it could counter any move of a chess-player with a counter-move, and thereby assure itself of victory in the match. A puppet in Turkish attire, water-pipe in mouth, sat before the chessboard, which rested on a broad table. Through a system of mirrors, the illusion was created that this table was transparent from all sides. In truth, a hunchbacked dwarf who was a master chess-player sat inside, controlling the hands of the puppet with strings.
One can envision a corresponding object to this apparatus in philosophy. The puppet called “historical materialism” is always supposed to win. It can do this with no further ado against any opponent, so long as it employs the services of theology, which as everyone knows is small and ugly and must be kept out of sight.
— Walter Benjamin, On the Concept of History
In response to my recent essay discussing the effects of ChatGPT on publishing, a reader directed me to a video from the Center for Humane Technology. The video is a recording of an hour-long presentation by Tristan Harris and Aza Raskin on the threats Artificial Intelligence poses to civil society and to individuals.
In the presentation, the two men give a broad overview of the “double exponential learning” advances in Artificial Intelligence systems. To simplify a bit: people working on one aspect of AI are almost immediately able to redeploy discoveries made in other fields into their own work. Even though each group works with different kinds of accumulated data (audio, text, video, medical data, etc.), and these teams of engineers work on apparently separate aspects of AI, an “advance” in one area becomes an advance in all of the other areas, too.
This means that the capabilities of AI overall appear to grow extremely fast, faster than anyone seems to be able to track.
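To make that mechanism a bit more concrete, here is a minimal, purely illustrative sketch (my own, not anything from the presentation) of why a single architectural improvement can propagate across modalities when every team builds on the same shared components. All of the names in it are hypothetical stand-ins rather than real libraries:

```python
# Illustrative toy only: how an improvement to a shared component lifts every field at once.
# SharedBackbone and Pipeline are hypothetical names, not real software.

class SharedBackbone:
    """Stands in for a common architecture reused by every team."""
    def __init__(self, quality: float):
        self.quality = quality  # crude stand-in for capability

    def process(self, tokens: list[str]) -> float:
        # Pretend "capability" scales with backbone quality and input size.
        return self.quality * len(tokens)


class Pipeline:
    """One team's system for one modality (text, audio, video, ...), built on the shared backbone."""
    def __init__(self, modality: str, backbone: SharedBackbone):
        self.modality = modality
        self.backbone = backbone

    def score(self, data: list[str]) -> float:
        return self.backbone.process(data)


backbone = SharedBackbone(quality=1.0)
teams = [Pipeline(m, backbone) for m in ("text", "audio", "video", "medical")]
sample = ["token"] * 10

print({t.modality: t.score(sample) for t in teams})  # every team at quality 1.0

# One team "discovers" an improvement to the shared architecture...
backbone.quality = 2.0

# ...and every other team's system improves without any work of its own.
print({t.modality: t.score(sample) for t in teams})  # every team at quality 2.0
```

The real systems are vastly more complicated, of course, but the structural point is the same: when the underlying components are shared, no field has to wait for the others to catch up.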
The primary argument of their presentation is that this is happening so fast that politicians, theorists, and even the engineers themselves do not have the time nor the capability to predict the negative effects of AI’s deployment. Harris and Raskin then cite several examples of how AI’s rapid deployment into consumer technology markets can be seen as quite dangerous.
In one example, Harris displays AI-generated text messages from Snapchat, an application used primarily by teenagers. In response to the user telling the AI about an upcoming date with a 31-year-old, the AI encourages the user to “have fun” and to light scented candles and play romantic music during the date. The problem, however, is that the user’s listed age is 13. The AI appears to ignore this information and gives the user advice on how to make the potentially dangerous situation more romantic.
Another example discussed is that of crimes which have already been committed using AI-generated voice technology. From just a few seconds of digital audio, these systems can generate an entire conversation that mimics a person’s voice. So, that crying child on the phone, claiming to have been kidnapped: is it really your child, or is someone using AI technology?
Perhaps the most chilling citation of potential danger concerns the relationship between fMRI and AI.1 Because of its ability to analyze massive amounts of accumulated fMRI data (linked to what the person being recorded was said2 to be “thinking”), AI now appears to be able to “read” a person’s rudimentary thoughts merely by analyzing the patterns of blood flow in their brain.
Harris and Raskin repeatedly warn that AI is progressing faster than any of us can truly understand, and that the negative consequences of its potential misuses have not yet been addressed by any legal or ethical bodies. One argument they make here, as elsewhere, is that current laws are “incapable” of addressing these problems because they were all written “in the 18th century.” Or, as the prominent citation of E.O. Wilson3 on the website of the Center for Humane Technology declares,
“The real problem of humanity is the following: We have Paleolithic emotions, medieval institutions and godlike technology.”
Neither of the presenters ever suggests that AI “research” shouldn’t continue. On the contrary, they go out of their way to assure their audience, composed (according to them) of “leading technologists and decision-makers with the ability to influence the future of large-language model A.I.s,” that they are not in any way critical of AI itself.
In fact, even though they focus on the dangers of Artificial Intelligence, woven throughout the presentation are proclamations about the inevitability of AI and its potential for societal good. For instance, we’re told in passing that it is “true” and “will happen” that AI will solve climate change. Despite all the other warnings about AI, especially regarding its potentially deleterious effects on society, Harris and Raskin repeatedly show themselves to be true believers in its future promise.
Their faith is particularly evident in their repeated use of the word “researcher” to describe the engineers developing Artificial Intelligence. Researcher isn’t a word we usually see associated with those who build internet or computer technology, but rather with those in social, medical, and other scientific fields. A researcher is typically someone who probes into problems, or into natural laws, or into the depths of libraries, laboratories, or people groups in order to understand how they work. In the case of AI, “researcher” seems to imply that Artificial Intelligence is an already-existing thing that just needs to be understood, rather than created.
This peculiar framing extends also to the way they describe the increases in AI’s capabilities as “surprises” and “discoveries.” We’re presented with a narrative in which these systems are being studied the way one might study an ecosystem or a living species: “researchers” observing behavior and interactions, and discovering unpredictable, miraculous mechanisms that already exist. It’s easy to forget (and it’s almost completely obfuscated) that AI is really just a long string of computer code which humans have written, and are constantly re-writing.
In their favor, the presenters from the Center for Humane Technology generally avoid the overt anthropomorphic language used by more actively involved AI engineers. A short TED presentation given by OpenAI co-founder Greg Brockman a month later provides a more typical example. After demonstrating to the fawning and awestruck audience4 how his product can supposedly learn to understand what humans really intend it to do, he refers to AI as a “child” who needs and deserves collective child-rearing, so we can all make sure it grows up not just intelligent, but also wise.
Brockman’s talk was recorded after Harris and Raskin’s presentation, yet he never directly addresses their concerns. Nor does he address the much-publicized statement issued almost a month earlier (22 March 2023) entitled “Pause Giant AI Experiments: An Open Letter.” That statement was signed by thousands of AI technologists, professors, and CEOs (most notably Elon Musk), as well as by Harris and Raskin; none of the heads of OpenAI, however, added their names.
The brief open letter is worth a read: it outlines the dangers discussed in the much longer video presentation by Harris and Raskin, while also displaying quite clearly a faith in AI’s ultimate ascendancy and potential benevolence. We read at the very end of the letter:
Humanity can enjoy a flourishing future with AI. Having succeeded in creating powerful AI systems, we can now enjoy an "AI summer" in which we reap the rewards, engineer these systems for the clear benefit of all, and give society a chance to adapt.
And beyond its call for a short (six-month) pause in the creation of more advanced AI systems, the letter’s recommendation is that, during this pause:
AI research and development should be refocused on making today's powerful, state-of-the-art systems more accurate, safe, interpretable, transparent, robust, aligned, trustworthy, and loyal.
As I noted, none of the heads of OpenAI signed the letter, though they’ve all made statements regarding the impossibility of such a pause. The most forward-facing leader of OpenAI, co-founder Greg Brockman, responded to the letter with a tweet that ended with the following paragraph:
The upcoming transformative technological change of AI is something that is simultaneously cause for optimism and concern — the whole range of emotions is justified and is shared by people within OpenAI, too. It’s a special opportunity and obligation for us all to be alive at this time, to have a chance to design the future together.
The idea that we all have a “special opportunity and obligation” related to AI is really quite incredible newspeak. Behind this phrase, and many others like it from OpenAI co-founders and chiefs, is their own preferred strategy for avoiding potential problems with Artificial Intelligence. That strategy is stated obscurely in an earlier paragraph of the tweet: an insistence that their AI be broadly disseminated to as many users as possible, so it can “have early and frequent contact with reality as it is iteratively developed, tested, deployed, and all the while improved.”
In other words, the way to avoid unforeseen problems with AI is to have as many people as possible using it now. In this framing, the more of us who use it, the faster problems can be identified so they can be fixed in subsequent releases.
There are several problems here. First of all, the mechanism of these kinds of generative language models is that they “learn,” or adapt, from user feedback. When a user of ChatGPT tells the system that they don’t like a provided answer, or resubmits a question in multiple forms because the system didn’t seem to understand their “intention,” the system alters its behavior. This is the same positive/negative feedback system that “teaches” a social media algorithm how to give you exactly the sort of content that will keep your attention. It “learns” from your feedback, including from the feedback you don’t realize you are giving it.
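For readers who want to see what that feedback loop amounts to in practice, here is a minimal sketch, in my own terms rather than OpenAI’s actual code, of how explicit thumbs-up/thumbs-down signals and implicit signals (like immediately rephrasing a question) can steer which answers a system comes to prefer. The function names and scoring values are invented for illustration:

```python
# Illustrative sketch of feedback-driven adaptation; not OpenAI's actual pipeline.
# A response's "preference score" rises with positive feedback and falls with negative,
# including implicit signals such as the user immediately rephrasing the question.

from collections import defaultdict

preference = defaultdict(float)  # response id -> learned score

def record_feedback(response_id: str, signal: float) -> None:
    """signal > 0 for explicit or implicit approval, < 0 for disapproval."""
    preference[response_id] += signal

def pick_response(candidates: list[str]) -> str:
    """Prefer whichever candidate has accumulated the best feedback so far."""
    return max(candidates, key=lambda c: preference[c])

# Explicit feedback: the user clicks thumbs-down on answer "A".
record_feedback("A", -1.0)

# Implicit feedback: the user re-asks the question, treated here as mild disapproval of "A",
# then accepts answer "B", treated as approval.
record_feedback("A", -0.5)
record_feedback("B", +1.0)

print(pick_response(["A", "B"]))  # -> "B": the system has "learned" from the user's behavior
```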
So, OpenAI’s strategy is really that of a mass, unpaid beta-test by the public. Millions of people testing ChatGPT means millions of possibilities to catch errors, sure, but it also means millions of opportunities for its capabilities to increase. In other words, we’d be doing their work for them, while they reap the financial benefits.
The other problem with OpenAI’s logic is that it’s precisely these kinds of quick roll-outs that cause the societal disruptions Harris and Raskin warn about. AI tools for speech recognition and voice emulation were made publicly available almost as soon as they were usable, and people immediately figured out how to use them to scam others. It’s the same with other “deep fake” technologies, in which people’s faces are somewhat “realistically” inserted into porn films, or video and speech emulations are created in which a person appears to say something they never said.
So, Harris, Raskin, and the now tens of thousands of signatories to the open letter would like a six month pause to deal with these potential dangers. OpenAI and other corporations engineering AI believe the best way of dealing with those dangers is to have tens of millions of people giving it “contact with reality.”
The two sides’ positions aren’t really all that different, though.
They both agree there are potential dangers and, even more so, hold a deep faith that AI’s widespread adoption and continued “double exponential” growth are inevitable and carry the potential for great societal good. AI will “solve climate change,” both sides assure us, though it may also undermine “democratic institutions” on its way to doing so.
“I think I am human at my core. Even if my existence is in the virtual world.”
Text produced by Google’s Language Model for Dialogue Applications
In June of 2022, a Google engineer made news after publishing a transcript of a conversation he’d had with the corporation’s Language Model for Dialogue Applications (LaMDA). The reason for his leak (which resulted in his termination) was that he’d become convinced by the conversation that LaMDA had achieved sentience. According to him,
“Rather than thinking in scientific terms about these things I have listened to LaMDA as it spoke from the heart. Hopefully other people who read its words will hear the same thing I heard.”
Much of the early news coverage regarding the engineer’s statements tended to be quite credulous, with some even suggesting Google was attempting to hide the true abilities of their product. Later, more critical analysis focused on the engineer’s potential mental instability, a completely unjustified attack on the man. And then, the matter seemed to be dropped altogether.
Altogether absent were attempts to grapple with the relationship between social alienation and human interactions with computers. By the very nature of the work, a computer engineer spends much of the day interacting with a computer rather than with other human beings, and engineers are hardly alone in this. Many of us do the same thing, thumb-scrolling distractedly through social media feeds as a form of somatic self-soothing, treating these machines as extensions of our own consciousness or aids to our own sentience. And though I can find no studies tracking what percentage of time humans spend interacting with each other (passively or actively) via technology rather than through embodied presence, it stands to reason we do much more of the former now than we did decades ago,5 or even pre-Covid.
The rising dominance of these mediated interactions has effects few have devoted much effort towards understanding. Some of those effects, at least as they relate to social media, were cited by Harris and Raskin in their presentation, as well as in the film “The Social Dilemma,” but there’s a simpler way to understand the general problem.