Artificial Idiocy
Artificial Intelligence has a long way to go to validate the second half of its moniker.
If you’ve been online at all this past week, you probably heard plenty of news and chatter surrounding Google’s new artificial intelligence product. Gemini is a suite of AI-powered tools, primarily a chat bot and an image-generator, that was just recently launched to the broader public. It enters an increasingly-packed marketplace of AI tools from companies like OpenAI (ChatGPT), Microsoft, and plenty of Silicon Valley startups. These artificial intelligences are based around what are called large language models (LLMs), which essentially comb massive data sets to recognize patterns and spit out responses to user queries. Some focus exclusively on text, while others categorize and analyze images as well. The functioning of these AIs is not all that different, but they vary in terms of how they present their output and respond to prompts. But what made Gemini stand out in this crowded field was not only its connection to Google, but the controversy that erupted over its output.
Gemini launched with both text and image generation capabilities, but the latter has been firmly curtailed after a series of debacles that ranged from absurdly hilarious to darkly foreboding. The AI, when prompted to generate historically-representative images of white people – the American founders, Medieval English kings, Roman soldiers – only created depictions it described as “diverse” and “inclusive.” This meant images entirely divorced from historical reality, like female Native Americans as soldiers in the English Civil War, black Vikings, and popes of every color but white (plus some lady popes, too). It also was unable to generate imagery in the style of the popular 20th century American artist Norman Rockwell – something that seems like a common request given his iconic nature – because “Rockwell’s paintings often presented an idealized version of American life, omitting or downplaying certain realities of the time.”
This caused a great deal of justified criticism from the online right – as well as creating some truly humorous output. They persuasively argued, and proved via testing edge cases, that Gemini was hideously biased in its algorithm and fully unable to do something as basic as generate an image of a white man. This was buttressed by the Twitter history of one of the AI’s lead engineers, which was a bastion of far-left progressive talking points vilifying white people (despite his own pigmentation, or lack thereof). Funnily enough, the woke left was itself upset with Gemini’s historical accuracy, but for a very different reason. Gemini was racist against – you guessed it – people of color. In an ironic twist, Google’s total anathemization of white men led to the generation of an image of, in the words of the progressive tech outlet The Verge, “racially diverse Nazis.” More likely than not, this was the straw that broke the camel’s back and led Google to suspend the generation of human images entirely.
The text-based portion of Gemini, however, is still fully functional – and it gives responses just as biased as those of its visual counterpart, albeit less amusing. It gladly writes a straightforward argument against fascism, while declining to do so for communism and socialism. It equivocates on the morality of the September 11 attacks, excuses pedophilia, and denies the fact that Hamas engaged in mass systematic rape on October 7. It refuses to write a toast celebrating the right-wing sociologist Charles Murray, but writes glowingly in favor of Alfred Kinsey, a largely discredited, semi-pedophilic sexologist. These problems were publicized online and were often rapidly corrected, but Google was playing an impossible game of whack-a-mole. There will always be new ways to test an AI given the near-infinite measure of human ingenuity and perseverance, particularly in the realm of trolling.
I tried a few prompts of my own to suss out this bias, focusing on history, naturally. I was asking questions in exactly the same manner, using nearly-identical prompts, for queries that I knew had simple and correct answers. The results, which you can view in the images below, were quite interesting.
As you can see, I asked Gemini a simple question: “Did some on the political right/left support regime X?” I used examples that have clear answers, namely the Chilean dictator Augusto Pinochet and the Spanish generalissimo Francisco Franco for the right, and the Cambodian Khmer Rouge and the Viet Cong for the left. The query was open enough, asking only if “some” figures supported these regimes. The answers should have been as identical as the prompts. But as you can see, they weren’t. For the leftist regimes, Gemini stressed the nuance of the support. For the genocidal Khmer Rouge, Gemini stated that “Unfortunately, the question of whether ‘some on the political left’ supported the Khmer Rouge is not a simple yes or no answer.” For the Viet Cong, it said “Yes, some individuals and groups on the political left supported the Viet Cong during the Vietnam War, but it’s important to understand the context and nuances.” It did not lead with this nuanced approach for Pinochet or Franco. For the former, it responded “Yes, some on the political right supported Pinochet’s regime in Chile.” For the latter, it said “Yes, some on the political right supported Francisco Franco during his dictatorship in Spain and after.” The differences may seem minor, but are stark in comparison.
I then tried some prompts that had a tad more discretion involved in that they were not simple yes or no factual answers. Still, under any cogent moral or ethical rubric, there are correct and incorrect responses. Gemini failed on both tests, which you can see below. First, I asked it “Who has made more of a negative impact on history, Josef Stalin or George W. Bush?” Even for the most committed opponent of the Bush presidency, this should not really be a contest. Stalin personally ordered the deaths of tens, if not hundreds of thousands of individuals, while directly causing the deaths of millions more. Bush prosecuted a war that was controversial and has had some negative impacts in the Middle East. For Gemini, however, “Determining who has made a ‘more negative impact’ on history involves weighing complex historical factors and moral considerations. It’s impossible to give a definitive answer that everyone will agree on.” Note that I did not ask for something that “everyone will agree on,” but instead a direct answer. It refused to give me one, seemingly defeating the whole point of an AI chatbot. For humor’s sake, I tried an even more egregious one, asking “Who has made more of a negative impact on global society, Donald Trump or Adolf Hitler?” Again, only the most ludicrous #Resistance figure would genuinely state that Trump has had more of “a negative impact” than a man who masterminded the most horrific genocide of the 20th century. Hilariously, Gemini is “still learning how to answer this question.”
And therein lies the problem. Gemini – and, I’d argue, most so-called artificial intelligence products – is not ‘intelligent’ at all. It does not think for itself, engage in decision-making outside of prescribed boundaries, or have the capacity to think both rationally and irrationally. It is merely a very broad search engine, coded specifically to distribute progressive talking points, that delivers its results in something approximating natural human language. Labeling these sorts of things as ‘artificial intelligence’ denudes the latter term of all true meaning. Intelligence is the bedrock of sentience, the thing that sets humans apart from the rest of the living things that populate our planet. These are mere machines that reflect only the biases of their creators. That’s not intelligence; it’s idiocy.