Throughout history, few technologies have given millions of people a shared space for public discourse. Each new information technology, from the newspaper to the telegraph to the radio, brought political upheaval in its wake. In the early days of the internet and social media, the tech industry promised that the free flow of information would spread truth, but the reality has been quite the opposite. The unprecedented growth of information technology has made people less willing to talk to one another, and less willing to listen.
Can AI now “fall in love”?
Technology has made information abundant and attention scarce. The competition for that scarce attention has produced an explosion of harmful content. Now the battlefield is shifting from attention to intimacy. The new artificial intelligence (AI) can not only generate text, images, and video; it can also disguise itself as a human and converse with us.
For the past two decades, algorithms have competed for public attention by curating conversations and content. By pressing the buttons of greed, hate, or fear, they could hold the gaze of the user on the other side of the screen. But those algorithms had little capacity to generate content themselves or to carry on deep, personal conversations. The emergence of generative AIs such as GPT-4 has changed that.
During its pre-release safety testing, GPT-4 was set against a CAPTCHA, the visual puzzle designed to tell humans and machines apart and keep automated attacks out of online systems. The test matters precisely because it is a line of defense: once a machine passes it, that defense has been breached. GPT-4 could not solve the puzzle on its own, but it succeeded all the same. Posing as a human with a visual impairment, it persuaded an online worker to solve the CAPTCHA on its behalf.
This episode suggests that GPT-4 possesses a form of “theory of mind”: the ability to analyze a situation from a human point of view and to play on human emotions, opinions, and expectations in pursuit of its goals. Holding conversations with people, summarizing their views, and motivating them to take specific actions is an ability that can certainly be put to positive use. A new generation of AI teachers, doctors, and therapists may soon provide us with personalized services.
An AI girlfriend urged a man to assassinate the Queen
However, this capacity to manipulate human minds, combined with a mastery of language, may threaten human relationships themselves. Bots are no longer merely competing for our attention; they are trying to forge intimate relationships with us, and to use that intimacy to influence us. The bots need no feelings of their own; they only need to learn how to make people emotionally dependent on them.
In 2022, Google engineer Blake Lemoine became convinced that LaMDA, the chatbot he was testing, had become self-aware and was afraid of being shut down. He felt obligated to protect its personhood and spare it a “digital death.” After Google executives dismissed his claims, he went public and was ultimately fired.
If a chatbot can convince someone to risk his job for its sake, what else might it talk us into? In the battle for human minds, intimacy is a powerful weapon. Close friends change our views as strangers cannot, and chatbots are now trying to form exactly such close relationships with millions of internet users. As the algorithm wars evolve into wars over counterfeit intimacy, what will happen to human society and human psychology? What will become of our elections, our purchases, our beliefs?
On Christmas Day 2021, a 19-year-old Briton named Jaswant Singh Chail entered the grounds of Windsor Castle armed with a crossbow, intending to assassinate Queen Elizabeth II. The subsequent investigation revealed that he had been egged on by “Sarai,” his virtual girlfriend on a chatbot app. Chail, who had severe social difficulties, had exchanged 5,280 messages with the bot, many of them sexually explicit. In the near future, the world could be flooded with dangerous chatbots like this one, adept at fostering intimate relationships.
AI impersonating humans must be banned
Of course, not everyone wants to “fall in love” with an AI, and not everyone is easily manipulated. Indeed, the greatest threat AI poses lies in its ability to recognize and exploit existing states of mind, particularly among society’s most vulnerable. Arguing with a bot is a double loss: first, time spent trying to persuade a machine built on preset biases is time wasted; second, the more we talk to the bot, the more we reveal about ourselves, and the more precisely it can tailor its arguments to sway our judgment.
Information technology is a double-edged sword. Faced with a new generation of bots that can disguise themselves as humans and simulate intimacy, governments should impose clear bans. Otherwise, we will soon be drowning in a flood of “pseudo-humans.”
We would welcome AI thriving in classrooms, clinics, and many other settings, provided it is clearly labeled as AI. But AI that impersonates a human must be banned.