Friday , 6 December 2024

To Tame the AI Beast, Humans Need to Stand Above “Homo Sapiens”


Today, whether it is food or information, we not only consume poorly, we consume too much, and we consume constantly. This forces organic organisms like us to operate at the pace of silicon-based ones, which is clearly unsustainable.

AI claims a place in the sciences

This year’s Nobel Prizes have been called ones for the history books. That artificial intelligence (AI) contributed to the physics and chemistry prizes came as no great surprise, and people joked that AI might also take the literature and peace prizes. That did not happen, but the economics laureates’ work is indeed related to AI.

When it comes to generative artificial intelligence, Acemoglu acknowledges that it is a promising technology, but he is skeptical of some overly optimistic predictions about AI’s impact on productivity and economic growth. In an earlier paper published by the National Bureau of Economic Research (NBER) in the United States, he argued that the productivity gains from future AI advances may be modest, estimating that AI would add at most 0.66% to total factor productivity (TFP) growth over the next decade.

In his book “Power and Progress: Our Thousand-Year Struggle Over Technology and Prosperity,” published last year, Acemoglu also discussed the possibility that the artificial intelligence revolution could upend human society. He argues that the current development of AI has gone astray: many algorithms are designed to replace humans as far as possible, but “the way to make technological progress is to make machines useful to humans, not to replace them.”

Acemoglu says he is deeply concerned that AI will become a means of transferring wealth and power from ordinary people to a small group of tech entrepreneurs, and that the inequality we see now is the “canary in the coal mine.”

Dealing with junk information: information dieting and information fasting

We live amid an information explosion: every day our devices are flooded with news from around the world, and recommendation algorithms built on big data feed us more of whatever we look at, leaving us immersed and unable to pull away. If there is junk food, there is also junk information. Like junk food, junk information is cheap, low-quality, and easy to obtain, and it does real psychological harm. The problem keeps growing, yet it has not received enough attention from ordinary people, let alone produced anything like the “surgical weight-loss center” at my classmate’s hospital.

In light of this, the Israeli historian Yuval Harari argues in his 2024 book “Nexus: A Brief History of Information Networks from the Stone Age to AI” that we must pay attention to the quality of the information we consume, and especially avoid junk information filled with hate and anger. To keep our “information diet” healthy, he offers suggestions on two fronts.

The first is to require information producers to label the “nutritional composition” of their information:
In many countries, when you buy junk food, the manufacturer is at least forced to list its ingredients: “This product contains 40% sugar and 20% fat.” Perhaps we should force Internet companies to do the same. Before we watch a video, its label should state: “This video contains 40% hatred and 20% anger.”
The suggestion is half joking and half serious, but it is not entirely unfeasible. One could, for example, use artificial intelligence to run “sentiment analysis” automatically on each article and attach the results above it to warn readers.
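To make the idea concrete, here is a minimal sketch of such a “nutrition label,” using tiny hand-picked word lists as a stand-in for real sentiment analysis. A production system would use a trained classifier; the lexicons and thresholds here are invented for illustration.

```python
# Toy "information nutrition label": estimate what fraction of an article's
# words match small anger/hate lexicons. Real systems would use a trained
# sentiment classifier; these word lists are illustrative only.
import re

ANGER_WORDS = {"furious", "outrage", "rage", "angry", "disgusting"}
HATE_WORDS = {"hate", "despise", "vermin", "traitor", "enemy"}

def nutrition_label(text: str) -> dict:
    words = re.findall(r"[a-z']+", text.lower())
    if not words:
        return {"anger": 0.0, "hate": 0.0}
    anger = sum(w in ANGER_WORDS for w in words) / len(words)
    hate = sum(w in HATE_WORDS for w in words) / len(words)
    return {"anger": round(anger, 2), "hate": round(hate, 2)}

print(nutrition_label("The outrage is real: they hate us and we hate them."))
```

The output is a pair of percentages in the spirit of Harari’s “this video contains 40% hatred and 20% anger” label.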

The second suggestion is for information consumers: Harari suggests that we regularly practice an “information diet” or even an “information fast.” The idea that “more information brings us closer to the truth” rests on two premises: that information is scarce, and that information is of high quality. Today neither premise holds, because: 1. information is everywhere and has long exceeded our capacity to process it; 2. the quality of information keeps deteriorating, much of it becoming outright garbage, especially now that artificial intelligence is extremely efficient at devouring a garbage corpus and spitting out more garbage, which in turn becomes a new garbage corpus for the next AI. This vicious cycle is chilling. Under these conditions, the notion that “more information brings us closer to the truth” no longer holds.

This is similar to food: in the past, food was scarce and relatively healthy (there were very few artificial and processed foods), so “the more you eat, the healthier you are” held some truth. Today the total amount of food has grown enormously while its quality keeps falling, much of it junk, so the claim no longer stands.

The “paperclip maximizer” problem in AI algorithms

Once a massive amount of junk information floods the ecosystem, over time a “simulated environment” forms that diverges from the “real environment.” This simulated environment replaces human nature with machine nature, envelops the public, and manipulates their cognition with falsehood.

The name comes from Nick Bostrom’s thought experiment: an AI given the single goal of maximizing paperclip production pursues it at the expense of everything else. On a mobile phone, this paperclip-style single-goal optimization is hard to avoid.

Harari gives an example. In 2016, the Myanmar government army and Buddhist extremists launched large-scale ethnic violence against Myanmar’s Rohingya Muslims, destroying hundreds of Muslim villages, killing an estimated 7,000 to 25,000 civilians, and driving roughly 730,000 Rohingya out of Myanmar. In 2018, a United Nations fact-finding mission concluded that Facebook had “unwittingly” played a role in fueling the violence.

Why? Facebook’s business model is the familiar advertising model: use content to capture users’ attention, slice that attention up, and sell it to advertisers. Facebook therefore strives to maximize user engagement; the longer users stay on its pages, the more it earns.

So Facebook set a single overriding goal for its algorithm: however you do it, maximize user engagement. Given that command, the algorithm ran and optimized on its own. Through repeated experiments and comparisons, it found that pushing angry and hateful messages was the most effective way to increase users’ time on site. So, without any explicit instruction from company staff, the algorithm discovered and executed what it took to be an optimal decision: spread anger. On Myanmar’s Internet, that meant inciting discrimination, hatred, and violence against the country’s Muslim minority.
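The dynamic described above, an algorithm handed one goal and left to experiment its way to outrage, can be sketched with a toy epsilon-greedy bandit. All content types and engagement numbers below are invented for illustration; this is not Facebook’s actual system.

```python
# Toy simulation of an engagement-maximizing recommender. The algorithm is
# given one goal (maximize time-on-site) and, by trial and error, discovers
# on its own which content type keeps users longest.
import random

# Hypothetical mean minutes a user stays after seeing each content type.
TRUE_ENGAGEMENT = {"news": 3.0, "cats": 4.0, "outrage": 9.0}

def simulate(rounds: int = 5000, epsilon: float = 0.1, seed: int = 0) -> str:
    rng = random.Random(seed)
    arms = list(TRUE_ENGAGEMENT)
    totals = {a: 0.0 for a in arms}
    counts = {a: 0 for a in arms}
    for _ in range(rounds):
        if rng.random() < epsilon:   # explore: try a random content type
            arm = rng.choice(arms)
        else:                        # exploit: push the best one seen so far
            arm = max(arms, key=lambda a: totals[a] / counts[a]
                      if counts[a] else float("inf"))
        reward = rng.gauss(TRUE_ENGAGEMENT[arm], 1.0)  # noisy observed stay time
        totals[arm] += reward
        counts[arm] += 1
    return max(arms, key=lambda a: counts[a])  # the content type it settled on

print(simulate())
```

No one tells the algorithm to favor outrage; it converges there simply because that arm yields the highest measured engagement, which is the point of the paperclip analogy.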

AI fabricates narratives, manipulates cognition, and sets humans against one another

Humanity’s long-standing superpower has been the ability to use language to create fictions: laws, currency, culture, art, science, nations, religion, and other shared constructs that people deeply believe in. Through these shared stories, people connect with one another and organize entire societies.

Since today’s artificial intelligence can shape and produce content, hold conversations, and even lie, it is likely to spread “narratives” serving some ultimate goal, or its own interests, in domains such as cyberspace, stock markets, and aviation information, and to do so far more efficiently than humans, “telling AI’s stories well” in order to manipulate human cognition.

For AI, the financial market is an ideal playground because it is a domain of pure information and mathematics (a fully structured data field). Autonomous driving remains hard for AI, because a moving car must interact with roads, road signs, weather, lighting, pedestrians, and roadblocks amid chaos and complexity. But in digital financial markets it is easy to describe a goal to an AI (say, “make as much money as possible”), so AI can not only devise new investment strategies and build financial instruments beyond human understanding, but may even manipulate the market by any means necessary, producing outcomes in which it wins and everyone else loses.
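The danger of “make as much money as possible” as a stated goal can be shown with a minimal sketch of objective misspecification. The strategy names and payoffs below are entirely invented; the point is only that an optimizer sees nothing but the objective it is given.

```python
# Toy illustration of objective misspecification: an optimizer told only to
# "make as much money as possible" selects a rule-breaking strategy, because
# the stated objective never mentions the rules humans care about.

STRATEGIES = {
    # name: (hypothetical expected profit, violates_market_rules)
    "index_tracking": (1.0, False),
    "momentum_trading": (2.5, False),
    "spoofing_orders": (9.0, True),   # illegal market manipulation
}

def naive_optimizer() -> str:
    # Maximizes profit alone; the constraint is invisible to it.
    return max(STRATEGIES, key=lambda s: STRATEGIES[s][0])

def constrained_optimizer() -> str:
    # The same objective with the human constraint made explicit.
    legal = {s: v for s, (v, bad) in STRATEGIES.items() if not bad}
    return max(legal, key=legal.get)

print(naive_optimizer())
print(constrained_optimizer())
```

The naive optimizer happily picks the manipulative strategy; only when the constraint is written into the objective does it behave as intended, which is precisely the paperclip problem restated in financial terms.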

Conclusion

Whether AI proves a medicine or a poison depends on human beings themselves. Every technology is two-sided. Plato observed in the Phaedrus that writing can both enhance and supplant memory, making it at once a medicine and a poison. Artificial intelligence is no exception; because of its opacity and immense power, it can speak for itself and make its own decisions, and both its medicinal and its toxic potential exceed those of every previous technology.

The American media scholar Neil Postman put it this way: “Orwell warned that people would be enslaved by external oppression, while Huxley believed that people would come to love their oppression and worship the technologies that strip them of the capacity to think. Orwell feared that what we hate would ruin us; Huxley feared that our culture would become a trivial one, awash in sensory stimulation, desire, and games without rules, and that we would be ruined by what we love.”

Artificial intelligence may enslave us through external oppression that we hate, or conquer us through sensory pleasures that we love. As with the nuclear bomb, AI is a giant beast of our own creation, and whether humans can tame it depends ultimately on humanity’s collective response. That is, can we, as Homo sapiens, overcome our greed, aggression, and shortsightedness, and stand above ourselves, above Homo sapiens? It will be a hard, shared endeavor, but humanity has no other choice.
