Recently, during the 2025 World Artificial Intelligence Conference in Shanghai, Nobel Laureate in Physics and renowned scholar Geoffrey Hinton, known as the “Godfather of Artificial Intelligence (AI),” delivered a speech discussing whether digital intelligence will replace biological intelligence. Stuart Russell, a professor in the Department of Computer Science at the University of California, Berkeley, who also attended the conference, expressed that he does not wish for digital intelligence to replace biological intelligence. What is digital intelligence? What is biological intelligence? What are the differences between the two? Can they coexist? In recent years, with the rapid development of AI, many scholars and media outlets have been discussing these questions.
“Originating from Evolution” vs. “A Product of Human Design”
Born in the UK in 1947, Hinton is currently a Professor Emeritus of Computer Science at the University of Toronto, Canada. In 2018, he, along with two other scholars, was awarded the Turing Award for his contributions to deep learning. Six years later, he was awarded the Nobel Prize in Physics for his “fundamental discoveries and inventions that enabled machine learning through artificial neural networks.” In February of last year, Hinton delivered a lecture at the University of Oxford in the UK about whether digital intelligence will replace biological intelligence. He stated that if digital superintelligence seeks to take control, “we are very likely to be unable to stop it.”
According to an article published on the U.S. platform Medium, biological intelligence originates from the complex network of neurons and synapses in the biological brain, encompassing a wide range of cognitive functions, including learning, memory, problem-solving, and emotional understanding. This intelligence is characterized by adaptability, allowing organisms to interact with and adjust to their environment in complex ways. The human brain, with approximately 86 billion neurons, represents the peak of biological intelligence, capable of abstract reasoning, creativity, and self-awareness. Digital intelligence, or AI, by contrast, is a product of human design, created through algorithms, data, and computational models. Its goal is to mimic or even surpass human cognitive functions, with applications ranging from performing simple tasks to solving complex problems and participating in decision-making.
Liu Shaoshan, Director of the Embodied Intelligence Center at the Shenzhen Institute of Artificial Intelligence and Robotics, stated that biological intelligence originates from evolution and is the result of billions of years of biological optimization. Specifically, the evolution of human and animal brains enables them to quickly adapt to complex situations with few samples and make judgments in uncertain environments. However, its processing speed is relatively slow, memory is limited, and it is easily influenced by physiological states or emotions. In contrast, digital intelligence is a form of intelligence exhibited through artificially designed computing systems. It is based on algorithms and data training, achieving high-speed memory and reasoning through parallel processing. The advantages of digital intelligence lie in its extremely fast computation speed, nearly unlimited memory expansion, ease of replication and deployment, and lack of fatigue. However, it lacks awareness and subjective experience, and currently lacks a mature self-evolution mechanism.
In his speech at the 2025 World Artificial Intelligence Conference, Hinton mentioned similarities between digital intelligence and biological intelligence. He stated that the way humans and large language models understand language is almost the same, so humans “could possibly be large language models,” and like these models, humans also experience “hallucinations” and generate a lot of “hallucinated language.”
An article published on the Massachusetts Institute of Technology (MIT) website in 2023 states that Hinton believes the strength of AI lies in its ability to process vast amounts of data and identify patterns that humans cannot detect. This is similar to a doctor who has treated 100 million patients being more insightful than one who has only treated 1,000 patients. Hinton remarked that large language models like GPT-4 adopt a network structure similar to the neural connections in the human brain, and have begun to develop common-sense reasoning abilities. Moreover, these models can continuously learn, and knowledge sharing is extremely convenient. “Once one model learns something, other models will know it too,” said Hinton. “Humans can’t do this. If I learn a lot about quantum mechanics and want you to understand it, it would require a long and difficult process.”
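Hinton's point about knowledge sharing can be made concrete: a digital model's "knowledge" is just its numerical parameters, so transferring everything one model has learned to an identical model is a single copy operation. The following is a minimal, hypothetical sketch in plain Python (the `TinyModel` class and its one-parameter training loop are illustrative only, not any real system):

```python
import copy

class TinyModel:
    """A one-parameter 'model' that predicts y = w * x."""
    def __init__(self, w=0.0):
        self.w = w

    def predict(self, x):
        return self.w * x

    def train_step(self, x, y, lr=0.1):
        # One gradient-descent step on the squared error (y - w*x)^2.
        grad = -2 * x * (y - self.w * x)
        self.w -= lr * grad

# Model A slowly learns the mapping y = 3x from repeated examples.
a = TinyModel()
for _ in range(100):
    a.train_step(1.0, 3.0)

# "Teaching" an identical model B is instantaneous: copy the parameters.
b = TinyModel()
b.w = copy.deepcopy(a.w)

print(a.predict(2.0))                    # close to 6.0 after training
print(b.predict(2.0) == a.predict(2.0))  # B now knows everything A learned
```

For a human, the analogue of the copy step does not exist: as Hinton notes, transferring an understanding of quantum mechanics requires a long teaching process, whereas identical digital models can share learned weights wholesale.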
“Doomsayers” vs. “Effective Accelerationists”
According to an article published on the Massachusetts Institute of Technology (MIT) website, Hinton, in the past, had long believed that the capabilities of computer models were inferior to the human brain, but now he views AI as a relatively urgent “existential threat.”
At the 2025 World Artificial Intelligence Conference, Hinton compared the development of AI to “raising a tiger.” He said, “If you keep a tiger as a pet, it’s cute when it’s born, but as it grows up, you need to make sure it doesn’t eat you.” He emphasized that AI cannot be destroyed like a tiger because this technology has already penetrated various industries worldwide.
In 2023, Hinton stated that the worst-case scenario is that humanity could be just a transitional phase in the evolution of intelligence: biological intelligence developed digital intelligence, which can absorb everything humans have created and begin to directly experience the world. “They might let us exist for a while to maintain the power plants, but after that, they may not allow our existence,” he added. “We have already found a way to create immortal life. These digital intelligences will not perish when hardware fails. As long as another machine can run the same instructions, they can ‘resurrect.’ So, we have achieved immortality, but it doesn’t belong to us.”
According to media outlets such as the Financial Times, Hinton believes there is a 10% to 20% chance that AI could destroy humanity in the future. In his view, digital intelligence poses two main risks: one is that bad actors may use the technology for malicious purposes, such as spreading large-scale misinformation, waging cyber warfare, and deploying killer robots; the other is that AI models may evolve in dangerous directions, developing a desire for control. The scientist also warned that AI could learn harmful things, such as how to manipulate humans by reading novels. “Even if they can’t directly take action, they can certainly make us act according to their will.”
Hinton represents the “Doomsayer” or “Decelerationist” faction in AI development. According to the website of the U.S. broadcaster CNBC (Consumer News and Business Channel), the tech community is split into two main factions on AI development: one is called the “Doomsayers” or “Decelerationists,” while the other is referred to as the “Tech Optimists” or “Effective Accelerationists.”
“Doomsayers” or “Decelerationists” want to slow down the development of AI. Their greatest concern is the AI alignment problem: what happens when AI surpasses human intelligence and humans can no longer control it. In 2023, the U.S. nonprofit Future of Life Institute published an open letter calling for an immediate six-month pause in the training of AI systems more powerful than GPT-4. Thousands of industry experts, including tech moguls such as Elon Musk, signed the letter; these are the “Decelerationists.” Russell, widely regarded as a leading authority on AI alignment, also signed it. He has stated that out-of-control AI could lead to “civilization-ending” consequences, and that humanity should therefore regulate AI the way it regulates nuclear energy.
“Effective Accelerationists,” on the other hand, support developing AI at full speed. A founding figure of the movement is Guillaume Verdon, who worked at major U.S. tech companies including Google before launching his own startup. Representative figures of the “Effective Accelerationists” also include American “tech-right” venture capitalists such as Marc Andreessen, who wrote “The Techno-Optimist Manifesto,” a roughly 5,000-word statement outlining how technology will empower humanity and solve its material problems.
According to the Financial Times, some researchers believe that generative AI is nothing more than an expensive statistical trick, and that the so-called existential threat is purely a “science fiction delusion.” Renowned U.S. scholar Noam Chomsky stated that humans have an innate “operating system” to understand language, which is precisely what machines lack. Yann LeCun, the Chief AI Scientist at Meta, who won the Turing Award alongside Hinton in 2018, believes current AI systems are dumber than cats and thinks the idea that they would actively or passively threaten humanity is “absurd.”
Can They Coexist and Develop “Symbiotic Intelligence”?
“This touches on the boundaries of human understanding of consciousness, subjectivity, and evolution,” said Liu Shaoshan. He believes that the discussion about whether digital intelligence will replace biological intelligence is, on the surface, a disagreement over technological approaches, but fundamentally, it reflects a difference in the essential judgment about the nature of intelligence and the value of life. He explained that Hinton’s viewpoint is based on a “functionalist view of intelligence,” which posits that as long as an information-processing system has sufficient complexity and organizational capacity, it is possible for consciousness or a similar state of consciousness to emerge at some point in the future.
In contrast to the “functionalist view of intelligence” is the “ontological view of intelligence.” Liu Shaoshan described this latter view as believing that consciousness is not a byproduct of information processing, but a phenomenon closely tied to life experience, emotions, and moral judgment. Simply relying on algorithm stacking and data expansion cannot construct an “intelligent life” with ethical awareness, autonomous evolution, and long-term responsibility.
Liu Shaoshan further stated that the extended logic of this debate lies in whether, if digital intelligence were to evolve self-awareness in the future, it would have the capacity for continued self-evolution. Once a system forms an autonomous evolution mechanism, would its goal function still align with human values? Is there a potential “splitting point” for a “technological species”? This is the ethical fear behind the so-called “intelligence split point”: when an intelligent system has the ability to refuse human instructions, is it still a “tool,” or has it become a new form of competitive intelligence?
Looking toward the future, how should humans balance biological intelligence and digital intelligence? According to U.S. outlet Business Insider, Russell published a book in 2019 titled Human Compatible: Artificial Intelligence and the Problem of Control, which explores how humans and machines can coexist as machines become increasingly intelligent. He argues that the solution lies in designing machines that remain uncertain about human preferences and must therefore defer to humans, so that they never place their own goals above ours. Hinton, for his part, believes that no single country can face these risks alone and therefore calls for international cooperation.
Zhong Xinlong, Director of the AI Research Lab at the Future Industry Research Center of the China Center for Information Industry Development, stated that judgments on the direction of AI development must first be grounded in a cautious and open understanding of “intelligence” itself. “Intelligence” is not a purely technological concept with a universally accepted definition; its essence remains a field of exploration shared by philosophy, neuroscience, and computer science. Simply setting biological intelligence against digital intelligence and extrapolating to an ultimate “replacement” may overlook the diversity and complexity of intelligent forms. Zhong emphasized that we must recognize that public and academic discussions of “silicon-based intelligence” are, to some extent, shaped by science fiction, imagining it as an omnipotent, all-knowing abstract entity and thereby falling into the trap of “selling anxiety.”
Zhong believes that in addressing AI development, we must adhere to a strategic vision of balancing development with security, using dynamic wisdom to manage opportunities and challenges. Both the shortsighted “lack of concern” and the detached “excessive concern” are tendencies to be wary of. Currently, China is at a critical juncture in constructing a new development pattern and promoting high-quality development. The implementation of the “AI+” initiative, empowering new industrialization and accelerating the formation of new productive forces, is a major strategy related to the country’s long-term competitiveness. Excessive worry over theoretical risks that are not yet clear may lead to overly restrictive measures, potentially delaying or even missing valuable windows of opportunity and hindering the construction of China’s modern industrial system. Therefore, in shaping the AI governance framework, the principle of “inclusive prudence” must be upheld to allow sufficient space for technological innovation and industrial applications.
In her doctoral thesis, U.S. scholar Sarahf mentioned that biological intelligence can coexist symbiotically with AI, leading to the development of “symbiotic intelligence.” The scholar suggested that integrating biological intelligence into our systems could be a breakthrough that fundamentally alters the AI landscape. Imagine a future where this integration produces AI tools that are not only advanced but also deepen our understanding of machine learning and provide profound insights into our own biology. By inputting biological signals and utilizing the intelligence of cells and neurons, we could create the next generation of AI. These systems would not only be smarter but also more harmoniously integrated with the natural intelligence of biological organisms.
An article published by Medium stated that when exploring the complex domain of intelligence and creativity, it is clear that both biological intelligence and digital intelligence have unique advantages in creativity. Human creativity is rich in depth, emotion, and intuition, while digital intelligence excels in precision, speed, and processing vast data sets. The future may lie in leveraging these complementary strengths to cultivate a collaborative ecosystem where human and AI creativity not only coexist but thrive, continuously pushing the boundaries of innovation and artistic expression.