As 2025 draws to a close, the AI industry stands at a critical crossroads. On the third anniversary of ChatGPT, the technological competition between OpenAI and Google has intensified. The narrowing performance gap among large models has raised doubts about a development ceiling, yet the industry's belief in AGI remains unshaken. Despite challenges such as data depletion, the Scaling Law remains the most reliable technical path at this stage. Large-scale data center projects planned or under construction in the United States now exceed 45 gigawatts (GW) of total installed capacity, backed by over $2.5 trillion in investment. Jensen Huang's three proposed Scaling Laws, covering pre-training, post-training reinforcement learning, and inference, further support the sustained growth of computing power.
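Huang's three scaling regimes do not yet share a single settled formula, but the pre-training regime is commonly summarized by a Chinchilla-style parametric loss. The form below is an illustration only; the constants are the published fits from Hoffmann et al., not figures from this article:

```latex
% Chinchilla-style pre-training scaling law: expected loss as a
% function of model parameters N and training tokens D.
L(N, D) = E + \frac{A}{N^{\alpha}} + \frac{B}{D^{\beta}}
% Published fits (approximate): E \approx 1.69,\; A \approx 406.4,\;
% B \approx 410.7,\; \alpha \approx 0.34,\; \beta \approx 0.28.
```

The additive form makes the "data depletion" concern concrete: once the token count D stops growing, the B/D^β term floors the achievable loss no matter how large N becomes, which is one reason post-training and inference-time scaling have come into focus.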
Multimodal technology has had its "ChatGPT moment." Models such as Google Gemini and OpenAI Sora have achieved deep integration of text, images, and video, breaking large language models' reliance on a second-hand, text-mediated view of the world. The spatial continuity, physical constraints, and other information embedded in multimodal data not only lay the foundation for robust world models but also open a closed "perception-decision-action" loop, driving intelligence from content generation toward environmental interaction. Innovations in underlying architectures and learning paradigms are flourishing, with emerging laboratories worldwide taking diverse approaches: SSI focuses on superintelligence safety, Thinking Machines Lab specializes in system reliability, Sakana AI explores evolutionary models to reduce dependence on computing power, Liquid AI is rebuilding dynamic neural network architectures, and Google's Nested Learning attempts to solve catastrophic forgetting.
Notably, the AI4S (AI for Science) field is transitioning from academic breakthroughs to industrial application. The key barrier keeping AI4S from impacting industry has been verification that is costly, slow, and poorly replicable, rather than inaccurate predictions. In response, a significant shift is underway: AI is being embedded directly into experimental systems themselves. Google DeepMind will establish an AI-powered automated research laboratory in the UK in 2026, with initial areas including superconductors, semiconductor materials, and other critical scientific fields, a milestone in AI4S moving beyond algorithms toward experimental physics platforms. The laboratory is not merely robots conducting experiments: AI handles hypothesis generation and experimental design, robotic systems execute the experiments, and the data flows back automatically to update models and optimize strategies, forming a reproducible, scalable closed loop. This step is groundbreaking because it turns AI4S from an advisor into an executor for the first time, unblocking the experimental loop and giving fields such as materials science, chemical engineering, and drug screening the conditions for genuine acceleration, even reconstruction.
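The closed loop described above (AI proposes, robots execute, data flows back) can be sketched as a simple optimization loop. Everything here is illustrative, not DeepMind's actual system: the "experiment" is a toy function with a hidden optimum at 0.7, and the "model" is just a running estimate with a shrinking search radius.

```python
import random

random.seed(0)  # make the sketch reproducible

def propose_candidates(model, n=5):
    """AI stage: propose hypotheses (candidate process parameters)
    sampled around the model's current best estimate."""
    return [model["best_guess"] + random.gauss(0, model["uncertainty"])
            for _ in range(n)]

def run_experiment(candidate):
    """Robotic stage: stand-in for an automated experiment whose hidden
    optimum is 0.7; the score decays with distance from it, plus noise."""
    return -abs(candidate - 0.7) + random.gauss(0, 0.01)

def update_model(model, results):
    """Feedback stage: pull the estimate toward the best-scoring
    candidate and shrink the search radius, closing the loop."""
    best_candidate, _ = max(results, key=lambda r: r[1])
    model["best_guess"] = 0.5 * (model["best_guess"] + best_candidate)
    model["uncertainty"] *= 0.8
    return model

model = {"best_guess": 0.0, "uncertainty": 0.5}
for _ in range(10):  # each pass = one hypothesize -> experiment -> update cycle
    candidates = propose_candidates(model)
    results = [(c, run_experiment(c)) for c in candidates]
    model = update_model(model, results)

print(model)  # best_guess drifts toward the hidden optimum; uncertainty shrinks
```

The point of the sketch is the data flow, not the optimizer: replacing the toy functions with a real hypothesis generator, a robotic executor, and a model retrainer yields exactly the reproducible, scalable loop the article describes.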

Additionally, the "Genesis Mission" executive order signed by the Trump administration integrates federal research data with supercomputing resources, promising to address the core pain point of insufficient high-quality data. All of this aligns with the broader trend of combining technological depth with practical value.
Key Breakthroughs: Scene Implementation and Ecological Reconstruction
At the application level, the integration of models and applications has resolved the dilemma of large models lacking platform effects, and the prototype of an intelligent internet is taking shape. With Agents as the basic network nodes, four types of network effects have emerged: transactional, knowledge-based, workflow-based, and social, producing a positive feedback loop in which the more the network is used, the stronger it becomes. The software industry has entered an era of personalization driven by a paradigm shift in AI coding. Anthropic predicts that almost all code will be written by AI within 12 months, and at enterprises such as Tencent and Meituan, over 50% of new code is already generated by AI. Software is shifting from industrialized product to instant, personalized tool that serves long-tail demand. A micro-software ecosystem, visible on Hugging Face Spaces and in Chrome extensions, is taking shape, marking the arrival of an era of democratized software.
Industry adoption is shifting from exploratory trials to ROI verification. According to a McKinsey report, 88% of enterprises have adopted AI in at least one function, but large-scale deployment remains below 10%. AI is moving beyond peripheral tasks and into core business processes. The micro work unit of "one person + N Agents" is set to become the norm, reshaping enterprise management logic and the definition of talent. On the hardware front, AI glasses have emerged as a breakthrough. Products like Meta Ray-Ban have gained popularity with lightweight designs under 50 g, and a single brand is expected to reach 10 million shipments in 2026. Ecosystems such as Google XR and XREAL's Project Aura are advancing rapidly, driving the computing platform's transition from fingertip connectivity to sensory connectivity. Their intent-centered interaction logic will spawn a new ecosystem in which skill stores replace app stores, while also generating massive first-person-perspective data and raising new privacy-protection challenges.
Safety and responsibility have become indispensable to AI development. A joint survey by the University of Melbourne and KPMG found that 58% of respondents consider AI untrustworthy, and public trust is declining. The industry is expected to invest over 10% of its computing power in safety evaluation, red-team testing, and related areas. SSI has raised $3 billion to pursue superintelligence safety, and companies including Google, Microsoft, and Anthropic have established AI governance committees or trust mechanisms, integrating safety and ethics across the entire R&D process. In 2026, sustained by technological conviction, AI is expected to break through its bottlenecks, unlock commercial value through real-world deployment, and drive profound transformations of industrial ecosystems and human lifestyles in a safe and controllable manner.