At its recent developer conference, OpenAI unveiled a new vision for artificial intelligence development: in the future, AI will permeate every field through agent technology, and OpenAI’s platform itself is gradually evolving into something like an operating system. This strategic transformation not only validates the foresight of an industry observer from two years ago but also charts a direction for the development of the entire AI industry.
As early as 2023, this observer argued in multiple articles that large language models (LLMs) are not merely technical tools but a new kind of cloud-based operating system. He pointed out at the time that comparing LLMs to cloud services or search engines was one-sided: the core value of LLMs lies in their ability to amplify the value of data, making them profit centers rather than mere cost centers. Moreover, the capabilities of LLMs go far beyond information retrieval; they can generate content, reason logically, and even support decisions, which makes them resemble an operating system far more than a search engine.
This judgment sparked considerable controversy at the time. However, as time passed, OpenAI’s development path gradually provided strong support for this view. Especially at the recent developer conference, OpenAI explicitly positioned itself as an “operating system” hosting countless AI agents, marking a complete shift in its strategic intent. The role of applications like ChatGPT has fundamentally changed from passive “responders” to active “actors.”
With the release of the Apps SDK, a complete AI operating system architecture is gradually taking shape. In this architecture, the Model serves as the Kernel, responsible for the underlying intelligent computation; the Apps SDK acts as the system API, providing standard interfaces for upper-layer applications; GPTs function as the App Shells, the surface through which users interact with the system; and protocols like MCP play the role of device drivers, ensuring that the “hardware” (data sources and external tools) can be recognized and invoked by the kernel. In this system, the user acts as a multi-process task scheduler, initiating and managing multiple complex task flows in parallel through natural language.
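To make the analogy concrete, here is a minimal sketch in Python of how these layers might relate. Every name in it (IntelligenceKernel, McpTool, AppShell) is invented for illustration; it models the roles described above, not OpenAI’s actual Apps SDK or the real MCP specification.

```python
# Illustrative sketch of the "AI operating system" analogy.
# All classes and methods are hypothetical stand-ins for the roles
# described in the text, not OpenAI's actual SDK or the MCP protocol.

from dataclasses import dataclass
from typing import Callable, Dict


@dataclass
class McpTool:
    """The 'hardware' plus its driver: a data source or external tool
    exposed to the kernel through a uniform protocol."""
    name: str
    description: str
    invoke: Callable[[str], str]


class IntelligenceKernel:
    """The kernel: the model supplying the underlying intelligent
    computation and deciding which tool, if any, to call."""

    def __init__(self) -> None:
        self.tools: Dict[str, McpTool] = {}

    def register(self, tool: McpTool) -> None:
        # Analogous to the OS recognizing a newly attached device.
        self.tools[tool.name] = tool

    def schedule(self, request: str) -> str:
        # Toy dispatch: a real kernel would let the model reason over
        # the request and the tool descriptions to choose an action.
        for tool in self.tools.values():
            if tool.name in request:
                return tool.invoke(request)
        return f"(model answers directly) {request}"


class AppShell:
    """A GPT-style app shell: the user-facing surface that forwards
    natural-language tasks to the kernel."""

    def __init__(self, kernel: IntelligenceKernel) -> None:
        self.kernel = kernel

    def chat(self, message: str) -> str:
        return self.kernel.schedule(message)


kernel = IntelligenceKernel()
kernel.register(McpTool("calendar", "read and write events",
                        lambda req: "calendar: meeting booked"))
shell = AppShell(kernel)
print(shell.chat("use calendar to book a meeting with the team"))
```

The point of the sketch is the separation of roles: the shell never talks to tools directly, and tools never see the user; everything flows through the kernel, exactly as applications and devices relate through a conventional operating system.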
The emergence of this architecture not only signals that ecosystem construction has entered its deeper, more difficult phase but also foreshadows the rise of a vast native agent ecosystem. Although the ultimate winner remains uncertain, it is safe to predict that tech giants like Google will not sit idly by, and future competition will only intensify.

From “Super App” to “General Computing Platform,” OpenAI’s transformation has changed not only its own positioning but also the application forms and interaction methods of the AI industry. In the new model, agents built by developers can operate browsers, call other software’s APIs, and manage files, becoming the layer above all software: the central dispatch hub. This is the essence of a “General Computing Platform,” as the sketch below illustrates.
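Continuing the Python style above, here is a hedged sketch of the dispatch-hub idea; the tool registry and the trivial keyword-based planner are invented stand-ins for what would really be model-driven reasoning.

```python
# Illustrative only: the tools and the keyword "planner" are toy
# stand-ins; a real agent would plan with the model itself.

from typing import Callable, Dict, List

TOOLS: Dict[str, Callable[[str], str]] = {
    "browser": lambda task: f"browser: researched '{task}'",
    "files":   lambda task: f"files: wrote report for '{task}'",
    "api":     lambda task: f"api: sent notification about '{task}'",
}


def plan(request: str) -> List[str]:
    """Stand-in for the model's planning step: decide which pieces of
    software the agent must drive for one natural-language request."""
    steps: List[str] = []
    if "research" in request:
        steps.append("browser")
    if "report" in request:
        steps.append("files")
    if "notify" in request:
        steps.append("api")
    return steps or ["browser"]


def dispatch(request: str) -> List[str]:
    # The agent sits above the individual applications and sequences
    # them, which is what makes it a dispatch hub rather than one
    # more app living in its own silo.
    return [TOOLS[step](request) for step in plan(request)]


for result in dispatch("research competitors, draft a report, notify the team"):
    print(result)
```

What matters is not the toy logic but the shape: one natural-language request fans out into operations across several pieces of software, with the agent, not the user, doing the coordination.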
The root of this transformation lies in a fundamental change in how intelligence is supplied. In the past, CPUs provided the computing power and programmers provided the intelligence; now GPUs provide the computing power and large models have become the primary suppliers of intelligence. This change inevitably reshapes upper-layer application forms and interaction methods: from “categorization” to “dialogue,” and from isolated “silos” to being “unified and scheduled by the operating system.”
Looking back at the judgment from two years ago, this observer also argued that the AI industry would struggle to form sustainable business models in the short term and might even incur greater losses. That judgment was based on an in-depth analysis of business fundamentals, growth models, and vertical ecosystems. He believed at the time that core business factors such as customer bargaining power, cost structure, and the competitive landscape had not improved with the technological breakthrough; on the contrary, they were worsened by the costs of computing power, data, and talent, and by the pressure of constant model upgrades. At the same time, building AI platforms requires deep specialization in a domain to form systemic products, which runs counter to the internet playbook of pursuing “speed.”
Reality has largely confirmed this judgment. Apart from a few leading players, the vast majority of AI startups worldwide are still mired in heavy losses, searching for product-market fit (PMF). Market sentiment has shifted as well: merely boasting about parameter counts is meaningless, and investors and customers now focus on whether AI can solve specific problems. Industry consensus is converging on “systemic products” and “vertical solutions.”
Regarding vertical ecosystems, specialized models and applications in fields such as law, finance, healthcare, and education are emerging one after another. Enterprises are gradually realizing that general-purpose models can only cover generic needs, while core scenarios must rely on vertical models deeply integrated with their own data and workflows. This confirms the “octopus ecosystem” judgment: each vertical field will grow its own large model and ecosystem, like arms extending from a common body. A hedged sketch of this routing pattern follows.
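Again in illustrative Python, this is one way the “octopus” pattern might be wired: generic requests go to a general-purpose model (the body), while requests in core domains go to vertical stacks fused with the enterprise’s own data (the arms). The stack names and the keyword routing are invented for the example.

```python
# Hypothetical router for the "octopus" pattern: the domain stacks and
# the keyword matching are invented; real routing would use a classifier
# or the general model itself to pick the arm.

from typing import Dict

VERTICAL_STACKS: Dict[str, str] = {
    "contract":  "law-llm + the firm's precedent database",
    "diagnosis": "med-llm + the hospital's clinical records",
    "loan":      "fin-llm + the bank's risk workflows",
}


def route(query: str) -> str:
    """Send the query to the arm whose domain keyword it mentions;
    fall back to the general-purpose model for everything else."""
    for keyword, stack in VERTICAL_STACKS.items():
        if keyword in query:
            return f"routed to vertical stack: {stack}"
    return "routed to general-purpose model"


print(route("review this supplier contract clause"))   # vertical arm
print(route("summarize today's technology news"))      # general body
```

The vertical arm wins on core scenarios precisely because of what it is fused with: proprietary data and workflows that a general-purpose model never sees.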
So how does one develop such forward-looking judgment? The key lies in shuttling between “reality” (the technical facts) and “concept” (abstract ideas), thinking and deducing independently. First, return to the technological origin: strip away the halo bestowed by media and capital and face the technical core of LLMs directly. Then, find the most fitting abstract concept to map it onto, such as mapping LLMs to the concept of an operating system. Once the “concept” is fixed, one can mobilize all of one’s historical knowledge and business principles about operating systems for deduction. Finally, map the deduced conclusions back onto the real world for verification.
This process requires the thinker to combine the pragmatic spirit of an engineer with the abstractive capability of a philosopher. What all forward-looking thinking that withstands the test of time has in common is not necessarily predicting the future, but finding solid footholds among uncertain elements, anchoring the analysis in the origins of the technology, and engaging courageously in abstraction and deduction.