
2025 Latest AI: Sora 2 Ignites a Video Revolution


On September 30, U.S. local time, OpenAI officially released its next-generation video generation model, Sora 2, which achieves leapfrog breakthroughs in image quality and physical simulation. Concurrently, OpenAI launched the Sora app, which pairs Sora 2's capabilities with social features, on the U.S. region of Apple's App Store. In just four days, the app claimed the top spot on the U.S. App Store's free apps chart. Short videos generated by Sora 2 have also spread widely across video platforms such as Douyin, Bilibili, and Instagram, underscoring the model's extraordinary popularity.

Compared with its predecessor, Sora 2's breakthrough lies in its ability to simulate the physical world. Previous video generation models often struggled to ensure the plausibility of object movements (e.g., water flow trajectories, changes in light and shadow), the coherence of human actions, and the temporal consistency of complex scenes. By integrating more advanced diffusion models with a Transformer architecture, Sora 2 elevates video generation from "visually passable" to "narratively expressive," bringing it closer to a "simulated world" effect.
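To make the pairing of "diffusion model plus Transformer" concrete, here is a minimal PyTorch-style sketch of one denoising step over patchified video latents. The `VideoDenoiser` name, layer sizes, and step coefficient are illustrative assumptions, not Sora 2's actual (undisclosed) architecture:

```python
import torch
import torch.nn as nn

class VideoDenoiser(nn.Module):
    """Toy diffusion Transformer: predicts the noise in video latent patches."""
    def __init__(self, dim=256, depth=4, heads=8):
        super().__init__()
        layer = nn.TransformerEncoderLayer(
            d_model=dim, nhead=heads, batch_first=True, norm_first=True
        )
        self.blocks = nn.TransformerEncoder(layer, num_layers=depth)
        self.time_embed = nn.Sequential(
            nn.Linear(1, dim), nn.SiLU(), nn.Linear(dim, dim)
        )
        self.out = nn.Linear(dim, dim)

    def forward(self, tokens, t):
        # tokens: (batch, num_patches, dim) spatiotemporal patches of noisy latents
        # t: (batch,) diffusion timestep, conditioning every token
        cond = self.time_embed(t[:, None].float())    # (batch, dim)
        h = self.blocks(tokens + cond[:, None, :])    # attend across all patches
        return self.out(h)                            # predicted noise per patch

# One denoising step: subtract a fraction of the predicted noise.
model = VideoDenoiser()
latents = torch.randn(2, 16 * 8 * 8, 256)  # 16 frames x 8x8 patches per frame
t = torch.tensor([500, 500])
noise_pred = model(latents, t)
latents = latents - 0.1 * noise_pred  # real schedules use timestep-dependent coefficients
```

Repeating such a step from pure noise down to timestep zero, with the schedule coefficients a real sampler would use, is what turns random latents into coherent video.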

[Image: The simulated world of Sora 2]

First, the physical logic of dynamic scenes is highly accurate. Over the past six months, OpenAI's team focused on the core challenge of teaching the model to truly simulate real-world behavior, an effort that has now yielded significant results. In videos generated by Sora 2, object movements adhere to real-world physical laws: shattering glass splinters along natural trajectories, the grip between a car's tires and the road during turns looks lifelike, and even weather effects (such as water splashing when raindrops hit the ground) are rendered so realistically that flaws are hard to spot.

Advances in human imagery further highlight Sora 2's breakthroughs. In one demo video, a person's facial details and clothing remain stable throughout. Even the common problem of distorted fingers (e.g., missing or extra digits) in AI-generated videos has been largely resolved: through a spatiotemporal joint attention mechanism that establishes connections between video frames, the rate of abnormal finger counts has plummeted from 17% to 0.3%, making the videos nearly indistinguishable from real footage.
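A minimal sketch of what "spatiotemporal joint attention" means in practice, assuming per-frame patch tokens: flattening the time and space axes into one sequence lets every patch attend to patches in every other frame, which is the kind of cross-frame link credited here with keeping hands and faces stable. Shapes and names are illustrative, not Sora 2's actual implementation:

```python
import torch
import torch.nn as nn

frames, patches, dim = 16, 64, 128          # 16 frames, 64 patches each
x = torch.randn(1, frames, patches, dim)    # (batch, T, P, dim) per-frame tokens

attn = nn.MultiheadAttention(embed_dim=dim, num_heads=8, batch_first=True)

# Joint attention: merge the time and space axes so one attention matrix
# spans all T*P tokens at once, rather than attending within each frame.
tokens = x.reshape(1, frames * patches, dim)    # (batch, T*P, dim)
out, weights = attn(tokens, tokens, tokens)     # every patch sees every frame

# `weights` is (batch, T*P, T*P): entry [i, j] links patch i in one frame to
# patch j in any frame, which is what enforces temporal consistency of details.
print(out.shape, weights.shape)
```

The trade-off is cost: joint attention scales with the square of T*P, which is one reason long, high-resolution video generation remains expensive.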

Second, Sora 2 has improved at generating longer videos and supporting multi-camera storytelling. The model can create 20-second short videos with coherent storylines and supports multi-angle camera switching within the same scene, meaning the AI now possesses a rudimentary "directorial mindset," automatically planning camera language from text prompts. Audio generation is also fully synchronized: dialogue aligns with lip movements, and ambient sounds and action-related sound effects adapt to scene changes, delivering a "ready-to-use" result that eliminates tedious post-production audio editing.

The launch of the Sora app has also made it simple for users to create and share such videos. Typing a text prompt is enough to generate a video, and users can upload their own image and video material to create personalized AI avatars. More notably, the "Cameo" feature lets users collaborate with friends' avatars to co-create content; according to reports, many OpenAI employees made new connections through "Cameo" during internal testing. Sora has not only transformed video production but also created new interaction models, offering businesses new marketing channels. It is fair to say that the birth of Sora 2 has driven generative AI to leapfrog progress, enabling AI to better understand the world and making the creation of a "virtual world" a tangible possibility.

Naturally, Sora 2 has also sparked considerable controversy, with the biggest debate centered on its potential disruption of copyright norms. At launch, OpenAI allowed the generation of videos featuring copyrighted characters unless rights holders proactively contacted OpenAI to opt out, a strategy that completely upended traditional authorization models. Soon after, the Sora app's feed began filling with fan-made content using well-known IPs such as Nintendo, Disney, and Pokémon, attracting a large user base but also alarming copyright giants like Disney, which moved quickly to opt out. The pressure forced OpenAI to reverse its policy from "opt-out" to "opt-in": copyrighted characters can now appear in Sora 2-generated videos only if their rights holders actively choose to participate. How to balance content playability against copyright protection, and how this multi-stakeholder tug-of-war will end, remain worth watching in the latest AI news.
