OpenAI, the artificial intelligence (AI) company behind ChatGPT, has entered the defense market, moving into a field that Silicon Valley shunned only a few years ago but is now embracing. OpenAI and defense technology startup Anduril Industries have jointly announced a strategic partnership to develop and responsibly deploy advanced AI solutions for U.S. national security missions.
Founded in 2017, Anduril develops and deploys integrated autonomous solutions built around a range of sensors, and has experience fielding robotic systems that automate operations in tactical environments. Brian Schimpf, co-founder and CEO of Anduril Industries, said the company's defense solutions meet urgent operational needs of the United States and its allies, and that the collaboration with OpenAI will let Anduril draw on world-class AI expertise to address urgent gaps in global air defense capabilities. The two companies are committed to developing responsible solutions that enable military and intelligence personnel to make faster and more accurate decisions in high-pressure situations.
OpenAI CEO Sam Altman said in a statement that OpenAI builds artificial intelligence to benefit as many people as possible and to ensure the technology upholds democratic values. The collaboration with Anduril, he said, will help ensure OpenAI's technology protects U.S. military personnel and will help national security agencies understand and use it responsibly to safeguard the safety and freedom of American citizens.
Combining Counter-Unmanned Aerial Systems with Artificial Intelligence
OpenAI has partnered with Anduril Industries to integrate its AI technology into the weapons manufacturer's counter-unmanned aerial systems, the AI developer's most significant step into the defense field to date. The two parties will combine OpenAI's advanced AI models with Anduril's high-performance defense systems and Lattice software platform to strengthen national air defenses and protect U.S. and allied military personnel from deadly attacks by aerial devices such as drones.
The two companies said Anduril will rely on OpenAI's technology to better detect and respond to unmanned aerial threats, primarily drones. OpenAI also plans to train AI models on Anduril's counter-unmanned aerial systems (CUAS) threat and operations database, with the potential to rapidly parse time-sensitive data, reduce the workload on human operators, and improve situational awareness. Both parties believe the move will "lay a solid foundation for mission success."
OpenAI will also use Anduril's data to train the software behind these defense systems, so that once a threatening drone is identified, the military can bring it down with electronic jammers, drones, and other means. The two companies will also explore how cutting-edge AI models can quickly synthesize time-sensitive data, reduce the burden on human operators, and improve situational awareness.
The partnership comes at a critical moment: the two companies say the race between China and the United States for dominance in military AI is accelerating. "If the United States gives in, we risk losing the technological advantage that has supported our national security for decades," the companies said in a joint statement. Anduril and OpenAI said the partnership will focus on developing and responsibly deploying AI for national security missions and will help address urgent gaps in global air defense capabilities.
OpenAI’s Changes
Defense contracts have historically been controversial among employees of consumer technology companies, most notably sparking mass protests inside Google in 2018. But the AI industry has recently grown more open to such deals. Altman has also been candid about AI's potential risks, warning that bad actors could use the technology to hack systems and that U.S. adversaries could use powerful new models to create national security problems.
Earlier this year, OpenAI changed its policy on working with the military. Previously, OpenAI's anti-weaponization policy prohibited the use of its large language models for any military or war-related applications. In January, however, the company removed the blanket "military and warfare" ban from its usage policy, stating only that its products cannot be used "to harm yourself or others, including through the development of weapons," a change that permits some cooperation with the military.
In recent months, OpenAI has sought to expand its work with the U.S. government on national security and has expressed a desire to help the public sector adopt AI that upholds democratic values. OpenAI is working with the U.S. Air Force Research Laboratory to use its ChatGPT Enterprise tool for administrative purposes. While OpenAI still prohibits the use of its technology for offensive weapons, it has reached agreements with the U.S. Department of Defense on cybersecurity work and other projects. Earlier this year, OpenAI also added former U.S. National Security Agency director Paul Nakasone to its board of directors and hired former Department of Defense official Sasha Baker to build a team focused on national security policy.
Some other Silicon Valley technology companies are taking similar measures. OpenAI’s competitor Anthropic announced a partnership with Palantir Technologies and Amazon to provide its technology to US intelligence and defense agencies. Last month, Meta also opened its artificial intelligence model to US defense agencies and contractors.
The anti-drone industry matters for both the low-altitude economy and military development. As the low-altitude economy grows, challenges such as airspace management keep surfacing, and risks such as illegal flights by low-altitude aircraft, drones above all, continue to hamper the industry. In the military sphere, the risks posed by low-altitude aircraft such as drones are likewise becoming prominent, making the growth of the anti-drone industry an inevitable trend. Many large technology companies take the field seriously and continue to invest in research and development, giving the industry significant future potential. With the rise of electronic warfare, which uses jammers to block the GPS signals and radio frequencies that drones rely on for flight, AI may become increasingly important for keeping drones on mission in the air. AI can also help soldiers and military commanders analyze large volumes of battlefield data.