Tuesday, 8 October 2024

Harnessing Space for AI’s Future Power Needs


The world has been scrambling for AI computing power to keep up with the rapid pace of AI development, and the more forward-thinking (OpenAI CEO Sam Altman among them) have pushed the issue all the way down to power supply. As OpenAI trains its next generation of large models, its computing clusters are reportedly straining local power grids. If infrastructure capacity doesn't improve at the same pace as AI technology, energy may become the bottleneck on the road to AGI.

The growing demand for AI power supply

Against this backdrop, Lumen Orbit, a Y Combinator-funded startup, drew attention with a proposal: build AI computing clusters in space, where there is direct solar power around the clock, passive cooling anywhere and anytime, and the freedom to scale.

According to the latest report, Lumen Orbit's plan to build a data center in space is already under way: the company intends to launch its first satellite next year and progressively larger iterations every year after that, until total server power reaches gigawatt scale.

Lumen Orbit has established a payload manufacturing facility in Redmond, Washington, to design, build, and test its first spacecraft, which will carry the fastest GPU ever launched into space, roughly 100 times faster than the most powerful GPU flown to date. The first satellite, a 60-kilogram demonstrator, is expected to launch in May 2025 as a rideshare payload on a SpaceX Falcon 9, followed in 2026 by a usable micro data center.

Philip Johnston, the company's CEO, said Lumen is working with Ansys and SolidWorks on satellite design and development and is in the process of filing applications with the Federal Communications Commission and the International Telecommunication Union.

While fitting a data center into spacecraft-class payload capacity poses significant challenges, Lumen Orbit has worked through a number of conceptual designs from first principles and has found no insurmountable obstacles. With new, reusable, cost-effective, heavy-lift launch vehicles such as Starship and New Glenn coming into service, coupled with the proliferation of on-orbit networks, the vision of orbital computing may become very real.

Training big models in space for efficiency advantages

Why build an AI computing cluster in space? In theory, a space data center can draw on high-intensity solar energy around the clock, independent of the day/night cycle, weather, and atmospheric attenuation. This makes for much lower marginal energy costs and significant operational savings compared with the ground.

Lumen Orbit has done the math: compared with the average 24% capacity factor of terrestrial solar farms in the U.S., the company's proposed solar arrays in space would have a capacity factor greater than 95%; there is no day/night cycle, and the panels can be kept perpendicular to the sun's rays regardless of season or weather. As a result, a solar array in space would generate more than five times the energy of the same array on Earth.
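As a rough sanity check, the sketch below reproduces that ">5x" figure from the capacity factors quoted above plus the absence of atmospheric attenuation. The irradiance values (about 1361 W/m² in orbit versus roughly 1000 W/m² peak at the surface) are standard reference numbers, not Lumen Orbit's, and panel efficiency cancels out of the ratio.

```python
# Rough check of the ">5x" claim using the article's capacity factors and
# standard irradiance values; panel efficiency cancels out of the ratio.

SOLAR_CONSTANT_SPACE = 1361.0    # W/m^2 above the atmosphere
PEAK_IRRADIANCE_GROUND = 1000.0  # W/m^2, typical clear-sky peak at the surface

CF_GROUND = 0.24  # average US terrestrial solar-farm capacity factor
CF_ORBIT = 0.95   # capacity factor claimed for the orbital array

HOURS_PER_YEAR = 24 * 365

# Annual energy collected per square meter of panel, in kWh.
ground_kwh = PEAK_IRRADIANCE_GROUND * CF_GROUND * HOURS_PER_YEAR / 1000
orbit_kwh = SOLAR_CONSTANT_SPACE * CF_ORBIT * HOURS_PER_YEAR / 1000

print(f"ground: {ground_kwh:.0f} kWh/m^2/yr, orbit: {orbit_kwh:.0f} kWh/m^2/yr")
print(f"advantage: {orbit_kwh / ground_kwh:.1f}x")  # ~5.4x, consistent with ">5x"
```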

Assuming a $5 million rocket launch can deliver 40 MW of data center capacity, and the material cost of the solar cells is $0.03 per watt, amortizing everything over 10 years gives an equivalent energy cost of about $0.002/kWh. By comparison, the average wholesale cost of electricity in the US, UK, and Japan is $0.045/kWh, $0.06/kWh, and $0.17/kWh respectively, so orbital data centers could provide energy at a lower cost than today's prices.
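The arithmetic behind the $0.002/kWh figure is easy to reproduce. The sketch below uses only the numbers stated in the paragraph (a $5 million launch, 40 MW of data-center power, $0.03/W solar cells, 10-year amortization) plus the 95% capacity factor from the previous section; everything else is an assumption.

```python
# Back-of-the-envelope reproduction of the ~$0.002/kWh equivalent energy cost.

LAUNCH_COST_USD = 5_000_000
CLUSTER_POWER_W = 40_000_000   # 40 MW of data-center power per launch
SOLAR_CELL_COST_PER_W = 0.03   # material cost, $/W
AMORTIZATION_YEARS = 10
CAPACITY_FACTOR = 0.95         # from the previous section

capex_usd = LAUNCH_COST_USD + CLUSTER_POWER_W * SOLAR_CELL_COST_PER_W
energy_kwh = (CLUSTER_POWER_W / 1000) * 24 * 365 * AMORTIZATION_YEARS * CAPACITY_FACTOR

print(f"capex: ${capex_usd / 1e6:.1f}M")                      # $6.2M
print(f"energy: {energy_kwh / 1e9:.2f} billion kWh")          # ~3.33 billion kWh
print(f"equivalent cost: ${capex_usd / energy_kwh:.4f}/kWh")  # ~$0.0019/kWh
```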

The next issue is heat dissipation. The "effective" ambient temperature of deep space is about −270°C, the temperature of the cosmic microwave background radiation. To use deep space as a heat sink for waste heat, a spacecraft needs to keep its radiators out of direct sunlight and deploy enough radiating area. A 1×1 m black panel held at 20°C can radiate about 850 watts into deep space, roughly three times the electricity generated per square meter of solar panel, so the radiators need to be about one-third the area of the solar array, depending on their configuration.
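The 850-watt figure follows from the Stefan-Boltzmann law. Below is a minimal check, assuming an ideal black panel radiating from both faces toward deep space; the emissivity and two-sided assumption are mine, chosen to show how the quoted number can arise.

```python
# Stefan-Boltzmann check of "~850 W from a 1x1 m black panel at 20 C".

SIGMA = 5.670e-8     # Stefan-Boltzmann constant, W/(m^2 K^4)
T_PANEL_K = 293.15   # 20 C
T_SPACE_K = 2.7      # cosmic microwave background (negligible here)
AREA_M2 = 1.0        # per face
EMISSIVITY = 1.0     # ideal black surface (assumption)
FACES = 2            # a flat deployable radiator can radiate from both sides

radiated_w = FACES * EMISSIVITY * SIGMA * AREA_M2 * (T_PANEL_K**4 - T_SPACE_K**4)
print(f"radiated power: {radiated_w:.0f} W")  # ~837 W, close to the quoted ~850 W
```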

In space, simpler and more efficient cooling architectures are possible than with conventional high-performance computing coolers, and Lumen Orbit estimates it can achieve a PUE comparable to state-of-the-art hyperscale terrestrial data centers. In addition, some orbits have virtually no "ambient temperature" fluctuation (solar radiation varies by no more than about 0.2%), providing a highly stable thermal and mechanical environment that helps with thermal control.
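For readers unfamiliar with the metric, PUE (power usage effectiveness) is simply total facility power divided by the power delivered to the IT equipment, so a value near 1.0 means almost no cooling or overhead loss. The figures below are illustrative, not Lumen Orbit's.

```python
def pue(it_power_kw: float, overhead_kw: float) -> float:
    """Power usage effectiveness: total facility power over IT power."""
    return (it_power_kw + overhead_kw) / it_power_kw

# Illustrative values only: a 40 MW IT load with different overhead budgets.
print(pue(it_power_kw=40_000, overhead_kw=4_000))  # 1.10, hyperscale-class
print(pue(it_power_kw=40_000, overhead_kw=2_000))  # 1.05, what passive radiators aim for
```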

Then there's scalability. Orbital data centers could unlock cluster sizes never before seen on Earth, drawing power well beyond the gigawatt range. They could scale almost linearly and almost indefinitely, free of the physical and planning constraints of terrestrial projects, expanding in all directions in 3D space.

If current trends in large models continue, clusters with several gigawatts of power will be needed to train the largest LLMs starting around 2027. Assuming a 5 GW cluster is used to train a model such as Llama 5 or GPT-6, its power draw alone would exceed the output of the largest power plant in the US. Such clusters are simply not feasible on today's energy infrastructure, and finding new approaches is critical to training the next generation of AI models.

Once AI workloads are running in space, other satellites will be able to send the data they collect to Lumen Orbit's constellation, have the onboard GPUs perform inference, and receive the results back.

Lumen Orbit isn’t the only company working to put data centers in orbit: the EU-funded project ASCEND has been studying the feasibility of a space data center, and Texas-based Axiom Space says it’s working with Kepler Space and Skyloom to build an orbital data center on Axiom’s first capsule, which is expected to launch in 2026-2027.
