Computational Power and Artificial Intelligence

A comprehensive analysis of compute's role in AI development, infrastructure demands, and policy implications

By Jai Vipra & Sarah Myers West | AI Now Institute

Published: September 2023

Report Overview

This report examines the critical role of computational power ("compute") in artificial intelligence systems. As AI models grow in size and complexity, their computational requirements are rising at an unprecedented rate, with consequences across technical, environmental, economic, and policy domains.

We analyze the full stack of computational infrastructure—from hardware components to data centers—and explore how compute constraints and allocations are shaping AI development, who can participate in it, and what kinds of AI systems get built.

Key Data Points

Compute Demand Growth

The computational requirements for training large AI models have been doubling every 3-4 months since 2012, far outpacing Moore's Law.
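
As a rough illustration of what this doubling rate implies, the sketch below compares cumulative growth over a decade against Moore's Law. The 3.5-month doubling time is an assumed midpoint of the range above, and Moore's Law is taken as a ~24-month doubling; the figures are illustrative, not drawn from the report.

```python
# Compare a ~3.5-month compute doubling time against Moore's Law (~24 months).
# Both doubling times are assumptions for illustration.

def growth_factor(years: float, doubling_months: float) -> float:
    """Total multiplicative growth after `years` at a given doubling time."""
    return 2 ** (years * 12 / doubling_months)

decade = 10  # roughly the 2012-2022 period
print(f"Training compute: ~{growth_factor(decade, 3.5):.1e}x")  # ~2.1e+10x
print(f"Moore's Law:      ~{growth_factor(decade, 24):.0f}x")   # ~32x
```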

Energy Consumption

Training a single large language model can consume electricity equivalent to the annual energy use of 100+ US homes.
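
A minimal sanity check of a comparison like this, using assumed public estimates that are not taken from the report: roughly 1,300 MWh for a GPT-3-scale training run (Patterson et al., 2021) and about 10,600 kWh of annual electricity use for an average US household (EIA).

```python
# Convert an assumed training-run energy figure into "US homes per year".
TRAINING_ENERGY_MWH = 1300   # assumed, GPT-3-scale published estimate
HOME_ANNUAL_KWH = 10_600     # assumed average US household consumption

homes = TRAINING_ENERGY_MWH * 1000 / HOME_ANNUAL_KWH
print(f"~{homes:.0f} homes' annual electricity use")  # ~123 homes
```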

Market Concentration

Just three companies control over 65% of the cloud computing market that provides AI training infrastructure.

Carbon Footprint

The AI sector's computational demands could account for up to 3% of global electricity consumption by 2025.

Key Insights Summary

Compute Defines AI Capabilities

The scale of computational resources directly determines what kinds of AI models can be developed and who can develop them, creating significant barriers to entry.

Environmental Impact

The growing computational demands of AI systems have substantial environmental costs, including significant energy consumption and carbon emissions.

Supply Chain Vulnerabilities

AI compute depends on complex global supply chains with concentrated manufacturing and potential single points of failure.

Policy Lag

Current policy frameworks have not kept pace with the rapid expansion of computational infrastructure for AI, creating regulatory gaps.

Hardware Lottery Effect

Research directions in AI are heavily influenced by available hardware, with approaches suited to current computational infrastructure receiving disproportionate attention.

Geopolitical Implications

Control over computational resources has become a key factor in international competition, with export controls and industrial policies shaping access to AI capabilities.

Report Contents

1. Introduction: The Centrality of Compute in AI

Computational power has become a fundamental determinant of AI capabilities. Unlike earlier eras, when algorithmic innovation was the primary driver of progress, contemporary AI advances depend increasingly on massive computational resources.

This shift has profound implications for who can participate in cutting-edge AI research, what kinds of AI systems get developed, and how the benefits of AI are distributed across society.

2. How Compute Demand Shapes AI Development

The escalating compute requirements for state-of-the-art AI models create significant barriers to entry, concentrating development capability among well-resourced technology companies.

This computational arms race shapes research priorities, favoring approaches whose performance scales with additional compute over methods that may be more efficient but benefit less from sheer computational scale.

  • Startups vs. Incumbents: The compute advantage of large technology companies creates significant competitive moats
  • Research Directions: Compute-intensive approaches receive disproportionate attention and funding
  • Global Distribution: Compute capacity is unevenly distributed globally, affecting which regions can participate in AI development

3. Measuring Compute in Large-Scale AI Models

Computational requirements for AI training are typically measured in floating-point operations (FLOPs). The most advanced contemporary models require training runs on the order of 10^23 to 10^25 FLOPs.

These requirements have been growing at a rate that far outpaces improvements in hardware efficiency, leading to exponential increases in the cost of training state-of-the-art models.
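
The order of magnitude of these training runs can be recovered with a standard approximation: for dense transformer models, training compute is roughly 6 × parameters × training tokens (Kaplan et al., 2020). The sketch below plugs in GPT-3-like values, which are assumptions for illustration rather than figures from the report.

```python
# Rule-of-thumb training cost for a dense transformer: FLOPs ~= 6 * N * D,
# where N is the parameter count and D is the number of training tokens.

def training_flops(n_params: float, n_tokens: float) -> float:
    """Approximate total training compute for a dense transformer."""
    return 6 * n_params * n_tokens

# Assumed GPT-3-like values: 175B parameters, 300B training tokens.
flops = training_flops(n_params=175e9, n_tokens=300e9)
print(f"~{flops:.2e} FLOPs")  # 3.15e+23, inside the 10^23-10^25 range above
```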

4. AI Compute Hardware Stack

The AI hardware ecosystem includes specialized processors optimized for parallel computation, particularly GPUs and increasingly domain-specific architectures like TPUs and other AI accelerators.

Different hardware configurations are optimized for different phases of the AI lifecycle: training and inference impose distinct performance and efficiency requirements.
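
One way to see why the two phases pull hardware design in different directions is to compare their compute profiles: training is a single throughput-bound job, while inference is a stream of small, latency-sensitive requests. The sketch below uses the common 6ND (training) and 2N-per-token (inference) approximations for dense transformers, with assumed model and dataset sizes.

```python
# Contrast one-off training cost with recurring per-token inference cost.
N = 70e9    # parameters (assumed)
D = 1.4e12  # training tokens (assumed)

train_flops = 6 * N * D  # one-off, throughput-bound: ~5.9e+23 FLOPs
per_token = 2 * N        # recurring, latency-bound: ~1.4e+11 FLOPs/token

print(f"Training:  ~{train_flops:.1e} FLOPs total")
print(f"Inference: ~{per_token:.1e} FLOPs per generated token")
```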

5. Hardware Components and Supply Chains

The global supply chain for AI hardware involves complex interdependencies across design, fabrication, assembly, and distribution, with significant geographic concentration at each stage.

  • Chip Design: Dominated by companies like NVIDIA, AMD, and Google
  • Fabrication: Heavily concentrated in Taiwan (TSMC) and South Korea (Samsung)
  • Assembly and Testing: Primarily located in East and Southeast Asia
  • Raw Materials: Dependence on specialized materials creates additional supply chain vulnerabilities

6. Data Center Infrastructure

Data centers represent the physical infrastructure that houses computational resources for AI training and deployment. Their geographic distribution, energy sources, and cooling systems significantly impact the economics and environmental footprint of AI compute.

Major technology companies are increasingly developing specialized data centers optimized specifically for AI workloads, with particular attention to power delivery and cooling systems.

7. Environmental Impact and Sustainability

The computational intensity of modern AI systems creates substantial environmental externalities, including:

  • Significant electricity consumption for both training and inference
  • Water usage for cooling systems in data centers
  • Electronic waste from hardware turnover
  • Carbon emissions from energy generation

Efforts to mitigate these impacts include improving computational efficiency, locating data centers in regions with renewable energy, and developing more sustainable cooling technologies.
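
Siting matters because operational emissions scale linearly with the carbon intensity of the local grid. The sketch below makes that relationship explicit; the energy figure and grid intensities are approximate assumed values, not data from the report.

```python
# Operational emissions ~= energy consumed * grid carbon intensity.
TRAINING_ENERGY_MWH = 1300  # assumed energy for one large training run

GRID_INTENSITY_KG_PER_MWH = {  # approximate, assumed grid carbon intensities
    "coal-heavy grid": 800,
    "US average grid": 390,
    "low-carbon grid": 30,  # e.g. hydro- or nuclear-dominated
}

for grid, intensity in GRID_INTENSITY_KG_PER_MWH.items():
    tonnes_co2 = TRAINING_ENERGY_MWH * intensity / 1000
    print(f"{grid}: ~{tonnes_co2:,.0f} t CO2")
# coal-heavy ~1,040 t; US average ~507 t; low-carbon ~39 t
```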

8. Policy Responses and Governance

Current policy frameworks have struggled to keep pace with the rapid expansion of computational infrastructure for AI. Key policy considerations include:

  • Environmental regulations for data center emissions and energy use
  • Antitrust considerations around concentrated compute resources
  • Export controls on advanced computing hardware
  • Standards for measuring and reporting computational efficiency
  • Public investment in compute infrastructure for research

9. Conclusions and Future Directions

Computational power has emerged as a critical factor shaping the development and deployment of artificial intelligence. The escalating compute requirements create significant barriers to entry, environmental challenges, and supply chain vulnerabilities.

Addressing these challenges requires coordinated action across technical improvements in efficiency, policy responses to manage externalities, and structural approaches to ensure broader access to computational resources.

Future research should focus on developing less computationally intensive AI methods, improving measurements of computational efficiency, and designing governance mechanisms for compute allocation and access.