Computational Power and Artificial Intelligence

A comprehensive analysis of compute's role in AI development, infrastructure demands, and policy implications

By Jai Vipra & Sarah Myers West | AI Now Institute

Published: September 2023

Report Overview

This report examines the critical role of computational power ("compute") in artificial intelligence systems. As AI models grow in size and complexity, their computational requirements are increasing at an unprecedented rate, creating new challenges and implications across technical, environmental, economic, and policy domains.

We analyze the full stack of computing infrastructure, from hardware components to data centers, exploring how constraints on and allocation of computational resources shape the direction of AI development, who can participate in it, and the kinds of AI systems that get built.

Key Data Points

Growth in Compute Demand

The computational requirements for training large AI models have been doubling every 3-4 months since 2012, far outpacing Moore's Law.
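To make the gap concrete, the following sketch compares a 3.5-month doubling time (the midpoint of the 3-4 month range above) with Moore's Law's roughly 24-month doubling. The time horizon and doubling figures are illustrative assumptions, not figures from the report.

```python
# Back-of-envelope comparison of compute-demand growth vs. Moore's Law.
# Doubling times are illustrative: 3.5 months for frontier AI training
# compute, 24 months for transistor density.

def growth_factor(years: float, doubling_months: float) -> float:
    """Total multiplicative growth after `years` at the given doubling time."""
    return 2 ** (years * 12 / doubling_months)

ai_growth = growth_factor(6, 3.5)    # frontier training compute
moore_growth = growth_factor(6, 24)  # Moore's Law baseline

print(f"AI training compute over 6 years: ~{ai_growth:,.0f}x")
print(f"Moore's Law over the same period: ~{moore_growth:.0f}x")
```

Over six years, a 3.5-month doubling compounds to growth on the order of a million-fold, against roughly 8x for Moore's Law, which is why hardware efficiency gains alone cannot absorb the demand.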

Energy Consumption

Training a single large language model can consume electricity equivalent to the annual energy use of 100+ US homes.
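The household comparison can be sanity-checked with external estimates that are not part of this report: roughly 1,287 MWh for one GPT-3-scale training run (Patterson et al., 2021) and about 10.7 MWh of average annual electricity use per US household (EIA).

```python
# Illustrative check of the "100+ US homes" comparison. Both figures
# are external published estimates, not numbers from this report.

TRAINING_MWH = 1287.0      # estimated energy for one large training run
HOME_MWH_PER_YEAR = 10.7   # average annual US household electricity use

homes_equivalent = TRAINING_MWH / HOME_MWH_PER_YEAR
print(f"One training run ≈ {homes_equivalent:.0f} US homes for a year")
```

Under these assumptions, a single run works out to roughly 120 household-years of electricity, consistent with the "100+" figure.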

Market Concentration

Just three companies control more than 65% of the cloud computing market that provides AI training infrastructure.

Carbon Footprint

The AI sector's computational demands could account for up to 3% of global electricity consumption by 2025.

Key Insights Summary

Compute Defines AI Capabilities

The scale of computational resources directly determines what kinds of AI models can be developed and who can develop them, creating significant barriers to entry.

Environmental Impact

The growing computational demands of AI systems have substantial environmental costs, including significant energy consumption and carbon emissions.

Supply Chain Vulnerabilities

AI compute depends on complex global supply chains with concentrated manufacturing and potential single points of failure.

Policy Lag

Current policy frameworks have not kept pace with the rapid expansion of computational infrastructure for AI, creating regulatory gaps.

Hardware Lottery Effect

Research directions in AI are heavily influenced by available hardware, with approaches suited to current computational infrastructure receiving disproportionate attention.

Geopolitical Implications

Control over computational resources has become a key factor in international competition, with export controls and industrial policies shaping access to AI capabilities.

Document Contents

1. Introduction: The Centrality of Compute in AI

Computational power has become a fundamental determinant of AI capabilities. Unlike earlier eras where algorithmic innovations drove progress, contemporary AI advances are increasingly dependent on massive computational resources.

This shift has profound implications for who can participate in cutting-edge AI research, what kinds of AI systems get developed, and how the benefits of AI are distributed across society.

2. How Compute Demand Shapes AI Development

The escalating compute requirements for state-of-the-art AI models create significant barriers to entry, concentrating development capability among well-resourced technology companies.

This computational arms race influences research priorities, favoring approaches that scale with compute over potentially more efficient but less computationally intensive methods.

  • Startups vs. Incumbents: The compute advantage of large technology companies creates significant competitive moats
  • Research Directions: Compute-intensive approaches receive disproportionate attention and funding
  • Global Distribution: Compute capacity is unevenly distributed globally, affecting which regions can participate in AI development

3. Measuring Compute in Large-Scale AI Models

The compute required to train AI models is typically measured in floating-point operations (FLOPs). Training runs for today's state-of-the-art models require on the order of 10^23 to 10^25 FLOPs.

These requirements have been growing at a rate that far outpaces improvements in hardware efficiency, leading to exponential increases in the cost of training state-of-the-art models.
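A widely used rule of thumb from the scaling-laws literature (not from this report) estimates training compute as roughly 6 FLOPs per parameter per training token. The sketch below applies it to a hypothetical model; the parameter and token counts are illustrative, not official figures for any specific system.

```python
# Approximate training compute via the common 6*N*D heuristic:
# FLOPs ≈ 6 × (parameter count N) × (training tokens D).

def training_flops(n_params: float, n_tokens: float) -> float:
    """Estimate total training FLOPs using the 6*N*D rule of thumb."""
    return 6 * n_params * n_tokens

# A hypothetical 70B-parameter model trained on 1.4T tokens:
flops = training_flops(70e9, 1.4e12)
print(f"~{flops:.1e} FLOPs")  # on the order of 10^23, within the range above
```

This heuristic ignores architecture-specific overheads, but it captures why parameter counts and dataset sizes jointly drive the 10^23-10^25 FLOP figures cited above.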

4. The AI Compute Hardware Stack

The AI hardware ecosystem encompasses specialized processors optimized for parallel computation, most notably GPUs, alongside a growing set of domain-specific architectures such as TPUs and other AI accelerators.

Different hardware configurations offer distinct performance and efficiency characteristics and are optimized for different stages of the AI lifecycle, namely training and inference.

5. Hardware Components and Supply Chains

The global supply chain for AI hardware involves complex interdependencies across design, fabrication, assembly, and distribution, with significant geographic concentration at each stage.

  • Chip Design: Dominated by companies such as NVIDIA, AMD, and Google
  • Fabrication: Heavily concentrated in Taiwan (TSMC) and South Korea (Samsung)
  • Assembly and Testing: Located primarily in East and Southeast Asia
  • Raw Materials: Dependence on specialized materials creates additional supply chain vulnerabilities

6. Data Center Infrastructure

Data centers are the physical infrastructure that houses the computational resources for AI training and deployment. Their geographic distribution, energy sources, and cooling systems significantly shape the economics and environmental footprint of AI computing.

Major technology companies are increasingly developing specialized data centers optimized specifically for AI workloads, with particular attention to power delivery and cooling systems.

7. Environmental Impact and Sustainability

The computational intensity of modern AI systems creates substantial environmental externalities, including:

  • Significant electricity consumption for both training and inference
  • Water usage for cooling systems in data centers
  • Electronic waste from hardware turnover
  • Carbon emissions from energy generation

Efforts to mitigate these impacts include improving computational efficiency, locating data centers in regions with renewable energy, and developing more sustainable cooling technologies.

8. Policy Responses and Governance

Current policy frameworks have struggled to keep pace with the rapid expansion of computational infrastructure for AI. Key policy considerations include:

  • Environmental regulations for data center emissions and energy use
  • Antitrust considerations around concentrated compute resources
  • Export controls on advanced computing hardware
  • Standards for measuring and reporting computational efficiency
  • Public investment in compute infrastructure for research

9. Conclusions and Future Directions

Computational power has emerged as a critical factor shaping the development and deployment of artificial intelligence. The escalating compute requirements create significant barriers to entry, environmental challenges, and supply chain vulnerabilities.

Addressing these challenges requires coordinated action across technical improvements in efficiency, policy responses to manage externalities, and structural approaches to ensure broader access to computational resources.

Future research should focus on developing less computationally intensive AI methods, improving measurements of computational efficiency, and designing governance mechanisms for compute allocation and access.