Why EC2 Capacity Blocks for ML?
With Amazon Elastic Compute Cloud (Amazon EC2) Capacity Blocks for ML, you can easily reserve accelerated compute instances for a future start date. Capacity Blocks support Amazon EC2 P5e, P5, and P4d instances, powered by NVIDIA H200, H100, and A100 Tensor Core GPUs, respectively, as well as Trn1 instances powered by AWS Trainium. EC2 Capacity Blocks are colocated in Amazon EC2 UltraClusters designed for high-performance machine learning (ML) workloads. You can reserve accelerated compute instances for up to six months in cluster sizes of one to 64 instances (512 GPUs or 1,024 Trainium chips), giving you the flexibility to run a broad range of ML workloads. EC2 Capacity Blocks can be reserved up to eight weeks in advance.
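As a sketch, reserving a Capacity Block is a two-step flow: first search for an available offering that matches your instance type, cluster size, duration, and date window, then purchase that offering. The snippet below builds the request parameters for the EC2 `DescribeCapacityBlockOfferings` API via boto3; the instance type, count, and dates are illustrative values, and the live AWS calls are left commented out since they require an account and credentials.

```python
from datetime import datetime, timedelta

def build_offering_query(instance_type, instance_count, duration_hours, start, end):
    """Build request parameters for describe_capacity_block_offerings.

    CapacityDurationHours must be a multiple of 24 (reservations are
    sold in whole-day increments).
    """
    return {
        "InstanceType": instance_type,
        "InstanceCount": instance_count,
        "CapacityDurationHours": duration_hours,
        "StartDateRange": start,
        "EndDateRange": end,
    }

# Illustrative example: a 4-instance P5 block for one week,
# starting sometime in the next two to four weeks.
now = datetime.utcnow()
query = build_offering_query(
    instance_type="p5.48xlarge",
    instance_count=4,
    duration_hours=24 * 7,
    start=now + timedelta(weeks=2),
    end=now + timedelta(weeks=4),
)

# With credentials configured, the two-step flow would look like:
# import boto3
# ec2 = boto3.client("ec2")
# offerings = ec2.describe_capacity_block_offerings(**query)["CapacityBlockOfferings"]
# ec2.purchase_capacity_block(
#     CapacityBlockOfferingId=offerings[0]["CapacityBlockOfferingId"],
#     InstancePlatform="Linux/UNIX",
# )
```

Once purchased, the Capacity Block appears as a capacity reservation, and you launch instances into it when the start date arrives.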
Benefits
Use cases
- NVIDIA
- Arcee
- Amplify Partners
- Canva
- Dashtoon
- Leonardo.Ai
- OctoAI
- Snorkel