Breakdown
- Offers up to 20 petaflops of FP4 horsepower
- Second-gen transformer engine cuts per-neuron precision from the H100's eight bits to four
- Could cost between $30,000 and $40,000
The new B200 artificial intelligence GPU has 208 billion transistors, roughly 2.6 times the H100's 80 billion.
Powerful enough to train artificial intelligence models, the H100 GPU has single-handedly changed Nvidia’s fortunes in the sector. Amazon Web Services and Microsoft Azure deploy the chip in their cloud instances.
Mark Zuckerberg’s Meta reportedly plans to purchase 350,000 H100 chips to help it build an AI model with “human-like intelligence.”
Even as the company basks in the lead it has gained with the H100, Nvidia has already announced the successor to the highly successful AI graphics card at this year's GTC conference.
This new GPU is the Blackwell B200, which boasts 208 billion transistors, up from the 80 billion packed within the H100's compact body. The increased transistor count helps deliver up to 20 petaflops of FP4 processing power.
Nvidia’s chief executive officer (CEO) Jensen Huang, who introduced the GPU to attendees at GTC 2024, claimed that 2,000 Blackwell GPUs can train a model with 1.8 trillion parameters while consuming only 4 megawatts of power.
The same task would require 15 megawatts and 8,000 units of the H100, meaning the B200 cluster draws roughly 73% less power than its predecessor.
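The power comparison above is a simple ratio; a back-of-envelope check (using the keynote's own figures, which are Nvidia's claims rather than independent benchmarks):

```python
# Nvidia's claimed training-power figures for a 1.8T-parameter model
h100_power_mw = 15  # 8,000 H100 GPUs
b200_power_mw = 4   # 2,000 B200 GPUs

reduction = 1 - b200_power_mw / h100_power_mw
print(f"Power reduction: {reduction:.0%}")  # → Power reduction: 73%
```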
Nvidia said this performance boost is due to a second-gen transformer engine that reduces the number of bits for each neuron from eight to four. This gives the Blackwell GPUs twice the computational power, bandwidth, and model size of the Hoppers.
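As a rough illustration of why halving precision doubles effective model size (this is simplified arithmetic, not Nvidia's own accounting, which also involves bandwidth and compute throughput): storing each weight in 4 bits instead of 8 halves the memory it occupies, so the same hardware can hold twice as many parameters.

```python
# Memory footprint of model weights at 8-bit vs. 4-bit precision,
# using the 1.8-trillion-parameter model from the keynote as an example
params = 1.8e12
fp8_bytes = params * 1.0   # 8 bits = 1 byte per weight
fp4_bytes = params * 0.5   # 4 bits = half a byte per weight

print(f"FP8: {fp8_bytes / 1e12:.1f} TB, FP4: {fp4_bytes / 1e12:.1f} TB")
# → FP8: 1.8 TB, FP4: 0.9 TB
```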
Huang concluded that this means the B200 can reduce costs and power consumption by up to 25x.
How much would interested companies like Meta spend to acquire these GPUs? While Nvidia has not released official pricing, Huang suggested that each B200 may cost buyers between $30,000 and $40,000.