TECH NEWS – A retailer has listed a DGX B200 server based on Nvidia's Blackwell architecture, and this high-end AI hardware carries a price tag to match.
Broadberry is the retailer that listed the price of the DGX B200 server. We reported the other day that demand for Blackwell is so strong that Nvidia is said to have a year's worth of inventory booked in advance; the AI boom portends a very bright future for the company. Anyone buying one of these servers needs very deep pockets, as the machine starts at half a million dollars in its base configuration. To be precise, $515,410, so a little more.
So why does it cost so much? Each DGX B200 packs eight B200 GPUs, giving you up to 1.44TB (1,440GB) of HBM3e GPU memory with an impressive aggregate memory bandwidth of 64TB/s. It delivers 72 petaFLOPS of FP8 training performance and 144 petaFLOPS of FP4 inference performance, offers access to Nvidia's networking stack, serves as a building block for DGX BasePOD and DGX SuperPOD deployments, and ships with the Nvidia AI Enterprise and Nvidia Base Command software. The machine is also equipped with two Intel Xeon Platinum 8570 processors, 4TB of DDR5 memory, two 1.92TB NVMe M.2 drives, and eight 3.84TB NVMe SSDs.
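For readers who like to sanity-check spec sheets, here is a minimal Python sketch relating the listed aggregate figures to the eight GPUs inside the chassis. The per-GPU numbers below are simply the listed totals divided by eight; they are illustrative derivations, not values taken from the listing itself.

```python
# Back-of-the-envelope check of the DGX B200's listed aggregate specs.
# Per-GPU figures are derived by dividing the listed totals by eight;
# they are illustrative, not quoted from the Broadberry listing.

NUM_GPUS = 8

total_hbm_gb = 1440   # listed total HBM3e capacity (GB)
total_bw_tbs = 64     # listed aggregate memory bandwidth (TB/s)
fp8_pflops = 72       # listed FP8 training performance (petaFLOPS)
fp4_pflops = 144      # listed FP4 inference performance (petaFLOPS)

per_gpu_hbm_gb = total_hbm_gb / NUM_GPUS   # 180 GB per GPU
per_gpu_bw_tbs = total_bw_tbs / NUM_GPUS   # 8 TB/s per GPU

# FP4 throughput is exactly double FP8, as expected when halving precision.
assert fp4_pflops == 2 * fp8_pflops

print(f"Per GPU: {per_gpu_hbm_gb:.0f} GB HBM3e, {per_gpu_bw_tbs:.0f} TB/s")
```

The arithmetic also puts the price in perspective: $515,410 spread across eight GPUs works out to roughly $64,000 per accelerator before counting the CPUs, memory, storage, and software.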
So it's not a desktop PC, and that's not what Nvidia designed it for. Compared to the previous Hopper architecture it's a huge improvement, and it's no coincidence that big tech companies like Microsoft and Meta are interested in the hardware. For them, $500,000 is not such a big outlay given the sustained growth of artificial intelligence, according to a recent Morgan Stanley analyst note.
Nvidia recently shipped the first batch of DGX B200 systems to OpenAI, with which it has a close relationship. And if a retail listing has now appeared, it's likely that several tech companies have already purchased machines of their own.
Source: WCCFTech, Broadberry