TECH NEWS – According to Nvidia, demand for its GPUs is outstripping supply, as the tech giants will soon be deploying thousands of artificial intelligence chips.
In addition to ChatGPT, other language, image, and video-generation tools rely on massive AI processing power, and this is where Nvidia’s strength lies. That is why major tech companies deploying ChatGPT (such as Microsoft) are using Nvidia GPUs to meet their growing AI needs, which could lead to a shortage of graphics cards equipped with this technology in the coming months.
FierceElectronics reported that OpenAI trained ChatGPT on 10,000 Nvidia GPUs, but since the service came to the public’s attention, its systems have become overloaded. That is why OpenAI introduced ChatGPT Plus, which costs $20 per month and gives subscribers access to the servers even during periods of overload, as well as faster response times and early access to new features and improvements. “It is possible that ChatGPT or other deep learning models could be trained or run on GPUs from other vendors in the future. However, Nvidia GPUs are currently widely used in the deep learning community due to their high performance and CUDA support. CUDA is a parallel computing platform and programming model developed by Nvidia, allowing efficient computation on its GPUs. Many deep learning libraries and frameworks, such as TensorFlow and PyTorch, have built-in support for CUDA and are optimized for Nvidia GPUs,” the site says.
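To make that CUDA point concrete, here is a minimal PyTorch sketch (a hypothetical illustration, not ChatGPT’s actual code; the model and tensor sizes are placeholders) of how a framework targets an Nvidia GPU through CUDA and falls back to the CPU otherwise:

```python
# Minimal sketch of a framework's built-in CUDA support in PyTorch.
# The model and sizes are toy placeholders, not anything from the article.
import torch
import torch.nn as nn

# Pick an Nvidia GPU if CUDA is available, otherwise fall back to the CPU.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# A tiny stand-in for a much larger network.
model = nn.Sequential(
    nn.Linear(512, 512),
    nn.ReLU(),
    nn.Linear(512, 10),
).to(device)

# Putting the input on the same device lets the matrix math run on the GPU.
x = torch.randn(32, 512, device=device)
logits = model(x)
print(logits.shape, device)
```

The same `.to(device)` pattern scales from this toy model up to very large networks: when an Nvidia GPU is present, the framework dispatches the underlying matrix math to CUDA kernels, which is the optimization the quote refers to.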
Forbes reports that both Microsoft and Google will integrate a ChatGPT-like LLM (large language model) into their search engines. For Google, adding this technology to every search would require 512,820 A100 HGX servers with a total of 4,102,568 A100 GPUs, a capital expenditure of about $100 billion in network and server costs, which the site says will never happen. And Investing.com points out that instead of the 10,000 Nvidia GPUs used in the ChatGPT beta, the company is now up to 25,000: “We think that GPT 5 is currently being trained on 25k GPUs – $225 mm or so of Nvidia hardware – and the inference costs are likely much lower than some of the numbers we have seen. Further, reducing inference costs will be critical in resolving the ‘cost of search’ debate from cloud titans.”
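As a quick sanity check on those figures, here is a back-of-the-envelope sketch. It assumes eight GPUs per A100 HGX server (Nvidia’s standard HGX A100 configuration); the per-GPU price is simply what the Investing.com quote implies, not a confirmed figure:

```python
# Back-of-the-envelope check of the numbers quoted above.
# Assumption: one A100 HGX server carries 8 GPUs (the HGX A100 8-GPU board).
servers = 512_820
gpus_per_server = 8
total_gpus = servers * gpus_per_server
print(total_gpus)  # 4,102,560 -- matches the quoted 4,102,568 up to rounding

# Implied per-GPU price from the "25k GPUs - $225 mm" training estimate.
hardware_cost = 225_000_000
gpt5_gpus = 25_000
print(hardware_cost / gpt5_gpus)  # 9000.0 -> roughly $9,000 per A100
```

The rough math is at least internally consistent: millions of GPUs would be needed for search-scale inference, versus tens of thousands to train a single model.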
Gamers beware: Nvidia’s GPU supply will be tight in the first three months of the year due to the Chinese New Year, which will affect high-end cards the most. And these cards offer better AI capability than server hardware at a fraction of the price, so they could be bought up by companies…
Source: WCCFTech