Following up on our previous story, NVIDIA's GPU growth is expected to accelerate in the coming months due to the rising popularity of ChatGPT.
NVIDIA AI GPUs Might Face Shortages Due To Increased Demand From AI Giants Utilizing ChatGPT & Other AI Generation Tools
As reported earlier, ChatGPT and other language/image/video generation tools rely heavily on AI processing power, and that is where NVIDIA's main strength lies. This is why major tech companies leveraging ChatGPT are utilizing NVIDIA's GPUs to power their growing AI requirements. It looks like NVIDIA's prowess in this industry might just cause a shortage of the company's AI GPUs in the coming months.
As reported by FierceElectronics, ChatGPT (the beta version from OpenAI) was trained on 10,000 NVIDIA GPUs, but ever since it gained public traction, the system has been overwhelmed and unable to meet the demand of its large user base. This is why OpenAI has announced a new ChatGPT Plus subscription plan which will not only provide general access to the servers even during peak times but also deliver faster response times and priority access to new features & improvements. The ChatGPT Plus subscription is available for $20 per month.
“It is possible that ChatGPT or other deep learning models could be trained or run on GPUs from other vendors in the future. However, currently, NVIDIA GPUs are widely used in the deep learning community due to their high performance and CUDA support. CUDA is a parallel computing platform and programming model developed by NVIDIA that allows for efficient computation on NVIDIA GPUs. Many deep learning libraries and frameworks, such as TensorFlow and PyTorch, have built-in support for CUDA and are optimized for NVIDIA GPUs.”
via FierceElectronics
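As the quote notes, frameworks like PyTorch ship with CUDA support built in. As a minimal illustration of what that looks like in practice (an example for context, not code from the report), this is how PyTorch detects and targets an NVIDIA GPU:

```python
import torch

# Check whether a CUDA-capable NVIDIA GPU is visible to PyTorch.
if torch.cuda.is_available():
    device = torch.device("cuda")
    print(f"Using GPU: {torch.cuda.get_device_name(0)}")
else:
    device = torch.device("cpu")
    print("No CUDA GPU found, falling back to CPU")

# Tensors and models are moved onto the selected device explicitly.
x = torch.randn(4, 4).to(device)
```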
Major tech giants such as Microsoft and Google are also planning to integrate ChatGPT-like LLMs into their search engines, reports Forbes. For Google to integrate this within every search query, it would require 512,820 A100 HGX servers with a total of 4,102,568 A100 GPUs, which would amount to around $100 billion in capex for servers and networking alone.
Deploying current ChatGPT into every search done by Google would require 512,820.51 A100 HGX servers with a total of 4,102,568 A100 GPUs. The total cost of these servers and networking exceeds $100 billion of Capex alone, of which Nvidia would receive a large portion. This is never going to happen, of course, but fun thought experiment if we assume no software or hardware improvements are made.
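As a rough sanity check on those figures (assuming the standard 8 A100 GPUs per HGX server; the small mismatch with the quoted GPU count comes down to rounding in the source), the arithmetic works out as follows:

```python
# Back-of-the-envelope check of the quoted figures
# (assumes 8 A100 GPUs per HGX server, the standard HGX A100 config).
servers = 512_820.51          # A100 HGX servers quoted above
gpus_per_server = 8
total_gpus = servers * gpus_per_server
print(f"Total A100 GPUs: {total_gpus:,.0f}")   # ~4.1 million GPUs

capex = 100e9                 # >$100 billion in servers + networking
cost_per_server = capex / servers
print(f"Implied cost per server: ${cost_per_server:,.0f}")  # ~$195,000
```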
Investing.com reports that analysts have predicted that the current model of ChatGPT is being trained on around 25,000 NVIDIA GPUs, versus the 10,000 NVIDIA GPUs that were used for the beta version.
“We think that GPT 5 is currently being trained on 25k GPUs – $225 mm or so of NVIDIA hardware – and the inference costs are likely much lower than some of the numbers we have seen,” wrote the analysts. “Further, reducing inference costs will be critical in resolving the ‘cost of search’ debate from cloud titans.”
via Investing.com
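As a quick sanity check (an illustration, not from the source), the analysts' estimate implies roughly $9,000 per GPU, which is consistent with A100-class pricing at the time:

```python
# Implied per-GPU hardware cost from the analysts' estimate.
hardware_cost = 225e6   # ~$225 million of NVIDIA hardware
gpu_count = 25_000      # estimated training GPUs
print(f"Implied cost per GPU: ${hardware_cost / gpu_count:,.0f}")  # ~$9,000
```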
This might be good news for NVIDIA but not so great for consumers, especially gamers. If NVIDIA sees an opportunity in its AI GPU business, it might prioritize supply towards those parts instead of gaming GPUs. Gaming GPUs are already reported to be in limited supply this quarter owing to the Chinese New Year, and while stock is still available, this could pose a problem for high-end GPUs, which are already scarce. Furthermore, high-end gaming GPUs offer AI capabilities approaching those of the server parts at a much lower price, which could make them a lucrative option and strip supply away from gamers even further.
It remains to be seen how NVIDIA responds to this huge demand from the AI segment. The GPU giant is expected to announce its earnings for Q4 FY23 on the 22nd of February, 2023.