As Nvidia’s recent surge in market capitalization clearly demonstrates, the AI industry has a voracious appetite for new hardware to train large language models (LLMs) and other AI systems. While server and HPC GPUs are of little use for gaming, they form the foundation of the data centers and supercomputers that perform the highly parallelized computations these workloads demand.
When it comes to AI training, Nvidia’s GPUs have been the most desirable option to date; in recent weeks the company briefly reached an unprecedented $1 trillion market capitalization largely on the back of that demand. However, MosaicML now points out that Nvidia is just one choice in a multifaceted hardware market, and that companies investing in AI should not blindly spend a fortune on Team Green’s highly sought-after chips.
The AI startup tested AMD’s MI250 and Nvidia’s A100 cards, both of which are one generation behind each company’s current flagship HPC GPUs. It ran the tests with its own software tools, alongside the Meta-backed open-source framework PyTorch and AMD’s own software stack.
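Part of what makes such a comparison possible is that PyTorch code written for Nvidia hardware generally runs unchanged on AMD accelerators, since ROCm builds of PyTorch expose the same torch.cuda API. The snippet below is a minimal illustrative sketch of that portability, not MosaicML’s actual benchmark suite; the model and training loop are stand-ins chosen purely for brevity.

```python
# Minimal sketch (not MosaicML's benchmark code): the same PyTorch training
# step runs unchanged on an Nvidia A100 (CUDA) or an AMD MI250 (ROCm),
# because ROCm builds of PyTorch expose the torch.cuda API.
import torch
import torch.nn as nn

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
name = torch.cuda.get_device_name(0) if device.type == "cuda" else "CPU"
print(f"Training on: {name}")

# Tiny stand-in model; MosaicML trained full transformer LLMs instead.
model = nn.Sequential(nn.Linear(1024, 4096), nn.GELU(), nn.Linear(4096, 1024)).to(device)
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)
loss_fn = nn.MSELoss()

for step in range(10):
    x = torch.randn(32, 1024, device=device)  # synthetic input batch
    y = torch.randn(32, 1024, device=device)  # synthetic target
    optimizer.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()
    optimizer.step()
    print(f"step {step}: loss {loss.item():.4f}")
```

On either vendor’s GPU the script prints the detected device and runs the same training loop, which is the kind of drop-in compatibility that lets a single codebase be benchmarked across both platforms.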