In its second-quarter earnings report for 2023, Tesla revealed its ambitious plan to address vehicle autonomy at scale through four key technology pillars: an extensive real-world dataset, neural net training, vehicle hardware, and vehicle software. Notably, the electric vehicle manufacturer asserted its commitment to developing each of these pillars in-house. As a significant milestone in this endeavor, Tesla announced that it had started production of its custom-built Dojo training computer, a critical component in achieving faster and more cost-effective neural net training.
While Tesla already possesses one of the world's most powerful Nvidia GPU-based supercomputers, the Dojo supercomputer takes a different approach by using chips designed by Tesla itself. Back in 2019, Tesla CEO Elon Musk christened the project "Dojo," envisioning it as an exceptionally powerful training computer. He claimed that Dojo would be capable of performing an exaflop, or one quintillion (10^18) floating-point operations per second, an astounding level of computational power. To put that into perspective, a person performing one calculation every second would need more than 31 billion years to match what a one-exaFLOP system does in a single second, as reported by Network World.
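The "31 billion years" comparison follows directly from the definition of an exaflop. A minimal sanity check of that arithmetic (the constants here are just the figures quoted above, not Tesla specifications):

```python
# An exaFLOP machine performs 10**18 floating-point operations per second.
# Doing that many operations at one per second would take 10**18 seconds;
# convert that to years to check the "over 31 billion years" claim.
EXAFLOP_OPS = 10**18                     # operations in one exaflop-second
SECONDS_PER_YEAR = 365.25 * 24 * 3600    # ~31.6 million seconds per year

years = EXAFLOP_OPS / SECONDS_PER_YEAR
print(f"{years / 1e9:.1f} billion years")  # roughly 31.7 billion years
```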
The development of Dojo has been a continuous process. At Tesla's AI Day in 2021, the automaker showcased its initial chip and training tiles, which would eventually form a complete Dojo cluster, also known as an "exapod." Tesla's plan involves combining two sets of three tiles in a tray, then placing two trays in a computer cabinet to achieve over 100 petaflops per cabinet. With a 10-cabinet system, Tesla's Dojo exapod will break the exaflop barrier.
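The exapod figures above can be checked with quick back-of-the-envelope arithmetic. This sketch uses only the numbers stated in the article; the tile and cabinet counts are as described, and the 100-petaflop figure is the article's lower bound, not an exact rating:

```python
# Back-of-the-envelope check of the Dojo exapod layout described above:
# two sets of three tiles per tray, two trays per cabinet, over 100
# petaflops per cabinet, and ten cabinets per exapod.
TILES_PER_TRAY = 2 * 3           # two sets of three tiles
TRAYS_PER_CABINET = 2
PETAFLOPS_PER_CABINET = 100      # "over 100 petaflops per cabinet" (lower bound)
CABINETS_PER_EXAPOD = 10

tiles = TILES_PER_TRAY * TRAYS_PER_CABINET * CABINETS_PER_EXAPOD
petaflops = PETAFLOPS_PER_CABINET * CABINETS_PER_EXAPOD
exaflops = petaflops / 1000      # 1 exaflop = 1000 petaflops

print(f"{tiles} tiles; {petaflops} petaflops = {exaflops} exaflops")
# 120 tiles; 1000 petaflops = 1.0 exaflops
```

At the quoted lower bound, ten cabinets land exactly at one exaflop, which is why the article says the exapod will exceed that threshold.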