Elon Musk has revealed the power and cooling needs of Tesla’s supercomputing cluster at Giga Texas.

Tesla is currently building a huge supercomputing cluster at its Gigafactory Texas, and CEO Elon Musk this week highlighted how much power the training center will aim to use this year and next.

In a post on X on Thursday, Musk said that the supercomputer center will be sized for around 130MW of power and cooling this year, before increasing to more than 500MW over the next 18 months. Musk also noted that half the chips used will be Tesla’s own hardware, while the other half will come from Nvidia or other companies.

The statement comes as many in the Tesla community have noticed the company nearing completion of its South factory expansion, which will house the massive training center. The site will require substantial cooling capacity, and in recent weeks many have pointed out the giant fans that will help cool the training center.

In addition to the fans, the center will utilize four water tanks, large underground water lines, and extensive cooling infrastructure being built out on the building’s second floor.

You can see the massive cooling structure in the video below, as captured by drone operator and Giga Texas observer Joe Tegtmeyer on Friday.

Musk initially confirmed the supercomputing cluster last month, calling it a “super dense, water-cooled supercomputer cluster.”

Musk later estimated that Nvidia hardware would make up $3 billion to $4 billion of Tesla’s AI-related expenditures this year, out of a total he put at around $10 billion. About half of that spending is internal, going toward Tesla’s own AI inference computer and sensors, as well as Dojo, according to Musk.

“Of the roughly $10B in AI-related expenditures I said Tesla would make this year, about half is internal, primarily the Tesla-designed AI inference computer and sensors present in all of our cars, plus Dojo,” Musk wrote. “For building the AI training superclusters, NVidia hardware is about 2/3 of the cost.”
