On March 14, Elon Musk posted on X that the Terafab Project launches in seven days. What that launch will look like remains an open question. What Terafab is, however, is not in doubt.

What Terafab is
Tesla first confirmed Terafab on its January 28, 2026 earnings call. Musk told investors the company needs to build a chip fabrication facility to avoid a supply constraint it projects will materialise within three to four years. The facility would combine logic processing, memory storage, and advanced packaging under one roof — vertically integrated chip manufacturing on a scale no private company outside Taiwan and South Korea currently operates.
The project carries an estimated cost of approximately $25 billion, forming part of Tesla's record capital expenditure plan for 2026, which exceeds $20 billion. CFO Vaibhav Taneja acknowledged on the earnings call that the full Terafab cost is not yet incorporated into that figure.
Production targets are specific. The facility is designed to produce between 100 and 200 billion custom AI and memory chips per year, targeting an initial output of 100,000 wafer starts per month with a stated ambition to scale toward one million — roughly 70% of TSMC's current total output, in a single US facility.
Tesla is targeting 2 nanometre process technology, the most advanced node currently in commercial production. Tesla's fifth-generation AI chip, AI5, is among the first products Terafab is designed to produce, with small-batch production expected in 2026 and volume production projected for 2027.
Who Terafab is for
The immediate answer is Tesla. The AI chips power Full Self-Driving software, the Cybercab robotaxi programme, and the Optimus humanoid robot line. Musk's projections for Optimus require chip volumes that no existing external supplier, including Tesla's current partners TSMC and Samsung, can commit to on Tesla's timeline.
The less obvious answer is xAI. Musk has described Terafab's scope as encompassing chips for Dojo, Tesla's supercomputer used to train Full Self-Driving models, and for xAI's Grok model training infrastructure. The Memphis supercluster that xAI currently operates is already one of the largest GPU clusters in existence. Terafab is the supply chain that would make the next generation of that infrastructure independent of external suppliers entirely.
As FinTech Weekly reported, xAI hired Devendra Chaplot — Mistral AI co-founder and Thinking Machines Lab founding member — to work on Grok model training, alongside Andrew Milich and Jason Ginsberg, the engineers who scaled Cursor to a $2 billion revenue run rate. The pattern across all three hires is a company rebuilding its model and product layers simultaneously. Terafab is the infrastructure layer underneath both.
The competitive context
If Terafab succeeds, Tesla becomes one of a handful of entities capable of producing frontier AI silicon in-house at volume — changing its cost structure for autonomous vehicles and robotics, and reducing xAI's dependency on third-party compute entirely.
March 21 is the next marker on that path.
Editor's note: We are committed to accuracy. If you spot an error, a missing detail, or have additional information about the Terafab project or related developments, please email us at [email protected]. We will review and update promptly.