Startup chipmaker Cerebras Systems Inc. announced that it’s built the first of nine artificial intelligence supercomputers in a partnership with Abu Dhabi, part of an effort to provide alternatives to systems using Nvidia Corp. technology.
Condor Galaxy 1, located in Santa Clara, California, is now up and running, according to Cerebras founder and Chief Executive Officer Andrew Feldman. The supercomputer, which cost more than $100 million, is going to double in size “in the coming weeks,” he said. It will be followed by new systems in Austin and Asheville, North Carolina, in the first half of next year, with overseas sites going online in the second half of 2024.
The project is part of a dash to add computing power for AI services, which require the kind of heavy-duty processing that’s become a specialty of Nvidia — the world’s most valuable chipmaker. The Cerebras machinery, which Feldman describes as the biggest purpose-built AI computing center, is an attempt to satisfy that need with a novel approach.
It also marks a deeper push into the field by the United Arab Emirates, which is betting on next-generation technology with a firm called Group 42, or G42. The company is focused on pushing artificial intelligence research toward practical uses in areas such as aviation and health care.
The new supercomputers will be operated by Cerebras and used for G42 projects. Any excess capacity will be offered commercially as a service.
“The United Arab Emirates was the first nation to have a minister for AI. They have a university for AI,” said Feldman. “They believe that this is a transformative technology for their economy.”
For Cerebras, based in Silicon Valley, the new systems provide a showcase that it hopes will lead to wider adoption. The company’s offerings rely on massive chips that are made out of whole silicon wafers — disks that are normally sliced up to create multiple components.
Read More: Artificial Intelligence Chip Startup Nabs $4 Billion Valuation
Feldman argues that his processors have the advantage of being able to deal with large data sets in one go, rather than only working on portions of the information at a time. Compared with Nvidia’s processors, they also require less of the complicated software needed to make chips work in concert, he said.
This year, cloud computing providers such as Microsoft Corp. and Amazon.com Inc.’s AWS have been stocking up on Nvidia processors to keep up with runaway demand for OpenAI’s ChatGPT and other generative AI tools. Nvidia has about 80% of the market for the so-called accelerators that help handle these workloads.
With his computing rollout, Feldman aims to demonstrate that the AI explosion won’t just benefit the giant tech companies that can afford big-budget equipment.
“There is a misconception that there are only seven to 10 companies in the world that could buy at scale to make a difference,” he said. “This vastly changes the conversation.”
Feldman’s processors are so large they won’t fit in traditional machinery, leading Cerebras to offer its technology in specially built computers. The machines also rely on standard processors from Advanced Micro Devices Inc. — the company that bought Feldman’s previous startup, SeaMicro Inc.
One of the new supercomputers will be capable of training software on data sets made up of 600 billion variables, with the ability to increase that to 100 trillion, Cerebras said. Each one will comprise 54 million AI-optimized computing cores.
(Company corrects statement from CEO in sixth paragraph.)