Hungry for AI? New supercomputer contains 16 dinner-plate-size chips

Cerebras Andromeda is a 13.5 million core AI supercomputer

Cerebras Systems on Monday unveiled its 13.5-million-core Andromeda AI supercomputer for deep learning, Reuters reported. According to Cerebras, Andromeda delivers 1 exaflop (1 quintillion operations per second) of AI computing power at 16-bit half precision.

Andromeda is a cluster of 16 interconnected Cerebras CS-2 computers. Each CS-2 contains a single Wafer Scale Engine 2 (WSE-2) chip, the largest silicon chip built to date, measuring approximately 8.5 inches on a side and packing 2.6 trillion transistors organized into 850,000 cores.

Cerebras built Andromeda for $35 million in a data center in Santa Clara, California. It is designed for applications such as large language models and has already been used in academic and commercial applications. “Andromeda provides near-perfect scaling with simple data parallelism on large GPT-class language models such as GPT-3, GPT-J, and GPT-NeoX,” Cerebras said in a press release.


The Cerebras WSE-2 chip is about 8.5 inches square and has 2.6 trillion transistors.

The brain

The phrase “near-perfect scaling” means that as more CS-2 units are added to Andromeda, training time on a given neural network decreases in “near-perfect proportion,” Cerebras says. GPU-based deep learning systems, by contrast, typically see diminishing returns when scaled up by adding more hardware. Cerebras further claims its supercomputer can handle jobs that GPU-based systems cannot.
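The scaling claim can be illustrated with a simple model: under ideal data parallelism, training time falls in inverse proportion to the number of compute units. A minimal sketch (the numbers below are hypothetical, not Cerebras benchmarks):

```python
# Illustrative model of ideal (linear) data-parallel scaling.
# The base training time is hypothetical, not a Cerebras result.

def ideal_training_time(base_hours: float, num_units: int) -> float:
    """Training time if speedup is perfectly linear in compute units."""
    return base_hours / num_units

base = 160.0  # hypothetical hours to train a model on one CS-2
for n in (1, 2, 4, 8, 16):
    print(f"{n:2d} CS-2s -> {ideal_training_time(base, n):6.1f} hours")
```

“Near-perfect scaling” means measured training times track this ideal curve closely; large GPU clusters often fall short of it because inter-device communication overhead grows with cluster size.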

“GPU-impossible work was demonstrated by one of Andromeda’s early users, who achieved almost perfect scaling on GPT-J at 2.5 billion and 25 billion parameters with long sequence lengths (MSL of 10,240),” the company said. “The users attempted the same work on Polaris, a 2,000-GPU Nvidia A100 cluster, and the GPUs could not do the work because of limited GPU memory and memory bandwidth.”
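A rough back-of-the-envelope calculation suggests why a 25-billion-parameter model strains GPU memory. Assuming fp16 weights plus Adam-style optimizer state (a common setup, though the article does not specify the users' training configuration), the persistent training state alone is:

```python
# Rough estimate of per-model training memory, assuming fp16 weights
# plus Adam optimizer state (fp32 master weights, momentum, variance).
# Activation memory is ignored here, though long sequences such as
# MSL 10,240 make it substantial as well.

def training_state_gb(params_billions: float) -> float:
    params = params_billions * 1e9
    bytes_per_param = 2 + 4 + 4 + 4  # fp16 weight + fp32 master/m/v
    return params * bytes_per_param / 1e9

for b in (2.5, 25.0):
    print(f"{b:5.1f}B params -> ~{training_state_gb(b):6.1f} GB of training state")
```

Under these assumptions the 25-billion-parameter case needs on the order of 350 GB of state, far beyond a single A100's 40 or 80 GB, so the model must be sharded across many GPUs, making memory capacity and bandwidth the bottleneck.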

It’s not yet clear whether these claims hold up to external scrutiny, but in an era where companies train deep learning models on ever-increasing clusters of Nvidia GPUs, Cerebras appears to offer an alternative approach.


How does Andromeda stack up against other supercomputers? Frontier, currently the world’s fastest, is located at Oak Ridge National Laboratory and runs at 1.103 exaflops at 64-bit double precision. It cost $600 million to build.


Andromeda is now available and can be used remotely by multiple users at once. It is already being used for research by commercial writing assistant JasperAI, Argonne National Laboratory, and the University of Cambridge.

