Supercomputers are used by governments and research institutions around the world to solve some of science’s most complex problems, such as hurricane forecasting and modeling atomic weapons. In most cases, a supercomputer is actually a computing cluster: hundreds or thousands of individual computers linked together and coordinated by software. Each individual computer runs similar processes in parallel, but their combined computing power yields a system far more powerful than any single computer by itself.
Supercomputers often take up as much space as a basketball court and cost hundreds of millions of dollars, but as GitHub user Wei Lin has demonstrated, it’s possible to make a homebrew computing cluster that doesn’t break the bank.
As detailed on Wei Lin’s GitHub repository, they managed to make a computing cluster using six ESP32 chips. These chips are microcontrollers, small computers with minimal memory and processing power, similar in spirit to a Raspberry Pi but far cheaper.
A single Raspberry Pi costs around $30, while an ESP32 only costs about $7 (this is because they are manufactured in China, while Arduino boards and Raspberry Pis are manufactured in Europe). So even though others have made computing clusters from Raspberry Pis—including a 750-node cluster made by Los Alamos National Lab—these can quickly become expensive projects for the casual maker. Lin’s six-node cluster, on the other hand, cost about the same as a single Pi and has three times as many cores.
The main challenge, according to Lin, was figuring out how to coordinate computing tasks across each of the chips. For this, they used Celery, a Python-based distributed task queue designed to distribute computing jobs across many workers.
In a video, Lin demonstrated using a three-node cluster to run a word count program. As detailed by Lin, the computer’s software dispatches a list of tasks to the cluster—in this case, word counts—and then each node retrieves a task from the list, executes it, and returns a result before retrieving a new task. At the same time, the nodes communicate with one another to coordinate their efforts.
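The dispatch-and-retrieve pattern described above can be sketched in a few lines of plain Python. This is only an illustration of the idea, not Lin’s actual Celery/ESP32 code: threads stand in for the cluster’s nodes, and `run_word_count` is a hypothetical helper name.

```python
# Minimal sketch of the task-queue pattern described above.
# Threads play the role of the cluster's nodes; this is an
# illustration, not Lin's actual Celery/ESP32 implementation.
import queue
import threading
from collections import Counter

def run_word_count(texts, n_workers=3):
    tasks = queue.Queue()    # the dispatched list of tasks
    results = queue.Queue()  # results returned by the nodes
    for text in texts:
        tasks.put(text)

    def worker():
        while True:
            try:
                text = tasks.get_nowait()  # retrieve a task from the list
            except queue.Empty:
                return                     # no tasks left: this node stops
            # execute the task and return a result
            results.put(Counter(text.split()))

    # launch the "nodes" and wait for them to drain the task list
    threads = [threading.Thread(target=worker) for _ in range(n_workers)]
    for th in threads:
        th.start()
    for th in threads:
        th.join()

    # merge the partial counts into one final tally
    total = Counter()
    while not results.empty():
        total += results.get()
    return total
```

Calling `run_word_count(["to be or not to be", "be"])` tallies word frequencies across both texts; the same shape scales to real distributed workers, where the shared queues are replaced by a message broker.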
While you probably won’t solve the toughest problems in physics by scaling this cluster architecture, it is a neat application for inexpensive hardware capable of quickly performing computations in parallel, and a nice way to learn how supercomputers actually work without breaking the bank.