
Researchers Say Nano Servers Are the Future of Cloud Technology

A rack of servers on a single chip.
Photo: Alex Schwenke/Creative Commons

Data centers, the sprawling structures that house our email accounts, tweets, and Facebook posts, take a lot of space and energy to operate. And as life increasingly moves into the cloud, these energy suckers are getting bigger. But, according to an article by a pair of researchers at the University of California, San Diego, it may not have to be that way. The data centers of the future, they say, will require shrinking a rack of servers down to a single chip.

Today’s data centers are warehouses packed with racks, each storing 20 to 40 servers, with each server holding 8 to 16 central processing unit (CPU) cores, hundreds of gigabytes of memory, and 10 terabytes of storage. A lot of energy is required to keep these machines running and ready to handle a sudden surge in user activity. In a report last year, The New York Times estimated that data centers worldwide consume about 30 billion watts of electricity, roughly the output of 30 nuclear power plants.
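For a rough sense of what those figures add up to, here’s a quick back-of-the-envelope sketch. The per-rack ranges and the 30-billion-watt estimate come from the paragraph above; the rest is simple multiplication.

```python
# Back-of-the-envelope math using the ranges reported above.
servers_per_rack = (20, 40)    # servers in a typical rack
cores_per_server = (8, 16)     # CPU cores per server
storage_per_server_tb = 10     # terabytes of storage per server

cores_per_rack = (servers_per_rack[0] * cores_per_server[0],
                  servers_per_rack[1] * cores_per_server[1])
storage_per_rack_tb = (servers_per_rack[0] * storage_per_server_tb,
                       servers_per_rack[1] * storage_per_server_tb)
print(f"Cores per rack: {cores_per_rack[0]}-{cores_per_rack[1]}")                # 160-640
print(f"Storage per rack: {storage_per_rack_tb[0]}-{storage_per_rack_tb[1]} TB")  # 200-400 TB

# The NYT comparison: 30 billion watts across all data centers worldwide,
# set against 30 nuclear power plants, implies about 1 gigawatt per plant,
# roughly the output of one large reactor.
total_watts = 30e9
plants = 30
print(f"Implied output per plant: {total_watts / plants / 1e9:.0f} GW")
```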


Researchers Yeshaiahu Fainman and George Porter suggest that shrinking servers down to the nanoscale will not only increase information processing speed but also reduce energy consumption. Their dream server chip would also overhaul inter- and intra-processor communication with a hybrid packet and optical-circuit-switching network. Basically, this would allow for greater flexibility in how information is transferred. Depending on network traffic, a piece of data could travel directly down a single optical line, or disperse across many optical channels and reassemble at its destination.
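To make that routing idea concrete, here is a minimal toy sketch of a hybrid switch choosing between the two modes. It is not the researchers’ actual design: the Flow class, the 10 MB threshold, and the routing rule are invented purely for illustration.

```python
from dataclasses import dataclass

@dataclass
class Flow:
    """A simplified unit of traffic between two servers on the chip."""
    src: int
    dst: int
    size_bytes: int

# Illustrative threshold (made-up value): long, bulky transfers justify the
# setup cost of a dedicated optical circuit; short, bursty traffic does not.
CIRCUIT_THRESHOLD_BYTES = 10 * 1024 * 1024  # 10 MB

def route(flow: Flow) -> str:
    if flow.size_bytes >= CIRCUIT_THRESHOLD_BYTES:
        # Circuit switching: the data travels directly down a single
        # dedicated optical path from source to destination.
        return f"circuit: dedicated optical path {flow.src} -> {flow.dst}"
    # Packet switching: the data is split into packets that may disperse
    # across many optical channels and reassemble at the destination.
    return f"packets: {flow.src} -> {flow.dst} across multiple channels"

print(route(Flow(src=1, dst=7, size_bytes=64 * 1024)))          # small -> packets
print(route(Flow(src=1, dst=7, size_bytes=500 * 1024 * 1024)))  # bulk  -> circuit
```

The intuition behind a rule like this is that establishing an optical circuit takes time, so it only pays off for bulk transfers, while small, bursty messages are better served by packet switching.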

But the meat and potatoes of Fainman and Porter’s server-rack-on-a-chip vision is really about taking the existing framework for a server rack and recreating it at the nanoscale. They say that miniaturizing all server components so that several servers can fit onto a single computer chip would increase processing speed. Making the circuitry to support all these mini-components with advanced lithography is already feasible, but scientists have yet to realize nano-transceivers and circuit switchers, the key components that transmit and route data. And while silicon chips are increasingly being used to transmit data-carrying light waves in fiber-optic networks, efficiently generating light on a silicon chip is still early in its development. The researchers offer some solutions, like including light-generating nanolasers in the chip design.


Ultimately, Fainman and Porter’s report acts more as a wish list than a confirmation of what is to come. They say that even before a network of server chips can become a reality, engineers will need to find an energy-efficient way to keep data centers from overheating.

Getting processor cores to effectively dissipate heat is a key problem plaguing many researchers in the field. The majority of the energy consumed by data centers goes toward keeping the facilities cool. In recent years, a few pioneers have made strides. Facebook’s data center near the Arctic Circle uses the naturally frigid climate to keep its servers from overheating, bypassing the need for air conditioning or extensive fan systems. The company also stripped its servers of unnecessary extras, like protective plastic casings that trap heat.

At the federal level, the U.S. Department of Energy’s National Renewable Energy Laboratory has developed an innovative liquid cooling system that uses water to capture waste heat and redistribute it to other areas of the building that need heating.

And while information on energy-efficient data centers is scant, it is out there. Facebook shares the plans for its Arctic data center through the Open Compute Project, and NREL is open to involving interested researchers in the development of its high-performance data center.

Still, the technology is a long way from being cost-efficient. Facebook spent $300 million putting together its Arctic facility.

The server rack on a chip is a really interesting idea and potentially a logical step in taking data centers to the next level. But until key components are brought down to size and energy-efficient cooling advances further, we won’t see entire data centers shrink down to a single rack.