
Fog Computing Brings Big Data Back to Earth

But what's fog computing anyway?

The cloud is simple. Maybe not in practice, but the concept is as intuitive as anything in computing. Computers do computations, and networks connect computers. Thus, networks connect computers with computations. That's it. That's the cloud. Fast networks mean we can offload computing work to other computers we never have to see. Virtual computers, even.

Mobile computing exists because of this, of course. iPhones are powerful in large part because they lean on powerful computers in the cloud. The cloud seems like a natural end-state, but it wasn't prepared for the big-data deluge, and it still isn't. The ubiquitous computing the cloud enables, from smartphones to Internet of Things devices, has meant vast, overflowing oceans of data: endless quantification of the behaviors, environments, and machines of connected humans. All of it demands bandwidth. More and more, every day.


The more data we push to and pull from it, the farther away the cloud becomes. The solution, it seems, is fog. We bring the cloud down to ground level. Rather than distant servers, or in addition to them, we let data saturate our own devices. We identify the things likely to be needed by nearby devices and store them locally, wherever storage space is available. There are just so many devices down here now, from obvious things like smartphones and tablets to routers and IoT sensors.
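What does "storing things locally" look like? Here's a minimal sketch in Python, assuming nothing about any real fog platform: a node hangs on to recently requested readings so nearby devices can fetch them without a round trip to a data center, reaching up to the cloud only on a miss. All the names here (FogCache, fetch_from_cloud) are illustrative.

    import time

    class FogCache:
        """Tiny local store for data that nearby devices are likely to ask for."""

        def __init__(self, capacity=1000):
            self.capacity = capacity
            self.store = {}  # key -> (value, last_access_time)

        def get(self, key, fetch_from_cloud):
            if key in self.store:
                value, _ = self.store[key]
                self.store[key] = (value, time.time())  # refresh recency
                return value                            # fast path: stay local
            value = fetch_from_cloud(key)               # slow path: go up to the cloud
            self._put(key, value)
            return value

        def _put(self, key, value):
            if len(self.store) >= self.capacity:
                # evict the least recently used entry to make room
                oldest = min(self.store, key=lambda k: self.store[k][1])
                del self.store[oldest]
            self.store[key] = (value, time.time())

The eviction rule is the whole game: a fog node has far less storage than a data center, so it keeps only what its neighbors have asked for lately.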

"An estimated 50 billion 'things' will be connected to the Internet by 2020."

Cisco generally gets the credit for the term fog computing, aka edge computing. A 2015 whitepaper explains, "Today's cloud models are not designed for the volume, variety, and velocity of data that the IoT generates. Billions of previously unconnected devices are generating more than two exabytes of data each day. An estimated 50 billion 'things' will be connected to the Internet by 2020."

These things generate data fast and need it processed just as fast. Consider an industrial control system or an aircraft's fly-by-wire system. (A commercial jet generates 10 terabytes of data for every 30 seconds of flight; an oil rig piles up 500 GB in a week.) It makes sense in these cases to keep data close by. Latency is minimized, and internet bandwidth is conserved for the cases where the cloud really is necessary, such as highly intensive data-processing tasks.
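Taken at face value, those figures make the bandwidth argument concrete. A quick back-of-the-envelope calculation, plain arithmetic on the numbers quoted above and nothing more:

    # Only the unit conversions are added here; the figures are the article's.
    jet_gb_per_second = 10 * 1000 / 30   # 10 TB per 30 s -> ~333 GB/s
    rig_gb_per_hour = 500 / (7 * 24)     # 500 GB per week -> ~3 GB/hour

    print(f"Jet: roughly {jet_gb_per_second:.0f} GB of sensor data per second")
    print(f"Oil rig: roughly {rig_gb_per_hour:.1f} GB per hour")

A sustained 333 GB per second is orders of magnitude beyond any link available to an aircraft in flight, which is exactly why that processing has to happen on board.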

"The fog extends the cloud to be closer to the things that produce and act on IoT data," the Cisco paper continues. "These devices, called fog nodes, can be deployed anywhere with a network connection: on a factory floor, on top of a power pole, alongside a railway track, in a vehicle, or on an oil rig. Any device with computing, storage, and network connectivity can be a fog node. Examples include industrial controllers, switches, routers, embedded servers, and video surveillance cameras."

If this isn't as intuitive as the cloud, it isn't meant to be. Implementing fog computing takes considerable cleverness. A fog application has to determine how time-sensitive a processing task is and which nearby devices are best suited to handle it. Output from an industrial control loop might need to be processed at the closest possible node. Less time-critical, more processing-intensive tasks might be shipped to more powerful, centralized nodes.
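Here is one way to picture that triage, sketched in Python with made-up tiers and thresholds; none of this is drawn from Cisco's paper or any particular fog platform:

    def route_task(deadline_ms, cpu_seconds):
        """Pick a processing tier from a task's latency budget and its cost."""
        if deadline_ms < 10:
            return "local-node"       # e.g. an industrial control loop
        if deadline_ms < 500 and cpu_seconds < 1:
            return "nearby-fog-node"  # latency matters, but the work is light
        return "cloud"                # heavy analytics with no tight deadline

    assert route_task(deadline_ms=5, cpu_seconds=0.1) == "local-node"
    assert route_task(deadline_ms=100, cpu_seconds=0.5) == "nearby-fog-node"
    assert route_task(deadline_ms=60_000, cpu_seconds=300) == "cloud"

Real schedulers weigh much more than two numbers (node load, battery, link quality), but the shape of the decision is the same: the tighter the deadline, the closer to the ground the work stays.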

The truly big data, the kind that demands heavy-duty processing, might be shipped to the cloud after all. The key realization is that the cloud isn't a one-size-fits-all answer to the computing future. For keeping airplanes in the sky, we can depend on the fog.