No matter what growth rate predictions you subscribe to for cloud computing in the next few years, the demand for cloud is going to drive a counter-demand for computing at the network’s edge. Requirements for low latency, massive data volumes, and non-technical rules around maintaining control of data will drive the need for processing on the ground, not in the cloud.
First, let’s look at cloud growth. Cisco predicts in its latest Global Cloud Index that by 2021, 94 percent of all workloads will run in some form of cloud environment. This expectation fuels its prediction of growth in massive “hyperscale” data centers: tens of thousands of servers humming away in facilities the size of multiple football stadiums, often with their own power sources like hydroelectric dams or wind turbines. Huge facilities run by folks like Amazon, Google, Facebook and Microsoft.
Cisco does concede that while nearly all of these workloads will be cloud-based, not all of those clouds are far away. Local virtualized environments and “fog” computing will limit traffic requirements by processing data locally.
In an article boldly titled “The era of the cloud’s total dominance is drawing to a close,” The Economist asserts that the pendulum of centralized vs. distributed computing is going to swing back again:
Since emerging in the 1950s, commercial computing has oscillated between being more centralised and more distributed. Until the 1970s it was confined to mainframes. When smaller machines emerged in the 1980s and 1990s, it became more spread out: applications were accessed by personal computers, but lived in souped-up PCs in corporate data centres (something called a “client-server” system). With the rise of the cloud in the 2000s, things became more centralised again. Each era saw a new group of firms rise to the top, with one leading the pack: IBM in mainframes, Microsoft in personal computers and AWS in cloud computing.
Powerful processors and virtualized applications are the answer for local processing of large amounts of data. The easy example is self-driving cars. Generating as much as 25 gigabytes of data an hour, a single vehicle spews out nearly 30 times the data of an HD video stream. Before all that data could be uploaded and instructions sent back, the car might have driven off a cliff or over a cyclist. Data capture and analysis have to happen locally. The industry marketing people are saying it’s like cloud computing, but local. And what’s a cloud that’s close by? You’ve got it — fog computing.
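A quick back-of-the-envelope check of that comparison. The 2 Mbit/s figure for an HD stream is an assumption on our part (it sits at the low end of common HD bitrates), chosen to see whether the “nearly 30 times” claim holds up:

```python
# Back-of-the-envelope check of the data-rate comparison above.
# Assumption (not from the article): an HD video stream at roughly
# 2 Mbit/s, the low end of typical HD bitrates.

CAR_GB_PER_HOUR = 25    # figure cited in the article
HD_STREAM_MBPS = 2.0    # assumed HD stream bitrate

# Convert 25 GB/hour to megabits per second
# (decimal gigabytes: 1 GB = 8000 Mbit).
car_mbps = CAR_GB_PER_HOUR * 8000 / 3600

ratio = car_mbps / HD_STREAM_MBPS
print(f"{car_mbps:.1f} Mbit/s, about {ratio:.0f}x an HD stream")
```

Roughly 56 Mbit/s, or about 28 times our assumed stream — consistent with the article’s “nearly 30 times,” though a higher-bitrate HD stream would shrink the multiple.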
Oh, and data possession matters too — many countries have laws requiring data to stay within their borders or even a company’s physical facility. For their part, many companies worried about leaks prefer to keep data in-house.
Network management at the edge?
Network management isn’t that different. Centralized management tools are great at collecting data, but much like self-driving cars, you don’t want to rely on the network for mission-critical device changes and recovery of the gear that creates that network. Remove that dependence and put data collection and analysis at the edge, and you open up a new level of confidence and reliability.
That’s what Uplogix does from an out-of-band perspective. By using console ports and dedicated Ethernet connections, Uplogix reduces dependence on the network itself for retrieving state and performance information and for reliably executing predefined automated tasks. It’s like an automated admin with a crash cart — except connected to all devices at once and working 24×7.
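The idea behind that approach can be sketched in a few lines. This is a hypothetical illustration, not Uplogix’s actual implementation: every name here (Device, check_device, console_recover) is made up. The point is that the check-and-recover loop runs locally over the console path, so it keeps working even when the network path to a central manager is down:

```python
# Hypothetical sketch of edge-resident monitoring and recovery.
# None of these names come from a real product or API.

from dataclasses import dataclass

@dataclass
class Device:
    name: str
    healthy: bool  # stand-in for state polled over the console port

def check_device(dev: Device) -> bool:
    # A real system would poll state over a serial console or
    # dedicated Ethernet link, never over the production network.
    return dev.healthy

def console_recover(dev: Device) -> None:
    # Predefined automated task, e.g. reloading a known-good config.
    dev.healthy = True

def manage(devices: list[Device]) -> list[str]:
    """One pass of the local management loop; returns recovered names."""
    recovered = []
    for dev in devices:
        if not check_device(dev):
            console_recover(dev)
            recovered.append(dev.name)
    return recovered

devices = [Device("router-1", True), Device("switch-1", False)]
print(manage(devices))  # → ['switch-1']
```

Because both the health check and the recovery action go over the out-of-band path, the loop needs nothing from the network it is repairing.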
Call it fog computing if you want, but we call it going beyond out-of-band.