The Cloud has landed! It’s become Foggy

We’re about to come full circle once again: decentralizing and giving a greater role to local storage and computing power.

Whether that happens depends on the nature and the amount of data that needs to be stored, and on its processing demands. With the enormous rise in the amount of data caused by the ‘Internet of Things’, the nature of that data is becoming more and more diffuse. These developments lead to yet another revolution in the data arena: The Fog.

Smarter? Or gathering more data?

More and more devices are equipped with sensors: cars, lampposts, parking lots, windmills, solar power plants, and everything from animals to humans. Many of these developments are still in the design phase, but it will not be long before we live in smart homes in smart cities, drive our cars down smart streets and wear our smart tech.

Everything around us is ‘getting smarter’, which in practice means it gathers more data. But where is that data stored, and why? Where is all that data processed into useful information? The bandwidth of the networks we use grows much more slowly than the amount of data sent through them. That forces us to think about why we store data (in the cloud) at all.

If you want to compare data from many different locations, for instance sensor data from parking lots feeding an app that shows where the nearest free parking space is, then the cloud is a good place to process that information. But what about data that can be handled even better locally?
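As a rough illustration of the cloud-side part of that scenario, here is a minimal sketch in Python; the lot identifiers, coordinates and sensor fields are made up for illustration and not taken from any real system:

```python
from math import hypot

# Hypothetical aggregated sensor readings, as they might arrive in the cloud:
# each entry is (lot_id, x, y, free_spaces) in some local coordinate grid.
parking_lots = [
    ("lot-a", 0.2, 1.4, 0),
    ("lot-b", 2.1, 0.3, 12),
    ("lot-c", 0.9, 0.8, 3),
]

def nearest_free_lot(user_x, user_y, lots):
    """Return the closest lot that still reports free spaces, or None."""
    candidates = [lot for lot in lots if lot[3] > 0]
    if not candidates:
        return None
    return min(candidates, key=lambda lot: hypot(lot[1] - user_x, lot[2] - user_y))

print(nearest_free_lot(0.5, 1.0, parking_lots))  # -> ('lot-c', 0.9, 0.8, 3)
```

The point is not the algorithm itself, but that this comparison only makes sense once data from many locations has been brought together in one place.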

Data Qualification

The more data is collected, the more important it becomes to determine the nature of that data and what needs to be done with it. We need to look at the purpose of the collected data. For example: if the data is used for ‘predictive maintenance’, where something is monitored so that timely replacement or preventive maintenance can take place, it does not always make sense to send the data to the cloud.
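A minimal sketch of what such an edge-side filter could look like, assuming a simple rolling-average check; the threshold, window size and alert format are illustrative assumptions, not part of any particular product:

```python
from collections import deque

# Hypothetical edge-side filter for predictive maintenance: keep a rolling
# window of sensor readings locally and only forward an alert to the cloud
# when the average drifts above a configured threshold.
class VibrationMonitor:
    def __init__(self, threshold, window_size=100):
        self.threshold = threshold
        self.window = deque(maxlen=window_size)

    def add_reading(self, value):
        """Store the reading locally; return an alert dict only when needed."""
        self.window.append(value)
        average = sum(self.window) / len(self.window)
        if average > self.threshold:
            return {"alert": "maintenance_due", "avg_vibration": round(average, 2)}
        return None  # nothing worth sending upstream

monitor = VibrationMonitor(threshold=5.0, window_size=10)
for reading in [1.2, 1.1, 1.3, 6.8, 7.2, 7.5, 7.9, 8.1, 8.3, 8.0]:
    alert = monitor.add_reading(reading)
    if alert:
        print("send to cloud:", alert)
```

The raw readings never leave the device; only the occasional alert does.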

Another example is the data generated by security cameras. 99.9% of the time these show an image of a room or space that has not changed. The interesting data is the remaining 0.1% where there is something to see. The rest can be stored locally, or not at all. Filtering useful from useless data like this again calls for local computing power.
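A minimal sketch of this kind of local filtering, assuming frames arrive as small grayscale arrays and a plain mean-absolute-difference check is good enough; the threshold and frame sizes are illustrative assumptions:

```python
import numpy as np

# Hypothetical edge-side filter for a security camera: compare each frame
# with the previous one and only keep (or upload) frames that changed
# noticeably. Frames are simulated here as small grayscale arrays.
def frame_changed(prev_frame, curr_frame, threshold=10.0):
    """Return True when the mean absolute pixel difference exceeds the threshold."""
    diff = np.abs(curr_frame.astype(np.int16) - prev_frame.astype(np.int16))
    return diff.mean() > threshold

rng = np.random.default_rng(0)
static_scene = rng.integers(0, 256, size=(48, 64), dtype=np.uint8)
noisy_copy = np.clip(static_scene + rng.integers(-2, 3, size=(48, 64)), 0, 255).astype(np.uint8)
intruder = rng.integers(0, 256, size=(48, 64), dtype=np.uint8)

print(frame_changed(static_scene, noisy_copy))  # False: only sensor noise
print(frame_changed(static_scene, intruder))    # True: the scene actually changed
```

Only the frames that actually changed would then be pushed to the cloud; everything else stays local or is discarded.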

This decentralization of computing power and storage is a recent trend that Cisco calls ‘fog computing’. With distributed intelligence, more effective action can often be taken in response to the collected data, and unnecessary bandwidth and storage costs can be avoided. It is a development that fits very well with the transition to the cloud.

Cisco

Fog Computing is a paradigm that extends Cloud computing and services to the edge of the network. Similar to Cloud, Fog provides data, compute, storage, and application services to end-users. The distinguishing Fog characteristics are its proximity to end-users, its dense geographical distribution, and its support for mobility. Services are hosted at the network edge or even end devices such as set-top-boxes or access points. By doing so, Fog reduces service latency, and improves Quality of Service (QoS), resulting in superior user-experience. Fog Computing supports emerging Internet of Everything (IoE) applications that demand real-time/predictable latency (industrial automation, transportation, networks of sensors and actuators). Thanks to its wide geographical distribution the Fog paradigm is well positioned for real time big data and real time analytics. Fog supports densely distributed data collection points, hence adding a fourth axis to the often mentioned Big Data dimensions (volume, variety, and velocity).

Unlike traditional data centers, Fog devices are geographically distributed over heterogeneous platforms, spanning multiple management domains. Cisco is interested in innovative proposals that facilitate service mobility across platforms, and technologies that preserve end-user and content security and privacy across domains.

The future? It will be hybrid with foggy edges.

What do price reductions for IaaS lead to?

The continued decline in IaaS pricing over the last six months is a signal that providers are looking for more business.

IBM expects prices and profit margins for x86 servers to remain under continual pressure, and it sold its x86 server business to Lenovo. This suggests IBM considers server hardware already commoditized, so few further cost reductions in basic cloud infrastructure can be expected.

From a supplier perspective: the lower prices can be a signal that IaaS is becoming a loss leader, getting users into the cloud ‘store’ and from there to PaaS and SaaS offerings. Providers will try to sell basic IaaS users other cloud services on top of IaaS.

From a user (IT Department) perspective: IaaS displaces only hardware cost; PaaS displaces hardware, OS and middleware costs; and SaaS displaces all application costs.

Amazon, Google, Microsoft and other cloud providers need a customer base to which they can sell their cloud-specific PaaS and SaaS services. Price reductions for IaaS will keep that base, and opportunities to upsell into the emerging cloud-specific service market will grow.

Every cloud will potentially be a hybrid, so users and providers will rely on deployment and management tools that converge on a common model.