Almost ten years after its introduction, cloud computing is the mainstream computing paradigm. Most modern enterprises store data and deploy applications in the cloud in order to benefit from its capacity, scalability, elasticity and pay-as-you-go features. Moreover, emerging applications are, in most cases, cloud-enabled. Despite the huge success of the cloud, new systems and applications are emerging that drive the conventional cloud paradigm to its limits. Typical examples are applications involving distributed networked devices and data streams at large scale. When handling distributed data streams and related networked applications (such as Internet-of-Things (IoT) and Big Data applications), centralized cloud paradigms suffer from a significant waste of network and storage resources, as large amounts of data of low business value are integrated in the cloud. For example, in cloud-based distributed sensing applications most sensors produce large volumes of useless data, which unnecessarily consumes cloud storage and network bandwidth. This is typically the case when sensor data that does not change frequently (e.g., temperature information) is streamed into the cloud.
In order to cope with these distinct requirements of IoT/Big Data applications, the cloud industry has recently introduced the edge computing paradigm (also called “fog computing”), which extends conventional centralized cloud infrastructures with an additional storage and processing layer that enables the execution of application logic close to end-users and/or devices. Edge computing foresees the deployment of edge (or fog) nodes between the cloud and the devices. Different types of edge nodes are envisaged in the edge computing paradigm, ranging from embedded devices with limited storage, memory and processing capacity to conventional computers and computing clusters. According to the OpenFog Consortium, edge/fog computing aims to support multiple industry verticals and applications, while enabling services and applications to be distributed anywhere between the cloud and the devices, including deployments close to the device layer.
Edge Computing: The Benefits
For specific classes of IoT and Big Data applications, edge computing provides distinct benefits over conventional cloud architectures, including:
- Bandwidth and storage savings: Edge nodes can filter data streams in order to discard information without business value. In this way, only useful information is streamed, stored and ultimately processed in the cloud, which leads to significant bandwidth and storage savings. The larger the scale of the application, the more significant the savings.
- Low-latency processing: Information processing at the edge incurs much lower latency than cloud processing. This is very important for real-time applications such as energy-grid optimization or traffic rerouting in urban areas, where decisions must be taken in near real-time.
- Proximity processing: Edge computing is ideal for applications where information must be processed close to the end-user or the devices (i.e. the field). This is, for example, true in the case of mobile applications in the consumer space, where processing close to the user improves the responsiveness of the application and optimizes user experience. There are also similar examples in the industrial space, in cases where decisions about driving a robot must be taken locally rather than at the cloud level. Note that the need for proximity processing co-exists in several use cases with the need for real-time low-latency processing.
- Location and context awareness: Edge nodes are typically associated with specific locations and can therefore enable location-based services. This enables a wave of new business services that utilize network context and locations, such as services associated with users’ context, points of interest and events.
- On-premise isolation and privacy-friendliness: Edge nodes can be isolated from the rest of the network, which is a foundation for providing increased security and privacy, including data protection for applications that require it (e.g., healthcare applications). Indeed, in an edge computing deployment, personal data can remain isolated in an edge node without any need to stream it to a centralized cloud. This provides increased security control and flexible compliance with privacy regulations.
- Scalability: The edge computing paradigm provides decentralized storage and processing functions, which offer better scalability than conventional centralized cloud processing. While the cloud is promoted as an infinitely scalable model, Big Data applications in some cases challenge this scalability.
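The bandwidth and storage savings mentioned above often come from simple change-based filtering at the edge: a node forwards a sensor reading to the cloud only when it differs meaningfully from the last value sent. The sketch below illustrates the idea in plain Python; the `DeadbandFilter` class and its threshold are illustrative assumptions, not part of any particular edge framework.

```python
class DeadbandFilter:
    """Forward a reading only when it differs from the last
    forwarded value by more than a configured threshold."""

    def __init__(self, threshold):
        self.threshold = threshold
        self.last_sent = None  # no value forwarded yet

    def should_forward(self, value):
        # Always forward the first reading; afterwards, only
        # forward readings that changed beyond the threshold.
        if self.last_sent is None or abs(value - self.last_sent) > self.threshold:
            self.last_sent = value
            return True
        return False


# Simulated temperature stream: mostly repeated, low-value readings.
readings = [21.0, 21.1, 21.0, 21.1, 24.5, 24.6, 24.5, 21.0]
f = DeadbandFilter(threshold=1.0)
forwarded = [r for r in readings if f.should_forward(r)]
print(forwarded)  # [21.0, 24.5, 21.0]
print(f"{len(forwarded)} of {len(readings)} readings sent to the cloud")
```

Here only 3 of 8 readings leave the edge node; at the scale of thousands of sensors, this kind of filter translates directly into the bandwidth and storage savings described above.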
When to Go Edge
The edge computing paradigm is targeting specific classes of applications with the following characteristics:
- Large-scale geo-distributed applications, such as distributed sensor network applications, which have to process data at the local level prior to providing it to the cloud. Typical examples include environmental monitoring and smart city applications, which are based on the collection and processing of streams from thousands or even millions of distributed sensing devices. Edge nodes enable the decentralization of these applications, which facilitates scalable and low-latency processing while leading to significant bandwidth savings.
- (Near) Real-time applications, which cannot tolerate cloud latency. Prominent examples can be found in the use of Cyber-Physical Systems in the areas of manufacturing, transport and energy management. In these systems, commands to the physical systems (e.g., robots, smart meters and machines) must be issued in real-time, based on data processing close to the physical infrastructure.
- Mobile applications, including applications that involve fast-moving objects such as autonomous cars and connected trains. These applications need to access resources that reside in close proximity to the moving objects, which is well supported by the deployment of appropriate edge nodes.
- Distributed multi-user applications with privacy implications, such as applications in need of data protection or compliance with privacy directives. These applications benefit from isolated storage and management of private data, which can hardly be provided at the cloud level.
Hence, while a large number of applications are adequately supported by cloud computing infrastructures and services (e.g., transactional enterprise applications), a considerable number of emerging data-intensive applications (e.g., smart cities, smart manufacturing and connected cars) need to leverage the benefits of edge deployments.
Rise of Edge Computing
The momentum of edge computing applications is clearly reflected in the proliferating industry initiatives around edge computing, including standards, products and services. Prominent examples include:
- Standards: The OpenFog consortium has very recently (February 2017) released a reference architecture (RA) for edge/fog computing.
- Gateways: Dell has released an “Edge Gateways” product line, designed to support the deployment of edge nodes and enable edge analytics. Likewise, SIEMENS provides a gateway device that can be deployed as an edge node in industrial environments. As another example, Ubuntu also offers a platform for deploying intelligent edge devices.
- Edge analytics frameworks: The Apache open source project Edgent was recently established to enable data stream processing within edge devices, including computationally constrained environments.
In a world of billions of connected devices and high-volume streaming datasets, deploying the cloud is not enough. Enterprises planning to engage with, or already dealing with, large-scale IoT/Big Data applications should also get ready for the edge. Fortunately, industry experts are already supporting this new endeavor and can help you make the most of it.