Over the past decade, service providers, enterprises and regional cloud providers have benefited from building or leveraging centralized data center and cloud architectures. However, an emerging class of applications has requirements that a centralized cloud architecture cannot meet. These applications are driven by the rise of technologies such as artificial intelligence (AI), machine learning (ML) and the Internet of Things (IoT), and they demand that compute resources be deployed at the network edge, closer to users and things. 5G is another important technology that will accelerate the deployment of compute at the edge, although edge compute deployment does not depend on pervasive 5G coverage.
There are four main reasons why data processing needs to happen at edge locations for these new workloads:
- Low latency: round-trip delay becomes critical for applications such as virtual reality, gaming and public safety.
- Bandwidth cost: with the dramatic rise in IoT, ever more data is sent toward the central cloud. Thinning data at the edge can cut bandwidth costs for applications such as handling large numbers of video surveillance streams.
- Autonomy: ensuring the site continues to operate if it is disconnected from the central cloud; for example, in public safety applications where multiple sensors and actuators interact locally.
- Privacy: ensuring high confidence in the location of stored data to satisfy data sovereignty regulations and consumer privacy.
As demand for distributed cloud grows, service providers, enterprises, regional cloud providers and colocation/exchange providers will need to build next-generation mini and micro data centers across multiple edge locations to meet new workload requirements. This architecture brings increased operational complexity, and with it the need for a counterbalance: a highly automated network fabric that makes multiple edge locations appear as one logical unit, simplifying the management of many remote data center sites.
Next-Generation Software-Defined Network Fabric for Distributed Cloud and 5G Edge
Pluribus Networks Adaptive Cloud Fabric™ (ACF), powered by the Netvisor® ONE network operating system, delivers a controllerless software-defined networking fabric that provides a VXLAN virtual overlay and is ideal for simplifying the networking required for distributed cloud.
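To make the VXLAN overlay concept concrete: VXLAN (RFC 7348) encapsulates Layer 2 frames inside UDP packets, tagging each with a 24-bit VXLAN Network Identifier (VNI) that keeps tenant segments isolated across the underlay. This is a minimal, generic sketch of the 8-byte VXLAN header defined in the RFC; it illustrates the encapsulation format itself, not any Pluribus-specific implementation.

```python
import struct

VXLAN_FLAG_VNI_VALID = 0x08  # "I" flag bit: the VNI field is valid (RFC 7348)

def build_vxlan_header(vni: int) -> bytes:
    """Build the 8-byte VXLAN header that precedes the inner Ethernet frame."""
    if not 0 <= vni < 2**24:
        raise ValueError("VNI must fit in 24 bits")
    # Byte 0: flags; bytes 1-3: reserved; bytes 4-6: VNI; byte 7: reserved.
    # The VNI occupies the upper 24 bits of the final 32-bit word.
    return struct.pack("!B3xI", VXLAN_FLAG_VNI_VALID, vni << 8)

def parse_vni(header: bytes) -> int:
    """Extract the 24-bit VNI from a VXLAN header."""
    (word,) = struct.unpack("!I", header[4:8])
    return word >> 8
```

The 24-bit VNI is what enables fine-grained segmentation at scale: roughly 16 million isolated segments versus the 4,094 available with traditional VLAN IDs.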
The Adaptive Cloud Fabric scales seamlessly across multiple data center locations, including centralized, near-edge and far-edge sites. It delivers lower operational and capital costs, ultra-low latency, and increased agility and resiliency through the following characteristics:
- Geographically distributed locations tied together into one (or multiple) logical unit(s).
- Highly available and resilient network fabric with no single point of failure.
- Full network state and controller intelligence distributed to every node in the fabric, eliminating the complexity, cost and controller-to-switch latency incurred with external controller architectures.
- Advanced VXLAN services along with granular network slicing/segmentation, multi-tenant services and integrated network performance monitoring telemetry support.
- Easy administration and fabric automation through a single point of management: the entire fabric can be managed from any switch via REST APIs or the CLI, using Ansible, Pluribus UNUM and other tools.
- White box economics and freedom from legacy network constraints. The Pluribus Adaptive Cloud Fabric is powered by a wide range of open networking switches including hardware from Dell EMC, D-Link Systems and Edgecore, as well as the Pluribus Freedom™ Series network switches.
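The single-point-of-management model above means an automation tool can target any one switch and have the change apply fabric-wide. The sketch below shows what such a REST call might look like; note that the hostname, the `/vrest/vlans` path and the payload fields are illustrative assumptions for this example, not documented Pluribus API endpoints, so consult the Netvisor ONE REST documentation for the actual resource names.

```python
import json
from urllib.request import Request

# Hypothetical seed-switch address; any fabric member could serve this role.
FABRIC_SEED = "https://seed-switch.example.net"

def build_vlan_request(vlan_id: int, scope: str = "fabric") -> Request:
    """Build (but do not send) a REST request that would create a VLAN
    with fabric-wide scope from a single switch. Endpoint path and
    payload shape are assumptions for illustration only."""
    payload = json.dumps({"id": vlan_id, "scope": scope}).encode()
    return Request(
        url=f"{FABRIC_SEED}/vrest/vlans",  # illustrative path, not verified
        data=payload,
        method="POST",
        headers={"Content-Type": "application/json"},
    )
```

In practice the same request could be issued from an Ansible playbook or from Pluribus UNUM; the key point is the `scope` parameter, which lets one API call configure every switch in the fabric rather than requiring per-device changes.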