Distributed Cloud Networking

With Distributed Cloud Networking, Seamlessly Interconnect Distributed Edge Computing Locations into a Single Unified Networking Fabric

Over the past decade, telecommunications service providers, enterprises and cloud service providers have built out centralized data center and cloud architectures. The advantage of centralized data centers is that scale delivers cost-effectiveness and centralization enables technical operations teams to be colocated with the infrastructure. However, a new class of applications is emerging whose requirements cannot be met by a centralized cloud architecture but can be met by a distributed one. These applications are driven by the rise of new technologies such as artificial intelligence (AI), machine learning (ML) and the Internet of Things (IoT), and they demand that compute resources be deployed at the network edge, closer to users and things. 5G is another important technology that will accelerate the deployment of compute at the edge: edge compute does not depend on pervasive 5G deployment, but 5G does depend on pervasive deployment of edge compute.

Edge Computing Drivers

There are four main reasons why data processing needs to happen at edge locations for these new workloads:

  • Low latency: round-trip delay becomes significant for a number of applications such as virtual reality, gaming and public safety.
  • Bandwidth cost: with a dramatic rise in IoT, more and more data is being sent toward the central cloud. Data thinning at the edge can reduce bandwidth costs for applications like handling large numbers of video surveillance streams.
  • Autonomy: ensuring that if the location is disconnected from the central cloud it will still perform; for example, in public safety applications with multiple sensors and actuators that may be interacting locally.
  • Privacy: ensuring high confidence in the location of stored data to satisfy data sovereignty regulations and consumer privacy.
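The "data thinning" idea in the bandwidth bullet above can be sketched in a few lines: a dead-band filter running at the edge forwards a reading to the central cloud only when it differs meaningfully from the last value sent. This is a generic illustration of the concept, not Pluribus functionality; all names are hypothetical.

```python
# Hypothetical sketch of data thinning at the edge: forward only readings
# that change significantly, suppressing the rest locally.

class EdgeFilter:
    """Dead-band filter: forward a reading upstream only when it deviates
    from the last forwarded value by more than `threshold`."""

    def __init__(self, threshold: float):
        self.threshold = threshold
        self.last_sent = None

    def process(self, reading: float) -> bool:
        """Return True if the reading should be sent to the central cloud."""
        if self.last_sent is None or abs(reading - self.last_sent) > self.threshold:
            self.last_sent = reading
            return True
        return False  # suppressed at the edge, saving uplink bandwidth


# A noisy-but-stable sensor stream: most readings never leave the edge.
f = EdgeFilter(threshold=1.0)
readings = [20.0, 20.2, 19.9, 25.0, 25.1, 20.0]
sent = [r for r in readings if f.process(r)]
print(sent)  # only the significant changes go upstream
```

With six raw readings, only three cross the uplink; at video-surveillance scale the same principle (e.g., forwarding only frames with detected motion) is what keeps backhaul costs bounded.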

As the demand for distributed cloud networking increases, service providers, enterprises, regional cloud providers and colocation/exchange providers will build next-generation mini and micro data centers across multiple edge locations to meet these new workload requirements. With this new distributed cloud architecture comes increased operational complexity and a need to counterbalance it with simplification – a highly automated network fabric that can make multiple edge locations appear as one logical unit in order to simplify the management of multiple remote data center sites.


New Automation Requirements for Distributed Cloud Networking

[Figure: Fabric Slices – Netvisor ONE OS and the Adaptive Cloud Fabric radically automate and simplify leaf/spine networking for distributed cloud and edge.]

Whereas centralized data centers have cost advantages due to scale and colocated technical resources to manage network operations, edge locations are highly distributed and are often deployed in challenging environments like central offices or lights-out facilities such as modular data centers with no on-site operations staff. This new distributed cloud architecture requires hardware and software that is purpose-built for these environments and that delivers comprehensive automation and visibility to simplify operations.

In many cases hardware must be purpose-built for the edge location. Pluribus partners with Celestica, Dell EMC and Edgecore, which provide a number of hardware solutions tailored to different environments. For example, Celestica has recently introduced the Edgestone™ Switch, which is designed for central office environments: it is 288mm deep, NEBS-compliant and offers all-front-panel access.

In these constrained edge environments, an efficient yet comprehensive network automation solution is critical. Unfortunately, existing software-defined networking (SDN) automation, virtualization, segmentation and traffic visibility solutions have, understandably, been designed for highly centralized data centers. These solutions typically require multiple controllers running on multiple servers at each edge location. In a central data center these additional servers are lost in the noise, but in an edge mini data center they incur significant cost and consume precious power and space. Centralized controllers also pay a latency penalty as they communicate back and forth with the switch/router infrastructure, and they act as a single point of failure. Furthermore, because of their centralized heritage, these controllers often require a controller-of-controllers to manage multiple sites, which increases integration and deployment complexity.

Pluribus has developed a solution that is purpose-built for distributed cloud networking and the edge. Our Netvisor ONE OS and Adaptive Cloud Fabric are completely distributed and designed to run on the top-of-rack and spine switches that must be deployed for connectivity anyway. These switches are becoming more powerful from a control plane perspective, with multi-core CPUs and increasing amounts of RAM and SSD storage. Our “controllerless” solution is optimal for edge networking, delivering a cost-, space- and power-efficient solution that provides full network automation, including SDN automation of the underlay, a virtualized overlay fabric, network segmentation/slicing, and comprehensive visibility and analytics.

Next-Generation Software-Defined Network Fabric for Distributed Cloud Architecture and 5G Edge

Pluribus Networks Adaptive Cloud Fabric™ (ACF), powered by the Netvisor® ONE network OS, delivers a controllerless software-defined networking fabric that provides a VXLAN virtual overlay and is ideal for simplifying the networking required for distributed cloud architecture.
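The VXLAN overlay mentioned above uses the standard encapsulation defined in RFC 7348, in which each virtual segment is identified by a 24-bit VXLAN Network Identifier (VNI). As a minimal sketch of that wire format (the standard's header layout, not the product's implementation), the 8-byte VXLAN header can be constructed like this:

```python
import struct

def vxlan_header(vni: int) -> bytes:
    """Build the 8-byte VXLAN header per RFC 7348:
    1 byte of flags (0x08 = valid-VNI bit), 3 reserved bytes,
    a 3-byte VNI, and 1 trailing reserved byte."""
    if not 0 <= vni < 2**24:
        raise ValueError("VNI is a 24-bit value")
    # First 32-bit word: flags in the top byte, reserved bits zero.
    # Second 32-bit word: VNI in the top 24 bits, reserved byte zero.
    return struct.pack("!II", 0x08 << 24, vni << 8)

hdr = vxlan_header(5001)
print(hdr.hex())  # 0800000000138900
```

The 24-bit VNI allows roughly 16 million isolated segments per fabric, which is what makes VXLAN-based slicing practical for multi-tenant distributed clouds.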

[Figure: Distributed Cloud with Adaptive Cloud Fabric]

The Adaptive Cloud Fabric can seamlessly scale across multiple data center locations, including centralized, near edge and far edge sites, and provides the following characteristics to deliver lower operational and capital costs, ultra-low latency and increased agility and resiliency:

  1. Geographically distributed locations tied together into one (or multiple) logical unit(s).
  2. Highly available and resilient network fabric with no single point of failure.
  3. Full network state and controller intelligence distributed to every node in the fabric that eliminates the complexity, cost and controller-to-switch latency incurred with external controller architectures.
  4. Advanced VXLAN services along with granular network slicing/segmentation, multi-tenant services and integrated network performance monitoring telemetry support.
  6. Easy administration and fabric automation: any switch can serve as a single point of management via REST APIs or the CLI, driven by tools such as Ansible and Pluribus UNUM.
  6. White box economics and freedom from legacy network constraints. The Pluribus Adaptive Cloud Fabric is powered by a wide range of open networking switches including hardware from Dell EMC, D-Link Systems and Edgecore, as well as the Pluribus Freedom™ Series network switches.
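Because fabric state is shared by every node, a management tool can target any one switch and have the change apply fabric-wide. The sketch below illustrates that pattern only in outline: the endpoint path, payload fields and switch address are hypothetical placeholders, not the actual Netvisor ONE REST API.

```python
import json

# Hypothetical REST-driven fabric automation sketch. The URL path and
# payload schema are illustrative, not Pluribus-documented endpoints.

FABRIC_ENTRY_POINT = "https://switch-1.example.net"  # any fabric member

def make_vlan_request(vlan_id: int, scope: str = "fabric") -> dict:
    """Compose a request that, sent to a single switch, would apply
    across the whole fabric because every node shares the fabric state."""
    return {
        "method": "POST",
        "url": f"{FABRIC_ENTRY_POINT}/api/vlans",  # hypothetical path
        "body": json.dumps({"id": vlan_id, "scope": scope}),
    }

req = make_vlan_request(100)
print(req["method"], req["url"])
```

The design point being illustrated is the absence of a controller-of-controllers: automation tooling needs one reachable switch per fabric, not a management server per site.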

Contact Pluribus today for more information
