
Service Provider Use Case: Distributed Cloud for Edge Compute


In earlier blogs in this series, we covered data center architecture trends, network virtualization and overlays, traditional network automation and advanced SDN automation, as well as the importance of per-flow telemetry and performance monitoring tools. I also covered how all of those tools are pulled together to deliver an agile and resilient data center network infrastructure for enterprise private cloud deployments in Enterprise Use Case: Active-Active Data Centers for Private Cloud. In this blog I will take a similar tack, but with a focus on how the technology can support Service Provider edge compute network fabrics.

What is Distributed Cloud?

It is clear that there is momentum in the industry to move from a highly centralized data center model to a more distributed data center architecture. This is the case for enterprises, but it is even more pronounced for Service Providers including telcos, mobile network operators, cable operators, cloud providers and numerous other SPs. Over the past few decades these service providers have built out centralized data centers to support both their back-office operations as well as to support service delivery – for example data centers delivering 3G and 4G services to mobile customers. The advantage of centralized data centers is that scale delivers cost-effective operations and enables the colocation of technical operations teams with the infrastructure.

However, new applications are emerging that require a more distributed cloud architecture. These applications often include artificial intelligence (AI), machine learning (ML), and the Internet of Things (IoT), to name a few, but there are many other examples too. These applications have various requirements driving distribution, such as improved user experience, lower latency for real-time cloud processing, reduced bandwidth costs, data sovereignty and so on. These new application requirements demand that compute resources be deployed at the service provider network edge, which could be in a colocation facility, in the central office, at the bottom of a 5G cell tower, or even all the way out on the factory floor or on the wind farm.

[Diagram: Edge Compute Resources Deployment Locations]

5G is another important technology that requires distributed compute to support virtual network functions (VNFs) and containerized network functions (CNFs) such as vEPCs. 5G will drive the deployment of compute at the edge but the other applications mentioned above also drive the need for edge compute. In other words, edge compute is not dependent on pervasive 5G deployment, but 5G is dependent on pervasive deployment of edge compute.

How does a Network Fabric Play a Role?

Whereas centralized data centers have cost advantages due to their scale and colocated technical resources, edge locations are highly distributed and are often deployed in challenging environments like central offices or lights-out facilities, such as modular data centers with no on-site operations staff.

[Diagram: Distributed Cloud/Edge - Where are we now & what's next?]

The diagram above highlights that leaf and spine data center networks are deployed in each location and that, depending on the size of the deployment, there will be some number of data center network switches that need to be managed and configured to deploy new services like VLANs, VRFs and security policies. If the service provider NetOps leadership does not want a tech driving from site to site configuring switches one by one, or even sitting in a central location configuring switches one by one, a comprehensive network automation solution designed for these constrained environments is critical.
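To make the one-by-one provisioning problem concrete, the sketch below generates the per-switch REST payloads an automation job would otherwise have to issue individually for a new VLAN/VRF service across several edge sites. The inventory, hostnames, API path and payload fields are all illustrative assumptions, not an actual vendor API.

```python
# Hypothetical sketch: build one REST payload per leaf switch in an edge-site
# inventory, so a central automation job can push a new service everywhere
# instead of a tech configuring each switch by hand. All names are invented.

def build_vlan_payloads(inventory, vlan_id, vrf, description):
    """Return one provisioning payload per switch across all edge sites."""
    payloads = []
    for site, switches in inventory.items():
        for switch in switches:
            payloads.append({
                # Illustrative management URL for this switch.
                "target": f"https://{switch}.{site}.example.net/api/v1/vlans",
                "body": {"vlan-id": vlan_id, "vrf": vrf, "description": description},
            })
    return payloads

edge_inventory = {
    "central-office-01": ["leaf1", "leaf2"],
    "cell-site-17": ["leaf1"],
}

payloads = build_vlan_payloads(edge_inventory, 120, "tenant-red", "IoT backhaul")
print(len(payloads))  # 3 -- one call per switch, at every site
```

Even in this tiny example the call count scales with switch count times site count, which is exactly the operational burden a fabric-wide automation solution is meant to remove.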

SDN automation paired with network virtualization is the ideal approach. However, traditional SDN solutions require multiple external controllers running on top of multiple servers at every edge location, and typically require controllers of controllers as well to stitch multiple sites together. All of this per-site management infrastructure overhead is acceptable in a large central data center but becomes increasingly problematic as the sites get smaller.

The optimal approach in this edge scenario is a distributed, controllerless SDN solution based on the principles of open networking and disaggregation, designed from the ground up to automate both the leaf and spine underlay fabric and the virtualized overlay fabric at each site. Such a solution avoids the cost, space and power burden of external controllers and should be designed so that it can be inserted into a brownfield network and work with any underlying transport, such as dark fiber, Layer 3 IP connectivity or MPLS.

The fabric architecture should also be designed so that it can stretch seamlessly across N sites regardless of geographic distance. The solution must have topological flexibility to group together edge sites that make sense to be grouped together into one fabric along with the ability to create multiple fabrics that can be managed independently but that can share network services across the fabrics.

Additionally, the SDN solution should be integrated with orchestration solutions like OpenStack or Kubernetes distributions such as OpenShift from Red Hat. When stretching across sites, the network overlay should allow the orchestrators to be stretched as well, further reducing the need to have dedicated orchestration servers at each site. The fabric solution should also have full REST API equivalence with the CLI to provide simple programming of the network and orchestration of compute, storage and networking. In an ideal scenario, a REST API call is made to only one switch and the SDN fabric ensures the config change is propagated to all switches in the fabric, reducing the communications and processing load on orchestrators and OSS systems.
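The "one API call, fabric-wide effect" idea above can be sketched as follows: the call lands on a single seed switch, and the fabric control plane replicates the change to every member. The class, method and field names below are illustrative stand-ins, not the Netvisor ONE API.

```python
# Minimal sketch of fabric-wide config propagation: an API call handled by
# one switch is applied locally and then replicated to its fabric peers.
# This models the behavior only; real fabrics use a distributed control
# plane with transactions and rollback, which is omitted here.

class FabricSwitch:
    def __init__(self, name):
        self.name = name
        self.peers = []   # other members of the same fabric
        self.vlans = {}   # vlan-id -> attributes

    def api_create_vlan(self, vlan_id, scope="fabric"):
        """REST handler: apply locally, then propagate if scope is fabric-wide."""
        self.vlans[vlan_id] = {"scope": scope}
        if scope == "fabric":
            for peer in self.peers:
                peer.vlans.setdefault(vlan_id, {"scope": scope})

leafs = [FabricSwitch(f"leaf{i}") for i in range(4)]
for s in leafs:
    s.peers = [p for p in leafs if p is not s]

leafs[0].api_create_vlan(200)              # one call, to one switch...
print(all(200 in s.vlans for s in leafs))  # True -- config on all members
```

The orchestrator or OSS only ever talks to one endpoint, which is what keeps its communications and processing load flat as the fabric grows.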

[Diagram: Simplifying Leaf/Spine Complexities for Edge Deployments]

In an SP environment, robust support for multi-tenancy and network segmentation/slicing is also essential, providing security for tenants as well as isolation and resource prioritization for various applications. It is also important to support service chaining between network segments and across tenants as needed for composing complex services or supporting tenant partnerships.
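A toy model can make the segmentation and service-chaining concepts concrete: each tenant owns isolated segments, and traffic crossing a tenant boundary is steered through an ordered list of service functions (for example a firewall VNF). The tenant, segment and VNF names below are hypothetical.

```python
# Illustrative model of multi-tenant segmentation with service chaining.
# Segments within a tenant are isolated from other tenants by default;
# cross-tenant traffic is only permitted through an explicit chain of
# service functions. All identifiers are invented for the sketch.

tenants = {
    "red":  {"segments": ["red-web", "red-db"]},
    "blue": {"segments": ["blue-app"]},
}

service_chains = [
    # red-web -> blue-app must traverse the firewall VNF, then the load balancer
    {"src": "red-web", "dst": "blue-app", "chain": ["fw-vnf", "lb-vnf"]},
]

def chain_for(src, dst):
    """Return the ordered service functions between two segments, if any."""
    for rule in service_chains:
        if rule["src"] == src and rule["dst"] == dst:
            return rule["chain"]
    return []  # same-tenant or unchained traffic forwards directly

def same_tenant(seg_a, seg_b):
    """True if both segments belong to one tenant (no chain required)."""
    return any(seg_a in t["segments"] and seg_b in t["segments"]
               for t in tenants.values())

print(chain_for("red-web", "blue-app"))  # ['fw-vnf', 'lb-vnf']
print(same_tenant("red-web", "red-db"))  # True
```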

Finally, granular telemetry that can track every endpoint connected to the fabric and every flow across the fabric is critical to managing, monitoring and troubleshooting the network.
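Per-flow telemetry of the kind described above typically keys each flow on its 5-tuple and attaches performance measurements to it, so operators can surface the specific flows that are misbehaving. The record fields and latency budget in this sketch are illustrative assumptions, not the product's telemetry schema.

```python
# Sketch of per-flow telemetry analysis: flows are identified by 5-tuple
# (src, dst, protocol, source port, destination port) and carry observed
# metrics; a simple threshold flags flows worth investigating.

from collections import namedtuple

Flow = namedtuple("Flow", "src dst proto sport dport latency_ms bytes_tx")

def flag_slow_flows(flows, latency_budget_ms=10.0):
    """Return flows whose observed latency exceeds the service budget."""
    return [f for f in flows if f.latency_ms > latency_budget_ms]

flows = [
    Flow("10.0.1.5", "10.0.2.9", "tcp", 49152, 443, 3.2, 120_000),
    Flow("10.0.1.7", "10.0.2.9", "tcp", 49153, 443, 27.8, 4_000),
]
print([f.src for f in flag_slow_flows(flows)])  # ['10.0.1.7']
```

In practice this kind of check runs continuously against telemetry exported by the fabric, rather than over a static list, but the flow-level granularity is the point: a site-wide average would hide the one slow flow entirely.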

The Pluribus Netvisor ONE OS is based on open source software such as Ubuntu Linux and the Linux Foundation's Free Range Routing (FRR), and supports a disaggregated networking model through partnerships with bare metal hardware vendors such as Dell, Edgecore, Celestica and others. The OS supports a multitude of high-performance data center network switches, including switches that are NEBS compliant and designed specifically for telco and mobile network operator edge environments. The Pluribus Networks distributed, controllerless Adaptive Cloud Fabric SDN software is built into the Netvisor ONE OS and automates the underlay and overlay without the need for external controllers, saving cost, power, space and complexity. It is designed so that the SDN intelligence resides in the leaf switches, and by using open standard protocols it works with any existing spine or WAN infrastructure for seamless brownfield insertion. It is also designed to stretch easily across multiple edge sites and to support robust multi-tenancy, network slicing/segmentation, service chaining and rich per-flow telemetry.

You can read more about the Pluribus distributed cloud solution here.

If you have any interest in learning more, feel free to send me an email at mcapua@pluribusnetworks.com or click here for a demo request.


About the Author

Mike Capuano

Mike is Chief Marketing Officer of Pluribus Networks. Mike has over 20 years of marketing, product management and business development experience in the networking industry. Prior to joining Pluribus, Mike was VP of Global Marketing at Infinera, where he built a world class marketing team and helped drive revenue from $400M to over $800M. Prior to Infinera, Mike led product marketing across Cisco’s $6B service provider routing, switching and optical portfolio and launched iconic products such as the CRS and ASR routers. He has also held senior positions at Juniper Networks, Pacific Broadband and Motorola.