New levels of scalability for hybrid clouds and edge computing
Can a data center fabric be both extremely simple to operate and extremely scalable?
Today we answered that question with a resounding “Yes!”
The Adaptive Cloud Fabric was already the industry’s most comprehensive, cost-effective and easy-to-use solution for data center fabrics and distributed clouds. It was, and is, Networking – Simplified.
Now we’re taking it to new heights of scalability. Today we announced architectural innovations that scale in multiple dimensions, so the Adaptive Cloud Fabric can support larger fabrics within centralized data centers for scalable hybrid and private clouds, as well as fabrics spanning more widely distributed edge data centers and other edge computing sites.
Here’s a brief summary of the two key innovations:
- Thousand-Node Fabrics: Architecture innovations to extend the advantages of controllerless SDN automation across larger fabrics.
- Open Fabric Extension with EVPN: Enabling multi-fabric federation and multi-vendor interoperability.
In the rest of this blog, I’ll discuss why fabrics need increasing scalability and summarize these two innovations to deliver it, along with a couple of customer examples. In the coming weeks, we’ll do a deeper dive into both topics, so stay tuned.
The Challenge: Data center network fabrics must scale in multiple dimensions to meet the challenges of distributed clouds and edge computing.
Private cloud is the multi-cloud anchor: analysts estimate that 75% of workloads will remain in private clouds rather than migrating to public clouds. Private clouds are also growing in scale and becoming more distributed with the expansion of multi-site data centers and edge computing. These trends lead to a challenge for data center network fabrics: how to build a distributed multi-site data center fabric that can scale up in core DCs and out to edge DCs, while being extremely cost-effective, automated, agile, and easy to operate.
Figure 1: Fabrics Must Scale in Multiple Dimensions
With our new scaling innovations, we believe we have the industry’s best answer to the challenge.
With a hierarchical multi-pod architecture and new capabilities incorporated into the Pluribus UNUM management system (Figure 2), Pluribus enables the Adaptive Cloud Fabric to scale far beyond current levels and meet virtually any customer’s scalability requirements. The new architecture will scale to meet customer demands for 256, 512, or even 1024 switches in a unified fabric.
Figure 2: Extending the advantages of controllerless SDN automation to larger fabrics
Customer Case Studies: Larger and More Distributed Data Center Fabrics
We have active deployments that are demonstrating increasing scalability of unified fabrics in both larger centralized data centers and highly distributed edge data centers.
Large active-active multi-site data center: A government agency deployed a 64-switch Adaptive Cloud Fabric in a single data center and expects to grow it to a unified multi-site 128-switch fabric across two data centers, enabling active-active operations for high application availability, agile service delivery, optimized capacity utilization and simplified operations (Figure 3).
Figure 3: Government Agency Unified Multi-site Fabric
For more details on this example, read the case study.
Highly distributed cloud fabric for edge computing: Trilogy Networks is leading the Rural Cloud Initiative (RCI), working with rural telecom network providers to deliver highly distributed edge computing capabilities for 5G and Internet of Things (IoT) applications in precision agriculture, oil and gas, mining and other predominantly rural industries. Trilogy, which recently announced completion of Phase 1 of the RCI Farm of the Future and is transitioning to production network deployments, is using the Adaptive Cloud Fabric to connect hundreds of highly distributed edge computing sites into a unified fabric. Each service provider partner’s unified fabric will potentially cover dozens or even hundreds of sites, from regional data centers and central offices to multi-tenant micro data centers in base stations and other edge locations (Figure 4).
Figure 4: Adaptive Cloud Fabric in Trilogy Rural Cloud Initiative
Open Fabric Extension with EVPN
BGP EVPN provides an increasingly popular, standards-based approach to create overlay networks using various data plane encapsulations, with VXLAN encapsulation most commonly used for data center overlay fabrics. While BGP EVPN and the Pluribus Adaptive Cloud Fabric can both be used to create a mesh of VXLAN tunnels and manage control-plane address learning in the overlay, they do so in different ways. BGP EVPN uses the BGP protocol to distribute control plane information, while the Adaptive Cloud Fabric uses the controllerless SDN fabric to do so. This SDN-based, protocol-free approach substantially reduces the complexity of configurations required in each switch and can help to increase scalability by reducing the protocol processing demands on switch CPUs.
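To make the configuration-complexity point concrete, here is a minimal sketch of the kind of per-switch configuration a typical BGP EVPN deployment requires, written in FRRouting-style syntax (the AS numbers, addresses, and VNI mappings are purely illustrative, and vendor CLIs vary):

```
! Illustrative leaf switch running BGP EVPN with VXLAN (FRRouting-style syntax)
router bgp 65001
 bgp router-id 10.0.0.11
 ! Underlay peering to each spine
 neighbor 10.1.1.1 remote-as 65000
 neighbor 10.1.2.1 remote-as 65000
 !
 address-family l2vpn evpn
  neighbor 10.1.1.1 activate
  neighbor 10.1.2.1 activate
  ! Advertise all locally configured VXLAN VNIs as EVPN routes
  advertise-all-vni
 exit-address-family
!
! Plus per-tenant VLAN-to-VNI mappings, VTEP source interface,
! anycast gateways, etc., maintained on this switch
```

Multiply some variant of this across dozens of VNIs and every switch in the fabric and the operational load adds up; the controllerless SDN fabric distributes the same reachability information without that per-switch protocol configuration.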
Combining these advantages with built-in automation and a rich set of overlay services, the Adaptive Cloud Fabric creates overlay networks that are simpler, more operationally efficient, and richer in service capabilities than many overlays based on BGP EVPN. That’s why Pluribus customers have pushed us to scale the Adaptive Cloud Fabric in a way that preserves all of these advantages.
Nonetheless, BGP EVPN is a valuable tool that can complement the Adaptive Cloud Fabric in various applications, and that’s why we are also supporting it.
Multi-vendor interoperability is one such application. Figure 5 shows how BGP EVPN can support extension of overlay services from an Adaptive Cloud Fabric to fabrics provided by other vendors.
Figure 5: Multi-vendor Interoperability Using Open Fabric Extension with EVPN
As the figure shows, we can enable service extension without requiring BGP EVPN configuration on every switch in the Adaptive Cloud Fabric. EVPN needs to be enabled only on a border leaf cluster (two switches), which maps the fabric control plane to the BGP control plane.
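As a rough sketch of what this looks like in practice (again in FRRouting-style syntax, with illustrative AS numbers and addresses), the border leaf’s entire BGP EVPN footprint can be a single session toward the other vendor’s fabric, while the rest of the Adaptive Cloud Fabric carries no BGP EVPN configuration at all:

```
! Illustrative border leaf: one eBGP EVPN session to the third-party fabric
router bgp 65100
 neighbor 192.0.2.2 remote-as 65200
 address-family l2vpn evpn
  neighbor 192.0.2.2 activate
  ! Re-advertise the fabric's overlay segments as standard EVPN routes
  advertise-all-vni
 exit-address-family
```

The border leaf cluster translates between the fabric’s internal control plane and these standard EVPN route advertisements, so the third-party fabric sees an ordinary EVPN peer.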
BGP EVPN can also be useful to give customers options in how they scale their Pluribus-delivered network fabrics by seamlessly interconnecting multiple Adaptive Cloud Fabrics and extending services between them (Figure 6).
Figure 6. EVPN Fabric Extension for Architectural Flexibility
By enabling EVPN interconnection of Adaptive Cloud Fabrics, we allow customers to optimize the scale of each individual fabric based on their own constraints, which might include highly geographically distributed networks or a desire to partition network operations to align to application availability zones. EVPN interconnection can also enable scaling beyond the thousand nodes supported in an individual Adaptive Cloud Fabric to create an extended fabric of many thousands of switching nodes.
For a deeper dive into BGP EVPN, see my blog, BGP EVPN for Scaling Data Center Fabrics: Pros and Cons, Deployment Options and Trade-offs.
These innovations open up many new possibilities for Pluribus customers to scale their data center fabrics in multiple dimensions without sacrificing operational simplicity. This is just one more example of how Pluribus continues to deliver on our mission to simplify networking.
Learn more by watching our webinar on-demand: Thousand-node Fabrics: Scaling with Controllerless SDN and EVPN.
Subscribe to our updates and be the first to hear about the latest blog posts, product announcements, thought leadership and other news and information from Pluribus Networks.
About the Author
Jay Gill is Senior Director of Marketing at Pluribus Networks, responsible for product marketing and open networking thought leadership. Prior to Pluribus, he guided product marketing for optical networking at Infinera, and held a variety of positions at Cisco focused on growing the company’s service provider business. Earlier in his career, Jay worked in engineering and product development at several service providers including both incumbents and startups. Jay holds a BSEE and MSEE from Stanford and an MBA from UCLA Anderson.