The industry is seeing an increasing distribution of data centers, driven by a desire to improve end-user experience, requirements for additional cloud availability zones, and the rise of edge compute. At Pluribus we like to use the term “distributed cloud” for this emerging architecture – effectively a marriage between distributed compute locations and the cloud consumption model. While Kubernetes and containers are on the rise, there are many existing private cloud deployments across both enterprises and service providers where OpenStack is the orchestrator. OpenStack is a well-known open-source cloud orchestrator used to manage, automate and monitor cloud compute, network and storage infrastructure. In practical terms, OpenStack instantiates virtual machines (VMs) in a private cloud environment and at the same time automatically configures the network and storage associated with each VM. Likewise, if a VM is moved or torn down, OpenStack reconfigures the network and storage appropriately.
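The lifecycle described above – compute, network and storage provisioned together and released together – can be sketched in miniature. This is a toy model, not OpenStack code; every name here is illustrative:

```python
# Toy sketch (NOT OpenStack code): models the orchestration contract described
# above. Launching a VM also provisions its network; teardown releases the
# network once no VM still uses it. All names are illustrative.
class ToyOrchestrator:
    def __init__(self):
        self.vms = {}          # VM name -> attached network
        self.networks = set()  # networks currently provisioned

    def launch(self, vm, network):
        # The orchestrator configures the network alongside the compute instance.
        self.networks.add(network)
        self.vms[vm] = network

    def teardown(self, vm):
        net = self.vms.pop(vm)
        # Reconfigure: release the network if no other VM depends on it.
        if net not in self.vms.values():
            self.networks.discard(net)
```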
A major attribute of distributed cloud is service continuity and disaster recovery within or across regions. Many enterprises and service providers use OpenStack to build private clouds for their geo-distributed cloud networks and offer multiple availability zones to protect applications from a data center failure. These data centers can vary in size and can sit within a single region or span geographically disparate regions.
Edge Deployments, Improving User Experience
In addition, many telcos, cloud providers and enterprises are deploying compute much closer to end-users to provide a better user experience and lower latency for emerging technologies such as IoT, Artificial Intelligence (AI), Machine Learning (ML), Smart City, self-driving cars and much more. Telcos, for example, are transforming their central offices (COs) and even radio base stations into virtualized data centers. There are currently over 20,000 COs in the US alone so this is a tall order and must be done with an eye towards complete automation and lowest cost.
This increasing distribution raises challenges for the cost-effectiveness of OpenStack. Today a typical OpenStack deployment requires a number of non-revenue-generating infrastructure servers at every data center location. For example, the Red Hat OpenStack Platform 13 environment requirements documentation calls for:
- 1 host machine for the Red Hat OpenStack Platform director
- 3 host machines for Red Hat OpenStack Platform Compute nodes
- 3 host machines for Red Hat OpenStack Platform Controller nodes in a cluster
- 3 host machines for Red Hat Ceph Storage nodes in a cluster
Clearly, deploying ten non-revenue-generating servers (or even three) at every location is costly and complex. The ideal alternative is to stretch an overlay networking fabric across multiple smaller distributed data centers so they all appear as a single data center. This in turn “stretches” the OpenStack controller infrastructure, so those servers only need to live at one DC location, dramatically reducing cost. Stretching an overlay network brings other advantages as well: one-touch configuration changes across all nodes of the fabric, seamless interconnectivity, and a unified underlay/overlay network fabric that simplifies day 1 and day 2 operations with fabric-wide provisioning of services, network segmentation for security, and improved network visibility.
Many vendors offer controller-based network fabrics, using proprietary controllers like Cisco APIC and Big Switch Big Network Controller or open-source controllers like OpenDaylight (ODL). As with OpenStack, two or three of these controllers are typically required for redundancy and are deployed at every data center location. The controllers use a southbound API to program network services on the physical infrastructure, and they integrate with OpenStack via the Modular Layer 2 (ML2) plugin. With this architecture each data center is automated, but the data centers are not automated together – they act as islands. Therefore, in addition to the multiple non-revenue-generating servers deployed at every location, this architecture requires another orchestration layer to coordinate across the data center islands – for example, Cisco ACI’s Multi-Site Orchestrator. Needless to say, this requires a lot of integration work between all the components and imposes an unnecessary burden in cost, space and power.
An alternative approach is a controllerless, SDN-automated overlay fabric integrated with OpenStack. This is the approach Pluribus takes with our Netvisor® ONE and the Adaptive Cloud Fabric™. Pluribus distributes the SDN intelligence and the fabric state database into the user space of the switches deployed for leaf-and-spine connectivity. Each switch has a CPU, RAM and an SSD, and with innovative software these resources are leveraged as a distributed compute platform to deliver SDN with no external controllers required. Pluribus integrates with OpenStack via the ML2 plugin, enabling a single OpenStack instance to program the Pluribus network overlay across multiple data center sites. This solution brings the simplicity and agility needed to deploy cloud services faster across a distributed cloud network, from one single point, without costly external controllers. The Adaptive Cloud Fabric also provides programmable APIs to manage and orchestrate network resources across the entire fabric.
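Neutron mechanism drivers are enabled in the ML2 configuration file. The fragment below is only a sketch of what such a configuration might look like – the `pluribus` driver name is an assumption for illustration; consult the Pluribus deployment guide for the actual driver name and any vendor-specific options:

```ini
# /etc/neutron/plugins/ml2/ml2_conf.ini -- illustrative fragment only;
# the "pluribus" mechanism driver name is an assumption, not confirmed syntax.
[ml2]
type_drivers = vlan,vxlan
tenant_network_types = vlan
mechanism_drivers = pluribus
```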
Let’s look at the key benefits that Pluribus brings for distributed OpenStack cloud deployments:
1) Simplify & Accelerate Overlay Service for Federated OpenStack Distributed Cloud
Pluribus uses the OpenStack ML2 plugin mechanism to create a Layer 2 network (VLAN) associated with each VM. The Pluribus ML2 plugin, which provides powerful yet simple overlay automation, enables customers to create a network through OpenStack: OpenStack configures the VLANs in the Adaptive Cloud Fabric from a single point, and the Adaptive Cloud Fabric then propagates the VLANs throughout the fabric and automatically maps them to VXLAN overlay tunnels.
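The flow above can be sketched as an ML2-style mechanism driver in miniature. This is a hedged illustration of the pattern, not the actual Pluribus plugin code: the class names, the VNI allocation scheme, and the fabric API are all assumptions made for the example:

```python
# Illustrative ML2-style sketch; FabricClient and PluribusStyleMechanism are
# hypothetical names, not the real plugin's classes.
class FabricClient:
    """Stands in for the single fabric-wide API endpoint."""
    def __init__(self, switches):
        self.switches = switches
        self.vlan_to_vni = {}  # fabric-wide VLAN -> VXLAN VNI map
        self.configured = {sw: set() for sw in switches}

    def provision_vlan(self, vlan_id):
        # One call programs every switch: the fabric, not the orchestrator,
        # fans the change out to all nodes and maps the VLAN to a VXLAN VNI.
        vni = 10000 + vlan_id  # illustrative VNI allocation scheme
        self.vlan_to_vni[vlan_id] = vni
        for sw in self.switches:
            self.configured[sw].add(vlan_id)
        return vni

class PluribusStyleMechanism:
    """Invoked by Neutron after a network is committed to its database."""
    def __init__(self, fabric):
        self.fabric = fabric

    def create_network_postcommit(self, context):
        # Neutron hands the driver the network's segmentation details.
        segment = context["provider:segmentation_id"]
        self.fabric.provision_vlan(segment)
```

A single `create_network_postcommit` call provisions the VLAN on every switch in the fabric, which is the point of the single-touchpoint design: OpenStack talks to the fabric once, not to each data center.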
Now customers can expand their network to meet new demands with a very simple architecture and without needing multiple controllers at multiple data centers.
2) Simplify OpenStack Horizon for Distributed Management
The Pluribus Adaptive Cloud Fabric can span across a region over a standard IP network and it provides an overlay fabric abstraction that stretches across multiple sites. This allows the OpenStack administrator to launch compute instances (VMs), manage networks, and set limits on the distributed cloud resources across multiple geographical locations from a single point of authentication and control.
3) Tenant (VM) Visibility from Federated OpenStack Distributed Cloud
The Pluribus Adaptive Cloud Fabric also provides visibility of each VM’s MAC address, IP address, port, state and VXLAN mapping. This allows the OpenStack administrator to monitor, troubleshoot, and track workload mobility across a geo-distributed cloud network from a single point of management.
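Conceptually, the fabric maintains a table of endpoint records that an administrator can query from one place. The sketch below is a toy model of that idea – the record fields mirror the attributes listed above, but the structure, switch names and addresses are invented for illustration:

```python
# Toy model of fabric-wide endpoint visibility; all values are illustrative.
from dataclasses import dataclass

@dataclass
class VPort:
    mac: str
    ip: str
    switch: str
    port: str
    state: str
    vni: int

def locate(vports, mac):
    """Find a workload anywhere in the fabric by MAC: one query, all sites."""
    return next((v for v in vports if v.mac == mac), None)

# One table spanning both data centers -- no per-site query needed.
table = [
    VPort("00:50:56:aa:01:02", "10.0.1.5", "leaf-dc1-1", "eth17", "active", 10042),
    VPort("00:50:56:aa:03:04", "10.0.1.6", "leaf-dc2-1", "eth9",  "active", 10042),
]
```

If a VM migrates between sites, its record's `switch`/`port` fields change while the MAC stays constant, which is what makes mobility tracking from a single point possible.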
The Pluribus Adaptive Cloud Fabric is purpose-built for distributed cloud networking and edge compute architectures, delivering an integrated, SDN-automated underlay and overlay data center fabric that does not require any external controllers. The Pluribus ML2 plugin for OpenStack enables a single instance of the OpenStack control infrastructure to be stretched across multiple data center sites. This solution brings management simplicity, business agility, resiliency and security to cloud providers and enterprises deploying private clouds, letting them orchestrate and deliver cloud services faster across geo-distributed cloud environments from a single point of control.
Please refer to the Pluribus OpenStack ML2 Plugin Deployment Guide for more details.
If you would like to see a demo of the Pluribus ML2 Plugin integration with OpenStack, please join the webinar on Sept 16, 2020 – Register here.
Subscribe to our updates and be the first to hear about the latest blog posts, product announcements, thought leadership and other news and information from Pluribus Networks.
About the Author
Chirag Kachalia is a Senior Technical Marketing Engineer at Pluribus Networks with deep technical expertise and long-term experience in service provider, metro, core and data center networking. He currently focuses on articulating the Pluribus solution to customers and partners, assisting with pre-sales activity, and working with Product Management to continually improve products.