SDN and Openflow - Enabling Network Virtualization in the Cloud: Part I

[Admin note: Sunay will be posting a three-part series on SDN and Openflow in which he explains why the networking community has been gripped with excitement about these developments, and what implications they have for network virtualization for enterprises, cloud providers and, in general, anybody who needs policy- or rule-based networks or needs to upgrade their networking equipment every couple of years or so. Before we dig into Sunay's first part of the series, it might be useful to share links to two useful documents about Openflow from www.openflow.org. The first is a whitepaper providing an introduction to the protocol, system architecture, and use cases, and the second is the Openflow Specification v1.1.0 implementation document.]

The first article of the series focuses on the protocol itself. The second article will focus on how people are trying to build on it, along with some end-user perspective I have accumulated over the last year or so. The last article in the series will discuss the challenges and what we are doing to help.

Value Proposition

The basic piece of Openflow is nothing more than a wire protocol that allows one piece of code to talk to another. The idea is that for a typical piece of network equipment, instead of logging in and configuring it via its embedded web or command-line interface (the way you configure your home wifi router), you can get the Controller from someone other than the equipment vendor. Now technically, and in the short term, you are probably worse off, because you are getting the equipment from one vendor and the management interface from another, and there are bound to be rough edges. [Note: We assume that our goal is a better mid-term and long-term ROI and manifold ease of management]

In other words, Openflow creates a standard around how the management interface or Controller talks to the equipment, so equipment vendors can design their equipment without worrying about the management piece, and someone else can create a management piece knowing full well that it will manage any equipment that supports Openflow. So people who understand standards ask: what’s the big deal? One still can’t do more than what the equipment is designed to do!! And bingo! That is the holy grail of any standard. By creating the standard, you let the people who make equipment focus on their expertise and the people doing management focus on making the controllers better. This is in no way different from how computers work today. Intel/AMD create the key chips, vendors like Dell, HP, etc. create the servers, and the Linux community (or BSD, OpenSolaris, etc.) creates the OS, and it all works together to offer a better solution. However, and more importantly for any business, it achieves one more thing – it drives hardware costs lower and creates more competition, while allowing the end user to pick the best hardware (from their point of view) and the best controller based on features, reliability, etc. There is no monopoly, there are plenty of choices, and it’s all great for the end user. It especially makes sense in the networking space, where innovation has been lacking for a while and a few companies have been enjoying huge margins because users had no other choice. [For a more detailed explanation of how economic theory supports this claim of commoditizing hardware and reducing network switching costs, read Rolf's post on the Economics of Open Networking on the Pluribus Networks blog]

One trend that is fanning the fire behind SDN is network virtualization. Both the server and storage sides (hardware and OS) have made good progress on this front, but the network is still far behind. By opening up this space, SDN has allowed people like me (the OS and distributed-systems guys) to step into this world and drive the same innovation on the network side. Thus, it’s not an overstatement to say that Openflow/SDN are great standards for the end user, and people who understand them see the power behind them.

Key Features

Openflow Spec 1.1.2 is just out with minor improvements, while 1.1.1 has been out for a few months. Most vendors have only implemented 1.0.0. If you look at the specs [Ref: the links above], you will see the data structures and message syntax needed for a controller to talk to a device it wants to control. Functionality-wise, it can be grouped under the following categories (understand that I am trying to help people who don’t want to read hundreds of pages of specs):

  • Device discovery and connection establishment: where you tie a controller to the device it wants to control.
  • (Creating the) Flows: In a typical network there are different types of traffic mixed in, and the packets can be grouped together in the form of a flow. If you look at the Layer 2 header, packets for the same VLAN can be a flow, packets belonging to a pair of MAC addresses can be a flow, and so on. Similarly, packets belonging to an IP subnet, or to an IP address plus TCP/UDP port (service), can be termed a flow. Any combination of Layer 2, 3 and 4 headers that allows us to uniquely identify a packet stream on the wire is termed a flow, and the Openflow protocol makes special efforts to specify these flows. An Openflow controller can specify a flow to a switch, which can apply it to specific ports or to all ports, and ask the switch to take special actions when it matches a packet to a flow (see the sketch after this list).
  • Action on matching a flow: As part of specifying the flow, the protocol allows the controller to specify what action to take when a packet matches the flow. The action can range from copying the packet, decrementing the Time to Live, or changing/adding a QoS label, etc. But the most important action (in my view) is the ability to direct the original packet (or a copy) to a specific port or to the controller itself.
  • Flow Table: Where the flows are created. On an actual device this is typically the TCAM, where the flow is instantiated and applied to incoming packets. Most devices are pretty limited here and can typically support only a very small set of flows today. The protocol allows for specifying multiple tables and the ability to pipeline across those tables, but given the state of today’s and mid-term hardware, a single table is all we can work with.
  • The last piece is the counters. Most devices support port-level counters, which the Openflow controllers can read. In addition, the protocol supports flow-level counters, but the current set of devices is very limited there as well.
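
To make the flow/action/counter pieces concrete, here is a minimal toy model in Python. It is purely illustrative: the class names and fields are my own shorthand, not the actual Openflow wire structures, and a real switch would hold the table in TCAM rather than a Python list.

```python
# Illustrative sketch only: a flow match, an action list, and one flow table
# with per-flow counters. Field names are shorthand, not the Openflow format.
from dataclasses import dataclass
from typing import List, Optional

@dataclass(frozen=True)
class FlowMatch:
    # Any L2/L3/L4 field left as None is treated as a wildcard.
    in_port: Optional[int] = None
    vlan_id: Optional[int] = None
    eth_src: Optional[str] = None
    eth_dst: Optional[str] = None
    ip_src: Optional[str] = None
    ip_dst: Optional[str] = None
    tcp_dst: Optional[int] = None

    def matches(self, pkt: dict) -> bool:
        return all(v is None or pkt.get(k) == v
                   for k, v in self.__dict__.items())

@dataclass
class FlowEntry:
    match: FlowMatch
    actions: List[str]        # e.g. ["output:3"], ["copy-to-controller"]
    priority: int = 0
    packets: int = 0          # per-flow counters the controller can read
    bytes: int = 0

class FlowTable:
    """A single table; real devices keep this small (TCAM entries)."""
    def __init__(self) -> None:
        self.entries: List[FlowEntry] = []

    def add(self, entry: FlowEntry) -> None:
        self.entries.append(entry)
        self.entries.sort(key=lambda e: -e.priority)

    def lookup(self, pkt: dict) -> Optional[FlowEntry]:
        for e in self.entries:
            if e.match.matches(pkt):
                e.packets += 1
                e.bytes += pkt.get("len", 0)
                return e
        return None           # table miss: typically punted to the controller

# Example: send all traffic for TCP port 80 on 10.0.0.5 out of switch port 3.
table = FlowTable()
table.add(FlowEntry(FlowMatch(ip_dst="10.0.0.5", tcp_dst=80),
                    actions=["output:3"], priority=100))
print(table.lookup({"ip_dst": "10.0.0.5", "tcp_dst": 80, "len": 1500}))
```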

Putting it all together

Hopefully, now that we understand the components, we can see how it all works together. A controller (which is a piece of code) running on a standard server box starts and discovers a device that it wants to manage. In today’s world, that device is typically an ethernet switch. Once connected, the controller puts the device under its control, sets flows with actions, and reads status from the device.
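
As a rough sketch of that very first exchange, assuming Openflow 1.0 and the conventional controller port 6633, a controller could accept a switch connection and exchange HELLO and FEATURES_REQUEST messages like this (version negotiation details, echo keep-alives, message parsing, and error handling are all omitted for brevity):

```python
# Minimal sketch of controller/switch connection establishment (Openflow 1.0).
import socket
import struct

OFP_VERSION_1_0 = 0x01
OFPT_HELLO = 0
OFPT_FEATURES_REQUEST = 5
OFP_HEADER = struct.Struct("!BBHI")          # version, type, length, xid

def ofp_message(msg_type: int, xid: int, body: bytes = b"") -> bytes:
    """Build a raw Openflow message: 8-byte header plus optional body."""
    return OFP_HEADER.pack(OFP_VERSION_1_0, msg_type,
                           OFP_HEADER.size + len(body), xid) + body

def serve_one_switch(listen_port: int = 6633) -> None:
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind(("0.0.0.0", listen_port))
    srv.listen(1)
    conn, addr = srv.accept()                        # the switch dials in
    conn.sendall(ofp_message(OFPT_HELLO, xid=1))     # say hello
    switch_hello = conn.recv(8)                      # switch's HELLO header
    conn.sendall(ofp_message(OFPT_FEATURES_REQUEST, xid=2))
    features = conn.recv(4096)                       # datapath id, ports, ...
    print("switch", addr, "features reply:", features.hex())
    conn.close()
    srv.close()

if __name__ == "__main__":
    serve_one_switch()
```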

As an example, assume that a user is experimenting with a new Layer 3 protocol. S/he can add a flow that makes the switch redirect all matching packets to the controller, where each packet gets modified appropriately and redirected through a specific egress port on the device. This is much easier to implement because the controller itself is a piece of code running on a standard OS, so adding code to it to do something experimental is pretty straightforward. The most powerful thing here is that the user is not impacting the rest of the network and doesn’t need his/her own dedicated network.
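
The controller-side code for that experiment could look something like the sketch below. The function names are hypothetical placeholders for whatever controller framework is in use: handle_packet_in would be called for each packet the switch punts to the controller, send_packet_out would wrap the Openflow packet-out message, and rewrite_experimental_header stands in for whatever the new protocol needs.

```python
# Illustrative sketch: punt matching packets to the controller, rewrite them,
# and push them back out of a chosen egress port on the switch.
EGRESS_PORT = 7   # hypothetical port where rewritten packets should leave

def rewrite_experimental_header(pkt: bytes) -> bytes:
    # Whatever the experimental Layer 3 protocol needs; here we just stamp
    # one byte after the 14-byte ethernet header as a stand-in.
    return pkt[:14] + b"\xab" + pkt[15:]

def handle_packet_in(pkt: bytes, send_packet_out) -> None:
    # send_packet_out(port, data) is assumed to be supplied by the framework.
    send_packet_out(EGRESS_PORT, rewrite_experimental_header(pkt))

# Example wiring with a stub in place of a real switch connection:
handle_packet_in(b"\x00" * 64, lambda port, data: print(port, len(data)))
```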

My own favorite (one that we have experimented with) is a debugging application for a data center or enterprise, where a user needs to debug his own client/server application. The user could try to capture packets on the multiple machines running his clients and server(s), but the easier thing is to set a flow on the switch based on the server IP address and TCP port (for the service), with an action that sends a copy of all matching packets to the controller along with a timestamp. This allows the user to debug his application much more easily.
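
The controller-side piece of that debugging idea can be as small as the sketch below: copies of the matching packets show up at the controller and simply get timestamped and logged. The on_packet_in hook is hypothetical; a real controller framework would provide the event plumbing and packet parsing.

```python
# Illustrative sketch: timestamp and log every packet copy the switch sends
# to the controller because it matched the debug flow (server IP + TCP port).
import csv
import time

def on_packet_in(log_writer, ingress_port: int, packet_bytes: bytes) -> None:
    log_writer.writerow([time.time(), ingress_port, packet_bytes[:32].hex()])

with open("flow-debug.csv", "w", newline="") as f:
    log = csv.writer(f)
    log.writerow(["timestamp", "ingress_port", "first_bytes"])
    # Example: pretend the switch just copied one packet of the flow to us.
    on_packet_in(log, 3, b"\x52\x54\x00\x12\x34\x56" + b"\x00" * 58)
```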

Again, the important thing to remember is that the power of Openflow and Software Defined Networking is in allowing people to innovate and enabling someone to solve their problem by writing simple code (or using code provided by others). It’s important to keep in mind that a switch is a powerful device, since everything goes through it, and allowing it to be controlled by C, Java, or Perl code empowers it even more. Eventually, the control moves from the switch designer to application developers (to the discomfort of the switch vendors :)

So finally, how does it help Network Virtualization and the Cloud?

This is the reason I am so excited and ended up spending the time to write this blog. The key premise in the world of virtualization is dynamic control of resource utilization. Network utilization and SLAs are important, but the key problem we need to solve is the utilization of servers. The holy grail is a large pool of servers, each running 20-50 virtual machines, controlled by software that optimizes CPU/memory utilization. The key issue here is that the virtual machines are grouped together in terms of the applications they run or the application developer who controls them. To prevent a free-for-all, they are typically tied together with some VLAN and ACLs, and have a network identity in terms of IP/MAC addresses, SLA/QoS, etc. For the controlling software to migrate a VM freely, it needs to manage the VM’s network parameters on the target switch port as well. And this is where the current generation of switches fails.

At present, network switches require human intervention to configure the various network parameters on the switch to match the VM. So for a VM to migrate freely under software control, human intervention is still required on the network side. With Openflow, the software orchestrating server utilization by scheduling the VMs based on policies/SLAs can set the matching network policies without human intervention.

Just the way a typical server OS has a policy-driven scheduler that controls the various application threads across dozens of CPUs (even a low-end dual-socket server has 6 cores per socket, each with multiple hardware threads), Openflow allows us to build a combined server/storage/network scheduler that can optimize VM placement based on configured policies.
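
As a hedged sketch of that combined scheduler idea: when the orchestration software migrates a VM, it also moves the VM's network state (VLAN, QoS class, the flow keyed on its MAC) by removing the flow on the old switch port and installing it on the new one. The Switch class and its install_flow/remove_flow methods below are placeholders for whatever Openflow controller API is actually driving the switches.

```python
# Illustrative sketch: the network follows the VM with no human intervention.
from dataclasses import dataclass

@dataclass
class VMNetProfile:
    mac: str
    vlan: int
    qos_class: str

class Switch:
    """Stand-in for a real switch handle exposed by an Openflow controller."""
    def __init__(self, name: str) -> None:
        self.name = name

    def install_flow(self, port: int, profile: VMNetProfile) -> None:
        print(f"{self.name}: flow for {profile.mac} (vlan {profile.vlan}, "
              f"{profile.qos_class}) -> port {port}")

    def remove_flow(self, port: int, profile: VMNetProfile) -> None:
        print(f"{self.name}: removed flow for {profile.mac} on port {port}")

def migrate_vm(profile: VMNetProfile,
               src: Switch, src_port: int,
               dst: Switch, dst_port: int) -> None:
    dst.install_flow(dst_port, profile)   # pre-provision the target port
    # ... orchestrator live-migrates the VM here ...
    src.remove_flow(src_port, profile)    # clean up the old location

migrate_vm(VMNetProfile("52:54:00:12:34:56", vlan=100, qos_class="gold"),
           Switch("tor-A"), 12, Switch("tor-B"), 7)
```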

Again, Openflow is just a wire protocol and a pseudo-standard, but it allows people like me to add huge value that wasn’t possible before. In the next article, we will go deeper into what people are trying to build and look at some more specific use cases. Stay tuned, and Happy Holidays!!


About the Author

Sunay Tripathi

Sunay is the CTO and a Co-Founder of Pluribus Networks. Prior to Pluribus, Sunay was a Senior Distinguished Engineer for Sun Microsystems, and was the Chief Architect for Kernel/Network Virtualization in Core Solaris OS. Sunay has an extensive 20+ year software background, and was one of the top code contributors to Solaris. Sunay holds over 50 patents encompassing network and server virtualization.