Cisco Lawsuit against Arista – Driven by SDN?

It’s already old news that Cisco filed a patent and copyright infringement lawsuit against Arista. While this is unfortunate, it is pretty understandable. White box and brite box switching are taking significant market share as switching hardware becomes a commodity. That pushes differentiation into software, and Cisco wants to protect its software and management (“Cisco CLI”). This creates a major opportunity for SDN switching software to shine and make inroads into the datacenter and enterprise. But first, it’s useful to understand why people are looking for a change and how SDN switching addresses that desire.

Legacy Switching vs SDN Switching

Obviously there are economic reasons to move to SDN switching on merchant silicon and open architectures, but those are not the primary reason, given that networking spend is a very small portion of overall datacenter spend. Finding the real reason requires digging deeper into applications and the (mostly) Intel chips that run all major datacenter and enterprise applications. These applications have become so complex to install and debug that application vendors are increasingly packaging them inside Virtual Machines (VMs). Fortunately, Intel CPUs have become powerful enough to run several hundred VMs per server. The CIO is really trying to reduce operational spend on managing bare-metal applications (which includes the software lifecycle management that starts with installing and debugging those applications). In the simplest case, if you assume 200 VMs per server, 40 servers per rack, and a POD of 20 racks (a size right in the middle of the leaf/spine architecture sweet spot), you end up with 160K VMs today (and closer to 1M VMs tomorrow as denser racks come online).
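
To make the arithmetic concrete, here is a quick back-of-the-envelope calculation in Python. The densities are the same illustrative assumptions used above; the 6x “tomorrow” multiplier is my own assumption.

```python
# Back-of-the-envelope VM count for a single leaf/spine POD,
# using the illustrative densities from the text above.
vms_per_server = 200       # assumed VM density on a modern Intel server
servers_per_rack = 40      # assumed rack density
racks_per_pod = 20         # POD size in the leaf/spine sweet spot

vms_today = vms_per_server * servers_per_rack * racks_per_pod
print(f"VMs per POD today: {vms_today:,}")          # 160,000

# With denser racks (say ~6x today's density), the same POD approaches 1M VMs.
print(f"VMs per POD tomorrow: {vms_today * 6:,}")   # 960,000
```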

Each VM comes with a MAC address that is used for the Layer 2 switching we have known and loved for the past 30+ years. Layer 2 switching for 160K VMs is already a challenge for the merchant silicon shipping today, and it leaves little headroom for future growth considering that the most powerful switch chips coming out now have Layer 2 tables maxing out at 288K entries. So all of a sudden Layer 2 switching doesn’t work even in a simple POD. Combine this with Layer 2’s weaknesses around loops (STP) and broadcast-based discovery (the Address Resolution Protocol, commonly known as ARP), where every 15-20 minutes each server/VM throws away its known ARP entries and fakes amnesia to deal with stale entries (a very simple tutorial is available from the University of Virginia’s CS458 course: http://www.cs.virginia.edu/~cs458/slides/module06-arpV2.pdf), and you can see that we have a really big problem.
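
To get a feel for just the ARP side of that problem, here is a rough, hedged estimate of the broadcast load such a POD generates. The only inputs are the VM count from above and the 15-20 minute re-ARP interval; the single-request-per-interval assumption is mine and is deliberately conservative.

```python
# Rough, illustrative estimate of ARP broadcast load in a flat Layer 2 POD.
# Assumes each of the 160K endpoints issues just one ARP request per aging
# interval (a conservative lower bound; a busy VM re-ARPs for many peers).
endpoints = 160_000          # VMs in the POD, from the earlier arithmetic
rearp_interval_s = 18 * 60   # assumed midpoint of the 15-20 minute aging window

arp_requests_per_s = endpoints / rearp_interval_s
# Every request is a broadcast, so every endpoint (or its hypervisor) sees it.
deliveries_per_s = arp_requests_per_s * endpoints

print(f"ARP broadcasts per second:    {arp_requests_per_s:,.0f}")   # ~150
print(f"Packet deliveries per second: {deliveries_per_s:,.0f}")     # ~24 million
```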

Obviously, the logical solution is to go Layer 3, which doesn’t suffer from these issues, but then we start getting limited by Layer 3 host tables. Servers have also been designed around Layer 2 since inception (ask any kernel developer – all they know is TCP/IP over Ethernet). Additionally, VMs like to migrate while preserving their IP addresses, which makes it harder (if not impossible) to migrate across Layer 3 boundaries. Enter the world of Layer 2 over Layer 3 tunneling (VXLAN, NVGRE, etc. – a more detailed tutorial from Washington University in St. Louis is available at http://www.cse.wustl.edu/~jain/cse570-13/ftp/m_09avb.pdf). This brings its own set of problems around tunnel creation, management, and performance, and it pushes customers toward more complex, closed, and expensive solutions.
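
To give a flavor of what these tunnels add, below is a minimal sketch of the VXLAN encapsulation defined in RFC 7348: an 8-byte VXLAN header (plus outer UDP, IP, and Ethernet headers) wrapped around the original Layer 2 frame. The helper names are mine, purely for illustration.

```python
import struct

VXLAN_OVERHEAD = 14 + 20 + 8 + 8  # outer Ethernet + outer IPv4 + UDP + VXLAN = 50 bytes

def vxlan_header(vni: int) -> bytes:
    """Build the 8-byte VXLAN header (RFC 7348): flags word with the I bit set,
    then a 24-bit VXLAN Network Identifier (VNI) with a reserved byte."""
    flags = 0x08 << 24                          # I flag set, rest reserved
    return struct.pack("!II", flags, (vni & 0xFFFFFF) << 8)

def encapsulate(inner_frame: bytes, vni: int) -> bytes:
    """Prepend the VXLAN header to the original Layer 2 frame.
    (A real VTEP would also add the outer UDP/IP/Ethernet headers.)"""
    return vxlan_header(vni) + inner_frame

# Every tunneled frame pays ~50 bytes of extra headers, and someone has to
# create, map, and tear down a tunnel for every pair of racks/VTEPs involved.
print(f"Per-packet overhead: {VXLAN_OVERHEAD} bytes")
```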

The figure below shows the decision state of a customer who starts from the economic argument for switching hardware but ends up with complex, proprietary, and expensive software.

In addition, as applications become more complex and virtualized, a lot of the traffic and activity is VM to VM (so-called East-West traffic), so traditional services like firewalls that used to sit in front of the datacenter have become fairly useless. Putting these services at the WAN boundary of the datacenter is like saying we only need police at the city limits of New York when the bulk of the citizens and miscreants are inside the city and never cross its boundaries.

Based on our conversations with enterprises and datacenter operators, we get the sense that the legacy networking the legacy vendors are trying to protect doesn’t even address today’s needs (forget about tomorrow’s). This was evident at ONUG in October 2014 in New York, where I sat on the Switch Overlay Panel as an industry expert.

The top ONUG findings were (in priority order):

  • End-to-End Monitoring
  • Control Plane Scale and Quick Convergence: 100k end points
  • Correlation of Overlay with Underlay State and Performance

What!! This is the demand letter from the top banks on Wall Street. A few years back, the demand letter would have been full of three-letter acronyms, much like a decade ago when we were still trying to add different knobs to TCP and IP or coming up with things like UNIX 98 and the POSIX standards. Legacy networking has become just that – a standard – and no one is looking for new protocols. The biggest protocol body, the IETF, is a lonely place these days (or so I have been told by a friend; I haven’t been to any meetings in a long time).

So the bulk of the action moves to the new world of SDN switching, which actually addresses the problems we face today. Given its inherently open architecture (built from open source) and its economic advantage, it is understandable that the world of legacy networking keeps shrinking – and who gets which part of that shrinking pie typically gets resolved by lawsuits.

Ingredients of SDN Architecture

Given that the problem space has moved from a box view to a global view – one that allows virtualization to thrive and lets the CIO reduce operating spend while meeting visibility and security needs – there was no point starting with legacy networking. The SDN focus is the network, not the individual switch; virtual ports, not physical ports; virtual wires, not physical links.

In addition, the new generation of switch hardware brought two fundamental changes: the switch control bus moved to PCIe Gen2, and the control CPUs became Intel CPUs with 4-8GB of memory (it is now more expensive to buy a 1GB DIMM than a 2GB DIMM). This allowed us to get out of the world of embedded systems and adopt open source server operating systems to build SDN switching solutions. It also allowed people like me, who have worked on real kernels all our lives, to enter the world of switching and open the switches up for applications and virtualization at scale.

A quick look at the ingredients for an SDN switch OS:

  • More powerful converged switch platforms based on merchant silicon that are cheaper than legacy switches.
  • A full ecosystem of open source OSs, from Linux/illumos/BSD to virtualization technologies like KVM/bhyve/containers/Docker.
  • A range of open source routing protocol suites such as Quagga and BIRD.

The picture above contrasts the world of the legacy switch and legacy switch OS with a modern OCP-compliant switch and a server-class OS driving the switch chip. A brite box version (new Gartner terminology for a branded white box) adds even more compute, memory, and PCIe-based storage to create a very powerful hyper-converged appliance that Netvisor uses to run even more powerful applications.

The Architectural DNA of Netvisor – A Powerful & Open Switch Hypervisor

Much of the SDN world is filled with technical terms, ranging from controllers, data and control plane separation, and control plane performance to overlay orchestration. People often fail to understand why these widgets are important. From a CIO’s perspective, he or she is looking to solve the following problems (in rough priority order):

  1. Make the Infrastructure more Economical to deploy (the CAPEX argument)
  2. Make the Infrastructure simpler to manage, scale and debug (the OPEX argument)
  3. Make the Infrastructure a profit center (add to the Top line) by allowing rapid deployment and management of the applications that make money

Most switch OSs are designed to address the economic argument by supporting merchant silicon based switches. This lets customers buy networking hardware just like server hardware, and since more people are buying the same hardware, volume drives the cost lower. Control and data plane separation allows the switch OS to be delivered as pure software that runs on arbitrary hardware. This is probably the most widely understood argument for the power of SDN, and it is why, architecturally, an SDN switch OS is very different from a legacy switch OS, which is designed with custom hardware in mind and shares little with the server world.

The term controller describes an arrangement where a single device provides a single point of management for multiple switches, simplifying infrastructure management. There are many different ways to implement controllers on top of switch OSs, but Netvisor is both a distributed operating system and a bare-metal controller, sharing state and configuration using TCP/IP based protocols (open and standards-based, leveraging open software from the server clustering world). It is a plug-and-play architecture where you add and remove switches as you need them, and since it is standards based, it can work with legacy switches, thus supporting brownfield deployments.
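
The post doesn’t spell out Netvisor’s wire protocol, so the sketch below only illustrates the general idea of a distributed, controller-less fabric: a configuration change accepted on any switch is applied locally and replicated to its peers over plain TCP. All names and the message format are hypothetical.

```python
import json
import socket

PEERS = [("switch-2.example", 9000), ("switch-3.example", 9000)]  # hypothetical fabric members

fabric_state = {}  # every node keeps a full copy of the fabric configuration

def apply_and_replicate(key: str, value: str) -> None:
    """Apply a configuration change locally, then push it to every peer
    over plain TCP so all nodes converge on the same global view."""
    fabric_state[key] = value
    msg = json.dumps({"key": key, "value": value}).encode()
    for host, port in PEERS:
        try:
            with socket.create_connection((host, port), timeout=2) as s:
                s.sendall(msg)
        except OSError:
            pass  # a real fabric would queue and retry; this is only a sketch

# Issued on any switch, the change shows up fabric-wide.
apply_and_replicate("vlan.100.name", "web-tier")
```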

Netvisor also uses the L2/L3/encap/decap/flow tables as caches while the bigger tables stay in software. All modern switches have quad-core control processors and 4-16GB of RAM to support full-fledged server OSs. This is where control plane performance (defined as control plane IOPS) matters: it supports software-based learning, so you can have a distributed but global view, which in turn allows ARP suppression and broadcast-free Layer 2 domains. To add to the simplicity argument, for customers who need large Layer 2 networks in virtualized environments, Netvisor copes well with broadcast storms and with STP shutting down random ports, and does so right out of the box with a full CLI/UI/API (C/Java/REST). For people who still want to use tunnels to connect their racks, Netvisor supports switch-to-switch VXLAN tunnels with encapsulation and decapsulation done in hardware, yielding line-rate performance without any penalty. Since Netvisor has global visibility of each virtual port (and VM), the tunnels are created on demand. In short, Netvisor has a built-in control plane and orchestration engine (hardware offloaded) to support a variety of merchant silicon based switches, enabling a network that is easier to manage, scale, and debug.
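
As an illustration of what software-based learning with a global view buys you, here is a minimal sketch of ARP suppression: when the fabric already knows the IP-to-MAC binding of every virtual port, an ARP request can be answered locally instead of being broadcast. This is a generic sketch with made-up names, not Netvisor’s actual implementation.

```python
# Minimal sketch of ARP suppression with a global endpoint table.
# global_arp_table would be populated by software-based learning across the
# fabric; names and addresses here are illustrative only.
global_arp_table = {
    "10.0.1.15": "52:54:00:aa:bb:01",
    "10.0.1.16": "52:54:00:aa:bb:02",
}

def handle_arp_request(target_ip: str):
    """Answer an ARP request from the global table when possible.
    Only a genuine miss would ever need to leave the switch, so the
    Layer 2 domain stays (nearly) broadcast-free."""
    mac = global_arp_table.get(target_ip)
    if mac is not None:
        return ("reply", mac)       # proxy-reply directly to the requester
    return ("miss", None)           # unknown endpoint: fall back and learn

print(handle_arp_request("10.0.1.15"))  # ('reply', '52:54:00:aa:bb:01')
print(handle_arp_request("10.0.9.99"))  # ('miss', None)
```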

Modern switch chips have a control plane that runs over PCIe and also sport a sizable TCAM. One of the unique advantages of a TCAM is its ability to match on wildcard fields in the packet header and take actions like redirect-to-CPU or copy-to-CPU while switching packets at multiple terabits per second. Also, given that most modern switches are server-switches (multi-core CPUs with 4-16GB of RAM), Netvisor’s multi-threaded and very powerful control plane keeps track of every application flow in the infrastructure and provides built-in applications for end-to-end monitoring, security, and analytics that help correlate the physical and virtual networks with the end applications. Some of the biggest Wall Street demands for next generation networking!!
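
For readers less familiar with TCAMs, the value/mask style of wildcard matching (and actions like redirect-to-CPU or copy-to-CPU) can be modeled in a few lines of Python; real hardware evaluates every entry in parallel at line rate, which is what makes per-flow visibility affordable. The rules below are purely illustrative.

```python
# Toy model of TCAM-style wildcard matching: each entry is (value, mask, action).
# A packet field matches an entry when (field & mask) == (value & mask).
# A real TCAM evaluates every entry in parallel in hardware at line rate.
from typing import List, Tuple

# (dst_ip_value, dst_ip_mask, action) -- purely illustrative rules
TCAM_RULES: List[Tuple[int, int, str]] = [
    (0x0A000100, 0xFFFFFF00, "copy-to-CPU"),   # 10.0.1.0/24: mirror flow to control plane
    (0x00000000, 0x00000000, "forward"),       # wildcard catch-all: just switch it
]

def lookup(dst_ip: int) -> str:
    """Return the action of the first (highest priority) matching entry."""
    for value, mask, action in TCAM_RULES:
        if dst_ip & mask == value & mask:
            return action
    return "drop"

print(lookup(0x0A000105))  # 10.0.1.5 -> 'copy-to-CPU'
print(lookup(0x0A020305))  # 10.2.3.5 -> 'forward'
```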

Conclusion

It is obvious that applications drive the business, and much of the infrastructure in place today that does not meet the needs of the applications running on it should be considered legacy. SDN switching is designed from the ground up to support modern applications and virtualization technology at scale while keeping things simple and economical. Modern switch OSs and switch hypervisors open the switching platform to more powerful applications related to network services, end-to-end monitoring, security, and analytics. To use an old cliché, this is a “win-win” for the CIO, who no longer has to make hard choices between the network he or she really wants and the one he or she can actually afford.



About the Author

Sunay Tripathi

Sunay is the CTO and a Co-Founder of Pluribus Networks. Prior to Pluribus, Sunay was a Senior Distinguished Engineer for Sun Microsystems, and was the Chief Architect for Kernel/Network Virtualization in Core Solaris OS. Sunay has an extensive 20+ year software background, and was one of the top code contributors to Solaris. Sunay holds over 50 patents encompassing network and server virtualization.