Netvisor: the Pluribus Network Hypervisor Deconstructed

Netvisor – Type 1 Distributed Switch Hypervisor

The question often comes up: what exactly is Netvisor? Let's deconstruct it to get some idea:

  • Netvisor is a Type 1 bare metal distributed switch hypervisor
  • Purpose built to support the current generation of physical switches
  • Supports applications that can take advantage of running in-network
  • Heavily incorporates kernel and software components from many open source projects
  • Unique kernel code and C libraries – nvOS – which implement unique features like multi-threaded vflow, analytics, and the cluster-fabric


A Bare Metal Distributed Switch Hypervisor

Netvisor is a best-of-breed server OS that draws from multiple open source projects but also implements a very high performance, multi-threaded control plane over the physical switch chip. Coupled with its built-in cluster-fabric support, it allows hot-plugging of switch nodes to provide high availability and fabric-wide switching, routing, and flow tables that use the H/W tables as a cache, helping virtualize the network.
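The "H/W tables as cache" idea can be sketched in a few lines. This is an illustrative model, not Netvisor code: a complete software flow table stands in for the fabric-wide state, and a small capacity-limited table stands in for the switch chip's TCAM, populated on demand with LRU eviction.

```python
from collections import OrderedDict

class FlowTable:
    """Full software table backed by a capacity-limited hardware cache.

    Illustrative sketch only: the 'hardware' dict stands in for the
    switch chip's limited TCAM; the 'software' dict for fabric-wide state.
    """
    def __init__(self, hw_capacity):
        self.software = {}                 # complete fabric-wide state
        self.hardware = OrderedDict()      # stands in for the chip's TCAM
        self.hw_capacity = hw_capacity

    def add(self, match, action):
        self.software[match] = action

    def lookup(self, match):
        # Fast path: entry already programmed into hardware.
        if match in self.hardware:
            self.hardware.move_to_end(match)   # LRU bookkeeping
            return self.hardware[match]
        # Slow path: resolve in software, then install into hardware,
        # evicting the least recently used entry if the table is full.
        action = self.software.get(match)
        if action is not None:
            if len(self.hardware) >= self.hw_capacity:
                self.hardware.popitem(last=False)
            self.hardware[match] = action
        return action
```

The point of the pattern is that the software table can hold far more entries than the chip, while hot flows still hit hardware-speed forwarding.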

Purpose Built to Run on Physical Switches

Unlike most network controllers that run on servers, Netvisor is explicitly designed to run bare metal on a switch chip that is memory mapped into the Netvisor kernel over PCIe. Fine-grained decisions are made by reading and writing registers with simple PIO reads/writes instead of sending control packets to external controllers. The new generation of switch chips serves 1.2 to 1.8Tbps of bandwidth with sub-microsecond latency. As network speeds in the data center grow from 10Gbps to 40/100Gbps, Netvisor is designed to make sub-microsecond decisions that involve multiple switches, allowing the entire network to work as a giant resource pool that can be programmed and virtualized just like a server.
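The register-access style described above can be sketched as follows. This is a hedged illustration, not real driver code: an anonymous memory map stands in for the PCIe BAR, and the register offset is invented; real code would map the device's PCIe resource and use the chip's documented register layout.

```python
import mmap
import struct

BAR_SIZE = 4096          # pretend the chip's BAR0 is one page
REG_PORT_STATUS = 0x10   # hypothetical 32-bit register offset

# An anonymous, zero-filled mapping stands in for the memory-mapped chip.
bar = mmap.mmap(-1, BAR_SIZE)

def reg_write32(offset, value):
    """One PIO-style store: no control packet, no external controller."""
    bar[offset:offset + 4] = struct.pack("<I", value)

def reg_read32(offset):
    """One PIO-style load from the mapped register space."""
    return struct.unpack("<I", bar[offset:offset + 4])[0]

reg_write32(REG_PORT_STATUS, 0x1)      # e.g. bring a port "up"
status = reg_read32(REG_PORT_STATUS)
```

The contrast with controller-based designs is that each decision here is an ordinary load or store into mapped memory, not a round trip over the network.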

Support for In-Network Applications

A large number of applications today want to run on the switch itself and take advantage of having a programmable switch underneath them. Because these switches have Layer 2/Layer 3 tables and TCAMs that can be programmed, Netvisor provides fabric-wide switching/routing tables and flows that use the H/W as a cache, and allows applications like virtual routers, virtual load balancers, virtual firewalls, IDS/DDoS engines, orchestration engines, controllers, analytics engines, auditing engines, SSL gateways, etc. to run on the switch itself without any dependency on the underlying hardware, while still taking advantage of the H/W offload provided by Netvisor (using the underlying switch chip). Applications can be written in C, Java, Perl, or Python and are binary compatible across different switch chips and platforms.
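The shape of such an in-network application can be sketched briefly. The `FabricAPI` class below is hypothetical, not the real Netvisor library; it only illustrates the division of labor the section describes: the app programs flows fabric-wide through one API, and the hypervisor, not the app, deals with the hardware underneath.

```python
class FabricAPI:
    """Hypothetical stand-in for a fabric-wide flow API exposed by the
    hypervisor; any H/W offload would happen below this interface."""
    def __init__(self):
        self.flows = []

    def add_flow(self, src_ip, dst_ip, action):
        self.flows.append({"src": src_ip, "dst": dst_ip, "action": action})


class TinyFirewall:
    """A minimal virtual-firewall app written against the fabric API."""
    def __init__(self, fabric):
        self.fabric = fabric

    def block(self, src_ip):
        # One call installs the rule fabric-wide; the app never programs
        # individual switch chips and so stays chip- and platform-agnostic.
        self.fabric.add_flow(src_ip, "any", "drop")
```

Because the app only ever talks to the API, the same binary can run over different switch chips, which is the portability claim the section makes.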

Heavily Incorporates Open Source

Netvisor is a full Type 1 switch hypervisor that allows users to boot any Linux distro to deploy their applications and use switch and cluster-fabric APIs to control and program the network. Netvisor uses KVM and Open vSwitch underneath, along with the Quagga routing suite. The platform support comes from BSD, while it incorporates ZFS and Crossbow technologies from OpenSolaris. About 80% of Netvisor is built from open source technologies.

nvOS – the Kernel

The Netvisor kernel has everything a server kernel has, along with additional support for Layer 2/Layer 3 switching in the form of fabric-wide switching, routing, and flow tables, plus multi-threaded C and Java APIs to program them. It also has protocol daemons, a CLI, and a UI that provide a full Layer 2/3 switching experience.

Unique Features: vflow, Analytics, and Cluster-Fabric

While the switch-side APIs are available to any application, Netvisor bundles applications that track physical servers, virtual machines running on the servers connected to the switch, congestion analytics, and application flows, with the ability to manipulate those flows across the cluster-fabric.


Netvisor runs on any switching platform built using Broadcom Trident or Intel Alta series switch chips. For the control processor, it ranges from low-end processors like the Intel Rangeley chip with a few gigabytes of memory and small flash storage, up to Facebook-style server-switch platforms (pioneered by Pluribus Networks in 2012 but mainstream now). At the high end, Netvisor runs on dual-socket Intel Ivy Bridge class processors with up to 512GB of main memory and 12TB of PCIe-based flash, allowing it to serve network, storage, and virtualized services for an entire rack.

Northbound API

Apart from supporting C and Java APIs that allow users and applications to run bare metal on Netvisor, it exports OpenFlow and OVSDB-style APIs that allow the network to be controlled via third-party controllers. Netvisor also has Neutron and Cinder plugins and OpenStack Horizon extensions that allow the virtual networks, analytics, services, etc. to be managed via the standard OpenStack Horizon GUI and third-party controllers from Red Hat, Oracle, etc.
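To make the OVSDB-style northbound path concrete, here is a hedged sketch of the wire format a third-party controller would speak. It builds a JSON-RPC "transact" message in the shape defined by the OVSDB management protocol (RFC 7047); the database, table, and row contents are illustrative, and nothing here is Netvisor-specific code.

```python
import json

def make_transact(db, table, row, request_id=0):
    """Build an OVSDB-style JSON-RPC 'transact' message inserting one row."""
    return json.dumps({
        "method": "transact",
        "params": [db, {"op": "insert", "table": table, "row": row}],
        "id": request_id,
    })

# Illustrative: a controller asking the switch to create a bridge "br0".
msg = make_transact("Open_vSwitch", "Bridge", {"name": "br0"})
```

In a real deployment the controller would send this over a TCP or SSL session to the switch's OVSDB endpoint and match the `id` field against the response.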

About the Author

Sunay Tripathi

Sunay is the CTO and a Co-Founder of Pluribus Networks. Prior to Pluribus, Sunay was a Senior Distinguished Engineer at Sun Microsystems and the Chief Architect for kernel/network virtualization in the core Solaris OS. Sunay has an extensive 20+ year software background and was one of the top code contributors to Solaris. Sunay holds over 50 patents encompassing network and server virtualization.