Tom Hollingsworth (@networkingnerd) recently wrote “Why Facebook Wedge is Revolutionary” in Network Computing. Good piece.
We strongly agree with much of what he says – for example, “[Wedge] shows that hardware doesn't need to come from an established networking hardware vendor to be useful in the datacenter.”
He also points out that a couple of important parts of the Wedge architecture are game changers: one is the inclusion of meaningful compute, and the other is that having an OS based on *nix opens doors that have, until now, been closed.
At Pluribus we were delighted to see the Facebook announcement of Wedge in June, but then we drilled down a bit. They are certainly on the right track, but we think we are just a bit closer to what the vision should be.
Case in point: compute. While they are using relatively low-spec, embedded-class CPUs, we are using honest-to-God server-class Xeons – depending on the box, either one or two, with either 6 or 8 cores each (yielding 6 to 16 cores per box). That is a lot of CPU (which we bundle with significantly more RAM and storage), and it means you can do a lot more with the Pluribus approach: both the F64 Network Computing Appliance and the E68 Server Switch are spec’ed significantly higher than Wedge. Need to host network services like PXE boot, or even some L4-7 services like load balancing, firewalls, or whatever? We can do that.
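To make the L4 idea concrete, here is a toy sketch of the round-robin backend selection at the heart of a simple load balancer – the kind of service that switch-local compute leaves headroom for. All names and addresses here are hypothetical illustrations, not Pluribus or Wedge code:

```python
import itertools

class RoundRobinBalancer:
    """Toy L4-style round-robin backend selector (illustrative only)."""

    def __init__(self, backends):
        # backends: list of (ip, port) tuples for the server pool
        self._cycle = itertools.cycle(list(backends))

    def pick(self):
        # Each new connection is handed the next backend in rotation.
        return next(self._cycle)

# Hypothetical pool of two web servers behind the switch.
pool = RoundRobinBalancer([("10.0.0.1", 80), ("10.0.0.2", 80)])
```

A real service would wrap this in a TCP proxy loop and add health checks, but the point stands: with server-class cores and RAM on the switch itself, this class of workload can live in the network rather than on a separate appliance.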
We agree with the basic ideas behind their OS, FBOSS, but again think our approach with Netvisor – the Pluribus Network OS and Network Hypervisor – is in the sweet spot. Netvisor, like FBOSS, makes good use of open source mixed with some secret sauce. In our case, there are about 2 million lines of code worth of secret sauce – if you are going to blaze new trails, sometimes you need to swing a machete. Or vi, or Emacs, or whatever.
An important distinction, unique to Netvisor, is that the entire Pluribus network can be managed as a single entity, because we enable something we call a cluster-fabric. For you server guys, think of a cluster-fabric as much like a network fabric, but implemented with tried-and-true server clustering techniques (all it needs is IP connectivity and, in some cases, multicast for discovery) – three-phase commit and all.
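For readers who haven't bumped into it, three-phase commit is the classic clustering protocol where a coordinator walks every participant through a vote, a prepare step, and only then the actual commit, so no single failure leaves the cluster half-changed. Here is a minimal, in-memory sketch of the idea – hypothetical class and method names, not Netvisor's actual implementation:

```python
# Illustrative three-phase commit: a coordinator drives all participants
# through can_commit -> pre_commit -> do_commit, aborting everywhere if
# any participant votes no. Names are hypothetical, not Pluribus code.

class Participant:
    def __init__(self, name, healthy=True):
        self.name = name
        self.healthy = healthy
        self.state = "init"

    def can_commit(self):
        # Phase 1: vote yes/no on whether this node can apply the change.
        self.state = "voted" if self.healthy else "aborted"
        return self.healthy

    def pre_commit(self):
        # Phase 2: everyone learns the outcome before anyone commits.
        self.state = "prepared"

    def do_commit(self):
        # Phase 3: apply the change for real.
        self.state = "committed"

    def abort(self):
        self.state = "aborted"


def three_phase_commit(participants):
    # Phase 1: collect votes; a single "no" aborts the whole transaction.
    if not all(p.can_commit() for p in participants):
        for p in participants:
            p.abort()
        return False
    # Phase 2: the extra "prepared" round is what distinguishes 3PC from
    # 2PC – a recovering node can finish without blocking indefinitely.
    for p in participants:
        p.pre_commit()
    # Phase 3: commit everywhere.
    for p in participants:
        p.do_commit()
    return True
```

The payoff for a cluster-fabric is exactly this all-or-nothing property: a configuration change either lands on every switch in the fabric or on none of them.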
We are also blazing some trails in the world of network virtualization. Yes, some will serve you up a Type 2 hypervisor and tell you that is all you need, but when the rubber hits the road you really don’t want to (or can’t afford to) pay that Type 2 performance tax, and that is where Netvisor (Network Hypervisor, get it?) comes in. We let you run bare metal (Type 1): you get full performance from your hardware, not the partial performance of a Type 2 hypervisor running on top of someone else’s OS. If you are having trouble wrapping your head around the difference, picture running PC games natively on a PC versus running them on a Mac under Parallels. Don’t get me wrong, Parallels is wonderful for many things, but high-performance gaming simply isn’t one of them. The same thing applies to Type 2 network hypervisors.
Anyway, it is great to see Facebook, a real leader in the data center (a quick look at their PUE numbers should drive that point home), pushing some innovation and progress in networking. It is even better when the new horizons of some really sharp, clueful people are places where you have already been for a couple of years.
Thanks for reading.
About the Author
Matt Bushell is the Sr. Director of Product Marketing for Pluribus Networks. Immediately prior, he spent three years at Nlyte Software, a data center infrastructure management provider, most recently as Director of Product and Corporate Marketing. Before Nlyte, Matt worked at IBM for more than ten years, helping launch multiple products in its Information Management and SMB groups, including DB2 10. Matt has an engineering degree from Northwestern University and an MBA from the Kellogg School of Management.