Securing Your Data, One Next-Generation Data Center at a Time


The IT industry is going through a fundamental shift in how data centers are managed and operated. If you’re an avid reader of the Pluribus Networks blog, you’ve seen our take on the approaches in use today, as well as where we are focusing our efforts and our vision. One area that continues to gain attention, no doubt because some mega-companies have been breached and had intrusions linger undetected (e.g., Sony, Target), is data center network security.

Before every one of my meetings, I make it a point to ask what my customers are trying to get out of our time together. This is a great best practice to make sure we are all on the same page, but it is also a phenomenal way to understand what their most crucial requirements are.

Over the past six months, security has moved from a “nice to have” to one of the top two priorities in every one of these meetings. Here’s what I’ve learned about what security means to companies large and small, along with some suggestions if security is a top requirement for your data center.

There have been countless stories of security breaches at very well-known enterprises, with thousands of customer records compromised. Today’s reality is asking “Have I been compromised?” rather than “Will I be compromised?” It sounds morbid, but I challenge you to find any CSO who isn’t constantly worried about how to mitigate these risks. In fact, they usually ask, “What else could I have done to enhance my security profile?” Security really is a multi-tiered process, not a one-stop purchase. You must first understand your network traffic, then be able to operate on it and apply policy ubiquitously from a simplified point of management.

In the traditional sense of how networks were designed, this was not easy to do. Not impossible, but difficult and extremely expensive. We did our best, with acceptable results, using trust zones and DMZs. If you had particularly valuable data or sections of the network you wanted to protect, you would set up a separate monitoring fabric and dedicated monitoring appliances to sample, record, and analyze traffic, at a high price point. You might have invested in NetFlow, but then you still needed more hardware to analyze that data. Even then, you were only seeing a small portion of the traffic in your network. Gaps were everywhere, and it was very possible to miss an intrusion event no matter how robust your monitoring approach was.

This type of approach worked well when applications were static, one- or two-tiered, and application traffic primarily crossed a trust boundary in a north-south direction, in and out of a data center. You would simply go to your favorite firewall provider and purchase a (really expensive) bare-metal appliance to install in the path where you wanted a trust boundary created. You installed some ACLs and hoped that ACL bloat wouldn’t overrun the memory on the firewall. (Be honest, you know you never wanted to be the person who had to delete an ACL from a de-provisioned application!) You could then physically segment your network, control traffic in or out, and report on that important traffic with a separate monitoring infrastructure. Not easy to deploy and by no means easy to keep running and up to speed. And that’s before considering what happens when you change interface speeds or your internet connection has to scale to more traffic. In both cases, that meant spending more money for the same basic functionality.
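The ACL model described above can be sketched as a simple ordered rule list, evaluated top-down with first-match-wins semantics and an implicit deny at the end. This is a minimal, hypothetical illustration, not any vendor’s actual syntax; the networks and ports are invented for the example:

```python
import ipaddress

# A hypothetical, minimal model of a firewall ACL: an ordered list of
# rules evaluated top-down, first match wins, implicit deny at the end.
ACL = [
    # (action, source network, destination network, destination port)
    ("permit", "10.1.0.0/16", "192.168.10.0/24", 443),   # web tier
    ("permit", "10.1.0.0/16", "192.168.20.0/24", 1433),  # DB tier
    ("deny",   "0.0.0.0/0",   "192.168.0.0/16",  None),  # everything else
]

def check(src, dst, port):
    """Return the action of the first matching rule ('deny' if none match)."""
    for action, src_net, dst_net, rule_port in ACL:
        if (ipaddress.ip_address(src) in ipaddress.ip_network(src_net)
                and ipaddress.ip_address(dst) in ipaddress.ip_network(dst_net)
                and (rule_port is None or rule_port == port)):
            return action
    return "deny"  # implicit deny
```

The bloat problem follows directly from this structure: every de-provisioned application leaves rules like these behind, and because ordering matters, nobody wants to be the one who deletes an entry and accidentally changes which rule matches next.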


Ripe for a change, don’t you think?

Today, the world is different. Workloads are virtualized and everywhere, often stretching across multiple data center sites. Network architects are extending Layer 2 adjacencies everywhere with VXLAN and even bridging applications to the cloud for extra scale or performance. It is impossible to continue legacy security practices with such a dynamic and demanding workload structure. If you’re like any of the customers I’ve spoken to over the past few months, you may also be looking into a next-generation data center solution to help alleviate these pains. You might not normally put data center planning and security into the same bucket (in fact, today they are still very different disciplines), but with a little careful planning and research, you can find a lot of value in a solution that addresses both.
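Why does VXLAN make those Layer 2 adjacencies so easy to stretch? Each L2 frame is wrapped in an 8-byte VXLAN header carrying a 24-bit Virtual Network Identifier (about 16 million segments versus 4094 VLANs) and tunneled over UDP between sites. A minimal sketch of that encapsulation, per RFC 7348:

```python
import struct

VXLAN_UDP_PORT = 4789  # IANA-assigned VXLAN destination port (RFC 7348)

def vxlan_encap(inner_frame: bytes, vni: int) -> bytes:
    """Prefix an L2 frame with an 8-byte VXLAN header (RFC 7348).

    Header layout: flags byte (0x08 = valid-VNI bit set), 24 reserved
    bits, 24-bit VNI, 8 reserved bits. The result is carried in a UDP
    datagram to port 4789 between tunnel endpoints (VTEPs).
    """
    if not 0 <= vni < 2**24:
        raise ValueError("VNI must fit in 24 bits")
    # First 4 bytes: flags + reserved; last 4 bytes: VNI in the top 3 bytes.
    header = struct.pack("!BBBB", 0x08, 0, 0, 0) + struct.pack("!I", vni << 8)
    return header + inner_frame
```

The security point follows from the design: because the original frame rides inside ordinary UDP, a workload’s L2 segment can surface anywhere a tunnel endpoint exists, which is exactly why a physically placed firewall can no longer see every trust boundary.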

How does one unify a security policy across a massive number of data center nodes and ensure compliance with it? This might be simple across two or three compute pods, but things start to get a little crazy once an application spans boundaries that are constantly changing. If you’ve been doing some research, buzzwords like micro-segmentation and policy provisioning might come to mind, but what do they really mean? How should security play a role in your next data center?

Allow me to share what I’ve learned from six months of conversations with some of the country’s largest and smallest organizations. Regardless of size, companies that take security seriously all want to bring each of these components to their next design.

Any security architecture should make it easy to:

  1. Integrate any existing investments in third-party security provider solutions. There should be no reason to rip and replace a firewall, traffic analyzer or load balancer just because you are upgrading your network. All of these should work together regardless of your choice of vendor.
  2. Track every endpoint, bare-metal or virtualized, to provide monitoring and logging to show precisely where a highly mobile virtual machine was during any event. This snapshot should be very easy to obtain.
  3. Allow segmentation at any level (macro or micro) and let you control where these segmentation boundaries sit, without restricting where a workload resides to obtain this functionality.
  4. Provide deep visibility and telemetry into each application and its associated data flows throughout your data center(s). Sampled data and summarized trends are great, but all it takes is ONE file-transfer flow to compromise your trust boundary.
  5. Allow for full auditing, reporting and trending analysis. It should be very simple to verify what policies are in place, report on them and allow you to set a baseline of what normal behavior is.
  6. Enable full control of these flows and allow various actions to be taken on each, depending on a defined policy, at the local pod level or across the entire fabric. It should also be simple to add, remove and modify these policies. If needed, it should be easy to record flows that match policy so they can be audited and replayed at a later date.
  7. And finally, this architecture should be easy to manage and allow for higher-level orchestration tools to integrate its components seamlessly into a provisioning workflow.
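The policy-related items above (tracking, visibility, flow control and auditing) can be sketched as a tiny flow-policy engine. All names here are hypothetical, a minimal illustration of the model rather than any product’s actual API:

```python
from dataclasses import dataclass, field

@dataclass(frozen=True)
class Flow:
    """A single application flow observed in the fabric (item 4)."""
    src: str
    dst: str
    port: int

@dataclass
class PolicyEngine:
    # Ordered (name, predicate, action) triples; action is e.g. permit/deny/mirror.
    policies: list = field(default_factory=list)
    audit_log: list = field(default_factory=list)  # every decision kept (item 5)

    def add_policy(self, name, predicate, action):
        """Adding (or removing) a policy should be this simple (item 6)."""
        self.policies.append((name, predicate, action))

    def classify(self, flow):
        """Return the action for a flow and record the decision for audit."""
        for name, predicate, action in self.policies:
            if predicate(flow):
                self.audit_log.append((flow, name, action))
                return action
        self.audit_log.append((flow, "default", "permit"))
        return "permit"

# Example: deny FTP control traffic fabric-wide, permit everything else.
engine = PolicyEngine()
engine.add_policy("block-ftp", lambda f: f.port == 21, "deny")
```

The design choice worth noting is the audit log: because every decision is recorded alongside the policy that produced it, verifying what is in place and baselining normal behavior (item 5) falls out of the same mechanism that enforces policy, rather than requiring a separate monitoring infrastructure.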

Security shouldn’t be an afterthought; it should be a day-one consideration, a core component and foundation of an end-to-end architecture that secures the data that matters most.

For more information on all of these features, and to see if the Pluribus Networks architecture based on our Virtualization-Centric Fabric (VCF™) security vision aligns with your organization’s needs, please visit us or email me directly. I’d love to hear from you!

Solutions from Pluribus Networks: East/West Traffic Security, Accelerating Big Data Applications, Manage VDI Performance, Converged Infrastructure Optimization, Network Performance Monitoring





About the Author

Jonathan Cornell

As Principal Architect at Pluribus Networks, Jonathan is tasked with driving industry recognition and technical adoption of software-centric approaches to solving traditional networking problems using integrated application-flow visibility, a single point of management and flow-based policy models. Jonathan's background includes previous roles in product management and systems engineering, specializing in server virtualization, data center networking and storage switching solutions. He has advanced sales and hands-on technical experience with the entire Cisco portfolio of Nexus switching, MDS switching, UCS and ACI product families, in addition to VMware vCenter and ESXi environments. His prior engagements have focused extensively on designing application-centric fabrics, ultra-low-latency financial trading networks, and multi-site, mission-critical data centers for well-known enterprises.