The IT industry has grown up carefully balancing the delivery of new technical capabilities against the need to monitor them. It is a delicate balancing act: tangible new capabilities are the exciting part of the story (the star of the show), and delivering broad new services based on those capabilities is where IT organizations invest the majority of their time. Monitoring has tended to be part of the “paperwork” (the supporting actor) and has been approached with a low level of priority and effort. That was a defensible position, given the high cost and complexity of traditional monitoring.
In many cases, the perceived need for monitoring simply hasn’t risen to a level that demanded urgent action. As a result, the vast majority of Fortune 1000 companies today are ill-prepared to leverage any sort of monitoring, at the very moment they want to aggressively deploy new applications like VDI, Big Data, IoT and Converged Infrastructure. And when they do start deploying those applications, they quickly realize that monitoring has become a critical success factor.
All of this is changing. IT professionals are realizing that as hyper-converged infrastructures, virtual desktops and big data applications are deployed in their world, the performance of every portion of the technology matters to the end-user experience. The network has rapidly become a highly relevant part of the story, and yet visibility into its operation is limited at best, and still non-existent in many cases.
So the successful execution of modern business initiatives depends on having the needed insight into each piece of the pie. The storage elements, the compute engines and even the network transports are all critical to the overall success of modern applications and the end-user experience. IT must align its technical efforts with delivering business services in a predictable and manageable fashion. This is not a technical challenge, but a business one.
That is where the urgent need for monitoring comes into play, at a priority level never seen before. Network troubleshooting is a fact of life in the era of digital transformation, and in a recent market report, IDC predicts that more than 70% of that digital transformation will be completed over the next 24 months. Time is short in IT terms.
While modern application, software-defined and converged appliance vendors go to great lengths to abstract away and resolve discrete component failures, failures do happen, and they happen all the time. Because the abstractions are so robust, the functions continue to operate, but they do so at a reduced level of performance. Databases and networks don’t stop; they just handle fewer transactions per second while a failure exists. It’s a somewhat grey area that has only recently become a concern, due to the digital transformations underway. The end-user experience is the measure of success, and for that you need monitoring across all subsystems.
Enterprises have been slow to seize monitoring opportunities in the past because of the cost and complexity of agents, taps, packet brokers, application and packet analysis, and other highly technical solutions. And when they did consider monitoring, the complexity of using these solutions was overwhelming. That was acceptable in the past, before these new applications demanded better. Today is different.
But just as the new applications have become a staple of modern IT infrastructures, so should the monitoring solutions required to support those business services. Network technicians are no longer limited to complex and expensive solutions built on sFlow, NetFlow and even PCAP analysis (three common network monitoring choices of the past); they can now find network insight solutions that do much more than show packets. They show business-service usage. They become a window into the very core of everything in IT, and provide a level of data center insight that can only be found when new monitoring solutions are deployed. They can become the single most important investment a company makes during its transformation journey.
Make no mistake: monitoring of your IT infrastructure is essential today, and easy to come by once you start looking for it. Modern solutions exist that don’t build on older approaches, but rethink the problem itself. For the network, we can now move past sFlow and NetFlow deployed in a smattering of places, and treat the switching fabric itself as the instrumentation. In fact, today’s workhorse switches can deliver network telemetry better than old-school packet brokers and analytics tools, and at a fraction of the cost.
Modern business requires modern network monitoring. Contact Us Today to Learn More.
Read about Pluribus Telemetry in our Netvisor Fabric Visibility Solution Brief.