Use real-time data insights to power any use case

May 17, 2024
Author: Jeff Aboud


A couple of months ago, I wrote a blog discussing why it’s so important for security practitioners and leaders to get real-time alerts on potential security events, and how those alerts can help thwart adversaries’ activities before they even get started. But while deep, real-time insights certainly deliver immense value for cybersecurity, security teams are far from the only group that can benefit from them.

IT leaders can benefit every bit as much as their security counterparts from gaining deep insights into their data in real time. It makes total sense when you think about it; there are innumerable decisions IT has to make immediately, based on the warning signs in their data. Data buffering on congested LAN or WAN links, for example, may be a warning sign that the network will experience a failure in the near future. There’s tremendous value in spotting that imminent failure when the network just begins showing signs of buffering, rather than waiting until users complain. By knowing early, IT can reduce the time and money required to return performance to normal ― and may be able to avoid the outage altogether.
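To make the buffering example concrete, here’s a minimal Python sketch (the metric stream and thresholds are invented for illustration) of a rolling-baseline check that flags a buffer-depth sample the moment it jumps well above the link’s recent norm:

```python
from collections import deque
from statistics import mean, stdev

def make_monitor(window=60, sigmas=3):
    """Build a per-link monitor: alert when a sample sits `sigmas`
    standard deviations above the rolling baseline of `window` samples."""
    history = deque(maxlen=window)

    def observe(sample):
        alert = False
        if len(history) >= window:
            mu, sd = mean(history), stdev(history)
            if sd > 0 and (sample - mu) / sd > sigmas:
                alert = True
        history.append(sample)
        return alert

    return observe

# Steady buffer-depth readings establish the baseline...
monitor = make_monitor(window=10, sigmas=3)
for sample in [100, 102, 98, 101, 99, 103, 97, 100, 102, 99]:
    monitor(sample)

# ...so a sudden spike stands out immediately, before users feel it.
print(monitor(250))  # → True
```

The point isn’t the statistics; it’s that the check fires on the first abnormal reading, not after a helpdesk ticket.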

Similarly, if you conduct any sort of eCommerce business, even a relatively minor slowdown in your payment processing system may be an early indication that it’s headed for an outage. By knowing this in real time, before the latency is even noticeable to your staff or end users, you have time to fully investigate the issue and do something about it if the system is, indeed, about to fail. Or, consider the alternative: wait until the latency gets bad enough that your staff easily notices it; or worse, wait until customers start complaining. Both are extremely late (or possibly even after-the-fact) indicators that put you into scramble mode. And what’s worse, consider all the business lost while your system is down.

Or, you could be a fraud analyst at a bank with hundreds of thousands of customers and tens of thousands of ATMs all over the country, which makes fraud detection an ongoing, Herculean effort. But what if you could detect anomalous transactions as they begin to occur, and block the activity before it causes significant financial damage to the institution? Having the deep insights you need to make rapid decisions in real time can be a game-changer with the potential to save the bank millions of dollars every year.
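As a rough illustration of that idea, here’s a hedged Python sketch of per-card checks that flag a transaction the moment it breaks from that card’s own recent history. The velocity and amount rules are invented for illustration and are nothing like a production fraud model:

```python
from collections import defaultdict

class FraudScreen:
    def __init__(self, max_per_window=3, window_seconds=600, amount_factor=5):
        self.max_per_window = max_per_window    # withdrawals allowed per window
        self.window_seconds = window_seconds    # velocity window, in seconds
        self.amount_factor = amount_factor      # multiple of the card's average
        self.history = defaultdict(list)        # card_id -> [(timestamp, amount)]

    def check(self, card_id, timestamp, amount):
        """Return a list of reasons this transaction looks anomalous."""
        reasons = []
        past = self.history[card_id]
        recent = [t for t, _ in past if timestamp - t <= self.window_seconds]
        if len(recent) >= self.max_per_window:
            reasons.append("velocity")          # too many withdrawals too fast
        if past:
            avg = sum(a for _, a in past) / len(past)
            if amount > self.amount_factor * avg:
                reasons.append("amount")        # far above this card's norm
        past.append((timestamp, amount))
        return reasons

screen = FraudScreen()
screen.check("card-1", 0, 40)        # normal withdrawals build the baseline
screen.check("card-1", 86400, 60)
screen.check("card-1", 172800, 50)
print(screen.check("card-1", 259200, 2000))  # → ['amount']
```

Because the check runs as each transaction arrives, the block can happen on the anomalous withdrawal itself, not in next week’s batch report.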

Another common use case is the detection of shadow IT, which falls in the camp of both the IT and Security departments. Having the ability to detect unauthorized devices, users, applications, and traffic on the network in real time enables IT to gain and maintain control over the network. But that obviously has immediate and ongoing benefits for Security as well, since only devices and applications that meet existing security protocols will be allowed. It also provides the opportunity to monitor and detect any anomalous traffic in real time, thereby enabling Security to stop potential attacks before they can achieve their intended goals.
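A toy version of that detection, assuming you already have a sanctioned-asset inventory (a CMDB export, say) to compare against, might look like this; the device IDs and event fields are invented for the example:

```python
# Sanctioned devices, e.g. exported from an asset inventory (assumed).
SANCTIONED = {"laptop-042", "printer-07", "srv-db-01"}

def find_shadow_devices(observed_events, sanctioned=SANCTIONED):
    """Return device IDs seen on the network but absent from the inventory."""
    seen = {event["device_id"] for event in observed_events}
    return sorted(seen - sanctioned)

events = [
    {"device_id": "laptop-042", "bytes": 1200},
    {"device_id": "nas-unknown", "bytes": 98000},  # never registered
    {"device_id": "printer-07", "bytes": 300},
]
print(find_shadow_devices(events))  # → ['nas-unknown']
```

Run continuously against live traffic rather than a periodic scan, the same set difference becomes a real-time shadow IT alert.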

In truth, the possible use cases are limited only by your imagination. But before you can power any use case, you first have to cut through the noise of your data, so you can focus exclusively on what’s truly important. This is no small task, given the tremendous growth in the amount and types of data over the past several years. Our world is becoming so inundated with data that it’s increasingly difficult to distinguish between what’s important and what’s simply “noise”. With global data volumes currently at approximately 120 zettabytes and expected to reach 180 zettabytes by the end of next year, organizations of all sizes are getting buried in data. On average, about a third of this data has no value whatsoever, because it’s redundant, malformed, or incomplete. Another 52 percent is relevant, but doesn’t need to be analyzed immediately, so it can be sent to long-term storage and simply rehydrated whenever it’s needed. That leaves just 15 percent of your data that’s critical and must therefore be analyzed immediately. It’s this 15 percent that you want to focus on to power your use cases.
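That drop/archive/analyze split can be sketched as a simple routing function; the field names and rules below are illustrative assumptions, not any particular product’s logic:

```python
def route(event):
    """Triage one event: 'drop' noise, 'archive' the merely relevant,
    'analyze' only the critical slice that needs eyes right now."""
    if not event.get("message"):                     # malformed or empty: no value
        return "drop"
    if event.get("severity", "info") in ("error", "critical"):
        return "analyze"                             # the ~15% to act on immediately
    return "archive"                                 # store cheaply, rehydrate later

events = [
    {"severity": "critical", "message": "disk failing"},
    {"severity": "info", "message": "heartbeat"},
    {"message": ""},                                 # malformed record
]
print([route(e) for e in events])  # → ['analyze', 'archive', 'drop']
```

In practice the routing rules would be far richer, but the shape is the same: the decision happens per event, in the pipeline, before anything reaches an expensive analytics tier.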

Of course, cutting through all of that noise to find what’s really important requires a data orchestration platform capable of removing (or setting aside in long-term storage, if the data is needed later for compliance purposes) the data that delivers no value whatsoever. It must also be able to enrich, normalize, and optimize the data to make disparate data types work together and thereby make the entire dataset more valuable. This is table stakes for any data orchestration platform. But remember the most important component of every use case discussed above ― the ability to gain deep insights in pure real time.
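To illustrate the enrich-and-normalize step, here’s a small sketch that maps two invented source formats onto one shared schema (adding a `source` field along the way) so the records can be observed as a single dataset:

```python
def normalize_firewall(raw):
    """Map a (hypothetical) firewall record onto the shared schema."""
    return {
        "timestamp": raw["ts"],
        "host": raw["src_ip"],
        "action": raw["act"].lower(),
        "source": "firewall",        # enrichment: tag where it came from
    }

def normalize_app_log(raw):
    """Map a (hypothetical) application log record onto the same schema."""
    return {
        "timestamp": raw["time"],
        "host": raw["hostname"],
        "action": raw["event"].lower(),
        "source": "app",
    }

fw = {"ts": "2024-05-17T10:00:00Z", "src_ip": "10.0.0.5", "act": "DENY"}
app = {"time": "2024-05-17T10:00:01Z", "hostname": "web-01", "event": "Login"}

unified = [normalize_firewall(fw), normalize_app_log(app)]
print(sorted(unified[0]) == sorted(unified[1]))  # → True: one schema, two sources
```

Once every source speaks the same schema, a single query or detection rule can span all of them — which is exactly what makes the unified dataset more valuable than its parts.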

Gaining these insights in real time requires data to be collected as close as possible to where it’s produced, and well in front of your analytics platforms, by an observation platform that can add value to data in transit. It must also be able to collect and observe data across every aspect of your hybrid network ― using a single product. If one product collects data from physical, on-premises assets and another collects it from cloud-based devices, the platform won’t be capable of observing all of your data together, and will therefore lack context. But an observation platform that’s capable of unifying your data and observing it as a single dataset, all in pure real time, has the unique ability to immediately detect anomalies, potential system problems, and security issues, and to send you alerts in real time.

By gleaning this clarity from your data in real time, you gain the precious time you need to take decisive action and get ahead of potential problems, so you can stop them before they become major issues. The benefits of this easily transcend specific use cases and functional responsibilities. Bottom line, everybody can benefit, regardless of what they’re trying to achieve.