Tuesday, August 23, 2022

How deep observability powers strong cybersecurity and network insights across complex cloud environments

The growing prevalence of complex multi- and hybrid-cloud environments has opened a Pandora’s Box of unseen risks around security and performance. 

But unlike the days when IT and network operators had the tools and access to track their own internal systems and data, today’s mixed-cloud model is far harder to know and secure. Pandora’s Box is open, yet what’s going on in and around it is obscured by inadequate means of gaining actionable insights amid all the distributed variables.

 

Enter deep observability and its capabilities, which are designed to provide rich access to multi-cloud and mixed-network behaviors. Such observations and data gathering can be analyzed to rapidly secure end-to-end applications and protect sensitive data.

 

Stay with BriefingsDirect as we explore the latest advances around deep observability, and show how a neutral deployment approach for observation technology spans more infrastructure and services to best protect and accelerate digital business success.


Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. View the video.

 

To learn how deep observability puts cloud chaos and hard-to-know risks back under control, BriefingsDirect welcomes Shane Buckley, President and CEO of Gigamon. The discussion is moderated by Dana Gardner, Principal Analyst at Interarbor Solutions.


Here are some excerpts:


Gardner: Shane, what makes knowing and securing today’s complex cloud activities especially challenging?

 

Buckley: That’s a great opening question. Over the last half-decade or more, we’ve seen organizations want the flexibility to deploy their workloads wherever they choose. Traditionally, workloads were deployed in the data center, with a hard perimeter, lots of security, and compliance needs met within the organization.


Then came the desire to create more flexible workloads, to run faster, to scale better, and also to reduce cost. The cloud model offered ways to gain these great advantages. But, as we often say here at Gigamon, the cloud is simple -- until it isn’t.

And so now organizations are looking at deploying more workloads in the public clouds, as well as with colocation providers and within private-cloud environments by leveraging technology such as VMware. We are also seeing the emergence of containers and Kubernetes as a technique to provide better automation, higher scalability, and lower cost.

 

The cloud conundrum

 

The great flexibility that the cloud provides is very positive to companies. It allows them to move faster. And that’s essential in the era of digital transformation because more organizations, driven by the COVID-19 pandemic, want their applications to flexibly reach more customers through remote access, handheld devices, mobile phones, and computers.

 

The snag is that the security footprint doesn’t track as straightforwardly as the workload boost when moving from the protected data center to a shared cloud environment. This is the conundrum companies face today. How do they make sure they can run their apps fast, stay secure, and innovate? These requirements are at loggerheads with each other. And that’s one of the major challenges that the Gigamon team’s solutions address.

 

Gardner: There are always trade-offs when adopting technology, of course, but when we’re forced to move quickly, the trade-offs can become riskier. When businesses could control their network perimeter, they knew what was coming and going. Now, we must take the good with the bad traffic. So, if you can’t control the perimeter, how can you at least moderate the risk?

 

Buckley: For many years technologies such as observability have been used for application performance monitoring. Observability, of course, is the technique of looking at an application’s performance remotely by leveraging metrics, events, logs, and traces, commonly called MELT data, and it has been very effective.
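As a rough illustration of the MELT signals being described, here is a minimal, hypothetical sketch; the field names and types are assumptions for clarity, not any vendor’s actual telemetry schema.

```python
# Hypothetical model of MELT telemetry (metrics, events, logs, traces).
# Illustrative only -- not any particular APM vendor's schema.
from dataclasses import dataclass, field
from time import time


@dataclass
class Metric:
    name: str                    # e.g., "http.request.duration_ms"
    value: float
    timestamp: float = field(default_factory=time)


@dataclass
class Event:
    kind: str                    # e.g., "deployment", "config_change"
    detail: str
    timestamp: float = field(default_factory=time)


@dataclass
class LogEntry:
    level: str                   # e.g., "ERROR"
    message: str
    timestamp: float = field(default_factory=time)


@dataclass
class TraceSpan:
    trace_id: str                # correlates spans across services
    span_id: str
    operation: str
    duration_ms: float
```

The catch, as the discussion below makes clear, is that every one of these signals originates inside the application or host, so whoever controls that host can mute or alter them.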

When you shift an application to the cloud, you don't have the same controls you enjoy when the application sits within your own infrastructure and you have control from the network layer right up to the app layer.

The issue though is one of security. And if you want to secure your applications, if you want to take a workload from a protected data center where you have layers and layers of security -- because security has always been about defense in depth -- and you want to shift that application into a cloud-based environment, you don’t have the same controls that you enjoy when the application sits within your own infrastructure and you get control from the network layer right up to the application layer.

 

So, the big question for chief information security officers (CISOs) and security professionals today is, “How do I secure the applications I have deployed to give the organization the flexibility, but maintain the security posture and compliance?” It’s become the number-one issue that CISOs face today as they try to support the business and the organization’s desire to run fast and innovate. The missing part is how to stay secure. It’s really, really complex.

 

Gardner: Now, ideally you should be able to attain the same level of control, visibility, and security in the cloud deployments that you had on-premises. Is that not ever going to be possible? Isn’t it simply a matter of putting the right technology in place?

 

Buckley: Traditionally, organizations have used layered defense tools such as firewalls, web application monitoring technologies, data leakage-prevention technologies, and the capability to encrypt and decrypt traffic streams. Yet more than 90 percent of the threats in organizations sit inside these encrypted traffic streams, which are largely opaque to the tools.

 

As one moves to the cloud environment, this gets a lot more complicated because you don’t own the network. The network is owned by the cloud provider. And so how, in a public cloud environment specifically, or as one deploys via containers, can you see inside and see what’s happening?

 

Observe deeply to stay out of deep trouble

 

The emerging technology to fix this issue is deep observability. We refer to it as the deep observability pipeline. Deep observability is about taking the technique of observability to the applications by looking deeply inside the flow of the network traffic. Because logs and traces are mutable, they can be turned off.

 

And, in fact, in many environments where applications are compromised, the nefarious actor will either turn logging off or, more perniciously, overwrite the log. In that way, the security operation center (SOC) is fooled into thinking the application is performing as usual, because the logs have been muted.

 

Network traffic is immutable. It cannot be changed. If you take a hard copy of traffic going to or from an application or server and you diagnose it, you know exactly what’s happening within that traffic flow. The ability to get to that level of granularity, that level of fidelity in terms of what’s happening inside the application -- and extract key information, which then you can send to the tools -- is really, really powerful.
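To make the idea concrete, here is a minimal sketch of deriving flow-level metadata from a captured traffic stream. It uses the open-source scapy library and a hypothetical capture file for illustration; it is not Gigamon’s implementation, just the general pattern of reducing raw packets to compact flow records.

```python
# Minimal sketch: reduce raw packets to compact flow metadata.
# Uses the open-source scapy library; "capture.pcap" is a hypothetical file.
from collections import defaultdict
from scapy.all import rdpcap, IP, TCP, UDP

flows = defaultdict(lambda: {"packets": 0, "bytes": 0})

for pkt in rdpcap("capture.pcap"):
    if IP not in pkt:
        continue
    proto = "TCP" if TCP in pkt else "UDP" if UDP in pkt else "OTHER"
    sport = pkt[TCP].sport if TCP in pkt else (pkt[UDP].sport if UDP in pkt else 0)
    dport = pkt[TCP].dport if TCP in pkt else (pkt[UDP].dport if UDP in pkt else 0)
    key = (pkt[IP].src, pkt[IP].dst, proto, sport, dport)
    flows[key]["packets"] += 1
    flows[key]["bytes"] += len(pkt)

# Each flow record is a few dozen bytes of metadata standing in for
# potentially megabytes of payload, and unlike a log it reflects what
# actually crossed the wire.
for key, stats in flows.items():
    print(key, stats)
```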

 


It’s a technique that we at Gigamon have used for more than 15 years: extracting network traffic insights and sending them to advanced tools in a consistent way, just as we have done when the workload sits in a data center. Now one can do the same whether the application or workload stays in a data center, moves inside a container, into a private cloud, or into any public cloud. Anywhere across the hybrid-cloud continuum, we have a consistent approach to how we implement your security insights layer.

 

Gardner: It’s one thing to be able to capture and observe; it’s another thing to be able to deal with a fire hose of data. How, during the past 15 years, have we become able to handle these massive amounts of data streaming in near-real-time in and around networks and cloud environments?

 

Buckley: You raise an interesting point. Networks are now operating faster and faster. More and more applications are talking to each other, particularly using technology like microservices, where one application may make calls to multiple other applications -- often referred to as east-west traffic. That east-west traffic is no longer confined to the physical data center; it could run across multiple cloud service providers, or across multiple different domains -- who knows?

How do you capture all the east-west traffic? Cyber professionals will tell you that lateral movement is how nefarious actors and hackers get access across the estate. You need to catch them from an east-west perspective.

As more of this traffic exists, how do you capture all the information about it? Traditional firewalls, even cloud-based firewalls, typically capture north-south traffic. How do you capture all this east-west traffic, too? Because, as cyber professionals will tell you, lateral movement is how nefarious actors and hackers get across the estate. You need to catch them from an east-west perspective as well.

 

Secondarily, a lot of the key information happens on that east-west basis. It’s where you get the context of use. But trying to take all the traffic from all the applications all the time creates massive bloat. Typically, a customer’s security information and event management (SIEM) capability will fill with pretty useless information, and so it takes the SOC way too long to delve through the details and find that one needle in multiple haystacks.

 

The ability to instead extract only the relevant information, the metadata, from this traffic flow reduces the data volume by more than 90 percent, meaning you have a lot fewer haystacks to find that one needle in.
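A minimal sketch of that “fewer haystacks” idea follows: de-duplicate flow records and forward only the relevant subset to the SIEM. The field names and the relevance rule are illustrative assumptions, not Gigamon’s actual filtering logic.

```python
# Sketch: forward only de-duplicated, relevant flow records to the SIEM.
# Field names and the relevance rule below are illustrative assumptions.
def relevant(record: dict) -> bool:
    # e.g., flows on ports commonly abused for lateral movement,
    # or unusually large transfers
    return record["dst_port"] in {22, 445, 3389} or record["bytes"] > 1_000_000

def deduplicate_and_filter(records):
    seen = set()
    for record in records:
        key = (record["src"], record["dst"], record["dst_port"])
        if key in seen:
            continue                 # drop duplicate copies of the same flow
        seen.add(key)
        if relevant(record):
            yield record             # only this subset reaches the SIEM

flows = [
    {"src": "10.0.0.5", "dst": "10.0.0.9", "dst_port": 445, "bytes": 4_096},
    {"src": "10.0.0.5", "dst": "10.0.0.9", "dst_port": 445, "bytes": 4_096},
    {"src": "10.0.0.7", "dst": "10.0.0.8", "dst_port": 80,  "bytes": 1_500},
]
print(list(deduplicate_and_filter(flows)))   # only the first record survives
```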

 

It also means that you can extract the data from public-cloud networks and reduce the cost of the deployment. You pay less in fees to the cloud providers as they take traffic in and out of the cloud environment to support custom or on-premises tools -- even though the application is sitting inside a public cloud. And that gives tremendous flexibility and tremendous consistency. It means that you can keep your security posture and ensure compliance is maintained across an organization.

 

Gardner: We want deep observability to also be extensible observability. It must observe across an end-to-end continuum of hybrid-cloud services and data flows. How difficult is it to get both deep and pervasive observability?

 

Buckley: It isn’t as difficult as it used to be. The technology now exists. Certainly, at Gigamon we’re providing what we call our deep observability pipeline to customers in addition to the traditional observability they get from many IT vendors. And by deep observability pipeline I mean the ability to look at the application workflows and the traffic that’s going to and from those workflows at the network level and extract the data. Typically, it’s metadata that’s extracted and creates the pipeline of actionable intelligence. That is then sent forward to the relevant tools, to SIEMs and other devices, which can then absorb or extract the information.

 

If you have a network detection and response tool, Gigamon provides high-fidelity traffic that has been optimized, via metadata extraction, to provide the best possible context behind that information. That’s in addition to the other observability infrastructure that you may have. Gigamon also has partnerships with many of the leading observability vendors, whereby we feed directly into their dashboards and systems the high-fidelity pipeline of deep observability information. Customers have the option of doing it multiple ways.
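The fan-out to tools can be pictured with a small sketch like the one below: the same enriched metadata record is published to whichever consumers are configured, whether that is one SIEM or many tools. The endpoint URLs and the record shape are hypothetical placeholders.

```python
# Sketch: publish the same flow-metadata record to every configured tool.
# The endpoints and record shape are hypothetical placeholders.
import json
import urllib.request

TOOL_ENDPOINTS = [
    "https://siem.example.com/ingest",   # hypothetical SIEM collector
    "https://ndr.example.com/ingest",    # hypothetical NDR tool
]

def publish(record: dict) -> None:
    body = json.dumps(record).encode("utf-8")
    for url in TOOL_ENDPOINTS:
        req = urllib.request.Request(
            url, data=body, headers={"Content-Type": "application/json"}
        )
        urllib.request.urlopen(req)      # one record, many consumers

publish({"src": "10.0.0.5", "dst": "10.0.0.9", "dst_port": 445, "bytes": 4096})
```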

At the end of the day, security is about defense in depth. It’s important for organizations to ensure a consistent security posture regardless of where the workload sits. Nobody gets a hall pass if they move a workload from a protected environment such as a physical data center to a cloud environment where it’s less protected. That doesn’t make business sense. We have to make sure we provide the same level of protection as that application workload moves in a flexible way from on-premises to colocation or public-cloud models.

 

Gardner: I suppose all the players in this ecosystem benefit when they have access to the network data and observations. There’s no sense in trying to corner the market, if you will, or building a walled garden around the observations. It should be ecumenical observability data access in order to be the most useful and impactful, right?

 

Observability that’s neutral and scalable

 

Buckley: That’s 100 percent correct. You hit the nail on the head. We often describe Gigamon as being Switzerland; neutral, we don’t have a dog in the fight. Our job is to do the best possible job to take the most relevant information across all these different platforms and these different workloads and send it to whatever toolset that the customer is looking for -- whether you have one tool or whether you have 1,000 tools, it doesn’t matter to us.

 

We’ve always been neutral at Gigamon in providing the best contextual information to make the best possible decisions from a network, application performance, and security perspective. And the ability to provide deep observability pipeline information extends that now to all forms of cloud in a way that’s never been done before. We are completely egalitarian. We are completely open. We will send whatever information you want, to whatever tools you want.

 

Gardner: At Gigamon, you said that this deep observability technology has been in the works for 15 years, but this use case, this hybrid-cloud problem set, wasn’t evident 15 years ago. How has the background of Gigamon put you in a position to be able to deliver on these technologies and capabilities?

 

Buckley: If we rewind back a number of years, customers attached a toolset to a SPAN port on a switch to access the traffic. That, of course, becomes very unreliable because switches are not designed to ensure that every single packet of data on the SPAN port is transferred. There’s congestion inside the switch, too. When some anomaly happens in the network, oftentimes those packets and that information are lost, and so it’s just not fit for purpose.
 

Gigamon pioneered and invented the tap-and-aggregation technology that allows you to take a copy of traffic -- whether it’s north-south or east-west -- aggregate it together, and send it to the desired tools. Over the last 12 or so years, we’ve enhanced that to optimize the traffic, extract the metadata from the network, put application filtering rules in, and decrypt traffic at the center. As a result, we see the information uniquely across the entire infrastructure. You only do decryption once; you don’t have to do it multiple times.

 

We have protected and supported the largest, most secure, most complex networks in the world. As these networks evolved to provide cloud, multi-cloud, and hybrid-cloud techniques, we have used the same architectural approach. It’s been tried and tested over the past 15 years. So instead of physical taps inside these physical networks, you have virtual taps or Open vSwitch (OVS) mirroring techniques in the cloud. We then have virtual aggregation versus physical aggregation. We have virtual optimization versus physical optimization.

The technique we use inside the cloud is the same textbook approach that we've provided to CISOs and organizations for many years, and they have relied and depended upon. Now we can scale this within cloud environments.

The technique we use inside the cloud is the same textbook approach that we’ve provided to CISOs and organizations for many years and that they’ve relied and depended upon. Now we’ve been able to transform this technology from an embedded solution inside a very high-performance hardware device to provide tremendous scalability -- scale up and scale out -- within cloud environments.

 

As a result, you get low overhead and very light touch. This can be built into the orchestration and automation systems that the customers have. Then it can be scaled up and scaled out, always providing the same level of protection as we used to do with our Gigamon hardware technologies that are famous within the biggest and the fastest data centers on the planet.
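For readers who want to picture the “virtual tap” side of this, here is a minimal sketch that configures Open vSwitch port mirroring so all bridge traffic is copied to a port where a collector could aggregate and forward it. The bridge and port names are assumptions, and the ovs-vsctl invocation should be checked against the Open vSwitch documentation for your version.

```python
# Sketch: set up an OVS mirror (a software "tap") that copies all traffic
# on bridge br0 to port tap0, where a collector could aggregate and forward it.
# Bridge/port names are assumptions; verify the syntax against the OVS docs.
import subprocess

BRIDGE = "br0"      # hypothetical OVS bridge carrying workload traffic
TAP_PORT = "tap0"   # hypothetical port where the collector listens

subprocess.run(
    [
        "ovs-vsctl",
        "--", "set", "Bridge", BRIDGE, "mirrors=@m",
        "--", "--id=@p", "get", "Port", TAP_PORT,
        "--", "--id=@m", "create", "Mirror",
        "name=deepobs-mirror", "select-all=true", "output-port=@p",
    ],
    check=True,
)
```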

 

Gardner: If we have deep observability and it’s pervasive across cloud environments, we extract the metadata, which can be very valuable. We’ve talked about the security use case, but it seems to me that such observability provides intelligence in other areas, too.

 

Particularly nowadays, as the general cost of cloud use is going up, are there ways to extend observability value to help make the best use of your cloud spend? Perhaps to compare and contrast your cloud activities for the best minimum and viable fit?

 

Make the most of your cloud spend

 

Buckley: Super question, and, of course, the answer is yes. The concept that one has to send everything to everywhere all the time is not scalable in today’s world. Whether you’re running 400-gigabit-per-second links to your physical data center or whether you’re running on the fastest cloud platforms in the world, it doesn’t matter.

 

There is a nearly infinite amount of data being sent across these very large networks on a daily basis. So the capability to optimize the data flow and eliminate unnecessary data -- whether it’s duplicates, or full payloads that are no longer required because the metadata is sufficient -- and to do so without losing fidelity while reducing the quantity of information by over 90 percent, saves companies and organizations tremendously.

 

Take, for example, the speed and the capacity of their firewalls, their other security devices across the network, and their application performance tools. If you’re seeing a tenth of the traffic across the infrastructure, you need a tenth of the performance from the tools. This is beneficial to the customer because you end up saving money, and in a potentially recessionary environment, that is an even more important message.
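The sizing claim is simple arithmetic; the sketch below just makes it explicit with assumed, illustrative numbers.

```python
# Illustrative arithmetic for the sizing point above (assumed numbers).
raw_traffic_gbps = 100           # hypothetical aggregate traffic crossing the fabric
fraction_reaching_tools = 0.10   # if the tools see roughly a tenth of that traffic...

required_tool_capacity_gbps = raw_traffic_gbps * fraction_reaching_tools
print(f"Tools need ~{required_tool_capacity_gbps:.0f} Gbps of inspection capacity "
      f"instead of {raw_traffic_gbps} Gbps.")
```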

 

But, in addition to that, because we also see all the east-west traffic, we can send more information to the tool, while it actually needs to process less. So instead of just seeing north-south traffic, we can add that east-west dimension as well. We can also ensure that the traffic is decrypted so that all the bad stuff inside the encrypted stream is highlighted. In a very simple way, the blind spots are where the bad guys and gals hang out. We illuminate those blind spots, so we know where they hang out. And we do that in a way that sends less traffic to the tools.

 

Gardner: What are some of the top cloud use cases for deep observability in practice? What are the benefits that organizations are getting in real terms? How does this help a CISO sleep better at night?

 

Migrate to cloud with good security

 

Buckley: Typically, a customer comes to us, and they have used Gigamon for a decade or more. We are the visibility and analytics provider for their infrastructure. We have helped protect their infrastructure for a long time.

 

And now they have a cloud-migration project and so a requirement to move workloads. In many cases, financial institutions want to move workloads to a colocation provider or private-cloud environment. They often leverage a solution like VMware’s NSX to move an application or workload to a public-cloud provider, such as AWS, Microsoft Azure, or Google Cloud Platform, whatever. And they’re saying, “How do I do this in a way that ensures that I can get compliance approval and maintain my security posture?”

 

We’ll work with them usually on two types of migrations. One is a lift-and-shift, where you take the application as it is, which is the preponderance of applications within larger organizations. You pick it up, bring it across, and drop it inside the environment -- the container, private cloud, public cloud, or whatever. Then we reattach all the network, application performance, and security tooling in a way that is similar to what they did before. You don’t lose anything, and you maintain everything that you had from a security and application performance perspective.

 


The second type of application migration has the customer saying, “Hey, we’re modernizing this application to make sure it works more efficiently in a cloud environment, so it can scale up and scale out, and be in line with what the environment needs.”
 

That migration approach might require different tooling, but we use the very same technique. We ensure that we can capture all the traffic going to and from that application. We can process it and optimize it, as I just described, and then we work with the customer to determine the tooling for compliance and what the CISO needs to ensure the security posture of those business applications -- and then we put that all in place.

 

Also, now we’re seeing as many workloads move from the public clouds to a hybrid-cloud model as we’ve seen going the other way. Oftentimes customers say, “I tried an application in a public-cloud environment, but it doesn’t give me the performance and the cost savings that I expected -- and so I want to move it back.”

 

We enable that type of customer to have the flexibility to take the application and put it back where it was -- or put it somewhere else. Maybe they want to put it inside a private-cloud environment, or maybe they’re moving from a private-cloud environment to 100 percent public.

 

Whatever the customer wants to do, we will work with them to understand where it was, where it’s going, what the potential needs are. We will ensure they maintain the compliance and the security posture of that application, as well as the performance because that remains a very important component, too.

 

Gardner: We’re not just talking about deep observability for security and performance benefits; you’re bringing up an important workload portability capability. And any way to help move workloads among hybrid-cloud deployment options while maintaining security posture presents a huge digital business and economic benefit. Have people been able to share with you some of the cost benefits that I suspect are there?

 

Money-saving choices, app by app

 

Buckley: When we run the analysis with customers, we see a return on investment (ROI) in less than six months in terms of the cost associated with the Gigamon deployment and the savings that they’ll get on a go-forward basis. And that’s just direct costs. That doesn’t include operational costs and efficiencies that come with modernizing applications or moving them to a cloud framework to begin with. The multiple benefits are quite significant.

 

Incidentally, the latest research shows that the level of deployment to the public clouds is not as great as had been forecast. The forecast was that we should now be close to having 60 to 70 percent of applications moved to the public cloud. But we’re seeing a resurgence of the colocation model as people leverage container-based technologies and private-cloud technologies. As a result, we’re seeing the public-cloud providers themselves offering on-premises and/or colocation capabilities to leverage the flexibility and the ease-of-use of those data center-hosted application stacks.

 

And so, the visibility gained from deep observability to choose whatever is best on a per application basis is becoming very, very important. Regardless of what the enterprise does in terms of deployment options, they will ultimately be able to save money.

 

Gardner: Your heritage places you in the wheelhouse of a network operations executive or leader. But what you just described is something a bit higher, if you will, in the organization, at the architecture decision-making level. That means those making major decisions about deployment strategies. Do you need to make Gigamon’s value then known to a different persona, perhaps at the architect or Chief Technology Officer (CTO) level?

 

Buckley: I would say network operations executives and organizations have always been core to the success of our business. They saw uniquely the advantage of having a single platform or a fabric that gave flexibility of deployment, flexibility of scale, load balancing, and all the great advantages that our technology provides to customers.

From a deep observability perspective, most organizations have handed the responsibility of securing their hybrid cloud environment to the CISO. So now we have the opportunity to work with the app security and SecOps people, as well as the network security people.

For over half a decade now we’ve been working very closely with security groups as well -- from CISOs to security architects, security operations groups, etc. -- to understand their problems. In many ways, the value of our fabric has been tremendously well-received within security operations over that time.

 

From a persona perspective, whether you’re a network operations (NetOps) leader or a security operations (SecOps) leader, obviously we work very closely with both. From a deep observability perspective, most organizations have handed the responsibility of securing their hybrid-cloud environment to their CISO. Now, oftentimes within the CISO’s organization, which is clearly becoming larger, there are new sub-personas within that space. And so we have the opportunity to work with application security people as well as the traditional network security or security operations folks, in addition to those we work with today.

 

The good news though is that they’re all super-connected. They have a lot of alignment between them.

 

Gardner: They should be.

 

Buckley: Yes, they should be. And so, we’re well-known. Gigamon is well-known as being inside these environments. It’s been the core platform to ensure that we provide that security footprint.

 

Certainly, we are spending a lot more time talking to business information security officers (BISOs), too, as well as the application security folks to help them understand how this technology can be leveraged within a hybrid-cloud environment.

 

Gardner: How about vertical use cases? Is there low-lying fruit? You mentioned finance. I imagine the regulatory issues there are pressing. But where does the rubber hit the road first and best for deep observability needs?

 

Zero trust everywhere

 

Buckley: Financial services obviously is a hotspot for organizations trying to secure their infrastructure, for obvious reasons. The other area that’s very important to us is our public sector business, on a global basis. The US federal government particularly has taken a very progressive view on security, with the recent executive order from President Biden for zero trust and the implementation of zero trust across federal organizations and contractors. We’re very close to that issue as well.

 

Security in the hybrid cloud uses many of the techniques that we leverage within zero trust. And within zero trust there are typically seven pillars, one of which is visibility and analytics. It’s considered foundational to zero trust security, in that if you can’t see stuff, you can’t secure it. All the other elements and pillars depend on the visibility and analytics pillar to operate.

 

Zero trust is not just sought by governments; it’s of course being adopted and used by organizations around the world. Protecting critical infrastructure, for example, is a really big deal. So sometimes we get involved in conversations about operational technology (OT) and protecting OT devices, whether in healthcare, nuclear facilities, or other hardened and critical facilities, and that becomes very important for customers as well.

 


Among the hyperscalers and software as a service (SaaS) vendors, many of the big SaaS providers use Gigamon to provide that layer of protection to their applications because the customer often can’t secure it intrinsically themselves. So, you’ll find Gigamon’s approach or connection across many different verticals on a worldwide basis.

 

As we increasingly move to 5G, a core element of 5G is the capability to extract information from these ultrahigh-speed networks and to provide correlation between the user plane and the control plane to provide the right traffic to the right tools at the right time.

 

In many of those networks, you see Gigamon is at the center of the ability to deploy the infrastructure as well. So, we’re present in a lot of these different verticals and ecosystems because it’s the same problem, but it’s just used in a slightly different way. And when you’re a fabric, which Gigamon is, you have the benefit of being able to deploy, whether it’s a software footprint or software/hardware footprint or any combination across all these different environments.

 

Gardner: We’ve used the word ecosystem quite a bit and that implies partners working together with other companies. Is there a channel and/or partnership benefit here? How does Gigamon and deep observability fit into a whole larger than the sum of the parts?

 

Buckley: As you would imagine, we work with some of the best and leading system integrators and value-added resellers and other partners on a worldwide basis. They have the ability to take all the piece-parts and bring them together. When Gigamon is deployed successfully, we’re a fabric. We provide this pipeline of actionable intelligence to customers and to tools. And then there are other tools to take advantage of that.

We're the heartbeat that makes the networks and applications run. We bring the whole value chain together. We can ensure that one plus one equals three -- or five or six.

The architectural design of the network is somewhat changed because we’re at the center. We’re the heartbeat that makes the networks and applications run. We work with a lot of the vendors to ensure that they bring the whole value chain together. They have the experience dealing with all of the security, application, performance, and networking tools so that they can interconnect it all in the appropriate way to optimize and protect the network traffic.

 

Partnerships within the channel and vendor community are super important. We work with many ecosystem vendors through joint marketing, by jointly entering global markets with better capabilities, and via joint events. In doing so, we can ensure that one plus one equals three -- or five or six. We do that on a regular basis.

 

Gardner: Shane, in conversations I have in the field, we often talk about the most important imperatives facing organizations. Security, best use of the cloud, understanding and controlling your data, and being better able to understand your customers to provide a better experience are all among the top concerns.

 

And one of the salient common elements among all of these is having better intelligence about what’s going on, both in the business operations and the IT systems, and then how to constantly improve them. It seems to me that deep observability is an essential core constituent in supporting an intelligence drive within any organization.

 

Do you see machine learning (ML) and other analytics capabilities evolving from the benefits of deep observability and the metadata that you’re providing?

 

Eliminate blind spots, increase intelligence

 

Buckley: I agree a billion percent with what you just said. Having the right information at the right time is incredibly important for professionals, whether you’re in security or elsewhere, to make the appropriate decisions to protect the organization. How many times do people say, “If only I knew; if it was only possible for me to see. I had no idea that they lay inside this application or this part of my network when I was compromised.”

 

The ability to eliminate blind spots so that the security team has the best possible opportunity to protect the infrastructure is of paramount importance. Make no mistake, this is a cat and mouse game. In 2021, 68 percent of US organizations were hacked, and ransomware was demanded. Some 50 percent of those had to pay the ransom. And in the cat and mouse game, the mouse is winning, not the cat.

 

Our job is to make sure that we slow the mouse down and give the cat an opportunity to catch it faster and protect the infrastructure. But it will continue to be that cat and mouse game because as soon as we -- and I mean the whole ecosystem, not just Gigamon -- put our systems together better, the bad folks figure out ways to compromise it. That’s just the way it is.

 

But by streamlining the information, by optimizing the information, and by ensuring that we can provide absolutely the right information -- actionable intelligence -- to the right tools at the right time, we minimize the chance that the mouse gets away.

 

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. View the video. Sponsor: Gigamon.

 
