There is no question that the footprint of today’s data center is rapidly moving toward the virtual. This changes so many things about the way IT operations functions that we must start asking hard questions about security, continuity, and control of our data. Perhaps one of the biggest questions is this - what happens when everything is a file?
All of our virtual server and desktop instances are simply files run by hypervisors.
The trend toward the Software-Defined Data Center (SDDC) is moving fast. Increasingly, organizations are implementing Software-Defined Networks (SDN), software-defined systems, and application instances, with less focus on hardware-based tools and standalone software installations.
As things become software-defined, it’s worth revisiting the ideas behind the “Goldilocks Zone” concept. There is a balance between security context and proper isolation techniques within a data center, but that balance may be wholly different in a virtual environment than a physical one.
A good primer can be found in an article written by Tom Corn, VMware’s VP of Security Strategy.
To start any discussion about security within a virtual or software-defined environment, we have to revisit the questions I posed in my last blog post. Let’s explore each of these here, with more emphasis on SDDC and SDN to come in later posts.
Should security controls focus more on hardware integration, or deeper hooks into the hypervisor?
This is likely a question that has no single answer, as both of these are worthy objectives. However, for the vast majority of controls, especially those that will be in a software-defined environment, the hypervisor now acts as a kernel stand-in for any system and application instances running within them. Just as an OS kernel manages hardware calls and resources for the user-mode applications within the OS, a hypervisor manages this for all the virtual aspects of your environment. One of the key tenets I often convey to my SANS students in virtualization and private cloud security courses is this:
“Whoever gets lowest in the stack wins.”
This must become a new mantra for security teams everywhere. It has always been about the software stack, and the hypervisor now sits at the bottom of it, so integration with the hypervisor kernel becomes paramount for security controls that aim to detect and prevent attacks within the virtualized or software-defined components operating above. Hardware integration is a fascinating idea, and may offer the only true means of validating and monitoring hypervisor and operating system integrity, but the tools and opportunities to work at that level are few.
Do agent-based, agentless, or hybrid antimalware and endpoint security tools make the most sense in high-density virtual environments?
This is really a question of architecture and resource utilization, and puts us squarely in the “Goldilocks Zone” conversation I mentioned earlier.
There are definitive trade-offs to any of these options, such as:
- Agent-based endpoint security: More scanning and detection/prevention capabilities, with significant resource overhead.
- Agentless endpoint security: Greatly reduced resource overhead per virtual instance, but some real-time scanning and heuristics capabilities may be reduced or unavailable.
- Hybrid approaches: A hybrid approach may offer the best of both worlds: a small, lightweight agent in each VM that is tightly coupled with the underlying hypervisor. However, these tools tend to be vendor-specific and may be less compatible in multi-vendor environments.
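The resource-utilization side of these trade-offs can be sketched with a quick back-of-envelope model. All of the figures below are illustrative assumptions for a high-density host, not vendor benchmarks:

```python
# Back-of-envelope memory-overhead comparison for endpoint security
# architectures on one virtualization host. Every figure here is an
# illustrative assumption, not a measured or vendor-published number.

VMS_PER_HOST = 100      # high-density VDI host (assumption)
AGENT_MB = 250          # full in-guest agent footprint per VM (assumption)
SVA_MB = 2048           # shared security virtual appliance (assumption)
THIN_AGENT_MB = 20      # lightweight hybrid in-guest driver (assumption)

agent_based = VMS_PER_HOST * AGENT_MB      # every VM pays the full cost
agentless = SVA_MB                         # one appliance serves the host
hybrid = SVA_MB + VMS_PER_HOST * THIN_AGENT_MB

for name, mb in [("agent-based", agent_based),
                 ("agentless", agentless),
                 ("hybrid", hybrid)]:
    print(f"{name:12s}: {mb / 1024:.1f} GB of host RAM")
```

Even with generous assumptions for the shared appliance, the per-VM agent cost dominates at high density, which is why the agentless and hybrid options are attractive despite their detection trade-offs.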
What solutions will provide the most seamless architectural and operational shift from on-premises installation to cloud service provider infrastructure, with no loss of introspection and monitoring?
As we move from virtualization to private cloud, and from private cloud to hybrid architectures that integrate with cloud provider environments (we hope!), the need for security controls that we can configure, install, and maintain from afar grows accordingly. Today, we’re discovering that the traditional security controls we know well, ranging from log and event management to network monitoring to access controls and encryption, don’t translate to external cloud providers and hosting environments. We simply don’t have the right tools, or enough integration at lower layers of the stack, to play at the same level within the cloud (at least not in most cases).
Some tools are getting better, and automation and scripting technologies are playing a big part in this (another topic we’ll be covering in upcoming posts). To some degree, we also need the cloud providers to cooperate and allow more access to the hypervisors we leverage within their environments.
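To make the automation-and-scripting point concrete, here is a minimal sketch of a scripted configuration audit, assuming VM definitions are available as libvirt-style domain XML. The sample definition and the two policy rules are hypothetical examples, not an established security baseline:

```python
# Sketch of a scripted VM configuration check. Assumes VM definitions are
# available as libvirt-style domain XML; the sample below and the policy
# rules are hypothetical illustrations, not a real baseline.
import xml.etree.ElementTree as ET

SAMPLE_DOMAIN_XML = """
<domain type='kvm'>
  <name>web01</name>
  <devices>
    <graphics type='vnc' port='5900' listen='0.0.0.0'/>
    <serial type='pty'/>
  </devices>
</domain>
"""

def audit_domain(xml_text):
    """Return a list of policy findings for one VM definition."""
    findings = []
    root = ET.fromstring(xml_text)
    name = root.findtext("name", default="<unnamed>")
    # Hypothetical rule 1: graphics consoles should not listen on all interfaces.
    for gfx in root.iter("graphics"):
        if gfx.get("listen") == "0.0.0.0":
            findings.append(f"{name}: graphics console listens on all interfaces")
    # Hypothetical rule 2: flag serial console devices for review.
    if root.find("devices/serial") is not None:
        findings.append(f"{name}: serial console device present")
    return findings

for finding in audit_domain(SAMPLE_DOMAIN_XML):
    print("FINDING:", finding)
```

The same pattern scales up: pull definitions from an on-premises hypervisor or a provider API, run the policy checks, and feed the findings into existing log and event management, so at least some introspection survives the move to environments where we cannot get low in the stack ourselves.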
Today, the vast majority of providers are highly disinclined to offer this level of access, especially in multitenant scenarios - there are obvious reasons for this, of course. No cloud provider wants one tenant installing hypervisor-integrated security tools and getting that low in the stack…so how do we compensate for this?
Time will tell how we handle all of these thorny issues, but the fact of the matter is this - security teams need deep access to the hypervisor kernel. We need more and better tools that play at deep levels of the virtual and software-defined data center. Security teams must radically alter their worldview of what the risks are in our data centers today.
Look for upcoming blog posts that delve more into these challenges, and how we’re starting to address them!