Category: Automation

Software Defined Security

Got you with that title, huh? :)

Every couple of weeks, I join a number of other folks in the security business on Edward “Texiwill” Haletky’s Virtualization Security Podcast. Today’s episode, Sept 6th 2012, was a bit of a round-up of VMworld 2012 with a pretty good discussion on Big Data and Software Defined X, where X could be Datacenter, Network or Security.

Near the end of the podcast we were talking about what would Software Defined Security actually be. I chimed in with my thoughts. No surprise if you follow this blog, but for me it started with infrastructure.

SDx

First, for the purposes of this blog post, let me share my definition of Software Defined X. SDx is where everything (and I mean EVERYTHING) is programmatically accessible. Basically, everything is available and manipulated via APIs.

At VMworld, VMware talked about “Software Defined Security” as vShield/vCNS (vCloud Networking and Security). Allwyn Sequeira from VMware had a great presentation on this here. They are doing an excellent job around the network portion of what one could call Software Defined Security, and I’m sure they’ll knock it out of the park moving forward. But, as usual, I want more. Let me explain.

Here’s what I’d like to see from VMware

Today, the objects in vCenter (VMs, networks, storage, etc.) can be controlled using the RBAC capabilities of vCenter. But I think it’s time to start thinking about more. What I’d like is an enabling technology, one that can enable a new/better way of managing, securing and reporting on objects in vCenter. I’d like to see the ability to add digitally signed meta-data to an object such as a VM, network or storage. Because it is signed, it provides verification that it hasn’t been tampered with. This should work hand in hand with a root of trust that starts from the hardware on up.

Why digitally sign? Why not just put stuff in the .VMX file? Because any admin with enough privileges can manipulate that data. Signing the meta-data would mean that the signature is invalidated if, for example, the VM is copied or the data is changed. All this has to happen at the hypervisor and control plane (vCenter) level.
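To make the idea concrete, here’s a minimal sketch of what signed meta-data could look like. This is NOT a real VMware API; the function names, the use of an HMAC, and the idea of binding the signature to the VM’s UUID are all my own illustrative assumptions. The point is simply that meta-data bound to a specific VM identity stops verifying if the VM is copied (new UUID) or the tags are altered.

```python
import hashlib
import hmac
import json

def sign_metadata(vm_uuid: str, tags: dict, key: bytes) -> str:
    """Bind tags to a specific VM identity by signing (uuid + tags)."""
    payload = json.dumps({"uuid": vm_uuid, "tags": tags}, sort_keys=True)
    return hmac.new(key, payload.encode(), hashlib.sha256).hexdigest()

def verify_metadata(vm_uuid: str, tags: dict, key: bytes, signature: str) -> bool:
    """Recompute the signature and compare in constant time."""
    return hmac.compare_digest(sign_metadata(vm_uuid, tags, key), signature)

# Hypothetical "PCI" key -- in reality this would live in an HSM or
# similar hardware root of trust, not in a script.
pci_key = b"held-by-the-pci-compliance-officer"

sig = sign_metadata("vm-42", {"classification": "PCI"}, pci_key)

# The original VM verifies; a copied VM (new UUID) or edited tag does not.
assert verify_metadata("vm-42", {"classification": "PCI"}, pci_key, sig)
assert not verify_metadata("vm-43", {"classification": "PCI"}, pci_key, sig)
assert not verify_metadata("vm-42", {"classification": "Non-PCI"}, pci_key, sig)
```

A real implementation would use asymmetric signatures chained to a hardware root of trust rather than a shared-secret HMAC, but the tamper-evidence property is the same.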

For VMware vSphere 5.1, VMware has added Tagging which, while useful, isn’t signed, so it can’t be fully trusted. It’s a handy thing for IT folks but not good enough for security.

So, if we could digitally sign information it means we can now start to do interesting things like apply and enforce policy, generate reports and orchestrate actions. Here’s an example of what I’m thinking of.

  1. Create a new VM.
     1. VM meta-data is added: I create a digitally signed tag of “PCI” using a “PCI” key.
     2. The VM is also digitally signed to only run in a specific cluster.
  2. Upon creation, I choose to try and put the VM on a non-PCI network.
     1. The policy enforcement engine says only non-PCI VMs are allowed on the non-PCI network and blocks that action.
     2. It also sends a log to my SIEM and into my GRC solution!
  3. Ok, I change the network to be PCI and the VM is ready to be powered up.
  4. A disgruntled admin logs in, but he doesn’t have PCI rights so he can’t change the VM. He also can’t copy the VM.
  5. And because I’ve created a policy that PCI VMs can only be managed via a VMware Orchestrator workflow that’s been signed with the PCI key, even *I* can’t delete the VM without going thru an approved PCI workflow in Orchestrator.
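The enforcement step above can be sketched in a few lines. Again, this is a hypothetical policy engine of my own invention, not a vCenter feature: a placement is allowed only when the VM’s classification matches the network’s, and every blocked action is logged for the SIEM/GRC side of the house.

```python
def enforce_placement(vm_classification: str,
                      network_classification: str,
                      siem_log: list) -> bool:
    """Allow the placement only when VM and network classifications match.

    Blocked actions are appended to siem_log, standing in for the
    SIEM/GRC feed described in the post.
    """
    allowed = vm_classification == network_classification
    if not allowed:
        siem_log.append(
            f"BLOCKED: {vm_classification} VM placed on "
            f"{network_classification} network"
        )
    return allowed

log = []
# Step 2 of the walkthrough: PCI VM on a non-PCI network is blocked and logged.
assert not enforce_placement("PCI", "non-PCI", log)
assert len(log) == 1
# Step 3: after changing the network to PCI, the placement is allowed.
assert enforce_placement("PCI", "PCI", log)
```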

As you can see, this type of ability would go a long way toward managing LOTS of VMs that fall under different regulatory compliance umbrellas. Working in concert with logging solutions and GRC solutions, you can be assured that only the right people are touching the right things and that workflows can be enforced at the infrastructure layer, ensuring compliance. Also, because everything is programmatically addressable, it becomes VERY easy to measure and report on all those actions and workflows, sending that information into a Governance, Risk and Compliance solution.

So what’s the downside?

The big downside is that you really, really need to architect the security of the control plane. With it all being in software, you need to be even more paranoid…er…diligent about securing all the logins, networks, etc. that these APIs will be running over. For example, you may want to run all the control plane parts in a separate, non-routable network to minimize exposure to the bad guys.

I want to hear from you!

I’d love to hear your thoughts on this line of thinking. As you can tell from the podcast, I got a number of the other participants to agree. What I’d really like to see is someone picking this apart. It’s how we’ll all learn. This blog entry is by no means complete. More of a stream of consciousness like most of my blog posts.

Disclaimer

This blog was written with NO advanced information from VMware, purely from my brain. VMware, if you are doing something like this then I can’t wait to see it! :)

Thanks,

mike

Checkbox Security


Is security something that you feel you HAVE to do? Are you doing the bare minimum required by your auditor? Are you “Checking the box”?

In my role as Virtualization Evangelist, I seem to talk to mostly IT people. I endeavor to educate them on using VMware infrastructure as a layer (or multiple layers) of defense in depth. I spend a LOT of time trying to connect the dots between security and IT. I keep running into the same issues over and over. The attitude of “I’ve got a firewall and AV so I’m ok” is pervasive.

Newsflash: You’re not OK. Just ask your security guy.

There are a lot of really nasty people out there who are trying hard to get at your stuff. Firewalls are porous and AV, well, it’s not going to help you with a zero day attack. I’m not knocking firewalls and AV. They most definitely have their place as part of the “Defense in Depth” story. Just pointing out that they can’t be your ONLY solution.

Checking the Box

Sure, you can implement all the stuff that you HAVE to to check the box. You may even get the thumbs up from your auditor that you’re “Compliant”! But are you SECURE? Are you protecting the assets of the business or just covering the assets? (Read into that what you will :))

What’s needed is a sea change in how we approach security. Using every asset at your disposal is critical. With the changes coming in VMware vSphere 5.1, you’ll have more security tools at your disposal. For example, all editions of vSphere 5.1 include vShield Zones and Endpoint, giving you the ability to manage your firewalls at the vNIC level and providing increased isolation between VMs. This is a great first step toward being able to use firewalls and AV at scale.

Also, and here I go again, you need to leverage automation. Measurement of critical assets, with those measurements feeding into a GRC solution like RSA Archer, can help you wrap a workflow around things that need to be fixed and track if/when they do get fixed. It’s critical that the IT organization work with security by providing them the data they need to provide better security with minimal impact to the business.

What I present to customers

As I call out in my recent presentation, “Understanding the Measured Risks of Cloud Security”, this attitude of securing with just a firewall isn’t good enough. Also read the blog post “The Palace of Harmonious Virtualization”.

I want to hear from you!

What I’d love to hear from is customers that ARE using the virtual infrastructure to provide new ways of securing their environments. Reply here or send me an email. I’d love to showcase some of your thoughts as well.

Thanks,
mike

Going Rogue- How did that data get in the cloud?

How much of your corporate data is sitting on an unused virtual machine running on the infrastructure of a cloud service provider? “Ah, but Mike, I don’t have any VMs running in the cloud!” Oh really? Want an easy way to check? Go to your Finance organization and ask for a report of corporate credit card use at Amazon. You may be surprised.

Now, I’m not knocking Amazon at all. They’re a great company doing some really innovative stuff. They’ve made it so easy to start up a virtual machine that I worry about when my kids are going to start using it!

But for that very reason of ease of use, you need to know if someone in your organization, frustrated with the response of “It’ll take IT a month to provision you that”, just went “rogue”. He just couldn’t wait a month for a web server to be provisioned, so he went over to EC2 and started spinning things up, copying data and taking credit cards because IT couldn’t do it fast enough.

It’s this type of scenario that’s contributing to why many organizations are looking at how they can provide the business the same type of flexibility and speed as an EC2-style environment, but from within their own datacenter. This is the essence of “Private Cloud”. And when combined with the ability to link a Private Cloud to a Public Cloud, it becomes a Hybrid Cloud: the nirvana of being able to “burst” virtual machines off of my infrastructure and on to a service provider’s infrastructure, all the while maintaining security.

Yeah… I’m going to put a sensitive virtual machine or data out into “the cloud”, where I have less visibility and control than in my own datacenter? Really?

Well, maybe. But only after you do the next step.

Assess and Measure Risk

How can we, from a security standpoint, really make this work? Like any good security person will tell you, it’s about assessment and measurement of risk. Just because you can doesn’t mean you should. In the case of virtual machines and data, the VMs and the data that reside on them need to be assessed, measured for risk, classified and tagged. As I point out in the slide on the left, we need to start calculating a risk score on a VM or the data, and based on that risk score, we keep the VM or data in-house or allow it to move to a datacenter out of our control.

Note that I have only four variables on the slide:

  1. Data Sensitivity
  2. Workload IP (intellectual property)
  3. Business Requirements
  4. Job Security

Obviously, there can be many more variables that can and should be considered.

  • What about the current threat levels that I get from tools like RSA Netwitness?
  • Is there a new piece of malware out there that attacks the technology I used to develop the application?
  • Is it near the end of the quarter and someone is a little antsy and wants things in-house until after the quarter?

All these things and more should be considered when deciding whether stuff should run in your datacenter or a datacenter out of your control.

For example, say I have two servers. One is a web server with a bunch of static images that’s just there to serve up the images in a catalog, and the other is the application server that falls under PCI because it’s dealing with credit cards. As a simple exercise, we could tag the first as “Non-PCI” and the second as “PCI”.

Today, if you are doing this calculation exercise, it’s probably a manual process. But if you’re talking about cloud-scale, this will have to be an automated process.
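Here’s one way that automation might look. The weights, the 0–10 scoring scale, and the placement threshold below are all illustrative assumptions of mine (the slide only names the four variables); the point is that once the score is computed by software rather than by the guy with the credit card, the placement decision can be enforced consistently.

```python
# Illustrative weights for the four slide variables; each input is
# scored 0-10, with higher meaning riskier to put outside our control.
WEIGHTS = {
    "data_sensitivity": 0.4,
    "workload_ip": 0.3,
    "business_requirements": 0.2,
    "job_security": 0.1,
}

def risk_score(scores: dict) -> float:
    """Weighted sum of the four risk variables."""
    return sum(WEIGHTS[k] * scores[k] for k in WEIGHTS)

def placement(score: float, threshold: float = 5.0) -> str:
    """High-risk workloads stay in-house; low-risk ones may go to the cloud."""
    return "in-house" if score >= threshold else "cloud-eligible"

# The two servers from the example above, with made-up scores.
catalog_web = {"data_sensitivity": 1, "workload_ip": 1,
               "business_requirements": 2, "job_security": 1}
pci_app = {"data_sensitivity": 9, "workload_ip": 6,
           "business_requirements": 8, "job_security": 7}

print(placement(risk_score(catalog_web)))  # cloud-eligible
print(placement(risk_score(pci_app)))      # in-house
```

Adding the extra variables from the bullets above (live threat feeds, malware intelligence, end-of-quarter nerves) is just a matter of extending the weight table, which is exactly why this beats a manual spreadsheet exercise at cloud scale.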

A look to the future of automated security

Think about this for a second. All sorts of threat info is coming into your Security Operations Center. Based on that information, the security tools kick off changes to the virtualization and cloud infrastructure (that is SO easy to automate), and VMs either move in or out of different locations or states based on the real-time data. The assessment and risk measurement isn’t a one-time thing. It needs to be a continuous process.

In our server example above, if you want to step up the classification process, your DLP solution scans the servers, and if PCI data is found, the classification or tag changes, resulting in the VM being pulled out of the public datacenter and back into the private datacenter.
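That continuous loop can be sketched like so. The `dlp_scan` callable and the `location` field stand in for a real DLP product and a real orchestration hook — both hypothetical names of mine — but the shape of the workflow (scan, retag, relocate) is the one described above.

```python
def reclassify(vm: dict, dlp_scan) -> dict:
    """Continuous-assessment step: if the DLP scan finds PCI data,
    retag the VM and pull it back into the private datacenter."""
    if dlp_scan(vm) and vm["tag"] != "PCI":
        vm["tag"] = "PCI"
        vm["location"] = "private"  # hypothetical orchestration hook
    return vm

# The static-catalog web server from earlier, running in public cloud...
vm = {"name": "web01", "tag": "non-PCI", "location": "public"}

# ...until a scan (stand-in for a real DLP scanner) finds card numbers on it.
found_card_numbers = lambda v: True
vm = reclassify(vm, found_card_numbers)
print(vm["tag"], vm["location"])  # PCI private
```

Run on a schedule or triggered by SOC events, this is the “shields come up automatically” behavior: no human has to notice the data drifted before the workflow reacts.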

Obligatory Star Trek Reference

How cool would that be? Just like Star Trek, sensors detected a threat, shields come up automatically (I never could understand why someone had to give an order for that to happen!), phasers start charging and the klaxon goes off. You adjust your tunic with The Picard Maneuver and take command of the situation before the Romulan de-cloaks to fire her final, crippling shot! Yes, I just mixed my TOS/TNG references.

Isn’t that how it should be? No surprises? Pre-determined workflows kicking off to protect my assets. Computers doing what computers do best: the manual, tedious tasks we don’t want to do, so we can concentrate on the bigger issues, like how many people are following you on Twitter? (1,462, but who’s counting?)

So, as we come full circle and you’re now considering running that report on Amazon purchases over the past year and catching up with Star Trek on Netflix, remember that these Risk Scores are not calculated by the guy with a corporate credit card and a need for a web server.

And I would hope you’d agree that doing this in the physical world is MUCH harder. The automation capabilities of the virtual/cloud infrastructure can really enable security to work in a more measurable, consistent and adaptive way. The bad guys are adapting all the time.

Thanks for reading. Please comment. I’d love to hear feedback. I’d especially like to hear dissenting views. After all, I’m not a dyed-in-the-wool security guy. I don’t wear a red shirt.

mike
(Not Lt. Expendable)