In this video you'll learn:
  • [2:00] How OpenShift can make your application more secure.
  • [4:24] How securing applications in OpenShift differs from your existing environment.
  • [12:12] How to apply scalable vulnerability management to an OpenShift environment.
  • [19:40] How to keep your OpenShift environment compliant.
  • [23:38] How to secure your OpenShift environment with Twistlock runtime defense.
  • [25:58] The deployment architecture of Twistlock for OpenShift.
Transcript:

OpenShift Red Hat Video Transcript

Welcome everybody to another Image Builders SIG, part of the Red Hat OpenShift Commons community. This week we have Michael Withrow with us, who is going to talk about how to make your containers more secure with some of the technology from Twistlock. He will talk about it in the context of OpenShift security.

The way we do these SIG meetings is a little different. We're going to let Michael talk for 20 or 30 minutes and give a demo, and then we'll open it up for a conversation with everybody on the line. You can ask questions in the chat and talk amongst yourselves in the chat while he's talking. Once he's done talking and we take a break, we'll turn it on and let everybody talk with Michael and ask any questions they might have.

So without any further ado, welcome to the Image Builders SIG, Michael, and why don't you take it away.

All right, thank you Diane, I really appreciate it, and thank you for attending. I understand time is money and so I really appreciate it.

All right, my name is Michael Withrow and I'm the Director of Solution Architecture here at Twistlock. What do we do as a company? Really what we're focused on is this: you have the (Name – 1:29) system and you have enterprise needs. Essentially we are the technology that fits in between the two to bridge them together from a security posture perspective.

So what you really want to think about is that we provide a single pane of glass for the Docker ecosystem. Whether you use OpenShift or some other technology, whether you're using an OpenShift registry and all those different kinds of things, really think of it as a single pane of glass for that entire setup you might have.

Looking a little bit at the vision from a product and company perspective: the value proposition for containers is out there. Look at the DevOps workflow, the immutability, the statelessness that a containerized ecosystem gives you; we're seeing massive adoption across the globe. Across every industry pillar and every geo there is massive adoption going on for containers.

One of the things that easily gets overlooked, as security teams get integrated and DevOps leads that particular charge, is the question: can I take my traditional monolithic approach to security and apply it to containers? And are containers a better opportunity for security, or a worse one? That's a lot of the conversation we have with customer after customer as they look at productionizing their containerized ecosystem.

The reality is that containers really change the attack vector that exists for your application when you move it into a container. With the right tooling you can actually improve the security posture of that application by moving it to a containerized technology, and I'll walk you through our technology to show you how that comes true.

Really what we’re talking about from a capabilities perspective just to base on everybody but we’re essentially talking about the fundamental characters of containers; whether you’re talking about the minimalistic state right…hey look I’m building a Redis image and essentially I have the Redis application in there and that is essentially a singular focus from that particular perspective and it’s very declarative. So I’m basically building a JSON file, a Dockerfile or maybe a Docker compose file or maybe I’m doing it through OpenShift and I’m having my OpenShift deployment and I’m building a deployment off of what that image is going to look like on disparative/clarative in that regard, what’s repeatable and essentially every time it gets deployed it’s getting deployed that way. It’s very declarative in that nature.

And most importantly, a container by nature is immutable. You don't have agents in there making it stateful; containers are stateless and immutable in that particular regard. Like I said, those are the key terms I want you to think about as we walk through what we do from a product perspective.

Driving that point home, there are a couple of key security advantages that a containerized ecosystem has over a traditional monolith. It really comes down to automation: heavy automation, integration with the CI pipeline process, maintenance through metadata. What you want to think about is that in containerized deployments, a traditional monolithic application gets broken down.

Inside of a monolithic application you might have 3 nodes, one for each level of the tier, and across those nodes you might have 15 applications that together make up your application. In a containerized ecosystem you wouldn't have those 3 VMs; you'd actually have 10 containers, each container representing one particular application as it ties into the overarching application. Each one is autonomous and individually updated without impact to the others across that tier.

With a traditional monolith you have heavy dependencies, you have agents and all those kinds of things, so when you talk about upgrades you have to go through regression testing, you have to go through change advisory boards and get downtime, and all those kinds of things you have to work through.

Now start pulling back: what are some of the security challenges that containers actually bring to the mix? Scale is the biggest thing. Across the enterprises we talk to we commonly hear: I have 5,000 images, I have 10 or 15,000 containers, which is 5 to 8 times greater than we typically saw in a virtual machine environment. That is sprawl, and there's no real ownership of what is exposed or in process. You have legacy images and those kinds of things: I'm on version 100 of this image, but versions 1 through 99 still exist in the environment. So the VM sprawl conversation comes back when you talk about containers.

Obviously there's a heavier rate of change. With traditional VM environments you might see a quarterly or bi-annual update of an application, whereas with containers daily and weekly updates are commonplace; so there's a higher rate of change across that ecosystem.

Then the security responsibility moves further upstream, into the hands of the developer, and doesn't necessarily fall on the infrastructure team anymore. Think application-specific security, not the edge-based security question that typically fell on the infrastructure team. That makes security a responsibility across the entire stream, where developers are typically last in line when you talk about security.

Now, what is the value prop of containers? By nature a container is portable, so you get away from that vendor lock-in conversation. Maybe you're running multi-cloud or in a public cloud, whatever it is. Our technology provides that portability as well, because not only does it protect the containerized ecosystem, our technology is itself containerized.

What that means is: think of a REST API based product that really hooks into the entire ecosystem. Maybe you deployed in AWS or on-prem and you land on OpenShift v3.2 inside that environment, deploying an OpenShift registry and those kinds of things; we can provide a single pane of glass into that environment.

Our product is a lifecycle management based product, really providing cradle-to-grave security. Starting in that CI pipeline process, maybe you're using Jenkins or Drone or Bamboo, whatever it might be; we can integrate deeply into that environment. So as those images are getting built, before they get to the registry, before they get attached to a deployment script and pushed out into pods, we're going to integrate at that point.

Then as an image does get pushed into a registry, and later is running out in a pod, we're going to provide the security posture of that entity all the way through the different steps it takes in its lifecycle. And we do all this without you having to give up any of your control; all the data lives in your environment. Like I said, maybe you deployed OpenShift in a private cloud, and now you drop Twistlock into that OpenShift environment. All the security analysis takes place in your environment; we don't pull anything out to assess or analyze your environment.

We have many ways we can support that. Like I said, I talked about public and private, and we also have many customers in government air-gapped, network-segregated environments, and we fully support them through tar files as well from a product perspective.

What do we do from a product perspective? As I alluded to before, we are lifecycle management, so build through run. One of the key things we do, and really where most customers reach out to us, is vulnerability management. Think of integrating into the CI build pipeline where images are built; we're going to assess, and can actually restrict, the state of that image based on thresholds and things like that.

Then as that image moves upstream into the registry, we assess the vulnerability state, the malware state and the threat state of that image as it exists in the registry. Now downstream you're running that image inside a pod in your OpenShift environment; we can tell you the vulnerability state of that image and actually restrict it from running inside your OpenShift pod as well.

We also tie in compliance. Think CIS Benchmarks for containers, industry standards around compliance and things like that; we can tie into that environment as well and restrict. Say you have a security posture you want to maintain and you've done a good job of building a clean image, and now someone tries to run that image in a way you don't want: running as root, or exposing a certain port, or SSH, something like that. When they run it we're going to assess that state from a compliance perspective, see it's outside the compliance posture you're trying to maintain, and restrict that image from being run in your pod.

We also tie in some access control mechanisms. As you look at OpenShift there are a couple of different ways we can integrate from that perspective. Then, rounding out the product, we tie in anomaly-based detection across your running entities through our runtime defense mechanisms.

So pulling the covers back a little further, think about how we work from a product perspective. First and foremost, we want to generate a bill of materials off that image. Maybe the first time we see that image is in the CI pipeline, so we have native plug-ins for Jenkins and TeamCity. If you're not running Jenkins or TeamCity we have an independent product called Twistlock Scanner, which is really a shell script you call from a shell. That allows you to integrate with any type of registry or CI pipeline process you might have; like I said, you might be a Bamboo shop, or Drone, or CircleCI, or something like that. All you do is have a shell script that calls that .sh file; it scans the build and pumps out the results.
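A minimal sketch of how such a standalone scanner step might sit in a CI job without a native plug-in; the script name, image tag and exit-code convention are assumptions for illustration, not the product's documented interface:

    # Build, then gate the pipeline on the scanner's exit code.
    docker build -t myorg/app:${BUILD_NUMBER} .
    # Hypothetical wrapper script; a non-zero exit fails the build.
    ./twistlock_scan.sh myorg/app:${BUILD_NUMBER} || {
      echo "image failed the vulnerability scan" >&2
      exit 1
    }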

Maybe the first time we see that image is actually upstream in your registry. As long as it's a v1 or v2 based registry we can integrate with it; Red Hat's registry, like all the industry registries, is one we can integrate with. So that image got moved from your CI pipeline process into that registry and it's been there for a while; we can assess the state of that image in the registry as well.

Really what you want to think about is how we're doing that. Simply enough, we're conducting a static image analysis off that JSON file and understanding what the base layer is. Is it Debian based, Alpine based, as an example? What is the framework? Are we building a Java or Ruby app or something like that?
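Stock Docker commands expose the same kind of metadata a static image analysis starts from, namely the config JSON and the layer stack (image name illustrative):

    # Image config: entrypoint, env, exposed ports, and so on.
    docker inspect --format '{{json .Config}}' myorg/app:1.0
    # One row per layer; the base image's layers are at the bottom.
    docker history --no-trunc myorg/app:1.0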

Then in the read/write layer that sits on top: what is your custom code? What are the files you're sticking in there? What are the systems? Things like that we're looking at in there. The idea is that we grab those signatures and stash them in the MongoDB database that comes with the product.

Now you have that information in your environment, and "we" isn't really the term I want to leave with you today, because everything sits in your environment. We want to detect the CVEs, the zero-days and the malware that exist against the entities we identified in that image. To be a little more transparent: think of packages and package management; we're going to look at and go through the packages. Maybe you have a tar file in there that you're using to install a package. We can see and inspect things like that, and we've stashed it, so now we know you have these packages as part of the image, whether they were installed through a package manager or through some type of tar file.

Now, as that image goes through its lifecycle, we bring in a real-time intelligent CVE streaming service where we pull from a wide range of providers. Think of all the different operating systems and all the different languages, from NIST and CIS, the NVD database and all those kinds of things. For zero-days we partnered with a company called Exodus, which does zero-day hunting, and we have partnerships with some IP and malware feeds as well to pull in that IP and malware data. Then we have some pretty aggressive machine learning algorithms that we pump in as well, and that feeds into the lifecycle.

So now, from a scenario perspective, you've built a Java image that has wget or SSH exposed, whatever it is, and you're deploying it out. As it went through the CI pipeline we assessed its state, and through the threshold-based process it was allowed to be posted to the registry. Now it's been in the registry for 3 weeks, and over those 3 weeks, of course, some of these components have lit up; the package manager has told us there are vulnerabilities, and we're going to pump that into the environment and light up those components across the ecosystem like a Christmas tree.

But essentially we don’t even stop there from that particular perspective. So whether or not you have it in the CI pipeline process that image is in the registry and it’s been there for a month or now you have a running container that’s been running for 6 months. That’s where the CVE service is going to come in and as a package becomes vulnerable we already know from a signature perspective where that package transpires across your eco-system and all those images attached to containers attached to pods we’re going to light that up.

Taking it further, we actually have the ability to block those vulnerabilities as well, whether you're trying to build an image or someone tries to run that image; think image integrity in this regard. Say you built a clean image. It was clean on build, but it's now been living in the registry for 3 weeks, and over those 3 weeks a couple of the packages have developed vulnerabilities. Now when someone tries to do a deployment off that image and you have the policy set to block, we will restrict that image from being deployed into a pod because it has vulnerabilities in it.

The idea, from a DevOps perspective, is that the image would get thrown away, because you don't upgrade or patch containers; you just throw the container away and deploy another container, or another image, in its place that doesn't have that vulnerability, which your admin or someone can do with an OpenShift deployment into your environment. And if you have a running OpenShift pod, think of us as a proxy that works through the Docker socket. So if you have a policy that says this application is very critical to you and you don't want it out and running with vulnerabilities, you can move the policy to block and we'll restrict it. Say you have Java in that image and suddenly Java lights up; we would restrict that image, and in that regard we would kill that container, because that adheres to container best practices.

If a container has an anomaly or a vulnerability in it you don't patch or upgrade it; you throw it away and build another one. We can talk about what that means from a scenario perspective, but from a product perspective we allow you to (1) be alerted to the fact that there is a vulnerability and (2), if you choose, restrict that vulnerability.

This is what that CI pipeline process looks like. A developer builds an image and injects it into the pipeline; the CI tool notifies us via the API or plug-in that a scan needs to be initiated, we conduct an ad hoc scan and publish the results back to that CI pipeline. From there you can do a threshold-based failure; think low, medium and high from a CVE scoring perspective.

Say you have offshore developers or partners giving you images; you can pump them through this process and decide whether you even want to check them into your registry, maybe a trusted registry. So think of a scenario where building the image is Step 3, Step 4 is for us to scan it, Step 5 is for us to publish results, and Step 6 is for you to push it to your downstream registry, whatever that mechanism might be.
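Sketched as a shell pipeline under those step numbers (the scanner script is the same hypothetical wrapper as above):

    #!/bin/sh
    set -e                                                  # any failing step stops the pipeline
    docker build -t registry.example.com/team/app:$TAG .    # Step 3: build the image
    ./twistlock_scan.sh registry.example.com/team/app:$TAG  # Step 4: ad hoc scan
    # Step 5: results are published back to the CI job; a low/medium/high
    # threshold breach exits non-zero above and stops the script here.
    docker push registry.example.com/team/app:$TAG          # Step 6: push downstream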

Compliance and I’ll kind of grease through this and we can talk through this a little more as we go through, but the idea here is we can tie in CIS benchmarks for containers and we can also tie in all your industry standards. Maybe you want to have application specific configurations that you’d want to adhere, so every time we deploy Mongo it has to be deployed this way or every time I deploy Tom Cat it has to be deployed this way. We can enforce that from a compliance perspective through data sets. You upload the data sets to us and we allow on a very granular fashion on how you want to enforce that compliance posture.

That works through actions. The action might be: I want to be alerted when this particular configuration is tripped, or maybe it's an application-specific configuration. Say I have a Tomcat website and I want to make sure that every single time it's deployed, it's deployed over 8043 and not 4043, as an example. Now a new developer comes in, misses that part of the common engineering criteria, and tries to deploy it over 4043. You'd have a setting in here that says if this image is running off 4043 then restrict it: I want to block that configuration. It goes back to the developer to move it over to 8043, and then they can actually do a deployment off it.
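Outside the product, the same 8043-not-4043 rule can be expressed as a plain shell check against the ports an image declares (image name illustrative):

    # Block the image if it exposes 4043 instead of the approved 8043.
    if docker inspect --format '{{json .Config.ExposedPorts}}' \
         myorg/tomcat-site:latest | grep -q '"4043/tcp"'; then
      echo "blocked: image exposes 4043, expected 8043" >&2
      exit 1
    fi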

We also tie in image integrity here. Think cryptographically certified images. The scenario is: as I'm building that image I want to make it trusted, so I'm going to integrate with content trust and associate a SHA value with a tag on that image on push, maybe through my CI pipeline process, whatever it is.

So in this scenario, take your Red Hat registry and associate it with Twistlock as a trusted registry. We in turn associate that with policy: now that you've built this gold image, we're going to certify it and say only these people can pull that cryptographically certified image. And more importantly, the people in that group can't backdoor or go around that policy.
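Docker Content Trust is the stock mechanism for this kind of signed tag: signing happens on push, and pulls of unsigned tags are refused (registry and image names illustrative):

    export DOCKER_CONTENT_TRUST=1
    docker push registry.example.com/gold/app:1.0   # signs the tag on push
    docker pull registry.example.com/gold/app:1.0   # refused if the tag isn't signed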

As an example, they don’t want to use your trusted image they want to go to Docker or IO and pull the Swiss cheese version of that image which we would restrict from a process perspective.

On access control, OpenShift has some capabilities there, and there are a lot of ways we can complement them. But really what we're talking about here is providing a detailed audit trail on the back end of every single transaction that goes across that Docker daemon.

So whether it’s a user based access or mechanism or a privilege, think Sudo or Root in that particular regard we provide that forensic transpired or crossed that daemon as we kind of go through.

Now you have running entities, so we round out the product with our anomaly-based detection. The idea is that we take what we did in vulnerability management with that static image analysis (you kind of see that here on the left) and plus it up with launch-time metadata. Once you deploy that pod and have a running entity, we look at that launch-time metadata and detect certain things through it. Think application-specific detection here, as the first leg of machine learning.

So what we’re doing here is very application centric, look we detect H2BD and this must be in a factory file so let’s go find the configuration file and figure out what ports are in association with this particular deployment or this might be Mongo so let’s go look at these files and find this SH file or whatever it might be. Maybe it’s Tomcat or Redis or something like that and essentially that’s what the machine learning is there it’s looking for application specific tags and pulling it out.

The idea is that we want to build a predictive model, because we're taking advantage of the declarative nature of containers. You're saying the purpose of this container is to run Redis, so we're going to grab all that Redis-specific information and whitelist it.

The key thing is that we do this completely automatically on the back end; all you did was tell us where your images were. In this scenario, all you would do is drop Twistlock into your OpenShift environment and we'd take it the rest of the way. We will assess the vulnerability state and provide the anomaly-based state of your entities as they're running, because we already know what they should be running: we built a predictive model off the images running inside your OpenShift ecosystem. And like I said, the value prop is that we did that completely automatically, without you having to do anything.

Then we also introduced a feature set we call Twistlock Advanced Threat Protection, which is a deeper set of machine learning algorithms. Think of a scenario where you deployed an image or container into OpenShift. There are normal OpenShift processes that run on the back end, and if we audited every single transaction we'd generate a lot of white noise and false positives from the traffic traversing your environment.

So we detect OpenShift, and because we run OpenShift ourselves and pump it through our machine learning algorithms, we know these are its normal processes and fold them into the whitelist for that particular application. What you're left with, from a results perspective, is only the true anomalies you have to react to. And that is ever evolving: as Diane and I were talking about before, OpenShift just released 3.3, and if you go from 3.2 to 3.3 there are changes in there. What that means from our perspective is we pump 3.1, 3.2 and 3.3 into our machine learning algorithms, look at that OpenShift environment in a running state, grab all the signatures off it and push them through our intelligence stream service, so you get the benefit of that in a continuous state.

Time is limited or I’d walk you through a little deeper but how we bring this home and this kind of gets into the demo.

The idea, from an architectural perspective, as I was alluding to before, is that we are a containerized infrastructure. On the left is the intelligence stream service I was talking about; you can see all the different CVE information we're pulling, from the Red Hat OVAL feed, from NIST and CIS, whether it's Java based or Python or Ruby or something like that. Then we have our threat feed, where we partner with Exodus and Proofpoint and get that zero-day, IP and malware data. Then in Twistlock Labs we're writing a deep set of machine learning algorithms, constantly enriching them, taking all the images that sit on docker.io, standing up ECS and OpenShift and all these different environments, and pumping them through the machine learning algorithms to assess the running state and filter out all the operational noise; like I said, you're left with only application anomalies.

On the right is your environment: a public cloud, private cloud, hybrid cloud, or maybe you're deploying on physical servers; the idea is you have a Docker daemon there. We're going to give you an .sh file which has two tar files in it, one for our console and one for our defender. You'll do a docker run off those, so to speak, and basically deploy them as containers on top of your Docker daemons in whatever capacity they might be.
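A sketch of that tar-file install; the file names, image names, port and flags are assumptions for illustration rather than the product's documented install steps:

    docker load -i twistlock_console.tar
    docker load -i twistlock_defender.tar
    docker run -d --name twistlock-console -p 8083:8083 twistlock/console
    # The defender proxies the Docker socket, so mount it in:
    docker run -d --name twistlock-defender \
      -v /var/run/docker.sock:/var/run/docker.sock twistlock/defender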

Simply enough, the console is an Alpine-based image running a Node.js console with a MongoDB database. That's where you configure all your policies. Out of the box we have all of the policies turned on and set to Alert; think of it as a starting point or guideline. We think about it from a time-to-market perspective: out of the box, all you have to do is deploy our product in your environment, tell us where the registry is and where your containers are, and we'll start assessing. At that point we're just alerting. Then maybe you have a PCI-based application, or a certain application with PII data you're really worried about.

Maybe you want to build a policy specific to that application and move it to block, because you're really worried about the security of that particular application. We give you the granularity to segregate out your applications, your containers, your images, etc.

Like I said, you’ll suck your users in, in groups and we don’t really care what kind of identity you have. To basically provide access control if you are using some other access control mechanism we integrate with that as well. We compliment, as I said, providing audit trails and things like that.

Then there's your cluster management. Kubernetes is a native capability for us, and since OpenShift sits on top of that, that's where the tight integration happens from our perspective. As you lay that environment down we just integrate with the Docker daemon and function as a proxy in that regard.

Now that you’ve done that and built out all your policy the idea is you attach that policy on the defender. The defender wears multiple hats in this environment and so think across your OpenShift deployment and this would go on all your slaves from matching the perspective as you go across. So in my demo I’ll show…I basically have a 3 node Kubernetes cluster with a manager and 2 slaves and so deploy the defender across that policy topology and so as I start deploying pods into that entity we’re going to start assessing the state of those pods inventory and the assess the state of what we discovered in that inventory is kind of how it works.

So really that defender, as I said, is there for configuration management, assessing what's there and providing that inventory data; for enforcing the policies that the console has set; and for providing an audit trail of all the transactions that happen across that daemon, regardless of what level of transaction they are.

Think of a scenario where you've done your due diligence, limited sudo from a master perspective and built in all your control policies for all the oc commands. Now an admin tries to run an oc command and gets an access denied. What does a normal admin do? They're going to run sudo oc to try to get around that. If they do that, we've got them from a forensic perspective, and we detail that out in the forensic trail.

This is the easiest way to look at it: you have an orchestration manager, you have your Docker engine with your orchestration agent, and we just integrate those defenders in. Through the orchestration, if you are just using vulnerability management and runtime defense, you only need us on the slave nodes to provide the vulnerability state and the runtime state of the containers in that topology.

Before I get into the demo I’ll pause and ask if there are any questions.

Diane: So far I haven’t seen any questions yet. Why don’t you move right into the demo and then we’ll have some time right after the demo and that will be great.

Perfect! As we go through, this is our Twistlock console, and what you're seeing here is a very simple OpenShift deployment with a Red Hat console, a Node 1 and a Node 2. I've already deployed Twistlock on that topology, and like I said, out of the box Twistlock already starts assessing the images, looking at the risk state of the containers that are part of it, and looking at the access violations, system calls, process violations and network violations that exist across this topology.

The idea is, first and foremost, that as part of your OpenShift deployment you have a registry, which is a result of your configuration. From a flow perspective, we've broken the UI into 3 buckets. First is Configure: here I want to configure what I want to protect. So my users and groups: maybe I'm in an LDAP environment or SAML. And my orchestration: maybe it's Swarm or Kubernetes, or maybe it's Mesosphere or OpenShift; I can integrate any of those. These are just config-file settings that you toggle on; whatever orchestration you have, you tweak the config file and light that capability up.

We have a couple of different ways to deploy the defender, and the native way is right through a curl command; you would run it directly on your slave inside your Kubernetes cluster. This is a good point to pause: we do have a large bank of customers that are already running OpenShift. One of the things we're doing from a product perspective is generating deployment scripts for Twistlock inside the OpenShift environment, so it's not just a native install but actually a managed entity inside your OpenShift environment. We're about 2 to 4 months out from having that capability, but in the interim think of it as a manual deployment inside your OpenShift topology. Once we get the deployment scripts it will tie into the full capabilities of the OpenShift topology.
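The curl-style install he mentions would look something like the sketch below; the console address and script path are hypothetical, and the real URL would come from the product's install notes:

    # Fetch the defender install script from the console and run it on
    # the node (address and path are made up for illustration).
    curl -sSL -k https://twistlock-console.example.com:8083/defender.sh | sh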

The idea is I have those nodes and I've scaled out the OpenShift environment. Like I said, for access control you deploy on the master, but if you want to do vulnerability management, compliance and runtime defense you use the nodes; you have those deployed out for maximum capability.

Here is where you start building policy; think Kubernetes in this regard, and it ties into the oc commands. The idea is I have all the API calls and I can get very granular: do I want to allow or deny them access to those APIs? This is that level of granularity I was talking about.

Then you start building out: here's Application A, here's Application B, and maybe I segregate at that level, or maybe I segregate at staging and things like that. I start building out policies that line up to that, start segregating out who can do what, and get an audit trail off the back end.

From a trust perspective the key thing is vulnerability, and here are the vulnerabilities. What I can do is say, maybe I'm building a Java-based application; simply enough, by default you see that notification set to alert, and maybe I want to tweak that to block, and things like that.

Here in compliance we go through and we have all the CIS benchmarks. As an example of one thing I wanted to show, a quick demo: I'll limit memory usage on a particular application. For check 5.10, from the master perspective, I'm going to set it to block and save that from a policy perspective. So I'll toggle back. Now I have this host running, and what I'll do, simply enough, is build another app.
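For reference, the memory cap that a check like CIS 5.10 looks for is set when the container runs; a sketch with illustrative names:

    # A container with an explicit memory limit passes the check:
    docker run -d --memory=512m myorg/app:1.0
    # On the OpenShift side, the equivalent is a resource limit on the deployment:
    oc set resources dc/myapp --limits=memory=512Mi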

Obviously this application we built has a vulnerability in it, and now we've set the policy, so let's run that image. I'm going to try and run that image from the master, and now it's going through and building it. What you'll find when I go to Status is that it's going to get blocked at container create. Once I go through and do a describe on that one you'll see a line in there that says no, I can't…hold on.

Now as it’s going through what we’re going to see as it gets to that point we’ll see a note in here and I’ll just do a refresh and you’ll see where we come in and block that perspective. See there is a successful pull and now it’s trying to do a container create and essentially it’s going to get blocked from a scenario perspective.

So as we go through and I’m waiting for that to catch up and I’ll give it a second. But the idea is that now we have that block in and as that deployment happens what you want to think about is when it gets to the point where it actually does the deployment off that pod, when that call goes through the daemon we’re going to pick it up through the socket and assess that image to see if it has vulnerabilities and restrict that from being done.

Obviously, every time the deployment scripts try to push that into the pod we're going to restrict it. OpenShift is going to continually retry, and we're going to block it every single time until you basically pull that out. So what I'll show from a scenario perspective is: you attached this image, and simply enough I can remove that vulnerability and you can see the deployment go through. So we'll let this catch up, and it's going to make a liar out of me.

So it’s created and still started and I’m waiting for it to catch up and then I’ll show because I know I have it what I wanted to show. I’ll get it running in a second but the demo gods always mess with me in that regard.

Diane: Absolutely when doing a live demo.

Yeah but what I’m getting at is when it gets to the point where it’s actually building the containers, see it started the container…wait maybe I didn’t have the polices set and so maybe I missed setting the policy so let’s do that. Oh I did it on the wrong policy. Let me do 510 and set it to block and save it. I’ll toggle this one and go to 510 and turn it off so I don’t break anything and save that.

Now, to do the demo correctly, let's just do another one. In the interest of time I'm going to build another one, and because now I have the policy set right, you'll see when it gets to that point that it's actually restricted from a policy perspective.

Now, walking back through while that's cooking: what we have here is a view across all of your pods. You can see the pods on here, the state of each pod, and the last commands that were run across that pod, and here is the first one I did. I can pull up some compliance information: obviously this container is running as root, AppArmor is not configured, seccomp is not configured, and limit memory usage is the 5.10 check I set.

I can see the host topology here. We're not really a host-layer protection product, but we work from the Docker daemon: you deploy OpenShift on top of that Docker daemon and we're protecting across that topology. You can see how the host is running; think network card, think storage, because if those get affected, that affects the containers on the topology.

Then as we go through, here are the daemon config files, and notice everything looks good. Now we're looking at the images as they exist across the environment: here is the registry and here's docker.io. What we bring in from a product perspective is a breakdown of the vulnerabilities, so you can go out to the NVD database and see the state of each vulnerability. We're really providing eyes into the environment: here is that image, here is the compliance posture of that image, here is the process info, here is what we detected as part of that image. Remember, this goes back to that whitelist I was talking about.

Here are the packages that we determined were part of it. As we go across you see this is a pretty big image. Then you can look at the CVE state of each particular package in that image and see how they light up. Then, from a configuration perspective, you can see where that image is deployed and look at that configuration data.

So across all of this you can say this one has 29 vulnerabilities, and I can go in here, see the vulnerability state, and go out to the NVD database from there.

So really all you’re looking at and all you’ve done is deployed through your CI pipeline process whatever that might be, you integrated into the registry and we’ve already added the registry in here and so now you’ve dumped that image in the registry and we’re going to automatically assess the state of those images that sit in that registry.

Here’s the Red Hat registry, here’s 3.1 and here is the state of the images in 3.1 right? Obviously 3.3 is better right in regards to 3.2. We move across and we’re showing the state of those images that sit in those registries. Then, as I said, as you deploy those images across a topology and now start deploying pods we’re giving you eyes into the state of those pods across that OpenShift deployment.

Now I’ll take a breath and see if there are any questions.

Diane: It’s really cool and interesting to see the vulnerabilities in some of the OpenShift containers too. Hopefully 3.3 is better.

I did a lot of work a while back with compliance. Do you have any output, I mean I see CSV but like a vulnerability report?

Right now in the product we're going through and starting on some roadmap items, so some things will come out in that regard. The easiest way to answer the question is that throughout the product, if you need to export anything, we give you the ability to do a CSV export. You can filter to get very specific information, not just a raw dump. But throughout the product it's a CSV export of the vulnerability state, the compliance state and things like that.

Diane: That’s really handy to have. Now if you can only make it look at all the licenses for all the components.

That’s a really interesting question and we’ve actually…so when we sat down and thought about this product, essentially we looked at where does it fit in the market. Obviously there are a lot of good products out there that already do hardware based assessments and things like that, Cloud Passage, TripWire, etc. as we go through.

Then we have a great partnership with a company called Sonatype, which does licensing integration as well. Sonatype is obviously deep in licensing, so we said we're going to stay in our niche from that perspective.

Diane: Yes that was the bane of my existence.

Yeah I’ve heard that from quite a few customers so I know exactly where you’re coming from.

Diane: Yeah, I think in a new containerized world where people can bring containers in from lots of different places, this security level of auditing and compliance reporting is really necessary. But I'm also wondering about the case where someone has a piece of software in their container that your company doesn't have the license for; that's another big thing.

Yeah. The easiest way to answer that is that Sonatype is one of our partners and they do have that capability. We've actually integrated; we're finalizing the engineering of getting our products together. So if you were a Sonatype customer, that would be one thing you could get, because they have the licensing and the Java-specific information, and we have the containerized ecosystem. We're bridging those two capabilities together from a product perspective to really help answer that question, but it is very product specific in that regard.

Diane: Is your demo complete?

Yeah, so let me go back and see if we got that restriction. Obviously I demoed this all morning and it was working…there it is! Now you can see "error syncing pod, failed to create running containers"; basically the policy blocked it, right. Now it's in a looping state: OpenShift keeps trying to make that oc call, and every time the call comes through the socket we integrate with, we block it. Simply enough, to show the flexibility of it, I can go into Trust, go into Compliance, go into this policy, check my setting, go into 5.10 and set it back to alert and save it. Then when I go back in here you'll see that deployment kick off, and OpenShift will go off, and now it's off and running.

Diane: We’re seeing it now in the terminal mode and we’re also seeing it on the Twistlock page. But if the IT team installs Twistlock and I’m a developer how does this surface in the OpenShift UI if I’m using it online?

That’s kind of what we’re talking about is that. One of the things we’re working on from a product perspective is actually bringing that to fruition and actually having some exposure in the UI. I know my CTO just had a meeting with Red Hat and so those conversations are going on as we speak and we’re working through that. So all the different vendors are working to make sure it’s the most seamless experience possible. So that’s one of the things we have to do.

Like I said, where we’re at right now from a product perspective is manual integration into your product. But one of the things we’re working to bring in is that automation through the UI.

Diane: Perfect! I think that’s a natural next step. Hopefully we can get that done and get you back. A developer would go okay a WTF.

The reality is that this is how it works on the back end. To bring that scenario home, what I'm showing here is just the reaction. As that deployment goes through in the UI, this is what's happening on the back end through the daemon; the deployment failure would get exposed back in the UI, and when they look in the logs, this is the kind of information they'll see on the back end.

So kind of what I showed here is the back end integration and then what we need to build is the front end experience.

Diane: Don’t get me wrong I’m loving you can do this and block that. I think that’s a wonderful thing and I think anyone who is on the outside of the house is really loving it too. I’m just trying to visualize in my head where to expose it in the UI…

Yeah! What I should do, and maybe we should follow up with you, Diane, is as we get further down the engineering cycle we'll schedule another call, and six months later it's "here's where we're at", as an example.

Diane: Absolutely I think that would be a great thing to do. You’ve done a really good job because there haven’t been too many questions here. I’m going to give everyone a last chance and maybe if you can go back to your slides and put up your informational slide so people know how to get a hold of you.

Do you have questions? We’ll wrap this up in a bit. We’re almost at the end of our hour so if anyone has any questions put them in the chat or raise your hand and I’ll unmute you.  But I think you’ve done an awesome job covering this and hopefully there won’t be any flaming containers out there. We’ll post this video in the blog post at Sig.com shortly but it might take a day or two to get them cycled through and we’ll have them available and ready for you on our YouTube channel as well.

Thanks again Michael and we’ll get you back in 6 months or hopefully sooner when it’s more integrated and exposed for developers. Thanks again.

Thank you very much for getting me on and it was a pleasure talking to everybody.

What’s Next?

  • Download our guide on how to securely configure a Linux host to run containers.
  • Sign up for our guide on how containers can revamp your approach to security.