Microsegmentation and isolation are fundamental network security best practices few would argue with. Microsegmentation helps reduce risk by containing potential compromise and reducing its “blast radius.” Done well, it makes it more difficult for an attacker to move within an environment after compromising a single component within it.
Technical tools for achieving microsegmentation date all the way back to VLANs on physical switches and are a fundamental feature of software-defined networks (SDNs). So-called “Next Gen” Firewalls (NGFWs) also often heavily emphasize isolation scenarios in addition to their Layer 7 protocol awareness. However, actual adoption and usage of microsegmentation significantly lags awareness of its benefits.
The reasons are primarily complexity and operational burden. It’s hard to manually configure SDN and VLAN architectures that map closely to the actual topology of your apps. It’s harder still to maintain those over time as your apps evolve. Most organizations, for example, provide some basic segmentation, such as separate networks for internet-facing traffic, internal traffic and storage traffic. However, very few take segmentation down to the app itself and implement truly “least privilege” segmentation for every workload in the environment. For example, while most organizations will connect the front end of a public-facing app to the Internet through a DMZ, few will separate the traffic behind this front end, such as segmented compartments for cache, queue and storage traffic.
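To make “least privilege per workload” concrete, here’s a minimal sketch of what such a rule might look like in a Kubernetes environment, expressed as a NetworkPolicy. The namespace, labels and port are hypothetical, and this assumes a CNI plugin that enforces NetworkPolicy: only front-end pods may reach the cache, and only on its service port.

```yaml
# Hypothetical example: restrict ingress to the cache tier so that
# only front-end pods can connect, and only on the cache port.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: cache-allow-frontend-only
  namespace: shop        # hypothetical namespace
spec:
  podSelector:
    matchLabels:
      tier: cache        # applies to cache pods
  policyTypes:
  - Ingress
  ingress:
  - from:
    - podSelector:
        matchLabels:
          tier: frontend # only the front end may connect
    ports:
    - protocol: TCP
      port: 6379         # hypothetical cache port (e.g. Redis)
```

Any traffic to the cache tier from queue workers, storage services or anything else is dropped by default, which is exactly the compartmentalization described above, but it must be written and kept in sync by hand.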
The rise of cloud native app architectures is both the greatest challenge to solving this problem and its potential salvation. Consider that a typical legacy app, when refactored into microservices, may have literally an order of magnitude more discrete components, and that those components will be updated much more frequently than in the past. If organizations struggled with microsegmentation when dealing with a few VMs, what hope could they have for segmenting that same app when it’s running in two dozen containers and updated daily? Fortunately, the same notions of programmable infrastructure that enable running these apps in a highly automated, orchestrated manner also enable defining segmentation and isolation in a declarative, programmatic way.
This is the key innovation that enables a set of technologies, known as service meshes, to abstract the network topology and routing away from the infrastructure layer and build it into the app itself. A service mesh allows developers to easily link microservices together using a declarative, model-based approach that sits “above” the physical and even traditional SDN layers in the underlying infrastructure. For example, a service mesh can enable a developer to build front-end components that simply connect to a cache “service,” without being hard-coded to a specific IP address, host or even cloud provider. The service mesh ensures this service is always available, manages access to it, provides default encryption of the traffic between services and enables advanced load balancing scenarios like canary deployments and A/B testing. Examples of service meshes include Istio and Linkerd.
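As an illustration of this declarative, name-based routing, here’s a sketch of how a canary deployment for that cache service might be expressed with Istio. The service name, subsets and weights are hypothetical; callers simply address “cache” and never see the split.

```yaml
# Hypothetical Istio example: callers connect to the logical name
# "cache"; the mesh routes 90% of requests to v1 and 10% to v2.
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: cache
spec:
  host: cache
  subsets:
  - name: v1
    labels:
      version: v1
  - name: v2
    labels:
      version: v2
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: cache
spec:
  hosts:
  - cache               # logical service name, not an IP or host
  http:
  - route:
    - destination:
        host: cache
        subset: v1
      weight: 90        # stable version
    - destination:
        host: cache
        subset: v2
      weight: 10        # canary version
```

Shifting more traffic to v2 is just an edit to the weights; no client code or infrastructure change is involved.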
While service meshes provide important capabilities for general app deployment and management, they also offer a potential solution for making microsegmentation a practical reality. The core problem in doing microsegmentation with legacy tools is the disconnect between the tooling providing the segmentation and the app whose components are being segmented. If the rules used for microsegmentation aren’t perfectly correct, you’ll break the app; if they’re not highly precise and specific, you lose much of the potential advantage of doing the segmentation at all. Further, you have to ensure these rules stay completely in sync with the app as it evolves over time. Consider the thousands of apps large organizations run and the constant barrage of security incidents competing for their attention, and it’s easy to see why most take a very coarse-grained approach to segmentation.
Service meshes address this problem by moving the segmentation definition out of the infrastructure and putting it alongside the app itself. No longer does the infrastructure need to be manually configured to align with the app. Instead, because the solution is entirely in software and entirely run by the orchestrator (like Kubernetes) used to run the apps, developers can declare security as part of their app deployments directly. The same mesh model used in development and test can follow the app into production with perfect fidelity. This makes it practical to isolate each individual microservice without the friction that historically prevented wide-scale use when manual infrastructure configuration was required.
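To show what declaring segmentation alongside the app can look like, here’s a sketch using an Istio AuthorizationPolicy (available in recent Istio releases; the namespace, labels and service account are hypothetical). Because it’s just another manifest, it can live in the same repository as the app and deploy with it.

```yaml
# Hypothetical Istio example: only workloads running as the
# "frontend" service account may call the cache; everything else
# in the mesh is denied by this policy's implicit default.
apiVersion: security.istio.io/v1beta1
kind: AuthorizationPolicy
metadata:
  name: cache-allow-frontend
  namespace: shop       # hypothetical namespace
spec:
  selector:
    matchLabels:
      app: cache        # policy applies to the cache workload
  action: ALLOW
  rules:
  - from:
    - source:
        principals: ["cluster.local/ns/shop/sa/frontend"]
```

Note that the rule is written in terms of workload identities, not IP addresses, so it stays correct as pods are rescheduled, scaled or redeployed.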
Like NGFWs, service meshes like Istio can also enforce protocol compliance (such as ensuring only HTTP traffic is allowed between entities) and encryption of transport protocols. Unlike NGFWs, though, the configuration isn’t divorced from the app and doesn’t need to be separately managed. Instead, every time the app is built, it can be validated against the same mesh policy used in production, and that policy can be versioned in any repository, like GitHub. Equally critically, because the service mesh is purely software, that same configuration is entirely portable, even between different cloud providers and on-premises deployments.
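For the encryption piece specifically, here’s a sketch of how mutual TLS can be required for all workload-to-workload traffic in a namespace using an Istio PeerAuthentication resource (the namespace is hypothetical; the resource name “default” is the Istio convention for a namespace-wide policy).

```yaml
# Hypothetical Istio example: require mutual TLS for all traffic
# between sidecar-injected workloads in the "shop" namespace.
# Plaintext connections to these workloads are rejected.
apiVersion: security.istio.io/v1beta1
kind: PeerAuthentication
metadata:
  name: default
  namespace: shop       # hypothetical namespace
spec:
  mtls:
    mode: STRICT
```

Because the sidecars handle certificate issuance and rotation, the apps themselves need no TLS code at all, and the same manifest applies unchanged in any cluster running the mesh.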
Service meshes provide more than just microsegmentation, but they make microsegmentation a practical reality at scale for the first time. Much as Kubernetes abstracts the underlying compute capability from the hardware and operating system, so too do service meshes abstract routing and connectivity from the underlying networking. Once routing and connectivity are separated from the infrastructure and can be treated as code, it becomes practical to have a truly application-tailored, least privilege approach to microsegmentation.
If this topic interests you, watch my 2018 KubeCon + CloudNativeCon NA session.