Background
As we’ve written here before, there’s increasing urgency for organizations—especially those operating in a regulatory environment—to adopt a zero trust network architecture. Just what that means and how to do it may not be immediately clear. When it comes to microservices applications, the National Institute of Standards and Technology (NIST) offers guidance for microservices security in the SP 800-204 series, co-written by Tetrate co-founder Zack Butcher (which we’ve also covered on this blog).
NIST’s reference architecture for microservices security is Kubernetes and the Istio service mesh. In this article, we’ll look at NIST’s recommendations for using a service mesh for authentication and authorization in microservices applications.
At the heart of a zero trust posture is the assumption that an attacker is already in your network. All of these policy recommendations will help prevent potential attackers from pivoting to other resources should they breach your network perimeter. If you use a service mesh as described in the NIST reference platform, all of these capabilities are built into a dedicated infrastructure layer that acts as a security kernel for microservices applications. This means security policy can be applied consistently (and provably) across all your apps—and so your product development teams don’t have to be security experts for your apps to run safely.

Service mesh allows fine-grained access control to be layered on top of traditional security measures as part of a defense-in-depth strategy. The mesh sits as a powerful middle layer in the infrastructure: above the physical network and L3/L4 controls you implement, but below the application. This allows the more brittle and slower-to-change lower layers to be configured more loosely—allowing more agility up the stack—because controls are accounted for at higher layers.
1. Place a PEP around Every Service, Ingress, and Egress
At its core, the proxies in the mesh data plane act as reference monitors: dedicated components of the system that enforce access control policy over all subjects and objects—and that are non-bypassable, protected from modification, and verified and tested to be correct.
Policies are defined in the service mesh control plane and the mesh itself takes care of mapping those policies to low-level configuration of the Envoy proxies in the data plane. Those proxies mediate all application communication, acting as policy enforcement points around every service and at ingress and egress. The mesh enforces authorization and authentication controls that are external to (and independent from) the application and independently verifiable. It’s this aspect of the mesh as a security kernel that makes it critical for microservices security.
2. Support Strong, Provable Workload Identity and mTLS Between Services
Every workload should have a strong, provable identity, and it should be possible to enforce mutual TLS (mTLS) communication between workloads using those identities (SP 800-204B, SAUN-SR-1). mTLS ensures that communication between services is mutually authenticated as well as confidential and tamper-resistant.
The policy to require mTLS should be configurable at multiple levels of specificity, such that lower (more specific) levels inherit configuration from higher levels, with the option to override. At minimum, you should be able to require mTLS: a) globally (the entire service mesh); b) per namespace; c) per workload/microservice; d) per port.
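In Istio, for example, this layered mTLS requirement is expressed with PeerAuthentication resources. Here is a minimal sketch, assuming a default install where istio-system is the mesh root namespace; the billing namespace, workload label, and port are hypothetical:

```yaml
# Mesh-wide default: require mTLS for every workload in the mesh.
apiVersion: security.istio.io/v1beta1
kind: PeerAuthentication
metadata:
  name: default
  namespace: istio-system      # mesh root namespace in a default install
spec:
  mtls:
    mode: STRICT
---
# Workload-level override (hypothetical): one legacy port still accepts
# plaintext while every other port on the workload stays STRICT.
apiVersion: security.istio.io/v1beta1
kind: PeerAuthentication
metadata:
  name: legacy-billing
  namespace: billing
spec:
  selector:
    matchLabels:
      app: legacy-billing
  mtls:
    mode: STRICT
  portLevelMtls:
    8080:
      mode: PERMISSIVE
```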
To ensure that a server is the authorized location for a service and to protect against network hijacking, there should be a secure naming service that maps the server identity to the microservice name provided by the secure discovery service or DNS (SP 800-204B, SAUN-SR-2).
This is hard to get right, which is why you need a dedicated layer like a service mesh to do it for you. And you don’t need to turn on mTLS for everything all at once: you can add it incrementally where it’s most needed and as your organization’s appetite for adoption grows, as in the sketch below. For a deep dive into how mTLS works, see our post on mTLS by the Book.
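With Istio, for instance, one common migration path is to start a namespace in PERMISSIVE mode, which accepts both mTLS and plaintext traffic while clients are onboarded, and tighten it to STRICT later. A minimal sketch, with a hypothetical payments namespace:

```yaml
# Transitional namespace-level policy: accept both mTLS and plaintext while
# clients migrate; change mode to STRICT once all traffic is mTLS.
apiVersion: security.istio.io/v1beta1
kind: PeerAuthentication
metadata:
  name: default
  namespace: payments
spec:
  mtls:
    mode: PERMISSIVE
```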
3. Declare and Enforce Policy about Which Services Can Connect to Which Other Services—For Every Service in Every Application
There should be a facility for declaring policy about which services can connect to which other services, and that policy should be enforced for all services. At minimum, it should be possible to declare that policy at the namespace level—as in, services in namespace A can call services in namespace B. Ideally, even more fine-grained policy declaration should be supported so that it’s possible to specify access restrictions down to the level of specific operations on individual services—as in, service P in namespace A can perform ‘GET /path’ on service Q in namespace B (SP 800-204B, SAUZ-SR-1).
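In Istio, both granularities map onto AuthorizationPolicy. A rough sketch, using hypothetical namespaces a and b, a workload label q, and a service account p:

```yaml
# Namespace-level rule: anything running in namespace "a" may call
# workloads in namespace "b".
apiVersion: security.istio.io/v1beta1
kind: AuthorizationPolicy
metadata:
  name: allow-from-namespace-a
  namespace: b
spec:
  action: ALLOW
  rules:
  - from:
    - source:
        namespaces: ["a"]
---
# Fine-grained rule: only service P (identified by its service account)
# may perform GET /path on service Q in namespace "b".
apiVersion: security.istio.io/v1beta1
kind: AuthorizationPolicy
metadata:
  name: allow-p-get-path-on-q
  namespace: b
spec:
  selector:
    matchLabels:
      app: q
  action: ALLOW
  rules:
  - from:
    - source:
        principals: ["cluster.local/ns/a/sa/p"]
    to:
    - operation:
        methods: ["GET"]
        paths: ["/path"]
```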
4. Attach End User Credentials to Every Request Between Services—with a Standard Way to Extract and Validate Them
Attach end user credentials to every request between services and enforce the presence of those credentials (SP 800-204B, EUAZ-SR-3), even when the application also enforces authentication and authorization independently. This request authorization policy must provide instructions for extracting the credential from the request and validating it (SP 800-204B, EAUN-SR-1). These organization-wide controls make it easier to implement critical functions like audit for the central teams responsible for compliance and controls.
A common pattern is to exchange an external end user credential, like an OAuth bearer token, at ingress for an internal credential encoded within a JWT. The internal credential carries capabilities as well as the principal, allowing some authorization decisions to be made locally. This minimizes calls out to an external authorization service, reducing latency and cost and mitigating the impact of centralized failures.
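In Istio terms, the extraction-and-validation piece is a RequestAuthentication resource, and the presence requirement is an AuthorizationPolicy that only admits requests carrying a valid request principal. A minimal sketch; the namespace, workload label, issuer, and JWKS URL are hypothetical:

```yaml
# How to extract and validate the end user credential (a JWT) on this workload.
apiVersion: security.istio.io/v1beta1
kind: RequestAuthentication
metadata:
  name: jwt-validation
  namespace: frontend
spec:
  selector:
    matchLabels:
      app: web
  jwtRules:
  - issuer: "https://idp.example.com"
    jwksUri: "https://idp.example.com/.well-known/jwks.json"
---
# RequestAuthentication alone only rejects invalid tokens; this policy also
# rejects requests with no credential by requiring a request principal.
apiVersion: security.istio.io/v1beta1
kind: AuthorizationPolicy
metadata:
  name: require-end-user-credential
  namespace: frontend
spec:
  selector:
    matchLabels:
      app: web
  action: ALLOW
  rules:
  - from:
    - source:
        requestPrincipals: ["*"]
```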
5. Use Model-Based Authorization Policies Such as RBAC and ABAC for Resource-Level Authorization
In addition to the authorization and authentication policies that can be executed locally in the data plane, there should be support for model-based authorization provided by an external authorization service. These model-based policies should be expressive enough to contain at least the following elements (SP 800-204B, APE-SR-1), illustrated in the sketch after the list:
- Type—ALLOW or DENY.
- Target/scope—namespace, service/application name, version.
- Source—which services are authorized to access the target.
- Operations—which operations, e.g. the HTTP verbs GET, POST, etc.
- Conditions—authorization constraints expressed in terms of key-value pairs of metadata about the context of the request, such as allowable source and destination IP addresses, the allowed audience, user agent, etc.
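Those elements map fairly directly onto the shape of Istio's AuthorizationPolicy: action for the type, the namespace and workload selector for the target/scope, rules.from for the source, rules.to for the operations, and rules.when for the conditions. A hypothetical sketch, not tied to any particular application:

```yaml
apiVersion: security.istio.io/v1beta1
kind: AuthorizationPolicy
metadata:
  name: orders-read-access
  namespace: orders            # target/scope: namespace...
spec:
  selector:
    matchLabels:
      app: orders              # ...service/application name...
      version: v1              # ...and version
  action: ALLOW                # type: ALLOW or DENY
  rules:
  - from:                      # source: which callers are authorized
    - source:
        principals: ["cluster.local/ns/storefront/sa/web"]
    to:                        # operations: HTTP verbs and paths
    - operation:
        methods: ["GET"]
        paths: ["/orders/*"]
    when:                      # conditions: request-context key-value pairs
    - key: request.auth.audiences
      values: ["orders.example.com"]
    - key: source.ip
      values: ["10.0.0.0/16"]
```

Policies evaluated by an external authorization service (for example, via Istio's CUSTOM action and a configured extension provider) can express richer ABAC or NGAC models, but should carry these same elements.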
For more information on using model-based policy, check out Zack Butcher’s presentation at the joint Tetrate-NIST Conference on ABAC for Microservices Applications Using a Service Mesh.
6. Set a Default Authorization Policy
There should be a default authorization policy that mandates the security best practices we’ve discussed (a sketch follows the list below). The default authorization policy should (SP 800-204B, AP-SR-3):
- Reject all unauthenticated requests.
- Mandate end-user credentials be present on every request.
- Restrict communication to services within the application’s own namespace.
- Allow communication across namespaces only through an explicit authorization policy.
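In Istio, a default along these lines can be sketched as a mesh-wide deny-by-default policy in the root namespace, plus a per-namespace baseline that admits only mTLS-authenticated callers from the same namespace that also present an end user credential; cross-namespace access then requires an explicit ALLOW policy like the earlier examples. The orders namespace below is hypothetical:

```yaml
# Mesh-wide default: an empty policy in the root namespace matches nothing,
# so any request without an explicit ALLOW elsewhere is denied.
apiVersion: security.istio.io/v1beta1
kind: AuthorizationPolicy
metadata:
  name: default-deny
  namespace: istio-system      # root namespace in a default install
spec: {}
---
# Per-namespace baseline: allow only authenticated callers from the same
# namespace that also present a valid end user credential.
apiVersion: security.istio.io/v1beta1
kind: AuthorizationPolicy
metadata:
  name: namespace-baseline
  namespace: orders
spec:
  action: ALLOW
  rules:
  - from:
    - source:
        namespaces: ["orders"]     # restrict to the application's own namespace
        requestPrincipals: ["*"]   # mandate an end user credential on every request
```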
A Deeper Dive
We’ve covered just the highlights of NIST’s authentication and authorization recommendations for microservices applications here. For a deeper dive into the U.S. federal standards for microservices security, check out these resources:
- NIST Standards for Zero Trust: the SP 800-204 Series – Tetrate
- Tetrate’s Guide to Federal Security Requirements for Microservices
- Zack Butcher’s Presentation on NIST Standards for Microservices Security
- mTLS by the Book
- How Istio’s mTLS Traffic Encryption Works as Part of a Zero Trust Security Posture
- NGAC Vs RBAC Vs ABAC
- Zack Butcher’s Presentation on ABAC for Microservices Applications Using a Service Mesh
###
If you’re new to service mesh and Kubernetes security, we have a bunch of free online courses available at Tetrate Academy that will quickly get you up to speed with Istio and Envoy.
If you’re looking for a fast way to get to production with Istio, check out Tetrate Istio Distribution (TID). TID is Tetrate’s hardened, fully upstream Istio distribution, with FIPS-verified builds and support available. It’s a great way to get started with Istio knowing you have a trusted distribution to begin with, have an expert team supporting you, and also have the option to get to FIPS compliance quickly if you need to.
Once you have Istio up and running, you will probably need simpler ways to manage and secure your services beyond what’s available in Istio alone; that’s where Tetrate Service Bridge comes in. You can learn more about how Tetrate Service Bridge makes service mesh more secure, manageable, and resilient here, or contact us for a quick demo.