5 ways a service mesh can better manage app data sharing

Matthew David, Digital Leader, Accenture

A service mesh is a dedicated layer you build into your apps that manages how the various elements of an app share data with one another. It's a different approach from other systems that manage application-level communication.

Service meshes create a visible infrastructure layer that reports on the health of the different parts of an app as they interact, making it easier to optimize communication and avoid downtime as your app grows.

Cloud-native applications are often architected as a complex network of distributed microservices running in containers, with Kubernetes as the de facto standard for orchestrating those containers.

But many companies that adopt microservices quickly run into microservices sprawl. The rapid growth in the number of services creates challenges in standardizing routing among services and versions, as well as authorization, authentication, encryption, and load balancing across a Kubernetes cluster.

When that happens, you'll need to use a service mesh to manage application data sharing across your controlled Kubernetes environment. Here are five key benefits of doing so.

1. Separates business logic from network and security policies

Using a service mesh, you can separate the application's business logic from its network and security policies. A service mesh gives you the ability to connect, secure, and monitor your microservices.

  • Connect: A service mesh lets services discover and talk to one another, easing the flow of traffic and API calls between services and endpoints. 
  • Secure: A service mesh makes policy enforcement more effective and establishes reliable, secure communication between services.
  • Monitor: A service mesh makes your microservices easier to observe. You can integrate out-of-the-box monitoring tools, such as Prometheus and Jaeger, into a service mesh.

With these three capabilities, a service mesh gives you control over your entire network of distributed microservices.
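
To make the "connect" point concrete, here is a minimal sketch of what application code can look like once a mesh handles discovery, load balancing, and transport security: the service simply calls a peer by its cluster DNS name over plain HTTP, and the sidecar does the rest. The service name, namespace, and port below are hypothetical.

    # Minimal sketch: the application calls a peer service by its cluster DNS name
    # over plain HTTP. The mesh sidecar intercepts the request and transparently
    # handles service discovery, load balancing, mTLS, and retries.
    # The service name, namespace, and port are hypothetical placeholders.
    import json
    import urllib.request

    def get_order(order_id: str) -> dict:
        url = f"http://orders.default.svc.cluster.local:8080/orders/{order_id}"
        with urllib.request.urlopen(url, timeout=2.0) as resp:
            return json.load(resp)

    if __name__ == "__main__":
        print(get_order("1234"))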

2. Provides greater transparency into complicated interactions

Decomposing an application into several microservices doesn't automatically turn it into a network of truly independent services. The app still behaves as a single, standalone application: the microservices share the same code repository and are part of one architecture. Each microservice is less a service shared across multiple applications than a component of its parent application.

These distributed components are why software developers want the ability to trace a request across services and then debug each service.

The service mesh becomes a dedicated infrastructure layer through which all service-to-service communication passes. Its role in the DevOps stack is to provide uniform telemetry at the level of individual service calls.

Service meshes analyze data such as source, destination, protocol, URL, status codes, latency, and duration. In many ways, the data a service mesh captures is similar in principle to that in a web server log.
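
As a rough illustration of that telemetry, the sketch below records the same kinds of fields for each outbound call: source, destination, protocol, URL, status code, and duration. The field names are illustrative only, not any particular mesh's schema; in a real mesh the sidecar emits this data automatically, without any change to application code.

    # Toy sketch of the per-call telemetry a mesh sidecar records for every
    # service-to-service request. Field names are illustrative only; in a real
    # mesh this data would feed tools such as Prometheus or Jaeger.
    import json
    import time
    import urllib.request

    def traced_call(source: str, destination: str, url: str) -> bytes:
        start = time.monotonic()
        status = 0
        try:
            with urllib.request.urlopen(url, timeout=2.0) as resp:
                status = resp.status
                body = resp.read()
        finally:
            record = {
                "source": source,
                "destination": destination,
                "protocol": "HTTP/1.1",
                "url": url,
                "status_code": status,
                "duration_ms": round((time.monotonic() - start) * 1000, 2),
            }
            print(json.dumps(record))
        return body

    # traced_call("checkout", "orders", "http://orders.default.svc.cluster.local:8080/health")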

3. Improves security in service-to-service communication

An increase in microservices leads to a corresponding rise in network traffic, which gives attackers more opportunities to break into the communication stream. Implementing mutual Transport Layer Security (TLS) across the stack with a service mesh helps secure interactions within the network.

A service mesh secures communication in three vital areas: 

  • Authenticating services
  • Encrypting traffic among services
  • Enforcing security policies

The service mesh's proxies handle certificate-based authentication and authorization, validating requests and enforcing access controls. Many third-party service mesh solutions let you create authorization rules based on the identities in the certificates exchanged via the mutual TLS protocol.
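
As a rough sketch of what that looks like under the hood (using Python's standard ssl module, with hypothetical certificate and key paths), the server side of a mutual-TLS exchange requires a client certificate signed by the mesh's certificate authority and can read the caller's identity from it before allowing the request. In a real mesh, the sidecar proxies do this on the services' behalf.

    # Sketch of the server side of a mutual-TLS check, similar to what a sidecar
    # proxy performs for a service: require a client certificate signed by the
    # mesh CA, then read the peer's identity from it.
    # Certificate and key file paths are hypothetical placeholders.
    import socket
    import ssl

    context = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
    context.load_cert_chain(certfile="server.crt", keyfile="server.key")
    context.load_verify_locations(cafile="mesh-ca.crt")  # trust the mesh's CA
    context.verify_mode = ssl.CERT_REQUIRED              # this makes the TLS "mutual"

    with socket.create_server(("0.0.0.0", 8443)) as listener:
        with context.wrap_socket(listener, server_side=True) as tls_listener:
            conn, addr = tls_listener.accept()
            peer_cert = conn.getpeercert()                # the calling service's identity
            print("authenticated peer:", peer_cert.get("subject"))
            conn.close()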

That said, a service mesh won't solve all of your communication challenges. Stay vigilant for any opening an attacker could use to breach your environment.

4. Offers better encryption

With so much communication among microservices, robust encryption is essential to your infrastructure. To this end, a service mesh manages keys, certificates, and TLS configuration to keep traffic encrypted continuously.

A service mesh provides policy-based authentication that lets two services establish a mutual TLS configuration for encrypted service-to-service communication and end-user authentication. The responsibility for implementing encryption and managing certificates moves from the app developer to the framework layer.
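
For contrast, here is a sketch of the client side of the same exchange, the part a sidecar performs so that application developers never have to touch keys or certificates themselves. The hostname, port, and file paths are hypothetical.

    # Sketch of the client side of mutual TLS: present our own certificate and
    # verify the server against the mesh CA. With a service mesh, the sidecar
    # does this for you and the application just sends plain HTTP to localhost.
    # Hostname, port, and file paths are hypothetical placeholders.
    import socket
    import ssl

    context = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
    context.load_verify_locations(cafile="mesh-ca.crt")                   # trust the mesh CA
    context.load_cert_chain(certfile="client.crt", keyfile="client.key")  # present our identity

    host = "orders.default.svc.cluster.local"
    with socket.create_connection((host, 8443)) as raw_sock:
        with context.wrap_socket(raw_sock, server_hostname=host) as tls_sock:
            tls_sock.sendall(b"GET /health HTTP/1.1\r\nHost: orders\r\n\r\n")
            print(tls_sock.recv(4096).decode(errors="replace"))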

5. Makes technical needs easier to tackle

Service mesh definitions often center on service-to-service communication, but a mesh gives you much more than that.

With a service mesh, you can pinpoint exactly where in the chain of services a failure occurred. 

You can break down how a service mesh addresses your technical needs with the following three examples: 

  • Visibility: End-to-end traffic and service monitoring, logging, and tracing
  • Security: Validated TLS authentication for communication among services without code changes
  • Policy: Label-based routing and tracking of routing decisions (see the sketch after this list)
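
To make the policy item concrete, here is a toy sketch of the weighted, label-based routing decision a mesh's data plane makes, for example sending 90 percent of traffic to a v1 subset and 10 percent to a v2 canary, and logging each decision so it can be tracked. The labels and weights below are hypothetical.

    # Toy sketch of weighted, label-based routing: pick a destination subset in
    # proportion to its weight and log the decision so it can be tracked.
    # The subsets and weights below are hypothetical.
    import random

    ROUTES = [
        {"labels": {"app": "orders", "version": "v1"}, "weight": 90},
        {"labels": {"app": "orders", "version": "v2"}, "weight": 10},
    ]

    def pick_route(routes):
        total = sum(r["weight"] for r in routes)
        roll = random.uniform(0, total)
        cumulative = 0
        for route in routes:
            cumulative += route["weight"]
            if roll <= cumulative:
                return route
        return routes[-1]

    chosen = pick_route(ROUTES)
    print("routing decision:", chosen["labels"])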

Service mesh limitations

By using a service mesh, you can resolve the challenges that come with running a sprawling microservices architecture. But a service mesh can also introduce a few undesirable effects, including:

  • Added complexity: Proxies, sidecars, and other components increase complexity in environments that are already convoluted.
  • Slowness: Adding a layer on top of existing layers has the potential to slow down network efficiency.
  • Learning curve: Developers and operations teams need to understand the impact of a new service layer.

Even with these limitations, service meshes offer benefits in the right environments, especially those that include small, decomposed applications running on Kubernetes. 

Who is building service meshes?

There are many solid open-source service mesh providers. The three leading open-source tools are Consul, Istio, and Linkerd. Here's a quick rundown of each.

Consul

Consul comes with a complete set of features required for a service management framework. It originated at HashiCorp as a service discovery and configuration tool and integrates closely with Nomad. Over the years, it has grown to support multiple data centers and container management platforms, including Kubernetes.

Istio

Istio is a Kubernetes-native solution originally developed by Google, IBM, and Lyft, which contributed the Envoy proxy at Istio's core. It is backed by Google, IBM, and many other companies.  

Istio splits the data and control planes by using a sidecar-loaded proxy. The sidecar caches information so that it does not need to go back to the control plane for every call. The Kubernetes cluster manages the control plane as pods, so you get better resilience if a single pod fails in any part of the service mesh.
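
The caching idea can be illustrated with a toy pull-through cache. This is a sketch only: Istio's control plane actually pushes configuration to the proxies rather than having them poll, and the fetch function below is a hypothetical stand-in, not a real Istio API. The point is simply that the proxy keeps the last configuration it received and refreshes it only when it goes stale, so individual calls never wait on the control plane.

    # Toy pull-through cache illustrating why the sidecar rarely needs to talk to
    # the control plane: keep the last configuration received and refresh it only
    # when it goes stale. fetch_config_from_control_plane() is a hypothetical
    # stand-in, not a real Istio API.
    import time

    def fetch_config_from_control_plane() -> dict:
        # Placeholder for a call to the control plane.
        return {"routes": [{"host": "orders", "subset": "v1"}]}

    class ConfigCache:
        def __init__(self, ttl_seconds: float = 30.0):
            self.ttl = ttl_seconds
            self._config = None
            self._fetched_at = 0.0

        def get(self) -> dict:
            stale = time.monotonic() - self._fetched_at > self.ttl
            if self._config is None or stale:
                self._config = fetch_config_from_control_plane()
                self._fetched_at = time.monotonic()
            return self._config

    cache = ConfigCache()
    print(cache.get())   # first call fetches; later calls within the TTL do not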

Linkerd

Linkerd is another popular service mesh that runs on top of Kubernetes, and since its v2 rewrite its architecture closely mirrors Istio's. But Linkerd focuses on simplicity: it is smaller and faster than Istio, though it currently has fewer features.

How service meshes work

A service mesh is a manageable layer in your network for microservices. The mesh provides service discovery, load balancing, encryption, authentication, and authorization. 

You implement a service mesh by providing a proxy instance, called a sidecar, for each service instance. Sidecars manage interservice communication, monitoring, and security-related concerns. Because these features are abstracted away from individual services, operations teams can easily maintain the service mesh and run the app in production, while developers can focus on writing, supporting, and maintaining the application code.
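
Here is a minimal sketch of the sidecar idea: a tiny proxy that sits next to the application, accepts its outbound calls on localhost, forwards them to the real destination, and records how long each call took, all without the application knowing it is there. The ports and upstream address are hypothetical, and a real sidecar such as Envoy does far more (mTLS, retries, load balancing, and policy checks).

    # Minimal sketch of a sidecar-style proxy: listen on localhost next to the
    # application, forward each connection to the real destination, and record
    # the call duration. Ports and the upstream address are hypothetical; a real
    # sidecar (e.g., Envoy) also handles mTLS, retries, load balancing, and policy.
    import socket
    import threading
    import time

    LISTEN_ADDR = ("127.0.0.1", 15001)  # the app sends its outbound traffic here
    UPSTREAM_ADDR = ("orders.default.svc.cluster.local", 8080)  # real destination

    def pump(src: socket.socket, dst: socket.socket) -> None:
        try:
            while True:
                data = src.recv(4096)
                if not data:
                    break
                dst.sendall(data)
        except OSError:
            pass  # the other side closed the connection

    def handle(client: socket.socket) -> None:
        start = time.monotonic()
        with socket.create_connection(UPSTREAM_ADDR) as upstream:
            reply = threading.Thread(target=pump, args=(upstream, client), daemon=True)
            reply.start()
            pump(client, upstream)        # forward the app's request upstream
            reply.join(timeout=1.0)       # wait briefly for the response to drain
        print(f"proxied call in {(time.monotonic() - start) * 1000:.1f} ms")
        client.close()

    with socket.create_server(LISTEN_ADDR) as server:
        while True:
            conn, _ = server.accept()
            threading.Thread(target=handle, args=(conn,), daemon=True).start()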

A service mesh complements the tools you use to manage cloud applications, and for this reason, it's a great problem solver. If you are running applications in a microservices architecture, you're probably a good candidate for a service mesh. It helps you tame the complexity that comes with an extensive collection of microservices. 

Get started now

If you're building microservices, you probably anticipate specific needs down the road, such as scaling rapidly and adding new features to meet business needs. Your microservices architecture will likely change as you add more complexity to your environment. That's where a service mesh can help. 

  • Developers focus on the business value they can add instead of having to figure out how to connect services.
  • Apps become more resilient, since the service mesh can redirect requests away from services that have failed.
  • You can continuously optimize communication in your runtime environment by using performance metrics.

Start planning for the future by experimenting with a service mesh now. You will discover a uniform way to connect, manage, and observe microservices-based applications with behavioral insight into, and control of, your networked microservices. 

Just keep in mind that service meshes are still in their early days. Expect plenty of changes ahead.
