Semantic monitoring: Why it's great for microservices

Erez Yaary, Fellow, Cloud Chief Technology Officer, Micro Focus

Semantic monitoring (a.k.a. synthetic monitoring) runs a subset of an application's automated tests against the live production system on a regular basis. The results are pushed into the monitoring service, which triggers alerts in case of failures. This technique combines automated testing with monitoring in order to detect failing business requirements in production. It has become a popular way to monitor applications, especially those built on microservice architectures.
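
To make this concrete, here is a minimal sketch of a synthetic check in Python. It exercises one business transaction against production and pushes a pass/fail result to the monitoring service. The storefront API, metrics endpoint, and payload fields are hypothetical assumptions, so adapt them to your own stack and scheduler.

```python
"""A minimal sketch of a semantic (synthetic) monitoring check, assuming a
hypothetical REST storefront and a push-style metrics endpoint."""
import time
import requests

PROD_BASE_URL = "https://shop.example.com/api"       # hypothetical production API
METRICS_URL = "https://monitoring.example.com/push"  # hypothetical metrics ingest endpoint


def run_checkout_test() -> bool:
    """Exercise one business transaction end to end, as a real user would."""
    cart = requests.post(f"{PROD_BASE_URL}/carts",
                         json={"sku": "TEST-SKU", "qty": 1}, timeout=5)
    if cart.status_code != 201:
        return False
    order = requests.post(f"{PROD_BASE_URL}/checkout",
                          json={"cart_id": cart.json()["id"], "synthetic": True},
                          timeout=5)
    return order.status_code == 200


def push_result(passed: bool, duration_s: float) -> None:
    """Report the outcome to the monitoring service, which alerts on failures."""
    requests.post(METRICS_URL, json={
        "check": "checkout_transaction",
        "status": "pass" if passed else "fail",
        "duration_seconds": round(duration_s, 3),
        "timestamp": int(time.time()),
    }, timeout=5)


if __name__ == "__main__":
    start = time.monotonic()
    ok = run_checkout_test()
    push_result(ok, time.monotonic() - start)
```

In practice you would run a script like this on a schedule (cron, a CI job, or the monitoring tool's own runner) and configure the monitoring service to alert on consecutive failures.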

Let’s take a deeper look at semantic monitoring and the current case studies around its use, to help you decide whether to implement it in your own production systems.


The rise of microservices

Containers lower the operational barrier for development teams to build and deliver new software into the hands of their customers quickly while maintaining a consistent pipeline. Containers also drive a design principle of reusable services that perform one single and well-defined task, which is actually what microservices are all about.

Every microservice can support more than one transaction, creating a mesh of reusable components that promote developer and operations efficiency through well-defined practices for lifecycle, scalability, and security.

Because a microservice takes part in multiple business transactions, you can’t verify that it performs according to spec simply by testing it in isolation, as most testing strategies do today. Instead, it should also be tested by running the complete set of business transactions against it in production.

Microservices monitoring in practice

Because microservices are a relatively new element in the modern data center, monitoring practices have not evolved much from the basic test-centric approach that developers employ during the development stage.

In production, those microservices, often delivered as containers, are monitored using the basic container monitoring infrastructure, which observes metrics such as CPU and memory utilization. Container infrastructures often do not include an elaborate monitoring mechanism; when one is added, it still takes a system-level view and rarely has the capability to monitor business-process semantics.

This leads us to consider two vectors for microservices monitoring: service layer monitoring and semantic monitoring.

Service-layer monitoring follows the basic component, or a collection of components of the same type. This approach is usually the first step in monitoring, asking: “Is my microservice working as designed, and is the service layer (a collection of the same type of microservice) scaling as defined?”
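
As an illustration of the service-layer vector, here is a small Python sketch that checks container-level metrics against thresholds, assuming the metrics are already scraped into a Prometheus-compatible server; the metric names, pod selector, and limits are illustrative only.

```python
"""A sketch of service-layer monitoring against a Prometheus-compatible
metrics server; queries and thresholds are illustrative assumptions."""
import requests

PROM_URL = "https://prometheus.example.com/api/v1/query"  # hypothetical metrics server

CHECKS = {
    # PromQL query -> maximum acceptable value (illustrative thresholds)
    'avg(rate(container_cpu_usage_seconds_total{pod=~"orders-.*"}[5m]))': 0.80,
    'avg(container_memory_working_set_bytes{pod=~"orders-.*"})': 512 * 1024 * 1024,
}


def query(promql: str) -> float:
    """Run one instant query and return the first sample value."""
    resp = requests.get(PROM_URL, params={"query": promql}, timeout=5)
    resp.raise_for_status()
    result = resp.json()["data"]["result"]
    return float(result[0]["value"][1]) if result else 0.0


def service_layer_healthy() -> bool:
    """Answer: is my microservice working and scaling as designed?"""
    return all(query(q) <= limit for q, limit in CHECKS.items())


if __name__ == "__main__":
    print("orders service layer healthy:", service_layer_healthy())
```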

Semantic monitoring approaches microservice monitoring from the business transaction, or semantic, perspective. Instead of monitoring each component that serves a business transaction, synthetic transaction monitoring is employed to ascertain how well the transaction performs for the business and the customers who use it.

These two approaches work great together. They allow for faster issue triaging and isolation once an issue is detected, reducing mean time to repair (MTTR). Specifically, combining them lets you triage along the two vectors, transaction and service layer, to pinpoint which transactions might be affected by poor performance or availability, and then detect which service layer and specific microservice instance is at fault, almost in the same triage flow.
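
A tiny sketch of that triage flow might look like the following; the transaction-to-service map and the per-service health results are stand-ins for whatever your real topology and service-layer checks provide.

```python
"""A sketch of triaging along the two vectors: when a synthetic transaction
fails, walk the service layers known to serve it and flag the unhealthy ones.
The topology map and health data below are illustrative assumptions."""

# Which service layers participate in each business transaction (assumed topology).
TRANSACTION_SERVICES = {
    "checkout_transaction": ["cart", "payment", "inventory"],
    "search_transaction": ["catalog", "search-index"],
}

# Stand-in for real service-layer health checks (in practice, driven by
# service-layer monitoring).
SERVICE_HEALTH = {"cart": True, "payment": False, "inventory": True,
                  "catalog": True, "search-index": True}


def triage(failed_transaction: str) -> list[str]:
    """Return the service layers most likely at fault for a failed transaction."""
    services = TRANSACTION_SERVICES.get(failed_transaction, [])
    return [svc for svc in services if not SERVICE_HEALTH.get(svc, False)]


if __name__ == "__main__":
    print("suspect services:", triage("checkout_transaction"))  # ['payment']
```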

Implementing semantic monitoring

Semantic monitoring is rooted in the functional-testing practice of synthetic transactions, where scripts mimic real users interacting with the application or business transaction to validate that the code is being developed and integrated as planned.

While testing environments give you the freedom to break things in a sandbox, testing in a production environment requires careful consideration, since those scripts may actually activate business systems and inject false transactions.

One common approach is to design test handling into the production code. This handling code recognizes test traffic based on special hints carried by the production tests and treats those tests in a nondisruptive way. It also sends the logs and metrics from those production tests to a separate part of the monitoring system.
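
As a sketch of what that handling might look like, here is a hypothetical Flask endpoint that recognizes an assumed X-Synthetic-Test header, skips disruptive side effects, and routes the test's telemetry to a separate logger; the header name, logger split, and business logic are illustrative.

```python
"""A sketch of test handling baked into production code, assuming a Flask
service and a hypothetical 'X-Synthetic-Test' header as the hint."""
import logging
from flask import Flask, jsonify, request

app = Flask(__name__)
live_log = logging.getLogger("orders.live")            # real traffic telemetry
synthetic_log = logging.getLogger("orders.synthetic")  # intended for a separate
                                                       # monitoring channel (handler
                                                       # configuration not shown)


@app.post("/checkout")
def checkout():
    is_synthetic = request.headers.get("X-Synthetic-Test") == "true"
    order = request.get_json()

    if is_synthetic:
        # Treat the test non-disruptively: validate the flow, but do not
        # charge a card or ship anything.
        synthetic_log.info("synthetic checkout validated: %s", order)
        return jsonify({"status": "ok", "synthetic": True})

    # Real transaction path.
    live_log.info("checkout accepted: %s", order)
    # ... charge payment, reserve inventory, etc.
    return jsonify({"status": "ok"})
```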

Monitoring systems that handle metrics and events can be notified of errors in the semantic layer, and those notifications can carry additional information to aid the triage process, such as which microservice encountered the error or the relevant logs containing the error records.
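
One way to attach that triage information is to enrich the event the semantic layer emits, as in this sketch; the event-ingest endpoint and field names are assumptions.

```python
"""A sketch of an enriched error event emitted by the semantic layer,
assuming a hypothetical event-ingest endpoint; fields are illustrative."""
import time
import requests

EVENTS_URL = "https://monitoring.example.com/events"  # hypothetical event ingest endpoint


def report_semantic_failure(transaction: str, microservice: str, log_lines: list[str]) -> None:
    """Attach triage hints (failing service, relevant logs) to the alert event."""
    requests.post(EVENTS_URL, json={
        "type": "semantic_monitoring_failure",
        "transaction": transaction,
        "suspect_microservice": microservice,
        "log_excerpt": log_lines[-20:],  # last few error records for context
        "timestamp": int(time.time()),
    }, timeout=5)


# Example usage:
# report_semantic_failure("checkout_transaction", "payment",
#                         ["ERROR payment gateway timeout after 30s"])
```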

Why choose semantic monitoring?

Semantic monitoring can be executed on a regular basis as part of standard operations protocol and serve as a helpful business-transaction validation mechanism. You should consider implementing it if you want to be sure that not only the service layer is operational, but the business layer as well.

Practicing semantic monitoring allows you to verify that a transaction performs as expected in production, thus increasing confidence in delivering higher-quality applications and services for customers.
