

How to configure—and harden—your managed Kubernetes services

Daniel Selman Kubernetes Consultant, Microsoft
Sujit D'Mello Principal Consultant, Microsoft

Most people consider Kubernetes, the popular container orchestrator, to be enterprise-ready. But as organizations evaluate it, they quickly realize that they don't want to manage the servers, software, and other core components associated with Kubernetes. They'd rather use a managed Kubernetes service.

But there's a catch. These services, the most popular of which are offered by Amazon, Google, and Microsoft, give you a preconfigured Kubernetes environment that's automatically provisioned. However, most enterprises want their managed services configured and hardened to meet their unique compliance and security needs. 

Fortunately, there are ways to customize your managed Kubernetes infrastructure without shooting yourself in the foot. Here's how to handle common enterprise configuration concerns, including monitoring, security hardening, startup scripts, and administration, the Kubernetes way.


Monitor the Kubernetes way

The Kubernetes community offers many common conventions for monitoring. For example, Prometheus is the go-to tool for monitoring and alerting in many open-source stacks. The tool's creators were inspired by Borgmon, the monitoring system for Borg, Google's internal cluster manager that paved the way for Kubernetes.
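As one example of those conventions, many community Prometheus scrape configurations discover targets through pod annotations. This is a sketch of that convention (the annotations are a community practice honored by common scrape configs, not a built-in Kubernetes API, and the image name is hypothetical):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: sample-app
  annotations:
    # Convention recognized by widely used Prometheus scrape configs
    prometheus.io/scrape: "true"
    prometheus.io/port: "8080"
    prometheus.io/path: "/metrics"
spec:
  containers:
    - name: app
      image: example/app:1.0   # hypothetical image exposing /metrics
      ports:
        - containerPort: 8080
```

Whether these annotations have any effect depends entirely on the `kubernetes_sd_configs` relabeling rules in your Prometheus configuration.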

Additionally, many cloud providers offer native monitoring tools for their managed Kubernetes services, such as Microsoft's Container Monitoring Solution for the Azure Kubernetes Service and Google's Stackdriver Monitoring for the Google Kubernetes Engine.

But enterprises often have custom requirements for tools and logging. For example, third-party tools may be required due to organizational policy and the monitoring teams' best practices or expertise.

Whether you're using a commercial tool or an internal one, container logs are written on each node under /var/lib/docker/containers, and you can integrate your managed Kubernetes cluster with your logging pipeline by forwarding logs from this central location. You can also use this location to configure custom log-rotation settings with the standard logrotate tool.
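As a sketch, a custom logrotate policy for those container log files might look like the following (the thresholds are illustrative; in a managed cluster you would deliver a file like this to each node via a DaemonSet rather than SSH):

```yaml
# /etc/logrotate.d/docker-containers (illustrative thresholds)
/var/lib/docker/containers/*/*.log {
    rotate 5          # keep five rotated files per container
    daily
    maxsize 100M      # rotate early if a file exceeds 100 MB
    missingok
    copytruncate      # don't disrupt the running container's file handle
    compress
}
```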


Harden security at scale

Out of the box, Kubernetes' native options may not meet the needs of security teams with strict requirements for Linux machine configurations. The issue is further complicated by the fact that you can't access the underlying virtual machines (VMs) in a managed Kubernetes environment.

Often, you must accomplish administrative tasks by accessing VMs through Secure Shell (SSH) to install custom software. But it's not possible to do that in a secure manner in a managed Kubernetes environment, and it's not feasible when working with clusters at scale—especially when you introduce cluster auto-scaling.

Instead, apply these hardening steps through a Kubernetes-native, scalable, and flexible mechanism: DaemonSets. At KubeCon we'll talk in more detail about hardening by installing anti-malware software, but the same process applies to a wide range of custom software installation use cases.
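Because a DaemonSet schedules one pod onto every node (including nodes added later by the autoscaler), it is a natural vehicle for node-level hardening. This is an illustrative sketch only; the image name is hypothetical, and the exact mounts and privileges your hardening agent needs will vary:

```yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: node-hardening
  namespace: kube-system
spec:
  selector:
    matchLabels:
      app: node-hardening
  template:
    metadata:
      labels:
        app: node-hardening
    spec:
      hostPID: true                 # see the host's processes
      containers:
        - name: hardening
          image: example.com/node-hardening:1.0  # hypothetical hardening agent
          securityContext:
            privileged: true        # required to modify the host
          volumeMounts:
            - name: host-root
              mountPath: /host      # host filesystem, mounted for the agent
      volumes:
        - name: host-root
          hostPath:
            path: /
```

When the cluster scales out, Kubernetes schedules this pod onto each new node automatically, which is exactly the property SSH-based approaches lack.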


Bootstrap at runtime

DevOps practices introduce strict code-configuration requirements prior to deployment. Simply baking custom configurations and installations into the Dockerfile for the image itself may not meet these requirements, since that would require rebuilding every image each time you update a standard.

Instead, consider an application configuration method that invokes a script at runtime to pull and execute a bootstrapper script and other artifacts from a static location. This approach enables updates to the bootstrapping process, or other files such as certificates, without requiring changes to the underlying image.
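A minimal sketch of this pattern, assuming a bootstrap script hosted at an internal artifact location (the URL, script name, and image are all hypothetical):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: app-with-bootstrap
spec:
  containers:
    - name: app
      image: example/app-base:1.0   # hypothetical base image with curl installed
      command: ["/bin/sh", "-c"]
      args:
        # Pull the latest bootstrapper at startup, then launch the app.
        # Updating bootstrap.sh in the artifact store changes behavior
        # without rebuilding or redeploying the image.
        - |
          curl -fsSL https://artifacts.example.com/bootstrap.sh -o /tmp/bootstrap.sh
          sh /tmp/bootstrap.sh
          exec /app/start
```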

You can use this runtime bootstrapping process to meet many enterprise requirements, such as for externalizing application configurations, enforcing SSL for cluster-level traffic, and customizing your host files.

Build on the fundamentals

These approaches to monitoring, security, and bootstrapping are fundamental to integrating managed Kubernetes into systems and organizations with nontraditional requirements. There's a lot more to learn. 

To discover more best practices for using managed Kubernetes services in the enterprise, come to our session at KubeCon + CloudNativeCon, December 10-13 in Seattle, Washington. We will be speaking on December 11 at 1:45 pm.
