
Kubernetes controllers and operators are great but are they safe?


Evolution of Kubernetes security

In the early days of Kubernetes, before Role-Based Access Control (RBAC) was introduced, any pod with a service account assigned ran in the cluster with full admin rights, which effectively gave users full admin rights to the cluster. Imagine how scary that was. Don't get me wrong, those were fun and interesting times, watching Kubernetes security evolve into role-based authentication and more.

So why do we love to hate Helm’s Tiller?

Then Tiller came along, the server-side component of Helm, the Kubernetes package manager, making our lives easier for packaging, releasing and upgrading applications. All was dandy until RBAC arrived, forcing us to assign the admin role to Tiller on the server side. Everybody started to complain that Tiller had too much power over the cluster. The reactions varied. Some stopped using Helm with Tiller altogether, while others ran multiple Tillers locked down per namespace with RBAC rules, which made the setup very difficult to maintain. Others started to use `helm template` and pipe the output to kubectl. This resulted in a number of helm-wrappers being released to simplify the management of Helm/Tiller versions across multiple namespaces.
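
For example, the template-and-pipe workaround looks roughly like this (a minimal sketch using Helm v2 syntax; the chart, release and namespace names are placeholders):

```sh
# Render the chart locally instead of asking Tiller to install it,
# then apply the resulting manifests directly with kubectl.
helm template ./my-chart --name my-release --namespace my-namespace \
  | kubectl apply -n my-namespace -f -
```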

This is when I published the Tillerless Helm blog post, recommending a more secure way of running Helm with Tiller. Basically, you run Tiller outside the cluster, alongside your Helm CLI, and it talks to the cluster through your kubeconfig, using a user with the rights needed to upgrade releases (the same way the helm CLI and kubectl do). This should sort out the Helm/Tiller problem until Helm v3 is released, which, as you might have heard already, will have no Tiller!
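
As a rough sketch of that Tillerless workflow (assuming the helm-tiller plugin from that post; the namespace, release and chart names are placeholders, and the plugin's commands may have changed since):

```sh
# Install the plugin that runs Tiller locally, outside the cluster
helm plugin install https://github.com/rimusz/helm-tiller

# Start a local Tiller bound to your current kubeconfig context,
# run the helm command against it, then shut Tiller down again
helm tiller run my-namespace -- helm upgrade --install my-release ./my-chart
```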

Are Kubernetes operators and controllers secure?

Operators and controllers are Kubernetes API extensions. An operator is an application-specific controller that lets you create, configure and manage complex application instances; the Prometheus and PostgreSQL operators are leading examples. With a controller, you can implement your own workflow logic by watching events generated by Kubernetes API objects such as namespaces, deployments and pods, or by your own custom resource definition (CRD).
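
A custom resource definition is just another API object your controller can watch (a hypothetical example; the group and kind are made up):

```yaml
# A made-up CRD: once registered, "Database" objects become
# first-class API resources that a custom controller can watch.
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: databases.example.com
spec:
  group: example.com
  version: v1
  scope: Namespaced
  names:
    plural: databases
    singular: database
    kind: Database
```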

Operators and controllers share the same deployment principles as Helm's Tiller and require granting full admin rights to the cluster in situations such as the following (see the RBAC sketch after the list):

  • Setting up the PostgreSQL operator to manage database clusters across all namespaces.
  • Locking it down to a specific namespace.
  • Running a PostgreSQL operator in each namespace.
  • Running simple custom controllers.
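
To make the parallel with Tiller concrete, a cluster-wide operator or controller typically ends up with something like this bound to its service account (a hedged sketch; the names are made up and real operators may use a narrower ClusterRole):

```yaml
# Grants the operator's service account full admin rights over the
# whole cluster -- effectively the same power Tiller was criticised for.
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: postgres-operator-admin
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: postgres-operator
  namespace: operators
```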

So the question that comes to mind is: why isn't the community up in arms about the security risks of operators and controllers?

What about Kubernetes custom controllers?

Everybody likes to use controllers and operators but hates to use Tiller, when actually Tiller is a custom controller itself. Kubernetes custom controllers allow you to implement your own business logic by watching events from Kubernetes API objects. GitOps is a way to do Continuous Delivery, introduced by Weaveworks, where operations are driven by pull requests to Git. The general consensus is that Git is the central source of truth. I agree 200% with that, as it gives you a commit history and lets you always trace your releases.

There are plenty of tools that use a custom controller in your Kubernetes cluster to watch for changes in a git repo and roll out a new release with helm or kubectl. They really do a great job of implementing GitOps. Watching the git repo for changes is very cool, but the controller needs full admin rights to deploy changes to all namespaces, or you have to run multiple controllers, one per namespace, to make the setup more secure and contained. Sound like a familiar setup?
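
Under the hood, the loop such a controller runs is roughly this (a conceptual sketch; the repository, chart and values paths are made up):

```sh
# What a GitOps controller does on every sync, in essence:
git clone https://github.com/example/cluster-config.git
cd cluster-config
helm upgrade --install my-release ./charts/my-chart -f env/production/values.yaml
```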

Best practices for securing your controllers and operators

  1. When using operators, you don't have much choice, as they have to run in the same cluster where you deploy, say, your PostgreSQL clusters. In a multi-tenant environment we recommend running one operator per namespace, with its rights scoped to that namespace (see the first sketch after this list).

  2. You don't have to run Tiller or custom controllers in your cluster in order to update or scale your applications, so you avoid either giving them too much access to the cluster API or locking them down per namespace and then carrying the burden of maintaining many Tillers/controllers and wasting compute resources. The old-school way of using a dedicated CI/CD cluster does the job really well; just refrain from exposing it externally without proper security. Pipelines should use different kubeconfig files depending on what each application needs when installing or upgrading, and don't forget to include kubectl, Helm and Tiller as part of your pipeline (see the pipeline sketch after this list).

  3. Do not store or hardcode default passwords in your Helm charts, as you risk a security breach at a later stage. I have created a simple Helm plugin called helm-linter which helps find hardcoded passwords in a chart's values.yaml file (see the last sketch after this list). Please feel free to take it for a spin; feedback and PRs are welcome.
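
A namespace-scoped setup for the per-namespace operator from point 1 could look roughly like this (a hedged sketch; the namespace, names and the exact resource list depend on the operator):

```yaml
# A namespaced Role instead of a ClusterRole keeps the operator's
# blast radius inside a single tenant namespace.
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: postgres-operator
  namespace: tenant-a
rules:
- apiGroups: ["", "apps"]
  resources: ["pods", "services", "secrets", "configmaps", "statefulsets"]
  verbs: ["get", "list", "watch", "create", "update", "patch", "delete"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: postgres-operator
  namespace: tenant-a
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: postgres-operator
subjects:
- kind: ServiceAccount
  name: postgres-operator
  namespace: tenant-a
```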
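For the CI/CD cluster approach in point 2, a pipeline step might look something like this (a hypothetical sketch; the kubeconfig path, namespace and chart are placeholders, and the helm-tiller plugin is assumed for the Tillerless run):

```sh
# The CI runner holds a kubeconfig scoped to what the app needs,
# so no Tiller or controller has to live in the target cluster.
export KUBECONFIG=/secrets/production-kubeconfig
helm tiller run production -- \
  helm upgrade --install my-app ./charts/my-app -f values-production.yaml
kubectl -n production rollout status deployment/my-app
```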
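And for point 3, running the linter over a chart should be something along these lines (the install URL and exact command are assumptions; check the plugin's README):

```sh
# Assumed plugin location and invocation -- adjust to the real README.
helm plugin install https://github.com/rimusz/helm-linter
helm linter ./my-chart   # scans values.yaml for hardcoded passwords
```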

Rimantas (Rimas) Mocevicius

Cloud Native, Kubernetes, co-founder of Helm, CKA, author of the books kubectl: Command-Line Kubernetes in a Nutshell and CoreOS Essentials, Kubernaut at JFrog

Planet Earth · https://rimusz.net