Kubernetes Day 2: What Happens Now?

Christian Melendez | Fri, 01/10/2020 - 12:47

Everyone is talking about Kubernetes, and people’s interest in learning and using Kubernetes continues to grow. You might already be convinced that you need Kubernetes features like self-healing, autoscaling, and progressive rollouts. However, Kubernetes is still not easy, and many questions may arise once you start using it. What happens after you’ve successfully deployed your “Hello World” application in Kubernetes on day one? 

Some of the questions you might have on day two are: How do I know if everything in the system is working correctly? How do I work with systems that can’t work with ephemeral data? How do I stay secure when using Kubernetes? And does our development workflow need to change? The answers to these questions aren’t short, but I’ll give you some hints in today’s post.

The real world is full of systems that are far more complex than your first “Hello World” application. So, on day two after starting with Kubernetes, you might be wondering about the following types of applications.

What About Distributed Applications?

To get started with a simple application, the core Kubernetes objects you need include pods, deployments, and services.
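As a minimal sketch, those objects usually come together in manifests like the following (the names, image, and ports are placeholders, not from any real application):

```yaml
# A minimal Deployment that runs two replicas of a hypothetical "hello" image.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello
spec:
  replicas: 2
  selector:
    matchLabels:
      app: hello
  template:
    metadata:
      labels:
        app: hello
    spec:
      containers:
      - name: hello
        image: example.com/hello:1.0  # placeholder image
        ports:
        - containerPort: 8080
---
# A Service that exposes those pods inside the cluster.
apiVersion: v1
kind: Service
metadata:
  name: hello
spec:
  selector:
    app: hello
  ports:
  - port: 80
    targetPort: 8080
```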

But to deploy a microservices application, there are other important considerations. For instance, you need to implement health checks in the app so that Kubernetes knows if the app is healthy or not.

You have two options for configuring health checks: readiness and liveness probes.

Readiness probes help Kubernetes to know if the pod is ready to receive traffic.

Liveness probes are how Kubernetes knows when to restart a pod. Sometimes applications have a memory leak, and a simple restart can alleviate the problem temporarily.
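Both probes are configured on the container spec. Here's a sketch assuming the app exposes HTTP endpoints for each check (the paths, port, and timings are assumptions):

```yaml
# Probe configuration inside a pod's container spec.
containers:
- name: app
  image: example.com/app:1.0   # placeholder image
  readinessProbe:              # the pod only receives traffic once this succeeds
    httpGet:
      path: /ready             # assumed readiness endpoint
      port: 8080
    initialDelaySeconds: 5
    periodSeconds: 10
  livenessProbe:               # the kubelet restarts the container if this keeps failing
    httpGet:
      path: /healthz           # assumed health endpoint
      port: 8080
    initialDelaySeconds: 15
    periodSeconds: 20
```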

You also need a way to understand what's happening inside Kubernetes. When you have many pods, understanding how they interact with each other can be complicated. How do you get traces, logs, or metrics? An approach that is becoming popular is to use a service mesh: a networking layer that you configure on top of Kubernetes objects. For instance, you can set strict communication rules, or you can integrate the service mesh with tools like Prometheus and Grafana to collect microservices metrics.

What About Scaling Applications?

Kubernetes is well known for having built-in capabilities to scale applications horizontally. If you’re running a Kubernetes cluster in a public cloud, most of the cloud providers support the ability to configure autoscaling rules for nodes. When the control plane can’t schedule more pods because there’s not enough capacity, you can set the cluster to add more nodes automatically.

Microsoft Azure, for example, supports adding virtual (serverless) nodes. But scaling in Kubernetes can be tricky, as a cluster can take around twenty minutes to scale. Shopify, for instance, had to create a custom autoscaler for Kubernetes because it experiences unpredictable traffic patterns and spikes.

To configure autoscaling policies effectively, you have to know the minimum resources your applications need to run properly, and you need to understand how resource usage behaves as the load increases. Is CPU usage proportional to the traffic a pod receives? Great, then you can configure pod autoscaling based on CPU usage.
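For CPU-proportional workloads, a HorizontalPodAutoscaler handles this. A sketch using the `autoscaling/v2` API (the deployment name, replica range, and target utilization are illustrative assumptions):

```yaml
# Scales a hypothetical "hello" Deployment between 2 and 10 replicas,
# targeting 70% average CPU utilization across its pods.
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: hello
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: hello
  minReplicas: 2
  maxReplicas: 10
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 70
```

Note that utilization is measured against the pod's CPU *request*, which is one more reason to set requests deliberately.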

When you set request and limit values for CPU and memory, you help the Kubernetes scheduler make decisions more effectively. Google has a good explanation of why and how to limit resources in pods and in the cluster.
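Requests and limits live on each container spec. Requests tell the scheduler how much to reserve; limits cap what the container may consume. The values below are purely illustrative:

```yaml
# Resource requests and limits on a container (values are examples only).
containers:
- name: app
  image: example.com/app:1.0   # placeholder image
  resources:
    requests:
      cpu: 250m       # reserve a quarter of a CPU core for scheduling
      memory: 128Mi
    limits:
      cpu: 500m       # throttle the container above half a core
      memory: 256Mi   # the container is killed if it exceeds this
```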

What About Stateful Applications?

Stateless applications are great for many reasons, mainly because they're designed to scale quickly. However, there are still many stateful systems, like databases or legacy systems, and managing state in Kubernetes is not mature yet. You need to understand persistent volumes and persistent volume claims, how to integrate storage technologies, how to configure pods and applications, and so on. Even though projects like Rook help operators configure volumes, there are still many use cases that are not covered yet.
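At a minimum, the pattern looks like this: a PersistentVolumeClaim requests storage, and a pod mounts it. A sketch, assuming a default storage class that can satisfy the claim (names, size, and mount path are placeholders):

```yaml
# A claim for 1Gi of storage writable by a single node at a time.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
---
# A pod that mounts the claimed volume, so data survives pod restarts.
apiVersion: v1
kind: Pod
metadata:
  name: app
spec:
  containers:
  - name: app
    image: example.com/app:1.0   # placeholder image
    volumeMounts:
    - name: data
      mountPath: /var/lib/app    # assumed data directory
  volumes:
  - name: data
    persistentVolumeClaim:
      claimName: data
```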

In the database space, there are popular projects like Vitess (which YouTube uses) for running MySQL databases. But you also have projects like KubeDB for NoSQL and relational databases.

Running databases on Kubernetes is still a topic where some agree and others disagree. But what's certain is that companies like Zalando are running databases in production on top of Kubernetes. And Google offers a decision tree for choosing between Kubernetes and a PaaS solution for databases.

But databases are not the only use case. There are systems like WordPress or Drupal that depend heavily on storage. These types of systems can't lose data if they're to provide the functionality they offer.

What About Security Practices?

Security is one of the most critical topics in Kubernetes, mainly because, as Ian Coldwater says, "Kubernetes is insecure by default." An example of a default configuration that makes Kubernetes insecure is that role-based access control (RBAC) is not enabled by default. (Although some cloud providers, like Azure, do not allow you to create a cluster without RBAC enabled.) Google, for example, has decided not to ship the Kubernetes dashboard because of the type of permissions it needs to run. A good Kubernetes dashboard alternative is Octant, which runs client-side with the permissions the user already has.

Another example of insecurity in Kubernetes is secrets. Secrets are not encrypted; they're only base64-encoded. You need third-party tools like HashiCorp Vault to properly encrypt and decrypt secrets.
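You can see this for yourself: the `data` field of a Secret is plain base64, so anyone who can read the object can decode the value. A minimal example (the secret name and value are made up):

```yaml
# The "encrypted-looking" value below is just base64 of the word "password":
# decoding it (e.g. with `base64 -d`) reveals the plaintext immediately.
apiVersion: v1
kind: Secret
metadata:
  name: db-credentials
type: Opaque
data:
  password: cGFzc3dvcmQ=   # base64("password"), not encryption
```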

On the networking side, you can configure network policies to restrict ingress and egress communication between pods to the minimum required. And if you want to add security at the service layer, you can use policies from a service mesh like Istio.
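A sketch of a network policy that only lets frontend pods reach backend pods, assuming the cluster's network plugin enforces NetworkPolicy (the labels and port are hypothetical):

```yaml
# Only pods labeled app=frontend may reach pods labeled app=backend on TCP 8080;
# all other ingress traffic to the backend pods is denied.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: backend-allow-frontend
spec:
  podSelector:
    matchLabels:
      app: backend
  policyTypes:
  - Ingress
  ingress:
  - from:
    - podSelector:
        matchLabels:
          app: frontend
    ports:
    - protocol: TCP
      port: 8080
```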

Lastly, as another layer of security, you can use admission controllers to intercept calls to the Kubernetes API. For instance, you can configure policies that only allow pods to pull images from specific registries.

You can learn more about other security practices in the official blog from the CNCF.

What About Development Workflows?

Another essential aspect that many people overlook is the development workflow. Traditionally, a developer's workflow consists of coding, building, and testing. However, when using containers and Kubernetes, this workflow gets longer.

The slowest part is building the container image, pushing it to a registry, pulling it, and deploying it to Kubernetes. This frustrates many developers, who then see Kubernetes as an operations concern. But the benefits of having a production-like environment on a developer workstation are invaluable, and with Kubernetes, this is possible.

Although there's no perfect tool yet, there are quite useful tools that let developers stay productive when building applications for Kubernetes. A popular project, for instance, is Telepresence. With Telepresence, the developer's workstation becomes part of a Kubernetes cluster: traffic that the cluster receives can be redirected to the local workstation, and developers can continue using their existing tools.

Kubernetes in Real Life

The truth is that not everyone is using Kubernetes, but almost everyone wants to have Kubernetes. Why?

Everyone has a different reason, and more and more companies will continue to adopt Kubernetes. However, the learning curve could be overwhelming, and in this post, I only covered a few things that you need to consider when working with Kubernetes.

For instance, what if you need to work with hybrid environments while you migrate your workloads to Kubernetes? Or how do you work with custom resource definitions (CRD)?

The Kubernetes ecosystem is big, and it's growing fast. There will come a time when Kubernetes becomes a commodity, but there's still a lot of work to do before that happens.