This post is about the new Kubernetes API Priority and Fairness (APF) feature. I would like to share what I have learned and show you how to define policies to prioritize and throttle inbound requests to the Kubernetes API server. I will also go over some metrics and debugging endpoints that you can use to determine whether APF is affecting your controllers.
🆕 APF is enabled by default in Kubernetes 1.20, as a beta feature. For earlier versions of Kubernetes, it can be enabled via the APIPriorityAndFairness feature gate.
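For reference, this is roughly what enabling the feature gate and poking at APF's built-in debug endpoints looks like. This is a sketch, not from the post itself; the endpoint paths reflect the beta API and require a running cluster:

```shell
# On pre-1.20 clusters, enable APF via the API server feature gate:
kube-apiserver --feature-gates=APIPriorityAndFairness=true

# Inspect APF's debug endpoints to see how requests are being classified:
kubectl get --raw /debug/api_priority_and_fairness/dump_priority_levels
kubectl get --raw /debug/api_priority_and_fairness/dump_queues
kubectl get --raw /debug/api_priority_and_fairness/dump_requests
```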
If you have been working with container virtualization and orchestration software like Docker and Kubernetes, then you have probably heard of network namespaces.
Recently, I started exploring the Linux ip command. In this post, I will show you how to use the command to connect processes in two different network namespaces, on different subnets, over a pair of …
A container runtime uses the namespace kernel feature to partition system resources to achieve a form of process isolation, such that changes to the resources in one namespace do not affect those in other namespaces. …
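As a quick taste of what the post covers, here is a minimal sketch of connecting two network namespaces with the ip command. It is a simplified same-subnet variant (the post's different-subnet setup additionally involves routing); the namespace and interface names are made up, and the commands require root:

```shell
# Create two network namespaces and a veth pair.
ip netns add red
ip netns add blue
ip link add veth-red type veth peer name veth-blue

# Move each end of the veth pair into a namespace.
ip link set veth-red netns red
ip link set veth-blue netns blue

# Assign addresses and bring the interfaces up.
ip netns exec red  ip addr add 192.168.1.1/24 dev veth-red
ip netns exec blue ip addr add 192.168.1.2/24 dev veth-blue
ip netns exec red  ip link set veth-red up
ip netns exec blue ip link set veth-blue up

# Processes in the two namespaces can now reach each other.
ip netns exec red ping -c 1 192.168.1.2
```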
This post is a write-up on some of the key concepts of the Pulumi SDK. If you are using Pulumi too, I hope you will find the content in this post useful.
Recently, I finished my first Pulumi program, where I used Pulumi to provision some resources on Azure. The program manages a virtual network with 3 private subnets, a handful of private virtual machines grouped by availability sets with OS disks attached, an L4 load balancer to handle external HTTP traffic, and an Azure Bastion service to enable secure external SSH access.
This is what my virtual network topology…
If you are interested in finding out how kubectl exec works, then I hope you will find this post useful. We will look into how the command works by examining the relevant code in kubectl, the K8s API server, the kubelet, and the Container Runtime Interface (CRI) Docker API.
The kubectl exec command is an invaluable tool for those of us who regularly work with containerized workloads on Kubernetes. It allows us to inspect and debug our applications by executing commands inside our containers.
kubectl v1.15.0 to run an example:
exec command runs a
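For context, these are typical kubectl exec invocations; the pod and container names here are made-up examples, not from the post:

```shell
# Open an interactive shell in a pod's (first) container:
kubectl exec -it my-nginx-pod -- /bin/sh

# Run a one-off command and capture its output:
kubectl exec my-nginx-pod -- cat /etc/resolv.conf

# Target a specific container in a multi-container pod:
kubectl exec my-nginx-pod -c sidecar -- env
```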
If deploying applications to Kubernetes is part of your daily development workflow, you will find Tilt very useful. Tilt can continuously build and deploy your local code changes to your Kubernetes cluster.
In this post, I will show you how I use Tilt to automate continuous build and deployment of the Linkerd control plane components on Minikube.
The Linkerd control plane is composed of a few components; namely, Controller, Identity, Grafana, Prometheus, Proxy Injector, Service Profile Validator and Web. Each of these components has at least one service container, a proxy sidecar container and a proxy-init init container.
Following William’s post on gRPC Load Balancing on Kubernetes without Tears, I became interested in finding out how much work is actually involved in implementing gRPC load balancing.
In this post, I’d like to share what I’ve learned about using the gRPC-Go resolver packages to implement simple client-side¹ round-robin load balancing. Then I will show how you can use Linkerd 2 to automatically load balance gRPC traffic, without any application code changes or deploying an additional load balancer.
I have posted the code used in this post on my GitHub repo. …
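The Linkerd side of this is worth a quick sketch: because the proxy load balances at the request level, meshing a workload is enough to get gRPC load balancing. This is a hedged illustration; the manifest filename is an example:

```shell
# Inject the Linkerd proxy sidecar into an existing Deployment manifest
# and apply it; the proxy then balances gRPC requests across endpoints.
linkerd inject deployment.yml | kubectl apply -f -

# Confirm the proxy container was added alongside the app container:
kubectl get pods -o jsonpath='{.items[*].spec.containers[*].name}'
```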
I use Minikube to manage my local development Kubernetes clusters. Often, I find the need to spin up one-off clusters to review and test pull request changes, reproduce bugs and try out new releases, while retaining a default environment for my ongoing sprint tasks.
In this post, I’d like to share how I use Minikube profiles to manage multiple Minikube instances. Each Minikube instance hosts a single local Kubernetes cluster. I’ll also share the script that I have been using to extend the
profile command with additional features such as:
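The basic profile workflow looks like this; the profile name is a made-up example:

```shell
# Start a one-off cluster under its own profile:
minikube start -p pr-1234

# List all profiles and their status:
minikube profile list

# Switch the active profile back to the default environment:
minikube profile minikube

# Tear down the one-off cluster when done:
minikube delete -p pr-1234
```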
In this post, we’re going to experiment with deploying Linkerd 2.x to a Kubernetes cluster that uses network policies to govern ingress network traffic between pods.
UPDATE (Sept 24, 2020): The linkerd inject command has since been modified to use the same code path as the auto proxy injection feature. When applying the deny-all-ingress network policy to the linkerd namespace, ensure that traffic from the K8s API server is permitted to port 443 of the proxy injector, service profile validator and tap components.
A network policy defines ingress and egress rules to control communication between pods in a cluster. It…
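For illustration, this is what a deny-all-ingress policy of the kind mentioned above can look like; treat it as a sketch rather than the exact manifest from the post:

```shell
# The empty podSelector matches every pod in the namespace, and listing
# Ingress with no rules blocks all inbound traffic to those pods.
kubectl apply -f - <<EOF
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: deny-all-ingress
  namespace: linkerd
spec:
  podSelector: {}
  policyTypes:
  - Ingress
EOF
```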
Following my lightning talk in the Intro: Linkerd session at KubeCon NA 2018, a few people have expressed interest in my performance benchmark results, where I compared a Linkerd2-meshed setup and an Istio-meshed setup on GKE, using Fortio. This blog post is a write-up of the results.
All the scripts and report logs can be found in my GitHub repository.
The Terraform scripts used in this experiment provision a GKE 1.11.2-gke.18 cluster in the us-west1-a zone. Once the cluster is ready, the following components are installed:
Blogging cloud native stuff. Principal Software Engineer @Red Hat