Traffic Encryption Performance in Kubernetes Clusters

By Samuel Stolicny

January 18, 2021

Building a hybrid Kubernetes cluster among various environments (public providers and on-premise devices) requires a layer of reliable and secure network connectivity. Choosing the right encryption technique can dramatically impact the network performance of your newly built cluster.

The purpose of this benchmark is to find out which encryption method affects cluster performance the most. The main metrics measured were latency, throughput, and CPU utilization. To ensure a variety of encryption methods we chose Linkerd service mesh mTLS, Wireguard VPN, and Cilium IPsec transparent encryption. Each method has its advantages and disadvantages that will be reflected in the benchmark results.

Setup and Methodology

We worked with four different cluster setups at Hetzner, a cloud provider offering a virtualized IaaS solution. The benchmarks were conducted on four virtual Ubuntu servers: three served as the worker nodes under test and one as the master.

To improve reproducibility, we used Terraform as the infrastructure-as-code tool and KubeOne for Kubernetes cluster lifecycle management. Calico was used as the CNI, except in the case of Cilium, which acts as its own CNI.
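For illustration, the provisioning boils down to a short command-line flow. This is a minimal sketch, assuming a Terraform configuration for Hetzner Cloud and a kubeone.yaml manifest; both file names are illustrative, not taken from our setup:

# Create the four Hetzner virtual servers and export their details for KubeOne
terraform init && terraform apply
terraform output -json > tf.json

# Let KubeOne install and configure Kubernetes v1.19.0 on the provisioned machines
kubeone apply --manifest kubeone.yaml --tfjson tf.json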

For the worker nodes, we used virtual servers with 2 vCPUs (dual-core AMD EPYC 2nd generation, 2495 MHz) and 2 GB RAM. Each worker node ran one lightweight Alpine pod, deployed via a StatefulSet, on which the benchmarks were executed. The entire set of benchmarks was performed three times, each time on a freshly deployed cluster. Within each benchmark, we measured the performance between all three nodes/pods.
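As an illustration of the workload layout, a StatefulSet along the lines of the following sketch gives one long-running Alpine pod per worker node; the names and image tag are placeholders of ours, not taken from the original manifests:

kubectl apply -f - <<'EOF'
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: netperf
spec:
  # A matching headless Service named "netperf" would normally accompany this
  serviceName: netperf
  replicas: 3
  selector:
    matchLabels:
      app: netperf
  template:
    metadata:
      labels:
        app: netperf
    spec:
      # One pod per worker node, so every measurement crosses the node network
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
          - labelSelector:
              matchLabels:
                app: netperf
            topologyKey: kubernetes.io/hostname
      containers:
      - name: alpine
        image: alpine:3.12
        # Keep the container alive; ping/iperf3 are then run via kubectl exec
        # (iperf3 can be installed inside with "apk add iperf3")
        command: ["sh", "-c", "while true; do sleep 3600; done"]
EOF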

To test the TCP network throughput between pods, we used iperf3. CPU utilization was measured during the throughput benchmark using Prometheus deployed on the master node, to avoid any possible influence of the monitoring tools on the measurements.

Table of versions:

Ubuntu 20.04
Kubernetes v1.19.0
Calico v3.17.1
Cilium v1.9.1
Linkerd 2.9.2
Wireguard v1.0.2

ping command:

ping -c 100 -i 0.1 <target-pod-IP>

iperf3 client command:

iperf3 -c <server-pod-IP> -f M

Latency benchmark

Average ping latency in milliseconds bar chart

Network latency is a major aspect of a high-performance container environment. As the graph above shows, Wireguard still delivers quite low latency compared to the cluster without encryption. To achieve automated and reproducible execution, the pings were run via a bash script that makes the pods ping each other and parses the resulting values. The final result is calculated from all of the measured latency values between the pods.
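A minimal sketch of such a script, assuming it runs inside one of the Alpine pods and that the peer pod IPs are known in advance (the IPs below are hypothetical placeholders):

#!/bin/sh
# Ping every peer pod 100 times at 100 ms intervals and pull out the
# average round-trip time from the summary line.
PEER_IPS="10.244.1.10 10.244.2.10"   # placeholder pod IPs
for ip in $PEER_IPS; do
  avg=$(ping -c 100 -i 0.1 "$ip" | tail -1 | awk -F'= ' '{print $2}' | cut -d/ -f2)
  echo "$ip avg ${avg} ms"
done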

Throughput benchmark

Average TCP throughput in MBytes/sec bar chart

Similarly, iperf3 was executed via a bash script to measure throughput between all the pods. An iperf3 run between two pods (client and server) reports the amount of data transmitted by the client in ten one-second intervals. The final result is the average of these intervals across all pod pairs. As expected, Wireguard performed worse than the cluster without encryption, but surprisingly far better than Linkerd mTLS and Cilium IPsec.
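A comparable sketch for the throughput runs, assuming an iperf3 server ("iperf3 -s") is already listening in every peer pod (again, the IPs are placeholders):

#!/bin/sh
# Run a default 10-second iperf3 test against each peer and print the
# receiver-side summary in MBytes/sec.
PEER_IPS="10.244.1.10 10.244.2.10"   # placeholder pod IPs
for ip in $PEER_IPS; do
  iperf3 -c "$ip" -f M | awk -v ip="$ip" '/receiver/ {print ip, $(NF-2), $(NF-1)}'
done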

CPU utilization benchmark

Average CPU utilization bar chart

The main reason we measured CPU utilization during the throughput benchmark was to verify that none of the encryption methods puts significant CPU pressure on the cluster. All values were collected via a Node Exporter-Prometheus-Grafana stack.
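As an illustration of the kind of query behind that graph (the Prometheus host name below is a placeholder), the average non-idle CPU time per node can be read from Node Exporter metrics via the Prometheus HTTP API:

# Percentage of CPU time spent outside the "idle" mode, averaged per node
curl -sG 'http://prometheus.example:9090/api/v1/query' \
  --data-urlencode 'query=100 * (1 - avg by (instance) (rate(node_cpu_seconds_total{mode="idle"}[1m])))'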

Conclusion

The benchmark revealed important performance differences between the tested encryption techniques. Wireguard has a significant throughput advantage over Linkerd mTLS and Cilium IPsec, although at the cost of increased latency compared to the unencrypted cluster; latency is a critical aspect of a distributed microservice system.
