Linkerd 2.0 and Istio Performance Benchmark

Ivan Sim
Dec 20, 2018

Following my lightning talk in the Intro: Linkerd session at KubeCon NA 2018, a few people expressed interest in my performance benchmark results, in which I compared a Linkerd2-meshed setup against an Istio-meshed setup on GKE using Fortio. This blog post is a write-up of the results.

All the scripts and report logs can be found in my GitHub repository.

Environment Setup

The Terraform scripts used in this experiment provision a GKE 1.11.2-gke.18 cluster in the us-west1-a zone. Once the cluster is ready, the following components are installed:

  • Istio 1.0.3
  • Linkerd2 edge-18.11.1
  • Fortio 1.3.1

Istio was installed following the Helm installation instructions provided in the Istio documentation.
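For reference, the installs looked roughly like this (a sketch based on the default commands in the Istio 1.0 Helm instructions and the Linkerd2 CLI docs, not necessarily the exact commands used in this experiment):

    # Istio 1.0.x via Helm, using the default release name and namespace from its docs.
    helm install install/kubernetes/helm/istio --name istio --namespace istio-system

    # Linkerd2 via its CLI, rendering the control plane manifest and applying it.
    linkerd install | kubectl apply -f -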

Fortio was chosen as the load generator, echo server and report server for its minimal dependencies and its support for HTTP, HTTP/2, GRPC and TLS. It also offers useful command-line arguments to control the queries-per-second rate (-qps), the number of connections (-c), the duration of the tests (-t) and the exact number of calls (-n). Visit its GitHub repository for more information.
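To illustrate those flags, a Fortio HTTP load test can be invoked along these lines (the target URL here is hypothetical):

    # 1000 queries per second over 32 connections for 30 seconds.
    fortio load -qps 1000 -c 32 -t 30s http://echo.example.svc.cluster.local:8080

    # Or send an exact number of calls instead of running for a fixed duration.
    fortio load -qps 1000 -c 32 -n 10000 http://echo.example.svc.cluster.local:8080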

Table 1 describes the GKE node pools formation.

Table 1: GKE node pools formation

The Linkerd2 and Istio control planes, along with all the kube-system components, are deployed on an n1-standard-2 machine. The echo servers and load generator are deployed on dedicated n1-standard-1 machines using a combination of node taints and node selectors.
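As a rough sketch of that pinning (the node and label names here are hypothetical; the actual setup is done by the Terraform scripts in the repository):

    # Taint the dedicated node so that only pods with a matching toleration are
    # scheduled onto it, and label it so the echo server and load generator
    # Deployments can target it with a nodeSelector.
    kubectl taint nodes echo-node-1 dedicated=echo-server:NoSchedule
    kubectl label nodes echo-node-1 dedicated=echo-server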

Test Setup

This gke.sh script was used to run a series of load tests. Each test case started by using Fortio to send a load of HTTP requests to the echo servers, followed by a load of GRPC ping requests. The load was sustained for a predefined period.
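A simplified sketch of one round of HTTP tests might look like this (the host name, labels, result paths and repetition count are illustrative; the real logic lives in gke.sh, and the GRPC ping runs follow the same pattern):

    # Repeat the same maximum-rate HTTP test case several times, labeling each run
    # and writing the JSON results out for the report server.
    for run in $(seq 1 10); do
      fortio load -qps 0 -c 32 -t 30s -labels "baseline-http-$run" \
        -json /var/lib/fortio/results/baseline-http-$run.json \
        http://echo.example.svc.cluster.local:8080
    done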

Each test case was repeated at least 10 times on the baseline, Linkerd2-meshed and Istio-meshed echo servers.

The focus of these tests was on intra-cluster service-to-service HTTP and GRPC ping requests. No TLS or ingress traffic was involved.

The result logs were saved to a persistent disk and read by the Fortio report server.
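Fortio's report-only mode can then serve those saved JSON results, for example (the data directory here is hypothetical):

    # Start the Fortio report server, pointing it at the directory of saved results.
    fortio report -data-dir /var/lib/fortio/results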

Understanding The Charts

All the result charts utilize dual y-axes to depict the latency in milliseconds and the queries-per-second rates.

The non-black, colored lines, corresponding to the primary y-axis on the left, represent the latency. The colors of the lines depict the minimum, median, maximum and different percentile values, as defined in the legend.

The black lines in the charts represent the queries-per-second rates, and they correspond to the secondary y-axis on the right.

Benchmark Results (HTTP Requests)

This experiment started with an attempt to determine how much load (in terms of queries-per-second) the echo servers could handle under the given setup. The Fortio load generator was started with the following configurations:

  • Queries per second (-qps): 0 (maximum rate)
  • Number of connections (-c): 32
  • Test duration (-t): 30 seconds

No resource limits were set on the CPU and memory utilization of either the Linkerd2 or Istio proxies.

(During the test runs, it was observed that the istio-telemetry HPA was triggered, autoscaling the number of istio-telemetry pods from 1 to 5. I assumed this had no significant impact on the performance of the Istio proxies.)

Chart 1 shows the baseline latency. The queries-per-second rates (black) ranged from 30K to 35K per second, with the p99.9 latency (red) hovering between 5.6 ms and 6.2 ms.

(The maximum latency line was turned off to remove the outliers.)

Chart 1: Baseline latency (HTTP requests, 0qps, 32c, 30s)

What happened when a Linkerd2 proxy was dropped in between the load generator and the echo server? The queries-per-second rates (black) dropped to about 10K to 12K per second. The p99.9 latency (red) ranged from 8.6 ms to 11.1 ms, as seen in Chart 2.

Chart 2: Linkerd2-meshed latency (HTTP requests, 0qps, 32c, 30s)

When the same tests were repeated on the Istio-meshed setup, the queries-per-second rates (black) were a lot lower, ranging from 3.2K to 3.9K per second. The p99.9 latency (red) ranged from 32.5 ms to 72.6 ms. The more consistent p99 latency (orange) ranged from 23.2 ms to 35.0 ms.

Chart 3: Istio-meshed latency (HTTP requests, 0qps, 32c, 30s)

Once the queries-per-second rates of all the setups were determined, the next step in this experiment was to repeat the load tests across all the echo servers using only 75% of the lowest rate. In this case, the lowest rate came from the Istio-meshed setup, giving a cap of roughly 0.75 × 3,165 ≈ 2,374 queries per second. This ensured that the load generator wouldn’t incur "sleep falling behind" warnings when testing against the Istio-meshed echo servers. For more information on how to improve the accuracy of the results, refer to the Fortio FAQ.

The load generator was restarted with the following configurations (a sample invocation is sketched after the list):

  • Queries per second (-qps): 2374
  • Number of connections (-c): 32
  • Test duration (-t): 5 minutes
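Putting those flags together, each capped HTTP run looked roughly like this (the target URL is hypothetical):

    # Cap the rate at 2374 queries per second and sustain the load for 5 minutes.
    fortio load -qps 2374 -c 32 -t 5m http://echo.example.svc.cluster.local:8080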

The next chart shows the baseline latency. The p99.9 latency ranged from 5.0 ms to 6.0 ms.

Chart 4: Baseline latency (HTTP requests, 2374qps, 32c, 5 mins)

On the Linkerd2-meshed setup, the p99.9 latency (red) ranged from 8.0 ms to 12.0 ms.

Chart 5: Linkerd2-meshed latency (HTTP requests, 2374qps, 32c, 5 mins)

Finally, Chart 6 shows the p99.9 latency (red) incurred by the Istio-meshed setup, ranging from 35.0 ms to 55.0 ms. The p99 latency (orange) fell in the range of 22.6 ms to 27.2 ms.

Chart 6: Istio-meshed latency (HTTP requests, 2374qps, 32c, 5 mins)

Benchmark Results (GRPC Ping)

A similar experiment was performed using GRPC ping queries. As in the case of the HTTP benchmark tests, the first part of this experiment attempted to determine the load that the echo servers could handle. This was achieved by starting the Fortio load generator with the following configurations (a sample invocation is sketched after the list):

  • Queries per second (-qps): 0 (maximum rate)
  • Number of connections (-c): 32
  • Test duration (-t): 30 seconds
  • GRPC ping: -grpc -ping
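A corresponding GRPC ping invocation looks roughly like this (the target host is hypothetical; 8079 is Fortio's default GRPC port):

    # Maximum-rate GRPC ping load over 32 connections for 30 seconds.
    fortio load -grpc -ping -qps 0 -c 32 -t 30s echo.example.svc.cluster.local:8079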

No resource limits were set on the CPU and memory utilization of the Linkerd2 and Istio proxies.

Chart 7 shows the latency incurred by the baseline setup. The queries-per-second rates (black) ranged from 14K to 16K per second, with a p99.9 latency (red) of 9.2 ms to 12.0 ms.

Chart 7: Baseline latency (GRPC ping requests, 0qps, 32c, 30s)

In the Linkerd2-meshed setup, the queries-per-second rates (black) ranged from 6.5K to 8.5K. The p99.9 latency (red) ranged from 11.5 ms to 16.5 ms, as seen in Chart 8.

Chart 8: Linkerd2-meshed latency (GRPC ping requests, 0qps, 32c, 30s)

The Istio-meshed setup could handle about 2.8K to 3.6K queries per second (black), with the p99.9 latency (red) ranging from 30.0 ms to 75.0 ms. The p99 latency (orange) was more consistent, ranging from 18 ms to 30 ms.

Chart 9: Istio-meshed latency (GRPC ping requests, 0qps, 32c, 30s)

To repeat the load tests at 75% of the lowest rate previously determined, the load generator was restarted with the following configurations:

  • Queries per second (-qps): 2113
  • Number of connections (-c): 32
  • Test duration (-t): 5 minutes

The baseline setup incurred a latency of 9.0 ms to 11.0 ms, as seen in the next chart.

Chart 10: Baseline latency (GRPC ping requests, 2113qps, 32c, 5 mins)

The latency observed in the Linkerd2-meshed setup ranged from 11.0 ms to almost 14.0 ms.

Chart 11: Linkerd2-meshed latency (GRPC ping requests, 2113qps, 32c, 5 mins)

Finally, the latency observed in the Istio-meshed setup ranged from 36.0 ms to 45.0 ms.

Chart 12: Istio-meshed latency (GRPC ping requests, 2113qps, 32c, 5 mins)

Conclusion

In this experiment, both the Linkerd2-meshed and the Istio-meshed setups experienced higher latency and lower throughput than the baseline setup. The latency incurred in the Istio-meshed setup was higher than that observed in the Linkerd2-meshed setup, and the Linkerd2-meshed setup was able to handle higher HTTP and GRPC ping throughput than the Istio-meshed setup.

A future performance benchmark will involve using Fortio to generate TLS traffic, as well as POST requests with configurable payload sizes.

In my next post, I will share the memory and CPU utilization data of both the Linkerd2 and Istio proxies.

Special thanks to @alenkacz for encouraging me to write this blog post.

Disclosure: As of Feb 2019, I have joined Buoyant Inc. as a software engineer to work on the open source Linkerd 2.x project. This article was posted prior to my employment with Buoyant Inc.
