This guide demonstrates how to configure local rate limiting for L4 TCP connections destined to a target host that is part of an OSM managed service mesh.
## Prerequisites
- A Kubernetes cluster running Kubernetes v1.22.9 or greater.
- Have OSM installed.
- Have `kubectl` available to interact with the API server.
- Have the `osm` CLI available for managing the service mesh.
- OSM version >= v1.2.0.
## Demo
The following demo shows a client `fortio-client` sending TCP traffic to the `fortio` TCP echo service. The `fortio` service echoes TCP messages back to the client. We will see the impact of applying local TCP rate limiting policies targeting the `fortio` service to control the throughput of traffic destined to the service backend.
1. For simplicity, enable permissive traffic policy mode so that explicit SMI traffic access policies are not required for application connectivity within the mesh.
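Permissive traffic policy mode can be enabled by patching the OSM MeshConfig. The sketch below assumes OSM's control plane was installed in the `osm-system` namespace with the default MeshConfig name; adjust both to match your installation.

```shell
# Enable permissive traffic policy mode on the mesh. The namespace
# "osm-system" and MeshConfig name "osm-mesh-config" are OSM's defaults.
kubectl patch meshconfig osm-mesh-config -n osm-system \
  --type merge \
  -p '{"spec":{"traffic":{"enablePermissiveTrafficPolicyMode":true}}}'
```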
2. Deploy the `fortio` TCP echo service in the `demo` namespace after enrolling its namespace to the mesh. The `fortio` TCP echo service runs on port `8078`.

   Confirm the `fortio` service pod is up and running.
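One way to deploy such a service is sketched below; the image, labels, and resource names are illustrative rather than the exact manifests used here. Create the namespace and enroll it with `osm namespace add demo` before applying, so the pod gets a sidecar injected.

```yaml
# Illustrative fortio TCP echo server. Run "kubectl create namespace demo"
# and "osm namespace add demo" before applying this manifest.
apiVersion: v1
kind: Service
metadata:
  name: fortio
  namespace: demo
spec:
  selector:
    app: fortio
  ports:
  - name: tcp-echo
    port: 8078
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: fortio
  namespace: demo
spec:
  replicas: 1
  selector:
    matchLabels:
      app: fortio
  template:
    metadata:
      labels:
        app: fortio
    spec:
      containers:
      - name: fortio
        image: fortio/fortio
        ports:
        - name: tcp-echo
          containerPort: 8078
```

The pod can then be checked with `kubectl get pods -n demo -l app=fortio`.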
3. Deploy the `fortio-client` app in the `demo` namespace. We will use this client to send TCP traffic to the `fortio` TCP echo service deployed previously.

   Confirm the `fortio-client` pod is up and running.
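A minimal client deployment is sketched below; names and labels are illustrative. The `fortio/fortio` image's default entrypoint runs a fortio server, which keeps the pod alive so we can later exec `fortio load` commands inside it.

```yaml
# Illustrative fortio-client deployment in the mesh-enrolled "demo"
# namespace. We only use this pod to exec fortio load commands.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: fortio-client
  namespace: demo
spec:
  replicas: 1
  selector:
    matchLabels:
      app: fortio-client
  template:
    metadata:
      labels:
        app: fortio-client
    spec:
      containers:
      - name: fortio-client
        image: fortio/fortio
```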
4. Confirm the `fortio-client` app is able to successfully make TCP connections and send data to the `fortio` TCP echo service on port `8078`. We call the `fortio` service with 3 concurrent connections (`-c 3`) and send 10 calls (`-n 10`).

   ```
   Total Bytes sent: 240, received: 240
   tcp OK : 10 (100.0 %)
   All done 10 calls (plus 0 warmup) 10.966 ms avg, 226.2 qps
   ```

   As seen above, all the TCP connections from the `fortio-client` pod succeeded.
5. Next, apply a local rate limiting policy to rate limit L4 TCP connections to the `fortio.demo.svc.cluster.local` service to `1 connection per minute`.

   Confirm no traffic has been rate limited yet by examining the stats on the `fortio` backend pod.
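In OSM v1.2+, local rate limiting for an upstream is expressed through the `UpstreamTrafficSetting` resource; a sketch of the policy described above is given below (the resource name is illustrative).

```yaml
# Limit TCP connections to the fortio backend to 1 per minute. The
# UpstreamTrafficSetting resource belongs in the upstream's namespace.
apiVersion: policy.openservicemesh.io/v1alpha1
kind: UpstreamTrafficSetting
metadata:
  name: tcp-rate-limit
  namespace: demo
spec:
  host: fortio.demo.svc.cluster.local
  rateLimit:
    local:
      tcp:
        connections: 1
        unit: minute
```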
6. Confirm TCP connections are rate limited.

   As seen above, only 30% of the 10 calls succeeded, while the remaining 70% were rate limited. This is because we applied a rate limiting policy of 1 connection per minute at the `fortio` backend service, and the `fortio-client` was able to use 1 connection to make 3/10 calls, resulting in a 30% success rate.

   Examine the sidecar stats to further confirm this.
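The sidecar's rate limiting counters can be inspected along these lines; the pod label is an assumption from the deployment sketch above, and the exact stat names are Envoy's `local_rate_limit` counters, which may vary by OSM version.

```shell
# Look up the fortio server pod and dump the Envoy local rate limit
# stats from its sidecar via the osm CLI.
fortio_server="$(kubectl get pod -n demo -l app=fortio \
  -o jsonpath='{.items[0].metadata.name}')"
osm proxy get stats "$fortio_server" -n demo | grep 'local_rate_limit'
```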
7. Next, let’s update our rate limiting policy to allow a burst of connections. Bursts allow a given number of connections over the baseline rate of 1 connection per minute defined by our rate limiting policy.
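A burst allowance can be expressed in the TCP rate limit settings of the `UpstreamTrafficSetting` resource, e.g. as sketched below; the resource name and the burst size of 10 are illustrative.

```yaml
# Allow a burst of 10 connections on top of the 1-connection-per-minute
# baseline rate.
apiVersion: policy.openservicemesh.io/v1alpha1
kind: UpstreamTrafficSetting
metadata:
  name: tcp-rate-limit
  namespace: demo
spec:
  host: fortio.demo.svc.cluster.local
  rateLimit:
    local:
      tcp:
        connections: 1
        unit: minute
        burst: 10
```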
8. Confirm the burst capability allows a burst of connections within a small window of time.

   Reset the stat counters on the `fortio` server pod’s Envoy sidecar to see the impact of rate limiting.

   ```
   Total Bytes sent: 240, received: 240
   tcp OK : 10 (100.0 %)
   All done 10 calls (plus 0 warmup) 1.531 ms avg, 1897.1 qps
   ```

   As seen above, all the TCP connections from the `fortio-client` pod succeeded.

   Further, examine the stats to confirm the burst allows additional connections to go through.
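One way to reset the sidecar's stat counters is through Envoy's admin interface, which OSM exposes on port 15000 by default; the pod label and port are assumptions, and the `/reset_counters` endpoint is Envoy's standard admin route.

```shell
# Port-forward the sidecar's admin interface and POST to /reset_counters
# to zero the Envoy stat counters before re-running the load test.
fortio_server="$(kubectl get pod -n demo -l app=fortio \
  -o jsonpath='{.items[0].metadata.name}')"
kubectl port-forward "$fortio_server" -n demo 15000:15000 &
sleep 2  # give the port-forward a moment to establish
curl -s -X POST localhost:15000/reset_counters
kill %1  # stop the background port-forward
```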