Sector 7G's Learning Center

Articles • Tips • Tutorials

Dial in Latency with netem

Jun 16, 2021 | Greg Butler

Audience & Level: Technical & Intermediate

Intro
Understanding Network Latency concluded by reinforcing the need to test under network latency delays, and by noting that doing so need not be complicated. This article introduces netem, a network emulation facility built into the Linux kernel and driven by the tc command; as its name implies, it performs network emulation.

Starting simply
netem can do much more than add latency to a network interface: rate shaping, jitter, packet loss, duplication, and reordering are all available, so you can make a network as slow and messy as you'd like while still having one. Third-party emulation products exist, but if they are not using netem under the hood, maybe they should. With a few Linux commands, we'll "inject" delay, quickly and accurately emulating the additional response time attributable to network latency.

Pre-reqs

  • Dev/test environment. This should never be done in any other environment.
  • A Linux server where it is realistic to add latency. Here we'll add latency to a front-end load balancer so URLs and other config remain unchanged. We'll refer to this as "lb01".
  • A client to test latency. We'll use another machine on the same LAN/vnet. We'll refer to this as "client01".
  • A "backend" machine proxied by lb01 and also on the same vnet. We'll refer to this as "http01" and ensure our injected latency does not affect the traffic between it and lb01.
  • Desired latency (full RTT). We’ll use 100ms here.

Steps

  • Measure baseline latency by pinging lb01 from client01 and http01 from lb01.

    client01 -> lb01
    $ ping 10.2.2.4
    rtt avg 0.782 ms

    lb01 -> http01
    $ ping 10.2.3.4
    rtt avg 0.758 ms

  • Execute the following on lb01 (the node where latency will be injected):
    $ sudo tc -s qdisc

    Output detail may vary by distro and network config, and the details are not important here, but save the output: it represents the state of the machine without any network emulation.

  • Execute the following on lb01 (assumes eth0 is the applicable interface):
    $ sudo tc qdisc add dev eth0 root handle 1: prio
    $ sudo tc qdisc add dev eth0 parent 1:1 handle 10: netem delay 100ms
    $ sudo tc filter add dev eth0 protocol ip parent 1:0 prio 2 u32 match ip dst [client01’s IP]/32 flowid 1:1

    With the last command, and assuming “client01’s” IP is 10.2.4.4, we’d execute:

    $ sudo tc filter add dev eth0 protocol ip parent 1:0 prio 2 u32 match ip dst 10.2.4.4/32 flowid 1:1

    The first command puts eth0 under tc (traffic control) with a prio qdisc, the second adds a 100ms packet delay, and the third applies that delay only when the destination IP is, in this case, client01's. Granted, "real world" network latency occurs on both send and receive, but to keep this illustration simple, we'll add the complete RTT to outbound packets headed to the client. A more accurate emulation would delay packets on both client and server (50ms each).
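To make the split-delay variant concrete, here is a sketch assuming the same eth0 interface and the example IPs from this article (10.2.2.4 for lb01, 10.2.4.4 for client01). DRY_RUN=1 (the default here) prints the commands instead of running them, so you can review before applying them for real.

```shell
# Split the 100ms RTT into 50ms of one-way delay on each machine.
DRY_RUN=${DRY_RUN:-1}
IFACE=eth0
DELAY=50ms

run() {
    # Echo the command in dry-run mode; otherwise execute it with sudo.
    if [ "$DRY_RUN" = "1" ]; then
        echo "$@"
    else
        sudo "$@"
    fi
}

# On lb01: delay outbound packets to client01 (10.2.4.4) by 50ms.
run tc qdisc add dev "$IFACE" root handle 1: prio
run tc qdisc add dev "$IFACE" parent 1:1 handle 10: netem delay "$DELAY"
run tc filter add dev "$IFACE" protocol ip parent 1:0 prio 2 u32 match ip dst 10.2.4.4/32 flowid 1:1

# On client01: the same three commands, but matching lb01's IP (10.2.2.4) in the filter.
```

Run once with DRY_RUN=1 to inspect, then re-run with DRY_RUN=0 on each machine.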

  • Execute the following again on lb01:
    $ sudo tc -s qdisc

    Output will differ from the pre-injection run, most notably in a line resembling this:
    qdisc netem 10: dev eth0 parent 1:1 limit 1000 delay 100.0ms
  • Verify you’ve added 100ms only between client01 and lb01, and not between lb01 and http01, with ping.
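The before/after comparison can be scripted. A small sketch, assuming iputils ping output (the Linux "rtt min/avg/max/mdev = ..." summary line); the helper names and the 20% tolerance are arbitrary choices for illustration.

```shell
# Extract the average RTT (in ms) from ping's summary line.
avg_rtt() {
    # $1 is the full ping output.
    printf '%s\n' "$1" | awk -F'/' '/^rtt/ { print $5 }'
}

# Succeed if measured RTT is roughly baseline + injected delay (all in ms).
delay_applied() {
    # $1 = baseline RTT, $2 = measured RTT, $3 = injected delay
    awk -v base="$1" -v now="$2" -v inj="$3" 'BEGIN {
        diff = now - base
        exit !(diff > inj * 0.8 && diff < inj * 1.2)
    }'
}

# Example, from client01:
#   baseline=0.782
#   now=$(avg_rtt "$(ping -c 4 10.2.2.4)")
#   delay_applied "$baseline" "$now" 100 && echo "delay present"
```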

Reverting
Return a "netem'd" adapter to its default, emulation-free state by executing:
$ sudo tc qdisc del dev eth0 root handle 1: prio

And of course ping to verify.
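Beyond ping, you can confirm the revert took by scanning tc's listing for any remaining netem qdisc. A quick sketch, again assuming eth0:

```shell
# Succeed if a netem qdisc still appears in the given tc output.
netem_active() {
    printf '%s\n' "$1" | grep -q 'netem'
}

# Example: netem_active "$(tc qdisc show dev eth0)" && echo "still emulating"
```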

Can’t Revert?
netem is well documented, but as with anything, it's possible for something not to go according to plan. Fortunately, netem config does not persist across reboots, so when all else fails:

$ sudo reboot