Redis Latency Graph
Client Latency
Redis follows a client-server architecture, and multiple clients may connect to the Redis server simultaneously. Since Redis processes commands in a single thread, requests are queued and served one at a time. Later clients may therefore have to wait, and this queuing is itself a source of latency.
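This queuing effect can be observed with the redis-benchmark tool that ships with Redis. The following sketch, assuming a local server on the default port, simulates 50 concurrent clients issuing SET commands:

```
$ redis-benchmark -c 50 -n 100000 -t set -q
```

Here, "-c" sets the number of parallel client connections, "-n" the total number of requests, and "-q" prints only the summary throughput. Increasing "-c" lets you see how per-request latency grows as more clients compete for the single command-processing thread.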
Command Latency
Every command takes some time to execute, varying from microseconds to seconds, so command execution itself is a latency source. Most Redis commands run in constant or logarithmic time, while some take O(N) time and are considerably slower.
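As a sketch of the difference, assuming a local server with some sample data, compare a constant-time lookup with linear-time operations in a redis-cli session:

```
127.0.0.1:6379> GET mykey
127.0.0.1:6379> LRANGE mylist 0 -1
127.0.0.1:6379> KEYS *
```

GET is O(1), while LRANGE is O(N) in the number of returned elements and KEYS is O(N) in the number of keys in the database. On large datasets, the latter two can block the server noticeably, which is why KEYS is discouraged in production.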
Round-Trip Latency
Round-trip time is the time it takes for a client to receive a response from the Redis server after sending a command. Round-trip latency can have different causes, such as network slowness, fork operations, and OS paging.
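The redis-cli utility has a built-in mode for measuring round-trip latency. The following sketch, assuming a local server, continuously samples the round trip and reports minimum, maximum, and average values in milliseconds (the figures shown are purely illustrative):

```
$ redis-cli --latency
min: 0, max: 2, avg: 0.18 (1021 samples)
```

This is a quick way to distinguish network-level delay from latency caused by slow commands inside the server.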
Redis Latency Monitoring
Real-time applications use Redis where performance is crucial, so it pays to have insight into Redis latency in order to take corrective measures early. Since version 2.8.13, Redis has included a latency monitoring component in its toolbox. This component is capable of recording latency spikes per event or specific code path.
Latency Events or Code Paths
Latency events (code paths) are simply the generic or specific operations performed by Redis, such as generic command execution and the fork and unlink system calls. For generic commands, there are two main events defined by Redis.
- command
- fast-command
The “fast-command” event is defined for Redis commands with O(1) or O(log N) time complexity, such as HSET, HINCRBY, and HLEN.
The “command” code path measures latency spikes for the remaining commands, those with O(N) time complexity.
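As a sketch, assuming latency monitoring is enabled and spikes have been recorded on a local server, the raw samples behind each event class can be inspected with the LATENCY HISTORY subcommand, which returns timestamp/latency pairs:

```
127.0.0.1:6379> LATENCY HISTORY fast-command
127.0.0.1:6379> LATENCY HISTORY command
```

This is useful when you want the exact recorded values rather than the normalized picture that LATENCY GRAPH draws.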
Enabling Latency Monitoring in Redis Server
Latency expectations depend on the application’s nature: one application might consider 10 milliseconds high latency, while another might tolerate 1 second. Hence, Redis offers you an option to define the latency threshold in the server. By default, the threshold value is 0, which disables latency monitoring. There are two ways to set this value in Redis:
- Using the “CONFIG SET” subcommand in runtime
- Modifying the Redis configuration file
The CONFIG SET subcommand
You can use the CONFIG SET subcommand with the “latency-monitor-threshold” parameter to set the threshold at runtime, as shown in the following. Here, we set it to 500 milliseconds.
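A sketch of the redis-cli session, assuming a local server on the default port:

```
127.0.0.1:6379> CONFIG SET latency-monitor-threshold 500
OK
```

The change takes effect immediately, but it does not persist across restarts unless it is also written to the configuration file (or saved with CONFIG REWRITE).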
Modifying the redis.conf file
We can start the Redis server by providing all the configurations in a configuration file, typically called “redis.conf”. In its “LATENCY MONITOR” section, you can set the “latency-monitor-threshold” parameter accordingly.
You must restart the Redis server after modifying the configuration file for the change to take effect.
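A sketch of the relevant fragment of redis.conf (the section name appears as a comment in the stock file):

```
# LATENCY MONITOR section of redis.conf
latency-monitor-threshold 500
```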
The LATENCY GRAPH Subcommand
The “LATENCY” command offers several subcommands to retrieve event-based latency information. One of the most useful is “LATENCY GRAPH”, which renders an ASCII-art graph of latency spikes over the time at which the events happened. The spike heights in the graph are normalized between the minimum and maximum latency values observed.
Let’s use the “debug sleep” command to check how the latency graph information is generated.
Syntax
The “event_name” parameter can be any event defined by the Redis latency monitoring framework, such as command, fast-command, fork, etc.
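The general form of the subcommand:

```
LATENCY GRAPH event_name
```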
Example 01 – Applications With Latency Below the Threshold
Let’s use the “debug sleep” command to generate some latency spikes. It blocks the server for the specified number of seconds. Since the latency threshold is 500 ms, we will first issue sleep commands with a timeout lower than 500 ms.
debug sleep .2
debug sleep .3
Next, we will issue the latency graph command as shown in the following:
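A sketch of the invocation, assuming the DEBUG SLEEP calls are recorded under the “command” event:

```
127.0.0.1:6379> latency graph command
```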
That would ideally generate the ASCII-style latency graph for the previous commands. However, since the execution time is lower than the threshold value in both “debug sleep” commands, Redis will not record any latency spikes. If we assume this is our real-time application, you are all good: there are no latency issues attached.
Output:
As expected, it says no samples are available for this particular event.
Example 02 – Applications With Latency Greater Than the Threshold
Let’s issue some “debug sleep” commands with a timeout value greater than the threshold. It is usually better to reset all previously recorded latency spikes before the next set of commands, as shown in the following:
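A sketch of the reset; LATENCY RESET clears the recorded samples and returns the number of event time series that were reset:

```
127.0.0.1:6379> latency reset
```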
Next, we will issue the debug sleep commands with a timeout value of more than 500 ms.
debug sleep .9
debug sleep 1
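We then request the graph for the “command” event again:

```
127.0.0.1:6379> latency graph command
```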
Output:
As expected, Redis generated the ASCII-styled graph. The “_” symbol denotes the lowest latency value, and the “#” symbol denotes the highest latency spike observed. The graph is read vertically: each column represents an event, and the labels underneath each column show how long ago it occurred, in seconds, minutes, hours, or days. In this output, the left-most column corresponds to an event that happened 20 seconds ago, the next one to an event 14 seconds ago, and the last column to an event that occurred 4 seconds ago.
Conclusion
Redis is used as a data store for real-time applications, so its performance aspects are crucial. The latency monitoring framework is a component offered by Redis to monitor latency spikes for predefined events. The “LATENCY GRAPH” subcommand generates an ASCII-styled view of latency spikes over a given time frame, which helps you identify latency trends in your application and take necessary actions in advance. Spikes are recorded only when an event’s latency exceeds the threshold value, and the right threshold differs from one application to another based on its nature.
Source: linuxhint.com