Welcome to part 2, where we’ll take a look at how to test the rate limiter.
First of all, we have to make a couple of changes to the class we created in part 1.
The first change is to make the interval and timeout instance variables configurable during initialization. This allows us to speed up tests: we can make the limiter wait 100ms between calls instead of the default 1s. We also leave the initial values as defaults.
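To make that change concrete, here is the configurable-defaults pattern in isolation. Widget is a hypothetical stand-in for the limiter, not the real class:

```ruby
# Sketch of the keyword-arguments-with-constant-defaults pattern.
# Widget is a hypothetical stand-in for the rate limiter class.
class Widget
  DEFAULT_INTERVAL = 1  # seconds, used when the caller passes nothing
  DEFAULT_TIMEOUT = 15

  def initialize(interval: DEFAULT_INTERVAL, timeout: DEFAULT_TIMEOUT)
    @interval = interval
    @timeout = timeout
  end

  attr_reader :interval, :timeout
end

Widget.new.interval                 # => 1 (default)
Widget.new(interval: 0.1).interval  # => 0.1 (overridden, e.g. in tests)
```

Production code that omits the keyword keeps the old behaviour, while tests can pass a faster value.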
The second change, the @redis_key variable, makes the key configurable and allows us to run test groups in succession without having to wait for the interval to end. We will call redis.del directly from our tests, and because the key is configurable, our tests don’t have to know the DEFAULT_REDIS_KEY value.
class RateLimiter
TimedOut = ::Class.new(::StandardError)
DEFAULT_REDIS_KEY = "harmonogram_#{Rails.env}_toggl_api_rate_limiter_lock".freeze
DEFAULT_INTERVAL = 1 # seconds between subsequent calls
DEFAULT_TIMEOUT = 15 # maximum amount of time a single call should wait for a time slot
def initialize(redis = Redis.current, redis_key: DEFAULT_REDIS_KEY, interval: DEFAULT_INTERVAL, timeout: DEFAULT_TIMEOUT)
@redis = redis
@redis_key = redis_key
@interval = interval
@timeout = timeout
end
# ...
attr_reader :redis, :redis_key, :interval, :timeout
The tests
Let’s move on to writing tests. First, the setup.
RSpec.describe RateLimiter do
subject(:rate_limiter) { described_class.new(redis, interval: interval, timeout: timeout, redis_key: redis_key) }
let(:redis) { Redis.current }
let(:timeout) { 1 } # 1s to fail fast
let(:interval) { 0.1 } # 100ms to make tests faster, but avoid false positives
let(:redis_key) { 'rate_limiter_test_key' }
before do
# reset the limiter to avoid unnecessary delay between examples
redis.del(redis_key)
end
end
We use a named subject because, well, that’s just my preference. Then we set the timeout and interval low enough to speed the tests up but high enough to see if all of this actually works.
describe '#with_limited_rate' do
it 'runs the provided block' do
expect { |b| rate_limiter.with_limited_rate(&b) }.to yield_control
end
it 'returns the value returned from provided block' do
expect(rate_limiter.with_limited_rate { 123 }).to eq 123
end
end
First we add a couple of sanity checks.
There’s no point in checking whether calls are rate limited if the basics don’t work.
At this point, accidentally removing the yield or adding extra lines after it will trigger test failures.
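Stripped of the Redis and sleep logic, the contract those two examples pin down is just this (a simplified sketch, not the real implementation):

```ruby
# Simplified sketch of what the sanity checks verify: the method must
# run the block exactly once and pass its return value through. The
# real with_limited_rate also sleeps and talks to Redis.
def with_limited_rate
  yield # must stay the last expression so the block's value is returned
end

calls = 0
result = with_limited_rate { calls += 1; 123 }
calls  # => 1   (the block ran exactly once)
result # => 123 (its return value is passed through)
```

Adding code after the yield would change the return value, and removing the yield would stop the block from running, which is exactly what the two examples catch.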
The second step is to make sure that subsequent calls to with_limited_rate get executed with the configured delay.
def calculate_interval(times)
# smallest difference between consecutive timestamps
times.sort.each_cons(2).map { |a, b| b - a }.min
end
# ...
context 'when called multiple times' do
it 'runs the provided blocks in sequence with specified interval', :aggregate_failures do
times = []
rate_limiter.with_limited_rate { times << Time.now }
rate_limiter.with_limited_rate { times << Time.now }
expect(times.count).to eq(2)
expect(calculate_interval(times)).to be_within(0.06).of(interval)
end
end
To do that we create an empty array and, within the blocks passed to the limiter, add timestamps to it. After running, we check whether the smallest time difference between subsequent timestamps is within 60ms of our configured interval. We add a calculate_interval method that helps us with the calculation.
Why 60ms?
Remember the magical offset from part 1? It adds 10-50ms to each sleep to allow us to prioritize callers with more retries. The highest offset is 50ms, and we add another 10ms to account for execution time, giving us 60ms.
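To sanity-check the helper on concrete numbers, here is an equivalent ascending-sort formulation applied to plain floats, which subtract the same way Time objects do:

```ruby
# Equivalent formulation of the helper: smallest gap between
# consecutive timestamps.
def calculate_interval(times)
  times.sort.each_cons(2).map { |a, b| b - a }.min
end

# Three calls at t = 0, 0.1 and 0.3 seconds: gaps are 0.1 and 0.2,
# so the minimum interval is 0.1.
calculate_interval([0.0, 0.1, 0.3]) # => 0.1
```

Taking the minimum is the strict choice: if even the closest pair of calls respects the interval, every pair does.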
Now, we can take it one step further and test if the same happens when multiple instances of the limiter are used.
context 'when multiple instances are called at the same time' do
it 'runs the provided blocks in sequence with specified interval', :aggregate_failures do
times = []
2.times do
described_class.new(interval: interval, timeout: timeout, redis_key: redis_key).with_limited_rate do
times << Time.now
end
end
expect(times.count).to eq(2)
expect(calculate_interval(times)).to be_within(0.06).of(interval)
end
end
The idea is the same as before, but we’re creating a new instance for each call.
So far so good. Now we can take it to the next level by testing if rate limiting works when called from multiple threads.
context 'when called from multiple threads at the same time' do
let(:mutex) { Mutex.new }
it 'runs the provided blocks in sequence with specified interval', :aggregate_failures do
times = []
Array.new(2) do
Thread.new do
described_class.new(interval: interval, timeout: timeout, redis_key: redis_key).with_limited_rate do
mutex.synchronize { times << Time.now }
end
end
end.map(&:join)
expect(times.count).to eq(2)
expect(calculate_interval(times)).to be_within(0.06).of(interval)
end
end
For this purpose we make use of the Mutex class, which makes it possible to safely mutate the same array from different threads.
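In isolation, the pattern looks like this. On MRI the GVL happens to make a bare Array#<< safe, but JRuby and TruffleRuby make no such guarantee, so the mutex is the portable choice:

```ruby
# The synchronization pattern from the spec, in isolation: each thread
# mutates the shared array only while holding the mutex.
mutex = Mutex.new
times = []

threads = Array.new(2) do
  Thread.new do
    mutex.synchronize { times << Time.now }
  end
end
threads.each(&:join)

times.count # => 2, no appends were lost
```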
This is as far as we’ll take it.
Ideally we’d want a test that verifies that rate limiting works for multiple processes too, but frankly I have no idea how to do that.
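For what it’s worth, one rough direction would be to fork real child processes and serialize them through a file lock; everything below (the flock approach standing in for the Redis lock, the temp-file bookkeeping) is purely illustrative and not part of the original suite:

```ruby
require 'tmpdir'

# Illustrative sketch only: two forked processes serialize through an
# exclusive file lock (a stand-in for the Redis-based lock) and record
# when they ran; the parent then checks the gap between timestamps.
interval = 0.2
dir = Dir.mktmpdir
lock_path = File.join(dir, 'lock')
times_path = File.join(dir, 'times')

pids = Array.new(2) do
  fork do
    File.open(lock_path, File::RDWR | File::CREAT) do |lock|
      lock.flock(File::LOCK_EX)  # only one process holds the slot
      File.open(times_path, 'a') { |f| f.puts(Time.now.to_f) }
      sleep interval             # keep the slot for a full interval
    end                          # closing the file releases the lock
  end
end
pids.each { |pid| Process.wait(pid) }

times = File.readlines(times_path).map(&:to_f).sort
gap = times.each_cons(2).map { |a, b| b - a }.min
# gap should be at least roughly `interval`
```

A real version would point the children at the same Redis key instead of flock, but the shape of the assertion stays the same.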
One more thing we could add though is some tests around error handling.
context 'when timeout reached' do
let(:timeout) { 0 }
it 'raises a custom exception' do
expect do
2.times { rate_limiter.with_limited_rate {} }
end.to raise_exception(described_class::TimedOut)
end
end
By setting the timeout to 0 and calling twice, we can trigger the second block to time out. At the same time, we want to make sure that even if a block raises an error, the next call happens after the interval.
context 'when block raises an error' do
it 'allows next call after interval', :aggregate_failures do
times = [Time.now]
begin
rate_limiter.with_limited_rate { raise described_class::TimedOut }
rescue described_class::TimedOut
rate_limiter.with_limited_rate { times << Time.now }
end
expect(times.count).to eq(2)
expect(calculate_interval(times)).to be_within(0.06).of(interval)
end
end
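For this to pass, the limiter has to record its timestamp even when the block raises. Presumably, and this is an assumption about the part 1 implementation, that happens in an ensure clause; TinyLimiter below is a hypothetical illustration:

```ruby
# Assumption: the real limiter stamps the slot in an `ensure` clause,
# so a failing block still starts the interval clock. TinyLimiter is
# a hypothetical illustration, not the part 1 code.
class TinyLimiter
  attr_reader :last_call_at

  def with_limited_rate
    yield
  ensure
    @last_call_at = Time.now # runs even when the block raises
  end
end

limiter = TinyLimiter.new
begin
  limiter.with_limited_rate { raise 'boom' }
rescue RuntimeError
  # swallowed for the example
end
limiter.last_call_at # set despite the error
```

Without the ensure, a failing block would leave no timestamp behind and the next caller could fire immediately, which is exactly what the last spec guards against.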
Fin
So there we go: we’ve created a rate limiter that synchronises multiple processes, then checked it with tests of increasing complexity. As in part 1, you can check out a working demo.