Exponential Backoff in NestJS

I was recently building a REST API and fell in love with NestJS. It has everything you could wish for - sensible abstractions, clean separation of routing and business logic, and lots of helpful utilities. Throw in TypeORM with class-validator and you have yourself a really pleasant and scalable backend development experience.
Creating some basic API resources and auth middleware with JWT access tokens was a breeze, so when I set out to add rate limiting to the API, I was certain this would only take a few minutes. In principle, this was true - NestJS has a nice ThrottlerModule that can be added as a global guard and will block repeated requests that exceed a certain frequency. For example, to allow 3 requests per second and 100 requests per minute, you would add the following config to your AppModule:
import { Module } from '@nestjs/common';
import { ThrottlerModule } from '@nestjs/throttler';

@Module({
  imports: [
    ThrottlerModule.forRoot([
      {
        name: 'short',
        ttl: 1000,
        limit: 3,
        blockDuration: 60000,
      },
      {
        name: 'long',
        ttl: 60000,
        limit: 100,
        blockDuration: 60000,
      },
    ]),
  ],
})
export class AppModule {}

The fine-grained control over request frequencies is nice, and the blockDuration parameter lets you specify how long a user should be kept out if they violate the rate limit. However, the block penalty stays the same, even for repeated violations. After some digging, I was surprised to realize NestJS didn't offer exponential backoff - a strategy where the penalty doubles with every violation (up to a maximum). Exponential backoff is a well-established way to protect critical routes like auth endpoints and discourage attackers from brute-force attempts. If you have ever repeatedly entered the wrong PIN on your iPhone and been subjected to increasingly tormenting retry intervals, you know what I'm talking about.
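To make that penalty schedule concrete, here is a tiny illustration of how doubling with a cap plays out. The helper backoffMs and its parameters are purely illustrative and not part of the guard we'll build:

// Illustration only: block duration (ms) after the nth violation,
// doubling from a 60 s base and capped at 5 minutes
const backoffMs = (violations: number, baseMs = 60_000, maxMs = 300_000): number =>
  Math.min(maxMs, baseMs * 2 ** (violations - 1));

// 1st..5th violation -> 60000, 120000, 240000, 300000, 300000
console.log([1, 2, 3, 4, 5].map((n) => backoffMs(n)));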
The Guard
I needed exponential backoff, so a custom throttler it was. My requirements were:
- Extend the existing ThrottlerGuard mechanism.
- Block requests from the same IP or user ID (if logged in).
- Adjust the block duration on repeated violations, up to a maximum.
The first two requirements were easy to implement, as you can simply extend the ThrottlerGuard class and override the getTracker method:
// exp-backoff-throttler.guard.ts
import { Injectable } from '@nestjs/common';
import { ThrottlerGuard } from '@nestjs/throttler';

@Injectable()
export class ExpBackoffThrottlerGuard extends ThrottlerGuard {
  protected async getTracker(req: Record<string, any>): Promise<string> {
    return req.user?.id || req.ip;
  }
}

The third one turned out to be tricky. It took me a while to realize that you cannot flexibly override the blocking mechanism, as the hit count and timeToExpire are opaquely managed inside the storageService.increment method. Since I didn't want to write a new storage service at this point, my throttler would have to maintain its own hit count and block expiration.
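For context, the relevant storage contract in @nestjs/throttler looks roughly like the sketch below (paraphrased; the exact signature varies between versions). The guard only receives the finished record back, with no hook to influence how totalHits or the expiry are computed:

// Approximate shape of the @nestjs/throttler storage contract (check your installed version)
interface ThrottlerStorageRecord {
  totalHits: number;
  timeToExpire: number;
  isBlocked: boolean;
  timeToBlockExpire: number;
}

interface ThrottlerStorage {
  increment(
    key: string,
    ttl: number,
    limit: number,
    blockDuration: number,
    throttlerName: string,
  ): Promise<ThrottlerStorageRecord>;
}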
I created a basic BlockEntry that remembers the minimum and current block duration, the hit count of the requester and the timestamp when the block will be lifted:
// exp-backoff-throttler.guard.ts
type BlockEntry = {
  minDuration: number;
  blockDuration: number;
  totalHits: number;
  blockedUntil: number;
};
...

With this, we can track every requester in a blockMap:
// exp-backoff-throttler.guard.ts
...
@Injectable()
export class ExpBackoffThrottlerGuard extends ThrottlerGuard {
  private blockMap: Map<string, BlockEntry> = new Map();
  ...
}

The parent ThrottlerGuard class calls its throwThrottlingException method whenever a rate limit is violated, so we can override it to update the blockMap:
// exp-backoff-throttler.guard.ts
...
@Injectable()
export class ExpBackoffThrottlerGuard extends ThrottlerGuard {
  private blockMap: Map<string, BlockEntry> = new Map();
  // Cap the block duration at 5 minutes
  private readonly MAX_BLOCK_DURATION = 5 * 60 * 1000;
  ...
  protected throwThrottlingException(
    context: ExecutionContext,
    throttlerLimitDetail: ThrottlerLimitDetail,
  ): Promise<void> {
    const { key, totalHits } = throttlerLimitDetail;
    const blockEntry = this.blockMap.get(key) as BlockEntry;
    const { blockDuration } = blockEntry;
    this.blockMap.set(key, {
      ...blockEntry,
      blockedUntil: Date.now() + blockDuration,
      // Double the block duration for every hit (up to the max)
      blockDuration: Math.min(this.MAX_BLOCK_DURATION, blockDuration * 2),
      totalHits,
    });
    return super.throwThrottlingException(context, throttlerLimitDetail);
  }
}

Finally, we have to override the handleRequest method to detect whether a hit occurred within the period tracked by the blockMap:
// exp-backoff-throttler.guard.ts
...
@Injectable()
export class ExpBackoffThrottlerGuard extends ThrottlerGuard {
  ...
  protected async handleRequest(requestProps: ThrottlerRequest): Promise<boolean> {
    const getThrottlerSuffix = (name?: string) => (name === 'default' ? '' : `-${name}`);
    const {
      context,
      throttler,
      limit,
      ttl,
      blockDuration: dur,
      getTracker,
      generateKey,
    } = requestProps;
    const { req, res } = this.getRequestResponse(context);
    const tracker = await getTracker(req, context);
    const key = generateKey(context, tracker, throttler.name || 'default');
    // Create a new block map entry if it doesn't exist
    if (!this.blockMap.has(key))
      this.blockMap.set(key, {
        minDuration: dur,
        blockDuration: dur,
        totalHits: 0,
        blockedUntil: 0,
      });
    const blockEntry = this.blockMap.get(key) as BlockEntry;
    const { blockDuration, blockedUntil, totalHits } = blockEntry;
    const now = Date.now();
    // Check if a hit occurred within the block period
    if (blockedUntil && now < blockedUntil) {
      // blockDuration was already doubled on the previous violation,
      // so a hit during the block extends it by the new (longer) duration
      const remTime = Math.ceil(blockDuration / 1e3);
      res.header(`Retry-After${getThrottlerSuffix(throttler.name)}`, remTime);
      await this.throwThrottlingException(context, {
        limit,
        ttl,
        key,
        tracker,
        totalHits: totalHits + 1,
        timeToExpire: remTime,
        isBlocked: true,
        timeToBlockExpire: remTime,
      });
    }
    const allowed = await super.handleRequest({
      ...requestProps,
      blockDuration,
    });
    // If we get here, the request is allowed and we lift the block
    if (blockedUntil) {
      this.blockMap.set(key, {
        ...blockEntry,
        blockDuration: blockEntry.minDuration,
        blockedUntil: 0,
      });
    }
    return allowed;
  }
  ...
}

And that's all there is to it. The full code can be found in this Gist; feel free to use it in your own projects.
Using the Guard
To use the new guard, simply add it as a provider in the AppModule:
import { Module } from '@nestjs/common';
import { APP_GUARD } from '@nestjs/core';
import { ExpBackoffThrottlerGuard } from './guards/exp-backoff-throttler.guard';

@Module({
  providers: [
    {
      provide: APP_GUARD,
      useClass: ExpBackoffThrottlerGuard,
    },
  ],
})
export class AppModule {}

NestJS has a nice @Throttle decorator that lets you override rate limits for individual routes. In my API I have generous rate limits on most routes (since they are all auth-protected anyway), but do enforce a strict limit of 2 tries per minute on the public auth token endpoint:
@Controller('auth')
export class AuthController {
  @Public()
  @Throttle({ default: { ttl: 60000, limit: 2 } })
  @Post('token')
  async createToken(@Body() authDto: AuthDto) {
    ...
  }
}

When you hit this endpoint with cURL or Postman, the response headers will look like this:
curl -D - --json '{ ... }' http://localhost:3000/auth/token
HTTP/1.1 200 OK
...
X-RateLimit-Limit: 2
X-RateLimit-Remaining: 1
X-RateLimit-Reset: 60

Hit it a second time and you get:
HTTP/1.1 200 OK
...
X-RateLimit-Limit: 2
X-RateLimit-Remaining: 0
X-RateLimit-Reset: 54

On the third try:
HTTP/1.1 429 Too Many Requests
...
Retry-After: 60

… and the fourth:
HTTP/1.1 429 Too Many Requests
...
Retry-After: 120

The Retry-After value keeps increasing all the way up to MAX_BLOCK_DURATION (5 minutes in my case). If you refrain from sending further requests during the penalty timeout, you'll get back into the API's good graces. Nice!
Future Improvements
As I am currently only using a single backend instance, the custom guard works well for my needs. I see two improvements that might be worth tackling in the future:
Larger deployments will require multiple API servers and thus need a distributed throttler cache for the rate limit to be enforced consistently. The community already provides such a throttler in the throttler-storage-redis package; this could be forked to provide a version supporting exponential backoff.
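For reference, wiring in a Redis-backed storage today looks roughly like the sketch below; I'm assuming the @nest-lab/throttler-storage-redis flavour of the package here, and the exact constructor arguments may differ between versions. Note that the blockMap in our custom guard would still live in each instance's memory, which is exactly why a forked, backoff-aware storage would be needed:

// app.module.ts - a sketch, assuming the @nest-lab/throttler-storage-redis package
import { Module } from '@nestjs/common';
import { ThrottlerModule } from '@nestjs/throttler';
import { ThrottlerStorageRedisService } from '@nest-lab/throttler-storage-redis';

@Module({
  imports: [
    ThrottlerModule.forRoot({
      throttlers: [{ name: 'short', ttl: 1000, limit: 3, blockDuration: 60000 }],
      // Shared Redis store so all API instances see the same hit counts
      storage: new ThrottlerStorageRedisService('redis://localhost:6379'),
    }),
  ],
})
export class AppModule {}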
In the medium term, it would be nice if exponential backoff were integrated into the core NestJS throttler. This would require extracting the hit count and expiry calculation from the abstracted storage provider and would thus likely be a breaking change. Seeing how exponential backoff is a pretty fundamental rate limiting strategy, I do think it would be worth it.
Thanks for tuning in. As always, if you like this kind of content or have questions, please comment below and subscribe to the RSS feed to be notified of future posts.