Rate Limiting Bypass in AdonisJS with MongoDB
Rate Limiting Bypass in AdonisJS with MongoDB — how this specific combination creates or exposes the vulnerability
In AdonisJS applications that use MongoDB as the primary data store, rate limiting can be bypassed when the limiting logic is implemented at the application layer without accounting for how MongoDB operations are distributed or retried. AdonisJS does not enforce process-level concurrency limits, so a client can open many parallel requests that each independently pass an in-memory or per-request counter check before any write is issued. If the rate limiter relies only on a request count per IP within a time window and does not coordinate with database-side state, an attacker can split traffic across multiple sources or exploit retry storms caused by transient MongoDB errors.
MongoDB-specific conditions that contribute to bypass include unindexed or inconsistently indexed fields used for rate-limiting keys, which cause slow lookups and race conditions under load. When an application queries a collection to read a counter and then updates it in separate steps (read ➜ compute ➜ write), concurrent requests can read the same pre-update value and each proceed as if the limit has not yet been reached. This is an example of a time-of-check-to-time-of-use (TOCTOU) race condition. If the application uses MongoDB’s findOneAndUpdate without appropriate atomic operators or does not leverage transactions in a multi-document context, the counter increments can be lost or interleaved, effectively multiplying the allowed request volume.
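The read ➜ compute ➜ write race can be reproduced without MongoDB at all. In this hypothetical sketch (all names invented for illustration), the `await` stands in for the database round trip; because every concurrent call performs its read before any write lands, all of them see the same stale count:

```typescript
// Naive limiter with a separate read and write — the pattern described above.
const counters = new Map<string, number>();

async function naiveIsAllowed(key: string, limit: number): Promise<boolean> {
  const current = counters.get(key) ?? 0;       // read (time of check)
  await new Promise((r) => setTimeout(r, 10));  // simulated DB latency
  if (current >= limit) return false;           // compute on a stale value
  counters.set(key, current + 1);               // write (time of use): updates are lost
  return true;
}

async function demo(): Promise<number> {
  // 20 concurrent requests against a limit of 5.
  const results = await Promise.all(
    Array.from({ length: 20 }, () => naiveIsAllowed('ip:203.0.113.7', 5))
  );
  return results.filter(Boolean).length;
}

// All 20 reads happen before any write, so every request passes the check.
demo().then((allowed) => console.log(`allowed ${allowed} of 20 (limit 5)`));
```

The same interleaving occurs against MongoDB whenever the counter read and the counter update are two separate operations under concurrent load.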
Additionally, if the application stores rate-limit state in a capped or non-replicated MongoDB collection without an appropriate write concern, network blips or primary step-downs can cause counter updates to be dropped or delayed. Retries from the driver or client-side logic then re-apply the same logical request, and because the limiter did not enforce idempotency keys or request deduplication, the effective rate exceeds the intended threshold. The combination of AdonisJS routing and MongoDB's eventual-consistency characteristics in certain deployment modes can also allow an attacker to exploit timing differences between API-gateway enforcement and database commit visibility, especially when reads are served from secondary nodes in a replica set.
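A minimal sketch of the missing deduplication, assuming retries carry a client-supplied idempotency key (the in-memory `Set` and the function name are illustrative stand-ins; in MongoDB the same effect comes from inserting the key into a collection with a unique index and treating a duplicate-key error as "already counted"):

```typescript
// Hypothetical request-deduplication guard: a retried request that carries
// the same idempotency key is recognized and not charged a second time.
const seenKeys = new Set<string>();

function countsAgainstLimit(idempotencyKey: string): boolean {
  if (seenKeys.has(idempotencyKey)) {
    return false; // retry of an already-counted request: do not double-charge
  }
  seenKeys.add(idempotencyKey);
  return true; // first sighting: charge it against the rate limit
}
```

In production the seen-key store must be durable and shared across processes, e.g. a TTL collection keyed by the idempotency value, or driver retries will still inflate the effective rate.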
Real-world attack patterns mirror OWASP API Security Top 10 API1: Broken Object Level Authorization (BOLA) when rate limiting is tied to user-level permissions and an authenticated subject abuses missing per-resource throttling. For example, an endpoint like POST /api/users/:id/promote may check ownership but not enforce a per-user rate limit, allowing rapid privilege-escalation attempts. Instrumentation gaps make this harder to detect: if logs do not capture MongoDB operation timestamps and driver correlation IDs, defenders cannot reconstruct the sequence of bypass attempts.
To detect such bypasses during a middleBrick scan, the tool checks whether per-endpoint rate limiting is coordinated with data-layer state and whether counter updates are atomic. Findings include missing idempotency handling, non-atomic increments, and inconsistent index usage on rate-limiting keys. Remediation guidance focuses on making limits durable and atomic, and on correlating application telemetry with MongoDB server metrics to ensure that throttling is effective under concurrency and failure conditions.
MongoDB-Specific Remediation in AdonisJS — concrete code fixes
Secure remediation centers on atomic update operators and, where necessary, multi-document transactions that make rate-limiting operations indivisible. In AdonisJS with the MongoDB driver, prefer a single atomic $inc (via findOneAndUpdate or updateOne) over a separate read and write for counters, and enforce uniqueness on compound keys that include the time bucket. This prevents lost updates and ensures that concurrent requests compete safely at the database level.
Atomic counter implementation
The following example shows a robust per-minute rate limiter using a single MongoDB document per bucket key. It uses findOneAndUpdate with upsert to create the bucket if it does not exist, $inc to atomically increment the request count, and returnDocument: 'after' to read the post-increment value in the same round trip, so no concurrent request can slip in between the increment and the check. (Note that Lucid does not speak to MongoDB; the native driver is used directly.)

```ts
import { DateTime } from 'luxon';
import { MongoClient } from 'mongodb';
import type { HttpContextContract } from '@ioc:Adonis/Core/HttpContext';

interface RateLimitBucket {
  _id: string;
  count: number;
  createdAt: Date;
}

const client = new MongoClient(process.env.MONGODB_URI!);
const buckets = client.db('app').collection<RateLimitBucket>('rate_limits');

async function isAllowed(userId: string, limit: number = 100): Promise<boolean> {
  const bucket = `rate_limit:${userId}:${DateTime.local().startOf('minute').toISO()}`;
  // Atomic increment-and-read: each concurrent request observes a distinct
  // post-increment count, closing the read ➜ compute ➜ write race.
  const doc = await buckets.findOneAndUpdate(
    { _id: bucket },
    { $inc: { count: 1 }, $setOnInsert: { createdAt: new Date() } },
    { upsert: true, returnDocument: 'after' }
  );
  return (doc?.count ?? 1) <= limit;
}

export default async function rateLimitMiddleware(
  ctx: HttpContextContract,
  next: () => Promise<void>
) {
  const allowed = await isAllowed(ctx.request.ip(), 30);
  if (!allowed) {
    return ctx.response.status(429).send({ error: 'Too Many Requests' });
  }
  await next();
}
```
export default async function rateLimitMiddleware(ctx) {
const allowed = await isAllowed(ctx.request.ip(), 30);
if (!allowed) {
ctx.response.status(429).send({ error: 'Too Many Requests' });
}
}
Compound keys with TTL for automatic cleanup
Use a compound identifier that includes a time bucket and an optional resource identifier, and create a TTL index so that MongoDB automatically removes expired buckets. This avoids storage bloat and keeps checks efficient.
```ts
import { MongoClient } from 'mongodb';

const client = new MongoClient(process.env.MONGODB_URI!);
await client.connect();
const coll = client
  .db('app')
  .collection<{ _id: string; count: number; createdAt: Date }>('rate_limits');

// Ensure a TTL index on createdAt so MongoDB auto-deletes old buckets.
await coll.createIndex({ createdAt: 1 }, { expireAfterSeconds: 3600 });

async function allowRequest(key: string, limit: number): Promise<boolean> {
  const bucket = `${key}:${Math.floor(Date.now() / 60000)}`;
  // Increment and read back atomically. Setting createdAt only on insert is
  // essential: without the field, the TTL index never expires the bucket.
  const doc = await coll.findOneAndUpdate(
    { _id: bucket },
    { $inc: { count: 1 }, $setOnInsert: { createdAt: new Date() } },
    { upsert: true, returnDocument: 'after' }
  );
  return (doc?.count ?? 1) <= limit;
}
```
Usage in an AdonisJS route handler, responding with 429 rather than 400 when the limit is exceeded:

```ts
import Route from '@ioc:Adonis/Core/Route';

Route.get('/resource', async ({ request, response }) => {
  const key = `ip:${request.ip()}`;
  if (!(await allowRequest(key, 45))) {
    return response.status(429).send({ error: 'Rate limit exceeded' });
  }
  return { data: 'ok' };
});
```
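The fixed-window bucket arithmetic used above is worth isolating into a pure helper (the name is hypothetical) so it can be unit-tested without a database:

```typescript
// Maps (key, timestamp) to a fixed-window bucket id: all requests inside the
// same window share one id, and a new id begins exactly at each window edge.
function bucketId(key: string, timestampMs: number, windowMs: number = 60_000): string {
  return `${key}:${Math.floor(timestampMs / windowMs)}`;
}

// e.g. timestamps 59_999 and 60_000 fall in windows 0 and 1 respectively.
```

Note that fixed windows permit bursts of up to twice the limit straddling a window edge; if that matters, a sliding-window counter over two adjacent buckets tightens the bound.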
Transactions for multi-document consistency
If your rate limit depends on multiple collections or must be strongly consistent, use MongoDB sessions and transactions (these require a replica set or sharded cluster). The native driver used from AdonisJS supports sessions for groups of operations that must appear atomic.
```ts
const session = client.startSession();
try {
  // withTransaction handles commit, abort on error, and retry of transient
  // transaction errors. Run the operations sequentially and pass the session
  // in each operation's options so both increments commit or abort together.
  await session.withTransaction(async () => {
    await coll1.updateOne({ _id: 'key1' }, { $inc: { used: 1 } }, { upsert: true, session });
    await coll2.updateOne({ _id: 'key2' }, { $inc: { used: 1 } }, { upsert: true, session });
  });
} finally {
  await session.endSession();
}
```
Indexing and schema design
Ensure the field used in the query predicate is indexed. When buckets are keyed by _id, lookups are already efficient because _id always carries a unique index; add a separate single-field TTL index on createdAt for cleanup (TTL indexes cannot be compound). Avoid storing rate-limit state in non-indexed fields or in arrays, which degrade performance and increase collision risk under high concurrency.
middleBrick scans surface risks such as non-atomic increments, missing indexes, and lack of idempotency handling. By aligning your AdonisJS implementation with these MongoDB-safe patterns, you reduce the likelihood of rate limiting being bypassed through concurrency or retry paths.
Related CWEs: resource consumption
| CWE ID | Name | Severity |
|---|---|---|
| CWE-400 | Uncontrolled Resource Consumption | HIGH |
| CWE-770 | Allocation of Resources Without Limits or Throttling | MEDIUM |
| CWE-799 | Improper Control of Interaction Frequency | MEDIUM |
| CWE-835 | Loop with Unreachable Exit Condition ('Infinite Loop') | HIGH |
| CWE-1050 | Excessive Platform Resource Consumption within a Loop | MEDIUM |