API Rate Abuse in FeathersJS with OAuth 2.0
API Rate Abuse in FeathersJS with OAuth 2.0 — how this specific combination creates or exposes the vulnerability
Rate abuse in FeathersJS applications that use OAuth 2.0 often centers on how token issuance and protected endpoints are designed and validated. When an API relies on bearer tokens issued via OAuth 2.0 flows (such as the client credentials flow for service-to-service calls or the authorization code flow for user access), attackers can exploit two complementary weaknesses: insufficient rate limiting on token acquisition endpoints and missing or misconfigured rate controls on resource endpoints protected by those tokens.
Consider an OAuth 2.0 setup where the token endpoint issues access tokens with no per-client or per-user rate limits. An attacker can automate credential or client-secret abuse to repeatedly obtain tokens, exhausting the authorization server (a denial-of-service risk) and amplifying abuse against downstream APIs. Even if resource endpoints have some throttling, the tokens themselves become an attack vector: if token validation is inexpensive and untracked, an attacker can flood the API with valid bearer tokens, consuming server-side processing and database connections while staying under per-endpoint thresholds.
FeathersJS does not enforce rate limits by default. If you add OAuth 2.0 via an authentication strategy such as oauth2 or @feathersjs/authentication-oauth without explicit rate controls, both token issuance and API access are left unthrottled. Unauthenticated black-box scanning can then probe token endpoints and protected routes at high volume. Such scans often reveal missing rate constraints on both /oauth/token and service routes, enabling rapid token acquisition and request flooding. This maps to common OAuth 2.0 threat patterns such as token request flooding and credential stuffing, which can degrade availability and bypass intended access controls.
In combined deployments, misaligned controls worsen the risk. For example, an OAuth 2.0-protected service might enforce scope checks but omit request-rate limits, allowing a token with broad scopes to be reused thousands of times per minute. In an unauthenticated scan, this shows up as weak rate limiting on authenticated paths and increases exposure to data exfiltration or business-logic abuse. Because FeathersJS services are often thin wrappers around databases or external integrations, high request volumes can overload backends even when every token is valid.
To detect these issues, run a black-box scan against the OAuth 2.0 flow and protected endpoints. Configure probes against token acquisition paths and representative resource routes to observe whether rate limiting is applied consistently across authentication boundaries. Effective detection highlights missing or inconsistent throttling and helps prioritize fixes that align token issuance controls with API endpoint protections.
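As a concrete starting point, a burst probe like the following can reveal whether a token endpoint ever returns HTTP 429. This is a sketch: the endpoint URL, grant type, and credentials in the usage comment are placeholder assumptions, and the global fetch requires Node 18+.

```javascript
// Minimal black-box probe: send a burst of token requests and report whether
// any were throttled. Run only against environments you are authorized to test.
async function probeTokenEndpoint(url, body, attempts = 30) {
  const statuses = [];
  for (let i = 0; i < attempts; i += 1) {
    const res = await fetch(url, {
      method: 'POST',
      headers: { 'Content-Type': 'application/x-www-form-urlencoded' },
      body: new URLSearchParams(body)
    });
    statuses.push(res.status);
  }
  return summarizeStatuses(statuses);
}

// Pure helper: classify a burst of observed status codes.
function summarizeStatuses(statuses) {
  const throttled = statuses.filter((s) => s === 429).length;
  return { total: statuses.length, throttled, rateLimited: throttled > 0 };
}

// Example (placeholder endpoint and credentials):
// probeTokenEndpoint('https://api.example.test/oauth/token',
//   { grant_type: 'client_credentials', client_id: 'test', client_secret: 'test' })
//   .then(console.log);
```

If `rateLimited` stays false across a large burst, the token endpoint is a likely candidate for the throttling fixes below.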
OAuth 2.0-Specific Remediation in FeathersJS — concrete code fixes
Remediation centers on adding explicit rate limits at two layers: the token endpoint and the FeathersJS service layer. Use a shared store such as Redis to coordinate limits across instances and keep counts consistent. Below are focused examples that integrate with FeathersJS and OAuth 2.0 workflows.
Rate limiting token acquisition
Apply a rate limiter to the OAuth 2.0 token route to restrict how frequently a client can request tokens. This example uses express-rate-limit in front of the Feathers app, scoped to the token path.
const rateLimit = require('express-rate-limit');

// Limit token requests to 10 per minute per client (use a more specific key in production).
// Note: body-parsing middleware (e.g. express.json / express.urlencoded) must run
// before this limiter, otherwise req.body is undefined.
const tokenLimiter = rateLimit({
  windowMs: 60 * 1000,
  max: 10,
  standardHeaders: true,
  legacyHeaders: false,
  keyGenerator: (req) => {
    // Prefer client_id when present in the body for per-client limits
    return (req.body && req.body.client_id) || req.ip;
  }
});

app.use('/oauth/token', tokenLimiter);
For per-client limits using client_id (recommended), ensure your OAuth 2.0 provider passes client_id in the request body and the limiter uses it as shown. This prevents a single client from exhausting tokens while allowing normal multi-client operation.
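The key derivation can be factored into a small function so it is unit-testable on its own. The `client:`/`ip:` prefixes here are a convention for readable Redis keys, not a requirement, and the request shape assumed is the standard Express request:

```javascript
// Derive the rate-limit bucket key for a token request.
// Prefers the OAuth client_id when present; falls back to the client IP.
function tokenLimiterKey(req) {
  const clientId = req.body && req.body.client_id;
  return clientId ? `client:${clientId}` : `ip:${req.ip}`;
}
```

Pass this function as the `keyGenerator` option instead of the inline arrow shown above if you want to test the keying logic separately.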
Rate limiting protected Feathers services
After authentication, apply service-level rate limits. Below is a Feathers service hook that enforces limits using a Redis-backed algorithm to keep counts consistent across nodes.
const { RateLimiterRedis } = require('rate-limiter-flexible');
const { createClient } = require('redis');

const redisClient = createClient({ url: process.env.REDIS_URL });
// node-redis v4 clients must be connected before handling requests
redisClient.connect().catch(console.error);

const rateLimiter = new RateLimiterRedis({
  storeClient: redisClient,
  useRedisPackage: true, // required for node-redis v4+ (omit when using ioredis)
  points: 60,  // 60 requests
  duration: 60 // per 60 seconds
});
const { TooManyRequests } = require('@feathersjs/errors');

// Register the limiter as a before hook on the service; passing a hook object
// to app.use() would not work, since hooks are registered via .hooks().
app.service('messages').hooks({
  before: {
    all: [
      async (context) => {
        // Key by user id when authenticated, fall back to the caller's IP.
        // context.params.ip is assumed to be set by transport middleware
        // (e.g. req.feathers.ip = req.ip for the Express transport).
        const key = context.params.user
          ? `user:${context.params.user.id}`
          : `ip:${context.params.ip || 'unknown'}`;
        try {
          await rateLimiter.consume(key);
        } catch (rej) {
          throw new TooManyRequests('Rate limit exceeded');
        }
        return context;
      }
    ]
  }
});
This hook runs before service methods and rejects requests that exceed the configured budget. By tying the key to authenticated user IDs, you enforce per-user caps even when a single token is shared across sessions.
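To see the budget semantics concretely without a Redis dependency, here is a minimal fixed-window counter implementing the same allow-then-reject behavior per key. This illustrates the algorithm only; it is not a replacement for the shared-store limiter, since in-process counters are not shared across nodes:

```javascript
// Fixed-window rate limiting: allow `points` requests per `duration` seconds
// for each key, then reject until the window rolls over.
function createFixedWindowLimiter({ points, duration }) {
  const windows = new Map(); // key -> { start, count }
  return function consume(key, now = Date.now()) {
    const win = windows.get(key);
    if (!win || now - win.start >= duration * 1000) {
      windows.set(key, { start: now, count: 1 }); // fresh window
      return true;
    }
    if (win.count < points) {
      win.count += 1;
      return true;
    }
    return false; // over budget in this window
  };
}
```

Fixed windows admit short bursts at window boundaries; rate-limiter-flexible mitigates this with more precise expiry handling, which is one reason to prefer the library over hand-rolled counters in production.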
OAuth 2.0 setup with short-lived tokens and limited scopes
Ensure your OAuth 2.0 setup issues tokens with minimal scopes and short lifetimes. In FeathersJS v4+, OAuth logins resolve to JWT access tokens issued by the authentication service, so token lifetime is controlled through the authentication configuration; note that @feathersjs/authentication-oauth does not enforce scopes itself, so scope checks belong in service hooks. A sketch of the service wiring:
const { AuthenticationService, JWTStrategy } = require('@feathersjs/authentication');
const { expressOauth } = require('@feathersjs/authentication-oauth');

const authService = new AuthenticationService(app);
authService.register('jwt', new JWTStrategy());
app.use('/authentication', authService);
app.configure(expressOauth());
Then keep access tokens short-lived in the authentication configuration (for example config/default.json):
"authentication": {
  "entity": "user",
  "service": "users",
  "authStrategies": ["jwt"],
  "jwtOptions": { "expiresIn": "15m" },
  "oauth": { /* provider-specific settings */ }
}
With short-lived access tokens and required scopes, even if a token is used beyond intended rate limits, the impact is bounded by lifetime and scope. Combine this with the rate limiters above to tightly couple token acquisition controls with resource protections.
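Scope enforcement itself can be expressed as a plain before hook, which keeps the check testable in isolation. The `requiredScopes` mapping and the `scopes` array on the user record are assumptions about your token model; in a real Feathers app you would throw `Forbidden` from @feathersjs/errors rather than a plain Error:

```javascript
// Required scopes per service path (hypothetical mapping; adjust to your services).
const requiredScopes = {
  messages: ['read:messages'],
  admin: ['admin:read', 'admin:write']
};

// Before hook: reject calls whose authenticated user lacks a required scope.
function checkScopes(context) {
  const needed = requiredScopes[context.path] || [];
  const granted = (context.params.user && context.params.user.scopes) || [];
  const missing = needed.filter((s) => !granted.includes(s));
  if (missing.length > 0) {
    // In a Feathers app, prefer: throw new Forbidden(...)
    throw new Error(`Missing required scopes: ${missing.join(', ')}`);
  }
  return context;
}
```

Register it alongside the rate-limiting hook (for example in `before.all`) so that scope and rate checks run on every method of a protected service.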