API Rate Abuse in Hanami with OpenID Connect
API Rate Abuse in Hanami with OpenID Connect — how this combination creates or exposes the vulnerability
Rate abuse in a Hanami application that uses OpenID Connect (OIDC) for authentication can occur when rate limiting is applied only after the caller's identity is established, or when limits are scoped per identity rather than per client or per token. Hanami routes requests through a stack that typically authenticates via OIDC tokens issued by an authorization server such as Keycloak or Auth0. If rate limiting is implemented naively, for example by counting requests per user ID extracted from a valid ID token, an attacker who acquires a valid token can still saturate backend endpoints. Conversely, if limits are applied only after OIDC validation, unauthenticated attackers can probe the public surface without effective throttling. The vulnerability is exposed when rate controls are misaligned with the authentication boundary that OIDC introduces: a validated token carries trusted claims such as sub, client_id, and scopes, but the rate limiter may ignore the token's issuer, audience, or revocation state when building its keys.
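The layering described above can be sketched as two limiters with different keys: a coarse pre-authentication limit keyed by client IP, and a finer post-authentication limit keyed by the validated token's OIDC context. The sketch below uses in-memory fixed-window counters purely for illustration (a production deployment would use a shared store such as Redis), and the class and method names are hypothetical, not part of Hanami or any OIDC library:

```ruby
# Two-layer rate limiting: a pre-auth limit per IP, a post-auth limit per
# validated token context. In-memory counters stand in for Redis.
class LayeredLimiter
  def initialize(ip_limit: 300, token_limit: 100, window: 60)
    @ip_limit = ip_limit
    @token_limit = token_limit
    @window = window
    # key => [count, window_start_epoch_seconds]
    @buckets = Hash.new { |h, k| h[k] = [0, Time.now.to_i] }
  end

  # Layer 1: runs before any OIDC validation, throttles anonymous probing
  def allow_ip?(ip)
    bump("ip:#{ip}", @ip_limit)
  end

  # Layer 2: runs after token validation, keyed by issuer + client + audience
  def allow_token?(claims)
    bump("tok:#{claims['iss']}:#{claims['client_id']}:#{claims['aud']}", @token_limit)
  end

  private

  def bump(key, limit)
    count, started = @buckets[key]
    now = Time.now.to_i
    count, started = 0, now if now - started >= @window # fixed window expired
    @buckets[key] = [count + 1, started]
    count + 1 <= limit
  end
end
```

In a Rack stack, allow_ip? would run in middleware ahead of token validation, while allow_token? runs after claims are attached to the request, so both the unauthenticated and authenticated surfaces are covered.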
Consider a Hanami endpoint that relies on an OIDC bearer token to identify a user. A typical pattern is to verify the token and then use the sub claim for authorization and rate tracking. If the rate limiter increments a counter per sub, an attacker holding a single valid token can exhaust the quota for that user, denying service to legitimate clients that share the same identity. Moreover, if token introspection or JWKS validation is performed on every request, that validation cost becomes part of the attack surface: each unthrottled request costs the server a signature check or a round trip to the authorization server, which degrades the effectiveness of rate limiting under high concurrency. Token replay and credential stuffing amplify the problem when tokens are leaked or predictable. A real-world analog is keying rate limits on the OAuth client_id alone: once the client secret is compromised, a distributed attacker can mint fresh tokens at will, and the per-client cap ends up throttling legitimate traffic alongside the attack instead of isolating it.
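The per-sub anti-pattern can be demonstrated in a few lines. In this sketch, an in-memory Hash stands in for the counter store, and the victim and attacker claims are invented for illustration: two tokens for the same sub share one budget under per-sub keying, while a composite key isolates them.

```ruby
# Compare two rate-limit keying strategies against the same pair of tokens.
counters = Hash.new(0)
limit = 5

per_sub_key   = ->(claims) { "rl:#{claims['sub']}" }
composite_key = ->(claims) { "rl:#{claims['iss']}:#{claims['client_id']}:#{claims['aud']}" }

allow = lambda do |claims, keyer|
  key = keyer.call(claims)
  counters[key] += 1
  counters[key] <= limit
end

# Same sub, different clients: one legitimate, one attacker-held (hypothetical)
victim   = { 'sub' => 'u1', 'iss' => 'idp', 'client_id' => 'mobile-app', 'aud' => 'api' }
attacker = { 'sub' => 'u1', 'iss' => 'idp', 'client_id' => 'leaked-cli',  'aud' => 'api' }

# The attacker burns the entire per-sub budget...
5.times { allow.call(attacker, per_sub_key) }
allow.call(victim, per_sub_key)    # => false: the victim is locked out too
# ...but under composite keying, the victim's client keeps its own budget
allow.call(victim, composite_key)  # => true
```

The composite key changes the blast radius of a leaked token from "everyone with this sub" to "this issuer/client/audience combination", which is exactly the boundary the remediation section below relies on.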
To detect this with middleBrick, you can scan the public endpoints of a Hanami service that uses OIDC, even without credentials. The scan's Authentication and Rate Limiting checks run in parallel and correlate findings across all 12 security checks, including Input Validation and Data Exposure. Because middleBrick maps findings to established references such as the OWASP API Security Top 10 and to compliance frameworks such as PCI-DSS and SOC 2, it helps highlight misconfigurations where rate limits do not account for OIDC token boundaries. The unauthenticated scan surfaces whether rate limiting is present, whether it accounts for authenticated contexts, and whether risky patterns such as token reuse or over-privileged scopes are detectable from the API surface.
OpenID Connect-Specific Remediation in Hanami — concrete code fixes
Remediation for rate abuse in Hanami with OpenID Connect centers on aligning rate-limiting keys with the security boundary established by OIDC, and on ensuring token validation happens before any rate counters or other state are updated. Scope rate limits by a combination of token issuer (iss), client_id, and, where appropriate, audience (aud), rather than relying solely on the user identifier (sub). This prevents a single compromised token from consuming the budget shared by every user behind the same identity. Hanami applications typically use a middleware or an endpoint-level guard to validate JWTs with a library such as ruby-jwt, verifying signatures against the issuer's JWKS. Below is an example of how to integrate OIDC validation and rate limiting in a Hanami action.
First, ensure your Hanami app verifies the OIDC token and extracts claims safely:
```ruby
require 'jwt'

class ApiController < Hanami::Action
  def handle(request, response)
    token = request.env['HTTP_AUTHORIZATION']&.sub('Bearer ', '')
    halt 401, { error: 'missing_token' }.to_json unless token

    begin
      # In practice, fetch the JWKS from your OIDC issuer, cache it, and pass
      # the matching public key here; `issuer_public_key` is a stand-in helper.
      claims = JWT.decode(
        token,
        issuer_public_key,
        true, # verify the signature; never pass false in production
        algorithm: 'RS256',
        iss: 'https://your-oidc-issuer/',
        verify_iss: true,
        aud: 'your-api-audience',
        verify_aud: true
      ).first
      # Expose validated claims to downstream middleware (e.g. the rate limiter)
      request.env['OIDC_CLAIMS'] = claims
    rescue JWT::DecodeError
      halt 401, { error: 'invalid_token' }.to_json
    end
  end

  private

  def current_user(claims)
    UserRepository.find_by_oid_sub(claims['sub'])
  end
end
```
Next, implement rate limiting that uses a composite key derived from the validated token. For example, using a Redis-backed middleware in Hanami, you can combine the issuer (iss), client_id, and audience (aud) to create a rate key that reflects the OIDC context:
```ruby
require 'redis'
require 'json'

class RateLimitMiddleware
  LIMIT  = 100 # per-token-context requests per window
  WINDOW = 60  # seconds

  def initialize(app)
    @app = app
    @redis = Redis.new
  end

  def call(env)
    claims = env['OIDC_CLAIMS']
    if claims
      # Composite key: issuer + client + audience, not just the user's sub
      key = "rate_limit:#{claims['iss']}:#{claims['client_id']}:#{claims['aud']}"
      current = @redis.incr(key)
      @redis.expire(key, WINDOW) if current == 1
      if current > LIMIT
        return [429, { 'Content-Type' => 'application/json' },
                [{ error: 'rate_limit_exceeded' }.to_json]]
      end
    end
    @app.call(env)
  end
end

# Insert this middleware into your Hanami stack, e.g. in config/application.rb:
#   middleware.use RateLimitMiddleware
# You can inspect the resulting stack with `bundle exec hanami middleware`.
```
For production, align the Redis key TTL with your token expiry and consider sliding windows to mitigate burst abuse. You can also tie rate limits to scopes, so tokens with limited scopes are assigned lower thresholds. middleBrick's Pro plan supports continuous monitoring and CI/CD integration, which can alert you if rate-limit configurations drift or if unauthenticated endpoints expose high-risk paths. Its GitHub Action can fail builds when risk scores degrade, helping you enforce OIDC-aware rate controls before deployment.
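The sliding-window and scope-tiered refinements can be sketched together. In this illustration, in-memory timestamp lists stand in for a Redis sorted set (in Redis you would combine ZADD with ZREMRANGEBYSCORE), and the scope-to-threshold mapping is an invented example, not a recommendation for specific numbers:

```ruby
# Sliding-window limiter with per-scope thresholds, keyed by OIDC context.
class SlidingWindowLimiter
  # Hypothetical scope tiers; tune these to your own API's budgets
  SCOPE_LIMITS = { 'admin' => 200, 'write' => 100, 'read' => 50 }.freeze
  DEFAULT_LIMIT = 20

  def initialize(window: 60)
    @window = window                      # seconds
    @hits = Hash.new { |h, k| h[k] = [] } # key => array of hit timestamps
  end

  def allow?(claims, now: Time.now.to_f)
    key = "#{claims['iss']}:#{claims['client_id']}:#{claims['aud']}"
    # Drop hits that have slid out of the window
    @hits[key].reject! { |t| t <= now - @window }
    limit = limit_for(claims['scope'].to_s.split)
    return false if @hits[key].size >= limit
    @hits[key] << now
    true
  end

  private

  # The broadest scope on the token determines its budget
  def limit_for(scopes)
    SCOPE_LIMITS.values_at(*scopes).compact.max || DEFAULT_LIMIT
  end
end
```

Unlike the fixed window above, a request made just before the window boundary still counts against the next few seconds, so an attacker cannot double their effective rate by bursting at the edges of consecutive windows.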