Bleichenbacher Attack in Hanami with Bearer Tokens
Bleichenbacher Attack in Hanami with Bearer Tokens — how this specific combination creates or exposes the vulnerability
A Bleichenbacher attack is an adaptive chosen-ciphertext attack against RSA PKCS#1 v1.5 padding: a server that reveals whether a decrypted ciphertext has valid padding acts as an oracle, letting an attacker recover plaintext or forge signatures without the private key. In Hanami, this pattern can manifest around Bearer token handling when token validation produces distinguishable error responses depending on padding correctness. If an endpoint that accepts a Bearer token returns different HTTP status codes or response bodies for malformed tokens than for tokens with valid padding but failed integrity checks, an attacker can iteratively adapt chosen ciphertexts to decrypt or forge tokens without knowing the secret key.
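A minimal sketch of the oracle mechanism itself, using Ruby's stdlib OpenSSL bindings (the key, lambda, and message names are illustrative, not taken from any Hanami API): PKCS#1 v1.5 decryption either succeeds or raises, and any server that relays that distinction answers the one yes/no question the attack needs.

```ruby
require "openssl"

# Illustrative key; in a real attack only the server holds the private key
# and the attacker learns each bit from the server's responses.
key        = OpenSSL::PKey::RSA.generate(2048)
ciphertext = key.public_encrypt("session-key", OpenSSL::PKey::RSA::PKCS1_PADDING)

# The "oracle": a boolean answer to "did the padding check pass?" leaks one
# bit per chosen ciphertext; a safe server makes both outcomes identical.
pkcs1_valid = lambda do |c|
  key.private_decrypt(c, OpenSSL::PKey::RSA::PKCS1_PADDING)
  true
rescue OpenSSL::PKey::RSAError
  false
end

pkcs1_valid.call(ciphertext) # => true for the genuine ciphertext
```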
Consider a Hanami app that uses JWTs as Bearer tokens, where decryption and signature verification happen without normalizing padding errors before the response is built. The padding-oracle risk is concentrated in encrypted tokens (JWE with the RSA1_5 key-encryption algorithm is the textbook Bleichenbacher target), but any validation path whose failures are distinguishable is a problem. An attacker who intercepts or observes a 401 with a specific error message for certain tokens can send modified tokens to the same endpoint and watch for timing differences or error-message variations. Over many requests, the Bleichenbacher adaptive chosen-ciphertext method can recover a token's plaintext, or even compute a private-key operation on an attacker-chosen value, yielding a forged token with elevated claims. This is especially relevant when tokens are validated using libraries that expose padding errors and the application enforces neither constant-time verification nor consistent error handling.
In the context of the 12 security checks run by middleBrick, this scenario falls under Input Validation and Authentication. A scan against a Hanami endpoint that accepts Bearer tokens may surface findings where error handling leaks cryptographic validity, where rate limiting that would throttle adaptive attempts is missing, or where secure token comparison practices are absent. Even without credentials, middleBrick’s unauthenticated scan can detect behavioral differences that indicate a potential padding oracle by analyzing response consistency and timing across crafted requests.
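As a self-contained illustration of that consistency analysis (the endpoint below is a stub standing in for a scanned route, not middleBrick’s actual implementation), a probe can flag a suspected oracle whenever mutated tokens fail to produce identical responses:

```ruby
require "json"

# Stub for a scanned endpoint: structural errors get a different status and
# body than cryptographic failures -- the inconsistency a scanner looks for.
def leaky_endpoint(token)
  return [400, { error: "token_malformed" }.to_json] unless token.count(".") == 2
  [401, { error: "invalid_token" }.to_json]
end

probes    = ["a.b.c", "a.b", "x.y.z", "not-a-jwt"]
responses = probes.map { |t| leaky_endpoint(t) }

# More than one distinct (status, body) pair suggests a potential oracle.
oracle_suspected = responses.uniq.size > 1
```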
Example of a Bearer token usage in Hanami that can be vulnerable if padding errors are not handled uniformly:
# config/routes.rb
module MyApp
  class Routes < Hanami::Routes
    post "/api/transfer", to: "transfers.create"
  end
end
# app/actions/transfers/create.rb
require "jwt"
module MyApp
  module Actions
    module Transfers
      class Create < MyApp::Action
        def handle(request, response)
          # to_s guards against a missing Authorization header
          token = request.env["HTTP_AUTHORIZATION"].to_s.sub(/\ABearer /, "")
          payload, _header = JWT.decode(token, ENV["JWT_SECRET"], true, { algorithm: "HS256" })
          # Process transfer using payload["account"], payload["amount"]
          response.format = :json
          response.body = { status: "ok" }.to_json
        rescue JWT::ExpiredSignature, JWT::VerificationError
          halt 401, { error: "invalid_token" }.to_json
        rescue JWT::DecodeError => e
          # Distinct status and error details in this branch are exactly the
          # kind of signal that feeds a Bleichenbacher-style oracle
          halt 400, { error: "token_malformed", details: e.message }.to_json
        end
      end
    end
  end
end
In the example, a decode error from malformed Base64 or padding is surfaced with a 400 status and error details that differ from the 401 returned for a verification failure. An attacker can use this to distinguish malformed ciphertext from well-formed ciphertext that fails cryptographic checks, which is exactly the signal an adaptive Bleichenbacher attack needs. middleBrick’s checks for Input Validation and Authentication aim to surface such inconsistent error handling and recommend uniform responses and constant-time verification patterns.
Bearer Token-Specific Remediation in Hanami — concrete code fixes
Remediation focuses on ensuring that token validation does not leak cryptographic validity through timing or error messages. Implement constant-time comparison where applicable and ensure that all token-related failures result in the same generic response and status code. Avoid branching logic that exposes details about why a token is rejected.
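Where the application compares secrets directly (an opaque token against a stored value, for example), Ruby's stdlib offers a constant-time primitive; the helper name below is illustrative:

```ruby
require "openssl"

# Constant-time equality for secrets (Ruby >= 2.7). The length check runs
# first because fixed_length_secure_compare requires equal-length inputs;
# the lengths of well-formed tokens are not themselves secret.
def tokens_match?(a, b)
  a.bytesize == b.bytesize && OpenSSL.fixed_length_secure_compare(a, b)
end
```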
First, normalize error handling for all token-related failures. Return the same HTTP status and body regardless of whether the issue is expired signature, invalid signature, or malformed token.
# app/actions/transfers/create.rb
require "jwt"
module MyApp
  module Actions
    module Transfers
      class Create < MyApp::Action
        def handle(request, response)
          auth = request.env["HTTP_AUTHORIZATION"].to_s
          halt 401, { error: "invalid_token" }.to_json unless auth.start_with?("Bearer ")
          token = auth.sub(/\ABearer /, "")
          payload = JWT.decode(token, ENV["JWT_SECRET"], true, { algorithm: "HS256" }).first
          # Process transfer using payload["account"], payload["amount"]
          response.format = :json
          response.body = { status: "ok" }.to_json
        rescue JWT::DecodeError
          # JWT::DecodeError is the parent of ExpiredSignature and
          # VerificationError, so one rescue yields the same generic
          # response for every token validation failure
          halt 401, { error: "invalid_token" }.to_json
        end
      end
    end
  end
end
Second, prefer libraries and algorithms that are not vulnerable to padding oracles. For HMAC-based tokens like HS256, ensure your JWT library uses constant-time verification internally. For asymmetric tokens like RS256 or ES256, use libraries that implement safe unpadding and signature checks. Avoid custom crypto or low-level operations that expose timing differences.
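To illustrate why signature verification (as opposed to decryption with padding) gives an attacker nothing to measure, here is a sketch with Ruby's OpenSSL bindings; valid and invalid signatures take the same code path and return a plain boolean:

```ruby
require "openssl"

key  = OpenSSL::PKey::RSA.generate(2048)
data = "header.payload" # stand-in for a JWT signing input

signature = key.sign("SHA256", data)

# verify returns true/false rather than raising, so bad signatures and
# tampered data produce no distinguishable error branch to leak.
key.public_key.verify("SHA256", signature, data)       # => true
key.public_key.verify("SHA256", signature, data + "x") # => false
```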
Third, add rate limiting to mitigate adaptive attacks. Even if errors are uniform, an attacker should not be able to submit many crafted tokens in a short period.
# lib/middleware/rate_limit.rb
require "json"
# Minimal fixed-window limiter as Rack middleware. The in-memory store is
# per-process only; production deployments typically use Rack::Attack with a
# shared store such as Redis.
class RateLimit
  LIMIT  = 30  # crafted requests allowed per window
  WINDOW = 60  # seconds

  def initialize(app)
    @app = app
    @hits = Hash.new { |h, k| h[k] = [] }
  end

  def call(env)
    now  = Time.now.to_i
    hits = @hits[env["REMOTE_ADDR"]]
    hits.reject! { |t| t <= now - WINDOW }
    if hits.size >= LIMIT
      return [429, { "content-type" => "application/json" },
              [{ error: "too_many_requests" }.to_json]]
    end
    hits << now
    @app.call(env)
  end
end
# config/app.rb — mount it in front of the router:
#   config.middleware.use RateLimit
Finally, consider using opaque reference tokens where possible and validate them via a secure introspection endpoint rather than local cryptographic validation that may expose subtle timing differences. This shifts the security burden to a dedicated auth service and reduces the attack surface in Hanami applications.
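A sketch of the opaque-token approach (the store and helper names are illustrative; a production system would keep the digests in a database and consult the auth service's introspection endpoint):

```ruby
require "securerandom"
require "digest"

# Only a digest of the random token is stored server-side; validation is a
# hash lookup, so there is no padding or signature arithmetic to leak.
TOKEN_STORE = {}

def issue_token(user_id)
  token = SecureRandom.urlsafe_base64(32)
  TOKEN_STORE[Digest::SHA256.hexdigest(token)] = user_id
  token
end

def introspect(token)
  TOKEN_STORE[Digest::SHA256.hexdigest(token)] # nil => invalid or revoked
end
```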
middleBrick’s scans can validate whether your endpoints exhibit consistent error handling and whether rate limiting is present, helping you confirm that remediation is effective.