Severity: HIGH

API Rate Abuse in Hanami

How API Rate Abuse Manifests in Hanami

API rate abuse in Hanami applications typically occurs through unprotected endpoints that accept unlimited request volume, enabling attackers to exhaust server resources, enumerate data, or trigger denial-of-service conditions. In Hanami's architecture, this vulnerability manifests in several specific ways.

Unprotected actions represent the most common attack vector. When developers forget to apply rate-limiting middleware or hooks to critical endpoints, attackers can send thousands of requests per second. For example, a public API action that serves user data without any throttling:

# app/actions/users/show.rb
module API::Actions::Users
  class Show < API::Action
    include Deps["repositories.user_repo"]

    def handle(request, response)
      user = user_repo.find(request.params[:id])
      response.format = :json
      response.body = user.to_json
    end
  end
end

Without rate limiting, this endpoint can be hammered continuously, potentially exposing user data through enumeration attacks or overwhelming the database.

Batch processing endpoints are particularly vulnerable in Hanami applications. When APIs accept bulk operations without proper rate controls, attackers can submit massive payloads that consume disproportionate resources. Consider this Hanami action handling bulk user creation:

# app/actions/users/bulk_create.rb
module API::Actions::Users
  class BulkCreate < API::Action
    include Deps["repositories.user_repo"]

    def handle(request, response)
      users = Array(request.params[:users]).map { |attrs| user_repo.create(attrs) }
      response.format = :json
      response.body = users.to_json
    end
  end
end

This endpoint has no protection against submitting hundreds or thousands of user records in a single request, nor does it limit how frequently such requests can be made.

Asymmetric protection is especially dangerous in Hanami. When rate limiting is applied only to authenticated endpoints but not to public ones, attackers can exploit the unprotected paths. A common pattern is throttling user account endpoints while leaving public data APIs wide open:

# Protected action (authenticated and rate limited)
module API::Actions::Account
  class Show < API::Action
    before :authenticate!
    before :rate_limit! # hypothetical rate-limiting hook, properly configured
  end
end

# Unprotected action: no authentication, no rate limiting
module API::Actions::PublicData
  class Index < API::Action
  end
end

This creates an asymmetric security posture where attackers can freely abuse public endpoints while authenticated paths remain protected.

Hanami's slice-based architecture can inadvertently create rate abuse opportunities when slices are deployed independently without consistent security policies. A development slice might lack the rate limiting middleware present in production slices, creating security gaps during development or staging deployments.

Hanami-Specific Detection

Detecting API rate abuse in Hanami applications requires both manual code review and automated scanning approaches. The most effective detection combines runtime monitoring with static analysis of your Hanami codebase.

Manual detection starts with examining your action classes for missing rate limiting. In Hanami, look for before hooks that apply throttling. A properly protected endpoint will have explicit rate limiting configuration:

module API::Actions::Products
  class Index < API::Action
    include Deps["repositories.product_repo"]

    before :rate_limit! # hypothetical hook enforcing e.g. 100 requests per hour

    def handle(_request, response)
      response.format = :json
      response.body = product_repo.all.to_json
    end
  end
end

Missing or inconsistent rate limiting across similar endpoints indicates potential abuse vectors. Pay special attention to endpoints handling sensitive operations like user enumeration, data export, or batch processing.
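
As a rough first pass, a short script can flag action files that never reference a throttling hook. This is a heuristic sketch only; the `rate_limit` pattern and the directory you scan are assumptions about your codebase, not Hanami conventions:

```ruby
# audit_rate_limits.rb -- heuristic audit: list Ruby files under `dir` that
# never mention rate limiting. Adjust the pattern to whatever hook, helper,
# or middleware name your application actually uses.
require "pathname"

def unprotected_actions(dir, pattern: /rate_limit/)
  Pathname.glob(File.join(dir, "**", "*.rb"))
          .reject { |path| path.read.match?(pattern) }
          .map(&:to_s)
          .sort
end
```

Running it against `app/actions` produces a worklist for manual review. Absence of the pattern does not prove an endpoint is unprotected (limits may be applied in middleware), so treat hits as candidates, not findings.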

middleBrick's black-box scanning approach is particularly effective for Hanami applications because it tests the actual runtime behavior without requiring source code access. The scanner sends controlled request bursts to your endpoints and analyzes response patterns, timing, and error codes to identify rate abuse vulnerabilities.

Key detection patterns middleBrick identifies in Hanami applications include:

  • Endpoints that accept unlimited requests without 429 Too Many Requests responses
  • Batch endpoints that process large payloads without size or rate constraints
  • Public APIs that lack any form of request throttling
  • Endpoints with inconsistent rate limiting across similar functionality
  • Missing rate limiting on authentication endpoints (password reset, login attempts)
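
The first of these patterns can be approximated offline: given the status codes observed during a controlled burst, an endpoint that never answers 429 is a candidate finding. A minimal sketch (the burst-size threshold is illustrative, not middleBrick's actual heuristic):

```ruby
# Classify an endpoint's throttling posture from the HTTP status codes it
# returned during a controlled burst of requests.
def rate_limit_posture(status_codes)
  return :inconclusive if status_codes.size < 20 # burst too small to judge

  if status_codes.include?(429)
    :throttled
  elsif status_codes.count { |c| c >= 500 } > status_codes.size / 2
    :exhausted # server degrading instead of rejecting -- worse than no 429
  else
    :unthrottled
  end
end
```

An `:exhausted` result is the most urgent signal: the endpoint is not merely unthrottled, it is already falling over under a modest burst.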

middleBrick's LLM/AI security scanning also detects emerging rate abuse patterns specific to AI-powered Hanami endpoints, such as excessive token consumption or repeated prompt injection attempts that could indicate automated abuse.

Runtime monitoring complements static analysis. Using Hanami's middleware stack, you can implement request counting and alerting for suspicious patterns:

require "json"

# Fixed-window Rack middleware: counts requests per client and path, resetting
# the counter when the window elapses. Per-process only; use a shared store
# such as Redis when running multiple processes behind a load balancer.
class RateLimitMiddleware
  def initialize(app, calls: 100, period: 3600)
    @app = app
    @calls = calls
    @period = period
    @counters = Hash.new { |h, k| h[k] = { count: 0, window_start: nil } }
    @mutex = Mutex.new
  end

  def call(env)
    count = @mutex.synchronize do
      entry = @counters[rate_limit_key(env)]
      now = Time.now.to_f
      if entry[:window_start].nil? || now - entry[:window_start] >= @period
        entry[:count] = 0
        entry[:window_start] = now
      end
      entry[:count] += 1
    end

    if count > @calls
      [429, { 'Content-Type' => 'application/json' }, [{ error: 'Rate limit exceeded' }.to_json]]
    else
      @app.call(env)
    end
  end

  private

  def rate_limit_key(env)
    "#{env['REMOTE_ADDR']}:#{env['PATH_INFO']}"
  end
end

This middleware can be added to your Hanami application's slice configuration to provide baseline protection while you identify specific rate abuse vulnerabilities.

Hanami-Specific Remediation

Remediating API rate abuse in Hanami applications requires implementing proper rate limiting at both the application and infrastructure levels. Hanami's modular architecture provides several approaches for adding robust rate limiting.

The most straightforward remediation is using Hanami's built-in middleware capabilities. Create a reusable rate limiting module that can be applied across your application:

# Shared mixin for Hanami actions: fixed-window limit keyed on client IP and
# action class. Assumes a "redis" client is registered in the app container.
module RateLimiting
  def self.included(action)
    action.include Deps["redis"]
    action.before :apply_rate_limit
  end

  private

  def apply_rate_limit(request, _response)
    key = "rate_limit:#{request.ip}:#{self.class.name}"
    count = redis.incr(key)
    redis.expire(key, 3600) if count == 1

    halt 429, { error: 'Rate limit exceeded' }.to_json if count > rate_limit_threshold
  end

  def rate_limit_threshold
    100 # Default to 100 requests per hour
  end
end

Apply this module to your base action class so every API action inherits the protection:

# app/action.rb -- base class for all API actions
module API
  class Action < Hanami::Action
    include RateLimiting

    private

    # Override in specific action classes for more restrictive limits
    def rate_limit_threshold
      50 # More restrictive for sensitive endpoints
    end
  end
end

For Hanami applications using Redis (common in production), you can implement sliding window rate limiting:

require "securerandom"

class SlidingWindowRateLimiter
  def initialize(redis, key, limit, window)
    @redis = redis
    @key = key
    @limit = limit
    @window = window
  end

  def allowed?
    now = Time.now.to_f
    member = "#{now}-#{SecureRandom.uuid}"

    # Remove entries that have aged out of the window
    @redis.zremrangebyscore(@key, '-inf', now - @window)

    # Record this request and cap the key's lifetime
    @redis.zadd(@key, now, member)
    @redis.expire(@key, @window.to_i + 1)

    if @redis.zcard(@key) > @limit
      # Over the limit: remove only this request's entry, preserving the
      # window history. (Deleting the whole set here would reset the
      # limiter after every rejection, letting bursts straight through.)
      @redis.zrem(@key, member)
      false
    else
      true
    end
  end
end
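
The windowing behavior is easier to reason about in a single process. This sketch mirrors the same sliding-window algorithm with a plain array standing in for the Redis sorted set; it is for illustration only (no persistence, not thread-safe):

```ruby
# Single-process sliding-window limiter: an array of request timestamps plays
# the role of the Redis sorted set, trimmed on every check.
class InMemorySlidingWindow
  def initialize(limit:, window:)
    @limit = limit
    @window = window
    @timestamps = []
  end

  def allowed?(now = Time.now.to_f)
    cutoff = now - @window
    @timestamps.reject! { |t| t <= cutoff } # drop entries outside the window
    return false if @timestamps.size >= @limit

    @timestamps << now
    true
  end
end
```

Unlike a fixed window, requests age out individually, so an attacker cannot double their effective rate by straddling a window boundary.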

Hanami's slice architecture allows you to implement different rate limiting strategies per slice. For example, your public API slice might have different limits than your internal admin slice:

# In your slice's configuration
config.middleware.use RateLimitMiddleware, calls: 1000, period: 3600 # Public API

# Admin slice has stricter limits
config.middleware.use RateLimitMiddleware, calls: 100, period: 600

For batch operations, implement payload size limits and processing quotas:

# app/actions/batch/create.rb
module API::Actions::Batch
  class Create < API::Action
    BATCH_LIMIT = 100 # Maximum items per request

    def handle(request, response)
      items = Array(request.params[:items])
      halt 413, { error: 'Batch size too large' }.to_json if items.size > BATCH_LIMIT

      # Process items; per-item rate accounting can be applied here as well
      results = items.map { |item| process_item(item) }

      response.format = :json
      response.body = results.to_json
    end

    private

    def process_item(item)
      # Domain-specific processing
    end
  end
end

middleBrick's continuous monitoring in the Pro plan can verify your remediation efforts by rescanning your Hanami APIs on an ongoing basis and alerting you if rate limiting configurations are bypassed or if new endpoints are deployed without proper protection.

Frequently Asked Questions

How does middleBrick detect rate abuse in Hanami applications without access to source code?

middleBrick uses black-box scanning to test your Hanami API endpoints by sending controlled request patterns and analyzing responses. The scanner identifies rate abuse by detecting endpoints that accept unlimited requests, lack proper 429 responses, or process requests without timing constraints. It also examines response patterns for signs of resource exhaustion or inconsistent rate limiting across similar endpoints.

What's the difference between rate limiting and rate abuse prevention in Hanami?

Rate limiting sets boundaries on legitimate API usage (e.g., 100 requests per hour), while rate abuse prevention addresses scenarios where attackers intentionally bypass or overwhelm these limits. In Hanami, this means implementing robust middleware that can't be easily circumvented, using distributed rate limiting with Redis for scalability, and protecting batch operations from resource exhaustion attacks that standard rate limiting might not catch.
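
One way to make the distinction concrete: a token bucket bounds the sustained rate while still tolerating short legitimate bursts, something a naive fixed-window counter handles poorly. A minimal single-process sketch (capacity and refill values are illustrative):

```ruby
# Token bucket: `capacity` bounds the burst size, `refill_rate` (tokens per
# second) bounds the sustained request rate. Each allowed request spends one
# token; tokens trickle back over time up to the capacity.
class TokenBucket
  def initialize(capacity:, refill_rate:)
    @capacity = capacity
    @refill_rate = refill_rate
    @tokens = capacity.to_f
    @last = nil
  end

  def allow?(now = Time.now.to_f)
    @last ||= now
    @tokens = [@capacity, @tokens + (now - @last) * @refill_rate].min
    @last = now
    return false if @tokens < 1

    @tokens -= 1
    true
  end
end
```

In a multi-process Hanami deployment the bucket state would live in Redis rather than instance variables, but the accounting is the same.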