API Rate Abuse in Hanami with CockroachDB
API Rate Abuse in Hanami with CockroachDB — how this specific combination creates or exposes the vulnerability
Rate abuse in a Hanami application backed by CockroachDB can manifest when API endpoints lack effective request throttling, allowing a single client to generate a high volume of queries that stress both the application layer and the distributed SQL layer. Hanami encourages explicit routing and service objects, but if rate limiting is omitted or implemented only at the edge, an attacker can exploit unthrottled write or read endpoints to trigger excessive database transactions. CockroachDB, while horizontally scalable, still experiences load from frequent SQL transactions, increased contention on hot rows, and amplified WAL I/O when many concurrent sessions perform writes within short time windows.
In a typical Hanami setup, controllers often map directly to repository methods that issue SQL via Sequel or an ORM layer. Without a rate limiting check, an endpoint such as POST /api/v1/orders can be called repeatedly to create records, each resulting in an INSERT transaction in CockroachDB. Because CockroachDB uses a distributed transaction model, high-volume inserts can cause range lease contention, node-side queueing, and increased latency even if the cluster has sufficient capacity. Moreover, endpoints that query by non-optimized keys or perform full table scans can amplify load, as CockroachDB must coordinate across nodes to serve the requests. The absence of per-client or per-IP rate limits in Hanami routes allows this behavior to continue until operational alerts trigger, at which point the issue may already degrade user experience and increase infrastructure cost.
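For illustration, a minimal Hanami routes file for the kind of endpoint described above, with no throttling anywhere in the pipeline (the route and action names are hypothetical):

```ruby
# apps/web/config/routes.rb — a hypothetical vulnerable configuration:
# every request reaches the action, and therefore CockroachDB, unchecked.
post "/api/v1/orders", to: "orders#create" # each call issues an INSERT transaction
get  "/api/v1/orders", to: "orders#index"  # each call issues a read query
```

Nothing in this configuration is wrong by itself; the problem is that no layer between the router and the repository enforces a per-client budget.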
The interaction between Hanami’s request lifecycle and CockroachDB’s concurrency model means that abuse is not limited to simple overload; it can also facilitate slower, low-and-slow attacks where an attacker sends requests just below any naive threshold to avoid detection while still exhausting connection pools or transaction retries. Hanami’s emphasis on clean architecture can inadvertently encourage developers to focus on business logic and assume infrastructure controls will handle throttling. In practice, effective mitigation requires explicit rate limiting within Hanami’s pipeline, combined with CockroachDB-aware considerations such as transaction retry budgets and SQL index efficiency to reduce the load caused by abusive patterns.
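A fixed 60-requests-per-minute window is exactly the kind of naive threshold a low-and-slow attacker paces under. A token bucket with a steady refill rate bounds sustained throughput as well as bursts. The sketch below is an illustrative in-memory implementation (single-process only; a production limiter would keep this state in a shared store such as Redis), with an injectable clock so behavior can be verified deterministically:

```ruby
# Illustrative token bucket: allows bursts up to `capacity`, but caps
# sustained throughput at `refill_per_sec` tokens per second, regardless
# of how evenly a client paces its requests.
class TokenBucket
  def initialize(capacity:, refill_per_sec:,
                 clock: -> { Process.clock_gettime(Process::CLOCK_MONOTONIC) })
    @capacity       = capacity.to_f
    @refill_per_sec = refill_per_sec.to_f
    @clock          = clock
    @tokens         = @capacity
    @last           = @clock.call
  end

  # Returns true if the request may proceed, false if it should be rejected.
  def allow?
    now = @clock.call
    # Refill proportionally to elapsed time, never exceeding capacity
    @tokens = [@capacity, @tokens + (now - @last) * @refill_per_sec].min
    @last = now
    return false if @tokens < 1.0
    @tokens -= 1.0
    true
  end
end
```

With, say, `capacity: 10` and `refill_per_sec: 0.5`, a client can burst 10 requests but is then held to 30 requests per minute no matter how it spaces them, which closes the gap a fixed window leaves open.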
CockroachDB-Specific Remediation in Hanami — concrete code fixes
Remediation should combine Hanami-side controls with database-aware practices to reduce the impact of rate abuse. Implement explicit rate limiting in routing or middleware, and design CockroachDB interactions to minimize contention and expensive queries.
# Hanami action with rate limiting (Hanami 1.x style)
# Gem dependencies: hanami, sequel, pg (CockroachDB speaks the
# PostgreSQL wire protocol), redis (for the rate-limit store)
require "redis"
require "json"

module Web
  module Controllers
    module Orders
      class Create
        include Web::Action

        # Simple fixed-window counter in Redis; adjust thresholds per endpoint
        LIMIT  = 60 # max requests
        WINDOW = 60 # per 60 seconds

        # One Redis connection per process, not one per request
        REDIS = Redis.new(url: ENV.fetch("REDIS_URL", "redis://localhost:6379"))

        def call(params)
          client_ip = params.env["HTTP_X_FORWARDED_FOR"] || params.env["REMOTE_ADDR"]
          key = "rate:#{client_ip}:orders"
          current = REDIS.incr(key)
          REDIS.expire(key, WINDOW) if current == 1
          if current > LIMIT
            self.status = 429
            self.body   = { error: "Rate limit exceeded" }.to_json
            return
          end
          # ... proceed with order creation via OrderRepository ...
        end
      end
    end
  end
end
# app/repositories/order_repository.rb
class OrderRepository
  def initialize(db)
    @db = db
  end

  # Explicit transaction with a bounded retry budget: CockroachDB aborts
  # contended transactions with SQLSTATE 40001, which Sequel's postgres
  # adapter raises as Sequel::SerializationFailure.
  def create(order_params)
    user_id = order_params.fetch(:user_id)
    total   = order_params.fetch(:total)
    status  = order_params.fetch(:status, "pending")
    @db.transaction(retry_on: [Sequel::SerializationFailure], num_retries: 3) do
      # Parameterized statement; Sequel uses ? placeholders for raw SQL.
      # Avoid building ad-hoc SQL strings in loops.
      row = @db[
        "INSERT INTO orders (user_id, status, total, created_at) VALUES (?, ?, ?, now()) RETURNING id",
        user_id, status, total
      ].first
      { id: row[:id] }
    end
  rescue Sequel::Error => e
    # Retries exhausted or a non-retryable error: surface it instead of crashing
    { error: e.message }
  end

  # Indexed query (user_id should be indexed); limit returned rows and columns
  def recent_for_user(user_id, limit = 10)
    @db[
      "SELECT id, status, total FROM orders WHERE user_id = ? ORDER BY created_at DESC LIMIT ?",
      user_id, limit
    ].all
  end
end
# config/initializers/sequel.rb
# Connection settings tuned for CockroachDB. Note: Sequel.connect with a
# block closes the connection when the block returns, so an initializer
# should keep the returned Database object instead.
DB = Sequel.connect(
  ENV.fetch("COCKROACHDB_DATABASE_URL"),
  max_connections: 20 # pool size is fixed at connect time; avoid saturating the cluster
)
DB.sql_log_level = :debug # reduce verbosity in production
On the database side, ensure that SQL indexes align with query patterns used by Hanami endpoints; create secondary indexes on foreign keys and commonly filtered columns to avoid full table scans that amplify load in CockroachDB. Configure connection pools in Hanami to respect CockroachDB’s capacity, and consider using statement timeouts to bound abusive long-running queries. Monitoring SQL transaction retries and latency helps correlate API rate patterns with database behavior, enabling more precise throttling and operational adjustments.
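As a sketch of those database-side measures (table and column names assumed from the repository example above; `app_user` is a hypothetical application role):

```sql
-- Secondary index matching the filter + sort pattern of recent_for_user,
-- so per-user lookups avoid full table scans
CREATE INDEX IF NOT EXISTS orders_user_id_created_at_idx
    ON orders (user_id, created_at DESC);

-- Bound abusive long-running queries for the application's role
ALTER ROLE app_user SET statement_timeout = '5s';
```

The composite index covers both the `WHERE user_id = ?` filter and the `ORDER BY created_at DESC` sort, letting CockroachDB serve the query from an ordered index scan rather than a scan-and-sort.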