Container Escape in Hanami with API Keys
Container Escape in Hanami with API Keys — how this specific combination creates or exposes the vulnerability
A container escape in Hanami via mishandled API keys occurs when keys are accepted from untrusted sources and used to influence runtime behavior that affects container boundaries. Hanami’s routing and controller layer often reads keys from headers such as X-API-Key and uses them to gate access to internal features. If the application uses these keys to select backend services, construct command strings, or influence file paths, an attacker can supply carefully crafted values that traverse the expected boundaries.
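To make the unsafe pattern concrete, here is a hypothetical helper (not part of Hanami; the path and function names are illustrative assumptions) that interpolates the raw header value into a filesystem path, contrasted with a strict format check that treats the key as an opaque token:

```ruby
# Hypothetical vulnerable helper (illustration only): the raw X-API-Key value
# is interpolated directly into a filesystem path.
def tenant_config_path(api_key)
  "/app/config/tenants/#{api_key}.yml"
end

# A key of "../../../proc/self/environ" now resolves outside the intended
# directory once the path is normalized:
# File.expand_path(tenant_config_path("../../../proc/self/environ"))
#   # => "/proc/self/environ.yml"

# Safer: reject anything that is not a plain opaque token before any use.
def valid_key_format?(api_key)
  api_key.is_a?(String) && api_key.match?(/\A[A-Za-z0-9_-]+\z/)
end
```

The format check runs before the key touches any path, environment lookup, or command, so traversal sequences and variable syntax are rejected at the boundary.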
For example, an attacker might provide a key containing path traversal sequences or environment variable references. If Hanami code does not sanitize the key before interpolation, the runtime may resolve the key to a location outside the intended container context, such as /proc/self/environ or host-mounted volumes. Because middleBrick scans the unauthenticated attack surface, such unsafe handling of API keys can be detected as a potential vector leading to SSRF or information disclosure, which may escalate toward container escape.
Another scenario involves logging or error handling. When API keys are logged verbatim or included in stack traces, sensitive material may be exposed to container-adjacent systems, including shared log volumes that are mounted across containers. middleBrick’s Data Exposure and Unsafe Consumption checks look for these patterns to highlight how leaked keys can aid lateral movement. The interrelation of API key handling and container isolation becomes critical when the same key is used to authorize access to privileged endpoints or to derive configuration that affects container networking, mounts, or capabilities.
API Key-Specific Remediation in Hanami — concrete code fixes
To reduce the risk of container escape via API keys in Hanami, validate and constrain every key before use. Do not allow raw key values to influence filesystem paths, environment variable lookups, or command construction. Treat API keys as opaque identifiers and map them to internal permissions without reinterpretation.
Example: Safe key lookup with a predefined mapping
# config/initializers/api_keys.rb
# Keys are shown inline for illustration only; in production, load them from
# a secrets manager or environment-scoped configuration, not source control.
API_KEYS = {
  "abc123" => { tenant: "tenant_a", permissions: [:read] },
  "def456" => { tenant: "tenant_b", permissions: [:read, :write] }
}.freeze

module ApiKeyAuth
  # `headers` is the Rack env; the X-API-Key header arrives as HTTP_X_API_KEY.
  def self.from_header(headers)
    key = headers["HTTP_X_API_KEY"]
    return nil unless key
    API_KEYS[key]
  end
end
This approach ensures the key is never concatenated into strings that reach the shell or filesystem. The mapping acts as a strict allowlist, preventing injection of traversal or variable syntax.
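A self-contained sketch of that allowlist behavior (mirroring the mapping above with one example key): a crafted key simply misses the hash lookup and yields nil, so traversal or variable syntax never reaches any interpreter:

```ruby
# Minimal allowlist mirroring the initializer above (illustrative key only).
API_KEYS = {
  "abc123" => { tenant: "tenant_a", permissions: [:read] }
}.freeze

# Lookup is a plain hash fetch; the key's raw bytes are never re-interpreted.
def key_data_for(raw_key)
  API_KEYS[raw_key]
end

key_data_for("abc123")                  # => { tenant: "tenant_a", permissions: [:read] }
key_data_for("../../proc/self/environ") # => nil
key_data_for("$HOME")                   # => nil
```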
Example: Parameterized HTTP client usage
# lib/clients/internal.rb
require "net/http"

module Internal
  class HttpClient
    def self.get(tenant_id, path)
      # tenant_id is derived from the validated key mapping, not raw input.
      # `path` should likewise come from an allowlist, never from the key.
      uri = URI("http://internal-service/#{tenant_id}#{path}")
      request = Net::HTTP::Get.new(uri)
      Net::HTTP.start(uri.hostname, uri.port, use_ssl: false) do |http|
        http.request(request)
      end
    end
  end
end
By deriving tenant_id from the pre-mapped permissions, you avoid any direct use of the API key in URI assembly. Also ensure that logging filters API key values to prevent accidental exposure in container-mounted logs.
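One way to keep keys out of log output is a custom formatter that redacts anything following the header name. This is a minimal sketch using Ruby's stdlib Logger; the regex, helper name, and header spelling are assumptions, not a Hanami API:

```ruby
require "logger"
require "time" # for Time#iso8601

FILTERED = "[FILTERED]"

# Redact any value that follows an X-API-Key header name in a log line.
def redact_api_keys(message)
  message.gsub(/(X-API-Key:\s*)\S+/i, "\\1#{FILTERED}")
end

logger = Logger.new($stdout)
logger.formatter = proc do |severity, time, _progname, msg|
  "#{time.utc.iso8601} #{severity} #{redact_api_keys(msg.to_s)}\n"
end

logger.warn("auth failed for X-API-Key: abc123")
# logs a line ending in: auth failed for X-API-Key: [FILTERED]
```

Applying the redaction in the formatter means every log call is covered, including lines emitted from error handlers, before they reach container-mounted volumes.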
Example: Hanami controller with safe authorization
# apps/web/controllers/api/resources.rb
class Api::Resources < Hanami::Action
  def initialize(*)
    super
    @auth = ApiKeyAuth
  end

  def call(params)
    # params.env is the Rack env carrying HTTP_X_API_KEY.
    key_data = @auth.from_header(params.env)
    halt 401, { error: "unauthorized" }.to_json unless key_data
    # Use key_data for authorization checks, not for path or env manipulation.
    if key_data[:permissions].include?(:read)
      render "api/resources/show", layout: false
    else
      halt 403, { error: "forbidden" }.to_json
    end
  end
end
These patterns align with remediations that map to compliance frameworks flagged by middleBrick, such as the OWASP API Security Top 10 (2023) categories Broken Object Level Authorization and Security Misconfiguration. Continuous scanning with the Pro plan can help ensure that any regression in key handling is caught early, and the GitHub Action can fail builds if risky patterns are introduced.