HIGH insecure designhanami

Insecure Design in Hanami

How Insecure Design Manifests in Hanami

Insecure Design in Hanami applications often emerges from architectural decisions that inadvertently expose sensitive operations or data. Unlike implementation bugs, these flaws stem from missing or inadequate security controls at the design level.

A common manifestation occurs in Hanami's controller architecture. When developers create controller actions without proper authorization checks, they create direct paths to sensitive operations. For example:

class Users::Destroy < ActionController::Base
  def call(params)
    UserRepository.new.delete(params[:id])
  end
end

This controller action allows any authenticated user to delete any account by simply changing the ID in the URL. The design flaw here is the absence of authorization logic before performing the destructive operation.

Another Hanami-specific design vulnerability appears in view templates. Hanami's view system allows direct access to entity attributes, which can lead to information disclosure:

class Users::Show < View
  expose :user
end

class Users::ShowView
  include Hanami::View
  
  expose :user
  
  def user_info
    "#{user.email} - #{user.created_at}"
  end
end

If the view doesn't properly filter which user attributes are exposed, sensitive information like internal IDs, timestamps, or even hashed passwords (if accidentally exposed through associations) can leak to unauthorized users.

Hanami's repository pattern, while powerful, can also introduce insecure design when developers create generic repository methods that bypass business logic:

class UserRepository
  include Hanami::Repository
  
  def find_by_id(id)
    query { where(id: id) }.first
  end
end

This method provides direct database access without any authorization checks, allowing any part of the application to retrieve any user record.

Hanami-Specific Detection

Detecting insecure design in Hanami applications requires examining both the code structure and runtime behavior. middleBrick's black-box scanning approach is particularly effective for identifying these architectural flaws without requiring source code access.

middleBrick scans Hanami APIs by sending requests to various endpoints and analyzing the responses. For authentication-related insecure design, it tests for missing authorization controls by:

  • Attempting to access endpoints with different user roles
  • Modifying ID parameters to test for BOLA (Broken Object Level Authorization)
  • Checking for excessive data exposure in API responses

For example, middleBrick would detect the vulnerability in a Hanami controller like this:

class Admin::UsersController < ActionController::Base
  def show
    @user = UserRepository.new.find(params[:id])
  end
end

By attempting to access /admin/users/1, /admin/users/2, etc., middleBrick can determine if the endpoint properly restricts access to admin users only.

middleBrick's OpenAPI spec analysis is particularly valuable for Hanami applications. When a Hanami application provides an OpenAPI specification, middleBrick cross-references the documented security requirements with actual runtime behavior:

paths:
  /users/{id}:
    get:
      summary: Get user by ID
      security:
        - bearerAuth: []
      parameters:
        - name: id
          in: path
          required: true

If the spec indicates authentication is required but middleBrick can access the endpoint without credentials, it flags this as an insecure design issue.

For LLM/AI security, middleBrick's unique capability detects if a Hanami application has exposed AI endpoints without proper safeguards:

class ChatController < ActionController::Base
  def create
    prompt = params[:prompt]
    response = AI::Model.generate(prompt)
    render json: { response: response }
  end
end

middleBrick tests for prompt injection vulnerabilities, system prompt leakage, and excessive agency in these AI endpoints—issues that could allow attackers to manipulate the AI's behavior or extract sensitive information.

Hanami-Specific Remediation

Remediating insecure design in Hanami applications requires architectural changes that enforce proper authorization and data exposure controls. Hanami's modular architecture provides several native features to address these issues.

For authorization, implement policies using Hanami's policy system:

class UserPolicy
  def initialize(user, target)
    @user = user
    @target = target
  end

  def show?
    @user.admin? || @user.id == @target.id
  end

  def destroy?
    @user.admin? && [email protected]?
  end
end

Then integrate policies into your controllers:

class Users::Show < ActionController::Base
  include Hanami::Action::Callbacks
  
  before :authorize
  
  def call(params)
    @user = UserRepository.new.find(params[:id])
  end
  
  private
  
  def authorize
    policy = UserPolicy.new(current_user, @user)
    halt 403 unless policy.show?
  end
end

For data exposure issues, use Hanami's view system to control exactly what information is exposed:

class User::ShowView < View
  include Hanami::View
  
  expose :user do |user|
    { 
      id: user.id,
      name: user.name,
      email: user.email,
      created_at: user.created_at
    }
  end
end

This explicit exposure ensures that sensitive attributes like password hashes, API keys, or internal system data never leak through the API.

For repository-level security, create scoped queries that automatically filter data based on the current user:

class UserRepository
  include Hanami::Repository
  
  def find_by_id_and_user(id, user)
    query do
      where(id: id)
      where(user_id: user.id)
    end.first
  end
end

This pattern ensures that users can only access their own data, even if they manipulate the ID parameter.

For AI endpoint security in Hanami applications, implement proper input validation and output filtering:

class ChatController < ActionController::Base
  def create
    prompt = params[:prompt]
    
    # Validate prompt length and content
    halt 400 if prompt.length > 1000
    halt 400 if contains_disallowed_content?(prompt)
    
    response = AI::Model.generate(prompt)
    
    # Filter sensitive information from response
    filtered_response = filter_sensitive_data(response)
    
    render json: { response: filtered_response }
  end
end

middleBrick's Pro plan includes continuous monitoring that can alert you if these security controls degrade over time, ensuring your remediations remain effective as your application evolves.

Frequently Asked Questions

How does middleBrick detect insecure design in Hanami applications without access to source code?
middleBrick uses black-box scanning to test the actual runtime behavior of your Hanami API. It sends requests with different parameters and authentication states to identify missing authorization controls, data exposure issues, and other architectural flaws. The scanner also analyzes OpenAPI specifications when available to cross-reference documented security requirements with actual behavior.
Can middleBrick's LLM security features detect AI-specific insecure design in Hanami applications?
Yes, middleBrick is the only self-service scanner that includes active LLM security probing. It tests for prompt injection vulnerabilities, system prompt leakage, excessive agency, and unauthenticated AI endpoint access. These tests are particularly important for Hanami applications that expose AI/ML endpoints, as they can identify design flaws that allow attackers to manipulate AI behavior or extract sensitive information.