Prompt Injection in Grape with Basic Auth
Prompt Injection in Grape with Basic Auth — how this specific combination creates or exposes the vulnerability
Grape is a REST-like API micro-framework for Ruby, often used to build JSON APIs. When you expose a Grape endpoint that accepts user input and forwards it to an LLM, and you protect that endpoint with HTTP Basic Auth, the interaction between authentication and prompt handling can create a prompt injection surface. Basic Auth is a stateless challenge-response mechanism where credentials are sent in an Authorization header; it does not inherently protect against malicious input that reaches your application logic.
Consider a Grape resource that accepts a user query and sends it to an LLM. If the prompt template is built by string interpolation and includes static instructions that should never be overridden, an attacker can attempt to inject instructions through the query parameter. The presence of Basic Auth may lead developers to assume the endpoint is private, which can reduce scrutiny during testing. However, if authentication succeeds and the input is not validated or sanitized, the injected text can shift the LLM behavior in unintended ways, such as revealing the system prompt or changing the intended task.
The LLM/AI Security checks in middleBrick include Active Prompt Injection Testing, which runs a sequence of five probes against the endpoint. These probes include system prompt extraction, instruction override, DAN jailbreak, data exfiltration, and cost exploitation. When an API is protected by Basic Auth, the scanner will first perform authentication using provided credentials to establish a valid session, then proceed with the injection probes against the authenticated endpoints. This reveals whether the endpoint correctly isolates user data from system instructions even after successful authentication.
Additionally, middleBrick checks for System Prompt Leakage using 27 regex patterns tuned to common LLM formats such as ChatML, Llama 2, Mistral, and Alpaca. If your Grape API passes user input into the prompt chain without proper separation, patterns that match these formats may trigger findings. The scanner also flags endpoints that expose LLM-related behavior without authentication, but in the case of Basic Auth, it validates credentials and then tests what authenticated users can influence in the model’s output.
Output scanning further examines LLM responses for PII, API keys, or executable code. With Basic Auth, the risk is not that credentials are leaked via the model, but that an authenticated user can coerce the model into revealing sensitive parts of the system prompt or bypass intended constraints. This demonstrates that authentication and prompt integrity are separate concerns; one does not imply the other.
Basic Auth-Specific Remediation in Grape — concrete code fixes
To secure a Grape endpoint using Basic Auth while reducing prompt injection risk, combine proper authentication with strict input handling and prompt engineering. Authentication should validate credentials before processing the request, and the prompt template must ensure user input is treated strictly as data, not as part of the instruction set.
Here is an example of a Grape resource with Basic Auth implemented using the built-in before block. This ensures that only requests with valid credentials proceed to the resource actions.
require 'grape'
require 'base64'
class ProtectedAPI < Grape::API
format :json
before do
auth_header = request.env['HTTP_AUTHORIZATION']
unless auth_header&.start_with?('Basic ')
error!({ error: 'Unauthorized' }, 401)
end
decoded = Base64.decode64(auth_header.split(' ').last)
username, password = decoded.split(':', 2)
unless valid_credentials?(username, password)
error!({ error: 'Invalid credentials' }, 401)
end
end
helpers do
def valid_credentials?(username, password)
# Compare against secure store; this is a simplified example
username == 'admin' && password == 'S3cur3P@ss!'
end
end
desc 'Submit a query to the LLM endpoint'
params do
requires :query, type: String, desc: 'User query to send to the model'
end
post '/ask' do
user_query = params[:query]
# Pass user_query only as data, never merge into system prompt
result = call_llm_with_user_data(user_query)
{ response: result }
end
private
def call_llm_with_user_data(user_input)
# Example: send only user_input as a separate variable to the model
# Do not interpolate user_input into the system prompt
system_prompt = 'You are a helpful assistant. Answer concisely.'
# pseudo function representing your LLM client call
llm_chat_completion(system_prompt: system_prompt, user_input: user_input)
end
end
Key remediation points:
- Validate credentials early using a
beforeblock and reject requests without the proper Authorization header. - Never concatenate or interpolate user input into the system prompt or instruction template. Treat user input as a separate parameter to the LLM call.
- Use strict parameter validation with Grape’s
paramsDSL to ensure only expected types and formats are accepted. - Employ output scanning practices on your side to detect any unintended data leakage from the model, even when authenticated.
middleBrick’s scans can verify that your endpoint requires authentication and then test authenticated inputs for prompt injection and system prompt leakage. The Pro plan offers continuous monitoring and can integrate these checks into your CI/CD pipeline to catch regressions early.
Related CWEs: llmSecurity
| CWE ID | Name | Severity |
|---|---|---|
| CWE-754 | Improper Check for Unusual or Exceptional Conditions | MEDIUM |