Prompt Injection in Axum with Dynamodb
Prompt Injection in Axum with Dynamodb — how this specific combination creates or exposes the vulnerability
Prompt injection against an Axum service that queries DynamoDB typically arises when user-controlled input is embedded into prompts sent to an LLM endpoint without sufficient validation or isolation. In this stack, the API is implemented in Rust using Axum, data is retrieved from or stored in DynamoDB, and an LLM call is made to generate responses or perform reasoning. If user input influences the system prompt, the LLM instructions, or the data passed into the model, an attacker may craft inputs that alter the intended behavior of the prompt, leading to unintended actions or data leakage.
Because DynamoDB is often used as a backend for user profiles, configuration, or session data, an Axum handler might first fetch records (e.g., user context or instruction overrides) from DynamoDB and then incorporate that data into the prompt. If the data from DynamoDB is not treated as potentially malicious, and if the handler directly interpolates user-supplied identifiers or keys into DynamoDB queries or into the LLM prompt, the boundary between data and instructions can blur. For example, an attacker may manipulate a user-supplied parameter to change which DynamoDB item is retrieved, or to inject text that ends up in the prompt sent to the LLM.
The LLM/AI Security checks in middleBrick specifically probe for system prompt extraction and instruction override via sequential probes, including attempts to exfiltrate data or cause the model to ignore original instructions. When Axum routes call DynamoDB and then forward results to an LLM, missing input validation, missing output encoding, or missing separation between data and prompts can amplify risks. Unauthenticated LLM endpoint detection is also relevant: if the LLM endpoint is exposed without authentication and Axum does not enforce proper authorization, an external attacker may probe the service to discover prompt behavior or to trigger cost exploitation.
Concrete risk patterns include: (1) direct concatenation of user input into a prompt string that also includes DynamoDB-derived context, enabling prompt injection; (2) insufficient validation of DynamoDB primary keys or query parameters, allowing an attacker to pivot to other items and retrieve sensitive context used in prompts; (3) missing sanitization of LLM outputs that may contain API keys or PII, which is particularly hazardous when outputs are surfaced in admin interfaces or logs. These patterns map to common web and API risks such as injection and broken access control, and they align with findings that middleBrick reports with severity and remediation guidance.
Dynamodb-Specific Remediation in Axum — concrete code fixes
Remediation focuses on strict separation of data and prompts, robust input validation, and safe handling of DynamoDB results before they reach the LLM. Below are concrete Axum handler examples that demonstrate secure patterns.
First, validate and constrain DynamoDB keys instead of echoing user input directly into key expressions. Use strongly typed structures and avoid string interpolation for table or key names.
use aws_sdk_dynamodb::types::AttributeValue;
use serde::{Deserialize, Serialize};
#[derive(Debug, Deserialize, Serialize)]
struct UserContext {
user_id: String,
default_instruction: String,
}
async fn get_user_context(client: &aws_sdk_dynamodb::Client, user_id: &str) -> Result {
// Validate user_id format strictly (e.g., UUID or alphanumeric pattern)
if !user_id.chars().all(|c| c.is_ascii_alphanumeric() || c == '-' || c == '_') {
return Err("invalid user_id");
}
let output = client.get_item()
.table_name("UserContextTable")
.key("user_id", AttributeValue::S(user_id.to_string()))
.send()
.await
.map_err(|_|"dynamodb_error")?;
let item = output.item().ok_or("no_item")?;
let ctx = UserContext {
user_id: item.get("user_id").and_then(|v| v.as_s().ok()).map(|s| s.to_string()).ok_or("missing_user_id")?,
default_instruction: item.get("default_instruction").and_then(|v| v.as_s().ok()).map(|s| s.to_string()).ok_or("missing_instruction")?,
};
Ok(ctx)
}
Second, build prompts using explicit, immutable templates and pass data separately rather than interpolating user or DynamoDB-derived text into the system prompt. This reduces the risk of instruction override.
use serde_json::json;
fn build_prompt(user_data: &UserContext, user_message: &str) -> (String, serde_json::Value) {
// System prompt is fixed and does not include raw user/DynamoDB text
let system_prompt = "You are a helpful assistant. Follow the user's instructions precisely.";
// User message is treated as separate content, not merged into the system prompt
let user_content = json!({
"role": "user",
"content": format!("{}\nUser data version: {}", user_message, user_data.user_id)
});
(system_prompt.to_string(), user_content)
}
Third, apply output scanning before displaying or logging LLM responses. Even when using Axum to call DynamoDB, ensure responses from the LLM are checked for PII, API keys, or executable code. If your integration stores or forwards outputs, enforce strict allow-lists for characters and structures based on expected use cases.
Finally, enforce authentication and authorization for both the LLM endpoint and DynamoDB access. Do not rely on network-level isolation alone; validate tokens and scopes in Axum middleware, and use least-privilege IAM policies for DynamoDB so that the service can only access the specific tables and items required. These steps reduce the attack surface that middleBrick’s unauthenticated LLM endpoint and SSRF or BFLA checks would otherwise probe.
Related CWEs: llmSecurity
| CWE ID | Name | Severity |
|---|---|---|
| CWE-754 | Improper Check for Unusual or Exceptional Conditions | MEDIUM |