Prompt Injection Indirect in Axum (Rust)
Prompt Injection Indirect in Axum with Rust — how this specific combination creates or exposes the vulnerability
Prompt injection indirect in an Axum service written in Rust occurs when untrusted input influences the construction or selection of prompts that are sent to an LLM endpoint, without directly embedding user content into the system prompt. Unlike direct prompt injection where the attacker tries to alter the system prompt, indirect injection leverages business logic, configuration, or data fetched at runtime to shape the LLM interaction. In Axum, this often maps to how route parameters, query strings, or JSON bodies are used to build dynamic instructions, select templates, or choose which model or tools to invoke.
Consider an Axum handler that builds a prompt from a user-supplied identifier such as a document ID or tenant code. If the identifier is concatenated into a prompt template or used to select a system prompt file, an attacker can supply values that change the effective instruction set seen by the LLM. For example, a routing parameter like tenant_id might map to a stored prompt fragment; if the mapping is not strictly validated, an attacker can traverse to or inject fragments that reorder instructions, reveal internal guidelines, or change the expected output format. Because Axum is strongly typed and uses extractor patterns, developers may inadvertently trust path or query extractors and pass them through helper functions that build the final prompt string.
The risk is amplified when the service uses dynamic tool selection or model routing based on the same untrusted input. If a tenant ID influences which tool configuration is loaded, an attacker might escalate privileges by selecting administrative tools or bypassing intended guardrails. In Rust, the type system and ownership model do not prevent logical flaws; if validation is omitted, the application compiles and runs while still exposing an indirect injection surface. An LLM endpoint that is unauthenticated or permissive can be targeted to observe how different inputs change responses, enabling enumeration of prompt templates or leakage of internal instructions through crafted query parameters or header values.
middleBrick’s LLM/AI Security checks specifically target these indirect patterns by probing how inputs affect system prompts and tool selection. It runs sequential probes—system prompt extraction, instruction override, DAN jailbreak, data exfiltration, and cost exploitation—against endpoints that incorporate user-controlled data into LLM interactions. The scanner also detects unauthenticated LLM endpoints and excessive agency patterns such as unchecked tool_calls or function_call usage, which are common in Rust services that auto-discover tools based on request context. Because middleBrick references real CVEs and OWASP API Top 10 mappings, it helps teams understand how indirect prompt injection fits into known attack classes like Injection and Broken Object Level Authorization.
In Axum services, indirect prompt injection often intersects with insecure deserialization or unsafe consumption of user data when request payloads are forwarded to language model wrappers. If the wrapper builds prompts by interpolating JSON fields without schema validation, attackers can nest values or inject newline sequences that shift prompt boundaries. The presence of OpenAPI/Swagger spec analysis in middleBrick helps highlight mismatches between declared parameters and runtime behavior, especially when $ref definitions are resolved across spec versions. By correlating runtime findings with spec definitions, the scanner surfaces inconsistencies where documentation understates how user-controlled data reaches the LLM path.
Rust-Specific Remediation in Axum — concrete code fixes
Remediation focuses on strict input validation, separation of prompts from user data, and controlled routing. In Axum, prefer strongly typed extractors and validate all path, query, and body inputs against an allowlist before they touch any prompt-building logic. Use enums or sealed traits to represent fixed sets of templates or tools, and avoid dynamic file paths derived from user input. Store prompt templates outside the request lifecycle, for example as static strings or loaded once at startup, and reference them by immutable identifiers that are mapped server-side.
When constructing prompts, use format strings with explicit placeholders and avoid string concatenation with raw user values. For dynamic tool selection, map incoming categorical values to predefined configurations rather than constructing command strings or command names on the fly. Enforce authentication and authorization checks before allowing any selection that could change the LLM’s instructions or toolset. The following Rust examples illustrate secure patterns in Axum.
Example 1: Safe prompt selection with enumerated templates
use axum::{routing::get, Router};
use serde::Deserialize;
#[derive(Deserialize)]
struct PromptRequest {
template_id: String,
}
enum SystemPrompt {
Support,
Billing,
}
impl SystemPrompt {
fn from_id(id: &str) -> Option {
match id {
"support" => Some(SystemPrompt::Support),
"billing" => Some(SystemPrompt::Billing),
_ => None,
}
}
fn content(&self) -> &'static str {
match self {
SystemPrompt::Support => "You are a support assistant. Be concise and polite.",
SystemPrompt::Billing => "You are a billing assistant. Stick to pricing and invoices.",
}
}
}
async fn handle_prompt(req: PromptRequest) -> String {
match SystemPrompt::from_id(&req.template_id) {
Some(tmpl) => format!("{}\nUser context: {{}}", tmpl.content()),
None => String::from("Invalid template"),
}
}
fn app() -> Router {
Router::new().route("/prompt", get(|body: PromptRequest| async move { handle_prompt(body) }))
}
Example 2: Parameterized prompts without dynamic template paths
use axum::{routing::post, Json};
use serde::{Deserialize, Serialize};
#[derive(Deserialize)]
struct UserQuery {
user_id: u64,
query: String,
}
#[derive(Serialize)]
struct ModelResponse {
answer: String,
}
async fn build_prompt(Json(payload): Json) -> ModelResponse {
// Validate user_id against a known set or database record before use
let tenant_context = fetch_tenant_context(payload.user_id).unwrap_or_default();
// Use a static base prompt and inject context via controlled substitution
let base = "Answer the user query with care.";
let prompt = format!("{}\nTenant context: {}", base, tenant_context);
let answer = call_llm(&prompt, payload.query).await;
ModelResponse { answer }
}
async fn call_llm(prompt: &str, query: String) -> String {
// Integration with LLM client; prompt is built from controlled parts only
String::from("Simulated answer")
}
fn fetch_tenant_context(_id: u64) -> Option {
// In practice, fetch from a trusted source with proper access controls
Some("context_data".to_string())
}
fn app() -> Router {
Router::new().route("/chat", post(|body: Json| async move { handle_prompt(body).await }))
}
Always apply rate limiting and input sanitization at the Axum middleware layer to reduce abuse surface. When integrating with middleBrick’s GitHub Action, set thresholds in CI/CD to fail builds if risk scores degrade, ensuring prompt injection patterns are caught before deployment. The MCP Server can be used from IDEs to validate prompt construction logic during development, while the Web Dashboard helps track how changes affect long-term security scores.