Llm Data Leakage in Actix with Basic Auth
Llm Data Leakage in Actix with Basic Auth — how this specific combination creates or exposes the vulnerability
When an Actix web service uses HTTP Basic Authentication and exposes an unauthenticated or insufficiently protected LLM endpoint, the combination can lead to LLM data leakage. In this scenario, credentials are typically transmitted in the Authorization header as a base64-encoded string. If the service does not enforce strict authentication on the LLM route, an attacker can send requests without credentials and still receive responses that may contain sensitive information used or generated by the model.
During a black-box scan like middleBrick, the LLM/AI Security checks probe endpoints that may be inadvertently accessible. Even when Basic Auth protects a subset of routes, developers might overlook applying the same controls to LLM-related handlers, such as completions or chat routes. This creates an unauthenticated attack surface where system prompts, user data, or model outputs can be exposed. middleBrick specifically checks for unauthenticated LLM endpoints and uses patterns that detect whether responses include sensitive content, such as credentials or proprietary instructions.
In an Actix application, if the LLM handler is mounted under a path that does not require authentication, or if the middleware that validates Basic Auth is not applied to that route, the model’s responses may include data that should be restricted. For example, a prompt that includes internal instructions or data may be returned verbatim in the model’s output. An attacker can then exfiltrate this information by making a simple request to the exposed endpoint.
middleBrick’s LLM/AI Security checks include system prompt leakage detection using multiple regex patterns tailored to ChatML, Llama 2, Mistral, and Alpaca formats. These patterns are designed to identify whether system-level instructions are present in model responses. Additionally, active prompt injection testing probes for weaknesses that can lead to unauthorized data disclosure. If an Actix route serving an LLM is not properly gated by authentication, these checks can surface the risk of leakage before it is exploited in the wild.
The interaction between Basic Auth and LLM endpoints in Actix is particularly sensitive because developers may assume that protecting the API contract is sufficient, while neglecting to apply the same authentication logic to generated content routes. Because middleBrick scans the unauthenticated attack surface, it can highlight these gaps by identifying routes that return model-generated data without requiring credentials, thereby pointing to a concrete LLM data leakage scenario.
Basic Auth-Specific Remediation in Actix — concrete code fixes
To mitigate LLM data leakage in Actix when using Basic Authentication, ensure that the authentication middleware is applied consistently across all routes, including those that handle LLM requests. The following examples demonstrate how to implement Basic Auth protection on an Actix route that serves an LLM endpoint.
First, use the actix-web extractor pattern to validate credentials before allowing access to the handler. This approach checks the Authorization header and rejects requests that do not provide valid credentials.
use actix_web::{web, App, HttpResponse, HttpServer, Responder};
use actix_web::http::header::HeaderValue;
use actix_web::dev::ServiceRequest;
use actix_web::error::ErrorUnauthorized;
use std::future::{ready, Ready};
fn validate_basic_auth(req: ServiceRequest) -> Result {
const VALID_USER: &str = "admin";
const VALID_PASS: &str = "securepassword123";
if let Some(auth_header) = req.headers().get("authorization") {
if let Ok(auth_str) = auth_header.to_str() {
if auth_str.starts_with("Basic ") {
let encoded = &auth_str[6..];
// In production, use proper base64 decoding and constant-time comparison
if encoded == "YWRtaW46c2VjdXJlZ3VhcmQxMjM=" { // base64("admin:securepassword123")
return Ok(req);
}
}
}
}
Err((ErrorUnauthorized("Unauthorized"), req))
}
async fn llm_chat() -> impl Responder {
// Simulated LLM response — ensure no sensitive data is echoed
HttpResponse::Ok().json(serde_json::json!({
"response": "This is a safe model response."
}))
}
#[actix_web::main]
async fn main() -> std::io::Result<()> {
HttpServer::new(|| {
App::new()
.route("/api/chat", web::get().to(llm_chat).wrap_fn(|req, srv| {
let fut = validate_basic_auth(req);
async move {
match fut {
Ok(req) => srv.call(req).await,
Err(e) => Err(e),
}
}
}))
})
.bind("127.0.0.1:8080")?
.run()
.await
}
In this example, the validate_basic_auth function checks the Authorization header before allowing access to the /api/chat route. By wrapping the LLM route with this middleware, you ensure that unauthenticated requests cannot reach the handler, reducing the risk of LLM data leakage.
Additionally, avoid including sensitive context in prompts that could be reflected in model outputs. Even with authentication in place, design prompts to limit the exposure of internal instructions. middleBrick’s findings can help identify routes where authentication is missing and where responses may contain sensitive information, supporting targeted remediation.
For teams using the middleBrick CLI, running middlebrick scan <url> against an Actix service can reveal whether LLM endpoints are inadvertently exposed. The dashboard and reports from the Pro plan can help track this risk over time and integrate checks into CI/CD pipelines, ensuring that authentication requirements are enforced as part of the deployment process.
Related CWEs: llmSecurity
| CWE ID | Name | Severity |
|---|---|---|
| CWE-754 | Improper Check for Unusual or Exceptional Conditions | MEDIUM |