LLM Data Leakage in Fiber with Basic Auth
LLM Data Leakage in Fiber with Basic Auth — how this specific combination creates or exposes the vulnerability
When an API built with Fiber exposes endpoints that return or process sensitive data without enforcing authentication, and it also exposes an unauthenticated LLM endpoint, the combination can lead to LLM data leakage. Even if the application protects some routes with HTTP Basic Authentication, an LLM endpoint that does not validate credentials lets an attacker interact with the model freely, submitting crafted prompts through chat completions or tool calls and harvesting whatever sensitive context the model can reach. Because Basic Auth is only as strong as its transport protection and server-side enforcement, failing to apply the same authentication check to the LLM route creates a bypass path.
Consider a Fiber API that protects admin routes with Basic Auth but leaves a /chat/completions route open. An attacker can probe the unauthenticated LLM endpoint by sending crafted prompts that include sensitive context, such as internal data formats or sample PII. If the model is not configured to reject such input, or if output scanning is not enabled, the response may echo back secrets, API keys, or internal instructions. middleBrick’s LLM/AI Security checks detect this by testing for system prompt leakage across 27 regex patterns and by scanning outputs for credentials and PII, highlighting how an unauthenticated route can leak data even when other routes are protected.
Another scenario involves prompt injection through Basic Auth–protected routes that inadvertently pass user-controlled headers or cookies into LLM prompts. For example, if a request’s authorization header is used to personalize prompts without strict validation, an attacker may inject instructions that cause the model to reveal training data or internal logic. middleBrick’s active prompt injection tests—covering system prompt extraction, instruction override, DAN jailbreak, data exfiltration, and cost exploitation—can surface these weaknesses by observing whether the model alters its behavior based on maliciously crafted inputs.
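One practical mitigation for the header-injection scenario above is to sanitize any user-controlled value before it is interpolated into a prompt. The sketch below is illustrative: `sanitizeForPrompt` and its parameters are hypothetical names, not part of Fiber or any LLM SDK, and a production system would pair this with allowlist validation.

```go
package main

import (
	"regexp"
	"strings"
)

// newlinePattern matches line breaks an attacker could use to smuggle
// extra instructions into a prompt built from request metadata.
var newlinePattern = regexp.MustCompile(`[\r\n]+`)

// sanitizeForPrompt strips control characters and length-limits a
// user-controlled value before it is interpolated into an LLM prompt.
func sanitizeForPrompt(value string, maxLen int) string {
	// Collapse line breaks so injected "system" lines cannot form.
	value = newlinePattern.ReplaceAllString(value, " ")
	// Drop remaining control characters.
	value = strings.Map(func(r rune) rune {
		if r < 32 || r == 127 {
			return -1
		}
		return r
	}, value)
	// Bound the length so oversized headers cannot dominate the prompt.
	if len(value) > maxLen {
		value = value[:maxLen]
	}
	return value
}
```

A handler would call this on any header or cookie value before embedding it in a prompt template, so injected line breaks and control characters never reach the model as separate instructions.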
In a real-world test, a request with malformed or absent credentials sent to an unprotected LLM route may be processed normally, because the server never checks identity and simply defers to the model’s default behavior. middleBrick’s unauthenticated LLM endpoint detection identifies routes that do not enforce identity verification, reducing the risk that sensitive context is processed and echoed back. Because LLM responses may contain API keys or executable code, enabling output scanning and monitoring for excessive agency (such as tool_calls or function_call patterns) is essential to detect when a model is coaxed into exposing data or performing unintended actions.
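Output scanning of the kind described above can be approximated with a few regular expressions run over the model’s response before it is returned to the client. This is a minimal sketch, not middleBrick’s implementation: the pattern list and the `containsSecret` helper are illustrative, and a real scanner would use a broader, maintained rule set.

```go
package main

import "regexp"

// Illustrative patterns for common credential shapes; a production
// scanner would use a broader, maintained rule set.
var secretPatterns = []*regexp.Regexp{
	regexp.MustCompile(`sk-[A-Za-z0-9]{20,}`),                // OpenAI-style API keys
	regexp.MustCompile(`AKIA[0-9A-Z]{16}`),                   // AWS access key IDs
	regexp.MustCompile(`-----BEGIN [A-Z ]*PRIVATE KEY-----`), // PEM private keys
}

// containsSecret reports whether an LLM response appears to echo a credential.
func containsSecret(output string) bool {
	for _, p := range secretPatterns {
		if p.MatchString(output) {
			return true
		}
	}
	return false
}
```

A handler can block or redact the response when this check fires, so a model that has been coaxed into echoing a key never leaks it to the caller.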
middleBrick’s OpenAPI/Swagger analysis helps correlate spec definitions with runtime behavior. If the spec defines a security scheme for Basic Auth but the LLM route lacks a security requirement, the scan will flag this inconsistency. This cross-referencing supports prioritized findings mapped to frameworks like OWASP API Top 10 and SOC2, showing exactly where authentication gaps enable data leakage through LLM interactions.
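The spec-side half of that cross-reference can be sketched as a simple walk over an OpenAPI document, flagging paths whose operations declare no security requirement. This is a simplified illustration under stated assumptions, not middleBrick’s actual analysis: `findUnsecuredPaths` is a hypothetical helper, and it only inspects the `security` fields, not runtime behavior.

```go
package main

import "encoding/json"

// findUnsecuredPaths returns spec paths whose operations declare no
// security requirement, the kind of spec inconsistency described above.
func findUnsecuredPaths(specJSON []byte) ([]string, error) {
	var spec struct {
		// Global requirements apply to every operation unless overridden.
		Security []map[string][]string `json:"security"`
		Paths    map[string]map[string]struct {
			Security []map[string][]string `json:"security"`
		} `json:"paths"`
	}
	if err := json.Unmarshal(specJSON, &spec); err != nil {
		return nil, err
	}
	if len(spec.Security) > 0 {
		return nil, nil // a global requirement covers every path
	}
	var unsecured []string
	for path, ops := range spec.Paths {
		for _, op := range ops {
			if len(op.Security) == 0 {
				unsecured = append(unsecured, path)
				break
			}
		}
	}
	return unsecured, nil
}
```

Run against a spec where /admin/data requires basicAuth but /chat/completions declares nothing, this would surface only the LLM route, mirroring the authentication gap the scan flags.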
Basic Auth-Specific Remediation in Fiber — concrete code fixes
To mitigate LLM data leakage in Fiber when using Basic Auth, ensure that every route that handles sensitive data or interacts with an LLM enforces authentication consistently. Do not rely on route grouping alone; apply middleware to each endpoint that requires protection, and never assume that an LLM route is safe because other routes are secured.
Below are concrete Fiber examples using the github.com/gofiber/fiber/v2 package and the basicauth middleware that ships with Fiber (github.com/gofiber/fiber/v2/middleware/basicauth). The first example shows how to protect a standard API route with Basic Auth.
// main.go
package main

import (
	"log"

	"github.com/gofiber/fiber/v2"
	"github.com/gofiber/fiber/v2/middleware/basicauth"
)

func main() {
	app := fiber.New()

	authMiddleware := basicauth.New(basicauth.Config{
		Users: map[string]string{
			"admin": "SuperSecret123",
		},
	})

	// Protected route: Basic Auth is enforced before the handler runs.
	app.Get("/admin/data", authMiddleware, func(c *fiber.Ctx) error {
		return c.SendString("Protected data")
	})

	log.Fatal(app.Listen(":3000"))
}
The second example extends this approach to an LLM endpoint, ensuring that the same authentication check is applied before the request reaches any external service or model integration.
// llm_handler.go
package main

import (
	"log"
	"net/http"

	"github.com/gofiber/fiber/v2"
	"github.com/gofiber/fiber/v2/middleware/basicauth"
)

func configureLlmRoute(app *fiber.App, authMiddleware fiber.Handler) {
	// The same auth middleware guards the LLM route; no request reaches
	// the model integration without valid credentials.
	app.Post("/chat/completions", authMiddleware, func(c *fiber.Ctx) error {
		var req struct {
			Prompt string `json:"prompt"`
		}
		if err := c.BodyParser(&req); err != nil {
			return c.Status(http.StatusBadRequest).SendString("Invalid request")
		}
		// Integrate with the LLM provider here.
		// Ensure the prompt does not contain sensitive data before sending.
		return c.JSON(fiber.Map{"response": "processed"})
	})
}

func main() {
	app := fiber.New()

	auth := basicauth.New(basicauth.Config{
		Users: map[string]string{
			"analyst": "CorrectHorseBatteryStaple",
		},
	})

	configureLlmRoute(app, auth)
	log.Fatal(app.Listen(":3000"))
}
For defense in depth, combine these measures with middleBrick’s capabilities. Use the CLI to run scans such as middlebrick scan https://api.example.com and integrate the GitHub Action to fail builds if risk scores drop below your threshold. The MCP Server allows you to scan APIs directly from your IDE, helping you catch missing auth requirements before deployment. Continuous monitoring in the Pro plan can alert you to new authentication gaps as your API evolves.
Related CWEs (LLM Security):
| CWE ID | Name | Severity |
|---|---|---|
| CWE-754 | Improper Check for Unusual or Exceptional Conditions | MEDIUM |