Race Condition in Fiber with API Keys
A race condition in a Fiber API can occur when multiple concurrent requests rely on shared state related to API key validation, and the outcome depends on the non-deterministic timing of those requests. In this context, the vulnerability arises if key verification and subsequent state changes are not performed atomically. For example, consider a rate-limiting or quota-check flow that first reads a remaining-allowance value, then decrements it only if the allowance is sufficient. Between the read and the write, another request can see the same stale allowance and also pass validation, causing total usage to exceed the intended limit. This is a classic time-of-check to time-of-use (TOCTOU) race enabled by shared, mutable key state.
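To make the race concrete, here is a minimal sketch of the vulnerable pattern; the CheckQuota middleware and the remaining map are illustrative, not taken from any real codebase:
import "github.com/gofiber/fiber/v2"

// Vulnerable: the check and the decrement are separate, unsynchronized steps.
// Two concurrent requests can both observe remaining["abc123"] == 1, both pass
// the check, and both decrement, overshooting the quota. The unsynchronized
// map write is also a data race in its own right.
var remaining = map[string]int{"abc123": 1}

func CheckQuota(c *fiber.Ctx) error {
    k := c.Get("X-API-Key")
    if remaining[k] > 0 { // time of check
        remaining[k]--    // time of use: another request may interleave in between
        return c.Next()
    }
    return c.SendStatus(fiber.StatusTooManyRequests)
}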
With API keys in Fiber, the risk often maps to the BFLA/Privilege Escalation and Rate Limiting categories among middleBrick’s 12 parallel security checks. If keys are stored or cached in a global in-memory map without synchronization, concurrent requests can interfere with one another. A compromised or shared key might be validated by one request while another concurrently modifies its associated metadata (for example, rotating or revoking the key), leading to inconsistent authorization decisions. Similarly, if your application uses key-based access to control per-user resources (e.g., tenant IDs derived from the key), interleaved operations can let one request see or modify another request’s data, effectively bypassing tenant isolation (a BOLA/IDOR-like outcome).
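As a hedged sketch of that second failure mode (keyStore, RevokeKey, and Authorize below are hypothetical names), recall that Go maps are not safe for concurrent use: an unsynchronized write during revocation can race with in-flight validation reads and even crash the process with a concurrent map read/write fault:
import "github.com/gofiber/fiber/v2"

// Hypothetical unsafe pattern: a global key store mutated while requests read it.
var keyStore = map[string]string{"abc123": "t1"}

// RevokeKey might run on an admin endpoint or a background rotation job.
func RevokeKey(k string) {
    delete(keyStore, k) // unsynchronized write races with the reads below
}

func Authorize(c *fiber.Ctx) error {
    tenant, ok := keyStore[c.Get("X-API-Key")] // unsynchronized read
    if !ok {
        return c.SendStatus(fiber.StatusUnauthorized)
    }
    c.Locals("tenantID", tenant)
    return c.Next()
}
Guarding the store with a sync.RWMutex, or using sync.Map, removes the data race, though the authorization-consistency question (should in-flight requests see the old or the new key state?) still needs an explicit policy.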
Real-world attack patterns that mirror this include scenarios where an attacker issues rapid, parallel requests with the same API key to exploit non-atomic quota checks, or where key metadata is updated (e.g., suspension or scope change) while in-flight requests are still being authorized. These patterns are relevant to findings such as BOLA/IDOR and Rate Limiting in the context of unauthenticated attack surface testing. Because middleBrick tests these checks in parallel and reports per-category breakdowns, such race conditions can be surfaced as findings with severity tied to the potential for unauthorized access or resource exhaustion.
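A minimal reproduction sketch, assuming a placeholder URL and key: it fires a burst of parallel requests with one API key, and against a non-atomic quota check more requests can succeed than the quota allows:
package main

import (
    "fmt"
    "net/http"
    "sync"
)

func main() {
    var wg sync.WaitGroup
    var mu sync.Mutex
    succeeded := 0
    for i := 0; i < 50; i++ { // burst of parallel requests with the same key
        wg.Add(1)
        go func() {
            defer wg.Done()
            req, _ := http.NewRequest("GET", "https://api.example.com/resource", nil)
            req.Header.Set("X-API-Key", "abc123")
            resp, err := http.DefaultClient.Do(req)
            if err != nil {
                return
            }
            defer resp.Body.Close()
            if resp.StatusCode == http.StatusOK {
                mu.Lock()
                succeeded++
                mu.Unlock()
            }
        }()
    }
    wg.Wait()
    fmt.Println("successful requests:", succeeded) // may exceed the configured quota
}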
To detect this during a scan, middleBrick’s 12 security checks run in parallel and can identify inconsistent behaviors across concurrent, unauthenticated requests. Note that middleBrick detects and reports these issues and provides remediation guidance; it does not fix or block the behavior. The scan typically completes in 5–15 seconds and can map findings to frameworks such as OWASP API Top 10 and SOC2.
Using the CLI, you can scan from the terminal with middlebrick scan <url>, and with the GitHub Action you can add API security checks to your CI/CD pipeline to fail builds if risk scores drop below your chosen threshold. For continuous monitoring, the Pro plan supports configurable scan schedules and alerts.
API Key-Specific Remediation in Fiber
Remediation centers on making key validation and any related state updates atomic, and on isolating per-request state so that authorization decisions never depend on unsynchronized shared mutable data. In Go Fiber, prefer a synchronized store, or a context-bound validation that does not rely on global counters for authorization decisions. Below are concrete, idiomatic examples that reduce race conditions when handling API keys.
Example 1: Mutex-protected key validation and quota decrement
Use a mutex to ensure that read-modify-write cycles on shared quota state are atomic:
import (
    "sync"

    "github.com/gofiber/fiber/v2"
)

// KeyState holds per-key quota state; the mutex guards Remaining.
type KeyState struct {
    mu        sync.Mutex
    Remaining int
}

// keys is populated once at startup and never mutated afterwards, so
// concurrent reads of the map itself are safe; each KeyState serializes
// its own read-modify-write cycle.
var keys = map[string]*KeyState{
    "abc123": {Remaining: 100},
    "def456": {Remaining: 200},
}

func ValidateKey(c *fiber.Ctx) error {
    k := c.Get("X-API-Key")
    state, ok := keys[k]
    if !ok {
        return c.Status(fiber.StatusUnauthorized).JSON(fiber.Map{"error": "invalid key"})
    }
    state.mu.Lock()
    if state.Remaining <= 0 {
        state.mu.Unlock()
        return c.Status(fiber.StatusTooManyRequests).JSON(fiber.Map{"error": "quota exhausted"})
    }
    state.Remaining-- // check and decrement happen under the same lock
    state.mu.Unlock() // release before c.Next() so the handler chain is not serialized
    return c.Next()
}
This ensures that concurrent requests cannot overshoot the quota due to interleaved read/write operations; releasing the lock before c.Next() also keeps the critical section short, so the rest of the handler chain is not serialized per key.
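Wiring the middleware is a one-liner; a minimal sketch (the route and handler body are illustrative):
app := fiber.New()
app.Use(ValidateKey) // every request passes the atomic quota check first
app.Get("/resource", func(c *fiber.Ctx) error {
    return c.SendString("ok")
})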
Example 2: Context-local validation with per-request key metadata
If mutable global state is undesirable, bind key metadata to the request (via Fiber’s c.Locals) and perform validation as a single read against an immutable lookup table:
import "github.com/gofiber/fiber/v2"

// KeyMetadata is immutable per key; requests only ever read it.
type KeyMetadata struct {
    TenantID string
    Scopes   []string
}

// keyMetadata is a read-only lookup table (populated at startup and never
// mutated), so concurrent reads are safe without locking.
var keyMetadata = map[string]KeyMetadata{
    "abc123": {TenantID: "t1", Scopes: []string{"read"}},
    "def456": {TenantID: "t2", Scopes: []string{"read", "write"}},
}

// RequireScope returns middleware that authorizes the key for a scope and
// binds the tenant ID to the request via c.Locals.
func RequireScope(required string) fiber.Handler {
    return func(c *fiber.Ctx) error {
        k := c.Get("X-API-Key")
        meta, ok := keyMetadata[k]
        if !ok {
            return c.Status(fiber.StatusUnauthorized).JSON(fiber.Map{"error": "invalid key"})
        }
        for _, s := range meta.Scopes {
            if s == required {
                // Tenant isolation: downstream handlers read the tenant from
                // request-local storage, not from shared mutable state.
                c.Locals("tenantID", meta.TenantID)
                return c.Next()
            }
        }
        return c.Status(fiber.StatusForbidden).JSON(fiber.Map{"error": "insufficient scope"})
    }
}
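A minimal wiring sketch (the route path and handler body are illustrative):
app := fiber.New()
app.Get("/documents", RequireScope("read"), func(c *fiber.Ctx) error {
    tenantID := c.Locals("tenantID").(string)
    // Scope every query by tenantID so one key can never reach another tenant's data.
    return c.JSON(fiber.Map{"tenant": tenantID})
})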
This approach avoids shared counters and ensures each request validates and scopes independently, reducing timing-dependent interference. Combine this with secure key storage and rotation policies to further lower risk.