Request Smuggling in Fiber with CockroachDB
Request Smuggling in Fiber with CockroachDB — how this specific combination creates or exposes the vulnerability
Request smuggling occurs when an HTTP proxy and the server behind it disagree about where one request ends and the next begins, allowing an attacker to hide a second request inside the body of a legitimate one. In a Fiber (Go) application that uses CockroachDB as the backend datastore, the risk is not in CockroachDB itself but in how requests are parsed and routed between any front-end proxies or load balancers, the Fiber server, and the database client logic.
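At its core the defense is a framing check. The sketch below is illustrative, assuming a simplified list-of-header-lines representation rather than any real server API (`framingAmbiguous` is a hypothetical helper); it flags the ambiguous combinations that let two parsers disagree about body length:

```go
package main

import (
	"fmt"
	"strings"
)

// framingAmbiguous reports whether a set of raw header lines allows two
// parsers to disagree about the body length. Hypothetical helper for
// illustration; real servers should enforce this at the parser level.
func framingAmbiguous(headers []string) bool {
	var cl, te int
	for _, h := range headers {
		name, _, ok := strings.Cut(h, ":")
		if !ok {
			continue
		}
		switch strings.ToLower(strings.TrimSpace(name)) {
		case "content-length":
			cl++
		case "transfer-encoding":
			te++
		}
	}
	// Both framing headers present, or either duplicated, is ambiguous.
	return (cl > 0 && te > 0) || cl > 1 || te > 1
}

func main() {
	fmt.Println(framingAmbiguous([]string{"Content-Length: 6", "Transfer-Encoding: chunked"})) // true
	fmt.Println(framingAmbiguous([]string{"Content-Length: 6"}))                               // false
}
```

Real deployments should enforce this in the HTTP server or proxy itself rather than in application code, but the predicate is the same.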
When a Fiber service proxies or forwards requests to downstream services or batch jobs that interact with CockroachDB, differences in header parsing or body buffering between the front end and back end can enable smuggling. For example, if the front end honors chunked transfer encoding while the Fiber app reads the body by Content-Length, an attacker can craft a request whose body is interpreted differently by each, causing a second, hidden request to be processed under another user's context or transaction. In a CockroachDB-backed service, this can lead to operations being applied under the wrong authentication context or within an unintended transaction, especially when requests are replayed or when connection pooling reuses sessions.
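A classic CL.TE payload makes the disagreement concrete. The front end trusts Content-Length and forwards everything as one request, while a chunked-aware back end stops at the terminating `0` chunk and treats the trailing bytes as the start of a second request. The paths, host, and header values below are illustrative only:

```
POST /api/users HTTP/1.1
Host: app.internal
Content-Length: 51
Transfer-Encoding: chunked

0

POST /internal/admin HTTP/1.1
X-Smuggled: 1
```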
Consider a Fiber endpoint that accepts a JSON payload and starts a database transaction without properly isolating request context. If request smuggling causes two requests to be merged, the second request may execute within the first request's transaction, leading to data inconsistency or unauthorized data access. CockroachDB runs transactions at serializable isolation, so the database-side transaction boundaries are strict; it is the application-layer transaction handling in Fiber that must ensure each request begins and commits its own transaction scope to avoid cross-request contamination.
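The per-request discipline can be captured in a small helper. This is a sketch under stated assumptions: `txScope`, the `tx` interface, and `fakeTx` are hypothetical names invented for illustration, but the shape matches what each handler should do, begin, run, then commit or roll back within the same request:

```go
package main

import (
	"errors"
	"fmt"
)

// tx is a minimal stand-in for a database transaction handle.
type tx interface {
	Commit() error
	Rollback() error
}

// txScope runs fn inside a transaction that lives exactly as long as one
// request: commit on success, roll back on any error. Hypothetical helper.
func txScope(begin func() (tx, error), fn func(t tx) error) error {
	t, err := begin()
	if err != nil {
		return err
	}
	if err := fn(t); err != nil {
		t.Rollback() // never leave the transaction open for the next request
		return err
	}
	return t.Commit()
}

// fakeTx records outcomes so the pattern can be shown without a database.
type fakeTx struct{ committed, rolledBack bool }

func (f *fakeTx) Commit() error   { f.committed = true; return nil }
func (f *fakeTx) Rollback() error { f.rolledBack = true; return nil }

var errBoom = errors.New("boom")

func main() {
	good := &fakeTx{}
	_ = txScope(func() (tx, error) { return good, nil }, func(tx) error { return nil })
	fmt.Println(good.committed, good.rolledBack) // true false

	bad := &fakeTx{}
	_ = txScope(func() (tx, error) { return bad, nil }, func(tx) error { return errBoom })
	fmt.Println(bad.committed, bad.rolledBack) // false true
}
```

Because the transaction handle never escapes the closure, it cannot leak into another request's handler or middleware.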
Another angle involves smuggling via the Transfer-Encoding and Content-Length headers themselves. If Fiber routes requests to a CockroachDB-driven worker or an internal API based on header values without canonicalization, an attacker can smuggle a request intended for a read-only endpoint into a write operation. This is particularly risky when the write path issues DML against CockroachDB tables without verifying the origin or integrity of the command.
Because middleBrick scans the unauthenticated attack surface and tests input validation and rate limiting, it can surface discrepancies in how requests are handled before they reach CockroachDB. Findings often highlight missing strict header normalization and insufficient separation of request context, both critical when each database operation must be reliably attributed to a single, authenticated source.
CockroachDB-Specific Remediation in Fiber — concrete code fixes
To mitigate request smuggling in a Fiber application backed by CockroachDB, enforce strict request parsing, isolate database transactions per request, and avoid forwarding or reusing low-level HTTP constructs that can be misinterpreted. The following examples illustrate secure patterns.
1. Enforce strict Content-Length and Transfer-Encoding handling
Ensure that the Fiber server rejects requests that use both Content-Length and Transfer-Encoding. This prevents an attacker from smuggling a request via chunked encoding.
package main

import (
	"context"
	"log"
	"os"

	"github.com/gofiber/fiber/v2"
	"github.com/jackc/pgx/v5/pgxpool"
)

func main() {
	app := fiber.New()

	// Reject requests that carry both framing headers; this ambiguity
	// is the basis of classic CL.TE/TE.CL smuggling.
	app.Use(func(c *fiber.Ctx) error {
		if c.Get("Transfer-Encoding") != "" && c.Get("Content-Length") != "" {
			return c.Status(fiber.StatusBadRequest).SendString("Invalid headers")
		}
		return c.Next()
	})

	pool, err := pgxpool.New(context.Background(), os.Getenv("COCKROACHDB_URL"))
	if err != nil {
		log.Fatal(err)
	}

	app.Post("/users", func(c *fiber.Ctx) error {
		var body struct {
			Name  string `json:"name"`
			Email string `json:"email"`
		}
		if err := c.BodyParser(&body); err != nil {
			return c.Status(fiber.StatusBadRequest).SendString("Invalid body")
		}

		// One transaction per request: begin, insert, commit or roll back.
		tx, err := pool.Begin(c.Context())
		if err != nil {
			return c.Status(fiber.StatusInternalServerError).SendString("Transaction failed")
		}
		defer tx.Rollback(c.Context()) // no-op after a successful commit

		var id string
		if err := tx.QueryRow(c.Context(),
			"INSERT INTO users(name, email) VALUES($1, $2) RETURNING id",
			body.Name, body.Email,
		).Scan(&id); err != nil {
			return c.Status(fiber.StatusInternalServerError).SendString("Transaction failed")
		}
		if err := tx.Commit(c.Context()); err != nil {
			return c.Status(fiber.StatusInternalServerError).SendString("Transaction failed")
		}
		return c.Status(fiber.StatusCreated).JSON(fiber.Map{"id": id})
	})

	log.Fatal(app.Listen(":3000"))
}
2. Isolate database transactions per request in Fiber
In Fiber, always begin a new transaction at the start of a request and commit or roll back before the response is sent. Do not share connections or transaction state across handlers or middleware.
app.Post("/transfer", func(c *fiber.Ctx) error {
	var body struct {
		From   string `json:"from"`
		To     string `json:"to"`
		Amount int64  `json:"amount"`
	}
	if err := c.BodyParser(&body); err != nil {
		return c.Status(fiber.StatusBadRequest).SendString("Invalid body")
	}

	// A fresh transaction for this request only; never shared across handlers.
	tx, err := pool.Begin(c.Context())
	if err != nil {
		return c.Status(fiber.StatusInternalServerError).SendString("Transaction failed")
	}
	defer tx.Rollback(c.Context()) // no-op after a successful commit

	// Lock the source row so concurrent transfers serialize on it.
	var balance int64
	if err := tx.QueryRow(c.Context(),
		"SELECT balance FROM accounts WHERE id = $1 FOR UPDATE",
		body.From,
	).Scan(&balance); err != nil {
		return c.Status(fiber.StatusBadRequest).JSON(fiber.Map{"error": err.Error()})
	}
	if balance < body.Amount {
		return c.Status(fiber.StatusBadRequest).JSON(fiber.Map{"error": "Insufficient funds"})
	}

	if _, err := tx.Exec(c.Context(),
		"UPDATE accounts SET balance = balance - $1 WHERE id = $2",
		body.Amount, body.From,
	); err != nil {
		return c.Status(fiber.StatusBadRequest).JSON(fiber.Map{"error": err.Error()})
	}
	if _, err := tx.Exec(c.Context(),
		"UPDATE accounts SET balance = balance + $1 WHERE id = $2",
		body.Amount, body.To,
	); err != nil {
		return c.Status(fiber.StatusBadRequest).JSON(fiber.Map{"error": err.Error()})
	}
	if err := tx.Commit(c.Context()); err != nil {
		return c.Status(fiber.StatusBadRequest).JSON(fiber.Map{"error": err.Error()})
	}
	return c.JSON(fiber.Map{"status": "ok"})
})
3. Validate and canonicalize headers before routing
Normalize header names and values to ensure consistent interpretation by both Fiber and any upstream proxies. Reject requests with ambiguous or duplicate headers that could be exploited for smuggling.
// Reject ambiguous framing before any routing decision is made.
// Requires "strings" in the import list.
app.Use(func(c *fiber.Ctx) error {
	counts := map[string]int{}
	c.Request().Header.VisitAll(func(key, _ []byte) {
		counts[strings.ToLower(string(key))]++
	})
	// Duplicate framing headers are a classic smuggling vector.
	if counts["content-length"] > 1 || counts["transfer-encoding"] > 1 {
		return c.Status(fiber.StatusBadRequest).SendString("Ambiguous headers")
	}
	if counts["transfer-encoding"] > 0 && counts["content-length"] > 0 {
		return c.Status(fiber.StatusBadRequest).SendString("Smuggling attempt detected")
	}
	return c.Next()
})
4. Use prepared statements and strict input validation
Prevent SQL injection and malformed queries that could be leveraged alongside smuggling to manipulate transaction boundaries. Use parameterized queries exclusively.
// Parameters travel separately from the SQL text, so user input can
// never alter the statement's structure.
row := pool.QueryRow(c.Context(),
	"SELECT * FROM users WHERE id = $1 AND tenant_id = $2",
	userID, tenantID,
)
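Parameterization fixes the query's shape; the values themselves still deserve validation before they reach the database. A minimal sketch, where the rules and the `validateUser` helper are illustrative rather than drawn from any library:

```go
package main

import (
	"fmt"
	"strings"
)

// validateUser applies strict allow-list checks before any query runs.
// Illustrative rules; real services should mirror their schema constraints.
func validateUser(name, email string) error {
	if name == "" || len(name) > 64 {
		return fmt.Errorf("name must be 1-64 characters")
	}
	at := strings.Index(email, "@")
	if at <= 0 || at == len(email)-1 || len(email) > 254 {
		return fmt.Errorf("email is malformed")
	}
	return nil
}

func main() {
	fmt.Println(validateUser("Ada", "ada@example.com")) // <nil>
	fmt.Println(validateUser("", "ada@example.com"))    // name must be 1-64 characters
}
```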
5. Enforce per-request connection acquisition and timeouts
Always acquire a fresh connection from the pool for each request and set reasonable statement and idle timeouts to reduce the window for cross-request contamination.
// Requires context, log, os, time, and pgxpool in the import list.
cfg, err := pgxpool.ParseConfig(os.Getenv("COCKROACHDB_URL"))
if err != nil {
	log.Fatal(err)
}
cfg.MaxConns = 20                               // cap pool size
cfg.MaxConnIdleTime = 30 * time.Second          // recycle idle connections
cfg.ConnConfig.ConnectTimeout = 2 * time.Second // fail fast on connect
pool, err := pgxpool.NewWithConfig(context.Background(), cfg)
if err != nil {
	log.Fatal(err)
}