Heap Overflow in Axum
How Heap Overflow Manifests in Axum
Heap overflow vulnerabilities in Axum applications typically occur when handling dynamic request data that is stored in heap-allocated buffers. Unlike stack-based overflows, heap overflows happen when data exceeds the bounds of a dynamically allocated buffer, corrupting adjacent heap metadata or data. In safe Rust, bounds checks turn most such bugs into unchecked heap growth and memory exhaustion rather than out-of-bounds writes, but unsafe blocks and FFI boundaries can still corrupt adjacent allocations.
In Axum, heap overflows commonly manifest through:
- Unbounded deserialization of request bodies into Vec or String buffers
- Dynamic memory allocation for JSON payloads without size validation
- Buffer management in custom middleware that processes multipart form data
- Recursive parsing of nested JSON structures without depth limits
- Memory pooling implementations that don't validate allocation sizes
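All of these patterns share one root cause: an allocation whose size comes from attacker-controlled input. A minimal std-only sketch of that hazard and its guard (the function names and the 1 MiB cap are illustrative, not part of Axum):

```rust
// Illustrative cap; real limits depend on the endpoint's needs.
const MAX_ALLOC: usize = 1024 * 1024; // 1 MiB

// Naive: reserves whatever the client asked for.
fn allocate_unchecked(requested: usize) -> Vec<u8> {
    Vec::with_capacity(requested)
}

// Guarded: refuses allocations above the cap instead of trusting the input.
fn allocate_checked(requested: usize) -> Result<Vec<u8>, &'static str> {
    if requested > MAX_ALLOC {
        return Err("requested allocation exceeds limit");
    }
    Ok(Vec::with_capacity(requested))
}

fn main() {
    assert!(allocate_checked(512).is_ok());
    assert!(allocate_checked(10 * 1024 * 1024).is_err());
    // allocate_unchecked(10 * 1024 * 1024) would happily reserve 10 MiB.
    let _ = allocate_unchecked(512);
}
```

Every remediation later in this article is some version of `allocate_checked`: a hard cap applied before memory is committed.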
Consider this vulnerable Axum handler:
use axum::extract::Json;
use serde::Deserialize;

#[derive(Deserialize)]
struct Payload {
    data: String,
    nested: Option<Nested>,
}

#[derive(Deserialize)]
struct Nested {
    items: Vec<NestedItem>,
}

#[derive(Deserialize)]
struct NestedItem {
    value: String,
    nested: Option<Nested>, // self-referential, so nesting depth is unbounded
}

async fn handle_request(Json(payload): Json<Payload>) {
    // No validation of payload size or depth:
    // the traversal below recurses as deep as the client's JSON goes.
    process_nested(payload.nested);
}

fn process_nested(nested: Option<Nested>) {
    if let Some(nested) = nested {
        for item in nested.items {
            process_nested(item.nested);
        }
    }
}

This code is vulnerable because serde's deserialization doesn't enforce size limits by default (serde_json does cap recursion depth at 128, but the total allocation size is unbounded). An attacker can send a JSON payload with millions of nested items, causing excessive heap allocation and memory exhaustion.
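To see why depth matters: each level of nesting costs the attacker only a few dozen bytes of JSON. A std-only sketch that generates such a payload (assuming, as the recursive traversal above implies, that NestedItem carries an optional nested field):

```rust
// Builds a JSON string nested `depth` levels deep, in the shape the
// Payload/Nested/NestedItem structs deserialize. Roughly 35 bytes per level.
fn build_nested_payload(depth: usize) -> String {
    let mut s = String::from(r#"{"data":"A","nested":"#);
    for _ in 0..depth {
        s.push_str(r#"{"items":[{"value":"A","nested":"#);
    }
    s.push_str("null");
    for _ in 0..depth {
        s.push_str("}]}");
    }
    s.push('}');
    s
}

fn main() {
    let payload = build_nested_payload(1000);
    // A payload well under 40 KB forces a thousand levels of recursion
    // in an unguarded deserializer or visitor.
    assert!(payload.len() < 40_000);
    assert_eq!(payload.matches("\"items\"").count(), 1000);
}
```

This linear cost to the attacker versus recursive cost to the server is exactly what a depth limit neutralizes.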
Another common pattern involves multipart form data:
use axum::extract::Multipart;

async fn upload_file(mut multipart: Multipart) {
    while let Some(field) = multipart.next_field().await.unwrap() {
        let data = field.bytes().await.unwrap(); // no size validation
        // Process the file data without checking its size first
        process_file_data(&data);
    }
}

fn process_file_data(data: &[u8]) {
    // A large upload drives an equally large allocation here
    let mut buffer = Vec::with_capacity(data.len());
    buffer.extend_from_slice(data);
    // No bound on how large `buffer` may grow
}
The vulnerability here is the lack of size validation before memory is allocated for the uploaded file data. (Recent axum versions apply a 2 MB DefaultBodyLimit unless it is raised or disabled, which softens but does not eliminate the risk.)
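The general fix is to bound accumulation while the data streams in, rather than after it has all been buffered. A std-only sketch, with an ordinary iterator standing in for multipart chunks (the names and the 10 MiB cap are illustrative):

```rust
const MAX_UPLOAD: usize = 10 * 1024 * 1024; // illustrative 10 MiB cap

// Accumulates chunks into a buffer, rejecting as soon as the running
// total would exceed the cap -- before the oversized data is ever held.
fn collect_bounded<I>(chunks: I, max: usize) -> Result<Vec<u8>, &'static str>
where
    I: IntoIterator<Item = Vec<u8>>,
{
    let mut buf = Vec::new();
    for chunk in chunks {
        if buf.len() + chunk.len() > max {
            return Err("upload exceeds size limit");
        }
        buf.extend_from_slice(&chunk);
    }
    Ok(buf)
}

fn main() {
    let small = vec![vec![0u8; 512]; 4]; // 2 KiB total
    assert_eq!(collect_bounded(small, MAX_UPLOAD).unwrap().len(), 2048);

    let big = vec![vec![0u8; 1024 * 1024]; 11]; // 11 MiB total
    assert!(collect_bounded(big, MAX_UPLOAD).is_err());
}
```

The remediation section below applies this same pattern to axum's real multipart field API.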
Axum-Specific Detection
Detecting heap overflow vulnerabilities in Axum requires both static analysis and runtime monitoring. middleBrick's API security scanner can identify these issues through several mechanisms:
Runtime Payload Analysis: middleBrick sends crafted payloads to test for heap overflow vulnerabilities. For JSON endpoints, it sends:
// Test for unbounded deserialization
{
  "data": "A",
  "nested": {
    "items": [
      { "value": "A", "nested": null },
      { "value": "A", "nested": null }
      // ... millions of nested items
    ]
  }
}

Multipart Size Testing: The scanner tests multipart endpoints with oversized files to check for proper size validation:
curl -X POST http://localhost:3000/upload \
  -F "file=@large_file.bin" \
  -F 'metadata={"name":"test"}'

middleBrick monitors the server's memory usage and response times. If the server crashes, becomes unresponsive, or shows abnormal memory growth, it flags potential heap overflow vulnerabilities.
Middleware Inspection: middleBrick analyzes your Axum middleware stack to identify custom buffer management code. It looks for patterns like:
// Vulnerable pattern detected
let mut buffer = Vec::new();
buffer.reserve(size); // No validation of 'size'
Configuration Analysis: The scanner checks your Axum configuration for missing size limits:
// Missing limits detected
let app = axum::Router::new()
    .route("/api/*rest", axum::routing::get(handler))
    .with_state(AppState {});
// No DefaultBodyLimit or other size-limit configuration applied
middleBrick provides a detailed report showing:
- Which endpoints are vulnerable to heap overflow
- Specific code patterns that need fixing
- Recommended size limits based on your application's needs
- Compliance mapping to the OWASP API Security Top 10 (API4:2019 Lack of Resources & Rate Limiting, renamed API4:2023 Unrestricted Resource Consumption)
Axum-Specific Remediation
Fixing heap overflow vulnerabilities in Axum requires implementing proper bounds checking and size validation. Here are Axum-specific remediation strategies:
1. Configure Body Size Limits: Axum has no per-extractor JsonConfig; the supported mechanism is the DefaultBodyLimit layer, which caps how many bytes any body-consuming extractor (including Json) will read:

use axum::extract::DefaultBodyLimit;
use axum::{routing::post, Json, Router};

async fn handler(Json(payload): Json<Payload>) {
    // Bodies over 1 MiB are rejected with 413 Payload Too Large
    // before this handler ever runs
}

// In your main function:
let app = Router::new()
    .route("/api/submit", post(handler))
    .layer(DefaultBodyLimit::max(1024 * 1024)); // 1 MiB limit
2. Implement Recursive Depth Limits: Add depth checking to recursive deserialization:
use serde::Deserialize;
#[derive(Deserialize)]
struct Payload {
data: String,
#[serde(default)]
nested: Option<Nested>,
}
#[derive(Deserialize)]
struct Nested {
items: Vec<NestedItem>,
}
#[derive(Deserialize)]
struct NestedItem {
value: String,
#[serde(default)]
nested: Option<Nested>,
}
use axum::http::StatusCode;
use axum::Json;
use serde_json::{json, Value};

async fn handle_request(
    Json(payload): Json<Payload>,
) -> Result<Json<Value>, (StatusCode, Json<Value>)> {
    validate_depth(&payload.nested, 0, 10)?; // max depth of 10
    Ok(Json(json!({ "status": "success" })))
}

fn validate_depth(
    nested: &Option<Nested>,
    current_depth: usize,
    max_depth: usize,
) -> Result<(), (StatusCode, Json<Value>)> {
    if let Some(nested) = nested {
        if current_depth + 1 > max_depth {
            return Err((
                StatusCode::PAYLOAD_TOO_LARGE,
                Json(json!({ "error": "Payload depth exceeded" })),
            ));
        }
        for item in &nested.items {
            validate_depth(&item.nested, current_depth + 1, max_depth)?;
        }
    }
    Ok(())
}
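The recursion guard needs no web stack to verify. A std-only version of the same check, with plain structs mirroring the shapes above (illustrative, separate from the handler code):

```rust
// Mirrors the Nested/NestedItem shapes without serde.
struct Nested { items: Vec<NestedItem> }
struct NestedItem { nested: Option<Nested> }

// Aborts as soon as the traversal crosses `max` levels, so the check
// itself never recurses deeper than the limit it enforces.
fn validate_depth(nested: &Option<Nested>, current: usize, max: usize) -> Result<(), &'static str> {
    if let Some(n) = nested {
        if current + 1 > max {
            return Err("payload depth exceeded");
        }
        for item in &n.items {
            validate_depth(&item.nested, current + 1, max)?;
        }
    }
    Ok(())
}

fn main() {
    // Build a chain three levels deep.
    let mut chain = None;
    for _ in 0..3 {
        chain = Some(Nested { items: vec![NestedItem { nested: chain }] });
    }
    assert!(validate_depth(&chain, 0, 10).is_ok());
    assert!(validate_depth(&chain, 0, 2).is_err());
}
```

Early abort is the important property: a validator that measures the full depth first would itself recurse as deep as the attacker's payload.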
3. Validate Multipart Sizes: Axum's multipart fields don't expose a reliable size hint up front, so stream each field chunk by chunk and enforce the cap as bytes arrive:

use axum::extract::Multipart;
use axum::http::StatusCode;

const MAX_FILE_SIZE: usize = 10 * 1024 * 1024; // 10 MB limit

async fn upload_file(mut multipart: Multipart) -> Result<&'static str, StatusCode> {
    while let Some(mut field) = multipart
        .next_field()
        .await
        .map_err(|_| StatusCode::BAD_REQUEST)?
    {
        // Accumulate chunks, rejecting as soon as the running total
        // exceeds the cap, instead of buffering the whole field first
        let mut data = Vec::new();
        while let Some(chunk) = field.chunk().await.map_err(|_| StatusCode::BAD_REQUEST)? {
            if data.len() + chunk.len() > MAX_FILE_SIZE {
                return Err(StatusCode::PAYLOAD_TOO_LARGE);
            }
            data.extend_from_slice(&chunk);
        }
        // Process `data` (guaranteed at most MAX_FILE_SIZE bytes)
    }
    Ok("success")
}
4. Use Safe Buffer Operations: Replace unsafe buffer operations with checked alternatives:
// Vulnerable: both the allocation and the copy track an attacker-controlled size
let mut buffer = Vec::with_capacity(size);
buffer.extend_from_slice(&data);

// Safe: cap both the allocation and the number of bytes copied
const MAX_SAFE_SIZE: usize = 1024 * 1024; // pick a limit that fits the endpoint
let take = data.len().min(MAX_SAFE_SIZE);
let mut buffer = Vec::with_capacity(take);
buffer.extend_from_slice(&data[..take]);
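A runnable version of the capped copy (MAX_SAFE_SIZE here is an illustrative constant):

```rust
const MAX_SAFE_SIZE: usize = 64 * 1024; // illustrative 64 KiB cap

// Copies at most MAX_SAFE_SIZE bytes, so a hostile input length can
// drive neither a huge allocation nor an oversized copy.
fn copy_bounded(data: &[u8]) -> Vec<u8> {
    let take = data.len().min(MAX_SAFE_SIZE);
    let mut buffer = Vec::with_capacity(take);
    buffer.extend_from_slice(&data[..take]);
    buffer
}

fn main() {
    let small = vec![1u8; 100];
    assert_eq!(copy_bounded(&small).len(), 100);

    let big = vec![2u8; 128 * 1024]; // 128 KiB input
    assert_eq!(copy_bounded(&big).len(), MAX_SAFE_SIZE);
}
```

Note that silently truncating is rarely what an API should do; in practice, rejecting oversized input with an error (as in the multipart example above) is usually the better design, and the truncating form is a last-resort guard.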
5. Implement Rate Limiting: Add rate limiting to prevent repeated large payload attacks:
use std::time::Duration;
use axum::{error_handling::HandleErrorLayer, http::StatusCode, routing::post, Router};
use tower::{buffer::BufferLayer, limit::RateLimitLayer, BoxError, ServiceBuilder};

// RateLimitLayer lives in tower, not tower-http. Its service is neither
// Clone nor infallible, so it is wrapped in a buffer plus an error handler.
// In main:
let app = Router::new()
    .route("/api/submit", post(handler))
    .layer(
        ServiceBuilder::new()
            .layer(HandleErrorLayer::new(|_: BoxError| async {
                StatusCode::TOO_MANY_REQUESTS
            }))
            .layer(BufferLayer::new(1024))
            .layer(RateLimitLayer::new(100, Duration::from_secs(60))), // 100 req/min
    );
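The limiting logic itself is simple enough to sketch with std only; a fixed-window counter, illustrative rather than production-grade (real deployments should prefer tower's layer above or a dedicated crate):

```rust
use std::time::{Duration, Instant};

// Fixed-window rate limiter: at most `max` requests per `window`.
struct RateLimiter {
    max: u32,
    window: Duration,
    window_start: Instant,
    count: u32,
}

impl RateLimiter {
    fn new(max: u32, window: Duration) -> Self {
        Self { max, window, window_start: Instant::now(), count: 0 }
    }

    // Returns true if the request fits in the current window.
    fn allow(&mut self) -> bool {
        let now = Instant::now();
        if now.duration_since(self.window_start) >= self.window {
            // New window: reset the counter.
            self.window_start = now;
            self.count = 0;
        }
        if self.count < self.max {
            self.count += 1;
            true
        } else {
            false
        }
    }
}

fn main() {
    let mut limiter = RateLimiter::new(100, Duration::from_secs(60));
    let allowed = (0..150).filter(|_| limiter.allow()).count();
    assert_eq!(allowed, 100); // requests 101..150 are rejected
}
```

Fixed windows admit bursts at window boundaries; token-bucket or sliding-window variants smooth that out at the cost of a little more state.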
These remediation strategies, combined with middleBrick's continuous monitoring, ensure your Axum application is protected against heap overflow attacks while maintaining performance and usability.