Insufficient Logging on AWS
How Insufficient Logging Manifests in AWS
Insufficient logging in AWS applications creates blind spots that attackers exploit to maintain persistent access and evade detection. In AWS Lambda functions, developers often omit logging for failed authentication attempts, making credential stuffing attacks undetectable. When an attacker submits hundreds of invalid API keys to your Lambda function, the absence of structured logs means you will never know an attack is underway.
```python
# Vulnerable: No logging for failed auth
import json

def lambda_handler(event, context):
    try:
        # Assume some auth logic here
        if not validate_api_key(event['headers']['x-api-key']):
            return {"statusCode": 401, "body": json.dumps({"error": "Unauthorized"})}
        # Processing logic
    except Exception:
        # Generic error, no logging
        return {"statusCode": 500, "body": json.dumps({"error": "Internal Server Error"})}
```

The above pattern is common in AWS Lambda functions where developers prioritize minimal cold start times over security observability. Without logging failed authentication attempts, you cannot correlate attack patterns or implement rate limiting based on authentication failures.
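Once authentication failures are emitted as structured JSON log lines (as in the remediation section below), detection becomes a one-time setup: a CloudWatch Logs metric filter can count them and an alarm can flag a spike. A hedged sketch, assuming the failed-auth log line is a JSON object with a `message` field; the filter, metric, alarm names, and the threshold are all illustrative:

```python
def failed_auth_filter() -> dict:
    """Metric filter counting JSON log lines that record a failed
    authentication (assumes a {"message": "Failed authentication", ...} shape)."""
    return {
        'filterName': 'failed-authentication',
        'filterPattern': '{ $.message = "Failed authentication" }',
        'metricTransformations': [{
            'metricName': 'FailedAuthCount',
            'metricNamespace': 'Security',
            'metricValue': '1',
        }],
    }

def install_failed_auth_alarm(log_group: str) -> None:
    import boto3  # deferred so the filter definition stays testable offline
    boto3.client('logs').put_metric_filter(
        logGroupName=log_group, **failed_auth_filter())
    boto3.client('cloudwatch').put_metric_alarm(
        AlarmName='credential-stuffing-suspected',
        MetricName='FailedAuthCount',
        Namespace='Security',
        Statistic='Sum',
        Period=300,
        EvaluationPeriods=1,
        Threshold=50,  # more than 50 failures in 5 minutes -- tune to your traffic
        ComparisonOperator='GreaterThanThreshold',
    )
```

Wire the alarm to an SNS topic via `AlarmActions` if you want paging rather than a dashboard signal.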
Another manifestation occurs in AWS API Gateway integrations where authorization failures at the resource level go unlogged. When an attacker attempts to access restricted resources without proper permissions, the default behavior returns a 403 without any audit trail:
```typescript
// Vulnerable: Missing audit logging for authorization failures
import { APIGatewayProxyHandler } from 'aws-lambda';

export const handler: APIGatewayProxyHandler = async (event) => {
  const userId = event.requestContext.authorizer?.principalId;
  // Check if user has permission to access this resource
  const hasPermission = await checkUserPermission(userId, event.resource);
  if (!hasPermission) {
    // No logging of unauthorized access attempt
    return {
      statusCode: 403,
      body: JSON.stringify({ message: 'Access denied' })
    };
  }
  // Continue processing
  return { statusCode: 200, body: JSON.stringify({ message: 'OK' }) };
};
```

Data exfiltration through AWS services also goes undetected without proper logging. When an attacker successfully extracts sensitive data from DynamoDB or S3 buckets, the absence of detailed access logs means you cannot determine what was stolen or when the breach occurred.
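One way to close that gap is to enable object-level (data event) logging through CloudTrail, which is off by default: without data events, CloudTrail records only management calls, so `GetObject` and `GetItem` reads used for exfiltration leave no trace. A minimal boto3 sketch; the trail name and table ARN are hypothetical placeholders:

```python
# Hypothetical names for illustration -- substitute your own trail and table.
TRAIL_NAME = "security-audit-trail"
TABLE_ARN = "arn:aws:dynamodb:us-east-1:123456789012:table/customers"

def build_data_event_selectors(table_arn: str) -> list:
    """CloudTrail event selectors covering all object-level S3 activity
    and item-level activity on one sensitive DynamoDB table."""
    return [{
        "ReadWriteType": "All",
        "IncludeManagementEvents": True,
        "DataResources": [
            # Every object-level S3 call in the account
            {"Type": "AWS::S3::Object", "Values": ["arn:aws:s3"]},
            # Item-level activity for the named table only
            {"Type": "AWS::DynamoDB::Table", "Values": [table_arn]},
        ],
    }]

def enable_data_event_logging() -> None:
    import boto3  # deferred so the selector builder stays testable offline
    boto3.client("cloudtrail").put_event_selectors(
        TrailName=TRAIL_NAME,
        EventSelectors=build_data_event_selectors(TABLE_ARN),
    )
```

Data events are billed separately from management events, so scoping the DynamoDB selector to specific sensitive tables, as above, keeps costs predictable.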
Business logic abuse represents another critical area. Consider an e-commerce application where users can manipulate order quantities or apply discounts repeatedly. Without logging these business logic operations, attackers can exploit pricing vulnerabilities without leaving any trace:
```typescript
// Vulnerable: No logging of business logic operations
const processOrder = async (orderData) => {
  const order = await createOrder(orderData);
  // No logging of order creation or validation failures
  if (order.discountApplied > 0) {
    // Business logic abuse goes undetected
  }
  return order;
};
```

AWS-Specific Detection
Detecting insufficient logging in AWS environments requires examining both application code and AWS service configurations. middleBrick's AWS-specific scanning identifies logging gaps by analyzing your deployed functions and their configurations.
For Lambda functions, middleBrick examines the execution environment for CloudWatch logging configurations. It identifies functions that lack structured logging for error conditions, authentication failures, and authorization denials. The scanner specifically looks for patterns where exceptions are caught without logging the error details:
```
# middleBrick scan output showing logging deficiencies
$ middlebrick scan https://api.example.com/lambda-endpoint
✓ Authentication checks
✓ Input validation
⚠ Rate limiting
❌ Insufficient logging detected:
  - Lambda function 'processPayment' catches exceptions without logging
  - API Gateway resource '/admin' returns 403 without audit trail
  - DynamoDB access logs disabled for sensitive tables
```

The scanner also analyzes your AWS API Gateway configurations to identify endpoints that lack logging for authorization failures. It checks whether your API Gateway stages have logging enabled and whether the log level captures enough detail for security analysis.
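That stage-level check can be reproduced in a few lines of boto3. A sketch, assuming the response shape of the standard `get_stage` call and treating a missing access-log destination or an execution log level of OFF as a gap (the helper names are illustrative):

```python
def stage_logging_gaps(stage: dict) -> list:
    """Return logging gaps for one API Gateway stage description
    (the dict shape returned by the apigateway get_stage call)."""
    gaps = []
    # accessLogSettings is absent entirely when access logging is off
    if 'accessLogSettings' not in stage:
        gaps.append('access logging disabled')
    # Execution log level lives under the "*/*" method-settings key
    level = stage.get('methodSettings', {}).get('*/*', {}).get('loggingLevel', 'OFF')
    if level == 'OFF':
        gaps.append('execution logging disabled')
    return gaps

def audit_stage_logging(rest_api_id: str, stage_name: str) -> list:
    import boto3  # deferred so the pure check above runs without AWS access
    stage = boto3.client('apigateway').get_stage(
        restApiId=rest_api_id, stageName=stage_name)
    return stage_logging_gaps(stage)
```

The same stage can be remediated with `update_stage` patch operations on `/*/*/logging/loglevel` and `/accessLogSettings/destinationArn` once a CloudWatch log group exists for it.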
For S3 buckets and DynamoDB tables, middleBrick verifies that access logging is enabled and configured to capture read operations on sensitive data. The scanner specifically checks for:
- CloudTrail integration for API-level logging
- S3 server access logging for bucket operations
- DynamoDB detailed monitoring for table access patterns
- Lambda function logging configurations in CloudWatch
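The same checks are easy to run yourself. A hedged sketch of the S3 portion, using only the standard `list_buckets` and `get_bucket_logging` calls (the helper names are made up for illustration):

```python
def s3_logging_enabled(logging_config: dict) -> bool:
    # get_bucket_logging returns {"LoggingEnabled": {...}} only when
    # server access logging is on; the key is absent otherwise.
    return 'LoggingEnabled' in logging_config

def find_unlogged_buckets() -> list:
    import boto3  # deferred: keeps the check above testable offline
    s3 = boto3.client('s3')
    unlogged = []
    for bucket in s3.list_buckets()['Buckets']:
        cfg = s3.get_bucket_logging(Bucket=bucket['Name'])
        if not s3_logging_enabled(cfg):
            unlogged.append(bucket['Name'])
    return unlogged
```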
middleBrick's LLM/AI security module also detects when your AWS applications use AI services without proper logging of system prompts and model outputs. This is critical for identifying prompt injection attacks that might otherwise go unnoticed:
```
# LLM-specific logging detection
$ middlebrick scan https://api.example.com/ai-service
✓ System prompt leakage detection
✓ Active prompt injection testing
⚠ Insufficient logging for AI interactions:
  - No logging of model inputs and outputs
  - Missing audit trail for prompt injection attempts
  - No logging of excessive agency patterns
```

AWS-Specific Remediation
Remediating insufficient logging in AWS applications requires implementing structured logging patterns that integrate with AWS's native services. For Lambda functions, use CloudWatch structured logging with proper error handling and authentication logging:
```python
import hashlib
import json
import logging
import traceback

from aws_xray_sdk.core import patch_all, xray_recorder

patch_all()

logger = logging.getLogger()
logger.setLevel(logging.INFO)

def log_event(level, message, **fields):
    """Emit one JSON log line so CloudWatch Logs Insights can query fields."""
    logger.log(level, json.dumps({'message': message, **fields}))

# validate_api_key and process_request are your application's own helpers.
def lambda_handler(event, context):
    try:
        # Log authentication attempts. Never log raw credentials:
        # a truncated hash is enough to correlate repeated attempts.
        api_key = event.get('headers', {}).get('x-api-key', '')
        key_hash = hashlib.sha256(api_key.encode()).hexdigest()[:12]
        source_ip = event.get('requestContext', {}).get('identity', {}).get('sourceIp')
        log_event(logging.INFO, 'Authentication attempt',
                  api_key_hash=key_hash, source_ip=source_ip)
        if not validate_api_key(api_key):
            log_event(logging.WARNING, 'Failed authentication',
                      api_key_hash=key_hash, source_ip=source_ip,
                      reason='Invalid API key')
            return {
                'statusCode': 401,
                'body': json.dumps({'error': 'Unauthorized'})
            }
        # Process request with X-Ray tracing
        with xray_recorder.in_subsegment('business-logic'):
            result = process_request(event)
        log_event(logging.INFO, 'Successful request',
                  user_id=result['user_id'], operation='process_request')
        return {
            'statusCode': 200,
            'body': json.dumps(result)
        }
    except Exception as e:
        log_event(logging.ERROR, 'Unhandled exception',
                  error=str(e), stack_trace=traceback.format_exc())
        # Return generic error to avoid information disclosure
        return {
            'statusCode': 500,
            'body': json.dumps({'error': 'Internal Server Error'})
        }
```

For API Gateway, enable detailed logging at the API level and configure authorization failure logging:
```typescript
// Log authorization decisions made in the handler
import { APIGatewayProxyHandler } from 'aws-lambda';

// Minimal structured logger: one JSON line per entry for CloudWatch
const logger = {
  info: (message: string, meta: object) =>
    console.log(JSON.stringify({ level: 'info', message, ...meta })),
  warning: (message: string, meta: object) =>
    console.warn(JSON.stringify({ level: 'warning', message, ...meta })),
};

// checkUserPermission is your application's own helper.
export const handler: APIGatewayProxyHandler = async (event) => {
  const userId = event.requestContext.authorizer?.principalId;
  // Log authorization checks
  logger.info('Authorization check', {
    user_id: userId,
    resource: event.resource,
    method: event.httpMethod
  });
  const hasPermission = await checkUserPermission(userId, event.resource);
  if (!hasPermission) {
    logger.warning('Authorization denied', {
      user_id: userId,
      resource: event.resource,
      reason: 'Insufficient permissions'
    });
    return {
      statusCode: 403,
      body: JSON.stringify({ message: 'Access denied' })
    };
  }
  // Continue processing
  return { statusCode: 200, body: JSON.stringify({ message: 'OK' }) };
};
```

Enable comprehensive logging for data stores using AWS's native logging capabilities:
```typescript
import { S3, DynamoDB } from 'aws-sdk';

// Enable S3 server access logging
const enableS3Logging = async (bucketName: string) => {
  const s3 = new S3();
  await s3.putBucketLogging({
    Bucket: bucketName,
    BucketLoggingStatus: {
      LoggingEnabled: {
        TargetBucket: 'logging-bucket',
        TargetPrefix: 's3-access-logs/'
      }
    }
  }).promise();
};

// Enable DynamoDB Streams so every item-level change is captured
const enableDynamoDBStreams = async (tableName: string) => {
  const dynamodb = new DynamoDB();
  await dynamodb.updateTable({
    TableName: tableName,
    StreamSpecification: {
      StreamEnabled: true,
      StreamViewType: 'NEW_AND_OLD_IMAGES'
    }
  }).promise();
};
```

For LLM/AI services, implement comprehensive logging of model interactions:
```python
import json
import logging
from datetime import datetime, timezone
from typing import Any, Dict

logger = logging.getLogger(__name__)

def log_ai_interaction(
    prompt: str,
    response: Dict[str, Any],
    user_id: str,
    session_id: str
) -> None:
    """
    Log AI interactions with security-relevant metadata.

    Assumes `audit_log` is a DynamoDB Table resource and
    `detect_security_risks` is your own analysis helper.
    """
    timestamp = datetime.now(timezone.utc).isoformat()
    security_flags = detect_security_risks(prompt, response)
    # Structured line for CloudWatch: metadata only, no raw prompt text
    logger.info(json.dumps({
        'event': 'ai_interaction',
        'timestamp': timestamp,
        'user_id': user_id,
        'session_id': session_id,
        'prompt_length': len(prompt),
        'response_length': len(response.get('content', '')),
        'model_used': response.get('model', 'unknown'),
        'tool_calls': len(response.get('tool_calls') or []),
        'function_calls': 1 if response.get('function_call') else 0,
        'security_flags': security_flags
    }))
    # Store the full interaction in a secure, access-controlled audit log
    audit_log.put_item(Item={
        'session_id': session_id,
        'timestamp': timestamp,
        'user_id': user_id,
        'prompt': prompt,
        'response': json.dumps(response),
        'security_analysis': security_flags
    })
```