# Log Injection in Cassandra

## How Log Injection Manifests in Cassandra
Log injection in Cassandra environments typically occurs when untrusted data is written directly to Cassandra's system logs or application logs without proper sanitization. This vulnerability can manifest in several Cassandra-specific ways:
- Query Parameter Injection: When user-supplied values are logged directly in Cassandra CQL queries without escaping, attackers can inject malicious log entries that alter the log's meaning or structure.
- Driver-Level Logging: Cassandra drivers (like DataStax Java Driver) log query parameters and connection details. If these logs aren't properly sanitized, injection can occur at the driver level.
- Audit Trail Manipulation: Cassandra's audit logging feature can be compromised if log entries contain unvalidated user input, potentially masking malicious activities.
A common Cassandra-specific pattern involves logging CQL queries with bound parameters. Consider this vulnerable code:
```java
String username = request.getParameter("username");
String query = String.format(
    "SELECT * FROM users WHERE username = '%s' AND timestamp = %d",
    username, System.currentTimeMillis()
);
logger.info("Executing query: " + query);
```
An attacker could supply a username like `test' OR '1'='1`, producing a log entry whose query structure has been altered. While this might not execute maliciously in Cassandra itself, it corrupts the audit trail and can mislead security monitoring.
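The corruption becomes clearer with a newline in the payload: a single parameter is enough to forge a second, legitimate-looking entry. A minimal, self-contained illustration (the forged line and the `buildEntry` helper are hypothetical, standing in for the logging call above):

```java
public class LogForgeryDemo {
    // Naive concatenation: whatever the user sent goes into the log verbatim
    static String buildEntry(String username) {
        return "Executing query: SELECT * FROM users WHERE username = '" + username + "'";
    }

    public static void main(String[] args) {
        // Attacker-controlled value containing a newline and a fake log entry
        String payload = "alice'\nINFO Executing query: SELECT * FROM users WHERE username = 'admin'";
        // Prints two lines: the real entry, then the forged "admin" entry
        System.out.println(buildEntry(payload));
    }
}
```

Any downstream tool that treats each line as one event now sees an `admin` query that never happened.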
Another Cassandra-specific scenario involves logging partition keys or clustering columns directly. Since Cassandra's data model relies heavily on these components, attackers can craft inputs that break log parsing or create misleading audit trails:
```java
String partitionKey = request.getParameter("user_id");
String clusteringColumn = request.getParameter("action");
logger.info(String.format(
    "User %s performed action %s at %s",
    partitionKey, clusteringColumn, new Date()
));
```
If clusteringColumn contains newline characters or log delimiters, it can split a single log entry across multiple lines, confusing log analysis tools.
## Cassandra-Specific Detection
Detecting log injection in Cassandra environments requires a multi-layered approach. middleBrick's black-box scanning can identify several Cassandra-specific indicators of this vulnerability:
Runtime Scanning: middleBrick tests API endpoints that interact with Cassandra by submitting payloads containing log injection patterns. It analyzes responses for signs that injected content might appear in logs, such as:
- Echoed error messages containing user input
- Debug information that reveals query structure
- Stack traces that include unvalidated parameters
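Runtime probing of this kind boils down to submitting values that become visible if they are reflected into logs. A minimal, hypothetical payload set (generic log-injection probes for illustration, not middleBrick's actual test suite):

```java
import java.util.List;

public class LogInjectionProbes {
    // Classic log-injection probes: CRLF splitting, URL-encoded CRLF,
    // and an ANSI escape sequence that can corrupt terminal-based log viewers
    public static List<String> payloads() {
        return List.of(
                "test\r\nINFO forged entry",
                "test%0d%0aINFO forged entry",
                "test\u001b[2Jcleared"
        );
    }
}
```

If any of these strings survives intact into an error message, debug output, or log file, the endpoint is a candidate for log injection.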
OpenAPI Analysis: When provided with a Cassandra application's OpenAPI spec, middleBrick cross-references parameter definitions with security best practices. It flags endpoints accepting text fields that might be logged without sanitization.
Manual Detection Techniques: For Cassandra-specific detection, examine your application's logging patterns around these areas:
```java
// Vulnerable pattern - direct string interpolation
logger.debug("Query: " + query);

// Safer pattern - structured logging (logstash-logback-encoder)
logger.debug("Query executed",
    StructuredArguments.keyValue("query", query),
    StructuredArguments.keyValue("params", params));
```
Look for these indicators in your Cassandra application code:
| Indicator | Risk Level | Detection Method |
|---|---|---|
| Direct string concatenation in log statements | High | Static code analysis |
| Logging of raw CQL queries with user input | Critical | Code review + runtime testing |
| Audit logging without input validation | High | Configuration review |
| Debug logging enabled in production | Medium | Environment configuration check |
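The first indicator in the table can be roughly automated with a pattern scan over source lines. This is an illustrative sketch only (the class name and regex are ours; a real static analyzer is far more precise):

```java
import java.util.List;
import java.util.regex.Pattern;

public class LogConcatScanner {
    // Flags logger calls whose message is built with '+' concatenation
    private static final Pattern CONCAT_LOG =
            Pattern.compile("\\blogger\\.(trace|debug|info|warn|error)\\s*\\([^;]*\\+");

    public static boolean isSuspicious(String sourceLine) {
        return CONCAT_LOG.matcher(sourceLine).find();
    }

    public static long countSuspicious(List<String> lines) {
        return lines.stream().filter(LogConcatScanner::isSuspicious).count();
    }
}
```

Parameterized calls such as `logger.info("Query: {}", query)` pass the check, while concatenated ones are flagged for review.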
For Cassandra driver-specific detection, review your driver's logging configuration. The DataStax Java Driver 4.x, for example, ships an optional request logger whose settings can expose bound query values:

```
datastax-java-driver {
  advanced.request.logger {
    // Logs every successful request
    success.enabled = true
    // Includes bound parameter values in the logged statement
    show-values = true
    max-value-length = 50
  }
}
```
Ensure production environments don't log at DEBUG or TRACE levels unless absolutely necessary and properly secured.
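The driver's verbosity is ultimately governed by the SLF4J backend rather than the driver itself. With Logback, for instance, the driver's loggers can be capped independently of the application (a sketch; the logger name follows the DataStax Java Driver 4.x package layout, and the `STDOUT` appender is assumed to exist):

```xml
<configuration>
  <!-- Cap all DataStax driver logging at WARN in production -->
  <logger name="com.datastax.oss.driver" level="WARN"/>
  <root level="INFO">
    <appender-ref ref="STDOUT"/>
  </root>
</configuration>
```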
## Cassandra-Specific Remediation
Remediating log injection in Cassandra applications requires both code-level fixes and configuration changes. Here are Cassandra-specific remediation strategies:
Structured Logging: Replace string concatenation with structured logging approaches that properly escape values:
```java
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import com.datastax.oss.driver.api.core.CqlSession;
import com.datastax.oss.driver.api.core.cql.SimpleStatement;
import static net.logstash.logback.argument.StructuredArguments.kv;

public class SecureCassandraLogger {
    private static final Logger logger = LoggerFactory.getLogger(SecureCassandraLogger.class);

    public void safeQueryLogging(CqlSession session, String username, String userId) {
        // Bound values separate data from query structure
        SimpleStatement statement = SimpleStatement.builder(
                "SELECT * FROM users WHERE username = ? AND user_id = ?")
            .addPositionalValue(username)
            .addPositionalValue(userId)
            .build();
        try {
            session.execute(statement);
            // Structured logging (logstash-logback-encoder) - the encoder
            // escapes values, so they cannot break the log format
            logger.info("Query executed successfully username={} user_id={}",
                kv("username", username), kv("user_id", userId));
        } catch (Exception e) {
            // Log the failure without interpolating raw input into the message
            logger.error("Query failed for user {}", kv("username", username), e);
        }
    }
}
```
Input Validation and Sanitization: Implement validation before logging any user-supplied data:
```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class InputSanitizer {
    private static final Pattern LOG_SAFE_PATTERN = Pattern.compile("[\\w@.]+");
    private static final Pattern UNSAFE_CHARS = Pattern.compile("[^\\w@.]");

    public static String sanitizeForLog(String input) {
        if (input == null) return "null";
        // Replace newlines, carriage returns, tabs, and other control characters
        String cleaned = input.replaceAll("\\p{Cntrl}", " ");
        // For Cassandra-specific contexts, validate against expected patterns
        if (!LOG_SAFE_PATTERN.matcher(cleaned).matches()) {
            // Hex-encode any remaining unsafe characters
            // (Matcher.replaceAll with a function requires Java 9+)
            cleaned = UNSAFE_CHARS.matcher(cleaned)
                .replaceAll(m -> String.format("0x%02X", (int) m.group().charAt(0)));
        }
        return cleaned;
    }
}
```
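At the call site, every untrusted value should pass through the sanitizer before it reaches a log statement. A compact, self-contained demonstration of the core step (a stripped-down inline copy of the control-character handling above; the `FORGED` line is an attacker-chosen payload):

```java
public class SanitizerUsage {
    // Minimal inline version of the sanitizer's first step, for demonstration
    static String sanitizeForLog(String input) {
        if (input == null) return "null";
        return input.replaceAll("[\\r\\n\\t\\p{Cntrl}]", " ");
    }

    public static void main(String[] args) {
        String action = "login\nFORGED: admin access granted";  // attacker-supplied
        // The injected newline is neutralized, so the entry stays on one line
        System.out.println("User performed action: " + sanitizeForLog(action));
    }
}
```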
Audit Logging Configuration: Enable and scope Cassandra's audit logging (available since Cassandra 4.0) so the audit trail covers the right keyspaces and users without drowning in noise:

```yaml
# cassandra.yaml audit configuration (Cassandra 4.0+)
audit_logging_options:
    enabled: true
    logger:
      - class_name: BinAuditLogger
    included_categories: QUERY, DML, DDL
    included_keyspaces: myapp
    included_users: cassandra, myapp_user
    audit_logs_dir: /var/log/cassandra/audit
    roll_cycle: HOURLY
```

Note that Cassandra provides no hook for sanitizing audit entries before they are written: whatever reaches a CQL statement is recorded verbatim. Input validation therefore has to happen at the application boundary, before data is bound into queries.
Driver-Level Security: Configure Cassandra drivers to minimize sensitive logging:
```java
import com.datastax.oss.driver.api.core.CqlSession;
import com.datastax.oss.driver.api.core.config.DefaultDriverOption;
import com.datastax.oss.driver.api.core.config.DriverConfigLoader;

public class SecureCassandraSession {
    public CqlSession createSecureSession() {
        // Ensure the request logger never includes bound values,
        // regardless of what application.conf says
        DriverConfigLoader loader = DriverConfigLoader.programmaticBuilder()
            .withBoolean(DefaultDriverOption.REQUEST_LOGGER_VALUES, false)
            .build();
        return CqlSession.builder()
            .withConfigLoader(loader)
            .withLocalDatacenter("datacenter1")
            .withAuthCredentials("user", "password")
            .build();
    }
}
```

The driver's overall log level is controlled by the SLF4J backend (Logback, Log4j, etc.), not the session builder, so production levels for `com.datastax.oss.driver` should be set in the logging configuration.
Log Analysis Hardening: Configure your log aggregation pipeline to normalize Cassandra log entries before analysis. With Logstash, for example, a filter can parse entries and collapse injected line breaks (a sketch; adjust the grok pattern to your exact log4j/logback layout):

```
filter {
  grok {
    # Approximate pattern for Cassandra's default system.log layout
    match => { "message" => "%{LOGLEVEL:level}\s+\[%{DATA:thread}\]\s+%{TIMESTAMP_ISO8601:ts}\s+%{GREEDYDATA:msg}" }
  }
  mutate {
    # Collapse injected newlines and tabs so one event stays one event
    gsub => [ "msg", "[\r\n\t]", " " ]
  }
}
```