API Key Exposure on Kubernetes
How API Key Exposure Manifests in Kubernetes
API key exposure in Kubernetes environments presents unique challenges due to the distributed nature of containerized applications and the complex web of service-to-service communication. Unlike traditional monolithic applications, Kubernetes clusters create numerous attack surfaces where API keys can be inadvertently exposed.
One of the most common manifestations occurs through environment variables in container definitions. When developers store API keys as environment variables in Kubernetes manifests or Helm charts, these values become visible to anyone with access to the pod configuration. This includes not just developers, but potentially anyone with cluster read permissions:
apiVersion: v1
kind: Secret
metadata:
  name: stripe-credentials
type: Opaque
stringData:
  stripe-secret-key: sk_test_1234567890
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: payment-service
spec:
  template:
    spec:
      containers:
      - name: app
        image: payment-service:latest
        env:
        - name: STRIPE_API_KEY
          valueFrom:
            secretKeyRef:
              name: stripe-credentials
              key: stripe-secret-key
While this example uses a Secret, many production deployments still hardcode API keys directly in environment variables, making them visible through kubectl describe pod or container runtime inspection.
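For contrast, the anti-pattern looks like the following sketch (the manifest is abbreviated and the key is an illustrative placeholder, not a real credential):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: payment-service
spec:
  template:
    spec:
      containers:
      - name: app
        image: payment-service:latest
        env:
        # Anti-pattern: the key sits in plain text in the manifest and is
        # visible to anyone who can read the Deployment or Pod spec.
        - name: STRIPE_API_KEY
          value: sk_test_1234567890

Anyone with read access to Deployments or Pods in this namespace can recover the key with a single kubectl describe command, and the key also lands in version control wherever the manifest is checked in.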
Another critical vulnerability vector is Kubernetes Secrets themselves. Though designed for sensitive data, Secrets are only base64-encoded, not encrypted, and they are stored in plain text in etcd unless encryption at rest is enabled. When RBAC permissions are too permissive, a pod's service account can read Secrets well beyond its own namespace. This becomes particularly dangerous in multi-tenant clusters where different teams share the same infrastructure.
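As an illustration of the RBAC half of this problem, the following sketch (all names hypothetical) grants every pod running as one namespace's default service account read access to every Secret in the cluster:

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: secret-reader  # hypothetical name
rules:
- apiGroups: [""]
  resources: ["secrets"]
  verbs: ["get", "list"]  # cluster-wide read access to every Secret
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: secret-reader-binding
subjects:
- kind: ServiceAccount
  name: default      # every pod using the default service account
  namespace: team-a  # hypothetical namespace
roleRef:
  kind: ClusterRole
  name: secret-reader
  apiGroup: rbac.authorization.k8s.io

A single compromised pod in team-a is then enough to exfiltrate API keys belonging to every other team on the cluster.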
Service mesh configurations also introduce API key exposure risks. When applications use service mesh sidecars for authentication, API keys might be passed through HTTP headers in clear text within the cluster network. If mutual TLS is not properly configured, these headers can be intercepted by malicious pods running in the same namespace. Additionally, debug endpoints and health checks often inadvertently log API keys when developers include them in diagnostic information.
Container image layers present another subtle attack vector. Developers sometimes bake API keys directly into Docker images during the build process, either through COPY commands or RUN statements that download configuration files. These keys persist in the image layers even if later removed, making them recoverable through image history inspection or layer extraction.
Kubernetes admission controllers and webhooks can also become vectors for API key exposure. When custom admission controllers validate or mutate pod configurations, they often log request bodies for debugging. If these logs include API keys from pod specifications, they create persistent exposure points that might be retained in log aggregation systems for extended periods.
Kubernetes-Specific Detection
Detecting API key exposure in Kubernetes requires a multi-layered approach that combines static analysis of manifests with runtime scanning of running workloads. The first step is examining all Kubernetes manifests for hardcoded secrets using specialized tools.
Static analysis tools like kubesec or custom admission controllers can scan YAML manifests before deployment. These tools look for patterns like base64-encoded strings that resemble API keys, environment variables with suspicious names, or direct references to external credential services. Here's an example of a simple manifest scanner:
package main

import (
	"fmt"
	"os"
	"regexp"

	"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
	"sigs.k8s.io/yaml"
)

// Common API key formats; extend this list for the providers your teams use.
var apiKeyPatterns = []*regexp.Regexp{
	regexp.MustCompile(`sk_[a-zA-Z0-9]{20,}`),   // Stripe
	regexp.MustCompile(`ghp_[a-zA-Z0-9]{36}`),   // GitHub
	regexp.MustCompile(`AIza[0-9A-Za-z_-]{35}`), // Google
}

func scanManifest(manifest []byte) ([]string, error) {
	var obj unstructured.Unstructured
	// sigs.k8s.io/yaml converts YAML to JSON before unmarshalling, so this
	// accepts both YAML and JSON manifests.
	if err := yaml.Unmarshal(manifest, &obj.Object); err != nil {
		return nil, fmt.Errorf("invalid manifest: %v", err)
	}

	// Workload controllers (Deployments, StatefulSets, ...) nest containers
	// under spec.template.spec; plain Pods keep them at spec.containers.
	containers, found, err := unstructured.NestedSlice(obj.Object, "spec", "template", "spec", "containers")
	if !found || err != nil {
		containers, found, err = unstructured.NestedSlice(obj.Object, "spec", "containers")
		if err != nil {
			return nil, fmt.Errorf("error accessing containers: %v", err)
		}
	}

	findings := []string{}
	if found {
		for _, c := range containers {
			container, ok := c.(map[string]interface{})
			if !ok {
				continue
			}
			env, found, err := unstructured.NestedSlice(container, "env")
			if err != nil || !found {
				continue
			}
			for _, e := range env {
				envVar, ok := e.(map[string]interface{})
				if !ok {
					continue
				}
				name, _, _ := unstructured.NestedString(envVar, "name")
				value, found, _ := unstructured.NestedString(envVar, "value")
				if found && containsAPIKey(value) {
					findings = append(findings, fmt.Sprintf("API key in container env var %s", name))
				}
			}
		}
	}
	return findings, nil
}

func containsAPIKey(value string) bool {
	for _, pattern := range apiKeyPatterns {
		if pattern.MatchString(value) {
			return true
		}
	}
	return false
}

func main() {
	manifest, err := os.ReadFile("deployment.yaml")
	if err != nil {
		fmt.Fprintln(os.Stderr, "read error:", err)
		os.Exit(1)
	}
	findings, err := scanManifest(manifest)
	if err != nil {
		fmt.Fprintln(os.Stderr, "scan error:", err)
		os.Exit(1)
	}
	for _, finding := range findings {
		fmt.Println("Vulnerability found:", finding)
	}
}
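The three patterns above are only a starting point; in practice you would extend apiKeyPatterns to cover every provider your teams actually use and run the scanner against rendered Helm output as well as raw manifests, since keys are often injected at templating time.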
For runtime detection, middleBrick provides Kubernetes-specific API security scanning that identifies exposed API keys in running services. The scanner examines HTTP responses, headers, and error messages for API key patterns without requiring any credentials or agents. This black-box approach is particularly valuable in Kubernetes environments where you may not have direct access to container internals.
Network traffic analysis using tools like tcpdump or service mesh observability platforms can also detect API keys traversing the network in clear text. By monitoring traffic between pods, you can identify when API keys are being transmitted without proper encryption or when they appear in debug endpoints.
Secret scanning tools like trivy or clair can analyze container images for hardcoded API keys that may have been baked into layers during the build process. These tools examine the entire image history, not just the final layer, making them effective at finding keys that developers thought they had removed.
RBAC analysis tools can identify overly permissive service accounts that might allow pods to read Secrets from other namespaces. By mapping service account permissions against namespace boundaries, you can identify configurations where a compromised pod could access API keys from unrelated services.
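As a sketch of what such an analysis can look like, the following client-go snippet lists service accounts that are bound cluster-wide to roles able to read Secrets. It assumes standard kubeconfig loading and is illustrative rather than a complete auditing tool (it ignores namespaced RoleBindings and rule aggregation, for example):

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// canReadSecrets reports whether a single RBAC rule grants read access to Secrets.
func canReadSecrets(resources, verbs []string) bool {
	readable, secrets := false, false
	for _, v := range verbs {
		if v == "get" || v == "list" || v == "*" {
			readable = true
		}
	}
	for _, r := range resources {
		if r == "secrets" || r == "*" {
			secrets = true
		}
	}
	return readable && secrets
}

func main() {
	// Load the local kubeconfig; an in-cluster config would work the same way.
	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}
	ctx := context.Background()

	// Index every ClusterRole that can read Secrets.
	riskyRoles := map[string]bool{}
	roles, err := clientset.RbacV1().ClusterRoles().List(ctx, metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, role := range roles.Items {
		for _, rule := range role.Rules {
			if canReadSecrets(rule.Resources, rule.Verbs) {
				riskyRoles[role.Name] = true
			}
		}
	}

	// Report every service account bound cluster-wide to one of those roles.
	bindings, err := clientset.RbacV1().ClusterRoleBindings().List(ctx, metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, b := range bindings.Items {
		if !riskyRoles[b.RoleRef.Name] {
			continue
		}
		for _, s := range b.Subjects {
			if s.Kind == "ServiceAccount" {
				fmt.Printf("ServiceAccount %s/%s can read Secrets cluster-wide via ClusterRole %s\n",
					s.Namespace, s.Name, b.RoleRef.Name)
			}
		}
	}
}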
Kubernetes-Specific Remediation
Remediating API key exposure in Kubernetes requires a defense-in-depth approach that combines proper secret management with network security and runtime protections. The foundation is implementing a robust secret management strategy using Kubernetes-native features.
First, migrate all API keys out of hardcoded environment variables and into Kubernetes Secrets, with encryption at rest enabled. This requires configuring the Kubernetes API server with --encryption-provider-config to ensure Secrets are stored encrypted in etcd:
apiVersion: v1
kind: Secret
metadata:
  name: stripe-credentials
type: Opaque
data:
  stripe-secret-key: c2tfdGVzdF8xMjM0NTY3ODkw  # base64-encoded, no trailing newline
---
apiVersion: apps/v1
kind: Deployment
spec:
  template:
    spec:
      containers:
      - name: app
        env:
        - name: STRIPE_API_KEY
          valueFrom:
            secretKeyRef:
              name: stripe-credentials
              key: stripe-secret-key
Enable Kubernetes encryption at rest by configuring the API server with an encryption configuration file (encryption.yaml):
apiVersion: apiserver.config.k8s.io/v1
kind: EncryptionConfiguration
resources:
  - resources:
      - secrets
    providers:
      - aescbc:
          keys:
            - name: key1
              secret: <base64-encoded 32-byte key>  # e.g. head -c 32 /dev/urandom | base64
      - identity: {}
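To activate the configuration, point the API server at the file. On kubeadm clusters this means editing the kube-apiserver static pod manifest; the paths below are the common kubeadm defaults and may differ on your distribution:

# excerpt from /etc/kubernetes/manifests/kube-apiserver.yaml
spec:
  containers:
  - name: kube-apiserver
    command:
    - kube-apiserver
    - --encryption-provider-config=/etc/kubernetes/enc/encryption.yaml
    # the encryption file must also be mounted into the container via a volume

Note that enabling encryption only affects Secrets written afterwards; existing Secrets must be rewritten (for example with kubectl get secrets --all-namespaces -o json | kubectl replace -f -) before they are actually stored encrypted.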
Implement strict RBAC policies to limit Secret access to only the pods that need them. Create namespace-specific service accounts with minimal permissions:
apiVersion: v1
kind: ServiceAccount
metadata:
  name: payment-service
  namespace: payment
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: payment
  name: payment-secrets
rules:
- apiGroups: [""]
  resources: ["secrets"]
  resourceNames: ["stripe-credentials"]
  verbs: ["get"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: payment-secrets
  namespace: payment
subjects:
- kind: ServiceAccount
  name: payment-service
  namespace: payment
roleRef:
  kind: Role
  name: payment-secrets
  apiGroup: rbac.authorization.k8s.io
Implement network policies to prevent unauthorized access to pods that might expose API keys. Use Kubernetes Network Policies to control traffic between pods:
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: api-key-protection
  namespace: payment
spec:
  podSelector: {}
  policyTypes:
  - Ingress
  - Egress
  ingress:
  - from:
    - namespaceSelector:
        matchLabels:
          name: trusted
    ports:
    - protocol: TCP
      port: 8080
  egress:
  - ports:  # allow outbound HTTPS to any destination
    - protocol: TCP
      port: 443
For applications using service meshes like Istio or Linkerd, implement mTLS to encrypt all service-to-service communication. This prevents API keys from being exposed in clear text even if they appear in HTTP headers:
apiVersion: networking.istio.io/v1beta1
kind: DestinationRule
metadata:
  name: default
  namespace: istio-system  # the root namespace applies the rule mesh-wide
spec:
  host: "*.local"
  trafficPolicy:
    tls:
      mode: ISTIO_MUTUAL
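The DestinationRule governs the client side of connections; to have the mesh also reject plaintext traffic at the receiving side, Istio's PeerAuthentication resource can enforce strict mTLS mesh-wide. A minimal sketch, following the standard Istio pattern:

apiVersion: security.istio.io/v1beta1
kind: PeerAuthentication
metadata:
  name: default
  namespace: istio-system  # the root namespace makes the policy mesh-wide
spec:
  mtls:
    mode: STRICT  # reject any connection that is not mTLS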
Implement admission controllers that validate pod specifications before deployment. Custom admission webhooks can reject manifests that contain hardcoded API keys or violate security policies:
package main

import (
	"encoding/json"
	"strings"

	admissionv1 "k8s.io/api/admission/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func validateSecrets(ar admissionv1.AdmissionReview) *admissionv1.AdmissionResponse {
	req := ar.Request

	// Decode the Pod from the raw admission request payload.
	pod := corev1.Pod{}
	if err := json.Unmarshal(req.Object.Raw, &pod); err != nil {
		return &admissionv1.AdmissionResponse{
			UID: req.UID,
			Result: &metav1.Status{
				Message: err.Error(),
			},
		}
	}

	for _, container := range pod.Spec.Containers {
		for _, env := range container.Env {
			// Reject well-known API key prefixes appearing as plain-text values.
			if strings.Contains(env.Value, "sk_") || strings.Contains(env.Value, "ghp_") {
				return &admissionv1.AdmissionResponse{
					UID:     req.UID,
					Allowed: false,
					Result: &metav1.Status{
						Message: "API keys should not be stored in environment variables",
					},
				}
			}
		}
	}

	return &admissionv1.AdmissionResponse{UID: req.UID, Allowed: true}
}
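For the API server to invoke this handler, it must be registered through a ValidatingWebhookConfiguration. In the sketch below, the service name, namespace, and path are hypothetical, and the caBundle placeholder must be replaced with the CA certificate that signed your webhook's serving certificate:

apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingWebhookConfiguration
metadata:
  name: api-key-guard  # hypothetical name
webhooks:
- name: api-key-guard.example.com
  admissionReviewVersions: ["v1"]
  sideEffects: None
  failurePolicy: Fail  # block pod creation if the webhook is unreachable
  clientConfig:
    service:
      name: api-key-guard   # hypothetical Service fronting the handler
      namespace: security   # hypothetical namespace
      path: /validate
    caBundle: <base64-encoded CA certificate>
  rules:
  - apiGroups: [""]
    apiVersions: ["v1"]
    operations: ["CREATE", "UPDATE"]
    resources: ["pods"]

Setting failurePolicy to Fail makes the control fail closed; if that is too disruptive for your cluster, Ignore trades enforcement strength for availability.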
Finally, implement continuous scanning of running workloads using middleBrick's Kubernetes integration. The scanner can identify exposed API keys in HTTP responses, headers, and error messages without requiring any credentials or agents. This complements your static analysis and provides runtime verification that your remediation efforts are effective.
Frequently Asked Questions
How can I test my Kubernetes cluster for API key exposure without disrupting production services?
Use read-only commands such as kubectl get pods --all-namespaces and kubectl describe pod to identify potential exposure points without modifying any resources. Black-box scanners such as middleBrick's examine running services without agents or credentials, so they can run against production workloads without restarting or reconfiguring anything.