Memory Leak in Fiber
How Memory Leaks Manifest in Fiber
Memory leaks in Fiber applications typically occur through improper resource management in request handlers and middleware. A common pattern involves goroutines that never terminate, holding references to request-scoped objects. For example, consider a handler that spawns a goroutine to send on a channel that nothing ever receives from:
func leakyHandler(c *fiber.Ctx) error {
	ch := make(chan string)
	go func() {
		// Simulate processing
		time.Sleep(10 * time.Second)
		ch <- "done" // Blocks forever: the handler never receives from ch
	}()
	// Handler returns before the goroutine completes, so the goroutine
	// and its channel can never be collected
	return c.SendStatus(fiber.StatusAccepted)
}
Each request creates a new goroutine and channel. Without proper cleanup, these accumulate until the application exhausts available memory. Another Fiber-specific scenario involves middleware that captures context but doesn't release it:
var leakedContexts []*fiber.Ctx

func leakyMiddleware(c *fiber.Ctx) error {
	leakedContexts = append(leakedContexts, c)
	return c.Next()
}
This middleware stores every request context indefinitely. Since Fiber contexts hold references to request bodies, query parameters, and headers, this creates a growing memory footprint. Worse, Fiber recycles contexts through an internal pool once the handler returns, so a retained context may later be overwritten by an unrelated request. Database connection leaks also manifest in Fiber when handlers open a new *sql.DB per request instead of sharing one:
func queryHandler(c *fiber.Ctx) error {
	db, err := sql.Open("postgres", dsn) // New connection pool on every request
	if err != nil {
		return err
	}
	// Missing: defer db.Close()
	rows, err := db.Query("SELECT * FROM large_table")
	if err != nil {
		return err
	}
	defer rows.Close()
	return c.JSON(fiber.Map{"data": "processed"})
}
Each call to sql.Open creates a fresh connection pool, and nothing ever closes it, so idle connections accumulate with every request until the database server or the process runs out of resources.
Fiber-Specific Detection
Detecting memory leaks in Fiber requires monitoring goroutine counts and memory allocation patterns. The runtime/pprof package provides built-in profiling tools:
import (
	"os"
	"runtime/debug"
	"runtime/pprof"
	"time"
)

func setupProfiling() {
	time.AfterFunc(30*time.Second, func() {
		f, err := os.Create("/tmp/memprofile")
		if err != nil {
			return
		}
		defer f.Close() // Close inside the callback, after the profile is written
		pprof.WriteHeapProfile(f)
		debug.FreeOSMemory()
	})
}
Heap profiles reveal which objects are retaining memory. Look for increasing counts of *fiber.Ctx, chan, or pooled buffer objects. Because Fiber is built on fasthttp rather than net/http, http.DefaultServeMux cannot be mounted directly; use Fiber's bundled pprof middleware instead:
import "github.com/gofiber/fiber/v2/middleware/pprof"

app := fiber.New()
app.Use(pprof.New())
Access /debug/pprof/heap to analyze memory usage. For production monitoring, integrate with expvar to track goroutine counts:
import (
	"expvar"
	"runtime"
)

var goroutineCount = expvar.NewInt("goroutine_count")

func statsMiddleware(c *fiber.Ctx) error {
	// Publish the absolute goroutine count after each request; a value
	// that climbs steadily under constant load indicates leaked goroutines.
	defer goroutineCount.Set(int64(runtime.NumGoroutine()))
	return c.Next()
}
middleBrick's scanner can detect memory leak patterns by analyzing request handling code and identifying missing cleanup operations. The scanner examines handler functions for unclosed channels, unreleased contexts, and improper defer usage. For Fiber applications, middleBrick specifically checks for:
- Missing defer c.Context().Request().Body.Close() calls
- Unbounded goroutine creation without cancellation contexts
- Middleware that captures and retains request-scoped objects
- Database operations without proper connection cleanup
Fiber-Specific Remediation
Fixing memory leaks in Fiber requires disciplined resource management. Use context.WithCancel for goroutine lifecycles:
func safeHandler(c *fiber.Ctx) error {
	ctx, cancel := context.WithCancel(c.Context())
	defer cancel()

	ch := make(chan string)
	go func() {
		defer close(ch)
		select {
		case <-ctx.Done():
			return
		case ch <- "processed":
		}
	}()

	select {
	case result := <-ch:
		return c.JSON(fiber.Map{"result": result})
	case <-time.After(5 * time.Second):
		return c.Status(fiber.StatusGatewayTimeout).SendString("timeout")
	}
}
This pattern ensures goroutines terminate when the request completes. For middleware, avoid capturing request objects:
func safeMiddleware(c *fiber.Ctx) error {
	// Process the request without storing the context anywhere
	return c.Next()
}
// Use sync.Pool for reusable objects
var contextPool = sync.Pool{
	New: func() interface{} {
		return &requestData{params: make(map[string]string)}
	},
}

type requestData struct {
	params map[string]string
}
func pooledHandler(c *fiber.Ctx) error {
	rd := contextPool.Get().(*requestData)
	defer func() {
		// Clear request-scoped data before returning rd to the pool
		for k := range rd.params {
			delete(rd.params, k)
		}
		contextPool.Put(rd)
	}()
	// Use rd for processing
	return c.Next()
}
Database operations should use connection pooling with proper cleanup:
var db *sql.DB

func init() {
	var err error
	db, err = sql.Open("postgres", dsn)
	if err != nil {
		log.Fatal(err)
	}
	db.SetMaxOpenConns(25)
	db.SetConnMaxLifetime(5 * time.Minute)
}
func queryHandler(c *fiber.Ctx) error {
	rows, err := db.Query("SELECT * FROM users WHERE id = $1", c.Params("id"))
	if err != nil {
		return err
	}
	defer rows.Close()

	var results []User
	for rows.Next() {
		var u User
		if err := rows.Scan(&u.ID, &u.Name); err != nil {
			return err
		}
		results = append(results, u)
	}
	if err := rows.Err(); err != nil {
		return err
	}
	return c.JSON(results)
}
middleBrick's remediation guidance includes specific code patterns for Fiber applications. The scanner identifies problematic code sections and suggests replacements using Fiber's native features like context cancellation, sync.Pool for object reuse, and proper defer patterns. For continuous monitoring, integrate middleBrick's CLI into your deployment pipeline:
middlebrick scan https://api.example.com --profile fiber --fail-on high
This command scans your API endpoints and fails the build if high-severity memory leak issues are detected, preventing vulnerable code from reaching production.