Profiling Go Applications
Profiling helps you understand where your Go program spends its time and memory. Go has excellent built-in profiling tools that make it easy to identify performance bottlenecks and optimize your code.
CPU Profiling
CPU profiling shows where your program spends its CPU time:
package main

import (
    "log"
    "os"
    "runtime/pprof"
    "time"
)

func slowFunction() {
    time.Sleep(100 * time.Millisecond)
}

func fastFunction() int {
    // Do some quick work on the CPU
    sum := 0
    for i := 0; i < 1000000; i++ {
        sum += i
    }
    return sum // return the result so the work is not optimized away
}

func main() {
    // Create the CPU profile file
    f, err := os.Create("cpu.prof")
    if err != nil {
        log.Fatal(err)
    }
    defer f.Close()

    // Start CPU profiling
    if err := pprof.StartCPUProfile(f); err != nil {
        log.Fatal(err)
    }
    defer pprof.StopCPUProfile()

    // Run your code
    for i := 0; i < 100; i++ {
        slowFunction()
        fastFunction()
    }
}
Run the program, then analyze the profile:
go tool pprof cpu.prof
In the pprof shell:
- top: Show top functions by CPU usage
- list slowFunction: Show line-by-line profile for a function
- web: Open interactive web interface (requires graphviz)
Note that slowFunction mostly sleeps, and sleeping consumes almost no CPU, so it will barely register in the CPU profile; fastFunction's loop is what actually shows up.
Memory Profiling
Memory profiling helps identify memory leaks and excessive allocations:
package main

import (
    "log"
    "os"
    "runtime"
    "runtime/pprof"
)

func allocateMemory() {
    // Allocate a lot of memory
    data := make([][]byte, 1000)
    for i := range data {
        data[i] = make([]byte, 1024) // 1KB each
    }
    // The data is never used; it only exists to show up in the profile
    _ = data
}

func main() {
    // Create the memory profile file
    f, err := os.Create("mem.prof")
    if err != nil {
        log.Fatal(err)
    }
    defer f.Close()

    // Run the garbage collector so the profile reflects live memory
    runtime.GC()

    // Write a heap snapshot before the allocations
    if err := pprof.WriteHeapProfile(f); err != nil {
        log.Fatal(err)
    }

    // Your application code here
    allocateMemory()

    // Write another snapshot after the allocations
    f2, err := os.Create("mem_after.prof")
    if err != nil {
        log.Fatal(err)
    }
    defer f2.Close()
    if err := pprof.WriteHeapProfile(f2); err != nil {
        log.Fatal(err)
    }
}
Analyze the memory profile:
go tool pprof mem.prof
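Because this example writes two snapshots, you can also ask pprof to show only what changed between them; -base is a standard pprof flag that subtracts one profile from another:
go tool pprof -base mem.prof mem_after.prof
Anything in the resulting report was allocated (and, in the default in-use view, is still live) between the two snapshots.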
Benchmark Profiling
For profiling specific functions, use benchmarks:
// This goes in a _test.go file (for example fibonacci_test.go)
package main

import "testing"

func Fibonacci(n int) int {
    if n <= 1 {
        return n
    }
    return Fibonacci(n-1) + Fibonacci(n-2)
}

func BenchmarkFibonacci(b *testing.B) {
    for i := 0; i < b.N; i++ {
        Fibonacci(30)
    }
}
Run the benchmark with profiling:
go test -bench=. -cpuprofile=cpu.prof -memprofile=mem.prof
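If allocations are what you are interested in, the testing package can also report them per operation. A minimal sketch using the standard b.ReportAllocs method (the benchmark name is just an example; for this particular Fibonacci benchmark the numbers will be zero, since it does not allocate, but the pattern is what matters):

func BenchmarkFibonacciAllocs(b *testing.B) {
    b.ReportAllocs() // adds B/op and allocs/op columns to the benchmark output
    for i := 0; i < b.N; i++ {
        Fibonacci(30)
    }
}
Passing -benchmem to go test enables the same reporting for every benchmark in the package.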
HTTP Profiling
For web applications, Go provides HTTP endpoints for profiling:
package main

import (
    "log"
    "net/http"
    _ "net/http/pprof" // Import for side effects: registers the /debug/pprof/ handlers
)

func main() {
    // Your HTTP handlers here
    // Profiling endpoints are automatically available at /debug/pprof/
    log.Println("Server starting on :8080")
    log.Println("Profiling available at http://localhost:8080/debug/pprof/")
    log.Fatal(http.ListenAndServe(":8080", nil))
}
Access the profiling data at:
- http://localhost:8080/debug/pprof/profile?seconds=30: CPU profile for 30 seconds
- http://localhost:8080/debug/pprof/heap: Memory profile
- http://localhost:8080/debug/pprof/goroutine: Goroutine profile
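The same go tool pprof workflow works against a running server; point it at an endpoint instead of a file and it fetches the profile over HTTP:
go tool pprof http://localhost:8080/debug/pprof/profile?seconds=30
go tool pprof http://localhost:8080/debug/pprof/heap
The blank import registers these handlers on http.DefaultServeMux, which is why passing nil to ListenAndServe is enough; if you use your own router, you have to mount the pprof handlers on it yourself.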
Using the pprof Tool
The pprof tool has many commands:
# Interactive mode
go tool pprof cpu.prof

# Commands in pprof:
top        # Show top functions
list func  # Show source code with profiling info
web        # Generate web visualization
pdf        # Generate PDF
png        # Generate PNG
svg        # Generate SVG
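For quick checks you do not have to enter the interactive shell; the same reports are available as flags (-top and -list are standard pprof output options, and fastFunction here refers to the function from the CPU example above):
go tool pprof -top cpu.prof
go tool pprof -list=fastFunction cpu.prof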
Common Profiling Scenarios
1. High CPU Usage
go tool pprof cpu.prof
(pprof) top
(pprof) list expensiveFunction
Look for functions with high CPU time.
2. Memory Leaks
go tool pprof mem.prof
(pprof) top
Look for unexpected memory allocations.
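By default the heap profile is displayed by in-use space, which is what you want when hunting leaks. To see total allocation churn instead, switch the sample index (a standard pprof flag for heap profiles):
go tool pprof -sample_index=alloc_space mem.prof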
3. Goroutine Leaks
go tool pprof http://localhost:8080/debug/pprof/goroutine
(pprof) top
Check for too many goroutines.
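Before reaching for the profiler, a cheap cross-check is to log the goroutine count over time; a count that only grows usually points at a leak. A minimal sketch using the standard runtime.NumGoroutine (the interval and the placement in main are illustrative):

package main

import (
    "log"
    "runtime"
    "time"
)

// logGoroutines periodically logs how many goroutines are alive.
func logGoroutines(interval time.Duration) {
    for range time.Tick(interval) {
        log.Printf("goroutines: %d", runtime.NumGoroutine())
    }
}

func main() {
    go logGoroutines(10 * time.Second)

    // ... the rest of your application would run here ...
    select {} // block forever so the sketch keeps running
}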
Optimizing Based on Profiles
Example: Inefficient String Concatenation
// BAD: Creates many temporary strings
func buildString(n int) string {
    result := ""
    for i := 0; i < n; i++ {
        result += strconv.Itoa(i)
    }
    return result
}

// GOOD: Use strings.Builder
func buildString(n int) string {
    var builder strings.Builder
    for i := 0; i < n; i++ {
        builder.WriteString(strconv.Itoa(i))
    }
    return builder.String()
}
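If the final length is roughly known, the Builder version can reserve capacity up front so it allocates once instead of growing repeatedly. A sketch using strings.Builder's Grow method (buildStringPrealloc is a hypothetical name, and the *7 is only a rough per-number digit estimate):

// Sketch: the same loop as above, with capacity reserved up front.
func buildStringPrealloc(n int) string {
    var builder strings.Builder
    builder.Grow(n * 7) // assumption: each number contributes at most about 7 digits
    for i := 0; i < n; i++ {
        builder.WriteString(strconv.Itoa(i))
    }
    return builder.String()
}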
Example: Unnecessary Allocations
// BAD: Allocates a new slice on each call
func processData(data []int) []int {
    result := make([]int, len(data))
    for i, v := range data {
        result[i] = v * 2
    }
    return result
}

// GOOD: Accept the result slice as a parameter
// (the caller provides a buffer at least as long as data)
func processData(data, result []int) {
    for i, v := range data {
        result[i] = v * 2
    }
}
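The point of the second signature is that a caller can allocate one buffer and reuse it across calls. A usage sketch assuming the two-parameter version above (processAll and its batches are hypothetical):

// processAll doubles every batch, reusing one result buffer across calls.
func processAll(batches [][]int) {
    var buf []int
    for _, batch := range batches {
        if cap(buf) < len(batch) {
            buf = make([]int, len(batch)) // grow the buffer only when a batch needs it
        }
        out := buf[:len(batch)]
        processData(batch, out)
        // ... consume out here before the next iteration overwrites it ...
    }
}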
Flame Graphs
For visual analysis, use flame graphs:
go tool pprof -http=:8081 cpu.prof
This opens a web interface with flame graphs and other visualizations.
Profiling Best Practices
- Profile in production-like conditions: Use realistic data and load
- Profile different scenarios: CPU, memory, goroutines
- Focus on bottlenecks: Don’t optimize code that’s not slow
- Measure improvements: Always verify optimizations help
- Use benchmarks for micro-optimizations: For small functions
- Consider trade-offs: Sometimes readability is more important than performance
Integration with Testing
Profile during tests:
func TestExpensiveOperation(t *testing.T) {
    if testing.Short() {
        t.Skip("Skipping long test in short mode")
    }
    // Your test code
}
Run with profiling:
go test -v -run=TestExpensiveOperation -cpuprofile=cpu.prof
Continuous Profiling
For long-running applications, consider continuous profiling tools like Pyroscope or Parca.
Profiling is essential for writing performant Go applications. Start profiling early in development and make it part of your regular workflow.
For more information, see the pprof documentation and profiling blog post.
If you want to learn about benchmarking first, check our benchmarking tutorial.