Lab 14: JMH Performance Tuning
Overview
Step 1: Why Microbenchmarking is Hard
JVM optimizations that can fool naive benchmarks:
Dead code elimination — JIT removes code with no observable side effects
Constant folding — JIT computes constant expressions at compile time
Loop unrolling — JIT replicates loop body to reduce overhead
Inlining — JIT copies callee body into caller
Warmup — early iterations run in the interpreter before the JIT compiles hot methods, skewing initial timings
GC pauses — garbage collection adds latency noise
OSR — on-stack replacement changes benchmark behavior mid-run
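The pitfalls above can be seen in a sketch of a hand-rolled timing harness (class and method names are illustrative, not from JMH):

```java
public class NaivePitfalls {

    // Dead code elimination: the result is never used, so once the JIT
    // compiles this method it may delete the loop entirely.
    static void deadCode() {
        int sum = 0;
        for (int i = 0; i < 1_000_000; i++) sum += i; // candidate for elimination
    }

    // Constant folding: both operands are compile-time constants, so the
    // compiler can replace the whole expression with the precomputed result.
    static int folded() {
        return 6 * 7; // effectively becomes "return 42"
    }

    public static void main(String[] args) {
        long t0 = System.nanoTime();
        for (int i = 0; i < 100; i++) deadCode();
        long elapsed = System.nanoTime() - t0;
        // This number is nearly meaningless: it mixes interpreter warmup,
        // OSR transitions, and possibly a fully eliminated loop body.
        System.out.println("folded=" + folded() + " elapsedNs=" + elapsed);
    }
}
```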
JMH solutions:
Blackhole.consume() — prevents dead code elimination
@Fork(1+) — fresh JVM per benchmark
@Warmup — discard initial results
@Measurement — only measure after warmup
@State — isolate benchmark state
@BenchmarkMode — throughput, average time, sample time
Step 2: JMH Setup
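Putting these annotations together, a minimal JMH benchmark class might look like the following sketch (assumes the jmh-core and jmh-generator-annprocess dependencies are on the classpath; class and field names are illustrative):

```java
import java.util.concurrent.TimeUnit;

import org.openjdk.jmh.annotations.Benchmark;
import org.openjdk.jmh.annotations.BenchmarkMode;
import org.openjdk.jmh.annotations.Fork;
import org.openjdk.jmh.annotations.Measurement;
import org.openjdk.jmh.annotations.Mode;
import org.openjdk.jmh.annotations.OutputTimeUnit;
import org.openjdk.jmh.annotations.Scope;
import org.openjdk.jmh.annotations.Setup;
import org.openjdk.jmh.annotations.State;
import org.openjdk.jmh.annotations.Warmup;
import org.openjdk.jmh.infra.Blackhole;

@BenchmarkMode(Mode.AverageTime)       // report mean time per operation
@OutputTimeUnit(TimeUnit.NANOSECONDS)
@Fork(1)                               // fresh JVM so runs don't share JIT profiles
@Warmup(iterations = 5, time = 1)      // discard pre-JIT iterations
@Measurement(iterations = 5, time = 1) // measure only after warmup
@State(Scope.Thread)                   // each thread gets its own state instance
public class SumBenchmark {

    private int[] data;

    @Setup
    public void setup() {
        data = new int[10_000];
        for (int i = 0; i < data.length; i++) data[i] = i;
    }

    @Benchmark
    public void sum(Blackhole bh) {
        long sum = 0;
        for (int v : data) sum += v;
        bh.consume(sum); // without this, the JIT could eliminate the loop
    }
}
```

This fragment is not standalone-runnable without the JMH dependency and its annotation processor, which generates the actual benchmark harness at build time.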
Step 3: Benchmark Modes and Scopes
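One way to see how modes and scopes combine is a sketch like the following (JMH dependency assumed; returned values are implicitly consumed by JMH, which prevents dead code elimination):

```java
import java.util.concurrent.atomic.AtomicLong;

import org.openjdk.jmh.annotations.Benchmark;
import org.openjdk.jmh.annotations.BenchmarkMode;
import org.openjdk.jmh.annotations.Mode;
import org.openjdk.jmh.annotations.Scope;
import org.openjdk.jmh.annotations.State;

public class ScopesBenchmark {

    @State(Scope.Benchmark)          // one instance shared by ALL benchmark threads
    public static class Shared {
        AtomicLong counter = new AtomicLong();
    }

    @State(Scope.Thread)             // one instance PER thread — no contention
    public static class PerThread {
        long counter;
    }

    @Benchmark
    @BenchmarkMode(Mode.Throughput)  // operations per unit time; higher is better
    public long contended(Shared s) {
        return s.counter.incrementAndGet();
    }

    @Benchmark
    @BenchmarkMode(Mode.AverageTime) // time per operation; lower is better
    public long uncontended(PerThread s) {
        return ++s.counter;
    }
}
```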
Step 4: GC Algorithm Selection
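The collector is chosen per JVM with -XX flags; one way to compare them is to rerun the same benchmark jar under each (the jar name here is illustrative):

```shell
# Parallel: throughput-oriented, stop-the-world collector
java -XX:+UseParallelGC -jar benchmarks.jar

# G1: balanced pause/throughput, the default in modern JDKs
java -XX:+UseG1GC -jar benchmarks.jar

# ZGC: concurrent low-pause collector
java -XX:+UseZGC -jar benchmarks.jar

# Serial: single-threaded, suited to small heaps
java -XX:+UseSerialGC -jar benchmarks.jar
```

These are command-line fragments, not a runnable script; they assume a benchmark uber-jar built by the JMH annotation processor.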
Step 5: GC Log Analysis
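GC events can be captured with JDK 9+ unified logging and then inspected with ordinary text tools; a sketch of the workflow (jar and file names illustrative):

```shell
# Write detailed GC events, timestamped, to gc.log
java -Xlog:gc*:file=gc.log:time,uptime,level,tags -jar benchmarks.jar

# Show pause events and their durations
grep "Pause" gc.log

# Count collections by type (Young, Full, etc.)
grep -o "Pause [A-Za-z]*" gc.log | sort | uniq -c
```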
Step 6: JIT Compilation Flags
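A few HotSpot flags make JIT activity visible while a benchmark runs (command-line fragments; jar name illustrative):

```shell
# Print each method as the JIT compiles it (tier, size, name)
java -XX:+PrintCompilation -jar benchmarks.jar

# Show inlining decisions; the diagnostic flag must be unlocked first
java -XX:+UnlockDiagnosticVMOptions -XX:+PrintInlining -jar benchmarks.jar

# Run interpreter-only — a quick way to see how much the JIT contributes
java -Xint -jar benchmarks.jar
```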
Step 7: String Interning and Constant Pool
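String literals are placed in the constant pool and shared, while new String(...) always allocates a fresh object; intern() maps any string back to the pooled copy. A small runnable demonstration:

```java
public class InternDemo {
    public static void main(String[] args) {
        String literal = "jmh";              // compile-time constant → string pool
        String built = new String("jmh");    // new heap object, not pooled

        System.out.println(literal == built);          // false: distinct objects
        System.out.println(literal == built.intern()); // true: intern() returns the pooled copy
        System.out.println(literal.equals(built));     // true: same contents
    }
}
```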
Step 8: Capstone — JMH Benchmark
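A capstone benchmark typically compares two implementations of the same task under identical JMH settings; one plausible sketch (JMH dependency assumed) pits += string concatenation against StringBuilder:

```java
import java.util.concurrent.TimeUnit;

import org.openjdk.jmh.annotations.Benchmark;
import org.openjdk.jmh.annotations.BenchmarkMode;
import org.openjdk.jmh.annotations.Fork;
import org.openjdk.jmh.annotations.Measurement;
import org.openjdk.jmh.annotations.Mode;
import org.openjdk.jmh.annotations.OutputTimeUnit;
import org.openjdk.jmh.annotations.Param;
import org.openjdk.jmh.annotations.Scope;
import org.openjdk.jmh.annotations.State;
import org.openjdk.jmh.annotations.Warmup;

@BenchmarkMode(Mode.AverageTime)
@OutputTimeUnit(TimeUnit.MICROSECONDS)
@Fork(1)
@Warmup(iterations = 3, time = 1)
@Measurement(iterations = 5, time = 1)
@State(Scope.Thread)
public class ConcatBenchmark {

    @Param({"10", "1000"})  // JMH reruns the benchmark once per value
    int size;

    @Benchmark
    public String plusConcat() {
        String s = "";
        for (int i = 0; i < size; i++) s += i; // allocates a new String each pass
        return s;                              // returned value is consumed → no DCE
    }

    @Benchmark
    public String builder() {
        StringBuilder sb = new StringBuilder();
        for (int i = 0; i < size; i++) sb.append(i); // amortized buffer growth
        return sb.toString();
    }
}
```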
Summary
Tool/Concept | CLI Flag / API | Purpose
Blackhole | Blackhole.consume() | Prevent dead code elimination
Forking | @Fork | Run each benchmark in a fresh JVM
Warmup/measurement | @Warmup, @Measurement | Discard pre-JIT iterations; measure steady state
GC selection | -XX:+UseG1GC, -XX:+UseZGC, -XX:+UseParallelGC | Pick a collector for the workload
GC logging | -Xlog:gc* | Record GC events and pause times
JIT visibility | -XX:+PrintCompilation | See which methods get compiled
String interning | String.intern() | Reuse pooled string instances
