Measure Java code performance using nano-benchmarks: String building, collection lookup O(n) vs O(1) vs O(log n), stream vs for-loop, memoization speedup, and map insertion cost — producing data-driven conclusions about which approach to use when.
Background
Microbenchmarking Java is notoriously tricky: JIT compilation, JVM warmup, garbage collection, and CPU branch prediction all skew results. The standard tool is JMH (the Java Microbenchmark Harness). This lab builds JMH-style benchmarks without the dependency, following the same warmup-then-measure pattern. Key lesson: measure before optimising; intuition is wrong surprisingly often.
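The warmup-then-measure pattern can be sketched with a minimal harness like the one below. Class and method names (`NanoBench`, `measureNanos`) are hypothetical, not part of the lab's starter code; the `sink` variable is a crude stand-in for JMH's `Blackhole`, keeping results "used" so the JIT cannot eliminate the work being timed.

```java
import java.util.function.Supplier;

/** Minimal JMH-style harness sketch (hypothetical names): untimed warmup
 *  iterations let the JIT compile the hot path before timed runs begin. */
public class NanoBench {
    public static long measureNanos(Supplier<?> task, int warmup, int runs) {
        Object sink = null;                       // crude "blackhole": defeats dead-code elimination
        for (int i = 0; i < warmup; i++) {
            sink = task.get();                    // untimed: triggers JIT compilation of the hot path
        }
        long best = Long.MAX_VALUE;
        for (int i = 0; i < runs; i++) {
            long t0 = System.nanoTime();
            sink = task.get();
            long elapsed = System.nanoTime() - t0;
            if (elapsed < best) best = elapsed;   // report best run to damp GC/scheduler noise
        }
        if (sink == null) System.out.print("");   // keep sink observably live
        return best;
    }

    public static void main(String[] args) {
        // Example use: String concatenation vs StringBuilder (one of the lab's topics).
        long concat = measureNanos(() -> {
            String s = "";
            for (int i = 0; i < 1_000; i++) s += i;        // O(n^2): copies the string each append
            return s;
        }, 1_000, 50);
        long builder = measureNanos(() -> {
            StringBuilder sb = new StringBuilder();
            for (int i = 0; i < 1_000; i++) sb.append(i);  // amortised O(n)
            return sb.toString();
        }, 1_000, 50);
        System.out.printf("concat: %d ns, builder: %d ns%n", concat, builder);
    }
}
```

Reporting the best (minimum) run rather than the mean is a common trick in hand-rolled harnesses: noise from GC pauses and scheduling only ever adds time, so the minimum is the least-contaminated sample.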
💡 JIT warmup matters more than you think. The JVM interprets bytecode at first, then profiles hot methods and compiles them to native code (the C1 tier, then C2). A method called 10,000 times can run 10–100x faster than one called 10 times. Always warm up (run the code several times before timing) and run enough measured iterations to amortise JIT and scheduler variability. JMH handles all of this automatically.
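The warmup effect is easy to observe directly. This sketch (hypothetical `WarmupDemo` class, not part of the lab) times successive batches of calls to the same small method: the early batches run interpreted, later batches run JIT-compiled code, so printed batch times typically drop sharply after the first one or two. Absolute numbers vary by machine and JVM.

```java
/** Hypothetical demo: time repeated batches of calls to one hot method.
 *  Early batches run interpreted; later batches run JIT-compiled code. */
public class WarmupDemo {
    static long work(int n) {                 // small hot method the JIT will compile
        long sum = 0;
        for (int i = 0; i < n; i++) sum += i * 31L % 7;
        return sum;
    }

    public static void main(String[] args) {
        long sink = 0;                        // keeps results live so work() is not optimised away
        for (int batch = 0; batch < 5; batch++) {
            long t0 = System.nanoTime();
            for (int i = 0; i < 20_000; i++) sink += work(1_000);
            long ms = (System.nanoTime() - t0) / 1_000_000;
            System.out.printf("batch %d: %d ms%n", batch, ms);
        }
        if (sink == 42) System.out.println(); // observably use sink
    }
}
```

Running this with `-XX:+PrintCompilation` also shows the compiler kicking in mid-run, which is exactly the variability the warmup phase of a benchmark is meant to absorb.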