Lab 10: Performance Profiling & Benchmarking

Objective

Measure Java code performance using micro-benchmarks: string building, collection lookup (O(1) vs O(log n) vs O(n)), stream vs for-loop, memoization speedup, and map insertion cost, producing data-driven conclusions about which approach to use when.
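One of the comparisons above, stream vs for-loop, can be sketched like this (class and method names are illustrative, not part of the lab's starter code):

```java
import java.util.stream.LongStream;

public class StreamVsLoop {
    // Plain indexed loop: usually the baseline the JIT optimises best.
    static long sumLoop(long n) {
        long sum = 0;
        for (long i = 0; i < n; i++) sum += i;
        return sum;
    }

    // Equivalent sequential stream pipeline.
    static long sumStream(long n) {
        return LongStream.range(0, n).sum();
    }

    public static void main(String[] args) {
        long n = 10_000_000L;
        // Warmup so both paths are JIT-compiled before timing.
        for (int i = 0; i < 5; i++) { sumLoop(n); sumStream(n); }

        long t0 = System.nanoTime();
        long a = sumLoop(n);
        long loopNs = System.nanoTime() - t0;

        t0 = System.nanoTime();
        long b = sumStream(n);
        long streamNs = System.nanoTime() - t0;

        System.out.printf("loop: %d ns, stream: %d ns, equal results: %b%n",
                loopNs, streamNs, a == b);
    }
}
```

Both compute the same sum; only the timing differs, and the gap typically shrinks after warmup.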

Background

Microbenchmarking Java is notoriously tricky — JIT compilation, JVM warmup, GC, and CPU branch prediction all affect results. The correct tool is JMH (Java Microbenchmark Harness). This lab builds JMH-style benchmarks without the dependency, covering the same warmup/run pattern. Key lesson: measure before optimising — intuition is wrong surprisingly often.
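The warmup/run pattern can be sketched as a tiny harness like the one below (a simplified stand-in for what JMH does; the class and method names are illustrative). Note that JMH additionally prevents dead-code elimination with its Blackhole, which this sketch does not attempt:

```java
import java.util.function.Supplier;

public class Bench {
    // Run the workload untimed so the JIT can profile and compile it,
    // then time several measured iterations and report the best run.
    static long bestNanos(Supplier<?> workload, int warmups, int runs) {
        for (int i = 0; i < warmups; i++) workload.get();   // warmup: trigger JIT
        long best = Long.MAX_VALUE;
        for (int i = 0; i < runs; i++) {
            long start = System.nanoTime();
            workload.get();
            best = Math.min(best, System.nanoTime() - start);
        }
        return best;
    }

    public static void main(String[] args) {
        long t = bestNanos(() -> {
            long sum = 0;
            for (int i = 0; i < 1_000_000; i++) sum += i;
            return sum;   // return the result so the loop is not trivially dead
        }, 5, 10);
        System.out.println("best run: " + t + " ns");
    }
}
```

Taking the best (or median) of several runs rather than a single timing reduces noise from GC pauses and scheduler jitter.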

Time

25 minutes

Prerequisites

  • Lab 08 (ForkJoinPool)

Tools

  • Docker: zchencow/innozverse-java:latest


Lab Instructions

Steps 1–8:

  • String bench
  • Collection lookup
  • Stream vs for-loop
  • Memoization
  • Map comparison
  • Allocation
  • Key lessons
  • Capstone
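The string bench step boils down to comparing repeated `+=` on a String (which copies the whole string each time, O(n²) total) against StringBuilder (amortised O(1) per append). A hedged sketch, with illustrative names:

```java
public class StringBench {
    // Each += allocates a brand-new String and copies all prior characters.
    static String concat(int n) {
        String s = "";
        for (int i = 0; i < n; i++) s += "x";
        return s;
    }

    // StringBuilder grows its internal buffer, so appends are amortised O(1).
    static String builder(int n) {
        StringBuilder sb = new StringBuilder();
        for (int i = 0; i < n; i++) sb.append("x");
        return sb.toString();
    }

    public static void main(String[] args) {
        int n = 20_000;
        long t0 = System.nanoTime();
        concat(n);
        long concatNs = System.nanoTime() - t0;

        t0 = System.nanoTime();
        builder(n);
        long builderNs = System.nanoTime() - t0;

        System.out.printf("concat: %d ns, builder: %d ns%n", concatNs, builderNs);
    }
}
```

Both methods produce identical strings; only the construction cost differs, and the gap widens rapidly as n grows.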

💡 JIT warmup matters more than you think. The JVM interprets bytecode initially, then profiles hot methods and compiles them to native code (C1, then C2 tiers). A method called 10,000 times will run 10–100x faster than one called 10 times. Always warm up (run the code several times before timing) and run enough iterations to amortise JIT variability. JMH handles all this automatically.
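Applying that tip to the collection lookup step, a sketch of timing `contains` on an ArrayList vs a HashSet after warmup (names are illustrative, not the lab's starter code):

```java
import java.util.ArrayList;
import java.util.Collection;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

public class LookupBench {
    // Look up every value 0..upTo-1 and AND the results together.
    static boolean scanAll(Collection<Integer> c, int upTo) {
        boolean found = true;
        for (int i = 0; i < upTo; i++) found &= c.contains(i);
        return found;
    }

    public static void main(String[] args) {
        int n = 10_000;
        List<Integer> list = new ArrayList<>();
        Set<Integer> set = new HashSet<>();
        for (int i = 0; i < n; i++) { list.add(i); set.add(i); }

        // Warmup: let the JIT compile both contains() paths before timing.
        for (int i = 0; i < 5; i++) { scanAll(list, n); scanAll(set, n); }

        long t0 = System.nanoTime();
        scanAll(list, n);   // O(n) per contains -> O(n^2) total
        long listNs = System.nanoTime() - t0;

        t0 = System.nanoTime();
        scanAll(set, n);    // O(1) avg per contains -> O(n) total
        long setNs = System.nanoTime() - t0;

        System.out.printf("ArrayList: %d ns, HashSet: %d ns%n", listNs, setNs);
    }
}
```

Without the warmup loop, the first-timed structure pays the interpretation and compilation cost and the comparison is skewed.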

📸 Verified Output:


Summary

Operation   Data structure   Complexity       Notes
Lookup      HashSet          O(1) avg         Best for membership tests
Lookup      TreeSet          O(log n)         Sorted, bounded queries
Lookup      ArrayList        O(n)             Never for large N
Insertion   HashMap          O(1) amortised   Fastest map
Insertion   TreeMap          O(log n)         Sorted keys
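The memoization step from the lab can be reproduced with the classic naive-vs-cached Fibonacci (a common illustration; the lab's actual workload may differ, and the names here are assumptions):

```java
import java.util.HashMap;
import java.util.Map;

public class MemoBench {
    // Exponential: recomputes the same subproblems O(2^n) times.
    static long fibNaive(int n) {
        return n < 2 ? n : fibNaive(n - 1) + fibNaive(n - 2);
    }

    static final Map<Integer, Long> cache = new HashMap<>();

    // Linear: each subproblem is computed once and cached.
    // (Explicit get/put rather than computeIfAbsent, which must not
    // recursively modify the map it is computing into.)
    static long fibMemo(int n) {
        if (n < 2) return n;
        Long cached = cache.get(n);
        if (cached != null) return cached;
        long v = fibMemo(n - 1) + fibMemo(n - 2);
        cache.put(n, v);
        return v;
    }

    public static void main(String[] args) {
        int n = 32;
        long t0 = System.nanoTime();
        long a = fibNaive(n);
        long naiveNs = System.nanoTime() - t0;

        t0 = System.nanoTime();
        long b = fibMemo(n);
        long memoNs = System.nanoTime() - t0;

        System.out.printf("naive: %d ns, memoized: %d ns, equal: %b%n",
                naiveNs, memoNs, a == b);
    }
}
```

The speedup here comes from an algorithmic change (O(2^n) to O(n)), which is exactly the kind of win profiling should steer you toward before any micro-tuning.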

Further Reading

Last updated