Lab 14: Concurrency Basics
Objective
Create and manage threads, use ExecutorService, synchronize shared state, use AtomicInteger, ConcurrentHashMap, CompletableFuture, and understand the happens-before relationship.
Background
Java has first-class concurrency support built into the language and JVM. Understanding threads, synchronization, and the java.util.concurrent package is essential for backend services, parallel data processing, and responsive applications. Virtual threads (Project Loom, Java 21) make I/O-bound concurrency dramatically simpler.
Time
45 minutes
Prerequisites
Lab 08 (Interfaces — Functional Interfaces)
Lab 10 (Exception Handling)
Tools
Java 21 (Eclipse Temurin)
Docker image:
innozverse-java:latest
Lab Instructions
Step 1: Creating Threads
💡 `start()` creates a new OS thread and calls `run()`; calling `run()` directly executes in the current thread, a very common mistake. Always use `start()`. `join()` blocks the calling thread until the target thread terminates. `sleep()` pauses the current thread without releasing any locks it holds.
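The lab's listing for this step isn't shown here, so the following is a minimal sketch of what it might look like (the class and thread names are ours, not from the lab):

```java
public class Step1Threads {
    // Runs a task on a new thread and returns the name of the thread that executed it.
    static String runAndCaptureThreadName() throws InterruptedException {
        String[] name = new String[1];
        Thread worker = new Thread(
                () -> name[0] = Thread.currentThread().getName(), "worker-1");
        worker.start();   // spawns a new thread; run() alone would execute right here
        worker.join();    // block until worker terminates
        return name[0];
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println("task ran on: " + runAndCaptureThreadName());
        System.out.println("main is:     " + Thread.currentThread().getName());
    }
}
```

Naming the thread (`"worker-1"`) makes the output deterministic even though scheduling is not.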
📸 Verified Output:
(output order varies — threads run concurrently)
Step 2: Race Conditions & Synchronization
💡 Race conditions occur when multiple threads read-modify-write shared state without synchronization.
`count++` is three operations: read, add, write; another thread can interleave between them. `synchronized` uses a monitor lock; `AtomicInteger` uses a CPU compare-and-swap (CAS) instruction, which is typically faster because it avoids blocking.
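A sketch of the kind of demo this step likely runs, comparing all three approaches side by side (class name and counts are our choices):

```java
import java.util.concurrent.atomic.AtomicInteger;

public class Step2Race {
    static int unsafeCount = 0;
    static int syncCount = 0;
    static final Object lock = new Object();
    static final AtomicInteger atomicCount = new AtomicInteger();

    static void run(int threads, int increments) throws InterruptedException {
        Thread[] ts = new Thread[threads];
        for (int i = 0; i < threads; i++) {
            ts[i] = new Thread(() -> {
                for (int j = 0; j < increments; j++) {
                    unsafeCount++;                        // read-add-write: can interleave
                    synchronized (lock) { syncCount++; }  // monitor lock
                    atomicCount.incrementAndGet();        // CAS loop, no blocking
                }
            });
            ts[i].start();
        }
        for (Thread t : ts) t.join();
    }

    public static void main(String[] args) throws InterruptedException {
        run(4, 10_000);
        System.out.println("unsafe:       " + unsafeCount);        // usually < 40000
        System.out.println("synchronized: " + syncCount);          // always 40000
        System.out.println("atomic:       " + atomicCount.get());  // always 40000
    }
}
```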
📸 Verified Output:
(unsafe count varies each run — that's the bug)
Step 3: ExecutorService — Thread Pools
💡 Never create threads manually in production code; use `ExecutorService`. It reuses threads (creation is expensive), limits concurrency (preventing thread explosion), and provides `Future` for async results. `Executors.newFixedThreadPool(n)` is ideal for CPU-bound work; use `Executors.newCachedThreadPool()` for many short I/O tasks.
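A minimal sketch of a fixed pool with `Future` results, assuming Java 21 where `ExecutorService` is `AutoCloseable` (the sum-of-squares task is our illustration, not the lab's):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class Step3Pool {
    static int sumOfSquares(int n) throws Exception {
        // Java 21: try-with-resources; close() waits for submitted tasks to finish.
        try (ExecutorService pool = Executors.newFixedThreadPool(4)) {
            List<Future<Integer>> futures = new ArrayList<>();
            for (int i = 1; i <= n; i++) {
                final int x = i;                       // captured variable must be effectively final
                futures.add(pool.submit(() -> x * x)); // Callable<Integer> -> Future<Integer>
            }
            int total = 0;
            for (Future<Integer> f : futures) total += f.get(); // blocks per result
            return total;
        }
    }

    public static void main(String[] args) throws Exception {
        System.out.println("sum of squares 1..10 = " + sumOfSquares(10)); // 385
    }
}
```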
📸 Verified Output:
Step 4: CompletableFuture — Async Pipelines
💡 `CompletableFuture` is non-blocking by default; each stage runs asynchronously in the ForkJoinPool. `thenApply` chains synchronous transformations; `thenCompose` chains async ones (like `flatMap`). `thenCombine` merges two independent futures; they run in parallel, and the combiner is called when both complete.
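One possible shape for this step's pipeline, exercising all three combinators (the numbers are arbitrary values we chose to make the result checkable):

```java
import java.util.concurrent.CompletableFuture;

public class Step4Async {
    static int pipeline() {
        CompletableFuture<Integer> price = CompletableFuture.supplyAsync(() -> 100);
        CompletableFuture<Integer> tax   = CompletableFuture.supplyAsync(() -> 20);

        return price
            .thenApply(p -> p * 2)                          // sync transform: 200
            .thenCompose(p ->                               // async continuation (flatMap-like)
                CompletableFuture.supplyAsync(() -> p + 1)) // 201
            .thenCombine(tax, Integer::sum)                 // merge independent future: 221
            .join();                                        // block only at the very end
    }

    public static void main(String[] args) {
        System.out.println("total = " + pipeline()); // 221
    }
}
```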
📸 Verified Output:
Step 5: Concurrent Collections
💡 `ConcurrentHashMap` uses fine-grained per-bin locking and CAS (not one lock on the whole map, as `Collections.synchronizedMap()` effectively does), giving much better throughput. `BlockingQueue` coordinates producer and consumer threads without explicit signaling: `put()` blocks when the queue is full, `take()` blocks when it is empty. This is the standard work-queue pattern.
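A sketch of the producer/consumer pattern described above, using a poison-pill value to end the stream (class name, item count, and the `-1` sentinel are our assumptions):

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.ConcurrentHashMap;

public class Step5Collections {
    static int produceConsume(int items) throws InterruptedException {
        BlockingQueue<Integer> queue = new ArrayBlockingQueue<>(4); // put() blocks when full
        ConcurrentHashMap<Integer, Integer> seen = new ConcurrentHashMap<>();
        final int[] sum = {0};

        Thread producer = new Thread(() -> {
            try {
                for (int i = 1; i <= items; i++) queue.put(i);
                queue.put(-1); // poison pill: tells the consumer to stop
            } catch (InterruptedException e) { Thread.currentThread().interrupt(); }
        });
        Thread consumer = new Thread(() -> {
            try {
                while (true) {
                    int v = queue.take();           // blocks when empty
                    if (v == -1) break;
                    seen.merge(v, 1, Integer::sum); // atomic update, no external lock
                    sum[0] += v;
                }
            } catch (InterruptedException e) { Thread.currentThread().interrupt(); }
        });
        producer.start();
        consumer.start();
        producer.join();
        consumer.join();
        return sum[0];
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println("consumed sum = " + produceConsume(10)); // 55
    }
}
```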
📸 Verified Output:
Step 6: Virtual Threads (Java 21)
💡 Virtual threads (Project Loom, Java 21) are JVM-managed lightweight threads; you can have millions of them. They're scheduled onto a small pool of OS carrier threads, and blocking operations (like I/O) unmount the virtual thread until data is ready. This makes blocking code as scalable as async code without the complexity.
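A minimal sketch of the one-virtual-thread-per-task pattern; the task count and sleep duration are arbitrary values standing in for real I/O:

```java
import java.util.concurrent.Executors;
import java.util.concurrent.atomic.AtomicInteger;

public class Step6Virtual {
    static int runManyTasks(int n) throws InterruptedException {
        AtomicInteger done = new AtomicInteger();
        // One virtual thread per task: cheap enough even for tens of thousands.
        try (var executor = Executors.newVirtualThreadPerTaskExecutor()) {
            for (int i = 0; i < n; i++) {
                executor.submit(() -> {
                    try {
                        Thread.sleep(10); // blocking call: the virtual thread unmounts here
                    } catch (InterruptedException e) { Thread.currentThread().interrupt(); }
                    done.incrementAndGet();
                });
            }
        } // close() waits for all submitted tasks to complete
        return done.get();
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println("completed: " + runManyTasks(10_000));
    }
}
```

Running 10,000 platform threads this way would exhaust OS resources; with virtual threads it completes in well under a second.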
📸 Verified Output:
Step 7: Locks & Conditions
💡 `ReentrantLock` + `Condition` gives you explicit lock control: `await()` atomically releases the lock and waits; `signal()` wakes one waiter. This is more flexible than `synchronized` with `wait()`/`notify()`: you can have multiple conditions per lock, timed waits, and `tryLock()`.
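A common illustration of this pattern is a bounded buffer with separate `notFull`/`notEmpty` conditions; the sketch below assumes that's roughly what this step builds (class name and capacities are ours):

```java
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.concurrent.locks.Condition;
import java.util.concurrent.locks.ReentrantLock;

public class Step7Lock {
    private final Deque<Integer> buffer = new ArrayDeque<>();
    private final int capacity;
    private final ReentrantLock lock = new ReentrantLock();
    private final Condition notFull  = lock.newCondition(); // two conditions on one lock:
    private final Condition notEmpty = lock.newCondition(); // impossible with wait()/notify()

    Step7Lock(int capacity) { this.capacity = capacity; }

    void put(int v) throws InterruptedException {
        lock.lock();
        try {
            while (buffer.size() == capacity) notFull.await(); // releases lock while waiting
            buffer.addLast(v);
            notEmpty.signal(); // wake one consumer
        } finally { lock.unlock(); }
    }

    int take() throws InterruptedException {
        lock.lock();
        try {
            while (buffer.isEmpty()) notEmpty.await();
            int v = buffer.removeFirst();
            notFull.signal(); // wake one producer
            return v;
        } finally { lock.unlock(); }
    }

    static int demo() throws InterruptedException {
        Step7Lock buf = new Step7Lock(2); // small capacity forces the producer to block
        Thread producer = new Thread(() -> {
            try { for (int i = 1; i <= 5; i++) buf.put(i); }
            catch (InterruptedException e) { Thread.currentThread().interrupt(); }
        });
        producer.start();
        int sum = 0;
        for (int i = 0; i < 5; i++) sum += buf.take();
        producer.join();
        return sum;
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println("sum = " + demo()); // 15
    }
}
```

Note the `while` (not `if`) around `await()`: waiters must re-check the condition after waking, since `signal()` only indicates the state *may* have changed.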
📸 Verified Output:
Step 8: Complete Example — Concurrent Web Crawler
💡 This crawler uses virtual threads for each fetch (I/O-bound work is a perfect fit), `ConcurrentHashMap.newKeySet()` for a thread-safe visited set, `BlockingQueue` for the URL frontier, and `AtomicInteger` for counters, all without a single `synchronized` keyword. This is idiomatic modern Java concurrency.
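Since the lab's listing isn't reproduced here, the sketch below shows the same architecture against a simulated in-memory link graph instead of real HTTP fetches (the graph, URL paths, and `pending` bookkeeping are our assumptions, not the lab's code):

```java
import java.util.List;
import java.util.Map;
import java.util.Set;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.Executors;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

public class Step8Crawler {
    // Simulated link graph standing in for real HTTP fetches (illustration only).
    static final Map<String, List<String>> LINKS = Map.of(
        "/",  List.of("/a", "/b"),
        "/a", List.of("/b", "/c"),
        "/b", List.of("/c"),
        "/c", List.of());

    static int crawl(String start) throws InterruptedException {
        Set<String> visited = ConcurrentHashMap.newKeySet();      // thread-safe visited set
        BlockingQueue<String> frontier = new LinkedBlockingQueue<>(); // URL frontier
        AtomicInteger pages = new AtomicInteger();
        AtomicInteger pending = new AtomicInteger(1);             // queued-but-unfinished URLs

        visited.add(start);
        frontier.put(start);
        try (var executor = Executors.newVirtualThreadPerTaskExecutor()) {
            while (pending.get() > 0) {
                String url = frontier.poll(100, TimeUnit.MILLISECONDS);
                if (url == null) continue;                        // workers may still be running
                executor.submit(() -> {                           // one virtual thread per fetch
                    pages.incrementAndGet();                      // "fetch" the page
                    for (String next : LINKS.getOrDefault(url, List.of())) {
                        if (visited.add(next)) {                  // atomic: true exactly once per URL
                            pending.incrementAndGet();
                            frontier.add(next);
                        }
                    }
                    pending.decrementAndGet();
                });
            }
        }
        return pages.get();
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println("pages crawled: " + crawl("/")); // 4 unique pages
    }
}
```

The atomic `visited.add(next)` check-and-insert is what guarantees each URL is enqueued at most once even when two fetches discover it simultaneously.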
📸 Verified Output:
Verification
Summary
You've covered thread creation, race conditions, synchronized and AtomicInteger, ExecutorService, CompletableFuture async pipelines, concurrent collections, Java 21 virtual threads, ReentrantLock/Condition, and a concurrent crawler. Concurrency is hard; the keys are to share nothing mutable, to prefer higher-level abstractions (executors, CompletableFuture, BlockingQueue), and to use virtual threads for I/O-bound work.
Further Reading