Mastering Java concurrency is essential for building high-performance, scalable, and responsive applications. In modern development, concurrency enables Java programs to execute multiple tasks simultaneously—maximizing CPU utilization and ensuring smooth user experiences.
In this complete guide, we’ll dive deep into Java concurrency, covering everything from basic thread creation to advanced synchronization techniques. You’ll learn:
✔️ How to create and manage threads efficiently
✔️ The importance of synchronization and avoiding race conditions
✔️ How the Executor framework and thread pools boost performance
✔️ Advanced tools like ReentrantLocks, Semaphores, and Atomic variables
✔️ Best practices for writing deadlock-free, thread-safe code
With real-world examples and step-by-step explanations, this guide will help you master Java concurrency and confidently build robust, multi-threaded applications. 🚀 Let’s dive in!
Thread vs Runnable
A Thread is a single, sequential unit of execution that runs independently within a program.
Commonly used for background and parallel processing when the order of execution doesn’t matter.
In Java, there are two primary ways to create a thread:
Extending the Thread class
- The class directly extends Thread and overrides the run() method to define the task.
- Call the start() method to begin execution in a new thread.
- Limitation: Java supports only single inheritance, so extending Thread prevents the class from extending any other class.
Implementing the Runnable interface (Recommended)
- The class implements Runnable and overrides the run() method.
- A Thread object is created with the Runnable instance and started using start().
- This approach promotes separation of concerns, making code more modular and maintainable.
- Allows the class to extend another class or implement multiple interfaces — offering greater flexibility in design.
Example
static class MyThread extends Thread {
public void run() {
System.out.println("MyThread's thread: " + Thread.currentThread().getName());
}
}
static class MyRunnable implements Runnable {
public void run() {
System.out.println("MyRunnable's thread: " + Thread.currentThread().getName());
}
}
/* *** *** *** *** *** test code *** *** *** *** *** */
System.out.println("Current thread: " + Thread.currentThread().getName());
// Current thread: main
MyThread t1 = new MyThread();
t1.start(); // Starts a new thread
// MyThread's thread: Thread-0
Thread t2 = new Thread(new MyRunnable());
t2.start(); // Starts a new thread
// MyRunnable's thread: Thread-1
Synchronized Keyword
The synchronized keyword is used to prevent race conditions in multithreaded environments.
Ensures that only one thread at a time can execute a critical section of code.
Provides mutual exclusion by using a monitor lock to synchronize access to shared resources.
Two ways to use synchronized:
Synchronized Method
- Declares an entire method as synchronized.
- Locks on the current object (this) for instance methods.
- Locks on the class object for static methods.
Synchronized Block
- Synchronizes only a specific portion of code inside a method.
- Allows for fine-grained control over what is locked.
- Can specify the object to lock on (any non-null object).
Example
static class SharedResource {
private final Object lock = new Object();
private int count = 1;
public synchronized void incrementUsingSynchronizedMethod() {
for (int i = 1; i <= 3; i++) {
System.out.println("[Synchronized Method] " + Thread.currentThread().getName() + ": " + count++);
}
}
public void incrementUsingSynchronizedBlock() {
synchronized (this) {
for (int i = 1; i <= 3; i++) {
System.out.println("[Synchronized Block] " + Thread.currentThread().getName() + ": " + count++);
}
}
}
public void incrementUsingSynchronizedLockObject() {
synchronized (lock) {
for (int i = 1; i <= 3; i++) {
System.out.println("[Synchronized Lock Object] " + Thread.currentThread().getName() + ": " + count++);
}
}
}
}
/* *** *** *** *** *** test code *** *** *** *** *** */
SharedResource resource = new SharedResource();
new Thread(resource::incrementUsingSynchronizedMethod).start();
new Thread(resource::incrementUsingSynchronizedBlock).start();
new Thread(resource::incrementUsingSynchronizedLockObject).start();
// [Synchronized Method] Thread-2: 1
// [Synchronized Lock Object] Thread-4: 2
// [Synchronized Method] Thread-2: 3
// [Synchronized Lock Object] Thread-4: 4
// [Synchronized Method] Thread-2: 5
// [Synchronized Lock Object] Thread-4: 6
// [Synchronized Block] Thread-3: 7
// [Synchronized Block] Thread-3: 8
// [Synchronized Block] Thread-3: 9
Tip: In the example above, both the synchronized method and the synchronized block use the same shared lock (this). As a result, the second thread (e.g., Thread-3) must wait until the first thread (e.g., Thread-2) releases the lock before it can proceed.
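For completeness, here is a minimal sketch of the static-method case described above — a static synchronized method locks on the class object, which is equivalent to synchronizing on the class literal. The StaticCounter class and its field are illustrative:
static class StaticCounter {
    private static int total = 0;
    // A static synchronized method locks on StaticCounter.class, not on any instance
    public static synchronized void incrementStatic() {
        total++;
    }
    public static void incrementWithClassLock() {
        // Equivalent lock to the static synchronized method above
        synchronized (StaticCounter.class) {
            total++;
        }
    }
}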
wait(), notify(), and join()
These methods are used to coordinate threads — releasing and re-acquiring monitor locks while waiting, signaling waiting threads, and waiting for thread completion in a controlled way.
wait()
- Causes the current thread to wait until another thread invokes notify() or notifyAll() on the same object.
- Releases the monitor lock, allowing other threads to enter synchronized blocks on that object.
- Must be called within a synchronized context.
notify()
- Wakes up one waiting thread (chosen arbitrarily) that is waiting on the object’s monitor.
- The awakened thread will not run immediately; it must first re-acquire the lock.
- Used within a synchronized block or method.
notifyAll()
- Wakes up all threads that are waiting on the object's monitor.
- Only one of them will acquire the lock and continue; others will wait until the lock is released again.
join()
- Called on a thread to wait for it to finish execution.
- The calling thread is blocked until the target thread completes.
Example
static class SimpleProducerConsumer {
private int data;
private boolean ready = false;
public synchronized void produce(int value) {
System.out.println("Producer: Producing data = " + value);
data = value;
ready = true;
notify(); // Notify the waiting consumer
System.out.println("Producer: Data produced and notified.");
}
public synchronized int consume() throws InterruptedException {
while (!ready) {
System.out.println("Consumer: Waiting for data...");
wait(); // Wait until data is produced
}
System.out.println("Consumer: Data received = " + data);
return data;
}
}
/* *** *** *** *** *** test code *** *** *** *** *** */
SimpleProducerConsumer simpleProducerConsumer = new SimpleProducerConsumer();
Thread producer = new Thread(() -> {
try {
Thread.sleep(1000); // Simulate delay of 1000 ms
simpleProducerConsumer.produce(19);
} catch (InterruptedException e) { /* handle it */ }
});
Thread consumer = new Thread(() -> {
try {
int received = simpleProducerConsumer.consume();
} catch (InterruptedException e) { /* handle it */ }
});
producer.start(); // Starts Producer thread
consumer.start(); // Starts Consumer thread
// Use join() to wait for both threads to finish
producer.join(); // Main thread waits for producer to finish
consumer.join(); // Main thread waits for consumer to finish
System.out.println("Main thread: Producer and Consumer finished.");
// Consumer: Waiting for data...
// Producer: Producing data = 19
// Producer: Data produced and notified.
// Consumer: Data received = 19
// Main thread: Producer and Consumer finished.
LockSupport
Provides basic primitives for thread parking and unparking.
It serves as a flexible alternative to Object.wait()/notify() and Thread.sleep().
Key Features
- park()/unpark() – Blocks and unblocks threads (like wait()/notify() but cleaner and more flexible).
- No monitor required – Does not rely on synchronized blocks or object monitors.
- Permit-based mechanism – Each thread has a single permit that controls its parked state.
- Unpark-before-park works – If unpark() is called before park(), the permit is saved and prevents blocking — behaves like a token.
Example
Thread worker1 = new Thread(() -> {
System.out.println("WorkerThread-1: Waiting to be unparked...");
LockSupport.park(); // thread will block here
System.out.println("WorkerThread-1: Unparked and resumed.");
});
worker1.start();
try {
Thread.sleep(1000); /* Simulate delay */
} catch (InterruptedException e) { /* handle it */ }
System.out.println("Main thread: Unparking worker thread.");
LockSupport.unpark(worker1); // unblock the worker thread
// WorkerThread-1: Waiting to be unparked...
// Main thread: Unparking worker thread.
// WorkerThread-1: Unparked and resumed.
Thread worker2 = new Thread(() -> {
System.out.println("WorkerThread-2: Started execution...");
try {
Thread.sleep(1000); /* Simulate delay */
} catch (InterruptedException e) { /* handle it */ }
System.out.println("WorkerThread-2: Parking worker thread.");
LockSupport.park(); // will not block because it was already unparked
System.out.println("WorkerThread-2: Resumes immediately!");
});
worker2.start();
LockSupport.unpark(worker2); // pre-unpark — gives a "permit" before actual parking
System.out.println("Main thread: Unparked worker thread.");
// Main thread: Unparked worker thread.
// WorkerThread-2: Started execution...
// WorkerThread-2: Parking worker thread.
// WorkerThread-2: Resumes immediately!
Use Case: Custom lock implementations, task scheduling frameworks, and managing timeouts without blocking threads using Thread.sleep().
Tip: Prefer LockSupport over wait()/notify() for low-level thread control and non-blocking algorithms.
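Since the use case above mentions managing timeouts, here is a minimal sketch of a timed park using parkNanos(); the 500 ms duration is illustrative:
Thread timedWorker = new Thread(() -> {
    System.out.println("TimedWorker: parking for up to 500 ms...");
    LockSupport.parkNanos(TimeUnit.MILLISECONDS.toNanos(500)); // returns after the timeout or an earlier unpark()
    System.out.println("TimedWorker: resumed.");
});
timedWorker.start();
// TimedWorker: parking for up to 500 ms...
// TimedWorker: resumed.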
Thread Lifecycle
NEW
- The thread is created using new Thread(), but start() has not been called yet.
- It is not yet eligible for scheduling.
RUNNABLE
- After start() is called, the thread enters the RUNNABLE state.
- It is ready to run, but may not be executing immediately — depends on the OS thread scheduler.
BLOCKED
- The thread is waiting to acquire a monitor lock (e.g., trying to enter a synchronized block or method).
- It will remain blocked until the lock is available.
WAITING
- The thread is waiting indefinitely for another thread to perform an action.
- Common methods that lead to this state – Object.wait(), Thread.join() (without timeout), LockSupport.park().
TIMED_WAITING
- The thread is waiting for a specified time duration, after which it becomes RUNNABLE again.
- Common methods that lead to this state – Thread.sleep(time), Object.wait(timeout), Thread.join(timeout), LockSupport.parkNanos() or parkUntil().
TERMINATED
- The thread has completed execution (i.e., its run() method has finished).
- It can no longer be restarted.
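Example
The states above can be observed with Thread.getState(). A minimal sketch — the sleep durations are illustrative, and the exact states printed may vary with scheduling:
Thread stateDemo = new Thread(() -> {
    try { Thread.sleep(500); } catch (InterruptedException e) { Thread.currentThread().interrupt(); }
});
System.out.println("Before start(): " + stateDemo.getState()); // NEW
stateDemo.start();
System.out.println("After start(): " + stateDemo.getState()); // RUNNABLE
Thread.sleep(100); // give the thread time to reach sleep()
System.out.println("While sleeping: " + stateDemo.getState()); // TIMED_WAITING
stateDemo.join();
System.out.println("After join(): " + stateDemo.getState()); // TERMINATED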
Volatile Keyword
In multithreaded applications, threads can cache variables locally (e.g., in CPU registers or thread stacks) for performance.
The volatile keyword ensures that a variable is always read from and written to main memory, not from a thread-local cache.
Without volatile, a thread may keep reading a stale (outdated) value, unaware that another thread has updated it.
volatile guarantees visibility, not atomicity — it ensures all threads see the most recent value, but doesn’t protect against race conditions in compound actions (like count++).
Example
static class Worker {
private volatile boolean isRunning = true; // visible across threads
public void run() {
System.out.println(Thread.currentThread().getName() + ": started.");
while (isRunning) { /* simulate some work */ }
// stops when isRunning=false (reads from main memory)
System.out.println(Thread.currentThread().getName() + ": stopped.");
}
public void stopRunning() {
isRunning = false;
}
}
/* *** *** *** *** *** test code *** *** *** *** *** */
Worker worker = new Worker();
Thread t3 = new Thread(worker::run);
t3.start();
// let the thread t3 run for 2 seconds
Thread.sleep(2000);
System.out.println("Main thread: stopping worker...");
worker.stopRunning();
// wait for the worker to finish
t3.join();
System.out.println("Main thread: worker has stopped.");
// Thread-5: started.
// Main thread: stopping worker...
// Thread-5: stopped.
// Main thread: worker has stopped.
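To illustrate the atomicity caveat above, here is a hedged sketch showing how a volatile counter can still lose updates under contention; the iteration count is arbitrary, and the final total varies from run to run:
static class UnsafeCounter {
    volatile int count = 0; // visible to all threads, but count++ is a non-atomic read-modify-write
}
/* *** *** *** *** *** test code *** *** *** *** *** */
UnsafeCounter unsafe = new UnsafeCounter();
Runnable incrementTask = () -> {
    for (int i = 0; i < 10_000; i++) unsafe.count++;
};
Thread incA = new Thread(incrementTask);
Thread incB = new Thread(incrementTask);
incA.start(); incB.start();
incA.join(); incB.join();
System.out.println("Expected 20000, got: " + unsafe.count); // typically less than 20000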
Use Case: Flags and control signals shared between threads (e.g., isRunning, shouldStop), and scenarios where one thread writes and multiple threads read.
Atomic Variables
Provide lock-free, thread-safe operations on single variables.
Built on CPU-level atomic instructions like Compare-And-Swap (CAS) to avoid using synchronized.
Common Atomic types – AtomicInteger, AtomicLong, AtomicBoolean, AtomicReference<T>.
Useful Methods
incrementAndGet() – Atomically increments the value and returns the updated result.
getAndIncrement() – Returns the current value, then increments.
compareAndSet(expected, update) – Atomically sets the value only if the current value equals the expected value.
set()/get() – For direct reading/writing (volatile semantics).
Example
AtomicInteger counter = new AtomicInteger(0);
AtomicBoolean printed = new AtomicBoolean(false);
Runnable task = () -> {
for (int i = 0; i < 2; i++) {
int current = counter.incrementAndGet();
System.out.println(Thread.currentThread().getName() + ": Count = " + current);
// Only the first thread to win the CAS prints this (exactly once)
if (current >= 1 && printed.compareAndSet(false, true))
System.out.println(Thread.currentThread().getName() + ": Count reached 1! (printed only once)");
// Simulate delay
try { Thread.sleep(100); } catch (InterruptedException e) { /* handle it */ }
}
};
Thread t4 = new Thread(task, "Thread-1");
Thread t5 = new Thread(task, "Thread-2");
t4.start(); t5.start();
t4.join(); t5.join();
System.out.println("Main thread: Final Count = " + counter.get());
// Thread-2: Count = 2
// Thread-1: Count = 1
// Thread-2: Count reached 1! (printed only once)
// Thread-1: Count = 3
// Thread-2: Count = 4
// Main thread: Final Count = 4
Use Case: Atomic counters (e.g., request count, task completion tracking), CAS-based control flags (e.g., isInitialized).
Reentrant Lock
ReentrantLock is a flexible alternative to the synchronized keyword.
Reentrant means the same thread can acquire the lock multiple times without blocking or causing a deadlock.
Key Features
- Reentrancy – A thread can re-enter a lock it already holds without being blocked.
- tryLock() – Try to acquire the lock without blocking; useful to avoid deadlocks.
- Fairness policy – Option to ensure first-come-first-served access to avoid thread starvation.
- Condition support – Advanced alternative to wait()/notify(), allowing multiple wait sets using Condition objects.
The thread must release the lock the same number of times it acquired it.
Example
static class SimplePrinterQueue {
private final ReentrantLock lock = new ReentrantLock(true); // fair lock
private final Condition colorQueue = lock.newCondition();
private final Condition bwQueue = lock.newCondition();
private boolean printerBusy = false;
public void printJob(String jobType) {
boolean acquired = false;
try {
// Try to acquire lock with timeout
acquired = lock.tryLock(1, TimeUnit.SECONDS);
if (!acquired) {
System.out.println(Thread.currentThread().getName() + ": Could not acquire lock. Skipping " + jobType + " job.");
return;
}
Condition currentCondition = jobType.equals("color") ? colorQueue : bwQueue;
// Wait if printer is busy
while (printerBusy) {
System.out.println(Thread.currentThread().getName() + ": Waiting in " + jobType + " queue.");
currentCondition.await(); // release the lock and wait until signaled (printer free)
}
// Proceed to print
printerBusy = true;
System.out.println(Thread.currentThread().getName() + ": Printing a " + jobType + " job...");
Thread.sleep(500); // simulate print time
printerBusy = false;
System.out.println(Thread.currentThread().getName() + ": Finished printing.");
// Signal one waiting thread on each queue
colorQueue.signal();
bwQueue.signal();
} catch (InterruptedException e) {
// handle it
} finally {
if (acquired) {
lock.unlock();
}
}
}
}
/* *** *** *** *** *** test code *** *** *** *** *** */
SimplePrinterQueue printer = new SimplePrinterQueue();
Runnable colorTask = () -> printer.printJob("color");
Runnable bwTask = () -> printer.printJob("bw");
for (int i = 0; i < 2; i++) {
new Thread(colorTask, "ColorThread-" + i).start();
new Thread(bwTask, "BWThread-" + i).start();
}
// ColorThread-0: Printing a color job...
// ColorThread-0: Finished printing.
// BWThread-0: Printing a bw job...
// BWThread-1: Could not acquire lock. Skipping bw job.
// ColorThread-1: Could not acquire lock. Skipping color job.
// BWThread-0: Finished printing.
Best Practices
- Use tryLock() when you want to avoid waiting forever and handle contention gracefully.
- Always release the lock in a finally block to avoid deadlock and ensure proper cleanup.
- Use fair locks only when starvation is an issue, as they come with performance overhead.
Semaphore
A concurrency control mechanism that manages access to shared resources using a fixed number of permits.
Allows multiple threads (but only a limited number) to enter a critical section simultaneously.
Think of it as a "gatekeeper" that lets N threads in, while others wait their turn.
How It Works
- Threads acquire permits before accessing the shared resource and release them after completing their task.
- If no permits are available, the thread blocks until a permit is released.
- Can be fair or non-fair (fairness decides the order of waiting threads).
Example
static class DatabaseConnectionPool {
private static final int MAX_CONNECTIONS = 3;
private final Semaphore semaphore = new Semaphore(MAX_CONNECTIONS, true); // Fair semaphore (i.e. first-come-first-served)
public void connect() {
try {
semaphore.acquire(); // Wait for a permit
System.out.println(Thread.currentThread().getName() + ": Connected");
Thread.sleep(2000); // Simulate database work
} catch (InterruptedException e) { /* handle it */
} finally {
System.out.println(Thread.currentThread().getName() + ": Disconnected");
semaphore.release(); // Release the permit
}
}
}
/* *** *** *** *** *** test code *** *** *** *** *** */
DatabaseConnectionPool pool = new DatabaseConnectionPool();
// Create 5 threads trying to connect
for (int i = 0; i < 5; i++) {
new Thread(pool::connect, "DatabaseConnectionPoolThread-" + i).start();
}
// DatabaseConnectionPoolThread-0: Connected
// DatabaseConnectionPoolThread-2: Connected
// DatabaseConnectionPoolThread-1: Connected
// DatabaseConnectionPoolThread-2: Disconnected
// DatabaseConnectionPoolThread-1: Disconnected
// DatabaseConnectionPoolThread-4: Connected
// DatabaseConnectionPoolThread-0: Disconnected
// DatabaseConnectionPoolThread-3: Connected
// DatabaseConnectionPoolThread-4: Disconnected
// DatabaseConnectionPoolThread-3: Disconnected
Use Case: Resource pooling (e.g., limiting access to a fixed number of database connections), rate limiting (restricting concurrent access to APIs), and producer-consumer scenarios (controlling access to a bounded buffer).
Fork and Join Framework
Designed to leverage multi-core CPUs by splitting tasks into smaller subtasks and executing them in parallel.
Ideal for recursive algorithms that can be broken into independent subproblems.
Uses work-stealing, where idle threads dynamically "steal" tasks from busy threads to maximize CPU utilization.
Key Components
ForkJoinPool: A specialized thread pool for managing and executing ForkJoinTask instances.
RecursiveTask<V>: For tasks that return a result (e.g., computing a sum).
RecursiveAction: For tasks that perform actions but return no result (e.g., sorting an array).
How It Works
- Tasks are forked (split into subtasks), which may themselves fork further.
- Once subtasks complete, their results are joined to compute the final result.
- All tasks are managed by the ForkJoinPool, which uses an internal deque per thread and work-stealing to distribute load efficiently.
Example
static class SumTask extends RecursiveTask<Long> {
private static final int THRESHOLD = 3;
private final int[] arr;
private final int start, end;
public SumTask(int[] arr, int start, int end) {
this.arr = arr;
this.start = start;
this.end = end;
}
@Override
protected Long compute() {
if (end - start <= THRESHOLD) {
// Base case: sum directly
long sum = 0;
for (int i = start; i < end; i++) sum += arr[i];
return sum;
} else {
// Fork
int mid = (start + end) / 2;
SumTask left = new SumTask(arr, start, mid);
SumTask right = new SumTask(arr, mid, end);
left.fork(); // run left asynchronously
long rightResult = right.compute(); // compute right directly
long leftResult = left.join(); // wait for left
return leftResult + rightResult;
}
}
}
/* *** *** *** *** *** test code *** *** *** *** *** */
int[] array = new int[] {1, 2, 3, 4, 5, 6, 7, 8, 9, 10};
try (ForkJoinPool pool = ForkJoinPool.commonPool()) {
SumTask sumTask = new SumTask(array, 0, array.length);
long result = pool.invoke(sumTask);
System.out.println("Sum of [1..10] = " + result);
}
// Sum of [1..10] = 55
Performance Tips
- Choose a sensible threshold for splitting tasks: Too small = too many tasks = high overhead; Too large = fewer tasks = poor parallelism.
- Avoid blocking operations (e.g., I/O, Thread.sleep()) inside tasks — it prevents efficient thread reuse.
- Use the common pool (ForkJoinPool.commonPool()) instead of creating new pools unless isolation is needed.
Use Case: Divide-and-conquer algorithms like Merge Sort and Quick Sort, tree traversals (e.g., summing values in a binary tree), matrix multiplication, image processing.
Executor Framework
A high-level API introduced in Java to simplify thread creation, management, and task execution.
It abstracts the complexities of directly using Thread by offering a flexible and powerful thread pool model.
It simplifies concurrency with easy task submission, thread pooling for performance, and fine-tuned task execution.
Key Components of Executor Framework
Executor
The Executor is the foundational interface in the Executor Framework (java.util.concurrent).
Designed to decouple task submission from the mechanics of how each task is executed.
It represents a simple abstraction for running tasks asynchronously.
Key method
void execute(Runnable command) – Submits a fire-and-forget task for execution — no result is returned and no future is tracked.
Example
Executor executor = Executors.newSingleThreadExecutor();
executor.execute(() -> System.out.println("Running in background"));
((ExecutorService) executor).shutdown(); // Executor has no shutdown(); cast to ExecutorService (or declare it as one) to stop it
// Running in background
Future
Future represents the result of an asynchronous computation, submitted via ExecutorService.
It allows you to check task status, retrieve results, or cancel the task.
Key methods
get() – Blocks until the result is available. Throws exceptions if the task failed or was cancelled.
get(timeout, unit) – Waits up to the specified time for the result, then throws TimeoutException if not done.
isDone() – Returns true if the task is completed (successfully, with error, or cancelled).
cancel() – Attempts to cancel execution. If the task has already completed, it won't have any effect.
Limited functionality — cannot chain actions or handle results asynchronously, no callback support or composability.
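A minimal sketch of the timeout and cancellation methods listed above; the task body and delays are illustrative:
ExecutorService es = Executors.newSingleThreadExecutor();
Future<String> slow = es.submit(() -> {
    Thread.sleep(2000); // simulate slow work
    return "done";
});
try {
    slow.get(500, TimeUnit.MILLISECONDS); // wait at most 500 ms for the result
} catch (TimeoutException e) {
    System.out.println("Timed out, cancelling...");
    slow.cancel(true); // true = interrupt the running task
}
System.out.println("Cancelled: " + slow.isCancelled());
es.shutdown();
// Timed out, cancelling...
// Cancelled: true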
ExecutorService
ExecutorService is an enhanced version of the Executor interface, designed to manage a pool of threads and handle asynchronous task execution.
It provides additional methods to manage task submission, lifecycle control, and result tracking.
Key Features
- Manages a thread pool internally to improve efficiency and reuse.
- Supports both Runnable and Callable tasks.
- Returns a Future object for tracking task progress and results.
Common Methods
submit(Callable task) – Submits a value-returning task and returns a Future<T>.
submit(Runnable task) – Submits a task that does not return a value (still returns a Future<?> for tracking).
invokeAll(Collection<Callable<T>> tasks) – Executes all tasks and returns a list of Future<T> results (waits for all to complete).
invokeAny(Collection<Callable<T>> tasks) – Executes the tasks and returns the result of the first successfully completed one.
shutdown() – Initiates a graceful shutdown, allowing existing tasks to finish.
shutdownNow() – Attempts to immediately stop all executing tasks and returns a list of pending ones.
awaitTermination(timeout, unit) – Waits for all tasks to complete after shutdown, within the specified timeout.
Example
static class MyCallable implements Callable<Integer> {
@Override
public Integer call() throws Exception {
Thread.sleep(1000); // sleeps for a second
return 1;
}
}
ExecutorService executorService = Executors.newFixedThreadPool(2);
Future<Integer> future = executorService.submit(new MyCallable());
System.out.println("Task done: " + future.isDone()); // false
System.out.println("Future result: " + future.get()); // blocks until the task is done, then prints 1
executorService.shutdown();
// Task done: false
// Future result: 1
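The example above covers submit(); here is a hedged sketch of invokeAll() and invokeAny() from the method list (task bodies are illustrative, and which task invokeAny() returns may vary):
ExecutorService batchExecutor = Executors.newFixedThreadPool(2);
List<Callable<String>> tasks = List.of(
    () -> "task-1 result",
    () -> "task-2 result"
);
// invokeAll(): waits for every task and returns their Futures in submission order
List<Future<String>> results = batchExecutor.invokeAll(tasks);
for (Future<String> f : results) System.out.println(f.get());
// invokeAny(): returns the first successful result and cancels the rest
System.out.println("First completed: " + batchExecutor.invokeAny(tasks));
batchExecutor.shutdown();
// task-1 result
// task-2 result
// First completed: task-1 result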
ScheduledExecutorService
ScheduledExecutorService is a subinterface of ExecutorService used to schedule tasks for delayed or periodic execution.
A modern replacement for Timer and TimerTask, offering better thread control and exception handling.
Ideal for running cron-style jobs, heartbeat checks, retries, or polling mechanisms.
Key methods
schedule() – Schedules a one-time task to execute after a specified delay.
scheduleAtFixedRate() – Runs the task at a fixed rate, starting after initialDelay, regardless of how long each run takes. If a task takes longer than the interval, subsequent runs start late and execute back-to-back (they never overlap), so it is not ideal for heavy tasks.
scheduleWithFixedDelay() – Schedules a task to run repeatedly with a fixed delay between the end of one task and the start of the next. The next task starts only after the previous one finishes.
Example
ScheduledExecutorService scheduler = Executors.newScheduledThreadPool(1);
scheduler.schedule(() -> System.out.println("Delayed task"), 1, TimeUnit.SECONDS);
scheduler.scheduleAtFixedRate(() -> System.out.println("Fixed-Rate scheduled task"), 1, 1, TimeUnit.SECONDS);
scheduler.scheduleWithFixedDelay(() -> System.out.println("Fixed-Delay scheduled task"), 1, 2, TimeUnit.SECONDS);
Thread.sleep(3000); // Main thread sleeps for 3 seconds, so that scheduler can run a few iterations
scheduler.shutdown();
boolean terminated = scheduler.awaitTermination(5, TimeUnit.SECONDS);
System.out.println("Scheduler terminated: " + terminated);
// Delayed task
// Fixed-Rate scheduled task
// Fixed-Delay scheduled task
// Fixed-Rate scheduled task
// Fixed-Rate scheduled task
// Fixed-Delay scheduled task
// Scheduler terminated: true
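To make the difference between the two periodic modes concrete, here is a hedged sketch where the task (300 ms) outlasts the 200 ms period; the timings are illustrative:
ScheduledExecutorService demoScheduler = Executors.newScheduledThreadPool(2);
Runnable slowTask = () -> {
    System.out.println(Thread.currentThread().getName() + ": tick");
    try { Thread.sleep(300); } catch (InterruptedException e) { Thread.currentThread().interrupt(); }
};
// Fixed rate, 200 ms period: runs fall behind and execute back-to-back (never concurrently)
demoScheduler.scheduleAtFixedRate(slowTask, 0, 200, TimeUnit.MILLISECONDS);
// Fixed delay, 200 ms: next run starts 200 ms after the previous one finishes (~500 ms apart)
demoScheduler.scheduleWithFixedDelay(slowTask, 0, 200, TimeUnit.MILLISECONDS);
Thread.sleep(2000); // let a few iterations run
demoScheduler.shutdown();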
ThreadPoolExecutor
The core implementation behind most thread pool factories (like Executors.newFixedThreadPool()).
Offers fine-grained control over thread creation, lifecycle, queuing, and task rejection.
Key Parameters
corePoolSize – Minimum number of threads kept alive, even if idle.
maximumPoolSize – Maximum number of threads allowed in the pool.
keepAliveTime – Time to wait before killing excess idle threads (above core size).
unit – Time unit for keepAliveTime (e.g., TimeUnit.SECONDS).
workQueue – A blocking queue to hold pending tasks before execution.
threadFactory – Used to create new threads (can be customized).
handler – Defines the rejection policy when tasks can't be accepted.
Queue Strategies (for workQueue)
ArrayBlockingQueue – Bounded FIFO queue (fixed capacity).
LinkedBlockingQueue – Unbounded FIFO queue (default for newFixedThreadPool()).
PriorityBlockingQueue – Orders tasks based on priority (must implement Comparable).
Rejection Policies (for handler)
AbortPolicy (default) – Throws RejectedExecutionException.
CallerRunsPolicy – Executes the task in the caller’s thread.
DiscardPolicy – Silently discards the task.
DiscardOldestPolicy – Removes the oldest task in the queue and retries submission.
Task Execution Flow
- New tasks are executed by core threads if any are free.
- If all core threads are busy, tasks go into the work queue.
- If the queue is full, and the pool hasn’t reached maximumPoolSize, new threads are created.
- If max threads are also busy, the rejection policy is triggered.
Example
ThreadPoolExecutor tpe = new ThreadPoolExecutor(
2, // Core threads
3, // Max threads
60, // Keep-alive
TimeUnit.SECONDS,
new ArrayBlockingQueue<>(5), // Bounded queue
Executors.defaultThreadFactory(),
new ThreadPoolExecutor.CallerRunsPolicy() // Fallback
);
// Submit tasks
for (int i = 0; i < 10; i++) {
tpe.execute(() -> {
System.out.printf("Task running in %s [Active threads: %d, Queue size: %d, Completed tasks: %d]%n",
Thread.currentThread().getName(), tpe.getActiveCount(), tpe.getQueue().size(), tpe.getCompletedTaskCount());
});
}
tpe.shutdown(); // Graceful shutdown
// Task running in pool-4-thread-1 [Active threads: 2, Queue size: 0, Completed tasks: 0]
// Task running in pool-4-thread-2 [Active threads: 2, Queue size: 5, Completed tasks: 0]
// Task running in main [Active threads: 3, Queue size: 5, Completed tasks: 0]
// Task running in pool-4-thread-3 [Active threads: 3, Queue size: 5, Completed tasks: 0]
// Task running in pool-4-thread-3 [Active threads: 3, Queue size: 3, Completed tasks: 3]
// Task running in pool-4-thread-3 [Active threads: 3, Queue size: 2, Completed tasks: 4]
// Task running in pool-4-thread-1 [Active threads: 3, Queue size: 4, Completed tasks: 1]
// Task running in pool-4-thread-2 [Active threads: 3, Queue size: 3, Completed tasks: 2]
// Task running in pool-4-thread-3 [Active threads: 3, Queue size: 1, Completed tasks: 5]
// Task running in pool-4-thread-1 [Active threads: 3, Queue size: 0, Completed tasks: 6]
CompletableFuture
Introduced in Java 8, CompletableFuture enables non-blocking, asynchronous, and lock-free programming.
Provides a clean and powerful way to:
- Run tasks in the background
- Chain dependent operations
- Combine multiple async results
- Handle exceptions gracefully
Key Methods
runAsync() – Run a task asynchronously that returns void
supplyAsync() – Run a task that returns a result asynchronously
thenAccept() – Consume the result (no return value)
thenApply() – Transform the result
thenRun() – Run the next task without a result dependency
thenCompose() – Chain another async task based on the result
thenCombine() – Combine results of two independent futures
exceptionally() – Handle errors/exceptions gracefully
Advantages
- Lock-free execution using thread pools (default: ForkJoinPool.commonPool())
- Non-blocking by design – suitable for high-performance apps
Example
// run async task (no return value)
CompletableFuture<Void> future1 = CompletableFuture.runAsync(() -> {
System.out.println("Running in background");
}); // Running in background
// supply async task (returns value)
CompletableFuture<String> future2 = CompletableFuture.supplyAsync(() -> {
return "Hello World";
});
// blocking get
String result = future2.get();
// non-blocking callback
future2.thenAccept(result1 -> System.out.println("Result: " + result1)); // Result: Hello World
// thenApply() - transform result
CompletableFuture<String> future3 = CompletableFuture.supplyAsync(() -> "Hello")
.thenApply(s -> s + " World")
.thenApply(String::toUpperCase);
future3.thenAccept(System.out::println); // HELLO WORLD
// thenCompose() - chain dependent futures
CompletableFuture<String> getUser = CompletableFuture.supplyAsync(() -> "user123");
CompletableFuture<String> getOrder = getUser.thenCompose(user ->
CompletableFuture.supplyAsync(() -> "Order for " + user)
);
getOrder.thenAccept(System.out::println); // Order for user123
// thenCombine() - merge two futures
CompletableFuture<String> hello = CompletableFuture.supplyAsync(() -> "Hello");
CompletableFuture<String> world = CompletableFuture.supplyAsync(() -> "World");
hello.thenCombine(world, (h, w) -> h + " " + w)
.thenAccept(System.out::println); // "Hello World"
// allOf() - wait for all futures
CompletableFuture<Void> all = CompletableFuture.allOf(
CompletableFuture.supplyAsync(() -> "Task1"),
CompletableFuture.supplyAsync(() -> "Task2")
);
all.thenRun(() -> System.out.println("All tasks completed")); // All tasks completed
// exceptionally() - fallback Value
CompletableFuture.supplyAsync(() -> {
if (Math.random() > 0.5) throw new RuntimeException("Error!");
return "Success";
})
.exceptionally(ex -> "Fallback: " + ex.getMessage())
.thenAccept(System.out::println); // Fallback: java.lang.RuntimeException: Error!
// handle() - success/failure in one method
CompletableFuture.supplyAsync(() -> "Process data")
.handle((result2, ex) -> {
if (ex != null) return "Error occurred";
return result2.toUpperCase();
});
Use Case: Chaining asynchronous workflows (e.g., fetch → transform → store), running parallel computations and combining their results, and executing background I/O or CPU-intensive tasks without blocking main application threads.
ThreadLocal Variables
ThreadLocal allows you to create variables that are local to a thread.
Each thread gets its own isolated copy of the variable — no shared access between threads.
It provides a way to maintain state across multiple method calls within the same thread, without passing variables explicitly.
How It Works
- Internally uses a map-like structure tied to each thread.
- Once a thread sets a value, that value is accessible only to that thread, until it’s removed or the thread dies.
Example
// Create a ThreadLocal holder class
public class RequestContext {
private static final ThreadLocal<String> transactionId = new ThreadLocal<>();
public static void setTransactionId(String id) {
transactionId.set(id);
}
public static String getTransactionId() {
return transactionId.get();
}
public static void clear() {
transactionId.remove(); // Important to prevent memory leaks
}
}
// Set the transaction ID at the beginning of the request
@Component
public class TransactionIdFilter implements Filter {
@Override
public void doFilter(ServletRequest request, ServletResponse response, FilterChain chain)
throws IOException, ServletException {
try {
// Generate or extract transaction ID (e.g., from headers)
String txnId = UUID.randomUUID().toString();
RequestContext.setTransactionId(txnId);
// Continue the chain
chain.doFilter(request, response);
} finally {
// Clean up
RequestContext.clear();
}
}
}
/* *** *** *** *** *** test code *** *** *** *** *** */
// In any service or DAO class, you can access the transaction ID without passing it as a parameter
String txnId = RequestContext.getTransactionId();
System.out.println("Processing order for Transaction ID: " + txnId);
Use Case: Storing user session data (e.g., current user context in web applications), maintaining per-thread database connections, managing request-scoped objects in multi-threaded web servers, and propagating transaction or logging context (such as correlation IDs) across method calls within the same thread lifecycle.
Tip: If you're using ThreadLocal in environments like thread pools, remember to remove the value manually to prevent memory leaks.
CountDownLatch
A synchronization aid that allows one or more threads to wait until a set of operations completes in other threads.
Ideal for scenarios where a thread must wait for multiple other threads to finish before proceeding.
Key Characteristics
- Initialized with a count (number of events or threads to wait for).
- Each thread calls countDown() when it completes its task.
- One or more threads call await() to block until the count reaches zero.
- One-time use – Once the count hits zero, the latch cannot be reset or reused.
Example
static class Service implements Runnable {
private final String name;
private final int initTime;
private final CountDownLatch latch;
public Service(String name, int initTime, CountDownLatch latch) {
this.name = name;
this.initTime = initTime;
this.latch = latch;
}
@Override
public void run() {
try {
Thread.sleep(initTime);
System.out.println(name + " service initialized");
} catch (InterruptedException e) {
Thread.currentThread().interrupt();
} finally {
latch.countDown(); // Decrement count
}
}
}
/* *** *** *** *** *** test code *** *** *** *** *** */
CountDownLatch latch = new CountDownLatch(3); // Initialize latch with count=3 (for 3 services)
// Start service initialization threads
new Thread(new Service("Auth", 1000, latch)).start();
new Thread(new Service("Cache", 1500, latch)).start();
new Thread(new Service("Database", 2000, latch)).start();
// Main thread waits for all services
System.out.println("Waiting for services to initialize...");
latch.await();
// Proceed when count reaches 0
System.out.println("All services are ready! Starting application...");
// Waiting for services to initialize...
// Auth service initialized
// Cache service initialized
// Database service initialized
// All services are ready! Starting application...
Use Case: Multi-stage application startup (e.g., load configuration → connect to the database → start services), batch job coordination, where a final action is triggered only after all worker threads have finished.
ConcurrentHashMap
To refresh your understanding of the basic Map data structure, check out our Map in Java guide.
ConcurrentHashMap is a thread-safe, high-concurrency alternative to HashMap, designed specifically for use in multi-threaded environments.
It allows concurrent reads and fine-grained synchronized writes without blocking the entire map — making it ideal for highly scalable systems.
Key Features
- Concurrent Reads – Multiple threads can read without any locking.
- Concurrent Writes – Updates are synchronized at segment level (Java 7) or handled via CAS (Compare-And-Swap) plus per-bucket synchronization (Java 8+), allowing multiple threads to write safely to different segments or keys.
- Weakly consistent iterators – Iterators reflect the map's state at some point at or since their creation and never throw ConcurrentModificationException.
- Supports atomic compound operations – putIfAbsent(), compute(), merge().
Example
ConcurrentHashMap<String, Integer> counter1 = new ConcurrentHashMap<>();
Runnable task1 = () -> {
for (int i = 0; i < 1000; i++) {
counter1.merge("count", 1, (oldVal, newVal) -> oldVal + newVal);
}
};
Thread t6 = new Thread(task1);
Thread t7 = new Thread(task1);
t6.start(); t7.start();
t6.join(); t7.join();
System.out.println("Thread-safe total count: " + counter1.get("count"));
// Thread-safe total count: 2000
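A short follow-up sketch of the other atomic compound operations mentioned above (keys and values are illustrative; map printing order may vary):
ConcurrentHashMap<String, Integer> scores = new ConcurrentHashMap<>();
scores.putIfAbsent("alice", 10); // inserts only if the key is absent
scores.putIfAbsent("alice", 99); // no effect; "alice" is already present
scores.compute("alice", (k, v) -> v == null ? 1 : v + 5); // atomic read-modify-write
scores.computeIfAbsent("bob", k -> 7); // lazily initializes a missing key
System.out.println(scores); // {bob=7, alice=15}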
Use Case: Shared data structures across multiple threads, managing real-time counters, maintaining shared caches and connection pools, and supporting parallel computations or aggregations.
Virtual Threads
Virtual threads are lightweight threads managed entirely by the JVM, not by the operating system.
Introduced in Java 21 as a stable feature under Project Loom.
Designed to handle massive concurrency — making it possible to create thousands or even millions of threads efficiently.
Ideal for I/O-heavy applications where traditional platform threads are too costly.
Execution Lifecycle
Mount Phase
- Virtual thread is scheduled on a carrier thread (from the ForkJoinPool or a custom pool).
- Execution stack resides on the carrier thread's OS stack.
Yield Phase (during blocking operations like I/O or Thread.sleep())
- The thread suspends execution, and its stack is moved to heap (as a continuation).
- The carrier thread is released to run other tasks.
Resume Phase
- When the blocking call completes, the continuation is restored.
- Resumes on any available carrier thread, picking up exactly where it left off.
Key Features
- Fully compatible with the existing Thread API
- Each virtual thread is mapped to a carrier thread only during execution
- No thread pooling required — just create one per task (Thread.startVirtualThread(...))
- Designed for structured concurrency (e.g., scoped task management)
- Non-blocking Thread.sleep() – the JVM handles it by yielding the virtual thread instead of blocking the carrier
Example
// Create virtual thread (Option 1: Thread.startVirtualThread)
Thread vThread = Thread.startVirtualThread(() -> {
System.out.println(Thread.currentThread() + ": Hello from virtual thread!");
});
// Wait for completion
try { vThread.join(); } catch (InterruptedException e) { Thread.currentThread().interrupt(); }
// VirtualThread[#58]/runnable@ForkJoinPool-1-worker-1: Hello from virtual thread!
// Create virtual thread (Option 2: Builder pattern)
Thread.ofVirtual()
.name("my-virtual-thread")
.start(() -> {
System.out.println(Thread.currentThread() + ": Virtual thread with custom name");
});
// VirtualThread[#60,my-virtual-thread]/runnable@ForkJoinPool-1-worker-2: Virtual thread with custom name
// Create virtual threads with ExecutorService
try (ExecutorService executor2 = Executors.newVirtualThreadPerTaskExecutor()) {
for (int i = 0; i < 5; i++) {
int taskId = i;
executor2.submit(() -> {
System.out.println(Thread.currentThread() + ": Executor task " + taskId + " running");
Thread.sleep(500); // blocks only the virtual thread; the carrier thread is freed
return null;
});
}
} // Auto-close waits for all tasks
// VirtualThread[#63]/runnable@ForkJoinPool-1-worker-2: Executor task 0 running
// VirtualThread[#67]/runnable@ForkJoinPool-1-worker-5: Executor task 4 running
// VirtualThread[#65]/runnable@ForkJoinPool-1-worker-3: Executor task 2 running
// VirtualThread[#64]/runnable@ForkJoinPool-1-worker-1: Executor task 1 running
// VirtualThread[#66]/runnable@ForkJoinPool-1-worker-4: Executor task 3 running
// A million threads (impractical with platform threads)
for (int i = 0; i < 1_000_000; i++) {
int taskId = i;
Thread.startVirtualThread(() -> {
System.out.println(Thread.currentThread() + ": Task " + taskId + " running");
try { Thread.sleep(100); } catch (InterruptedException e) { Thread.currentThread().interrupt(); }
});
}
// VirtualThread[#1000079]/runnable@ForkJoinPool-1-worker-4: Task 999997 running
// VirtualThread[#1000080]/runnable@ForkJoinPool-1-worker-4: Task 999998 running
// VirtualThread[#1000081]/runnable@ForkJoinPool-1-worker-4: Task 999999 running
// VirtualThread[#997930]/runnable@ForkJoinPool-1-worker-8: Task 997848 running
// VirtualThread[#997919]/runnable@ForkJoinPool-1-worker-1: Task 997837 running
// VirtualThread[#1000002]/runnable@ForkJoinPool-1-worker-7: Task 999920 running
Use Case: Web servers — where each incoming request can be handled in its own virtual thread (e.g., in servlet containers or HTTP handlers); performing blocking file or network I/O without occupying OS threads; and building high-throughput systems capable of scheduling millions of lightweight tasks without exhausting system resources.
Final Thoughts
Java concurrency is a powerful—but often misunderstood—aspect of the language. By learning how to properly create threads, manage synchronization, and use high-level concurrency utilities like the Executor framework, you unlock the ability to write applications that are faster, more scalable, and more responsive.
Whether you're building a multi-threaded backend service, a high-performance trading system, or simply optimizing your app for modern CPUs, understanding concurrency is no longer optional—it's essential.
Keep practicing with real-world use cases, review thread dumps, analyze race conditions, and always be mindful of deadlocks and shared resource pitfalls. The more you work with concurrency, the more intuitive it becomes.
Remember: Great concurrency code is not just about parallelism—it's about writing code that is correct, efficient, and maintainable.
📌 Enjoyed this post? Bookmark it and drop a comment below — your feedback helps keep the content insightful and relevant!