Algorithms are the heart of problem-solving in programming. Whether you're preparing for coding interviews, optimizing software performance, or working on real-world applications, understanding algorithms in Java is essential.
In this guide, you'll learn:
✔️ The most important algorithms in Java
✔️ Working principles and time complexity
✔️ Step-by-step code examples for hands-on learning
✔️ Real-world use cases
By the end, you’ll have a solid foundation in Java algorithms and be ready to tackle complex problems efficiently. 🚀
Contents
- Linear Search
- Binary Search
- Hashing
- Recursion
- Bubble Sort
- Merge Sort
- Quick Sort
- Breadth First Search
- Depth First Search
- Dynamic Programming
- Backtracking
- Summary Table
- Final Thoughts
Linear Search
Iterates through a list or array element by element to find a target value.
Compares each element until the value is found or the list ends.
Example
public static int linearSearch(int[] arr, int target) {
for (int i = 0; i < arr.length; i++) {
if (arr[i] == target) return i;
}
return -1;
}
/* *** *** *** *** *** test code *** *** *** *** *** */
System.out.println("Linear Search:\t\t "
+ "Searching 14 in [10, 7, 5, 14, 2]: "
+ linearSearch(new int[]{10, 7, 5, 14, 2}, 14));
// Searching 14 in [10, 7, 5, 14, 2]: 3
Time Complexity: O(n) – worst case checks every element.
Space Complexity: O(1) – no extra memory beyond a loop index.
Use Case: Unsorted arrays, small datasets, quick, one-time lookups without needing pre-processing.
Tip: Works with any data structure that supports iteration (e.g., arrays, lists).
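Since linear search only needs iteration, it can be written once for any `Iterable`. A minimal generic sketch (the class and method names here are illustrative):

```java
import java.util.List;

public class GenericLinearSearch {
    // Works on any Iterable, not just arrays: returns the index of the
    // first element equal to target, or -1 if no element matches.
    public static <T> int linearSearch(Iterable<T> items, T target) {
        int index = 0;
        for (T item : items) {
            if (item.equals(target)) return index;
            index++;
        }
        return -1;
    }

    public static void main(String[] args) {
        System.out.println(linearSearch(List.of(10, 7, 5, 14, 2), 14)); // 3
        System.out.println(linearSearch(List.of("a", "b"), "c"));       // -1
    }
}
```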
Binary Search
A divide-and-conquer algorithm that repeatedly halves the search space to locate a target element.
Requires the input array or list to be sorted in advance.
At each step:
- Compares the target with the middle element
- Eliminates half of the remaining search space
Example
public static int binarySearch(int[] arr, int target) {
int left = 0, right = arr.length - 1;
while (left <= right) {
int mid = left + (right - left) / 2;
if (arr[mid] == target) return mid;
else if (arr[mid] < target) left = mid + 1;
else right = mid - 1;
}
return -1;
}
/* *** *** *** *** *** test code *** *** *** *** *** */
System.out.println("Binary Search:\t\t "
+ "Searching 7 in [2, 5, 7, 10, 14]: "
+ binarySearch(new int[]{2, 5, 7, 10, 14}, 7));
// Searching 7 in [2, 5, 7, 10, 14]: 2
Time Complexity: O(log n) – very efficient, even for large datasets.
Space Complexity: O(1) for the iterative version, O(log n) for the recursive version (due to the call stack).
Use Case: Sorted arrays/lists, database indexing, search engines, dictionary lookups.
Tip: Use Collections.binarySearch() for lists and Arrays.binarySearch() for arrays.
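Both library methods expect sorted input, and when the target is absent they return `(-(insertion point) - 1)` rather than a plain `-1` — a quick sketch:

```java
import java.util.Arrays;
import java.util.Collections;
import java.util.List;

public class LibraryBinarySearch {
    public static void main(String[] args) {
        int[] arr = {2, 5, 7, 10, 14};
        // Found: returns the index of the match.
        System.out.println(Arrays.binarySearch(arr, 7));   // 2
        // Not found: returns (-(insertion point) - 1); 6 would go at index 2.
        System.out.println(Arrays.binarySearch(arr, 6));   // -3

        List<Integer> list = List.of(2, 5, 7, 10, 14);
        System.out.println(Collections.binarySearch(list, 10)); // 3
    }
}
```

The negative encoding is handy: `-(result) - 1` tells you where to insert a missing element while keeping the array sorted.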
Hashing
Uses a hash function to map keys to fixed-size hash codes, which determine where values are stored in memory.
Hash collisions (when two keys generate the same hash) are typically handled using:
- Chaining – storing multiple values at the same index using linked lists or buckets
- Open addressing – probing to find the next available slot
Common hashing-based data structures in Java:
- HashMap – stores key-value pairs
- HashSet – stores unique keys
- ConcurrentHashMap – thread-safe concurrent access
Example
public static Map<String, Integer> hashing() {
Map<String, Integer> nameAndAgeMap = new HashMap<>();
nameAndAgeMap.put("Alice", 25);
nameAndAgeMap.put("Bob", 30);
return nameAndAgeMap;
}
/* *** *** *** *** *** test code *** *** *** *** *** */
Map<String, Integer> nameAndAgeMap = hashing();
Set<String> namesSet = nameAndAgeMap.keySet();
System.out.println("Hashing output:\t\t "
+ "Bob's age: " + nameAndAgeMap.get("Bob")
+ ", Candice exists? " + nameAndAgeMap.containsKey("Candice")
+ ", Set of names: " + namesSet);
// Bob's age: 30, Candice exists? false, Set of names: [Bob, Alice]
Time Complexity: O(1) average case for insert, search, and delete; O(n) worst case (due to hash collisions).
Space Complexity: O(n), where n is the number of elements stored.
Use Case: Caching, indexing large datasets, duplicate detection, fast lookups with unique keys.
Additionally, for thread-safe operations in multi-threaded environments, check out our ConcurrentHashMap guide.
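The chaining strategy described above can be sketched as a toy hash table — the class and method names here are illustrative, not HashMap's actual internals:

```java
import java.util.ArrayList;
import java.util.List;

// Minimal separate-chaining hash table (illustrative, fixed bucket count,
// no resizing). Each bucket is a list of entries whose keys hash to the
// same index; lookups only scan that one bucket.
public class ChainedHashTable {
    record Entry(String key, int value) {}

    private static final int BUCKETS = 8;
    private final List<List<Entry>> table = new ArrayList<>();

    public ChainedHashTable() {
        for (int i = 0; i < BUCKETS; i++) table.add(new ArrayList<>());
    }

    private int bucketOf(String key) {
        return Math.floorMod(key.hashCode(), BUCKETS); // map hash to a bucket
    }

    public void put(String key, int value) {
        List<Entry> bucket = table.get(bucketOf(key));
        bucket.removeIf(e -> e.key().equals(key)); // overwrite an existing key
        bucket.add(new Entry(key, value));
    }

    public Integer get(String key) {
        for (Entry e : table.get(bucketOf(key))) {
            if (e.key().equals(key)) return e.value();
        }
        return null; // key absent
    }

    public static void main(String[] args) {
        ChainedHashTable t = new ChainedHashTable();
        t.put("Alice", 25);
        t.put("Bob", 30);
        System.out.println(t.get("Bob"));     // 30
        System.out.println(t.get("Candice")); // null
    }
}
```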
Recursion
A technique where a function calls itself, each time with a reduced or simpler input, to solve complex problems by reaching a base condition.
Breaks a problem into smaller subproblems, solving each recursively until reaching the simplest form.
Requires:
- A base case (termination condition)
- A recursive step (the function calls itself with reduced input)
Problem
- Longest common subsequence (LCS)
Input: "abcde" and "ace"
Output: 3 → LCS is "ace"
Solution
public static int lcsRecursive(String s1, String s2, int m, int n) {
// Base Case: If either string is empty
if (m == 0 || n == 0) return 0;
// If last characters match, add 1 and check remaining
if (s1.charAt(m - 1) == s2.charAt(n - 1)) {
return 1 + lcsRecursive(s1, s2, m - 1, n - 1);
} else {
// If last characters don't match, take max of excluding either character
return Math.max(lcsRecursive(s1, s2, m - 1, n), lcsRecursive(s1, s2, m, n - 1));
}
}
/* *** *** *** *** *** test code *** *** *** *** *** */
String s1 = "abcde", s2 = "ace";
System.out.println("\nRecursion output:\t "
+ "LCS length of " + s1 + ", " + s2 + " is: "
+ lcsRecursive(s1, s2, s1.length(), s2.length()));
// LCS length of abcde, ace is: 3
Time Complexity: Varies by problem – e.g., O(2ⁿ) for naïve LCS, O(2ⁿ) for naïve Fibonacci, O(n) for factorial.
Space Complexity: Depends on recursion depth – O(n) for linear recursion, O(log n) for binary recursion.
Use Case: Tree traversals (e.g., in-order, pre-order), divide-and-conquer algorithms (e.g., Merge Sort, Quick Sort), mathematical problems (e.g., factorial, Fibonacci).
Tip: Use memoization or dynamic programming to optimize recursive algorithms and avoid repeated calculations.
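Applied to the LCS example above, memoization caches each (m, n) subproblem so it is computed only once, cutting the exponential recursion down to O(m × n). A sketch (method and class names are illustrative):

```java
import java.util.Arrays;

public class LcsMemo {
    // Same recursion as the plain version, but memo[m][n] stores each
    // subproblem's answer; -1 marks "not yet computed".
    private static int lcs(String s1, String s2, int m, int n, int[][] memo) {
        if (m == 0 || n == 0) return 0;
        if (memo[m][n] != -1) return memo[m][n]; // already solved
        if (s1.charAt(m - 1) == s2.charAt(n - 1)) {
            memo[m][n] = 1 + lcs(s1, s2, m - 1, n - 1, memo);
        } else {
            memo[m][n] = Math.max(lcs(s1, s2, m - 1, n, memo),
                                  lcs(s1, s2, m, n - 1, memo));
        }
        return memo[m][n];
    }

    public static int lcs(String s1, String s2) {
        int[][] memo = new int[s1.length() + 1][s2.length() + 1];
        for (int[] row : memo) Arrays.fill(row, -1);
        return lcs(s1, s2, s1.length(), s2.length(), memo);
    }

    public static void main(String[] args) {
        System.out.println(lcs("abcde", "ace")); // 3
    }
}
```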
Bubble Sort
A basic comparison-based sorting algorithm that repeatedly swaps adjacent elements if they are in the wrong order.
Elements “bubble up” to their correct position in each pass — largest elements move to the end of the list.
Compares each element with every adjacent element in the list, effectively sorting the array one step at a time.
Example
public static void bubbleSort(int[] arr) {
int n = arr.length;
boolean swapped;
for (int i = 0; i < n - 1; i++) {
swapped = false;
for (int j = 0; j < n - i - 1; j++) {
if (arr[j] > arr[j + 1]) {
// Swap arr[j] and arr[j+1]
int temp = arr[j];
arr[j] = arr[j + 1];
arr[j + 1] = temp;
swapped = true;
}
}
// Optimization: break early if no swaps occurred (array is sorted)
if (!swapped) break;
}
}
/* *** *** *** *** *** test code *** *** *** *** *** */
int[] arr = {9, 3, 6, 2, 4};
bubbleSort(arr);
System.out.println("Bubble Sort:\t\t "
+ "Sorted array: " + Arrays.toString(arr));
// Sorted array: [2, 3, 4, 6, 9]
Time Complexity: O(n²) worst and average case, O(n) best case (already sorted).
Space Complexity: O(1) – in-place sorting with no additional memory usage.
Use Case: Educational purposes, very small or nearly sorted datasets.
Merge Sort
A divide-and-conquer algorithm that recursively splits the array into halves, sorts them, and merges the sorted halves back together.
Operates by breaking down the problem (splitting) and combining solutions (merging sorted halves).
Recursively processes smaller subarrays until size is 1, then merges them in sorted order as it returns back up the call stack.
Maintains stability — preserves the original order of equal elements.
Example
public static void mergeSort(int[] arr, int left, int right) {
if (left >= right) return;
int mid = left + (right - left) / 2;
mergeSort(arr, left, mid);
mergeSort(arr, mid + 1, right);
merge(arr, left, mid, right);
}
private static void merge(int[] arr, int left, int mid, int right) {
int[] temp = Arrays.copyOfRange(arr, left, right + 1);
int i = left, j = mid + 1, k = left;
while (i <= mid && j <= right) {
arr[k++] = (temp[i - left] <= temp[j - left]) ? temp[i++ - left] : temp[j++ - left];
}
while (i <= mid) arr[k++] = temp[i++ - left];
while (j <= right) arr[k++] = temp[j++ - left];
}
/* *** *** *** *** *** test code *** *** *** *** *** */
int[] arr = {10, 3, 8, 2, 5};
mergeSort(arr, 0, arr.length - 1);
System.out.println("Merge Sort:\t\t\t "
+ "Sorted array: " + Arrays.toString(arr));
// Sorted array: [2, 3, 5, 8, 10]
Time Complexity: O(n log n) in all cases.
Space Complexity: O(n) – requires temporary arrays for merging.
Use Case: Large datasets, linked lists (due to non-reliance on random access or swapping), situations where stable sorting is important (e.g., sorting records by multiple fields).
Tip: Slower than in-place sorts like Quick Sort in practice (due to memory overhead), but predictable and reliable.
Quick Sort
A divide-and-conquer algorithm that selects a pivot element and partitions the array into two sub-arrays:
- One with elements less than the pivot
- One with elements greater than the pivot
Recursively applies the same logic to each partition until the entire array is sorted.
Efficient for large datasets and often faster than Merge Sort due to in-place partitioning.
Not a stable sort (relative order of equal elements is not guaranteed).
Example
public static void quickSort(int[] arr, int low, int high) {
if (low < high) {
int pivotIndex = partition(arr, low, high);
quickSort(arr, low, pivotIndex - 1);
quickSort(arr, pivotIndex + 1, high);
}
}
private static int partition(int[] arr, int low, int high) {
int pivot = arr[high], i = low - 1;
for (int j = low; j < high; j++) {
if (arr[j] < pivot) {
i++;
swap(arr, i, j);
}
}
swap(arr, i + 1, high);
return i + 1;
}
private static void swap(int[] arr, int i, int j) {
int temp = arr[i];
arr[i] = arr[j];
arr[j] = temp;
}
/* *** *** *** *** *** test code *** *** *** *** *** */
int[] arr = new int[]{7, 3, 9, 1, 5};
quickSort(arr, 0, arr.length - 1);
System.out.println("Quick Sort:\t\t\t "
+ "Sorted array: " + Arrays.toString(arr));
// Sorted array: [1, 3, 5, 7, 9]
Time Complexity: O(n log n) average case, O(n²) worst case (when poor pivots are chosen – e.g., an already sorted array with a naive pivot).
Space Complexity: O(log n) average (for the recursion stack), O(n) worst case (deep recursion without tail call optimization).
Use Case: General-purpose high-performance sorting, scenarios where in-place and non-stable sorting is acceptable.
Tip: Java’s Arrays.sort() for primitive types uses a dual-pivot Quick Sort under the hood for its speed and efficiency.
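In practice, that means sorting a primitive array is a one-liner:

```java
import java.util.Arrays;

public class BuiltInSort {
    public static void main(String[] args) {
        int[] arr = {7, 3, 9, 1, 5};
        Arrays.sort(arr); // dual-pivot Quick Sort for primitive types
        System.out.println(Arrays.toString(arr)); // [1, 3, 5, 7, 9]
    }
}
```

Note that for object arrays (`Arrays.sort(Object[])`), Java instead uses TimSort, a stable merge-sort variant, since stability matters when sorting objects by a key.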
Breadth First Search
A graph traversal algorithm that explores all the neighbors of a node before going deeper into the graph.
Uses a queue to keep track of nodes to visit — processes nodes level by level.
Suitable for both graphs and trees (non-recursive approach).
Requires a visited[] array or Set to avoid infinite loops in cyclic graphs.
Example
public static void bfs(Map<Integer, List<Integer>> graph, int start) {
Queue<Integer> queue = new LinkedList<>();
Set<Integer> visited = new HashSet<>();
queue.add(start);
visited.add(start);
while (!queue.isEmpty()) {
int node = queue.poll();
System.out.print(node + " ");
for (int neighbor : graph.getOrDefault(node, new ArrayList<>())) {
if (!visited.contains(neighbor)) {
queue.add(neighbor);
visited.add(neighbor);
}
}
}
}
/* *** *** *** *** *** test code *** *** *** *** *** */
Map<Integer, List<Integer>> graph = Map.of(
0, List.of(1, 2),
1, List.of(3, 4),
2, List.of(5, 6)
);
System.out.print("BFS output:\t\t\t Breadth First Search: ");
bfs(graph, 0);
// Breadth First Search: 0 1 2 3 4 5 6
Time Complexity: O(V + E), where V = number of vertices, E = number of edges.
Space Complexity: O(V) – for the queue and the visited tracking structure.
Use Case: Finding the shortest path in unweighted graphs (e.g., GPS navigation), detecting connected components, crawling the web, social network expansion.
Depth First Search
A graph traversal algorithm that explores as deep as possible along each branch, then backtracks when no further nodes are available.
Can be implemented using recursion or an explicit stack.
Uses a visited set or array to avoid revisiting nodes in cyclic graphs.
Recursively dives down one path before moving to others — mimics exploring one trail to the end before backtracking.
Example
public static void dfs(Map<Integer, List<Integer>> graph, int node, Set<Integer> visited) {
if (visited.contains(node)) return;
System.out.print(node + " ");
visited.add(node);
for (int neighbor : graph.getOrDefault(node, new ArrayList<>())) {
dfs(graph, neighbor, visited);
}
}
/* *** *** *** *** *** test code *** *** *** *** *** */
Map<Integer, List<Integer>> graph = Map.of(
0, List.of(1, 2),
1, List.of(3, 4),
2, List.of(5, 6)
);
System.out.print("\nDFS output:\t\t\t Depth First Search: ");
dfs(graph, 0, new HashSet<>());
// Depth First Search: 0 1 3 4 2 5 6
Time Complexity: O(V + E), where V = number of vertices, E = number of edges.
Space Complexity: O(V) – due to the recursion stack or explicit stack for traversal.
Use Case: Topological sorting of DAGs, cycle detection in graphs, pathfinding in unweighted graphs (all possible paths), maze solving, connected components identification.
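The explicit-stack variant mentioned above can be sketched as follows (names are illustrative). Pushing neighbors in reverse order keeps the visit order close to the recursive version, since the first neighbor ends up on top of the stack:

```java
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.HashSet;
import java.util.List;
import java.util.Map;
import java.util.Set;

public class IterativeDfs {
    public static String dfs(Map<Integer, List<Integer>> graph, int start) {
        StringBuilder order = new StringBuilder();
        Deque<Integer> stack = new ArrayDeque<>();
        Set<Integer> visited = new HashSet<>();
        stack.push(start);
        while (!stack.isEmpty()) {
            int node = stack.pop();
            if (!visited.add(node)) continue; // already visited
            order.append(node).append(' ');
            List<Integer> neighbors = graph.getOrDefault(node, List.of());
            // Push in reverse so the first neighbor is explored first.
            for (int i = neighbors.size() - 1; i >= 0; i--) {
                stack.push(neighbors.get(i));
            }
        }
        return order.toString().trim();
    }

    public static void main(String[] args) {
        Map<Integer, List<Integer>> graph = Map.of(
                0, List.of(1, 2),
                1, List.of(3, 4),
                2, List.of(5, 6));
        System.out.println(dfs(graph, 0)); // 0 1 3 4 2 5 6
    }
}
```

The iterative form avoids stack-overflow risk on very deep graphs, at the cost of managing the stack yourself.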
Dynamic Programming
A technique for solving problems with overlapping subproblems and optimal substructure.
Improves performance by avoiding redundant calculations through:
- Memoization – top-down approach using recursion + caching
- Tabulation – bottom-up approach using an iterative DP table
Reduces exponential recursive time to polynomial time in many classic problems.
Problem
- Longest common subsequence (LCS)
Input: "abcde" and "ace"
Output: 3 → LCS is "ace"
Solution
public static int lcsDP(String s1, String s2) {
int m = s1.length(), n = s2.length();
int[][] dp = new int[m + 1][n + 1];
// Fill DP table
for (int i = 1; i <= m; i++) {
for (int j = 1; j <= n; j++) {
if (s1.charAt(i - 1) == s2.charAt(j - 1)) {
dp[i][j] = 1 + dp[i - 1][j - 1];
} else {
dp[i][j] = Math.max(dp[i - 1][j], dp[i][j - 1]);
}
}
}
return dp[m][n];
}
/* *** *** *** *** *** test code *** *** *** *** *** */
String s1 = "abcde", s2 = "ace";
System.out.println("DP output:\t\t\t "
+ "LCS length of " + s1 + ", " + s2 + " is: "
+ lcsDP(s1, s2));
// LCS length of abcde, ace is: 3
Time Complexity: O(m × n) for problems like LCS, Knapsack, etc.
Space Complexity: O(m × n) for a full DP table; can be optimized to O(min(m, n)) in certain problems.
Use Case: Fibonacci numbers, 0/1 Knapsack problem, matrix chain multiplication, coin change, edit distance, subset sum.
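The O(min(m, n)) space optimization mentioned above works for LCS because each table row only reads the row directly above it, so two rows suffice. A sketch (names are illustrative):

```java
public class LcsTwoRows {
    // Two-row LCS: keeps only the previous and current DP rows,
    // reducing space from O(m*n) to O(min(m, n)).
    public static int lcs(String s1, String s2) {
        if (s2.length() > s1.length()) { // use the shorter string for columns
            String tmp = s1; s1 = s2; s2 = tmp;
        }
        int n = s2.length();
        int[] prev = new int[n + 1], curr = new int[n + 1];
        for (int i = 1; i <= s1.length(); i++) {
            for (int j = 1; j <= n; j++) {
                curr[j] = (s1.charAt(i - 1) == s2.charAt(j - 1))
                        ? 1 + prev[j - 1]
                        : Math.max(prev[j], curr[j - 1]);
            }
            int[] tmp = prev; prev = curr; curr = tmp; // swap and reuse rows
        }
        return prev[n]; // after the final swap, prev holds the last row
    }

    public static void main(String[] args) {
        System.out.println(lcs("abcde", "ace")); // 3
    }
}
```

The trade-off: with only two rows you can no longer backtrack through the table to reconstruct the subsequence itself, only its length.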
Backtracking
A technique that recursively explores all possible solutions, abandoning (backtracking) paths that violate constraints or lead to dead ends.
Works by trying a choice, moving forward, and if the solution path fails, it undoes (unmarks) the previous step and tries the next option.
Typically involves:
- A recursive function
- A way to mark/unmark visited states
- A base case to accept valid results
Often modeled as a depth-first search (DFS) with additional constraint checking.
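The try/mark/unmark pattern can be seen most clearly in a small permutation generator (an illustrative example; the names are hypothetical):

```java
import java.util.ArrayList;
import java.util.List;

public class Permutations {
    // Classic backtracking: make a choice, recurse, then undo the choice
    // so sibling branches start from a clean state.
    private static void permute(int[] nums, List<Integer> current,
                                boolean[] used, List<List<Integer>> result) {
        if (current.size() == nums.length) {      // base case: accept
            result.add(new ArrayList<>(current));
            return;
        }
        for (int i = 0; i < nums.length; i++) {
            if (used[i]) continue;                // constraint check
            used[i] = true;                       // mark
            current.add(nums[i]);
            permute(nums, current, used, result); // explore deeper
            current.remove(current.size() - 1);   // unmark (backtrack)
            used[i] = false;
        }
    }

    public static List<List<Integer>> permute(int[] nums) {
        List<List<Integer>> result = new ArrayList<>();
        permute(nums, new ArrayList<>(), new boolean[nums.length], result);
        return result;
    }

    public static void main(String[] args) {
        System.out.println(permute(new int[]{1, 2, 3}).size()); // 6
    }
}
```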
Problem
- Longest common subsequence (LCS)
Input: "abcde" and "ace"
Output: "ace"
Solution
public static String findLCS(String s1, String s2) {
int m = s1.length(), n = s2.length();
int[][] dp = new int[m + 1][n + 1];
// Fill DP table
for (int i = 1; i <= m; i++) {
for (int j = 1; j <= n; j++) {
if (s1.charAt(i - 1) == s2.charAt(j - 1)) {
dp[i][j] = 1 + dp[i - 1][j - 1];
} else {
dp[i][j] = Math.max(dp[i - 1][j], dp[i][j - 1]);
}
}
}
// Backtrack to find the LCS
StringBuilder lcs = new StringBuilder();
int i = m, j = n;
while (i > 0 && j > 0) {
if (s1.charAt(i - 1) == s2.charAt(j - 1)) {
lcs.append(s1.charAt(i - 1));
i--;
j--; // Move diagonally
} else if (dp[i - 1][j] > dp[i][j - 1]) {
i--; // Move up
} else {
j--; // Move left
}
}
return lcs.reverse().toString(); // Reverse to get correct order
}
/* *** *** *** *** *** test code *** *** *** *** *** */
String s1 = "abcde", s2 = "ace";
System.out.println("Backtracking output: "
+ "LCS sequence of " + s1 + ", " + s2 + " is: "
+ findLCS(s1, s2));
// LCS sequence of abcde, ace is: ace
Time Complexity: Varies by problem – O(m + n) when tracing a filled DP table, as in LCS; can be exponential (e.g., N-Queens) in brute-force combinatorial problems.
Space Complexity: O(1) when reusing an existing DP table, O(n) for the recursion stack in deep recursive calls.
Use Case: N-Queens, sudoku solver, subset sum, word search in grids, permutation & combination generation.
Summary Table
Algorithm | Best Case | Worst Case | Average Case | When to Use
---|---|---|---|---
Hashing | O(1) | O(n) (collisions) | O(1) | Fast lookups, caching
Linear Search | O(1) | O(n) | O(n) | Small unsorted arrays
Binary Search | O(1) | O(log n) | O(log n) | Sorted arrays/lists
Merge Sort | O(n log n) | O(n log n) | O(n log n) | Stable sorting, linked lists
Quick Sort | O(n log n) | O(n²) | O(n log n) | Fast in-place sorting
BFS | O(V + E) | O(V + E) | O(V + E) | Shortest path (unweighted graphs)
DFS | O(V + E) | O(V + E) | O(V + E) | Cycle detection, backtracking
Recursion | Varies | Varies | Varies | Tree traversal, divide-and-conquer
Final Thoughts
- If fast lookups are needed, use hashing.
- If data is unsorted, linear search is simple, but binary search is better for sorted data.
- Merge Sort is stable, while Quick Sort is fast and memory-efficient.
- BFS is best for shortest paths, while DFS is ideal for cycle detection and backtracking.
- Recursion simplifies many problems but can lead to stack overflow if not optimized.
📌 Enjoyed this post? Bookmark it and drop a comment below — your feedback helps keep the content insightful and relevant!