What is the time complexity of accessing an element in an array?
O(n)
O(n^2)
O(log n)
O(1)
What data structure uses FIFO (First In First Out) principle?
Stack
Linked List
Array
Queue
Which algorithm is used to find the shortest path in a graph?
Bubble Sort
Dijkstra's Algorithm
Merge Sort
Binary Search
What is the worst-case time complexity of Quick Sort?
O(log n)
O(n)
O(n log n)
O(n^2)
Which data structure is used for implementing recursion?
Array
Linked List
Stack
Queue
What is the space complexity of an array?
O(1)
O(n)
O(n^2)
O(log n)
Which algorithm is used to traverse trees?
Dijkstra's Algorithm
Depth-First Search
Breadth-First Search
Binary Search
What is a binary tree?
A tree where each node has at most two children
A linear structure
A tree with no children
A tree where each node has three children
What is the average case time complexity of Merge Sort?
O(log n)
O(n^2)
O(n)
O(n log n)
What data structure is used to implement a priority queue?
Stack
Linked List
Array
Heap
What is the time complexity of QuickSort?
Average: O(n log n)
Worst: O(n²) if pivot choices are poor
Space: O(log n) (recursive stack)
What is the time complexity of accessing an element in an array?
O(1), because arrays use contiguous memory and indexing is direct.
What is the time complexity of inserting at the beginning of an array?
O(n), since all elements must shift.
What is the time complexity of searching for an element in an unsorted array?
O(n).
What’s a sliding window technique, and when is it used?
It maintains a moving subset of elements within an array/string to optimize problems involving subarrays or substrings (e.g., maximum sum subarray, longest substring without repeats).
What is the difference between singly and doubly linked lists?
Singly: Node has next pointer only (less memory, one-way traversal).
Doubly: Node has prev and next (more memory, but easier insert/delete both ends).
Time complexity of inserting at the head of a linked list?
O(1).
Time complexity of accessing the i-th element in a linked list?
O(n), requires traversal.
What’s the benefit of linked lists over arrays?
Dynamic memory allocation and fast insertions/deletions (no shifting required).
What’s the difference between a stack and a queue?
Stack: LIFO (Last In, First Out).
Queue: FIFO (First In, First Out).
Time complexity of push/pop in a stack?
O(1).
How do you implement a queue with two stacks?
Use one stack for incoming elements, another for outgoing. Transfer when needed.
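A minimal Python sketch of the two-stack queue described above (names are illustrative):

```python
class QueueWithTwoStacks:
    """Amortized O(1) queue built from two LIFO stacks (Python lists)."""

    def __init__(self):
        self._in = []   # receives new elements
        self._out = []  # serves dequeues in FIFO order

    def enqueue(self, x):
        self._in.append(x)

    def dequeue(self):
        if not self._out:
            # Reverse the incoming stack once; each element moves at most twice.
            while self._in:
                self._out.append(self._in.pop())
        return self._out.pop()


q = QueueWithTwoStacks()
for x in (1, 2, 3):
    q.enqueue(x)
print(q.dequeue(), q.dequeue())  # 1 2
```

Each element is pushed and popped at most twice in total, which is where the amortized O(1) cost comes from.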
What’s a deque, and what’s its advantage?
A double-ended queue allows O(1) insertion/deletion at both ends.
What is the average time complexity of searching in a hash table?
O(1) average, O(n) worst case (if many collisions).
What are common collision resolution techniques?
Chaining (linked lists), Open Addressing (linear probing, quadratic probing, double hashing).
Why are hash tables not suitable for range queries?
Because they don’t maintain ordering of keys.
What is a perfect hash function?
A hash function with no collisions for a given dataset.
What is the height of a balanced binary tree with n nodes?
O(log n).
What’s the difference between BST and AVL tree?
BST: No balance enforcement, can degrade to O(n).
AVL: Self-balancing BST, ensures O(log n) operations.
What’s a red-black tree?
A self-balancing BST with color properties ensuring O(log n) height, widely used in language libraries (e.g., TreeMap in Java).
What’s a trie, and where is it used?
A prefix tree where each node represents a character. Used in autocomplete, spell check, and IP routing.
What are the common graph representations?
Adjacency list (O(V+E) space, efficient for sparse graphs) and adjacency matrix (O(V²) space, efficient for dense graphs).
Time complexity of BFS and DFS?
O(V + E), where V = vertices, E = edges.
What’s Dijkstra’s algorithm?
Shortest path algorithm for weighted graphs (non-negative weights). Complexity: O((V+E) log V) with a priority queue.
What is topological sort, and when is it used?
Ordering of DAG nodes such that for every edge u → v, u comes before v. Used in scheduling, build systems, dependency resolution.
What is the time complexity of inserting into a binary heap?
O(log n).
What’s the difference between min-heap and max-heap?
Min-heap: Root is smallest element.
Max-heap: Root is largest element.
How is a priority queue implemented?
Usually with a heap (binary, Fibonacci, binomial).
What is heapify, and its complexity?
Operation to turn an array into a heap. Complexity: O(n).
What is a segment tree?
A tree used for range queries (e.g., sum, min, max). Time complexity: O(log n) for query/update.
What is a Fenwick Tree (Binary Indexed Tree)?
A data structure for efficient prefix sums. Complexity: O(log n) for update/query.
What is a Bloom filter?
A probabilistic structure for set membership. Very space efficient, allows false positives but no false negatives.
What is a Disjoint Set (Union-Find)?
A structure to track disjoint sets with union and find operations, optimized with path compression and union by rank.
What is divide and conquer, and give an example?
Breaks a problem into smaller subproblems, solves recursively, and combines results. Example: Merge Sort (O(n log n)).
Difference between greedy and dynamic programming?
Greedy: Makes locally optimal choices (e.g., activity selection).
DP: Stores intermediate results to optimize overlapping subproblems (e.g., knapsack).
What’s the Master Theorem used for?
Analyzing time complexity of divide-and-conquer recurrences like T(n) = aT(n/b) + f(n).
What’s memoization vs tabulation in DP?
Memoization: Top-down recursion with cache.
Tabulation: Bottom-up iterative table filling.
Why is MergeSort stable but QuickSort is not?
MergeSort preserves relative order of equal elements (due to merge step). QuickSort may swap equal elements arbitrarily.
What’s the best sorting algorithm for nearly sorted data?
Insertion Sort → O(n) if array is almost sorted.
Why is HeapSort less cache-friendly than QuickSort?
HeapSort accesses memory non-sequentially (tree jumps), while QuickSort benefits from cache locality in partitioning.
Time complexity of Binary Search?
O(log n) for search, only works on sorted arrays.
What if array is rotated (e.g., [4,5,6,1,2,3])?
Modified Binary Search → O(log n).
What is exponential search?
Finds range exponentially, then applies binary search. Used for unbounded or very large arrays.
How do you search efficiently in a sorted matrix?
Start from top-right (or bottom-left), eliminate row/col each step → O(m+n).
What’s the difference between Big-O, Big-Theta, and Big-Omega?
Big-O: Upper bound.
Big-Theta: Tight bound (both upper and lower).
Big-Omega: Lower bound.
Note: these bound the growth of a function, not cases — worst, average, and best case can each be stated with any of the three.
What’s the difference between NP, NP-hard, and NP-complete?
NP: Verifiable in polynomial time.
NP-hard: At least as hard as every problem in NP; not necessarily in NP itself.
NP-complete: In NP and NP-hard.
What is amortized analysis?
Average time per operation over a sequence. Example: Dynamic array resizing (O(1) amortized insert).
Why is comparison-based sorting limited to O(n log n)?
Because any comparison sort must distinguish between n! permutations, requiring log(n!) ~ O(n log n) comparisons.
What’s the difference between Prim’s and Kruskal’s algorithm?
Both build a Minimum Spanning Tree (MST). Prim grows a single tree from a start node using a priority queue (well suited to dense graphs); Kruskal sorts all edges and adds the lightest edge that doesn't form a cycle, using Union-Find (well suited to sparse graphs).
When would you use Bellman-Ford instead of Dijkstra?
When graph has negative edge weights. Complexity O(VE).
What is Floyd-Warshall algorithm used for?
All-pairs shortest paths in weighted graphs. Complexity: O(V³).
What is A* search?
Pathfinding algorithm using f(n) = g(n) + h(n), where g = cost so far, h = heuristic estimate. Used in maps/games.
What is a skip list?
A probabilistic data structure with multiple linked list levels for O(log n) search/insert/delete.
What is a B-Tree?
A balanced search tree optimized for disk access. Used in databases and file systems.
Difference between B-Tree and B+ Tree?
B-Tree: Keys and values stored in all nodes.
B+ Tree: Values only in leaf nodes, internal nodes store keys for indexing.
What is an LRU cache, and how is it implemented?
Least Recently Used cache → evicts oldest item. Implemented using a HashMap + Doubly Linked List (O(1) get/put).
What if your dataset doesn’t fit in memory for sorting?
Use external merge sort (divide into chunks, sort in memory, merge chunks on disk).
What if the array is almost sorted (few inversions)?
Use Insertion Sort or Timsort (used in Python/Java), which adapts to data.
What if you need the k-th smallest element without full sorting?
Use Quickselect (average O(n)).
What if you need to check duplicates in a huge dataset that can’t fit in memory?
Use Bloom Filter (space-efficient, probabilistic) or Hashing with disk spillover.
What is the two-pointer technique?
Using two indices that move across the array (same or opposite directions) to solve problems in linear time, often for searching pairs or subarrays.
Example of two-pointer usage in sorted arrays?
“Two Sum II” → Given a sorted array, find two numbers that sum to a target. Move left/right pointers inward until sum is found. O(n).
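A short sketch of the Two Sum II idea above (function name is illustrative):

```python
def two_sum_sorted(nums, target):
    """Return 1-based indices of the pair summing to target in a sorted list, O(n)."""
    lo, hi = 0, len(nums) - 1
    while lo < hi:
        s = nums[lo] + nums[hi]
        if s == target:
            return lo + 1, hi + 1
        if s < target:
            lo += 1      # need a larger sum: move left pointer right
        else:
            hi -= 1      # need a smaller sum: move right pointer left
    return None

print(two_sum_sorted([2, 7, 11, 15], 9))  # (1, 2)
```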
Why is two pointers better than nested loops?
Reduces O(n²) brute force to O(n) or O(n log n), leveraging array ordering.
What is sliding window technique?
Keep a moving subarray (window) of fixed or variable size to optimize substring/subarray problems (e.g., max sum, unique chars).
Example of fixed-size sliding window?
Maximum sum of k consecutive elements → maintain running sum, update in O(1) per shift.
Example of variable-size sliding window?
Longest substring without repeating characters → expand right, shrink left if duplicate found. O(n).
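The variable-size window above can be sketched like this (a minimal version, names illustrative):

```python
def longest_unique_substring(s):
    """Length of the longest substring with all-distinct characters, O(n)."""
    last_seen = {}          # char -> most recent index
    left = best = 0
    for right, ch in enumerate(s):
        if ch in last_seen and last_seen[ch] >= left:
            left = last_seen[ch] + 1   # shrink window past the duplicate
        last_seen[ch] = right
        best = max(best, right - left + 1)
    return best

print(longest_unique_substring("abcabcbb"))  # 3  ("abc")
```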
What’s the purpose of Union-Find?
Efficiently manage partitions of a set and answer connectivity queries (e.g., are x and y connected?).
Optimizations in Union-Find?
Path compression (flatten trees) and union by rank/size (attach smaller tree under larger). Amortized O(α(n)) ≈ O(1).
Example problem solved with Union-Find?
Detect cycles in an undirected graph.
Applications of Union-Find in real systems?
Network connectivity, social networks (“friend groups”), Kruskal’s MST.
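A compact sketch of Union-Find with the optimizations above, returning False from union when the edge would close a cycle (class name is illustrative):

```python
class DSU:
    """Union-Find with path compression (halving) and union by size."""

    def __init__(self, n):
        self.parent = list(range(n))
        self.size = [1] * n

    def find(self, x):
        while self.parent[x] != x:
            self.parent[x] = self.parent[self.parent[x]]  # path halving
            x = self.parent[x]
        return x

    def union(self, a, b):
        ra, rb = self.find(a), self.find(b)
        if ra == rb:
            return False          # already connected -> this edge forms a cycle
        if self.size[ra] < self.size[rb]:
            ra, rb = rb, ra
        self.parent[rb] = ra      # attach smaller root under larger
        self.size[ra] += self.size[rb]
        return True
```

Feeding the edges of an undirected graph through union detects cycles: the first edge for which union returns False closes a cycle.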
What is a greedy algorithm?
An algorithm that makes the locally optimal choice at each step, hoping to reach a global optimum.
Classic example of greedy?
Interval scheduling / activity selection → always pick the next activity with earliest finish time.
When does greedy fail?
When local optimum doesn’t lead to global optimum (e.g., 0/1 knapsack problem).
When does greedy work?
When the problem has optimal substructure + greedy choice property (e.g., Huffman coding, MST, Dijkstra).
What is dynamic programming?
Optimization technique that breaks problems into overlapping subproblems and stores results to avoid recomputation.
Difference between top-down (memoization) and bottom-up (tabulation)?
Top-down: Recursive + cache.
Bottom-up: Iterative + table.
Example of DP problem?
Fibonacci numbers: naive recursion O(2ⁿ), with DP → O(n).
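The Fibonacci speedup can be shown with top-down memoization in a few lines:

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def fib(n):
    """Top-down DP: caching turns the O(2^n) recursion into O(n)."""
    return n if n < 2 else fib(n - 1) + fib(n - 2)

print(fib(30))  # 832040
```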
Example of classic DP interview questions?
Coin Change (min coins to make value).
Longest Common Subsequence.
0/1 Knapsack.
Edit Distance (Levenshtein).
Matrix Path (min cost path).
What if array contains negative numbers in “max subarray sum” (Kadane’s algorithm)?
Kadane still works → tracks max ending here, max so far. Complexity O(n).
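Kadane's algorithm as described above, in a minimal sketch:

```python
def max_subarray_sum(nums):
    """Kadane's algorithm: O(n), handles all-negative arrays too."""
    best = cur = nums[0]
    for x in nums[1:]:
        cur = max(x, cur + x)   # extend the running sum or restart at x
        best = max(best, cur)
    return best

print(max_subarray_sum([-2, 1, -3, 4, -1, 2, 1, -5, 4]))  # 6  (subarray [4, -1, 2, 1])
```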
What if graph edges are negative in shortest path?
Dijkstra fails → must use Bellman-Ford (O(VE)).
What if the DP state space is too large to store?
Optimize with state compression (bitmask, rolling arrays) or meet-in-the-middle.
What if recursion depth causes stack overflow in DP?
Convert to bottom-up tabulation or iterative stack.
What’s the difference between stable and unstable sorting?
Stable sorts keep equal elements in the same relative order (MergeSort, InsertionSort). Unstable sorts may change order (QuickSort, HeapSort).
How do you find the majority element (> n/2 times) in an array?
Use Boyer-Moore Voting Algorithm → O(n) time, O(1) space.
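Boyer-Moore voting in sketch form (it assumes a true majority element exists, as the question states):

```python
def majority_element(nums):
    """Boyer-Moore voting: O(n) time, O(1) space.
    Assumes an element occurring > n/2 times exists."""
    candidate, count = None, 0
    for x in nums:
        if count == 0:
            candidate = x         # adopt a new candidate
        count += 1 if x == candidate else -1
    return candidate

print(majority_element([2, 2, 1, 1, 2, 2, 3]))  # 2
```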
How do you find the k-th largest element in an array?
Quickselect: O(n) average.
Heap of size k: O(n log k).
How do you find duplicates in O(1) extra space?
Modify array values (marking indices negative) or use cycle detection (Floyd’s Tortoise & Hare).
What is the Knuth-Morris-Pratt (KMP) algorithm?
String search algorithm using prefix function to avoid redundant comparisons. Complexity O(n + m).
What is Rabin-Karp algorithm?
String matching using rolling hash. Useful for multiple pattern matching. Average O(n+m), worst O(nm).
What is a suffix array?
An array of all suffixes of a string sorted lexicographically. Used in substring search, LCP queries, bioinformatics.
What is a trie, and how is it applied in string problems?
A prefix tree storing characters along paths. Used in autocomplete, dictionary search, IP routing.
What’s the difference between 0/1 knapsack and unbounded knapsack?
0/1: Each item can be chosen at most once.
Unbounded: Items can be chosen unlimited times.
How do you solve the Longest Common Subsequence (LCS)?
DP: dp[i][j] = 1 + dp[i-1][j-1] if match else max(dp[i-1][j], dp[i][j-1]). Complexity O(mn).
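The LCS recurrence above translates directly to a table (a minimal sketch):

```python
def lcs_length(a, b):
    """Classic O(mn) DP for Longest Common Subsequence length."""
    m, n = len(a), len(b)
    dp = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            if a[i - 1] == b[j - 1]:
                dp[i][j] = dp[i - 1][j - 1] + 1        # characters match
            else:
                dp[i][j] = max(dp[i - 1][j], dp[i][j - 1])  # skip one side
    return dp[m][n]

print(lcs_length("ABCBDAB", "BDCABA"))  # 4  (e.g. "BCBA")
```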
What’s the difference between LCS and Longest Common Substring?
LCS: Characters must appear in order, not necessarily contiguous.
Substring: Must be contiguous.
How do you solve the Edit Distance problem?
DP: dp[i][j] = min(insert, delete, replace). Complexity O(mn).
How do you detect a cycle in a directed graph?
Use DFS with recursion stack or Kahn’s algorithm (topological sort).
How do you detect a cycle in an undirected graph?
Use Union-Find or DFS parent tracking.
What is Tarjan’s algorithm used for?
Finding strongly connected components (SCCs) in a directed graph. Complexity O(V+E).
What is a bipartite graph and how do you check it?
A graph whose nodes can be divided into two sets with no intra-set edges. Check with BFS coloring.
How do you find the median of a data stream?
Maintain two heaps: max-heap for lower half, min-heap for upper half. Median is root(s).
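The two-heap median above, sketched with `heapq` (Python only has a min-heap, so the lower half is stored negated; class name is illustrative):

```python
import heapq

class StreamMedian:
    """Lower half in a max-heap (negated), upper half in a min-heap."""

    def __init__(self):
        self.lo = []  # max-heap via negation
        self.hi = []  # min-heap

    def add(self, x):
        heapq.heappush(self.lo, -x)
        # Keep every element of lo <= every element of hi.
        heapq.heappush(self.hi, -heapq.heappop(self.lo))
        # Rebalance so len(lo) >= len(hi), differing by at most 1.
        if len(self.hi) > len(self.lo):
            heapq.heappush(self.lo, -heapq.heappop(self.hi))

    def median(self):
        if len(self.lo) > len(self.hi):
            return -self.lo[0]
        return (-self.lo[0] + self.hi[0]) / 2


m = StreamMedian()
for x in (5, 2, 8, 1):
    m.add(x)
print(m.median())  # 3.5
```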
How do you merge k sorted lists?
Use a min-heap of size k → repeatedly extract min. Complexity O(n log k).
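The k-way merge above in sketch form, tracking (value, list index, element index) in the heap:

```python
import heapq

def merge_k_sorted(lists):
    """Merge k sorted lists with a min-heap of size <= k: O(n log k) total."""
    heap = [(lst[0], i, 0) for i, lst in enumerate(lists) if lst]
    heapq.heapify(heap)
    out = []
    while heap:
        val, i, j = heapq.heappop(heap)
        out.append(val)
        if j + 1 < len(lists[i]):
            # Push the next element from the same source list.
            heapq.heappush(heap, (lists[i][j + 1], i, j + 1))
    return out

print(merge_k_sorted([[1, 4, 7], [2, 5], [3, 6, 8]]))  # [1, 2, 3, 4, 5, 6, 7, 8]
```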
How do you implement a sliding window maximum?
Use a deque to store candidates, O(n).
How do you implement Dijkstra’s algorithm efficiently?
Use a priority queue (min-heap) for selecting next closest node.
What is amortized O(1)? Give an example.
Average cost per operation is constant over a sequence (e.g., dynamic array resizing, push in stack).
What is divide & conquer with an example?
Break into subproblems → solve recursively → combine. Example: MergeSort.
What is a flow network and max-flow problem?
A directed graph with capacities; max-flow finds the maximum possible flow from source to sink. Ford-Fulkerson algorithm solves it.
What is the difference between greedy vs DP approaches?
Greedy: Local optimal choice → global optimal (works only in certain problems like MST).
DP: Stores overlapping subproblem solutions (works broadly, slower).
How does MergeSort work and why is it O(n log n)?
MergeSort uses divide & conquer: it splits the array into halves recursively until single elements remain, then merges them in sorted order. Each merge step takes O(n), and since the array is split log n times, total = O(n log n). It’s stable and good for linked lists but requires O(n) extra space.
How does QuickSort work and why can it degrade to O(n²)?
QuickSort picks a pivot, partitions array into ≤ pivot and > pivot, then recurses. With good pivot choices (random/median), average = O(n log n). Worst case = O(n²) when array is already sorted and pivot is always min/max. It’s in-place, so very fast in practice.
Why is HeapSort O(n log n) and how does heapify work?
HeapSort builds a max-heap (O(n)) and repeatedly extracts max element (O(log n)) for n elements → O(n log n). Heapify “sifts down” nodes to maintain heap property. Unlike MergeSort, it’s in-place but not stable.
Why is Binary Search O(log n)?
Each step halves the search space by comparing mid element to target. After k steps, only n / 2ᵏ elements remain. Solving n / 2ᵏ = 1 → k = log₂ n. Hence O(log n).
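The halving described above, as a minimal iterative sketch:

```python
def binary_search(arr, target):
    """Return index of target in sorted arr, or -1. Halves the interval each step."""
    lo, hi = 0, len(arr) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if arr[mid] == target:
            return mid
        if arr[mid] < target:
            lo = mid + 1   # target is in the right half
        else:
            hi = mid - 1   # target is in the left half
    return -1

print(binary_search([1, 3, 5, 7, 9, 11], 7))  # 3
```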
How does Exponential Search work?
First, find a range where the element could be by checking indices 1, 2, 4, 8… until overshoot. Then apply Binary Search in that range. Useful for infinite/unknown-length arrays. Complexity O(log i), where i is the target's position.
How does KMP string matching achieve O(n+m)?
KMP preprocesses the pattern to build an LPS (longest proper prefix which is also suffix) table. When mismatch occurs, it skips characters based on LPS instead of starting over. This prevents redundant comparisons, so total complexity is linear in text+pattern.
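The LPS preprocessing described above can be sketched as follows:

```python
def build_lps(pattern):
    """lps[i] = length of the longest proper prefix of pattern[:i+1]
    that is also a suffix of it."""
    lps = [0] * len(pattern)
    k = 0                       # length of the currently matched prefix
    for i in range(1, len(pattern)):
        while k > 0 and pattern[i] != pattern[k]:
            k = lps[k - 1]      # fall back to the next shorter border
        if pattern[i] == pattern[k]:
            k += 1
        lps[i] = k
    return lps

print(build_lps("ababaca"))  # [0, 0, 1, 2, 3, 0, 1]
```

During the search phase, a mismatch at pattern position k resumes comparison at lps[k-1] instead of restarting, which is what makes the total work linear.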
How does Rabin-Karp detect substring matches efficiently?
Uses rolling hash: computes hash of pattern and sliding windows of text. If hash matches, then check substring. Hash update O(1) per shift. Average O(n+m), but worst O(nm) if many collisions. Used in plagiarism detection, multi-pattern search.
How does Kadane’s algorithm find maximum subarray sum in O(n)?
It iterates once, maintaining two values: max_ending_here (best sum ending at current index) and max_so_far (global max). If max_ending_here < 0, reset to current element. This ensures we only extend subarrays that increase sum.
How does Edit Distance (Levenshtein) DP work?
Define dp[i][j] as min edits to convert first i chars of word1 to first j of word2.
How does Coin Change (minimum coins) DP work?
Define dp[amount] = min coins to make amount.
How does BFS guarantee shortest path in an unweighted graph?
BFS explores level by level. The first time we reach a node, it must be via the shortest path (fewest edges). Hence BFS gives shortest path in O(V+E).
Why does Dijkstra’s algorithm fail with negative weights?
Dijkstra assumes once a node is “settled” with shortest distance, it won’t be updated. With negative weights, this assumption breaks (a later path could improve it). Bellman-Ford handles negatives because it relaxes all edges repeatedly.
How does Union-Find detect cycles?
For each edge (u,v), check if u and v already belong to the same set (find). If yes → cycle. If not, union them. Path compression + union by rank make operations nearly O(1).
How does the two-heap method maintain median of a stream?
Maintain a max-heap of lower half and min-heap of upper half. Balance sizes so they differ by at most 1. Median = root(s). Each insert O(log n), query median O(1).
How does sliding window maximum algorithm achieve O(n)?
Use a deque storing indices. Maintain decreasing order inside deque. At each step, remove out-of-window indices and add new element while popping smaller ones. Front of deque always max.
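The decreasing-deque idea above, in sketch form:

```python
from collections import deque

def sliding_window_max(nums, k):
    """Max of each length-k window in O(n) using a decreasing deque of indices."""
    dq, out = deque(), []
    for i, x in enumerate(nums):
        if dq and dq[0] <= i - k:
            dq.popleft()            # drop the index that left the window
        while dq and nums[dq[-1]] <= x:
            dq.pop()                # smaller candidates can never become the max
        dq.append(i)
        if i >= k - 1:
            out.append(nums[dq[0]])  # front of deque is the window max
    return out

print(sliding_window_max([1, 3, -1, -3, 5, 3, 6, 7], 3))  # [3, 3, 5, 5, 6, 7]
```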
How does Floyd-Warshall find all-pairs shortest paths?
Iteratively improves path estimates using dynamic programming. For each k (intermediate node), update dp[i][j] = min(dp[i][j], dp[i][k] + dp[k][j]). Time O(V³), works with negatives (no negative cycles).
How does A* pathfinding improve on Dijkstra?
Adds heuristic h(n) (estimate to goal). f(n) = g(n) + h(n). Guides search toward goal instead of exploring blindly. If h is admissible (never overestimates), A* is optimal.
How does Ford-Fulkerson algorithm compute max flow?
While there exists an augmenting path (via BFS/DFS), push flow along it and update residual capacities. Repeats until no path exists. Complexity O(E*max_flow) or O(VE²) with Edmonds-Karp.
What is MergeSort and how does it work?
MergeSort is a divide & conquer algorithm. It splits the array into halves recursively until one element remains, then merges sorted halves. Stable, good for linked lists.
What is QuickSort and how does it work?
QuickSort picks a pivot, partitions the array into smaller/larger, and recurses. Very cache-friendly and fast in practice. Worst-case when pivot is always poor.
What is HeapSort and how does it work?
HeapSort builds a max-heap, then repeatedly extracts the max and rebuilds heap. In-place but not stable.
What is Counting Sort and how does it work?
Counting Sort counts frequency of elements and reconstructs sorted output. Works only with integers in small range.
What is Binary Search and how does it work?
Binary Search repeatedly halves a sorted array until target is found or interval empty.
What is Exponential Search and how does it work?
Exponential Search finds range by checking 1, 2, 4, 8… then applies Binary Search. Useful for infinite/unknown arrays.
What is KMP (Knuth-Morris-Pratt) and how does it work?
Preprocesses pattern into an LPS table. When mismatch occurs, skips ahead instead of restarting.
What is Rabin-Karp and how does it work?
Uses rolling hash for substring matching. If hash matches, confirm with direct compare.
What is a Trie and how does it work?
Tree-based structure where each path stores a string prefix. Used in autocomplete and dictionaries.
What is Kadane’s Algorithm and how does it work?
Finds maximum subarray sum. Iterates maintaining max_ending_here and max_so_far.
What is Edit Distance (Levenshtein) and how does it work?
DP table compares substrings with insert, delete, replace operations.
What is Coin Change (Min Coins) and how does it work?
Bottom-up DP: dp[x] = min(dp[x], dp[x-coin]+1).
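The bottom-up recurrence above in a minimal sketch (returns -1 when the amount is unreachable):

```python
def min_coins(coins, amount):
    """Bottom-up DP: dp[x] = fewest coins making x, or -1 if impossible."""
    INF = float("inf")
    dp = [0] + [INF] * amount
    for x in range(1, amount + 1):
        for c in coins:
            if c <= x and dp[x - c] + 1 < dp[x]:
                dp[x] = dp[x - c] + 1
    return dp[amount] if dp[amount] != INF else -1

print(min_coins([1, 2, 5], 11))  # 3  (5 + 5 + 1)
```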
What is Longest Common Subsequence (LCS) and how does it work?
DP: If chars match, 1+dp[i-1][j-1], else max(dp[i-1][j], dp[i][j-1]).
What is BFS and how does it work?
Traverses level by level using queue. Guarantees shortest path in unweighted graphs.
What is DFS and how does it work?
Traverses graph by going deep via recursion/stack before backtracking. Used in cycle detection, components.
What is Dijkstra’s Algorithm and how does it work?
Uses priority queue to always expand nearest node, updating distances. Works for non-negative weights.
What is Bellman-Ford and how does it work?
Relaxes all edges V-1 times, detects negative cycles.
What is Floyd-Warshall and how does it work?
DP that updates all-pairs shortest path with intermediates.
What is Kruskal’s Algorithm and how does it work?
Sort edges by weight, union-find to build MST.
What is Prim’s Algorithm and how does it work?
Grows MST from a start node, always adding min edge crossing cut.
What is a Heap and how does it work?
A binary tree where parent ≤ children (min-heap). Insert = bubble up, extract = bubble down.
What is a Segment Tree and how does it work?
Binary tree storing range aggregates (sum, min, max). Updates/queries in O(log n).
What is a Fenwick Tree (BIT) and how does it work?
Array-based tree storing prefix sums using least significant bit.
What is a Bloom Filter and how does it work?
Probabilistic structure with multiple hash functions. Can say “maybe in set” or “definitely not.” False positives possible.
What is Union-Find and how does it work?
Tracks disjoint sets with find and union. Path compression + union by rank make it nearly O(1). Used in Kruskal’s MST, connectivity problems.
How does MergeSort work (step by step)?
Idea: divide & conquer + merge sorted halves.
Steps:
How does QuickSort work (step by step)?
Idea: partition around a pivot; recurse on partitions.
Steps:
How does HeapSort work (step by step)?
Idea: build max-heap; repeatedly extract max.
Steps:
How does Counting Sort work (step by step)?
Idea: count frequencies over small integer range.
Steps:
How does Binary Search work (step by step)?
Idea: halve the search interval on a sorted array.
Steps:
How does Exponential Search work (step by step)?
Idea: find a bound exponentially, then binary search.
Steps:
How does Quickselect (k-th smallest) work (step by step)?
Idea: QuickSort’s partition but recurse into one side.
Steps:
How does KMP (Knuth-Morris-Pratt) string search work (step by step)?
Idea: skip redundant comparisons using LPS (prefix function).
Steps:
How does Rabin–Karp work (step by step)?
Idea: rolling hash to compare windows quickly.
Steps:
How do Trie insert/search work (step by step)?
Idea: path per character.
Insert:
How does Kadane’s algorithm work (step by step)?
Idea: best subarray ending here → best overall.
Steps:
How does Edit Distance (Levenshtein) work (step by step)?
Idea: DP on prefixes with insert/delete/replace.
Steps:
How does Coin Change (min coins) work (step by step)?
Idea: unbounded knapsack on amount.
Steps:
How does LCS (Longest Common Subsequence) work (step by step)?
Idea: DP on prefixes; match → extend, else pick best skip.
Steps:
How does BFS work (step by step)?
Idea: layer-by-layer traversal via queue.
Steps:
How does DFS work (step by step)?
Idea: go deep before backtracking.
Steps:
How does Dijkstra work (step by step)?
Idea: greedy expansion by smallest tentative distance.
Steps:
How does Bellman–Ford work (step by step)?
Idea: relax all edges repeatedly.
Steps:
How does Floyd–Warshall work (step by step)?
Idea: DP with intermediate nodes.
Steps:
How does Kruskal’s MST work (step by step)?
Idea: add edges from lightest up, avoid cycles (Union–Find).
Steps:
How does Prim’s MST work (step by step)?
Idea: grow one tree by always picking the lightest crossing edge.
Steps:
How does Topological Sort (Kahn’s algorithm) work (step by step)?
Idea: peel nodes with indegree 0.
Steps:
How does Union–Find (Disjoint Set) work (step by step)?
Idea: parent pointers + path compression + union by rank.
Steps:
How does Sliding Window Maximum (deque) work (step by step)?
Idea: maintain decreasing deque of indices.
Steps:
How does Median of Data Stream (two heaps) work (step by step)?
Idea: max-heap for lower half, min-heap for upper half.
Steps:
How does Lazy Propagation in Segment Trees work (step by step)?
Idea: Delay range updates until necessary → avoid O(n) per update.
Steps:
lazy[] array stores pending updates for children.
How does Prefix Sum (accumulation array) work (step by step)?
Idea: Precompute running sums → answer range queries in O(1).
Steps:
Build P where P[i] = A[0] + A[1] + … + A[i]; range sum [l, r] = P[r] – P[l-1].
How does the Sieve of Eratosthenes work (step by step)?
Idea: Cross out multiples to find all primes ≤ n.
Steps:
Initialize isPrime[0..n] = true; mark 0 and 1 as false.
How does Fast Power (binary exponentiation) work (step by step)?
Idea: Break power into halves using binary representation of exponent.
Steps:
How does Prefix Min/Max accumulation work (step by step)?
Idea: Similar to prefix sums but store min/max up to i.
Steps:
How does a Difference Array work (step by step)?
Idea: Apply range updates in O(1), accumulate later.
Steps:
How does Sqrt Decomposition work (step by step)?
Idea: Partition array into √n blocks, precompute aggregates.
Steps:
How does a Fenwick Tree work (step by step)?
Idea: Store partial sums in tree structure using LSB (least significant bit).
Steps:
How does the Extended Euclidean Algorithm work (step by step)?
Idea: Find gcd(a,b) + coefficients x,y such that ax+by=gcd.
Steps:
How do you compute a modular inverse (step by step)?
Method 1 (if the modulus m is prime): Fermat's little theorem gives a⁻¹ ≡ a^(m−2) (mod m).
Method 2 (any m with gcd(a, m) = 1): Extended Euclidean algorithm.
How does Reservoir Sampling work (step by step)?
Idea: Pick k random samples from a stream of unknown length.
Steps:
How does Fisher–Yates Shuffle work (step by step)?
Idea: Generate uniform random permutation.
Steps:
How does the Greatest Common Divisor (Euclid’s algorithm) work (step by step)?
Idea: gcd(a,b) = gcd(b, a mod b).
Steps:
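The Euclid recurrence above, as an iterative sketch:

```python
def gcd(a, b):
    """Euclid's algorithm: repeatedly replace (a, b) with (b, a mod b)."""
    while b:
        a, b = b, a % b
    return a

print(gcd(48, 18))  # 6
```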
How does the Least Common Multiple (LCM) work (step by step)?
Idea: lcm(a,b) * gcd(a,b) = a * b.
Steps:
How does Modular Exponentiation (Fast Pow) work (step by step)?
Idea: exponentiation by squaring.
Steps:
How does the Miller–Rabin primality test work (step by step)?
Idea: randomized primality test based on Fermat’s little theorem.
Steps:
How does the Chinese Remainder Theorem (CRT) work (step by step)?
Idea: Solve system of congruences with coprime moduli.
Steps:
How does Union by Rank/Size improve DSU (Union-Find)?
Idea: attach smaller tree under larger to limit height.
Steps:
How does DSU with Rollback work (step by step)?
Idea: allow undo of union operations (useful in offline queries).
Steps:
How does a Persistent Segment Tree work (step by step)?
Idea: create new versions of tree on updates without destroying old ones.
Steps:
How does Segment Tree with Lazy Propagation differ from Fenwick Tree?
Answer: A segment tree with lazy propagation supports arbitrary range updates and range queries in O(log n). A Fenwick tree is simpler and uses less memory, but natively supports only point updates and prefix queries (range updates need extra tricks, e.g. two BITs).
How does Sparse Table (ST) work (step by step)?
Idea: preprocess for idempotent queries (min, max, gcd).
Steps:
How does Modular Multiplication without overflow work (step by step)?
Idea: compute (a·b) mod m safely for large a,b.
Steps:
How does Randomized QuickSort avoid worst-case?
Idea: choose pivot randomly.
Steps:
How does Reservoir Sampling select 1 item uniformly from a stream?
Steps:
How does an AVL Tree work (step by step)?
Idea: Self-balancing BST where balance factor (height(left)–height(right)) ∈ {-1,0,1}.
Steps:
How does a Red-Black Tree work (step by step)?
Idea: Self-balancing BST with coloring rules ensuring ~log n height.
Properties:
How does a Splay Tree work (step by step)?
Idea: BST that “splays” accessed nodes to root with rotations → fast for repeated accesses.
Steps:
How does a Treap (Tree + Heap) work (step by step)?
Idea: BST on keys + heap on random priorities.
Steps:
How does a Skip List work (step by step)?
Idea: Multiple levels of linked lists with “express lanes.”
Steps:
How does a Trie work (step by step)?
Idea: Tree storing characters of strings.
Steps:
How does a Compressed Trie (Radix Tree) differ?
Answer: Combines chains of single children into one edge with substring → saves space, faster traversals.
How does a Suffix Trie / Automaton work (step by step)?
Idea: Stores all suffixes of a string, automaton compresses states.
Steps:
How does a Binary Heap work (step by step)?
Idea: Complete binary tree with heap property (parent ≤ children).
Steps:
How does a Binomial Heap work (step by step)?
Idea: Forest of binomial trees where order encodes size.
Steps:
How does a Fibonacci Heap work (step by step)?
Idea: Heap with relaxed structure → faster amortized ops.
Steps:
How does a Hash Table with Chaining work (step by step)?
Idea: Array of buckets, collisions stored in linked lists.
Steps:
How does Open Addressing (Linear Probing) work (step by step)?
Idea: Resolve collisions by probing next slots.
Steps:
How does Double Hashing work (step by step)?
Idea: Use second hash function for probe step.
Steps:
How does a Persistent Array work (step by step)?
Idea: Keep versions of array after updates.
Steps:
How does the Naïve String Search algorithm work (step by step)?
Idea: Check the pattern at every possible position.
Steps:
How does the Knuth-Morris-Pratt (KMP) algorithm work (step by step)?
Idea: Use prefix function (LPS) to skip redundant comparisons.
Steps:
How does the Rabin–Karp algorithm work (step by step)?
Idea: Use rolling hash for fast comparisons.
Steps:
How does the Z-algorithm for pattern matching work (step by step)?
Idea: Compute Z[i] = length of longest substring starting at i matching prefix.
Steps:
How does the Prefix Function (π array) work (step by step)?
Idea: For each index, longest prefix also suffix of substring.
Steps:
How does the Z-function work (step by step)?
Idea: Z[i] = length of longest prefix starting at i.
Steps:
How does a Suffix Array work (step by step)?
Idea: Array of all suffixes sorted lexicographically.
Steps:
How does a Suffix Tree work (step by step)?
Idea: Compressed trie of all suffixes.
Steps:
How does a Suffix Automaton (SAM) work (step by step)?
Idea: Minimal DFA accepting all suffixes of a string.
Steps:
How does the Manacher’s Algorithm work (step by step)?
Idea: Find longest palindrome centered at each position in O(n).
Steps:
How does Polynomial Rolling Hash work (step by step)?
Idea: Represent string as polynomial in base p, mod M.
Steps:
How does Huffman Coding work (step by step)?
Idea: Variable-length codes based on frequency.
Steps:
What is the time complexity of QuickSort?
Average: O(n log n)
Worst: O(n²) if pivot choices are poor
Space: O(log n) (recursive stack)
What is the time complexity of accessing an element in an array?
O(1), because arrays use contiguous memory and indexing is direct.
What is the time complexity of inserting at the beginning of an array?
O(n), since all elements must shift.
What’s a sliding window technique, and when is it used?
It maintains a moving subset of elements within an array/string to optimize problems involving subarrays or substrings (e.g., maximum sum subarray, longest substring without repeats).
What is the difference between singly and doubly linked lists?
Singly: Node has next pointer only (less memory, one-way traversal).
Doubly: Node has prev and next (more memory, but easier insert/delete both ends).
What’s the benefit of linked lists over arrays?
Dynamic memory allocation and fast insertions/deletions (no shifting required).
What’s the difference between a stack and a queue?
Stack: LIFO (Last In, First Out).
Queue: FIFO (First In, First Out).
How do you implement a queue with two stacks?
Use one stack for incoming elements, another for outgoing. Transfer when needed.
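A minimal Python sketch of this two-stack queue (class and method names are ours):

```python
class QueueWithTwoStacks:
    """FIFO queue built from two LIFO stacks (Python lists)."""

    def __init__(self):
        self._in = []   # receives enqueued items
        self._out = []  # serves dequeues in FIFO order

    def enqueue(self, x):
        self._in.append(x)  # O(1)

    def dequeue(self):
        # Refill the out-stack only when it is empty; each element
        # is moved at most once, so dequeue is O(1) amortized.
        if not self._out:
            while self._in:
                self._out.append(self._in.pop())
        return self._out.pop()
```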
What’s a deque, and what’s its advantage?
A double-ended queue allows O(1) insertion/deletion at both ends.
What is the average time complexity of searching in a hash table?
O(1) average, O(n) worst case (if many collisions).
What are common collision resolution techniques?
Chaining (linked lists), Open Addressing (linear probing, quadratic probing, double hashing).
What’s the difference between BST and AVL tree?
BST: No balance enforcement, can degrade to O(n).
AVL: Self-balancing BST, ensures O(log n) operations.
What’s a red-black tree?
A self-balancing BST with color properties ensuring O(log n) height, widely used in language libraries (e.g., TreeMap in Java).
What’s a trie, and where is it used?
A prefix tree where each node represents a character. Used in autocomplete, spell check, and IP routing.
What are the common graph representations?
Adjacency list (O(V+E) space, efficient for sparse graphs) and adjacency matrix (O(V²) space, efficient for dense graphs).
What’s Dijkstra’s algorithm?
Shortest path algorithm for weighted graphs (non-negative weights). Complexity: O((V+E) log V) with a priority queue.
What is topological sort, and when is it used?
Ordering of DAG nodes such that for every edge u → v, u comes before v. Used in scheduling, build systems, dependency resolution.
What’s the difference between min-heap and max-heap?
Min-heap: Root is smallest element.
Max-heap: Root is largest element.
What is a segment tree?
A tree used for range queries (e.g., sum, min, max). Time complexity: O(log n) for query/update.
What is a Fenwick Tree (Binary Indexed Tree)?
A data structure for efficient prefix sums. Complexity: O(log n) for update/query.
What is a Bloom filter?
A probabilistic structure for set membership. Very space efficient, allows false positives but no false negatives.
What is a Disjoint Set (Union-Find)?
A structure to track disjoint sets with union and find operations, optimized with path compression and union by rank.
What is divide and conquer, and give an example?
Breaks a problem into smaller subproblems, solves recursively, and combines results. Example: Merge Sort (O(n log n)).
Difference between greedy and dynamic programming?
Greedy: Makes locally optimal choices (e.g., activity selection).
DP: Stores intermediate results to optimize overlapping subproblems (e.g., knapsack).
What’s the Master Theorem used for?
Analyzing time complexity of divide-and-conquer recurrences like T(n) = aT(n/b) + f(n).
What’s memoization vs tabulation in DP?
Memoization: Top-down recursion with cache.
Tabulation: Bottom-up iterative table filling.
Why is MergeSort stable but QuickSort is not?
MergeSort preserves relative order of equal elements (due to merge step). QuickSort may swap equal elements arbitrarily.
What’s the best sorting algorithm for nearly sorted data?
Insertion Sort → O(n) if array is almost sorted.
Why is HeapSort less cache-friendly than QuickSort?
HeapSort accesses memory non-sequentially (tree jumps), while QuickSort benefits from cache locality in partitioning.
What is exponential search?
Finds range exponentially, then applies binary search. Used for unbounded or very large arrays.
How do you search efficiently in a sorted matrix?
Start from top-right (or bottom-left), eliminate row/col each step → O(m+n).
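An illustrative Python version of the staircase search (function name is ours; assumes every row and column is sorted ascending):

```python
def search_sorted_matrix(matrix, target):
    """Search a row- and column-sorted matrix in O(m + n).

    Start at the top-right corner: moving left decreases the value,
    moving down increases it, so each step eliminates one row or column.
    """
    if not matrix or not matrix[0]:
        return False
    r, c = 0, len(matrix[0]) - 1
    while r < len(matrix) and c >= 0:
        v = matrix[r][c]
        if v == target:
            return True
        if v > target:
            c -= 1   # everything below in this column is even larger
        else:
            r += 1   # everything left in this row is even smaller
    return False
```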
What’s the difference between Big-O, Big-Theta, and Big-Omega?
Big-O: Asymptotic upper bound on growth.
Big-Theta: Tight bound (both upper and lower).
Big-Omega: Asymptotic lower bound.
Note: these bound a function's growth rate; they are not synonyms for worst/average/best case, although Big-O is most often quoted for the worst case.
What’s the difference between NP, NP-hard, and NP-complete?
NP: Verifiable in polynomial time.
NP-hard: At least as hard as NP, not necessarily in NP.
NP-complete: In NP and NP-hard.
What is amortized analysis?
Average time per operation over a sequence. Example: Dynamic array resizing (O(1) amortized insert).
Why is comparison-based sorting limited to O(n log n)?
Because any comparison sort must distinguish between n! permutations, requiring log(n!) ~ O(n log n) comparisons.
What’s the difference between Prim’s and Kruskal’s algorithm?
Both build a Minimum Spanning Tree. Prim's grows a single tree from a start node, repeatedly adding the cheapest edge crossing the cut (good for dense graphs). Kruskal's sorts all edges and adds the lightest edge that doesn't create a cycle, using Union-Find (good for sparse graphs).
When would you use Bellman-Ford instead of Dijkstra?
When graph has negative edge weights. Complexity O(VE).
What is Floyd-Warshall algorithm used for?
All-pairs shortest paths in weighted graphs. Complexity: O(V³).
What is A* search?
Pathfinding algorithm using f(n) = g(n) + h(n), where g = cost so far, h = heuristic estimate. Used in maps/games.
What is a skip list?
A probabilistic data structure with multiple linked list levels for O(log n) search/insert/delete.
What is a B-Tree?
A balanced search tree optimized for disk access. Used in databases and file systems.
Difference between B-Tree and B+ Tree?
B-Tree: Keys and values stored in all nodes.
B+ Tree: Values only in leaf nodes, internal nodes store keys for indexing.
What is an LRU cache, and how is it implemented?
Least Recently Used cache → evicts oldest item. Implemented using a HashMap + Doubly Linked List (O(1) get/put).
What if your dataset doesn’t fit in memory for sorting?
Use external merge sort (divide into chunks, sort in memory, merge chunks on disk).
What if the array is almost sorted (few inversions)?
Use Insertion Sort or Timsort (used in Python/Java), which adapts to data.
What if you need to check duplicates in a huge dataset that can’t fit in memory?
Use Bloom Filter (space-efficient, probabilistic) or Hashing with disk spillover.
What is the two-pointer technique?
Using two indices that move across the array (same or opposite directions) to solve problems in linear time, often for searching pairs or subarrays.
Example of two-pointer usage in sorted arrays?
“Two Sum II” → Given a sorted array, find two numbers that sum to a target. Move left/right pointers inward until sum is found. O(n).
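A Python sketch of the two-pointer approach (0-based indices; function name is ours):

```python
def two_sum_sorted(nums, target):
    """Return 0-based indices (i, j) with nums[i] + nums[j] == target,
    assuming nums is sorted ascending; None if no pair exists. O(n)."""
    lo, hi = 0, len(nums) - 1
    while lo < hi:
        s = nums[lo] + nums[hi]
        if s == target:
            return lo, hi
        if s < target:
            lo += 1   # need a larger sum
        else:
            hi -= 1   # need a smaller sum
    return None
```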
Why is two pointers better than nested loops?
Reduces O(n²) brute force to O(n) or O(n log n), leveraging array ordering.
What is sliding window technique?
Keep a moving subarray (window) of fixed or variable size to optimize substring/subarray problems (e.g., max sum, unique chars).
Example of fixed-size sliding window?
Maximum sum of k consecutive elements → maintain running sum, update in O(1) per shift.
Example of variable-size sliding window?
Longest substring without repeating characters → expand right, shrink left if duplicate found. O(n).
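A Python sketch of this variable-size window (function name is ours):

```python
def longest_unique_substring(s):
    """Length of the longest substring without repeating characters.

    Expand the right edge one character at a time; on a duplicate,
    jump the left edge past the previous occurrence. O(n)."""
    last = {}          # char -> most recent index seen
    left = best = 0
    for right, ch in enumerate(s):
        if ch in last and last[ch] >= left:
            left = last[ch] + 1        # shrink past the duplicate
        last[ch] = right
        best = max(best, right - left + 1)
    return best
```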
What’s the purpose of Union-Find?
Efficiently manage partitions of a set and answer connectivity queries (e.g., are x and y connected?).
Optimizations in Union-Find?
Path compression (flatten trees) and union by rank/size (attach smaller tree under larger). Amortized O(α(n)) ≈ O(1).
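A minimal DSU sketch in Python using path halving (a common path-compression variant) and union by size (class name is ours):

```python
class DSU:
    """Disjoint Set Union with path halving and union by size."""

    def __init__(self, n):
        self.parent = list(range(n))
        self.size = [1] * n

    def find(self, x):
        # Path halving: point each visited node at its grandparent.
        while self.parent[x] != x:
            self.parent[x] = self.parent[self.parent[x]]
            x = self.parent[x]
        return x

    def union(self, a, b):
        ra, rb = self.find(a), self.find(b)
        if ra == rb:
            return False              # already connected
        if self.size[ra] < self.size[rb]:
            ra, rb = rb, ra           # attach smaller tree under larger
        self.parent[rb] = ra
        self.size[ra] += self.size[rb]
        return True
```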
Applications of Union-Find in real systems?
Network connectivity, social networks (“friend groups”), Kruskal’s MST.
What is a greedy algorithm?
An algorithm that makes the locally optimal choice at each step, hoping to reach a global optimum.
Classic example of greedy?
Interval scheduling / activity selection → always pick the next activity with earliest finish time.
When does greedy fail?
When local optimum doesn’t lead to global optimum (e.g., 0/1 knapsack problem).
When does greedy work?
When the problem has optimal substructure + greedy choice property (e.g., Huffman coding, MST, Dijkstra).
What is dynamic programming?
Optimization technique that breaks problems into overlapping subproblems and stores results to avoid recomputation.
Difference between top-down (memoization) and bottom-up (tabulation)?
Top-down: Recursive + cache.
Bottom-up: Iterative + table.
Example of classic DP interview questions?
Coin Change (min coins to make value).
Longest Common Subsequence.
0/1 Knapsack.
Edit Distance (Levenshtein).
Matrix Path (min cost path).
What if array contains negative numbers in “max subarray sum” (Kadane’s algorithm)?
Kadane still works → tracks max ending here, max so far. Complexity O(n).
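Kadane's algorithm in a few lines of Python (function name is ours; assumes a non-empty array):

```python
def max_subarray_sum(nums):
    """Kadane's algorithm: O(n). Initializing with the first element
    (rather than 0) makes it correct for all-negative arrays."""
    best = cur = nums[0]
    for x in nums[1:]:
        cur = max(x, cur + x)   # either extend the run or restart at x
        best = max(best, cur)
    return best
```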
What if the DP state space is too large to store?
Optimize with state compression (bitmask, rolling arrays) or meet-in-the-middle.
What if recursion depth causes stack overflow in DP?
Convert to bottom-up tabulation or iterative stack.
What’s the difference between stable and unstable sorting?
Stable sorts keep equal elements in the same relative order (MergeSort, InsertionSort). Unstable sorts may change order (QuickSort, HeapSort).
How do you find the majority element (> n/2 times) in an array?
Use Boyer-Moore Voting Algorithm → O(n) time, O(1) space.
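A Python sketch of Boyer-Moore voting (function name is ours; assumes a majority element actually exists):

```python
def majority_element(nums):
    """Boyer-Moore voting: O(n) time, O(1) space.

    Equal elements vote +1 for the current candidate, others -1;
    when the count hits zero, a new candidate is adopted."""
    candidate, count = None, 0
    for x in nums:
        if count == 0:
            candidate = x
        count += 1 if x == candidate else -1
    return candidate
```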
How do you find the k-th largest element in an array?
Quickselect: O(n) average.
Heap of size k: O(n log k).
How do you find duplicates in O(1) extra space?
Modify array values (marking indices negative) or use cycle detection (Floyd’s Tortoise & Hare).
What is the Knuth-Morris-Pratt (KMP) algorithm?
String search algorithm using prefix function to avoid redundant comparisons. Complexity O(n + m).
What is Rabin-Karp algorithm?
String matching using rolling hash. Useful for multiple pattern matching. Average O(n+m), worst O(nm).
What is a suffix array?
An array of all suffixes of a string sorted lexicographically. Used in substring search, LCP queries, bioinformatics.
What is a trie, and how is it applied in string problems?
A prefix tree storing characters along paths. Used in autocomplete, dictionary search, IP routing.
What’s the difference between 0/1 knapsack and unbounded knapsack?
0/1: Each item can be chosen at most once.
Unbounded: Items can be chosen unlimited times.
How do you solve the Longest Common Subsequence (LCS)?
DP: dp[i][j] = 1 + dp[i-1][j-1] if match else max(dp[i-1][j], dp[i][j-1]). Complexity O(mn).
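The recurrence above as Python, using a rolling row for O(n) extra space (function name is ours):

```python
def lcs_length(a, b):
    """Longest Common Subsequence length, bottom-up DP in O(mn) time.

    prev holds the DP row for the previous prefix of a; cur[j] combines
    the diagonal (match), up, and left cells."""
    prev = [0] * (len(b) + 1)
    for ca in a:
        cur = [0] * (len(b) + 1)
        for j, cb in enumerate(b, 1):
            cur[j] = prev[j - 1] + 1 if ca == cb else max(prev[j], cur[j - 1])
        prev = cur
    return prev[-1]
```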
What’s the difference between LCS and Longest Common Substring?
LCS: Characters must appear in order, not necessarily contiguous.
Substring: Must be contiguous.
How do you solve the Edit Distance problem?
DP: dp[i][j] = min(insert, delete, replace). Complexity O(mn).
How do you detect a cycle in a directed graph?
Use DFS with recursion stack or Kahn’s algorithm (topological sort).
What is Tarjan’s algorithm used for?
Finding strongly connected components (SCCs) in a directed graph. Complexity O(V+E).
What is a bipartite graph and how do you check it?
A graph whose nodes can be divided into two sets with no intra-set edges. Check with BFS coloring.
How do you find the median of a data stream?
Maintain two heaps: max-heap for lower half, min-heap for upper half. Median is root(s).
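A Python sketch of the two-heap approach using heapq (class name is ours; the max-heap is simulated by negating values):

```python
import heapq

class StreamMedian:
    """Running median: max-heap (negated values) holds the lower half,
    min-heap holds the upper half; sizes differ by at most 1."""

    def __init__(self):
        self.lo = []  # max-heap via negation
        self.hi = []  # min-heap

    def add(self, x):
        heapq.heappush(self.lo, -x)
        # Keep every element of lo <= every element of hi.
        heapq.heappush(self.hi, -heapq.heappop(self.lo))
        # Rebalance so lo has an equal count or one extra element.
        if len(self.hi) > len(self.lo):
            heapq.heappush(self.lo, -heapq.heappop(self.hi))

    def median(self):
        if len(self.lo) > len(self.hi):
            return -self.lo[0]
        return (-self.lo[0] + self.hi[0]) / 2
```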
How do you merge k sorted lists?
Use a min-heap of size k → repeatedly extract min. Complexity O(n log k).
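A Python sketch with heapq, merging lists of numbers (function name is ours); the list index in each tuple breaks ties between equal values:

```python
import heapq

def merge_k_sorted(lists):
    """Merge k sorted lists via a min-heap of (value, list_idx, elem_idx).

    Each of the n total elements passes through a heap of size <= k,
    giving O(n log k)."""
    heap = [(lst[0], i, 0) for i, lst in enumerate(lists) if lst]
    heapq.heapify(heap)
    out = []
    while heap:
        val, i, j = heapq.heappop(heap)
        out.append(val)
        if j + 1 < len(lists[i]):
            heapq.heappush(heap, (lists[i][j + 1], i, j + 1))
    return out
```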
How do you implement Dijkstra’s algorithm efficiently?
Use a priority queue (min-heap) for selecting next closest node.
What is amortized O(1)? Give an example.
Average cost per operation is constant over a sequence (e.g., dynamic array resizing, push in stack).
What is divide & conquer with an example?
Break into subproblems → solve recursively → combine. Example: MergeSort.
What is a flow network and max-flow problem?
A directed graph with capacities; max-flow finds the maximum possible flow from source to sink. Ford-Fulkerson algorithm solves it.
What is the difference between greedy vs DP approaches?
Greedy: Local optimal choice → global optimal (works only in certain problems like MST).
DP: Stores overlapping subproblem solutions (works broadly, slower).
How does MergeSort work and why is it O(n log n)?
MergeSort uses divide & conquer: it splits the array into halves recursively until single elements remain, then merges them in sorted order. Each merge step takes O(n), and since the array is split log n times, total = O(n log n). It’s stable and good for linked lists but requires O(n) extra space.
How does QuickSort work and why can it degrade to O(n²)?
QuickSort picks a pivot, partitions array into ≤ pivot and > pivot, then recurses. With good pivot choices (random/median), average = O(n log n). Worst case = O(n²) when array is already sorted and pivot is always min/max. It’s in-place, so very fast in practice.
Why is HeapSort O(n log n) and how does heapify work?
HeapSort builds a max-heap (O(n)) and repeatedly extracts max element (O(log n)) for n elements → O(n log n). Heapify “sifts down” nodes to maintain heap property. Unlike MergeSort, it’s in-place but not stable.
Why is Binary Search O(log n)?
Each step halves the search space by comparing mid element to target. After k steps, only n / 2ᵏ elements remain. Solving n / 2ᵏ = 1 → k = log₂ n. Hence O(log n).
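The halving loop as Python (function name is ours):

```python
def binary_search(arr, target):
    """Return the index of target in sorted arr, or -1 if absent.

    Each iteration halves the interval [lo, hi], so at most
    O(log n) comparisons are made."""
    lo, hi = 0, len(arr) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if arr[mid] == target:
            return mid
        if arr[mid] < target:
            lo = mid + 1   # target lies in the upper half
        else:
            hi = mid - 1   # target lies in the lower half
    return -1
```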
How does Exponential Search work?
First, find a range where the element could be by checking indices 1, 2, 4, 8… until overshoot. Then apply Binary Search in that range. Useful for infinite/unknown-length arrays. Complexity O(log n).
How does KMP string matching achieve O(n+m)?
KMP preprocesses the pattern to build an LPS (longest proper prefix which is also suffix) table. When mismatch occurs, it skips characters based on LPS instead of starting over. This prevents redundant comparisons, so total complexity is linear in text+pattern.
How does Rabin-Karp detect substring matches efficiently?
Uses rolling hash: computes hash of pattern and sliding windows of text. If hash matches, then check substring. Hash update O(1) per shift. Average O(n+m), but worst O(nm) if many collisions. Used in plagiarism detection, multi-pattern search.
How does Kadane’s algorithm find maximum subarray sum in O(n)?
It iterates once, maintaining two values: max_ending_here (best sum ending at current index) and max_so_far (global max). If max_ending_here < 0, reset to current element. This ensures we only extend subarrays that increase sum.
How does Edit Distance (Levenshtein) DP work?
Define dp[i][j] as min edits to convert first i chars of word1 to first j of word2.
How does Coin Change (minimum coins) DP work?
Define dp[amount] = min coins to make amount.
How does BFS guarantee shortest path in an unweighted graph?
BFS explores level by level. The first time we reach a node, it must be via the shortest path (fewest edges). Hence BFS gives shortest path in O(V+E).
Why does Dijkstra’s algorithm fail with negative weights?
Dijkstra assumes once a node is “settled” with shortest distance, it won’t be updated. With negative weights, this assumption breaks (a later path could improve it). Bellman-Ford handles negatives because it relaxes all edges repeatedly.
How does Union-Find detect cycles?
For each edge (u,v), check if u and v already belong to the same set (find). If yes → cycle. If not, union them. Path compression + union by rank make operations nearly O(1).
How does the two-heap method maintain median of a stream?
Maintain a max-heap of lower half and min-heap of upper half. Balance sizes so they differ by at most 1. Median = root(s). Each insert O(log n), query median O(1).
How does sliding window maximum algorithm achieve O(n)?
Use a deque storing indices. Maintain decreasing order inside deque. At each step, remove out-of-window indices and add new element while popping smaller ones. Front of deque always max.
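A Python sketch of the deque approach (function name is ours):

```python
from collections import deque

def sliding_window_max(nums, k):
    """Max of every window of size k in O(n) total.

    The deque stores indices whose values are strictly decreasing;
    the front index is always the current window's maximum."""
    dq, out = deque(), []
    for i, x in enumerate(nums):
        if dq and dq[0] <= i - k:
            dq.popleft()              # index fell out of the window
        while dq and nums[dq[-1]] <= x:
            dq.pop()                  # smaller values can never win again
        dq.append(i)
        if i >= k - 1:
            out.append(nums[dq[0]])
    return out
```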
How does Floyd-Warshall find all-pairs shortest paths?
Iteratively improves path estimates using dynamic programming. For each k (intermediate node), update dp[i][j] = min(dp[i][j], dp[i][k] + dp[k][j]). Time O(V³), works with negatives (no negative cycles).
How does A* pathfinding improve on Dijkstra?
Adds heuristic h(n) (estimate to goal). f(n) = g(n) + h(n). Guides search toward goal instead of exploring blindly. If h is admissible (never overestimates), A* is optimal.
How does Ford-Fulkerson algorithm compute max flow?
While there exists an augmenting path (via BFS/DFS), push flow along it and update residual capacities. Repeats until no path exists. Complexity O(E*max_flow) or O(VE²) with Edmonds-Karp.
What is MergeSort and how does it work?
MergeSort is a divide & conquer algorithm. It splits the array into halves recursively until one element remains, then merges sorted halves. Stable, good for linked lists.
What is QuickSort and how does it work?
QuickSort picks a pivot, partitions the array into smaller/larger, and recurses. Very cache-friendly and fast in practice. Worst-case when pivot is always poor.
What is HeapSort and how does it work?
HeapSort builds a max-heap, then repeatedly extracts the max and rebuilds heap. In-place but not stable.
What is Counting Sort and how does it work?
Counting Sort counts frequency of elements and reconstructs sorted output. Works only with integers in small range.
What is Binary Search and how does it work?
Binary Search repeatedly halves a sorted array until target is found or interval empty.
What is Exponential Search and how does it work?
Exponential Search finds range by checking 1, 2, 4, 8… then applies Binary Search. Useful for infinite/unknown arrays.
What is KMP (Knuth-Morris-Pratt) and how does it work?
Preprocesses pattern into an LPS table. When mismatch occurs, skips ahead instead of restarting.
What is Rabin-Karp and how does it work?
Uses rolling hash for substring matching. If hash matches, confirm with direct compare.
What is a Trie and how does it work?
Tree-based structure where each path stores a string prefix. Used in autocomplete and dictionaries.
What is Kadane’s Algorithm and how does it work?
Finds maximum subarray sum. Iterates maintaining max_ending_here and max_so_far.
What is Edit Distance (Levenshtein) and how does it work?
DP table compares substrings with insert, delete, replace operations.
What is Coin Change (Min Coins) and how does it work?
Bottom-up DP: dp[x] = min(dp[x], dp[x-coin]+1).
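The bottom-up DP as Python (function name is ours; returns -1 when the amount is unreachable):

```python
def min_coins(coins, amount):
    """dp[x] = fewest coins summing to exactly x (unbounded supply)."""
    INF = float("inf")
    dp = [0] + [INF] * amount
    for x in range(1, amount + 1):
        for c in coins:
            if c <= x and dp[x - c] + 1 < dp[x]:
                dp[x] = dp[x - c] + 1   # use coin c on top of dp[x - c]
    return dp[amount] if dp[amount] != INF else -1
```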
What is Longest Common Subsequence (LCS) and how does it work?
DP: If chars match, 1+dp[i-1][j-1], else max(dp[i-1][j], dp[i][j-1]).
What is BFS and how does it work?
Traverses level by level using queue. Guarantees shortest path in unweighted graphs.
What is DFS and how does it work?
Traverses graph by going deep via recursion/stack before backtracking. Used in cycle detection, components.
What is Dijkstra’s Algorithm and how does it work?
Uses priority queue to always expand nearest node, updating distances. Works for non-negative weights.
What is Bellman-Ford and how does it work?
Relaxes all edges V-1 times, detects negative cycles.
What is Floyd-Warshall and how does it work?
DP that updates all-pairs shortest path with intermediates.
What is Kruskal’s Algorithm and how does it work?
Sort edges by weight, union-find to build MST.
What is Prim’s Algorithm and how does it work?
Grows MST from a start node, always adding min edge crossing cut.
What is a Heap and how does it work?
A binary tree where parent ≤ children (min-heap). Insert = bubble up, extract = bubble down.
What is a Segment Tree and how does it work?
Binary tree storing range aggregates (sum, min, max). Updates/queries in O(log n).
What is a Fenwick Tree (BIT) and how does it work?
Array-based tree storing prefix sums using least significant bit.
What is a Bloom Filter and how does it work?
Probabilistic structure with multiple hash functions. Can say “maybe in set” or “definitely not.” False positives possible.
What is Union-Find and how does it work?
Tracks disjoint sets with find and union. Path compression + union by rank make it nearly O(1). Used in Kruskal’s MST, connectivity problems.
How does MergeSort work (step by step)?
Idea: divide & conquer + merge sorted halves.
Steps:
How does QuickSort work (step by step)?
Idea: partition around a pivot; recurse on partitions.
Steps:
How does HeapSort work (step by step)?
Idea: build max-heap; repeatedly extract max.
Steps:
How does Counting Sort work (step by step)?
Idea: count frequencies over small integer range.
Steps:
How does Binary Search work (step by step)?
Idea: halve the search interval on a sorted array.
Steps:
How does Exponential Search work (step by step)?
Idea: find a bound exponentially, then binary search.
Steps:
How does Quickselect (k-th smallest) work (step by step)?
Idea: QuickSort’s partition but recurse into one side.
Steps:
How does KMP (Knuth-Morris-Pratt) string search work (step by step)?
Idea: skip redundant comparisons using LPS (prefix function).
Steps:
How does Rabin–Karp work (step by step)?
Idea: rolling hash to compare windows quickly.
Steps:
How do Trie insert/search work (step by step)?
Idea: path per character.
Insert:
How does Kadane’s algorithm work (step by step)?
Idea: best subarray ending here → best overall.
Steps:
How does Edit Distance (Levenshtein) work (step by step)?
Idea: DP on prefixes with insert/delete/replace.
Steps:
How does Coin Change (min coins) work (step by step)?
Idea: unbounded knapsack on amount.
Steps:
How does LCS (Longest Common Subsequence) work (step by step)?
Idea: DP on prefixes; match → extend, else pick best skip.
Steps:
How does BFS work (step by step)?
Idea: layer-by-layer traversal via queue.
Steps:
How does DFS work (step by step)?
Idea: go deep before backtracking.
Steps:
How does Dijkstra work (step by step)?
Idea: greedy expansion by smallest tentative distance.
Steps:
How does Bellman–Ford work (step by step)?
Idea: relax all edges repeatedly.
Steps:
How does Floyd–Warshall work (step by step)?
Idea: DP with intermediate nodes.
Steps:
How does Kruskal’s MST work (step by step)?
Idea: add edges from lightest up, avoid cycles (Union–Find).
Steps:
How does Prim’s MST work (step by step)?
Idea: grow one tree by always picking the lightest crossing edge.
Steps:
How does Topological Sort (Kahn’s algorithm) work (step by step)?
Idea: peel nodes with indegree 0.
Steps:
How does Union–Find (Disjoint Set) work (step by step)?
Idea: parent pointers + path compression + union by rank.
Steps:
How does Sliding Window Maximum (deque) work (step by step)?
Idea: maintain decreasing deque of indices.
Steps:
How does Median of Data Stream (two heaps) work (step by step)?
Idea: max-heap for lower half, min-heap for upper half.
Steps:
How does Lazy Propagation in Segment Trees work (step by step)?
Idea: Delay range updates until necessary → avoid O(n) per update.
Steps:
lazy[] array stores pending updates for children.
How does Prefix Sum (accumulation array) work (step by step)?
Idea: Precompute running sums → answer range queries in O(1).
Steps:
Build P where P[i] = A[0] + A[1] + … + A[i]. Sum of range [l, r] = P[r] – P[l-1].
How does the Sieve of Eratosthenes work (step by step)?
Idea: Cross out multiples to find all primes ≤ n.
Steps:
Initialize isPrime[0..n] = true; mark 0 and 1 as false.
How does Fast Power (binary exponentiation) work (step by step)?
Idea: Break power into halves using binary representation of exponent.
Steps:
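A minimal sketch of exponentiation by squaring in Python (function name is ours; modulus is optional):

```python
def fast_pow(base, exp, mod=None):
    """Exponentiation by squaring: O(log exp) multiplications.

    Processes the exponent's bits from least significant upward,
    squaring the base at each step."""
    result = 1
    while exp > 0:
        if exp & 1:   # current bit set -> multiply this power in
            result = result * base if mod is None else result * base % mod
        base = base * base if mod is None else base * base % mod
        exp >>= 1
    return result
```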
How does Prefix Min/Max accumulation work (step by step)?
Idea: Similar to prefix sums but store min/max up to i.
Steps:
How does a Difference Array work (step by step)?
Idea: Apply range updates in O(1), accumulate later.
Steps:
How does Sqrt Decomposition work (step by step)?
Idea: Partition array into √n blocks, precompute aggregates.
Steps:
How does a Fenwick Tree work (step by step)?
Idea: Store partial sums in tree structure using LSB (least significant bit).
Steps:
How does the Extended Euclidean Algorithm work (step by step)?
Idea: Find gcd(a,b) + coefficients x,y such that ax+by=gcd.
Steps:
How do you compute a modular inverse (step by step)?
Method 1 (if the modulus m is prime): Fermat's little theorem gives a⁻¹ ≡ a^(m−2) mod m.
Method 2: Extended Euclidean algorithm; from ax + my = 1, x mod m is the inverse (works whenever gcd(a, m) = 1).
How does Reservoir Sampling work (step by step)?
Idea: Pick k random samples from a stream of unknown length.
Steps:
How does Fisher–Yates Shuffle work (step by step)?
Idea: Generate uniform random permutation.
Steps:
How does the Greatest Common Divisor (Euclid’s algorithm) work (step by step)?
Idea: gcd(a,b) = gcd(b, a mod b).
Steps:
How does the Least Common Multiple (LCM) work (step by step)?
Idea: lcm(a,b) * gcd(a,b) = a * b.
Steps:
How does Modular Exponentiation (Fast Pow) work (step by step)?
Idea: exponentiation by squaring.
Steps:
How does the Miller–Rabin primality test work (step by step)?
Idea: randomized primality test based on Fermat’s little theorem.
Steps:
How does the Chinese Remainder Theorem (CRT) work (step by step)?
Idea: Solve system of congruences with coprime moduli.
Steps:
How does Union by Rank/Size improve DSU (Union-Find)?
Idea: attach smaller tree under larger to limit height.
Steps:
How does DSU with Rollback work (step by step)?
Idea: allow undo of union operations (useful in offline queries).
Steps:
How does a Persistent Segment Tree work (step by step)?
Idea: create new versions of tree on updates without destroying old ones.
Steps:
How does Segment Tree with Lazy Propagation differ from Fenwick Tree?
Answer:
How does Sparse Table (ST) work (step by step)?
Idea: preprocess for idempotent queries (min, max, gcd).
Steps:
How does Modular Multiplication without overflow work (step by step)?
Idea: compute (a·b) mod m safely for large a,b.
Steps:
How does Randomized QuickSort avoid worst-case?
Idea: choose pivot randomly.
Steps:
How does Reservoir Sampling select 1 item uniformly from a stream?
Steps:
How does an AVL Tree work (step by step)?
Idea: Self-balancing BST where balance factor (height(left)–height(right)) ∈ {-1,0,1}.
Steps:
How does a Red-Black Tree work (step by step)?
Idea: Self-balancing BST with coloring rules ensuring ~log n height.
Properties:
How does a Splay Tree work (step by step)?
Idea: BST that “splays” accessed nodes to root with rotations → fast for repeated accesses.
Steps:
How does a Treap (Tree + Heap) work (step by step)?
Idea: BST on keys + heap on random priorities.
Steps:
How does a Skip List work (step by step)?
Idea: Multiple levels of linked lists with “express lanes.”
Steps:
How does a Trie work (step by step)?
Idea: Tree storing characters of strings.
Steps:
How does a Compressed Trie (Radix Tree) differ?
Answer: Combines chains of single children into one edge with substring → saves space, faster traversals.
How does a Suffix Trie / Automaton work (step by step)?
Idea: Stores all suffixes of a string, automaton compresses states.
Steps:
How does a Binary Heap work (step by step)?
Idea: Complete binary tree with heap property (parent ≤ children).
Steps:
How does a Binomial Heap work (step by step)?
Idea: Forest of binomial trees where order encodes size.
Steps:
How does a Fibonacci Heap work (step by step)?
Idea: Heap with relaxed structure → faster amortized ops.
Steps:
How does a Hash Table with Chaining work (step by step)?
Idea: Array of buckets, collisions stored in linked lists.
Steps:
How does Open Addressing (Linear Probing) work (step by step)?
Idea: Resolve collisions by probing next slots.
Steps:
How does Double Hashing work (step by step)?
Idea: Use second hash function for probe step.
Steps:
How does a Persistent Array work (step by step)?
Idea: Keep versions of array after updates.
Steps:
How does the Naïve String Search algorithm work (step by step)?
Idea: Check the pattern at every possible position.
Steps:
How does the Knuth-Morris-Pratt (KMP) algorithm work (step by step)?
Idea: Use prefix function (LPS) to skip redundant comparisons.
Steps:
How does the Rabin–Karp algorithm work (step by step)?
Idea: Use rolling hash for fast comparisons.
Steps:
How does the Z-algorithm for pattern matching work (step by step)?
Idea: Compute Z[i] = length of longest substring starting at i matching prefix.
Steps:
How does the Prefix Function (π array) work (step by step)?
Idea: For each index, longest prefix also suffix of substring.
Steps:
How does the Z-function work (step by step)?
Idea: Z[i] = length of longest prefix starting at i.
Steps:
How does a Suffix Array work (step by step)?
Idea: Array of all suffixes sorted lexicographically.
Steps:
How does a Suffix Tree work (step by step)?
Idea: Compressed trie of all suffixes.
Steps:
How does a Suffix Automaton (SAM) work (step by step)?
Idea: Minimal DFA accepting all suffixes of a string.
Steps:
How does Manacher’s Algorithm work (step by step)?
Idea: Find longest palindrome centered at each position in O(n).
Steps:
How does Polynomial Rolling Hash work (step by step)?
Idea: Represent string as polynomial in base p, mod M.
Steps:
How does Huffman Coding work (step by step)?
Idea: Variable-length codes based on frequency.
Steps: