Flashcards in this deck (238)
  • What is the time complexity of accessing an element in an array?

    O(1) ✓

    O(n)

    O(n^2)

    O(log n)

    algorithms data_structures
  • What data structure uses FIFO (First In First Out) principle?

    Stack

    Linked List

    Array

    Queue ✓

    algorithms data_structures
  • Which algorithm is used to find the shortest path in a graph?

    Dijkstra's Algorithm ✓

    Binary Search

    Bubble Sort

    Merge Sort

    algorithms graphs
  • What is the worst-case time complexity of Quick Sort?

    O(n)

    O(n^2) ✓

    O(n log n)

    O(log n)

    algorithms sorting
  • Which data structure is used for implementing recursion?

    Linked List

    Queue

    Stack ✓

    Array

    algorithms data_structures
  • What is the space complexity of an array?

    O(log n)

    O(n) ✓

    O(n^2)

    O(1)

    algorithms data_structures
  • Which algorithm is used to traverse trees?

    Dijkstra's Algorithm

    Binary Search

    Depth-First Search ✓ (BFS also traverses trees; the non-traversal distractors are Dijkstra and Binary Search)

    Breadth-First Search

    algorithms trees
  • What is a binary tree?

    A tree where each node has three children

    A tree where each node has at most two children ✓

    A tree with no children

    A linear structure

    algorithms trees
  • What is the average case time complexity of Merge Sort?

    O(n)

    O(n^2)

    O(log n)

    O(n log n) ✓

    algorithms sorting
  • What data structure is used to implement a priority queue?

    Heap ✓

    Stack

    Array

    Linked List

    algorithms data_structures
  • What is the time complexity of QuickSort?

    Average: O(n log n)

    Worst: O(n²) if pivot choices are poor

    Space: O(log n) (recursive stack)

  • What is the time complexity of accessing an element in an array?

    O(1), because arrays use contiguous memory and indexing is direct.

  • What is the time complexity of inserting at the beginning of an array?

    O(n), since all elements must shift.

  • What is the time complexity of searching for an element in an unsorted array?

    O(n).

  • What’s a sliding window technique, and when is it used?

    It maintains a moving subset of elements within an array/string to optimize problems involving subarrays or substrings (e.g., maximum sum subarray, longest substring without repeats).

  • What is the difference between singly and doubly linked lists?

    Singly: Node has next pointer only (less memory, one-way traversal).

    Doubly: Node has prev and next (more memory, but easier insert/delete both ends).

  • Time complexity of inserting at the head of a linked list?

    O(1).

  • Time complexity of accessing the i-th element in a linked list?

    O(n), requires traversal.

  • What’s the benefit of linked lists over arrays?

    Dynamic memory allocation and fast insertions/deletions (no shifting required).

  • What’s the difference between a stack and a queue?

    Stack: LIFO (Last In, First Out).

    Queue: FIFO (First In, First Out).

  • Time complexity of push/pop in a stack?

    O(1).

  • How do you implement a queue with two stacks?

    Use one stack for incoming elements, another for outgoing. Transfer when needed.
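    The transfer trick can be sketched in Python (class name illustrative):

```python
class QueueWithTwoStacks:
    """FIFO queue built from two LIFO stacks (Python lists)."""

    def __init__(self):
        self._in = []    # receives enqueued elements
        self._out = []   # serves dequeues in FIFO order

    def enqueue(self, x):
        self._in.append(x)                 # O(1)

    def dequeue(self):
        if not self._out:
            # Transferring reverses the order, turning LIFO into FIFO.
            while self._in:
                self._out.append(self._in.pop())
        return self._out.pop()             # amortized O(1)
```

    Each element is moved at most twice, so a sequence of n operations costs O(n) total.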

  • What’s a deque, and what’s its advantage?

    A double-ended queue allows O(1) insertion/deletion at both ends.

  • What is the average time complexity of searching in a hash table?

    O(1) average, O(n) worst case (if many collisions).

  • What are common collision resolution techniques?

    Chaining (linked lists), Open Addressing (linear probing, quadratic probing, double hashing).

  • Why are hash tables not suitable for range queries?

    Because they don’t maintain ordering of keys.

  • What is a perfect hash function?

    A hash function with no collisions for a given dataset.

  • What is the height of a balanced binary tree with n nodes?

    O(log n).

  • What’s the difference between BST and AVL tree?

    BST: No balance enforcement, can degrade to O(n).

    AVL: Self-balancing BST, ensures O(log n) operations.

  • What’s a red-black tree?

    A self-balancing BST with color properties ensuring O(log n) height, widely used in language libraries (e.g., TreeMap in Java).

  • What’s a trie, and where is it used?

    A prefix tree where each node represents a character. Used in autocomplete, spell check, and IP routing.

  • What are the common graph representations?

    Adjacency list (O(V+E) space, efficient for sparse graphs) and adjacency matrix (O(V²) space, efficient for dense graphs).

  • Time complexity of BFS and DFS?

    O(V + E), where V = vertices, E = edges.

  • What’s Dijkstra’s algorithm?

    Shortest path algorithm for weighted graphs (non-negative weights). Complexity: O((V+E) log V) with a priority queue.

  • What is topological sort, and when is it used?

    Ordering of DAG nodes such that for every edge u → v, u comes before v. Used in scheduling, build systems, dependency resolution.

  • What is the time complexity of inserting into a binary heap?

    O(log n).

  • What’s the difference between min-heap and max-heap?

    Min-heap: Root is smallest element.

    Max-heap: Root is largest element.

  • How is a priority queue implemented?

    Usually with a heap (binary, Fibonacci, binomial).

  • What is heapify, and its complexity?

    Operation to turn an array into a heap. Complexity: O(n).

  • What is a segment tree?

    A tree used for range queries (e.g., sum, min, max). Time complexity: O(log n) for query/update.

  • What is a Fenwick Tree (Binary Indexed Tree)?

    A data structure for efficient prefix sums. Complexity: O(log n) for update/query.

  • What is a Bloom filter?

    A probabilistic structure for set membership. Very space efficient, allows false positives but no false negatives.

  • What is a Disjoint Set (Union-Find)?

    A structure to track disjoint sets with union and find operations, optimized with path compression and union by rank.

  • What is divide and conquer, and give an example?

    Breaks a problem into smaller subproblems, solves recursively, and combines results. Example: Merge Sort (O(n log n)).

  • Difference between greedy and dynamic programming?

    Greedy: Makes locally optimal choices (e.g., activity selection).

    DP: Stores intermediate results to optimize overlapping subproblems (e.g., knapsack).

  • What’s the Master Theorem used for?

    Analyzing time complexity of divide-and-conquer recurrences like T(n) = aT(n/b) + f(n).

  • What’s memoization vs tabulation in DP?

    Memoization: Top-down recursion with cache.

    Tabulation: Bottom-up iterative table filling.

  • Why is MergeSort stable but QuickSort is not?

    MergeSort preserves relative order of equal elements (due to merge step). QuickSort may swap equal elements arbitrarily.

  • What’s the best sorting algorithm for nearly sorted data?

    Insertion Sort → O(n) if array is almost sorted.

  • Why is HeapSort less cache-friendly than QuickSort?

    HeapSort accesses memory non-sequentially (tree jumps), while QuickSort benefits from cache locality in partitioning.

  • Time complexity of Binary Search?

    O(log n) for search, only works on sorted arrays.

  • What if array is rotated (e.g., [4,5,6,1,2,3])?

    Modified Binary Search → O(log n).

  • What is exponential search?

    Finds range exponentially, then applies binary search. Used for unbounded or very large arrays.

  • How do you search efficiently in a sorted matrix?

    Start from top-right (or bottom-left), eliminate row/col each step → O(m+n).

  • What’s the difference between Big-O, Big-Theta, and Big-Omega?

    Big-O: Upper bound on growth (commonly quoted for the worst case).

    Big-Theta: Tight bound (both an upper and a lower bound).

    Big-Omega: Lower bound (commonly quoted for the best case).

  • What’s the difference between NP, NP-hard, and NP-complete?

    NP: Verifiable in polynomial time.

    NP-hard: At least as hard as every problem in NP; not necessarily in NP itself.

    NP-complete: In NP and NP-hard.

  • What is amortized analysis?

    Average time per operation over a sequence. Example: Dynamic array resizing (O(1) amortized insert).

  • Why is comparison-based sorting limited to O(n log n)?

    Because any comparison sort must distinguish between n! permutations, requiring at least log₂(n!) = Θ(n log n) comparisons.

  • What’s the difference between Prim’s and Kruskal’s algorithm?

    Both build Minimum Spanning Tree (MST).

    • Prim’s: Expands from a start node (uses priority queue).
    • Kruskal’s: Sorts edges, uses Union-Find for cycle detection.

  • When would you use Bellman-Ford instead of Dijkstra?

    When graph has negative edge weights. Complexity O(VE).

  • What is Floyd-Warshall algorithm used for?

    All-pairs shortest paths in weighted graphs. Complexity: O(V³).

  • What is A* search?

    Pathfinding algorithm using f(n) = g(n) + h(n), where g = cost so far, h = heuristic estimate. Used in maps/games.

  • What is a skip list?

    A probabilistic data structure with multiple linked list levels for O(log n) search/insert/delete.

  • What is a B-Tree?

    A balanced search tree optimized for disk access. Used in databases and file systems.

  • Difference between B-Tree and B+ Tree?

    B-Tree: Keys and values stored in all nodes.

    B+ Tree: Values only in leaf nodes, internal nodes store keys for indexing.

  • What is an LRU cache, and how is it implemented?

    Least Recently Used cache → evicts oldest item. Implemented using a HashMap + Doubly Linked List (O(1) get/put).
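    A minimal Python sketch; OrderedDict stands in for the HashMap + doubly linked list pair (it provides exactly those O(1) operations):

```python
from collections import OrderedDict

class LRUCache:
    """LRU cache: OrderedDict keeps keys in recency order."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.data = OrderedDict()

    def get(self, key):
        if key not in self.data:
            return -1
        self.data.move_to_end(key)         # mark as most recently used
        return self.data[key]

    def put(self, key, value):
        if key in self.data:
            self.data.move_to_end(key)
        self.data[key] = value
        if len(self.data) > self.capacity:
            self.data.popitem(last=False)  # evict least recently used
```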

  • What if your dataset doesn’t fit in memory for sorting?

    Use external merge sort (divide into chunks, sort in memory, merge chunks on disk).

  • What if the array is almost sorted (few inversions)?

    Use Insertion Sort or Timsort (used in Python/Java), which adapts to data.

  • What if you need the k-th smallest element without full sorting?

    Use Quickselect (average O(n)).

  • What if you need to check duplicates in a huge dataset that can’t fit in memory?

    Use Bloom Filter (space-efficient, probabilistic) or Hashing with disk spillover.

  • What is the two-pointer technique?

    Using two indices that move across the array (same or opposite directions) to solve problems in linear time, often for searching pairs or subarrays.

  • Example of two-pointer usage in sorted arrays?

    “Two Sum II” → Given a sorted array, find two numbers that sum to a target. Move left/right pointers inward until sum is found. O(n).
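    A minimal sketch of the inward-moving pointers (returns 0-based indices; name illustrative):

```python
def two_sum_sorted(nums, target):
    """Find (i, j) with nums[i] + nums[j] == target in a sorted list."""
    i, j = 0, len(nums) - 1
    while i < j:
        s = nums[i] + nums[j]
        if s == target:
            return i, j
        if s < target:
            i += 1   # need a larger sum: advance left pointer
        else:
            j -= 1   # need a smaller sum: retreat right pointer
    return None      # no such pair
```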

  • Why is two pointers better than nested loops?

    Reduces O(n²) brute force to O(n) or O(n log n), leveraging array ordering.

  • What is sliding window technique?

    Keep a moving subarray (window) of fixed or variable size to optimize substring/subarray problems (e.g., max sum, unique chars).

  • Example of fixed-size sliding window?

    Maximum sum of k consecutive elements → maintain running sum, update in O(1) per shift.

  • Example of variable-size sliding window?

    Longest substring without repeating characters → expand right, shrink left if duplicate found. O(n).
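    The expand-right/shrink-left idea can be sketched as:

```python
def longest_unique_substring(s):
    """Length of the longest substring without repeating characters. O(n)."""
    last_seen = {}       # char -> most recent index
    left = best = 0
    for right, ch in enumerate(s):
        if ch in last_seen and last_seen[ch] >= left:
            left = last_seen[ch] + 1      # jump window past the duplicate
        last_seen[ch] = right
        best = max(best, right - left + 1)
    return best
```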

  • What’s the purpose of Union-Find?

    Efficiently manage partitions of a set and answer connectivity queries (e.g., are x and y connected?).

  • Optimizations in Union-Find?

    Path compression (flatten trees) and union by rank/size (attach smaller tree under larger). Amortized O(α(n)) ≈ O(1).

  • Example problem solved with Union-Find?

    Detect cycles in an undirected graph.

  • Applications of Union-Find in real systems?

    Network connectivity, social networks (“friend groups”), Kruskal’s MST.

  • What is a greedy algorithm?

    An algorithm that makes the locally optimal choice at each step, hoping to reach a global optimum.

  • Classic example of greedy?

    Interval scheduling / activity selection → always pick the next activity with earliest finish time.

  • When does greedy fail?

    When local optimum doesn’t lead to global optimum (e.g., 0/1 knapsack problem).

  • When does greedy work?

    When the problem has optimal substructure + greedy choice property (e.g., Huffman coding, MST, Dijkstra).

  • What is dynamic programming?

    Optimization technique that breaks problems into overlapping subproblems and stores results to avoid recomputation.

  • Difference between top-down (memoization) and bottom-up (tabulation)?

    Top-down: Recursive + cache.

    Bottom-up: Iterative + table.

  • Example of DP problem?

    Fibonacci numbers: naive recursion O(2ⁿ), with DP → O(n).

  • Example of classic DP interview questions?

    Coin Change (min coins to make value).

    Longest Common Subsequence.

    0/1 Knapsack.

    Edit Distance (Levenshtein).

    Matrix Path (min cost path).

  • What if array contains negative numbers in “max subarray sum” (Kadane’s algorithm)?

    Kadane still works → tracks max ending here, max so far. Complexity O(n).

  • What if graph edges are negative in shortest path?

    Dijkstra fails → must use Bellman-Ford (O(VE)).

  • What if the DP state space is too large to store?

    Optimize with state compression (bitmask, rolling arrays) or meet-in-the-middle.

  • What if recursion depth causes stack overflow in DP?

    Convert to bottom-up tabulation or iterative stack.

  • What’s the difference between stable and unstable sorting?

    Stable sorts keep equal elements in the same relative order (MergeSort, InsertionSort). Unstable sorts may change order (QuickSort, HeapSort).

  • How do you find the majority element (> n/2 times) in an array?

    Use Boyer-Moore Voting Algorithm → O(n) time, O(1) space.
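    A minimal sketch of Boyer-Moore voting (assumes a majority element exists, as the problem states):

```python
def majority_element(nums):
    """Element appearing more than n/2 times. O(n) time, O(1) space."""
    candidate, count = None, 0
    for x in nums:
        if count == 0:
            candidate = x                  # adopt a new candidate
        count += 1 if x == candidate else -1
    return candidate
```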

  • How do you find the k-th largest element in an array?

    Quickselect: O(n) average.

    Heap of size k: O(n log k).

  • How do you find duplicates in O(1) extra space?

    Modify array values (marking indices negative) or use cycle detection (Floyd’s Tortoise & Hare).

  • What is the Knuth-Morris-Pratt (KMP) algorithm?

    String search algorithm using prefix function to avoid redundant comparisons. Complexity O(n + m).

  • What is Rabin-Karp algorithm?

    String matching using rolling hash. Useful for multiple pattern matching. Average O(n+m), worst O(nm).

  • What is a suffix array?

    An array of all suffixes of a string sorted lexicographically. Used in substring search, LCP queries, bioinformatics.

  • What is a trie, and how is it applied in string problems?

    A prefix tree storing characters along paths. Used in autocomplete, dictionary search, IP routing.

  • What’s the difference between 0/1 knapsack and unbounded knapsack?

    0/1: Each item can be chosen at most once.

    Unbounded: Items can be chosen unlimited times.

  • How do you solve the Longest Common Subsequence (LCS)?

    DP: dp[i][j] = 1 + dp[i-1][j-1] if match else max(dp[i-1][j], dp[i][j-1]). Complexity O(mn).

  • What’s the difference between LCS and Longest Common Substring?

    LCS: Characters must appear in order, not necessarily contiguous.

    Substring: Must be contiguous.

  • How do you solve the Edit Distance problem?

    DP: dp[i][j] = min(insert, delete, replace). Complexity O(mn).

  • How do you detect a cycle in a directed graph?

    Use DFS with recursion stack or Kahn’s algorithm (topological sort).

  • How do you detect a cycle in an undirected graph?

    Use Union-Find or DFS parent tracking.

  • What is Tarjan’s algorithm used for?

    Finding strongly connected components (SCCs) in a directed graph. Complexity O(V+E).

  • What is a bipartite graph and how do you check it?

    A graph whose nodes can be divided into two sets with no intra-set edges. Check with BFS coloring.

  • How do you find the median of a data stream?

    Maintain two heaps: max-heap for lower half, min-heap for upper half. Median is root(s).

  • How do you merge k sorted lists?

    Use a min-heap of size k → repeatedly extract min. Complexity O(n log k).
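    With Python lists standing in for the linked lists, the heap-of-size-k approach looks like:

```python
import heapq

def merge_k_sorted(lists):
    """Merge k sorted lists with a min-heap of size <= k. O(n log k)."""
    # Seed the heap with the head of each non-empty list; the index i
    # breaks ties so tuples stay comparable.
    heap = [(lst[0], i, 0) for i, lst in enumerate(lists) if lst]
    heapq.heapify(heap)
    out = []
    while heap:
        val, i, j = heapq.heappop(heap)    # smallest current head
        out.append(val)
        if j + 1 < len(lists[i]):
            heapq.heappush(heap, (lists[i][j + 1], i, j + 1))
    return out
```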

  • How do you implement a sliding window maximum?

    Use a deque to store candidates, O(n).

  • How do you implement Dijkstra’s algorithm efficiently?

    Use a priority queue (min-heap) for selecting next closest node.

  • What is amortized O(1)? Give an example.

    Average cost per operation is constant over a sequence (e.g., dynamic array resizing, push in stack).

  • What is divide & conquer with an example?

    Break into subproblems → solve recursively → combine. Example: MergeSort.

  • What is a flow network and max-flow problem?

    A directed graph with capacities; max-flow finds the maximum possible flow from source to sink. Ford-Fulkerson algorithm solves it.

  • What is the difference between greedy vs DP approaches?

    Greedy: Local optimal choice → global optimal (works only in certain problems like MST).

    DP: Stores overlapping subproblem solutions (works broadly, slower).

  • How does MergeSort work and why is it O(n log n)?

    MergeSort uses divide & conquer: it splits the array into halves recursively until single elements remain, then merges them in sorted order. Each merge step takes O(n), and since the array is split log n times, total = O(n log n). It’s stable and good for linked lists but requires O(n) extra space.

  • How does QuickSort work and why can it degrade to O(n²)?

    QuickSort picks a pivot, partitions the array into elements ≤ pivot and > pivot, then recurses. With good pivot choices (random/median-of-three), average = O(n log n). Worst case = O(n²) when the pivot is always the min/max, e.g., a first-element pivot on an already-sorted array. It’s in-place, so very fast in practice.

  • Why is HeapSort O(n log n) and how does heapify work?

    HeapSort builds a max-heap (O(n)) and repeatedly extracts max element (O(log n)) for n elements → O(n log n). Heapify “sifts down” nodes to maintain heap property. Unlike MergeSort, it’s in-place but not stable.

  • Why is Binary Search O(log n)?

    Each step halves the search space by comparing mid element to target. After k steps, only n / 2ᵏ elements remain. Solving n / 2ᵏ = 1 → k = log₂ n. Hence O(log n).

  • How does Exponential Search work?

    First, find a range where the element could be by checking indices 1, 2, 4, 8… until overshoot. Then apply Binary Search in that range. Useful for infinite/unknown-length arrays. Complexity O(log n).

  • How does KMP string matching achieve O(n+m)?

    KMP preprocesses the pattern to build an LPS (longest proper prefix which is also suffix) table. When mismatch occurs, it skips characters based on LPS instead of starting over. This prevents redundant comparisons, so total complexity is linear in text+pattern.

  • How does Rabin-Karp detect substring matches efficiently?

    Uses rolling hash: computes hash of pattern and sliding windows of text. If hash matches, then check substring. Hash update O(1) per shift. Average O(n+m), but worst O(nm) if many collisions. Used in plagiarism detection, multi-pattern search.

  • How does Kadane’s algorithm find maximum subarray sum in O(n)?

    It iterates once, maintaining two values: max_ending_here (best sum ending at current index) and max_so_far (global max). If max_ending_here < 0, reset to current element. This ensures we only extend subarrays that increase sum.

  • How does Edit Distance (Levenshtein) DP work?

    Define dp[i][j] as min edits to convert first i chars of word1 to first j of word2.

    • If last chars match: dp[i][j] = dp[i-1][j-1].
    • Else: 1 + min(insert, delete, replace).
    • Time O(mn), space O(mn) but can be optimized to O(min(m,n)).

  • How does Coin Change (minimum coins) DP work?

    Define dp[amount] = min coins to make amount.

    • dp[0] = 0.
    • For each coin, update dp[x] = min(dp[x], dp[x-coin]+1).
    • This builds bottom-up. Complexity O(n*amount).

  • How does BFS guarantee shortest path in an unweighted graph?

    BFS explores level by level. The first time we reach a node, it must be via the shortest path (fewest edges). Hence BFS gives shortest path in O(V+E).

  • Why does Dijkstra’s algorithm fail with negative weights?

    Dijkstra assumes once a node is “settled” with shortest distance, it won’t be updated. With negative weights, this assumption breaks (a later path could improve it). Bellman-Ford handles negatives because it relaxes all edges repeatedly.

  • How does Union-Find detect cycles?

    For each edge (u,v), check if u and v already belong to the same set (find). If yes → cycle. If not, union them. Path compression + union by rank make operations nearly O(1).
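    A minimal sketch of cycle detection with Union-Find (path compression shown; union by rank omitted for brevity):

```python
def has_cycle(n, edges):
    """Detect a cycle in an undirected graph on nodes 0..n-1."""
    parent = list(range(n))

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path compression (halving)
            x = parent[x]
        return x

    for u, v in edges:
        ru, rv = find(u), find(v)
        if ru == rv:
            return True                    # endpoints already connected
        parent[ru] = rv                    # union the two sets
    return False
```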

  • How does the two-heap method maintain median of a stream?

    Maintain a max-heap of lower half and min-heap of upper half. Balance sizes so they differ by at most 1. Median = root(s). Each insert O(log n), query median O(1).
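    A minimal sketch; Python's heapq is a min-heap, so the lower half is stored negated to simulate a max-heap:

```python
import heapq

class StreamMedian:
    """Running median via two heaps balanced to within one element."""

    def __init__(self):
        self.lo = []   # max-heap of lower half (stored negated)
        self.hi = []   # min-heap of upper half

    def add(self, x):
        heapq.heappush(self.lo, -x)
        # Move the largest of the lower half up, then rebalance sizes.
        heapq.heappush(self.hi, -heapq.heappop(self.lo))
        if len(self.hi) > len(self.lo):
            heapq.heappush(self.lo, -heapq.heappop(self.hi))

    def median(self):
        if len(self.lo) > len(self.hi):
            return -self.lo[0]
        return (-self.lo[0] + self.hi[0]) / 2
```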

  • How does sliding window maximum algorithm achieve O(n)?

    Use a deque storing indices. Maintain decreasing order inside deque. At each step, remove out-of-window indices and add new element while popping smaller ones. Front of deque always max.

  • How does Floyd-Warshall find all-pairs shortest paths?

    Iteratively improves path estimates using dynamic programming. For each k (intermediate node), update dp[i][j] = min(dp[i][j], dp[i][k] + dp[k][j]). Time O(V³), works with negatives (no negative cycles).

  • How does A* pathfinding improve on Dijkstra?

    Adds heuristic h(n) (estimate to goal). f(n) = g(n) + h(n). Guides search toward goal instead of exploring blindly. If h is admissible (never overestimates), A* is optimal.

  • How does Ford-Fulkerson algorithm compute max flow?

    While there exists an augmenting path (via BFS/DFS), push flow along it and update residual capacities. Repeats until no path exists. Complexity O(E*max_flow) or O(VE²) with Edmonds-Karp.

  • What is MergeSort and how does it work?

    MergeSort is a divide & conquer algorithm. It splits the array into halves recursively until one element remains, then merges sorted halves. Stable, good for linked lists.

    • Time: O(n log n)
    • Space: O(n)

  • What is QuickSort and how does it work?

    QuickSort picks a pivot, partitions the array into smaller/larger, and recurses. Very cache-friendly and fast in practice. Worst-case when pivot is always poor.

    • Avg: O(n log n)
    • Worst: O(n²)
    • Space: O(log n)

  • What is HeapSort and how does it work?

    HeapSort builds a max-heap, then repeatedly extracts the max and rebuilds heap. In-place but not stable.

    • Time: O(n log n)
    • Space: O(1)

  • What is Counting Sort and how does it work?

    Counting Sort counts frequency of elements and reconstructs sorted output. Works only with integers in small range.

    • Time: O(n + k), k = range
    • Space: O(k)

  • What is Binary Search and how does it work?

    Binary Search repeatedly halves a sorted array until target is found or interval empty.

    • Time: O(log n)
    • Space: O(1)

  • What is Exponential Search and how does it work?

    Exponential Search finds range by checking 1, 2, 4, 8… then applies Binary Search. Useful for infinite/unknown arrays.

    • Time: O(log n)
    • Space: O(1)

  • What is KMP (Knuth-Morris-Pratt) and how does it work?

    Preprocesses pattern into an LPS table. When mismatch occurs, skips ahead instead of restarting.

    • Time: O(n+m)
    • Space: O(m)

  • What is Rabin-Karp and how does it work?

    Uses rolling hash for substring matching. If hash matches, confirm with direct compare.

    • Avg: O(n+m)
    • Worst: O(nm)

  • What is a Trie and how does it work?

    Tree-based structure where each path stores a string prefix. Used in autocomplete and dictionaries.

    • Insert/Search: O(m), m = string length

  • What is Kadane’s Algorithm and how does it work?

    Finds maximum subarray sum. Iterates maintaining max_ending_here and max_so_far.

    • Time: O(n)
    • Space: O(1)

  • What is Edit Distance (Levenshtein) and how does it work?

    DP table compares substrings with insert, delete, replace operations.

    • Time: O(mn)
    • Space: O(mn)

  • What is Coin Change (Min Coins) and how does it work?

    Bottom-up DP: dp[x] = min(dp[x], dp[x-coin]+1).

    • Time: O(n*amount)
    • Space: O(amount)

  • What is Longest Common Subsequence (LCS) and how does it work?

    DP: If chars match, 1+dp[i-1][j-1], else max(dp[i-1][j], dp[i][j-1]).

    • Time: O(mn)
    • Space: O(mn)

  • What is BFS and how does it work?

    Traverses level by level using queue. Guarantees shortest path in unweighted graphs.

    • Time: O(V+E)
    • Space: O(V)

  • What is DFS and how does it work?

    Traverses graph by going deep via recursion/stack before backtracking. Used in cycle detection, components.

    • Time: O(V+E)
    • Space: O(V)

  • What is Dijkstra’s Algorithm and how does it work?

    Uses priority queue to always expand nearest node, updating distances. Works for non-negative weights.

    • Time: O((V+E) log V)
    • Space: O(V)

  • What is Bellman-Ford and how does it work?

    Relaxes all edges V-1 times, detects negative cycles.

    • Time: O(VE)
    • Space: O(V)

  • What is Floyd-Warshall and how does it work?

    DP that updates all-pairs shortest path with intermediates.

    • Time: O(V³)
    • Space: O(V²)

  • What is Kruskal’s Algorithm and how does it work?

    Sort edges by weight, union-find to build MST.

    • Time: O(E log E)
    • Space: O(V)

  • What is Prim’s Algorithm and how does it work?

    Grows MST from a start node, always adding min edge crossing cut.

    • Time: O(E log V) with PQ
    • Space: O(V)

  • What is a Heap and how does it work?

    A binary tree where parent ≤ children (min-heap). Insert = bubble up, extract = bubble down.

    • Insert/Extract: O(log n)
    • Find min/max: O(1)

  • What is a Segment Tree and how does it work?

    Binary tree storing range aggregates (sum, min, max). Updates/queries in O(log n).

    • Build: O(n)
    • Query/Update: O(log n)

  • What is a Fenwick Tree (BIT) and how does it work?

    Array-based tree storing prefix sums using least significant bit.

    • Update: O(log n)
    • Query prefix sum: O(log n)

  • What is a Bloom Filter and how does it work?

    Probabilistic structure with multiple hash functions. Can say “maybe in set” or “definitely not.” False positives possible.

    • Insert: O(k)
    • Query: O(k)

  • What is Union-Find and how does it work?

    Tracks disjoint sets with find and union. Path compression + union by rank make it nearly O(1). Used in Kruskal’s MST, connectivity problems.

  • How does MergeSort work (step by step)?

    Idea: divide & conquer + merge sorted halves.

    Steps:

    1. Split array in half until subarrays of size 1.
    2. Merge pairs: compare heads of left/right, append smaller, advance pointer.
    3. Continue merging up the recursion.
    4. Complexity: time O(n log n), space O(n), stable.

  • How does QuickSort work (step by step)?

    Idea: partition around a pivot; recurse on partitions.

    Steps:

    1. Choose pivot (random/median-of-three).
    2. Partition: move elements < pivot left, > pivot right.
    3. Recursively quicksort left and right parts.
    4. Base case when subarray size ≤ 1.
    5. Complexity: avg O(n log n), worst O(n²), space O(log n) (stack), not stable.

  • How does HeapSort work (step by step)?

    Idea: build max-heap; repeatedly extract max.

    Steps:

    1. Build max-heap in-place from array (sift-down from ⌊n/2⌋→1).
    2. For i=n→2: swap a[1]↔a[i], shrink heap size, sift-down a[1].
    3. Array ends up sorted ascending.
    4. Complexity: O(n log n) time, O(1) extra space, not stable.

  • How does Counting Sort work (step by step)?

    Idea: count frequencies over small integer range.

    Steps:

    1. Make count array C[0..k] initialized 0.
    2. For each x in input, C[x]++.
    3. Prefix-sum C so C[v] = number ≤ v.
    4. Traverse input backward, place each x at output[C[x]-1], then decrement C[x].
    5. Complexity: O(n+k) time/space, stable if step 4 uses backward pass.
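    The steps above, as a runnable sketch:

```python
def counting_sort(arr, k):
    """Stable counting sort for integers in range [0, k]."""
    count = [0] * (k + 1)
    for x in arr:                          # step 2: tally frequencies
        count[x] += 1
    for v in range(1, k + 1):              # step 3: prefix sums
        count[v] += count[v - 1]
    out = [0] * len(arr)
    for x in reversed(arr):                # step 4: backward pass => stable
        count[x] -= 1
        out[count[x]] = x
    return out
```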

  • How does Binary Search work (step by step)?

    Idea: halve the search interval on a sorted array.

    Steps:

    1. l=0, r=n-1.
    2. While l≤r: m=(l+r)//2.
    3. If a[m]==t return m; if a[m]<t set l=m+1 else r=m-1.
    4. Complexity: O(log n) time, O(1) space.
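    The same steps in Python:

```python
def binary_search(a, target):
    """Index of target in sorted list a, or -1 if absent. O(log n)."""
    lo, hi = 0, len(a) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if a[mid] == target:
            return mid
        if a[mid] < target:
            lo = mid + 1   # target is in the right half
        else:
            hi = mid - 1   # target is in the left half
    return -1
```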

  • How does Exponential Search work (step by step)?

    Idea: find a bound exponentially, then binary search.

    Steps:

    1. If a[0]==t return 0.
    2. i=1; while i<n and a[i]≤t: i*=2.
    3. Binary search in range [i/2, min(i,n-1)].
    4. Complexity: O(log n) time, O(1) space.

  • How does Quickselect (k-th smallest) work (step by step)?

    Idea: QuickSort’s partition but recurse into one side.

    Steps:

    1. Choose pivot; partition array.
    2. If pivot_index==k return pivot.
    3. If k<pivot_index, repeat on left; else repeat on right.
    4. Complexity: avg O(n), worst O(n²), O(1) extra space + recursion stack.
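    A sketch with a random pivot and a three-way partition (which also handles duplicates cleanly); k is 0-based:

```python
import random

def quickselect(a, k):
    """k-th smallest element (0-based), expected O(n). Works on a copy."""
    a = list(a)
    lo, hi = 0, len(a) - 1
    while True:
        pivot = a[random.randint(lo, hi)]
        # Dutch-national-flag partition: [<pivot | ==pivot | >pivot]
        lt, i, gt = lo, lo, hi
        while i <= gt:
            if a[i] < pivot:
                a[lt], a[i] = a[i], a[lt]; lt += 1; i += 1
            elif a[i] > pivot:
                a[i], a[gt] = a[gt], a[i]; gt -= 1
            else:
                i += 1
        if k < lt:
            hi = lt - 1        # answer lies left of the pivot block
        elif k > gt:
            lo = gt + 1        # answer lies right of the pivot block
        else:
            return pivot       # k falls inside the pivot block
```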

  • How does KMP (Knuth-Morris-Pratt) string search work (step by step)?

    Idea: skip redundant comparisons using LPS (prefix function).

    Steps:

    1. Build LPS for pattern p in O(m).
    2. Scan text t with i over t, j over p.
    3. On match: i++, j++; on mismatch: if j>0 set j=LPS[j-1], else i++.
    4. When j==m, report match and set j=LPS[j-1].
    5. Complexity: O(n+m) time, O(m) space.
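    Both phases (LPS construction, then scanning) as one sketch:

```python
def kmp_search(text, pat):
    """Start indices of all occurrences of pat in text. O(n+m)."""
    # Build LPS: lps[j] = length of the longest proper prefix of
    # pat[:j+1] that is also a suffix of it.
    lps = [0] * len(pat)
    k = 0
    for j in range(1, len(pat)):
        while k and pat[j] != pat[k]:
            k = lps[k - 1]                 # fall back
        if pat[j] == pat[k]:
            k += 1
        lps[j] = k
    # Scan the text; on mismatch, fall back via LPS instead of restarting.
    hits, j = [], 0
    for i, c in enumerate(text):
        while j and c != pat[j]:
            j = lps[j - 1]
        if c == pat[j]:
            j += 1
        if j == len(pat):
            hits.append(i - j + 1)
            j = lps[j - 1]                 # continue for overlapping matches
    return hits
```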

  • How does Rabin–Karp work (step by step)?

    Idea: rolling hash to compare windows quickly.

    Steps:

    1. Compute hash(pattern) and hash(text[0..m-1]).
    2. For each window i: if hashes equal, verify by direct compare.
    3. Roll hash to next window in O(1).
    4. Complexity: avg O(n+m), worst O(nm) (collisions), O(1) extra (excluding text).

  • How do Trie insert/search work (step by step)?

    Idea: path per character.

    Insert:

    1. Node = root. For c in word: if child(c) missing, create; move to child(c).
    2. Mark node as terminal.

    Search:

    1. Node = root; for c in word: if child(c) missing → not found; else move.
    2. Return terminal flag at node.

    Complexity: O(m) per op, m=len(word), space proportional to total chars.

  • How does Kadane’s algorithm work (step by step)?

    Idea: best subarray ending here → best overall.

    Steps:

    1. max_ending=a[0], max_so_far=a[0].
    2. For each x from a[1:]: max_ending=max(x, max_ending+x); max_so_far=max(max_so_far, max_ending).
    3. Return max_so_far.
    4. Complexity: O(n) time, O(1) space.
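    The two running values as code:

```python
def max_subarray_sum(a):
    """Kadane: maximum sum over all non-empty contiguous subarrays."""
    max_ending = max_so_far = a[0]
    for x in a[1:]:
        max_ending = max(x, max_ending + x)       # extend or restart here
        max_so_far = max(max_so_far, max_ending)  # best seen so far
    return max_so_far
```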

  • How does Edit Distance (Levenshtein) work (step by step)?

    Idea: DP on prefixes with insert/delete/replace.

    Steps:

    1. dp of size (m+1)×(n+1); dp[i][0]=i, dp[0][j]=j.
    2. For i=1..m, j=1..n:
      • if s[i-1]==t[j-1]: dp[i][j]=dp[i-1][j-1]
      • else: 1+min(dp[i-1][j] (del), dp[i][j-1] (ins), dp[i-1][j-1] (rep))
    3. Answer dp[m][n].
    4. Complexity: O(mn) time, O(mn) space (optimizable to O(min(m,n))).
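    The full-table (unoptimized) version of those steps:

```python
def edit_distance(s, t):
    """Levenshtein distance via DP on prefixes. O(mn) time and space."""
    m, n = len(s), len(t)
    dp = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        dp[i][0] = i                       # delete all of s[:i]
    for j in range(n + 1):
        dp[0][j] = j                       # insert all of t[:j]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            if s[i - 1] == t[j - 1]:
                dp[i][j] = dp[i - 1][j - 1]
            else:
                dp[i][j] = 1 + min(dp[i - 1][j],      # delete
                                   dp[i][j - 1],      # insert
                                   dp[i - 1][j - 1])  # replace
    return dp[m][n]
```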

  • How does Coin Change (min coins) work (step by step)?

    Idea: unbounded knapsack on amount.

    Steps:

    1. dp[0]=0; dp[x]=∞ for 1..amount.
    2. For each coin c: for x=c..amount: dp[x]=min(dp[x], dp[x-c]+1).
    3. Return dp[amount] if finite else -1.
    4. Complexity: O(amount×#coins) time, O(amount) space.
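    The bottom-up table as code:

```python
def min_coins(coins, amount):
    """Fewest coins summing to amount (unbounded supply), or -1."""
    INF = float("inf")
    dp = [0] + [INF] * amount              # dp[x] = min coins making x
    for c in coins:
        for x in range(c, amount + 1):
            if dp[x - c] + 1 < dp[x]:      # use one more coin c
                dp[x] = dp[x - c] + 1
    return dp[amount] if dp[amount] != INF else -1
```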

  • How does LCS (Longest Common Subsequence) work (step by step)?

    Idea: DP on prefixes; match → extend, else pick best skip.

    Steps:

    1. dp (m+1)×(n+1) zeros.
    2. For i=1..m, j=1..n:
      • if s[i-1]==t[j-1]: dp[i][j]=1+dp[i-1][j-1]
      • else: dp[i][j]=max(dp[i-1][j], dp[i][j-1])
    3. dp[m][n] is length; backtrack to recover sequence.
    4. Complexity: O(mn) time, O(mn) space (can do O(min(m,n)) for length only).
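    The length-only DP (backtracking for the actual sequence omitted):

```python
def lcs_length(s, t):
    """Length of the longest common subsequence of s and t. O(mn)."""
    m, n = len(s), len(t)
    dp = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            if s[i - 1] == t[j - 1]:
                dp[i][j] = 1 + dp[i - 1][j - 1]    # match: extend
            else:
                dp[i][j] = max(dp[i - 1][j], dp[i][j - 1])  # skip one char
    return dp[m][n]
```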

  • How does BFS work (step by step)?

    Idea: layer-by-layer traversal via queue.

    Steps:

    1. Mark start visited, enqueue it.
    2. While queue: dequeue u; for neighbors v of u: if not visited → mark, enqueue, record parent/dist.
    3. Distances are shortest (unweighted).
    4. Complexity: O(V+E) time, O(V) space.
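    A sketch returning shortest edge-count distances (adjacency given as a dict):

```python
from collections import deque

def bfs_distances(adj, source):
    """Shortest distances (in edges) from source in an unweighted graph.

    adj: dict mapping node -> list of neighbors.
    """
    dist = {source: 0}
    q = deque([source])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if v not in dist:              # first visit = shortest path
                dist[v] = dist[u] + 1
                q.append(v)
    return dist
```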

  • How does DFS work (step by step)?

    Idea: go deep before backtracking.

    Steps:

    1. For each unvisited node u: call dfs(u).
    2. In dfs(u): mark visited; for each neighbor v not visited → dfs(v).
    3. Use entry/exit times for cycle/component detection.
    4. Complexity: O(V+E) time, O(V) stack/space.

  • How does Dijkstra work (step by step)?

    Idea: greedy expansion by smallest tentative distance.

    Steps:

    1. dist[source]=0; push (0,source) in min-heap.
    2. While heap: pop (d,u); if d>dist[u] continue.
    3. For each (u→v,w): if dist[u]+w<dist[v], update and push.
    4. Repeat until heap empty.
    5. Complexity: O((V+E) log V) with heap; no negative edges.
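
    A minimal Python sketch with a lazy-deletion heap, assuming `graph` maps each node to a list of (neighbor, weight) pairs:

```python
import heapq

def dijkstra(graph, source):
    """Shortest distances from source; weights must be non-negative."""
    dist = {source: 0}
    heap = [(0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue                     # stale entry, already relaxed further
        for v, w in graph.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return dist
```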

  • How does Bellman–Ford work (step by step)?

    Idea: relax all edges repeatedly.

    Steps:

    1. dist[source]=0; others=∞.
    2. Repeat V−1 times: for each edge (u,v,w): dist[v]=min(dist[v], dist[u]+w).
    3. One more pass: if any distance improves → negative cycle.
    4. Complexity: O(VE) time, O(V) space.
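
    The same steps in Python, with the extra pass used as a negative-cycle check (edge-list representation assumed):

```python
def bellman_ford(n, edges, source):
    """edges: list of (u, v, w) over nodes 0..n-1.
    Returns (dist, has_negative_cycle)."""
    INF = float("inf")
    dist = [INF] * n
    dist[source] = 0
    for _ in range(n - 1):               # V-1 rounds of relaxing every edge
        for u, v, w in edges:
            if dist[u] + w < dist[v]:
                dist[v] = dist[u] + w
    # one extra pass: any improvement means a reachable negative cycle
    neg = any(dist[u] + w < dist[v] for u, v, w in edges)
    return dist, neg
```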

  • How does Floyd–Warshall work (step by step)?

    Idea: DP over allowed intermediate nodes.

    Steps:

    1. dp = adjacency matrix (∞ if no edge, 0 on diagonal).
    2. For k=1..V: for i=1..V: for j=1..V: dp[i][j]=min(dp[i][j], dp[i][k]+dp[k][j]).
    3. Negative cycle if dp[i][i]<0.
    4. Complexity: O(V³) time, O(V²) space.

  • How does Kruskal’s MST work (step by step)?

    Idea: add edges from lightest up, avoid cycles (Union–Find).

    Steps:

    1. Sort edges by weight ascending.
    2. Init Union–Find on vertices.
    3. For each edge (u,v,w): if find(u)!=find(v): add to MST; union(u,v).
    4. Stop when MST has V−1 edges.
    5. Complexity: O(E log E) time, O(V) UF space.
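
    A compact Python sketch with an inline Union-Find (edges given as (w, u, v) tuples so sorting by weight is direct):

```python
def kruskal(n, edges):
    """edges: list of (w, u, v) over nodes 0..n-1.
    Returns (total_weight, mst_edges)."""
    parent = list(range(n))

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]   # path halving
            x = parent[x]
        return x

    total, mst = 0, []
    for w, u, v in sorted(edges):           # lightest edge first
        ru, rv = find(u), find(v)
        if ru != rv:                        # different components: no cycle
            parent[ru] = rv
            total += w
            mst.append((u, v, w))
    return total, mst
```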

  • How does Prim’s MST work (step by step)?

    Idea: grow one tree by always picking the lightest crossing edge.

    Steps:

    1. Start from any node; push its edges into min-heap.
    2. Pop smallest edge (u,v,w) crossing the cut; if v not in tree, add and push its outgoing edges.
    3. Repeat until V−1 edges chosen.
    4. Complexity: O(E log V) with heap + adj list.

  • How does Topological Sort (Kahn’s algorithm) work (step by step)?

    Idea: peel nodes with indegree 0.

    Steps:

    1. Compute indegree of each node; enqueue all with 0.
    2. While queue: pop u, append to order; for each edge u→v: indegree[v]--, if 0 enqueue v.
    3. If order has <V nodes → cycle exists.
    4. Complexity: O(V+E) time, O(V) space.
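
    Kahn's algorithm in minimal Python (adjacency dict over nodes 0..n-1, returning None on a cycle):

```python
from collections import deque

def topo_sort(graph, n):
    indeg = [0] * n
    for u in graph:
        for v in graph[u]:
            indeg[v] += 1
    q = deque(u for u in range(n) if indeg[u] == 0)
    order = []
    while q:
        u = q.popleft()
        order.append(u)
        for v in graph.get(u, []):
            indeg[v] -= 1
            if indeg[v] == 0:
                q.append(v)
    return order if len(order) == n else None   # <n nodes peeled => cycle
```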

  • How does Union–Find (Disjoint Set) work (step by step)?

    Idea: parent pointers + path compression + union by rank.

    Steps:

    1. make_set(x): parent[x]=x, rank[x]=0.
    2. find(x): if parent[x]!=x → parent[x]=find(parent[x]); return parent[x].
    3. union(a,b): ra=find(a), rb=find(b); if ra==rb return; attach smaller rank under larger; if ranks equal, increment new root rank.
    4. Complexity: α(n) per op (≈ O(1)).
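
    These three operations fit in a small Python class (class name is illustrative):

```python
class DSU:
    def __init__(self, n):
        self.parent = list(range(n))
        self.rank = [0] * n

    def find(self, x):
        if self.parent[x] != x:
            self.parent[x] = self.find(self.parent[x])  # path compression
        return self.parent[x]

    def union(self, a, b):
        ra, rb = self.find(a), self.find(b)
        if ra == rb:
            return False
        if self.rank[ra] < self.rank[rb]:
            ra, rb = rb, ra
        self.parent[rb] = ra            # attach smaller rank under larger
        if self.rank[ra] == self.rank[rb]:
            self.rank[ra] += 1          # ranks tied: new root grows
        return True
```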

  • How does Sliding Window Maximum (deque) work (step by step)?

    Idea: maintain decreasing deque of indices.

    Steps:

    1. For each index i:
      • Pop front if it has left the window (front index ≤ i−k).
      • Pop back while a[back] ≤ a[i].
      • Push i.
    2. After i≥k−1, window max is a[deque.front()].
    3. Complexity: O(n) time, O(k) space.
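
    The deque trick in Python, returning the max of every window of size k:

```python
from collections import deque

def sliding_window_max(a, k):
    dq = deque()                         # indices; values strictly decreasing
    out = []
    for i, x in enumerate(a):
        if dq and dq[0] <= i - k:
            dq.popleft()                 # front index fell out of the window
        while dq and a[dq[-1]] <= x:
            dq.pop()                     # dominated elements can never be max
        dq.append(i)
        if i >= k - 1:
            out.append(a[dq[0]])
    return out
```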

  • How does Median of Data Stream (two heaps) work (step by step)?

    Idea: max-heap for lower half, min-heap for upper half.

    Steps:

    1. Insert x to max-heap if x ≤ maxLower else to min-heap.
    2. Rebalance so |sizes| ≤ 1 (move root between heaps).
    3. Median = root of larger heap (or average of both roots).
    4. Complexity: insert O(log n), query O(1), space O(n).
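
    A minimal Python sketch; `heapq` only provides min-heaps, so the lower half is stored negated to act as a max-heap:

```python
import heapq

class StreamMedian:
    def __init__(self):
        self.lo = []   # max-heap via negation (lower half)
        self.hi = []   # min-heap (upper half)

    def add(self, x):
        if not self.lo or x <= -self.lo[0]:
            heapq.heappush(self.lo, -x)
        else:
            heapq.heappush(self.hi, x)
        # rebalance so sizes differ by at most 1
        if len(self.lo) > len(self.hi) + 1:
            heapq.heappush(self.hi, -heapq.heappop(self.lo))
        elif len(self.hi) > len(self.lo):
            heapq.heappush(self.lo, -heapq.heappop(self.hi))

    def median(self):
        if len(self.lo) > len(self.hi):
            return -self.lo[0]
        return (-self.lo[0] + self.hi[0]) / 2
```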

  • How does Lazy Propagation in Segment Trees work (step by step)?

    Idea: Delay range updates until necessary → avoid O(n) per update.

    Steps:

    1. Each node stores an aggregate (sum, min, max) for its segment.
    2. A parallel lazy[] array stores pending updates for children.
    3. On range update (l,r,val):
      • If current node segment fully in range: update node, mark lazy for children.
      • If partially overlaps: push pending lazy values down, recurse on children.
    4. On query:
      • Before using a node, apply its lazy value (propagate to children).
    5. Continue until query covers desired range.
    6. Complexity: O(log n) per update/query (instead of O(n)).

  • How does Prefix Sum (accumulation array) work (step by step)?

    Idea: Precompute running sums → answer range queries in O(1).

    Steps:

    1. Build prefix array P where P[i] = A[0] + A[1] + … + A[i].
    2. To query sum(l,r): return P[r] – P[l-1].
    3. Extendable to 2D: P[x][y] = sum of rectangle (0,0) to (x,y).
    4. Complexity: Build O(n), query O(1).

  • How does the Sieve of Eratosthenes work (step by step)?

    Idea: Cross out multiples to find all primes ≤ n.

    Steps:

    1. Create boolean array isPrime[0..n] = true.
    2. Mark 0,1 as false.
    3. For p = 2 → √n: if isPrime[p], mark multiples (p², p²+p, …) false.
    4. Remaining true indices are primes.
    5. Complexity: O(n log log n) time, O(n) space.
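
    The sieve in a few lines of Python (assumes n ≥ 2):

```python
def sieve(n):
    """All primes <= n."""
    is_prime = [True] * (n + 1)
    is_prime[0:2] = [False, False]
    for p in range(2, int(n ** 0.5) + 1):
        if is_prime[p]:
            # start at p*p: smaller multiples were crossed out earlier
            for m in range(p * p, n + 1, p):
                is_prime[m] = False
    return [i for i, ok in enumerate(is_prime) if ok]
```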

  • How does Fast Power (binary exponentiation) work (step by step)?

    Idea: Break power into halves using binary representation of exponent.

    Steps:

    1. If exp==0 return 1.
    2. Recursively/iteratively compute pow(base, exp//2).
    3. Square result.
    4. If exp is odd, multiply by base once more.
    5. Complexity: O(log exp) multiplications.

  • How does Prefix Min/Max accumulation work (step by step)?

    Idea: Similar to prefix sums but store min/max up to i.

    Steps:

    1. prefixMin[i] = min(prefixMin[i-1], A[i]).
    2. prefixMax[i] = max(prefixMax[i-1], A[i]).
    3. Complexity: Build O(n), query O(1).

  • How does a Difference Array work (step by step)?

    Idea: Apply range updates in O(1), accumulate later.

    Steps:

    1. Build diff[] where diff[i] = A[i] – A[i-1].
    2. For update (l,r,val): diff[l]+=val, diff[r+1]-=val.
    3. Final array = prefix sum of diff[].
    4. Complexity: O(1) per update, O(n) to finalize.

  • How does Sqrt Decomposition work (step by step)?

    Idea: Partition array into √n blocks, precompute aggregates.

    Steps:

    1. Divide array into blocks of size √n.
    2. For queries, combine full blocks + leftovers.
    3. For updates, update element + its block aggregate.
    4. Complexity: Query/Update O(√n).

  • How does a Fenwick Tree work (step by step)?

    Idea: Store partial sums in tree structure using LSB (least significant bit).

    Steps:

    1. Update: while i ≤ n: tree[i]+=val; i+=LSB(i).
    2. Query prefix: while i>0: sum+=tree[i]; i-=LSB(i).
    3. Complexity: O(log n) per op.
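
    Both operations in Python; `i & -i` isolates the least significant bit (indices are 1-based):

```python
class Fenwick:
    def __init__(self, n):
        self.n = n
        self.tree = [0] * (n + 1)   # index 0 unused

    def update(self, i, val):
        while i <= self.n:
            self.tree[i] += val
            i += i & -i             # climb to the next covering node

    def query(self, i):
        """Prefix sum of elements 1..i."""
        s = 0
        while i > 0:
            s += self.tree[i]
            i -= i & -i             # strip the lowest set bit
        return s
```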

  • How does the Extended Euclidean Algorithm work (step by step)?

    Idea: Find gcd(a,b) + coefficients x,y such that ax+by=gcd.

    Steps:

    1. If b==0: return (a,1,0).
    2. Else: gcd, x1, y1 = extgcd(b, a mod b).
    3. Return (gcd, y1, x1 – (a//b)*y1).
    4. Complexity: O(log(min(a,b))).
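
    These steps as a direct recursive Python function:

```python
def ext_gcd(a, b):
    """Returns (g, x, y) with a*x + b*y == g == gcd(a, b)."""
    if b == 0:
        return a, 1, 0
    g, x1, y1 = ext_gcd(b, a % b)
    # back-substitute: gcd(a,b) = b*x1 + (a mod b)*y1
    return g, y1, x1 - (a // b) * y1
```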

  • How do you compute a modular inverse (step by step)?

    Method 1 (if the modulus m is prime): by Fermat’s little theorem, a⁻¹ ≡ a^(m−2) mod m.

    • Compute it via fast pow in O(log m).

    Method 2 (if gcd(a,m)=1): extended Euclidean algorithm.

    • Solve ax ≡ 1 (mod m); the x found is the inverse.

  • How does Reservoir Sampling work (step by step)?

    Idea: Pick k random samples from a stream of unknown length.

    Steps:

    1. Fill reservoir with first k items.
    2. For each i>k: pick random j ∈ [0..i]. If j<k, replace res[j] with item i.
    3. Complexity: O(n) time, O(k) space.

  • How does Fisher–Yates Shuffle work (step by step)?

    Idea: Generate uniform random permutation.

    Steps:

    1. For i=n-1→1:
      • Pick j=random(0..i).
      • Swap arr[i], arr[j].
    2. Complexity: O(n).

  • How does the Greatest Common Divisor (Euclid’s algorithm) work (step by step)?

    Idea: gcd(a,b) = gcd(b, a mod b).

    Steps:

    1. If b=0 → return a.
    2. Else compute gcd(b, a mod b).
    3. Complexity: O(log(min(a,b))).

  • How does the Least Common Multiple (LCM) work (step by step)?

    Idea: lcm(a,b) * gcd(a,b) = a * b.

    Steps:

    1. Compute g=gcd(a,b).
    2. Return (a/g)*b.
    3. Complexity: O(log(min(a,b))).

  • How does Modular Exponentiation (Fast Pow) work (step by step)?

    Idea: exponentiation by squaring.

    Steps:

    1. res=1.
    2. While exp>0:
      • If exp odd → res=(res*base)%mod.
      • base=(base*base)%mod, exp//=2.
    3. Return res.
    4. Complexity: O(log exp).

  • How does the Miller–Rabin primality test work (step by step)?

    Idea: randomized primality test based on Fermat’s little theorem.

    Steps:

    1. Write n-1=2^s·d (d odd).
    2. Pick random a∈[2,n-2].
    3. Compute x=a^d mod n.
      • If x=1 or x=n-1 → pass round.
    4. Repeat s-1 times: x=(x² mod n); if x=n-1 → pass.
    5. If no pass → composite.
    6. Complexity: O(k·log³ n) with k rounds (probabilistic).

  • How does the Chinese Remainder Theorem (CRT) work (step by step)?

    Idea: Solve system of congruences with coprime moduli.

    Steps:

    1. Given x≡a1 mod m1, x≡a2 mod m2, …
    2. Compute M=m1·m2·…·mk.
    3. For each i: Mi=M/mi, yi=Mi⁻¹ mod mi.
    4. Result = Σ ai·Mi·yi mod M.
    5. Complexity: O(k·log M).

  • How does Union by Rank/Size improve DSU (Union-Find)?

    Idea: attach smaller tree under larger to limit height.

    Steps:

    1. Each set has rank/size.
    2. When union(a,b):
      • Find roots ra,rb.
      • Attach smaller to larger.
      • If ranks equal → increase rank.
    3. Use with path compression for O(α(n)) complexity.

  • How does DSU with Rollback work (step by step)?

    Idea: allow undo of union operations (useful in offline queries).

    Steps:

    1. Keep stack of changes (parent modifications, size changes).
    2. On union: push previous state to stack.
    3. On rollback: pop stack, restore state.
    4. Complexity: Union/Find O(log n) amortized with rollback.

  • How does a Persistent Segment Tree work (step by step)?

    Idea: create new versions of tree on updates without destroying old ones.

    Steps:

    1. Each update copies nodes along path root→leaf.
    2. Unchanged nodes reused.
    3. Store pointer to each version’s root.
    4. Queries can run on any version.
    5. Complexity: O(log n) per update/query, O(log n) extra space per update.

  • How does Segment Tree with Lazy Propagation differ from Fenwick Tree?

    Answer:

    • Segment Tree + Lazy → supports range updates + range queries efficiently.
    • Fenwick Tree → supports point updates + prefix/range queries.
    • Complexity both O(log n), but Segment Tree more general, Fenwick lighter.

  • How does Sparse Table (ST) work (step by step)?

    Idea: preprocess for idempotent queries (min, max, gcd).

    Steps:

    1. Precompute st[i][j] = value for range [i, i+2^j−1].
    2. Query(l,r): let k=⌊log₂(r-l+1)⌋; answer = op(st[l][k], st[r-2^k+1][k]).
    3. Complexity: O(n log n) build, O(1) query; array must be static (no updates).

  • How does Modular Multiplication without overflow work (step by step)?

    Idea: compute (a·b) mod m safely for large a,b.

    Steps:

    1. If language supports 128-bit → multiply directly.
    2. Else use repeated addition/doubling (like fast pow).
    3. res=0; while b>0:
      • If b odd: res=(res+a)%m.
      • a=(2a)%m, b//=2.
    4. Complexity: O(log b).

  • How does Randomized QuickSort avoid worst-case?

    Idea: choose pivot randomly.

    Steps:

    1. Pick pivot index at random.
    2. Partition.
    3. Recurse left/right.
    4. Guarantee: Expected O(n log n) regardless of input order.

  • How does Reservoir Sampling select 1 item uniformly from a stream?

    Steps:

    1. res=first item.
    2. For i=2..n: replace res with item i with probability 1/i.
    3. Ensures uniform distribution.
    4. Complexity: O(n) time, O(1) space.

  • How does an AVL Tree work (step by step)?

    Idea: Self-balancing BST where balance factor (height(left)–height(right)) ∈ {-1,0,1}.

    Steps:

    1. Insert as in normal BST.
    2. Update heights going up.
    3. If the balance factor is violated, perform a rotation:
      • LL → right rotation
      • RR → left rotation
      • LR → left rotation + right rotation
      • RL → right rotation + left rotation
    4. Complexity: Search/Insert/Delete = O(log n), with O(1) rotations per update.

  • How does a Red-Black Tree work (step by step)?

    Idea: Self-balancing BST with coloring rules ensuring ~log n height.

    Properties:

    1. Each node red/black.
    2. Root is black.
    3. No two reds adjacent.
    4. Every path from root→leaf has the same number of black nodes.

    Steps:

    1. Insert like a normal BST and color the new node red.
    2. Fix violations with rotations + recoloring.
    3. Complexity: O(log n) operations; used in std::map, TreeSet.

  • How does a Splay Tree work (step by step)?

    Idea: BST that “splays” accessed nodes to root with rotations → fast for repeated accesses.

    Steps:

    1. On access: perform rotations (zig, zig-zig, zig-zag) until node reaches root.
    2. Insert/Delete also splay the last accessed node.
    3. Complexity: Amortized O(log n), but single op worst O(n).

  • How does a Treap (Tree + Heap) work (step by step)?

    Idea: BST on keys + heap on random priorities.

    Steps:

    1. Insert node with key + random priority.
    2. Place in BST by key.
    3. Rotate to fix heap property by priority.
    4. Complexity: O(log n) expected, randomized balancing.

  • How does a Skip List work (step by step)?

    Idea: Multiple levels of linked lists with “express lanes.”

    Steps:

    1. Each element promoted to higher level with probability ½.
    2. Search: start top level, move right until overshoot, then drop down.
    3. Insert: insert at level 0, promote randomly, update links.
    4. Complexity: O(log n) expected search/insert/delete, O(n) space.

  • How does a Trie work (step by step)?

    Idea: Tree storing characters of strings.

    Steps:

    1. Insert: follow chars, create missing nodes.
    2. Mark end node terminal.
    3. Search: follow chars, check terminal flag.
    4. Complexity: O(m) for string length m, space large.

  • How does a Compressed Trie (Radix Tree) differ?

    Answer: Combines chains of single children into one edge with substring → saves space, faster traversals.

  • How does a Suffix Trie / Automaton work (step by step)?

    Idea: Stores all suffixes of a string, automaton compresses states.

    Steps:

    1. Build incrementally from left to right.
    2. Each extension adds states/transitions.
    3. Used for substring queries, pattern matching.
    4. Complexity: O(n) states, O(n) build.

  • How does a Binary Heap work (step by step)?

    Idea: Complete binary tree with the heap property (min-heap: parent ≤ children; max-heap: parent ≥ children).

    Steps:

    1. Insert: place at bottom, bubble up.
    2. Extract: swap root with last, remove last, bubble down.
    3. Complexity: Insert/Extract O(log n), peek O(1).

  • How does a Binomial Heap work (step by step)?

    Idea: Forest of binomial trees where order encodes size.

    Steps:

    1. Merge heaps by linking roots of same order.
    2. Insert by merging one node heap.
    3. Extract-min merges children back.
    4. Complexity: Insert/Merge O(log n), Extract O(log n).

  • How does a Fibonacci Heap work (step by step)?

    Idea: Heap with relaxed structure → faster amortized ops.

    Steps:

    1. Insert: add node to root list.
    2. Extract-min: remove root, merge children, consolidate.
    3. Decrease-key: cut node, move to root list.
    4. Complexity: Insert/Decrease O(1) amortized, Extract O(log n).

  • How does a Hash Table with Chaining work (step by step)?

    Idea: Array of buckets, collisions stored in linked lists.

    Steps:

    1. Compute hash(key) % size → bucket.
    2. Insert/Search/Delete in that bucket.
    3. Complexity: O(1) avg, O(n) worst if poor hash.

  • How does Open Addressing (Linear Probing) work (step by step)?

    Idea: Resolve collisions by probing next slots.

    Steps:

    1. Compute hash → slot.
    2. If occupied, try next (i+1)%n until empty.
    3. Search: follow same probe sequence.
    4. Complexity: O(1) avg, clustering possible.

  • How does Double Hashing work (step by step)?

    Idea: Use second hash function for probe step.

    Steps:

    1. Slot = (h1(key) + i*h2(key)) % n.
    2. Reduces clustering.
    3. Complexity: O(1) avg.

  • How does a Persistent Array work (step by step)?

    Idea: Keep versions of array after updates.

    Steps:

    1. Copy only modified path (like path copying in trees).
    2. Keep pointer to new root/version.
    3. Complexity: O(log n) per update with structural sharing.

  • How does a Persistent Segment Tree work (step by step)?

    Idea: Save versions of range structure.

    Steps:

    1. Each update copies O(log n) nodes.
    2. Store roots of versions.
    3. Queries can run on any version.
    4. Complexity: O(log n) per op, O(log n) extra space per update.

  • How does the Naïve String Search algorithm work (step by step)?

    Idea: Check the pattern at every possible position.

    Steps:

    1. For i=0→n-m: compare pattern[0..m-1] with text[i..i+m-1].
    2. If all match → report position.
    3. Complexity: O(nm) worst case.

  • How does the Knuth-Morris-Pratt (KMP) algorithm work (step by step)?

    Idea: Use prefix function (LPS) to skip redundant comparisons.

    Steps:

    1. Precompute LPS array for pattern in O(m).
    2. Scan text with pointer j on pattern.
    3. On mismatch, jump j to LPS[j-1] instead of restarting.
    4. On match, advance both pointers; if j==m → match found.
    5. Complexity: O(n+m).
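
    A self-contained Python sketch, building the LPS table and then scanning the text (assumes a non-empty pattern):

```python
def kmp_search(text, pattern):
    """Return all start indices where pattern occurs in text."""
    m = len(pattern)
    # LPS[i] = length of the longest proper prefix of pattern[:i+1]
    # that is also a suffix of it
    lps = [0] * m
    j = 0
    for i in range(1, m):
        while j > 0 and pattern[i] != pattern[j]:
            j = lps[j - 1]
        if pattern[i] == pattern[j]:
            j += 1
        lps[i] = j
    # scan the text, never moving the text pointer backwards
    hits, j = [], 0
    for i, c in enumerate(text):
        while j > 0 and c != pattern[j]:
            j = lps[j - 1]          # fall back instead of restarting
        if c == pattern[j]:
            j += 1
        if j == m:
            hits.append(i - m + 1)
            j = lps[j - 1]          # keep looking for overlapping matches
    return hits
```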

  • How does the Rabin–Karp algorithm work (step by step)?

    Idea: Use rolling hash for fast comparisons.

    Steps:

    1. Compute hash(pattern) and hash of first window in text.
    2. Slide window: update hash in O(1).
    3. If hashes match → verify substring directly.
    4. Complexity: Avg O(n+m), worst O(nm) (collisions).

  • How does the Z-algorithm for pattern matching work (step by step)?

    Idea: Compute Z[i] = length of longest substring starting at i matching prefix.

    Steps:

    1. Concatenate string = pattern + "$" + text.
    2. Build Z-array in O(n+m).
    3. Whenever Z[i]==m, pattern occurs at i-m-1.
    4. Complexity: O(n+m).

  • How does the Prefix Function (π array) work (step by step)?

    Idea: For each index, longest prefix also suffix of substring.

    Steps:

    1. π[0]=0.
    2. For i=1→n-1: set j=π[i-1]; while j>0 and s[i]!=s[j], j=π[j-1];
    3. If s[i]==s[j], j++. π[i]=j.
    4. Complexity: O(n).

  • How does the Z-function work (step by step)?

    Idea: Z[i] = length of longest prefix starting at i.

    Steps:

    1. Maintain [l,r] of rightmost match.
    2. If i>r → brute force compare from i.
    3. Else set Z[i]=min(r-i+1,Z[i-l]); extend with brute force.
    4. Complexity: O(n).

  • How does a Suffix Array work (step by step)?

    Idea: Array of all suffixes sorted lexicographically.

    Steps:

    1. Build suffixes with indices.
    2. Sort them (O(n log² n) naive, O(n log n) with doubling).
    3. LCP array can be built with Kasai’s algorithm in O(n).
    4. Complexity: O(n log n) build, O(n) space.

  • How does a Suffix Tree work (step by step)?

    Idea: Compressed trie of all suffixes.

    Steps:

    1. Each edge labeled with substring of text.
    2. Each suffix corresponds to a path from root.
    3. Built with Ukkonen’s algorithm in O(n).
    4. Complexity: O(n) build, O(n) space, substring search O(m).

  • How does a Suffix Automaton (SAM) work (step by step)?

    Idea: Minimal DFA accepting all suffixes of a string.

    Steps:

    1. Build states incrementally for each new character.
    2. Each state stores length + link + transitions.
    3. Suffix links connect to “longest proper suffix” states.
    4. Complexity: O(n) build, O(n) space, powerful for substring/occurrence queries.

  • How does Manacher’s Algorithm work (step by step)?

    Idea: Find longest palindrome centered at each position in O(n).

    Steps:

    1. Transform string with separators (#).
    2. Maintain center + right boundary.
    3. For each i, mirror=2*center-i.
    4. Expand palindrome using symmetry + brute force.
    5. Update center,right if expanded beyond.
    6. Complexity: O(n).

  • How does Polynomial Rolling Hash work (step by step)?

    Idea: Represent string as polynomial in base p, mod M.

    Steps:

    1. Precompute powers of p mod M.
    2. hash(s[0..i]) = (Σ s[j]*p^j) mod M.
    3. Substring hash in O(1) with prefix hashes.
    4. Complexity: O(n) preprocess, O(1) query.

  • How does Huffman Coding work (step by step)?

    Idea: Variable-length codes based on frequency.

    Steps:

    1. Build min-heap of chars by frequency.
    2. Pop 2 nodes, combine into new node with freq=sum.
    3. Repeat until one node left = root.
    4. Left=0, Right=1 → assign codes.
    5. Complexity: O(n log n).
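
    A compact Python sketch that merges code tables instead of building an explicit tree (assumes at least two symbols; the counter breaks frequency ties deterministically):

```python
import heapq

def huffman_codes(freq):
    """freq: {symbol: count} with >= 2 symbols. Returns {symbol: bitstring}."""
    heap = [(f, i, {s: ""}) for i, (s, f) in enumerate(freq.items())]
    heapq.heapify(heap)
    counter = len(heap)
    while len(heap) > 1:
        f1, _, c1 = heapq.heappop(heap)      # two least-frequent subtrees
        f2, _, c2 = heapq.heappop(heap)
        merged = {s: "0" + code for s, code in c1.items()}   # left = 0
        merged.update({s: "1" + code for s, code in c2.items()})  # right = 1
        heapq.heappush(heap, (f1 + f2, counter, merged))
        counter += 1
    return heap[0][2]
```

More frequent symbols end up with shorter codes, minimizing the total encoded length.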