1.3 Glossary of Algorithms

1.3.1 Depth First Traversal

: An algorithm for traversing or searching tree or graph data structures. The algorithm starts at the root node and explores as far as possible along each branch recursively. See Tree DFS for tree-specific implementation and Matrix DFS for matrix-specific implementation.
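
The idea can be sketched in a few lines of Python; this is a minimal iterative version for a graph given as an adjacency-list dict (the function name and graph shape are illustrative, not from the text):

```python
def dfs(graph, start):
    """Iterative depth-first traversal of a graph given as an adjacency list."""
    visited, seen, stack = [], {start}, [start]
    while stack:
        node = stack.pop()
        visited.append(node)
        # Push neighbors in reverse so they are explored in listed order.
        for nb in reversed(graph.get(node, [])):
            if nb not in seen:
                seen.add(nb)
                stack.append(nb)
    return visited
```

A recursive version is equally common; the explicit stack here simply makes the "explore as far as possible along each branch" behavior visible.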

1.3.2 Breadth First Traversal

: An algorithm for traversing or searching tree or graph data structures. It starts at the tree root, and explores all of the neighbor nodes at the present depth prior to moving on to the nodes at the next depth level. See Tree BFS for tree-specific implementation and Matrix BFS for matrix-specific implementation.
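
A minimal Python sketch using a queue, for a graph given as an adjacency-list dict (names are illustrative):

```python
from collections import deque

def bfs(graph, start):
    """Breadth-first traversal: visit all nodes at depth d before depth d+1."""
    order, seen, queue = [], {start}, deque([start])
    while queue:
        node = queue.popleft()          # FIFO queue drives the level order
        order.append(node)
        for nb in graph.get(node, []):
            if nb not in seen:
                seen.add(nb)
                queue.append(nb)
    return order
```

The FIFO queue is what distinguishes BFS from DFS: swapping `popleft()` for `pop()` would turn this into a depth-first traversal.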

1.3.3 Topological Sort

: A linear ordering of vertices in a directed acyclic graph (DAG) such that for every directed edge (u, v), vertex u comes before vertex v in the ordering. Used for dependency resolution, task scheduling, and build systems. Can be implemented using Kahn’s algorithm (BFS-based) or DFS-based approach.

See also: Detailed explanation in Common Strategies (2.3.6)
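
A minimal sketch of Kahn's (BFS-based) algorithm, assuming nodes are labeled 0..n-1 and edges are (u, v) pairs:

```python
from collections import deque

def topological_sort(num_nodes, edges):
    """Kahn's algorithm: repeatedly take nodes with no remaining incoming edges.
    Returns a valid ordering, or None if the graph contains a cycle."""
    adj = [[] for _ in range(num_nodes)]
    indegree = [0] * num_nodes
    for u, v in edges:                  # edge u -> v: u must come before v
        adj[u].append(v)
        indegree[v] += 1
    queue = deque(i for i in range(num_nodes) if indegree[i] == 0)
    order = []
    while queue:
        u = queue.popleft()
        order.append(u)
        for v in adj[u]:
            indegree[v] -= 1
            if indegree[v] == 0:
                queue.append(v)
    # If some node was never reached, its in-degree never hit zero: a cycle.
    return order if len(order) == num_nodes else None
```

Returning `None` on a cycle doubles as a cycle-detection check, which is why Kahn's algorithm is popular in build systems.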

1.3.4 Minimax

: (sometimes written MinMax) an algorithm for minimizing the possible loss in a worst-case (maximum-loss) scenario.

1.3.5 Recursion

: a method of solving a problem where the solution depends on solutions to smaller instances of the same problem.
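
The classic one-liner illustration: factorial defined in terms of a smaller instance of itself.

```python
def factorial(n):
    """n! expressed via a smaller instance of the same problem."""
    if n <= 1:                  # base case stops the recursion
        return 1
    return n * factorial(n - 1)  # recursive case shrinks the problem
```

Every recursive solution needs a base case that terminates the chain of self-calls; without it, `factorial` would recurse until the call stack overflows.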

1.3.6 Backtracking

: a method that constructs a solution to a computational problem incrementally, one small piece at a time. Whenever the algorithm must decide between multiple alternatives for the next component of the solution, it recursively evaluates each alternative and then chooses the best one.

See also: Detailed explanation in Common Strategies (14.0.2)
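
A minimal sketch of the choose/explore/un-choose pattern, generating all permutations of a list (function names are illustrative):

```python
def permutations(items):
    """Build each permutation one element at a time; undo (backtrack)
    each choice after exploring it."""
    result, current = [], []
    used = [False] * len(items)

    def backtrack():
        if len(current) == len(items):
            result.append(current[:])   # complete solution found
            return
        for i, item in enumerate(items):
            if not used[i]:
                used[i] = True          # choose
                current.append(item)
                backtrack()             # explore
                current.pop()           # un-choose (backtrack)
                used[i] = False

    backtrack()
    return result
```

The `pop()` / `used[i] = False` pair is the backtracking step: it restores the state so the next alternative starts from a clean slate.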

1.3.7 Dynamic Programming

: a method of simplifying a complicated problem by breaking it down into simpler sub-problems in a recursive manner. You have a main problem (the root of your tree of subproblems) and subproblems (subtrees). The subproblems typically repeat and overlap. Common implementations are recursive with caching or iterative table-filling.

See also: Detailed explanation in Common Strategies (14.0.1)

1.3.8 Memoization

: an optimization technique used to speed up programs by storing the results of expensive function calls and returning the cached result when the same inputs occur again. It is typically implemented as recursive calls backed by a cache (or some iterative equivalent).

See also: Detailed explanation in Common Strategies (14.0.1.1)

1.3.9 Tabulation

: an approach to dynamic programming where you first fill up a table of subproblem results, and then compute the solution to the original problem from the results in this table.

See also: Detailed explanation in Common Strategies (14.0.1.2)
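
The same Fibonacci problem solved bottom-up, as a minimal sketch of tabulation:

```python
def fib_table(n):
    """Fill the table from the smallest subproblems upward; no recursion needed."""
    if n < 2:
        return n
    table = [0] * (n + 1)
    table[1] = 1
    for i in range(2, n + 1):
        table[i] = table[i - 1] + table[i - 2]  # each entry uses earlier entries
    return table[n]
```

Tabulation trades the call-stack of memoized recursion for an explicit loop, and often allows further space optimization (here, keeping only the last two entries would suffice).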

1.3.10 Divide and Conquer

: a paradigm that recursively breaks down a problem into two or more sub-problems of the same or a related type, until these become simple enough to be solved directly. The solutions to the sub-problems are then combined to give a solution to the original problem.

1.3.11 Binary Search Tree

: a binary tree whose nodes are ordered: the keys in a node's left subtree are less than the key in the node itself, and the keys in its right subtree are greater than it.
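
A minimal sketch of the ordering invariant, with insertion and lookup (class and function names are illustrative):

```python
class Node:
    def __init__(self, key):
        self.key, self.left, self.right = key, None, None

def insert(root, key):
    """Smaller keys go left, larger keys go right; duplicates are ignored."""
    if root is None:
        return Node(key)
    if key < root.key:
        root.left = insert(root.left, key)
    elif key > root.key:
        root.right = insert(root.right, key)
    return root

def contains(root, key):
    """Each comparison discards one subtree: O(height) per lookup."""
    while root is not None:
        if key == root.key:
            return True
        root = root.left if key < root.key else root.right
    return False
```

Because each step descends one level, lookups cost O(log n) in a balanced tree but degrade to O(n) if keys are inserted in sorted order.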

1.3.12 Merge sort

: a Divide and Conquer algorithm. It divides the input array into two halves, calls itself on each half, and then merges the two sorted halves.

See also: Detailed explanation in Common Strategies (2.3.4)
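
A minimal out-of-place sketch; the merge step is where the two sorted halves are interleaved:

```python
def merge_sort(arr):
    """Split in half, sort each half recursively, then merge the sorted halves."""
    if len(arr) <= 1:
        return arr                      # a 0- or 1-element array is sorted
    mid = len(arr) // 2
    left, right = merge_sort(arr[:mid]), merge_sort(arr[mid:])
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:         # <= keeps the sort stable
            merged.append(left[i]); i += 1
        else:
            merged.append(right[j]); j += 1
    return merged + left[i:] + right[j:]  # append whichever half remains
```

Merge sort runs in O(n log n) in every case; the price is the O(n) auxiliary space used during merging.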

1.3.13 Bubble sort

: a sorting algorithm that compares adjacent pairs and swaps them if necessary, causing the items to “bubble” up toward their proper position.

See also: Detailed explanation in Common Strategies (2.3.2)

1.3.14 Insertion Sort

: a sorting algorithm that builds the final sorted array one element at a time by repeatedly picking the next element and inserting it into its correct position among the previously sorted elements. Efficient for small arrays and nearly sorted data.

See also: Detailed explanation in Common Strategies (2.3.3)

1.3.15 Quick Sort

: a Divide and Conquer algorithm. It picks an element as pivot and partitions the given array around the picked pivot.

See also: Detailed explanation in Common Strategies (2.3.5)
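
A minimal sketch; this out-of-place variant trades the usual in-place partitioning for clarity:

```python
def quick_sort(arr):
    """Partition around a pivot, then recursively sort each side."""
    if len(arr) <= 1:
        return arr
    pivot = arr[len(arr) // 2]
    less    = [x for x in arr if x < pivot]
    equal   = [x for x in arr if x == pivot]
    greater = [x for x in arr if x > pivot]
    return quick_sort(less) + equal + quick_sort(greater)
```

Production implementations partition in place (e.g. Lomuto or Hoare schemes) to avoid the extra lists; the expected running time is O(n log n), degrading to O(n²) with consistently bad pivot choices.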

1.3.16 Minimum Heap

: a complete binary tree in which the value in each internal node is smaller than or equal to the values in the children of that node. Thus, the root node is the smallest element in the tree.
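
In Python the `heapq` module maintains exactly this invariant over a plain list; a minimal sketch (the helper name is illustrative):

```python
import heapq

def k_smallest(nums, k):
    """A min-heap keeps its smallest element at the root, so popping k times
    yields the k smallest values in ascending order."""
    heap = list(nums)
    heapq.heapify(heap)             # O(n) bottom-up heap construction
    return [heapq.heappop(heap) for _ in range(k)]
```

Each `heappop` removes the root (the current minimum) and restores the heap property in O(log n), so extracting k elements costs O(n + k log n).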

1.3.17 Stack

: a linear data structure that follows the Last-In-First-Out (LIFO) principle. Elements are added (pushed) and removed (popped) from the same end, called the top of the stack. Common operations include push (add element), pop (remove top element), and peek (view top element without removing). Stacks are useful for tracking state, reversing sequences, matching pairs (parentheses, brackets), and handling cascading operations where recent additions need to be accessed or removed first.
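
The pair-matching use case can be sketched with the classic balanced-brackets check (function name is illustrative):

```python
def is_balanced(s):
    """Push openers; each closer must match the most recent opener (LIFO)."""
    pairs = {')': '(', ']': '[', '}': '{'}
    stack = []
    for ch in s:
        if ch in '([{':
            stack.append(ch)                        # push
        elif ch in pairs:
            if not stack or stack.pop() != pairs[ch]:  # pop and compare
                return False
    return not stack        # leftover openers mean the string is unbalanced
```

The LIFO order is exactly what nesting requires: the most recently opened bracket must be the first one closed.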

1.3.18 Greedy

: an algorithmic paradigm that builds up a solution piece by piece, always choosing the next piece that offers the most immediate benefit. The greedy algorithm makes the locally optimal choice at each step with the hope of finding a global optimum.

1.3.19 Two Pointer

: a technique where two pointers iterate through a data structure in tandem (or opposite directions) until one or both of the pointers hit a certain condition. Commonly used in sorted arrays for searching pairs or triplets.

See also: Detailed explanation in Common Strategies (2.1)
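
A minimal sketch of the opposite-direction variant: finding a pair with a given sum in a sorted array.

```python
def pair_with_sum(sorted_nums, target):
    """Move pointers inward from both ends of a sorted array."""
    lo, hi = 0, len(sorted_nums) - 1
    while lo < hi:
        s = sorted_nums[lo] + sorted_nums[hi]
        if s == target:
            return (sorted_nums[lo], sorted_nums[hi])
        if s < target:
            lo += 1     # sum too small: need a larger left element
        else:
            hi -= 1     # sum too large: need a smaller right element
    return None
```

Sortedness is what justifies discarding an element on each step, giving O(n) time instead of the O(n²) of checking every pair.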

1.3.20 Sliding Window

: a technique that maintains a contiguous subset (window) of elements in a linear data structure, typically an array or string. The window slides or expands/contracts based on certain conditions, efficiently solving subarray/substring problems in O(n) time by avoiding redundant computations. Commonly implemented using two pointers to track the window boundaries.

See also: Two Pointer technique, Pattern 3: Sliding Window (2.1)
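
A minimal sketch with a fixed-size window: the maximum sum of any k consecutive elements (assumes k ≤ len(nums)).

```python
def max_window_sum(nums, k):
    """Slide a fixed-size window: add the entering element, drop the leaving one."""
    window = sum(nums[:k])              # sum of the first window
    best = window
    for i in range(k, len(nums)):
        window += nums[i] - nums[i - k]  # O(1) update instead of re-summing
        best = max(best, window)
    return best
```

Recomputing each window from scratch would cost O(n·k); the incremental update is what brings this down to O(n).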

1.3.21 Hash Table

: a data structure that implements an associative array, mapping keys to values. It uses a hash function to compute an index into an array of buckets or slots, providing O(1) average-case time complexity for insertions and lookups.

1.3.23 Sorting

: the process of arranging elements in a specific order (ascending or descending). Common sorting algorithms include merge sort, quick sort, and bubble sort, with varying time complexities.

1.3.24 Matrix Traversal

: a technique for visiting each element in a 2D matrix systematically, typically in patterns such as row-by-row, column-by-column, diagonal, or spiral order. See Matrix Traversal Patterns for detailed explanations, Matrix DFS, and Matrix BFS.

1.3.25 Matrix Manipulation

: operations that modify or transform a 2D matrix structure, such as rotation, transposition, flipping, or in-place modifications of matrix elements.

1.3.26 Array

: a linear data structure that stores elements in contiguous memory locations, allowing direct access to any element using its index in O(1) time. Arrays have fixed size and support efficient random access but require O(n) time for insertion/deletion in the middle.

1.3.27 Linked List

: a linear data structure where elements (nodes) are stored non-contiguously in memory, with each node containing data and a pointer/reference to the next node. Linked lists allow efficient O(1) insertion/deletion at known positions but require O(n) time for random access.

1.3.28 Queue

: a linear data structure that follows the First-In-First-Out (FIFO) principle. Elements are added (enqueued) at the rear and removed (dequeued) from the front. Common operations include enqueue (add element), dequeue (remove front element), and peek (view front element). Queues are useful for breadth-first search, task scheduling, and processing items in order.

1.3.29 Trie

: a tree-like data structure used for efficient storage and retrieval of strings, where each node represents a character and paths from root to nodes represent prefixes. Also known as a prefix tree, it enables fast prefix-based searches and is commonly used for autocomplete, spell checking, and IP routing.
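
A minimal sketch with dict-based children, supporting insertion and prefix queries:

```python
class Trie:
    def __init__(self):
        self.children = {}      # maps a character to a child Trie node
        self.is_word = False    # marks that a complete word ends here

    def insert(self, word):
        node = self
        for ch in word:
            node = node.children.setdefault(ch, Trie())
        node.is_word = True

    def starts_with(self, prefix):
        """Walk the path spelled by the prefix; O(len(prefix)) regardless
        of how many words are stored."""
        node = self
        for ch in prefix:
            if ch not in node.children:
                return False
            node = node.children[ch]
        return True
```

A full-word lookup is the same walk with a final `node.is_word` check; autocomplete enumerates the subtree below the prefix node.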

1.3.30 Bitwise Operations

: operations that directly manipulate individual bits of binary representations of numbers. Common operations include AND (&), OR (|), XOR (^), NOT (~), left shift (<<), and right shift (>>). Useful for optimization, flags, masks, and solving problems involving powers of 2.

1.3.31 Bit Manipulation

: algorithmic techniques that use bitwise operations to solve problems efficiently. Common patterns include counting set bits (Hamming weight), checking/setting/clearing specific bits, finding single elements in arrays, and Brian Kernighan’s algorithm (n & (n-1) to remove the rightmost 1-bit). Examples: Number of 1 Bits, Sum of Two Integers. Often provides O(1) space and faster runtime than arithmetic alternatives.
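
Brian Kernighan's trick mentioned above can be sketched in a few lines:

```python
def count_set_bits(n):
    """Brian Kernighan's algorithm: n & (n - 1) clears the lowest set bit,
    so the loop runs once per 1-bit rather than once per bit position."""
    count = 0
    while n:
        n &= n - 1
        count += 1
    return count
```

For example, 13 is 1101 in binary, so the loop runs three times.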

1.3.32 Simulation

: a problem-solving technique where you directly implement the process or rules described in the problem statement, step by step, mimicking the real-world behavior. Instead of finding a mathematical formula or clever optimization, you execute the algorithm exactly as specified, tracking state changes through each iteration. Common in problems involving game rules, physical processes (rotation, traversal), arithmetic operations with carry, or boundary tracking. Examples include spiral matrix traversal (simulating directional movement and boundary shrinking) and adding numbers digit-by-digit with carry propagation.

1.3.33 Probability

: the mathematical framework for reasoning about uncertainty and randomness. In algorithm design, probability is used to analyze expected behavior of randomized algorithms, design Monte Carlo methods, and ensure uniform distributions. Common applications include random sampling, probabilistic data structures (Bloom filters, skip lists), and analyzing average-case complexity. Essential for understanding randomized algorithms like QuickSort’s expected O(n log n) time or the expected number of hash collisions.

1.3.34 Rejection Sampling

: a technique for generating random samples from a target distribution by sampling from a larger, easily-generated proposal distribution and rejecting samples that don’t meet certain criteria. The method repeatedly draws from the proposal distribution until an accepted sample is obtained, ensuring the output follows the desired uniform distribution. Commonly used to transform one random number generator into another (e.g., implementing rand10() using rand7()) or sampling from complex probability distributions. The acceptance rate determines efficiency—higher acceptance rates mean fewer rejections and better performance.

1.3.35 Reservoir Sampling

: an algorithm for randomly choosing k samples from a stream of n items where n is either very large or unknown. The key advantage is that it requires only O(k) space and processes each item exactly once in a single pass. The algorithm maintains a reservoir of k items and decides probabilistically whether to include each new item, replacing a random existing item in the reservoir. Each item has an equal probability of k/n of being in the final sample, ensuring uniform random selection. Commonly used for random sampling from large datasets, log files, or data streams where you cannot store all items in memory. The classic problem is selecting a random element from a linked list without knowing its length in advance.
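
The single-pass algorithm described above can be sketched as follows (function name is illustrative):

```python
import random

def reservoir_sample(stream, k):
    """Keep a reservoir of k items; item i (0-based) replaces a random
    reservoir slot with probability k/(i+1), giving each item an equal
    k/n chance of surviving to the end."""
    reservoir = []
    for i, item in enumerate(stream):
        if i < k:
            reservoir.append(item)      # fill the reservoir first
        else:
            j = random.randrange(i + 1)  # uniform in [0, i]
            if j < k:
                reservoir[j] = item      # replace a random existing sample
    return reservoir
```

Note that `stream` can be any iterable, including one whose length is unknown in advance; only the k reservoir slots are ever held in memory.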