📖 Chapters
- 1. Introduction to DSA and Its Importance
- 2. Time and Space Complexity (Big O, Θ, Ω)
- 3. Recursion and Stack Memory
- 4. Mathematics for DSA (GCD, LCM, Modulo Arithmetic)
- 5. Arrays: Basics and Operations
- 6. Strings: Manipulation & Inbuilt Functions
- 7. Structures and Unions with Arrays
- 8. Linked Lists (Singly, Circular, Doubly)
- 9. Stack (Using Array & Linked List)
- 10. Queue (Simple, Circular, Deque)
- 11. Linear Search & Binary Search
- 12. Sorting Basics: Bubble, Selection, Insertion
- 13. Merge Sort
- 14. Quick Sort
- 15. Counting Sort, Radix Sort, Bucket Sort
- 16. Heap Sort
- 17. Binary Trees & Representations
- 18. Binary Tree Traversals (Pre, In, Post)
- 19. Binary Search Tree (BST)
- 20. AVL Trees (Self-balancing BST)
- 21. Trie (Prefix Tree)
- 22. Segment Tree (Range Queries)
- 23. Fenwick Tree / Binary Indexed Tree (BIT)
- 24. Heap & Priority Queues (Min-Heap, Max-Heap)
- 25. Hashing and Hash Tables
- 26. Disjoint Set Union (Union-Find, Path Compression)
- 27. Sparse Tables
- 28. Sliding Window Technique
- 29. Two Pointers Technique
- 30. Graph Representations (Adjacency Matrix & List)
- 31. BFS and DFS (Traversal & Search)
- 32. Topological Sort (Kahn’s & DFS)
- 33. Shortest Path Algorithms (Dijkstra, Bellman-Ford)
- 34. Minimum Spanning Tree (Kruskal, Prim)
- 35. Cycle Detection in Graphs
- 36. Bridges and Articulation Points
- 37. Greedy Algorithms (Activity Selection, Huffman Coding)
- 38. Backtracking (N-Queens, Sudoku Solver)
- 39. Dynamic Programming (Intro, Memoization, Tabulation)
- 40. Advanced DP (LIS, Knapsack, DP on Trees/Grids)
- 41. Bit Manipulation and XOR Tricks
- 42. Interview Problems from FAANG Companies
- 43. Competitive Coding Patterns
- 44. Building Mini Projects Using DSA (Menu-driven)
- 45. Final Quiz + Problem Sheet (100+ questions)
- 46. Next Steps: Competitive Programming or System Design?
Data Structures and Algorithms in C (DSA)
Category: IT Fundamentals & Programming
This course is designed to help learners master the core concepts of Data Structures and Algorithms (DSA) using the C programming language. Starting from the basics and advancing to complex topics like graphs, dynamic programming, and memory optimization, this course …
📘 Chapter: Hashing and Hash Tables
Moving beyond ordered and hierarchical data structures, we delve into Hashing and Hash Tables, a cornerstone of efficient data retrieval. Hash tables provide an average constant-time (O(1)) performance for insertion, deletion, and search operations, making them incredibly powerful for scenarios where quick access to data is paramount. This efficiency is achieved through the use of a hash function, which maps keys to indices in an array.
💡 What is Hashing? Core Concepts
Hashing is the process of converting a given key (which can be any type of data, e.g., an integer, string, or object) into a small, fixed-size numerical value called a hash value or hash code. This hash value is then used as an index to store or retrieve the data in an array-like structure called a hash table.
Key components of Hashing:
- Key: The input value that needs to be stored or retrieved.
- Hash Function: A function that takes a key and converts it into a hash value (an integer index); see the C sketch after this list. A good hash function should:
  - Be deterministic: Always produce the same hash value for the same input key.
  - Be fast to compute.
  - Distribute keys uniformly across the hash table to minimize collisions.
- Hash Value (Hash Code): The integer output of the hash function.
- Hash Table (Hash Map): An array (or a collection of buckets) where data is stored based on the hash values.
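To make this concrete, here is a minimal sketch of a string hash function in C. The djb2 algorithm shown is a well-known choice (by Dan Bernstein); the table size of 101 and the function names are illustrative assumptions, not part of any fixed API (prime table sizes tend to spread keys more evenly).

```c
#include <stddef.h>

#define TABLE_SIZE 101  /* illustrative size; primes help spread keys */

/* djb2 string hash: deterministic, fast, and reasonably
 * uniform for typical string keys */
unsigned long hash_string(const char *key) {
    unsigned long h = 5381;
    for (const char *p = key; *p != '\0'; p++)
        h = h * 33 + (unsigned char)*p;
    return h;
}

/* Map the (possibly huge) hash value to a valid table index */
size_t index_for(const char *key) {
    return (size_t)(hash_string(key) % TABLE_SIZE);
}
```

For example, `index_for("udaan")` always yields the same index, which is exactly the deterministic property listed above.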
💥 Collision Handling: The Challenge of Hashing
A collision occurs when two different keys map to the same hash value (and thus the same index in the hash table). Since a hash table has a finite number of slots and keys can be infinite, collisions are inevitable. Effective collision handling is critical for maintaining the efficiency of hash tables.
The two most common methods for collision handling are:
1. Separate Chaining
In separate chaining, each slot (or "bucket") in the hash table is a pointer to a linked list (or another data structure like a balanced BST for very large chains). When a collision occurs, the new key-value pair is simply added to the linked list at that index.
```c
/* Conceptual C structures for separate chaining.
   KeyType and ValueType stand in for whatever the table stores. */
typedef struct Node {
    KeyType key;
    ValueType value;
    struct Node *next;   /* next element in this bucket's chain */
} Node;

/* Hash table: an array of linked-list heads */
Node *hashTable[TABLE_SIZE];

/* Insert (conceptually): hash the key, then add to the front of the chain */
int index = hashFunction(key) % TABLE_SIZE;
Node *newNode = createNode(key, value);
newNode->next = hashTable[index];   /* link in at the head: O(1) */
hashTable[index] = newNode;
```
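Search follows the same path: hash the key, then walk only that one chain. A minimal sketch, reusing the conceptual `Node`, `hashFunction`, and `hashTable` names from above:

```c
/* Look up a key: returns the node if present, NULL otherwise */
Node *chainFind(KeyType key) {
    int index = hashFunction(key) % TABLE_SIZE;
    for (Node *cur = hashTable[index]; cur != NULL; cur = cur->next)
        if (cur->key == key)        /* for string keys, use strcmp() */
            return cur;
    return NULL;
}
```

On average each chain holds only a few elements, which is where the O(1) average comes from; the worst case noted below shows what happens when it does not.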
Advantages: Simple to implement, less sensitive to hash function quality, table never "fills up."
Disadvantages: Requires extra memory for pointers, performance degrades with long chains (becomes O(N) in worst case for search/insert/delete if all elements hash to one bucket).
2. Open Addressing (Probing)
In open addressing, all elements are stored directly within the hash table array. When a collision occurs, the algorithm "probes" for an alternative empty slot in the table based on a predefined probing sequence.
- Linear Probing: If index `i` is occupied, try `i+1`, `i+2`, `i+3`, etc. (modulo table size). Leads to "primary clustering."
- Quadratic Probing: If index `i` is occupied, try `i+1^2`, `i+2^2`, `i+3^2`, etc. (modulo table size). Reduces primary clustering but can suffer from "secondary clustering."
- Double Hashing: Uses a second hash function to determine the step size for probing: `(hash1(key) + i * hash2(key)) % TABLE_SIZE`. Offers better distribution.
```c
#include <stdbool.h>

/* Conceptual arrays for open addressing: keys and values live
   directly in the table, with a flag marking used slots */
KeyType   keys[TABLE_SIZE];
ValueType values[TABLE_SIZE];
bool      occupied[TABLE_SIZE];

/* Insert with linear probing (conceptually): step forward until we
   find an empty slot or the key itself (for an update) */
int index = hashFunction(key) % TABLE_SIZE;
while (occupied[index] && keys[index] != key) {
    index = (index + 1) % TABLE_SIZE;   /* probe the next slot */
}
keys[index] = key;
values[index] = value;
occupied[index] = true;
```
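Lookup mirrors insertion: probe along the same sequence and stop at the first empty slot, which proves the key is absent. A sketch using the same conceptual arrays (deletion is ignored here; see the tombstone note below):

```c
/* Search with linear probing; returns true and fills *out on a hit */
bool probeFind(KeyType key, ValueType *out) {
    int index = hashFunction(key) % TABLE_SIZE;
    for (int i = 0; i < TABLE_SIZE; i++) {   /* at most one full pass */
        if (!occupied[index])
            return false;     /* empty slot: the key cannot be further on */
        if (keys[index] == key) {
            *out = values[index];
            return true;
        }
        index = (index + 1) % TABLE_SIZE;
    }
    return false;   /* table completely full and key absent */
}
```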
Advantages: No extra memory for pointers, better cache performance.
Disadvantages: Table can "fill up," sensitive to load factor and hash function, suffers from clustering, deletion is complex (requires "tombstones").
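The deletion problem deserves a sketch: if a slot were simply marked empty, `probeFind` would stop there and miss keys stored further along the probe sequence. A tombstone flag keeps the probe chain walkable; the `deleted` array below is a hypothetical addition for illustration:

```c
bool deleted[TABLE_SIZE];   /* tombstone markers (hypothetical addition) */

/* Delete with linear probing: mark the slot as a tombstone instead of
   truly empty, so later probes for other keys do not stop short */
void probeErase(KeyType key) {
    int index = hashFunction(key) % TABLE_SIZE;
    for (int i = 0; i < TABLE_SIZE; i++) {
        if (!occupied[index] && !deleted[index])
            return;                    /* truly empty: key is absent */
        if (occupied[index] && keys[index] == key) {
            occupied[index] = false;
            deleted[index] = true;     /* tombstone, not empty */
            return;
        }
        index = (index + 1) % TABLE_SIZE;
    }
}
```

With tombstones in play, the empty-slot test in `probeFind` must likewise become `!occupied[index] && !deleted[index]`, so searches skip over tombstones instead of stopping at them.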
📊 Hash Table Visualization (Separate Chaining)
[Figure: a size-4 hash table with separate chaining. Keys are hashed to an index, and collisions are handled by storing elements in linked lists (chains) at that index.]
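As a stand-in for the figure, a minimal text sketch, assuming integer keys and `hash(key) = key % 4`:

```
index 0:  8 -> 12 -> NULL    (8 % 4 == 0 and 12 % 4 == 0: collision, chained)
index 1:  5 -> NULL
index 2:  (empty)
index 3:  7 -> 11 -> NULL    (7 % 4 == 3 and 11 % 4 == 3: collision, chained)
```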
⚖️ Load Factor and Rehashing
The load factor ($\alpha$) of a hash table is a crucial metric that impacts performance:
$\alpha = \frac{\text{Number of elements (n)}}{\text{Size of hash table (m)}}$
- For Separate Chaining: $\alpha$ can be $> 1$. Average time complexity is $O(1 + \alpha)$.
- For Open Addressing: $\alpha$ must be $\le 1$. Performance degrades significantly as $\alpha$ approaches 1.
When the load factor exceeds a certain threshold (e.g., 0.7-0.75 for open addressing, or 1.0 for chaining), the hash table undergoes rehashing. This involves:
- Creating a new, larger hash table (typically double the size).
- Re-hashing all existing elements from the old table into the new table.
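Rehashing itself is an O(n) pass, but because the table doubles each time, its cost amortizes to O(1) per insertion. A minimal sketch for the chained table, reusing the conceptual `Node` and `hashFunction` from earlier and assuming heap-allocated bucket arrays:

```c
#include <stdlib.h>

/* Move every node from the old bucket array into a larger one.
   Nodes are relinked in place, not copied, so no per-element allocation. */
Node **rehash(Node **oldTable, int oldSize, int newSize) {
    Node **newTable = calloc((size_t)newSize, sizeof(Node *));
    for (int i = 0; i < oldSize; i++) {
        Node *cur = oldTable[i];
        while (cur != NULL) {
            Node *next = cur->next;                      /* save before relinking */
            int idx = hashFunction(cur->key) % newSize;  /* index in the new table */
            cur->next = newTable[idx];
            newTable[idx] = cur;
            cur = next;
        }
    }
    free(oldTable);
    return newTable;
}
```

Note that every element must be re-hashed against the new size: an element's old index says nothing about where it lands in the larger table.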
🚀 Advantages and Disadvantages of Hash Tables
Advantages:
- Fast Average Case Operations: O(1) average time for search, insertion, and deletion.
- Efficient for Large Datasets: Scales well with large amounts of data, assuming a good hash function and proper load factor management.
- Flexible Keys: Can store data associated with various types of keys (strings, objects, numbers).
Disadvantages:
- Worst Case Performance: Can degrade to O(N) in the worst case (e.g., all keys hash to the same bucket due to a bad hash function or malicious input).
- No Ordered Data: Elements are not stored in any particular order, making range queries or finding min/max values inefficient.
- Memory Overhead: Can consume more memory than other structures, especially with a low load factor or extensive chaining.
- Collision Handling Complexity: Requires careful design for hash functions and collision resolution strategies.
📌 Real-Life Example (UdaanPath)
Hash Tables are used extensively in modern software, and UdaanPath would heavily rely on them for many core functionalities:
- User Authentication and Session Management: Storing user credentials (hashed passwords), session tokens, and user profiles for quick lookup during login and subsequent requests. User IDs or session IDs can be keys, allowing O(1) retrieval of user data.
- Caching: Implementing various caches (e.g., API response cache, database query cache, frequently accessed course content cache) where keys are URLs, query parameters, or content IDs, and values are the cached data. This drastically reduces database/API calls.
- Dictionary/Map Implementations: Underlying data structure for programming language dictionaries (Python dict, Java HashMap, C++ unordered_map) used internally for various application logic, like mapping course codes to full course names.
- Counting Frequencies: Quickly counting occurrences of words in forum posts, popular search queries, or common errors in student code submissions. Keys are the items to count, values are their frequencies.
- URL Shortening Services: Mapping long URLs to short, unique codes, and vice versa. The short code serves as a key to retrieve the original long URL.
- Symbol Tables in Interpreters/Compilers: For fast lookup of variable names, function names, etc., and their associated properties (type, scope).
✅ Summary: Instant Access with Hashing
- Hashing maps keys to indices in an array (a Hash Table) for average O(1) operations.
- A hash function computes the hash value.
- Collisions (multiple keys mapping to the same index) are handled primarily by Separate Chaining (linked lists at each bucket) or Open Addressing (probing for an empty slot).
- The Load Factor determines when Rehashing (resizing the table) is necessary to maintain performance.
- Hash tables are widely used for dictionaries, caches, and quick lookups, but do not maintain order.
📤 Coming Next: Disjoint Set Union (Union-Find, Path Compression)
Our next chapter will explore the **Disjoint Set Union (DSU)** data structure, also known as Union-Find. This powerful structure is used to manage a collection of disjoint (non-overlapping) sets and supports two primary operations: efficiently determining which set an element belongs to (Find) and merging two sets into one (Union). We'll also cover crucial optimizations like path compression and union by rank/size that make DSU operations nearly constant time on average. It's crucial for problems involving connected components, cycles in graphs, and network connectivity.