Database Selection & Design (Part VIII)
— Indexing —
Indexing improves database performance by reducing the number of disk accesses needed when a query is processed. It is a data structure technique used to quickly locate and access data in a database. This comes at the cost of additional writes and storage space for the index information. Indexing can be single-level or multi-level.
Before going into indexing, we need to understand how data is stored on a hard disk. Disk storage is organized into sectors and tracks.
In the diagram above, the wedge-shaped part is a sector and the circular part is a track. The area where the two intersect is called a block. The offset comes into play when you want to read a particular byte within a block.
Byte = Block Address (Sector # and Track #) + Offset
In a hard disk, the platter's rotation on the spindle brings different sectors under the head, and the movement of the head arm changes tracks. The data you store in your database is written to one or more blocks, depending on the size of your tuple and of your disk block. When your table has multiple rows, multiple blocks are used to store the information. For example, suppose you have an order service table defined as below:
This means each row (aka tuple) in your table takes 128 B. With a block size of 512 B, each block can store 512/128 = 4 rows. If the table has 100 rows, you need 100/4 = 25 blocks to store the entire table.
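The block arithmetic above can be sketched as follows (row size, block size, and row count come from the example; the helper name is mine):

```python
def blocks_for_table(num_rows, row_size, block_size):
    """Blocks needed to store a table, given fixed-size rows packed per block."""
    rows_per_block = block_size // row_size  # rows that fit in one block
    return -(-num_rows // rows_per_block)    # ceiling division for a partial last block

# Values from the order-table example: 128 B rows, 512 B blocks, 100 rows.
print(512 // 128)                       # 4 rows per block
print(blocks_for_table(100, 128, 512))  # 25 blocks
```

The ceiling division matters once the row count stops dividing evenly: 101 rows would already need a 26th block.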
Now, when we try to read the stored information, in the worst case we have to scan all 25 blocks to reach the tuple we want. This is where indexing comes into the picture: keys are stored along with record pointers in a separate table. For every search key value in the data file, there is an index record containing the search key and a reference to the first data record with that key value. This is called a dense index.
In this case, the index table has Order ID (10 B) and a record pointer (6 B), so each index tuple is 16 B and each block can hold 512/16 = 32 index records. Since the order table above has 100 rows and a dense index has one entry per row, the index table also has 100 records and requires ⌈100/32⌉ = 4 blocks. With this structure in place, the maximum number of blocks you have to read to locate a record in the order table is 4 index blocks + 1 order table block (reached via the record pointer in the index).
If your table continues to grow, the dense index grows with it, and searching the index itself becomes time-consuming and inefficient. This is where sparse indexing comes in: a sparse index holds one record pointer per block of the dense index. In the example above, each sparse index entry points to one block of the dense index, so by the same calculation the 4 entries fit in a single block. With this, locating a record in the order table becomes very efficient.
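Extending the same arithmetic, a sketch of the dense- and sparse-index sizes (entry sizes taken from the example; ceiling division covers partial blocks):

```python
import math

BLOCK_SIZE = 512
ENTRY_SIZE = 10 + 6  # Order ID (10 B) + record pointer (6 B) = 16 B per index entry
NUM_ROWS = 100

entries_per_block = BLOCK_SIZE // ENTRY_SIZE                 # 32 entries per block
dense_blocks = math.ceil(NUM_ROWS / entries_per_block)       # dense index: one entry per row
sparse_blocks = math.ceil(dense_blocks / entries_per_block)  # sparse index: one entry per dense block

print(entries_per_block, dense_blocks, sparse_blocks)  # 32 4 1
```

A lookup then touches at most 1 sparse block + 1 dense block + 1 data block, instead of scanning the whole dense index.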
When the sparse index continues to grow along with your database and dense index, you can create an additional index layer on top of it. This is called multi-level indexing. The problem with this approach is that you need to create and delete these levels as your database grows or shrinks. Instead of doing this manually, we needed a self-managing multi-level method of managing indices.
This gave way to the evolution of Search Trees. There are many types of search trees:
Binary Search Tree:
This is a rooted binary tree where each node holds one key and up to two children (commonly called left and right). The key in each node must be greater than or equal to every key stored in its left sub-tree, and less than or equal to every key stored in its right sub-tree. Search proceeds from the root toward the leaves. In this convention a leaf is a bottom-most node that carries no key and no children, so every internal node has degree 2 (two children). Special attention is needed to keep the tree well balanced: an unbalanced tree can be very inefficient. In the most extreme case, for example when all left sub-trees are empty, the tree degrades into a singly linked list.
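A minimal sketch of the degenerate case described above, where sorted inserts turn the tree into a linked list (node and function names are mine; `None` stands in for the empty leaves):

```python
class Node:
    def __init__(self, key):
        self.key = key
        self.left = None
        self.right = None

def insert(root, key):
    """Standard unbalanced BST insert: smaller keys go left, others go right."""
    if root is None:
        return Node(key)
    if key < root.key:
        root.left = insert(root.left, key)
    else:
        root.right = insert(root.right, key)
    return root

def height(root):
    if root is None:
        return 0
    return 1 + max(height(root.left), height(root.right))

root = None
for k in [1, 2, 3, 4, 5, 6, 7]:  # sorted inserts: every key goes right
    root = insert(root, k)
print(height(root))  # 7 -- the tree is effectively a singly linked list
```

The same seven keys inserted in the order 4, 2, 6, 1, 3, 5, 7 produce a tree of height 3, which is why self-balancing matters.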
Multi Way (m — way) Search Tree:
This is an extension of the binary search tree in which a node can hold more keys and children. Unlike a BST, an m-way node can have more than 2 children: the maximum number of children is referred to as 'M', so the maximum number of keys is M-1. The tree in the example below is called a 3-way search tree, since each node has up to 3 (M) children and 2 (M-1) keys. In this method, the node structure contains keys, child pointers, and record pointers. The issue with the m-way search tree is that the creation process has no guiding rules: one can decide to create children before filling up all the keys in a node. So it has the potential of degrading into a linear search if not governed.
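A sketch of the node structure and lookup just described, for a small 3-way tree (class and function names are mine; records stand in for record pointers):

```python
class MWayNode:
    """One node of an m-way search tree: sorted keys, one record pointer per key,
    and up to m child links (child i holds keys smaller than keys[i])."""
    def __init__(self, keys, records, children=None):
        self.keys = keys
        self.records = records
        self.children = children or []

def search(node, key):
    if node is None:
        return None
    for i, k in enumerate(node.keys):
        if key == k:
            return node.records[i]  # found: return the record pointer
        if key < k:
            child = node.children[i] if i < len(node.children) else None
            return search(child, key)
    # key is greater than every key in this node: descend into the last child
    child = node.children[len(node.keys)] if len(node.children) > len(node.keys) else None
    return search(child, key)

# A tiny 3-way tree: the root holds 2 keys (M-1) and 3 children (M).
root = MWayNode([20, 50], ["rec20", "rec50"], [
    MWayNode([10], ["rec10"]),
    MWayNode([30], ["rec30"]),
    MWayNode([60], ["rec60"]),
])
print(search(root, 30))  # rec30
print(search(root, 99))  # None
```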
B — Tree: The issue mentioned above was resolved by adding some governing principles. A B-Tree is a multi-way search tree that satisfies the following properties:
- Every node has at most m children
- Every non-leaf node (except root) has at least ⌈m/2⌉ child nodes
- The root has at least two children if it is not a leaf node
- A non-leaf node with k children contains k − 1 keys
- All leaves appear in the same level and carry no information
- The tree is built from the bottom up: keys are inserted at the leaf level and splits propagate upward
Since the build process starts at the leaf nodes and works its way up, the keys stay evenly distributed and the tree stays efficient. In this method, every key carries a child pointer and a record pointer at every level of the index.
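The occupancy bounds implied by the properties above can be computed for any chosen order m (the function name is mine):

```python
import math

def btree_node_bounds(m):
    """Key/child bounds for a non-root internal node of an order-m B-Tree:
    at most m children and m-1 keys; at least ceil(m/2) children and one key fewer."""
    min_children = math.ceil(m / 2)
    return {
        "max_children": m,
        "max_keys": m - 1,
        "min_children": min_children,
        "min_keys": min_children - 1,
    }

print(btree_node_bounds(5))
# {'max_children': 5, 'max_keys': 4, 'min_children': 3, 'min_keys': 2}
```

So an order-5 node is always at least roughly half full, which is what keeps lookups logarithmic.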
B+ — Tree: This is very similar to the B-Tree, except that every key held in an internal node is repeated in the subtree below it, so all keys eventually appear in the leaves. Record pointers exist only at the leaf nodes, not at any other level. Since all keys are present in the leaves, the leaves are chained into a linked list and act like a dense index, while the higher levels act like a sparse index.
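A sketch of why the linked leaves matter: a range scan can walk the leaf chain without revisiting the upper levels (class and function names are mine; a full B+ tree would also have internal nodes to find the first leaf):

```python
class Leaf:
    """One B+ tree leaf: sorted keys, their record pointers, and a link to the next leaf."""
    def __init__(self, keys, records):
        self.keys = keys
        self.records = records
        self.next = None  # the leaf chain that makes the leaves act like a dense index

def range_scan(first_leaf, lo, hi):
    """Collect record pointers for lo <= key <= hi by walking the leaf chain."""
    out = []
    leaf = first_leaf
    while leaf is not None:
        for k, r in zip(leaf.keys, leaf.records):
            if k > hi:
                return out  # keys are sorted across the chain, so we can stop
            if k >= lo:
                out.append(r)
        leaf = leaf.next
    return out

a = Leaf([1, 3], ["r1", "r3"])
b = Leaf([5, 7], ["r5", "r7"])
a.next = b
print(range_scan(a, 3, 5))  # ['r3', 'r5']
```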
Bloom filters: This is a probabilistic data structure designed to tell you, rapidly and memory-efficiently, whether an element is present in a set. "Probabilistic" means it tells us that the element either is definitely not in the set or may be in the set.
Let us take an example. We need to find out whether an inventory item is present in the database before we even start performing database operations on it.
Step #1: Take the string and convert that into a numeric value using a function
- E.g.: take a simple function that converts a string to a number using straight number mapping: a maps to 1, b maps to 2, and so on. The inventory item “abc” translates to 123
Step #2: Use the numerical value and identify the hash value by using different hash functions. Say, for this example, we use 2 hash functions
- Apply two hash functions on this numeric value. Hash # 1(123) = 7 & Hash # 2(123) = 9 in a hash table size of 10
Step #3: Find those locations in the hash table and turn their indicator bits ON
Similarly, other inventory items are converted and marked in this table. When a search comes in, the same conversion is performed and the corresponding bits in the hash table are checked for ‘1’. If we search for “bcd” and its hash values translate to 5 & 7, the algorithm checks those bits: 5 is unmarked and 7 is marked. So the filter can definitively say that “bcd” is not present in the index.
Let us assume we have another inventory item “cde” that hashes to 4 & 7. The table will now look like the one below
When we search for an item “def” that hashes to 4 & 9, the filter looks it up and finds both 4 and 9 marked as ‘1’. Note that 4 was marked by “cde” and 9 by “abc”; the actual item “def” is not present. That is why this filter is always probabilistic for “present” answers and definitive for “not present” answers.
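The steps above can be sketched as a minimal bloom filter (class and method names are mine; it uses salted SHA-256 in place of the toy string-to-number hashes, so the bit positions differ from the worked example):

```python
import hashlib

class BloomFilter:
    def __init__(self, size=1000, num_hashes=2):
        self.size = size
        self.num_hashes = num_hashes
        self.bits = [0] * size  # the hash table of indicator bits

    def _positions(self, item):
        """Derive num_hashes bit positions by salting one hash function."""
        for i in range(self.num_hashes):
            digest = hashlib.sha256(f"{i}:{item}".encode()).hexdigest()
            yield int(digest, 16) % self.size

    def add(self, item):
        for pos in self._positions(item):
            self.bits[pos] = 1  # turn the indicator bit ON

    def might_contain(self, item):
        # All bits set -> "may be in the set"; any bit clear -> definitely not.
        return all(self.bits[pos] for pos in self._positions(item))

bf = BloomFilter()
bf.add("abc")
bf.add("cde")
print(bf.might_contain("abc"))  # True
```

Note the asymmetry: a True answer may be a false positive (as with “def” above), but a False answer is always correct, because adding an item can never leave one of its bits unset.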
In the non-relational world, many databases such as HBase and Cassandra are optimized to check a bloom filter before executing any search queries. My research shows that even this site, medium.com, uses bloom filters to validate recommendations for its readers. You can also implement multi-level filters for hierarchical validations in multi-hit-wonder use cases.
Link to the next part in this series: