Fast Index Order in Binary Search Tree

So, you can use a Binary Search Tree (BST) to store and quickly access key-value pairs. The key determines the position of the value in the tree: keys smaller than a node's key go into its left subtree, and larger keys go into its right subtree.

A sample unbalanced BST with 13 nodes. The key in the BST is the person's name, in ascending order.

Finding an element by key is an O(log n) operation. Starting from the root node and comparing keys, you can choose which subtree to continue searching in. If the BST is balanced, you need at most O(log n) comparisons to reach the key-value pair you are looking for. For example, searching for Jouni Kappalainen takes four comparisons from the root.

The tree contents shown in a Java Swing table view, in BST key order.

Note that the number in each tree node picture above is the number of children of that specific node. For example, Kevin Bacon has zero children, Samir Numminen has five, and the root node has 12.

What if you want to treat the BST as an ordered collection, with each item accessible by index, as you would with an array? For example: “give me the 6th element in the tree (the element at index 5) when navigating in-order, from the smallest to the largest key”. In the sample tree above, Kappalainen would be at index 5 (zero-based indexing). Kevin Bacon's index would be zero, since it is the first element in-order, and the root node would be at index 1.

The obvious thing to do would be to traverse the tree in-order, starting the count from the leftmost node (Bacon) at index 0 (zero) and counting upwards from there until you reach index 5 (Kappalainen).

Sample tree with in-order indices added in blue. Compare this to the GUI table order and you see the indices are correct.
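Sketched in Swift with a hypothetical minimal node type (not the course code), the naive in-order approach looks like this:

```swift
// Naive index lookup: traverse the whole tree in-order, counting nodes
// until the wanted index is reached. O(n) in the worst case.
final class BSTNode {
   let key: String
   var left: BSTNode?
   var right: BSTNode?
   init(_ key: String) { self.key = key }
}

/// Returns the node at the given zero-based in-order index, or nil.
func elementAt(_ root: BSTNode?, _ index: Int) -> BSTNode? {
   var count = 0
   var result: BSTNode? = nil
   func inorder(_ node: BSTNode?) {
      // Stop descending once the result has been found.
      guard let node = node, result == nil else { return }
      inorder(node.left)
      if result == nil {
         if count == index { result = node }
         count += 1
      }
      inorder(node.right)
   }
   inorder(root)
   return result
}
```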

This works, but the time complexity of this operation is O(n). If you have, say, a million or more elements in the tree, that is a lot of steps, especially if you need to do this repeatedly. Is there a faster way?

The faster way is to use the child counts stored in the tree nodes to compute the index of the current node (starting from the root), and then use that information to decide whether to continue into the left or the right subtree. This makes finding an element by index an O(log n) operation, considerably faster than the O(n) linear in-order traversal.

For example, suppose you wish to find the element at index 5. Since the root node's left child has zero children, the root node's index is zero plus one, which equals one. So already at the root you know that the element at index 5 must be in the right subtree; there is no point in navigating the left subtree.

Also, when going to the right child from the root, you can calculate the index of that node, Töllikkö. Since Töllikkö comes after the root node, add one to the root's index, making the current index two (2). Then, since all nodes in Töllikkö's left subtree are at earlier indices, add the child count of Töllikkö's left child, which is 8, to the current index, giving ten (10). And since the left child node itself must be counted too, not just its children, add another one (1), so the current index is now 11. The index of Töllikkö is therefore 11.

Now we know that since we are finding the index 5, it must be in the left subtree of Töllikkö, so we continue there and ignore Töllikkö’s right subtree.

Moving then to Kantonen, we subtract one (1) from the current index, since we moved to earlier indices from Töllikkö; the current index is now 10. This is not yet Kantonen's index. We also need to subtract the number of children of Kantonen's right child plus 1 from the current index, getting 10 − 6, which gives 4. That is Kantonen's index.

Since 4 is smaller than 5, we know we need to proceed to the right subtree of Kantonen to find index 5. Advancing there, to Numminen, we add 1 to the current index, since we are going “forward” to bigger indices, plus the child count of Numminen's left child plus 1 (for the child node itself), so the current index (Numminen's) is now 6.

Since 5 < 6, we know to continue searching in Numminen's left subtree, so there we go (to Kappalainen), subtracting one (1) from the current index, which is now 5. Earlier, when moving left (to Kantonen), we also subtracted the size of the node's right subtree. But since Kappalainen has no right subtree, there is nothing to subtract.

Now the current index is 5, the same index we are searching for. We can stop the search and return Kappalainen as the element at index 5 in the tree.

Using this principle of knowing the count of children in each tree node, we can proceed directly into the correct subtree, either left or right, thereby replacing the linear in-order search with a faster logarithmic one. With large data sets, the difference is considerable.
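To make the principle concrete, here is a generic sketch in Swift (illustrative names; not the course's Java code or the pseudocode given to students). The walkthrough above tracks the absolute index of the current node; an equivalent and slightly shorter formulation tracks the remaining relative index, subtracting as it descends. The sketch also shows an insert that keeps the child counts up to date:

```swift
// Each node stores the number of its descendants, so subtree sizes are
// available in O(1) when selecting by in-order index.
final class CountedNode {
   let key: String
   var childCount = 0               // number of descendants, excluding self
   var left: CountedNode?
   var right: CountedNode?

   init(_ key: String) { self.key = key }

   /// Size of the subtree rooted at this node.
   var subtreeSize: Int { childCount + 1 }
}

/// Returns the node at the given zero-based in-order index, or nil.
func select(_ root: CountedNode?, _ index: Int) -> CountedNode? {
   var node = root
   var i = index
   while let current = node {
      let leftSize = current.left?.subtreeSize ?? 0
      if i < leftSize {
         node = current.left          // target lies in the left subtree
      } else if i == leftSize {
         return current               // current node is the i-th element
      } else {
         i -= leftSize + 1            // skip left subtree and current node
         node = current.right
      }
   }
   return nil                          // index out of range
}

/// Inserts a key, updating childCount on every node along the search path.
func insert(_ root: CountedNode?, _ key: String) -> CountedNode {
   guard let node = root else { return CountedNode(key) }
   if key < node.key {
      node.left = insert(node.left, key)
   } else if key > node.key {
      node.right = insert(node.right, key)
   } else {
      return node                      // duplicate key: nothing to do
   }
   node.childCount = (node.left?.subtreeSize ?? 0)
                   + (node.right?.subtreeSize ?? 0)
   return node
}
```

In a balanced tree each step of `select` discards one whole subtree, which gives the O(log n) bound.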

While demonstrating this to students using the course project app TIRA Coders, we saw that after loading 1 000 000 coders into the app, the O(n) index finding operation (required by the Java Swing table view) made the app quite slow and sluggish. When scrolling, the app kept scrolling for a long time after I lifted my hand off the mouse wheel. The Swing table view constantly fetched the objects at the visible indices while scrolling, and since each fetch was an O(n) operation, everything was really, really slow.

When replaced with the O(log n) operation described above, using the app was a breeze even with the same one million data elements in the BST; there was no lag in scrolling the table view or in selecting rows.

So, if you need to access BST elements by index, consider using the algorithm described above. Obviously, for this to work, you need to update the child counts in parent nodes when adding nodes to the BST. If your BST supports removing nodes or balancing the tree, you need to update the child counts in those situations too.

(I won’t share any code yet, since the students are currently implementing this (optional) feature with the given pseudocode in their BST programming task.)

Swift tree structures: value types, enums and classes

What if you need to define a tree-like structure of data elements in Swift? For example, you might use a Binary Search Tree to keep track of the unique words in a book file:

A tree structure where each node in the tree has two optional child nodes, left and right. Each node holds two values: the word found in the book, and the count of how many times the word appeared in the book.
An example of a binary search tree with unique word counts from a text file.

Since value types are often preferred in Swift, you could try a struct. The Node struct contains the word, the count of it in the book, a key to manage the tree, and optional left and right child nodes. This, however, does not compile:

struct Node {
   let key: Int
   let word: String
   var count: Int

   // error: Value type 'Node' cannot have a stored property that recursively contains it
   var leftChild: Node?
   var rightChild: Node?

   init(_ word: String) {
      key = word.hashValue
      self.word = word
      count = 1
   }
}

As the error message “Value type ‘Node’ cannot have a stored property that recursively contains it” says, a struct cannot recursively contain itself in Swift. A Node struct in the tree cannot hold its left and right child nodes as stored properties.

What to do? You have (at least) two options:

  1. Use the enum type with associated values.
  2. Use classes.

With a Swift enum, you can define two cases for the node: either a) the node is Empty (there is no node), or b) it is a Node with associated values: the word, the word count, the key used to arrange the nodes in the tree (the word's hash value), and the left and right subtrees:

indirect enum EnumTreeNode {
   case Empty
   case Node(left: EnumTreeNode, hash: Int, word: String, count: Int, right: EnumTreeNode)

   init() {
      self = .Empty
   }

   init(_ word: String) {
      self = .Node(left: .Empty, hash: word.hashValue, word: word, count: 1, right: .Empty)
   }

   // `Visitor` is a protocol defined elsewhere (see the linked repository).
   func accept(_ visitor: Visitor) throws {
      try visitor.visit(node: self)
   }
}
A tree node as an enumeration with associated values.

When defining recursive enumerations, you must use the indirect keyword to indicate recursion in the enumeration.
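To see how such an enum tree is used, here is a hypothetical insert for a simplified variant that orders nodes by the word itself rather than its hash value. Note that a value-type node cannot be mutated in place, so inserting returns a new tree value, rebuilding every node on the search path:

```swift
// Simplified enum-based word-count tree (illustrative, ordered by the word
// itself instead of a hash key).
indirect enum WordTree {
   case empty
   case node(left: WordTree, word: String, count: Int, right: WordTree)

   /// Returns a new tree with the word inserted or its count incremented.
   /// Every node on the search path is rebuilt.
   func inserting(_ w: String) -> WordTree {
      switch self {
      case .empty:
         return .node(left: .empty, word: w, count: 1, right: .empty)
      case let .node(left, word, count, right):
         if w < word {
            return .node(left: left.inserting(w), word: word, count: count, right: right)
         } else if w > word {
            return .node(left: left, word: word, count: count, right: right.inserting(w))
         } else {
            return .node(left: left, word: word, count: count + 1, right: right)
         }
      }
   }

   /// Returns how many times the word was inserted (0 if absent).
   func count(of w: String) -> Int {
      switch self {
      case .empty:
         return 0
      case let .node(left, word, count, right):
         if w < word { return left.count(of: w) }
         if w > word { return right.count(of: w) }
         return count
      }
   }
}
```

The path-rebuilding in `inserting` is exactly the value-type copying that shows up in the measurements below.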

The other option is to use classes, which are reference types in Swift:

final class TreeNode {
   let key: Int
   let word: String
   var count: Int

   var left: TreeNode?
   var right: TreeNode?

   init(_ word: String) {
      self.key = word.hashValue
      self.word = word
      count = 1
   }
}

You can read more about Swift structs and classes here if you are unfamiliar with them.

Check out the full implementation of both class based and enum based solutions from this GitHub repository.

So, are there any other differences between the enum and class implementations than the differences in code?

Let's check. The first run below uses the enum implementation, and the second one uses the class-based implementation.

> swift build -c release
> .build/release/bstree ../samples/tiny.txt ../samples/ignore-words.txt 100
...
Count of words: 44, count of unique words: 32
>>>> Time 0.0008840560913085938 secs.


> swift build -c release
> .build/release/bstree ../samples/tiny.txt ../samples/ignore-words.txt 100
...
Count of words: 44, count of unique words: 32
>>>> Time 0.0009189844131469727 secs.

So far so good. Both implementations work (not all output is shown above) and seem to be quite fast. The tiny text file contains only 44 words, of which 32 are unique.

But look what happens when executing both implementations with a larger 16 MB file with 2674582 words, of which 97152 are unique:

> .build/release/bstree ../samples/Bulk.txt ../samples/ignore-words.txt 100
...
Count of words: 2674582, count of unique words: 97152
>>>> Time 16.52852702140808 secs.


> .build/release/bstree ../samples/Bulk.txt ../samples/ignore-words.txt 100
Count of words: 2674582, count of unique words: 97152
>>>> Time 3.5031620264053345 secs.

You can see that the enum-based implementation took 16.5 secs to process the same file that the class-based implementation processed in only 3.5 secs. That is a significant difference. Why does this happen?

Swift enums are value types. When the algorithm reads a word from the book file, it checks whether the word already exists in the tree. If it does, the old enum value is replaced with a new one in the tree. Since the tree is mutated, this results in copying. The Swift book says:

“All structures and enumerations are value types in Swift. This means that any structure and enumeration instances you create—and any value types they have as properties—are always copied when they’re passed around in your code.”

— Swift Language Guide

So whenever a node in the tree is modified, tree nodes get copied. When you change the tree by adding to it, the tree is copied. With classes this does not happen. The excessive copying of tree nodes is a performance killer when handling very large data sets.
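The quoted behavior is easy to demonstrate with a few lines (hypothetical minimal types):

```swift
// Value vs reference semantics in miniature: mutating a copied struct
// leaves the original untouched, while a class reference shares one
// instance with the original.
struct ValueNode {
   var count: Int
}

final class ReferenceNode {
   var count: Int
   init(count: Int) { self.count = count }
}

var v1 = ValueNode(count: 1)
var v2 = v1              // value type: v2 is an independent copy
v2.count = 99            // v1.count remains 1

let r1 = ReferenceNode(count: 1)
let r2 = r1              // reference type: r2 points to the same instance
r2.count = 99            // r1.count is now 99 as well
```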

There is also a performance penalty in using classes: each time a reference to a class instance is copied around, the retain/release count is updated. But as you can see, the class implementation is still much faster than copying the structure with value types.

To summarize: enums are a nice way to implement recursive data structures. But if you have large data sets and/or the tree nodes are updated often, consider using classes instead.

There was an app

I wanted to make a separate, more detailed post about the item in my previous post on how the order of things may greatly influence the time performance of code.

There is an app that reads a large text file to find out how many unique words it has. The app also reports the top 100 unique words in the file, and their counts.

One file had 2 735 307 words, of which unique were 99 130.

Since the programmer was clever, s/he used a binary search tree for keeping track of the unique words. The words were then transferred to a table and sorted by frequency to print out the top-100 list.

Inserting words into the binary search tree, or incrementing the appearance (frequency) counter of the word if it is already in the tree.

When measuring the time performance of the app to handle this one file, the app reported that it could do the job in 2.550396 seconds*).

When looking closely at the code inserting the word into the tree (above), one can see that a node is allocated and then freed even when the word is already in the tree, without anything useful being done with the newnode. The node is needed only when the word is not already in the tree.

So, why not move the allocation of the node to after the loop, where the node is actually needed:

Allocating the newnode only when it is actually necessary.

This small, some might say pedantic, change took 30% off the execution time of the whole app when handling the same file mentioned above. After the change, the execution time dropped to 1.867050 seconds. Repeated runs produced similar timings for both versions.

Allocating from the heap, done repeatedly, is slow.
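The original code uses C's malloc/free; purely as an illustration, the same allocate-late pattern can be sketched in Swift, where creating a class instance is likewise a heap allocation (names here are hypothetical, not the student's code):

```swift
// Word-count BST node; creating one is a heap allocation.
final class WordNode {
   let word: String
   var count = 1
   var left: WordNode?
   var right: WordNode?
   init(_ word: String) { self.word = word }
}

/// Allocate-late insert: walk the tree first, and create the node object
/// only when the word turns out to be new. An existing word costs no
/// allocation at all.
func insert(_ root: WordNode?, _ word: String) -> WordNode {
   guard let root = root else { return WordNode(word) }
   var current = root
   while true {
      if word < current.word {
         guard let next = current.left else {
            current.left = WordNode(word)    // allocate only here...
            break
         }
         current = next
      } else if word > current.word {
         guard let next = current.right else {
            current.right = WordNode(word)   // ...or here
            break
         }
         current = next
      } else {
         current.count += 1                  // existing word: no allocation
         break
      }
   }
   return root
}
```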

*) Measurements done on an Apple Mac Mini M1 2020, 16 GB RAM, 8 cores, 1 TB SSD.

Today’s thoughts

Someone mentioned the book Exercises in Programming Style by Cristina Lopes on Twitter, and I bought it. An interesting read; I can recommend it. I can bring some things from the book into my course next Fall. We should actually have a course focusing mainly on these topics but alas, we do not have one. I just need to implement in Java those things I could apply, maybe as demonstrations, since the book's examples are in Python.

Another thought I need to embed into the course material is this nice example of how the order of things may greatly influence the time performance of code. I was evaluating a student submission of a course project. A function added nodes (unique word counts of very large text files) into a binary search tree. First a node was allocated using malloc, after which the tree was searched in a loop to find where to put the node. If a node with the value already existed, the frequency count of the existing node was increased, the node just created was freed, and the function returned. If the value was not found in the tree, the newly allocated node was inserted into the place found for it. Just moving the malloc to after the loop (where the node was actually needed) got rid of 30% of the execution time of the whole app. Boom! What an improvement from such a small change. It is obvious if you have some experience, but not so for many students who are just beginning. This will be a fun demonstration…

The final thought of the day popped into my mind because of that minor piece of code having such a large effect, namely this saying:

The idiom “the straw that broke the camel’s back“, alluding to the proverb “it is the last straw that breaks the camel’s back”, describes the seemingly minor or routine action that causes an unpredictably large and sudden reaction, because of the cumulative effect of small actions.

Wikipedia