Another post about data structures, time performance, and the count-the-words-in-a-book-file-fast case I've written about before.
I did a C++ implementation of the books-and-words problem. Earlier I implemented several solutions to the problem in Swift and Java. This time I used the C++ standard library's std::map and wanted to see whether parallel processing in several threads would speed things up.
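To make the setup concrete, here is a minimal sketch of the single-threaded std::map approach. It's not the exact code from the project (which does its own tokenisation and word normalisation), just the core idea:

```cpp
#include <fstream>
#include <iostream>
#include <map>
#include <string>

// Count how often each word occurs in a file.
// std::map keeps its keys sorted, which turns out not to be needed here.
std::map<std::string, int> countWords(const std::string& path) {
    std::map<std::string, int> counts;
    std::ifstream in(path);
    std::string word;
    while (in >> word) {
        ++counts[word];   // operator[] default-constructs the count to 0 on first use
    }
    return counts;
}

int main(int argc, char** argv) {
    if (argc < 2) {
        std::cerr << "usage: wordcount <file>\n";
        return 1;
    }
    const auto counts = countWords(argv[1]);
    std::cout << counts.size() << " distinct words\n";
    return 0;
}
```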
Obviously it did. The multithreaded version ran in 74% of the single-threaded version's time: the single-threaded version processed the sample files in 665 ms, while the multithreaded version took only 491 ms. Nice!
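The multithreaded variant follows a familiar pattern. Here is a rough sketch of the idea, under the assumption that the input is split at whitespace boundaries and each thread counts into its own local map, with the partial results merged at the end:

```cpp
#include <algorithm>
#include <fstream>
#include <functional>
#include <iostream>
#include <iterator>
#include <map>
#include <sstream>
#include <string>
#include <thread>
#include <vector>

// Count the words of one chunk into a thread-local map; no locking
// is needed because each thread writes only to its own map.
static void countChunk(const std::string& chunk, std::map<std::string, int>& counts) {
    std::istringstream in(chunk);
    std::string word;
    while (in >> word) ++counts[word];
}

int main(int argc, char** argv) {
    if (argc < 2) { std::cerr << "usage: wordcount <file>\n"; return 1; }

    // Read the whole file into memory, then split it into roughly equal
    // chunks at whitespace boundaries so no word is cut in half.
    std::ifstream file(argv[1]);
    std::string text((std::istreambuf_iterator<char>(file)),
                     std::istreambuf_iterator<char>());

    const unsigned nThreads = std::max(1u, std::thread::hardware_concurrency());
    std::vector<std::string> chunks;
    std::size_t start = 0;
    for (unsigned i = 0; i < nThreads && start < text.size(); ++i) {
        std::size_t end = text.size();
        if (i + 1 < nThreads) {
            std::size_t target = std::max(start, text.size() * (i + 1) / nThreads);
            end = text.find(' ', target);
            if (end == std::string::npos) end = text.size();
        }
        chunks.push_back(text.substr(start, end - start));
        start = end;
    }

    // One local std::map per thread; merge the partial counts at the end.
    std::vector<std::map<std::string, int>> partial(chunks.size());
    std::vector<std::thread> threads;
    for (std::size_t i = 0; i < chunks.size(); ++i)
        threads.emplace_back(countChunk, std::cref(chunks[i]), std::ref(partial[i]));
    for (auto& t : threads) t.join();

    std::map<std::string, int> total;
    for (const auto& p : partial)
        for (const auto& [word, n] : p) total[word] += n;

    std::cout << total.size() << " distinct words\n";
    return 0;
}
```

Giving each thread its own map keeps the counting loop lock-free; the merge at the end is cheap compared to the counting itself.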
But then I saw in the std::map documentation that it keeps the keys sorted as elements are added to the map.
But I don't need that in my case! Keeping the keys sorted surely takes time, too, so dropping the ordering is another opportunity to optimise the time performance.
In the single-threaded implementation I changed std::map to std::unordered_map, and behold: it was now faster than the multithreaded version, at 446 ms!
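The change itself is tiny. Assuming a counting function like the sketch above, only the container type needs to change: std::unordered_map is a hash table, so it doesn't pay the cost of keeping keys sorted, and insertion/lookup is O(1) on average instead of std::map's O(log n):

```cpp
#include <fstream>
#include <string>
#include <unordered_map>

// Same counting loop as before; only the container type changes.
// std::unordered_map hashes its keys instead of keeping them sorted,
// so insertion and lookup are O(1) on average rather than O(log n).
std::unordered_map<std::string, int> countWords(const std::string& path) {
    std::unordered_map<std::string, int> counts;
    std::ifstream in(path);
    std::string word;
    while (in >> word) ++counts[word];   // same operator[] usage as with std::map
    return counts;
}
```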
So mind the map. There are many kinds, and some of them may suit your use case better than others.
For details, see the project on GitHub.