The Swift code below goes through an array of words (Strings) and counts unique words and their frequencies, ignoring the words listed in wordsToFilter (another array of Strings). The resulting dictionary (a map data structure of word/count pairs) is then sorted by word frequency in descending order. Finally, the top 100 most frequent words are printed out.
```swift
var words = [String]()
var wordsToFilter = [String]()
...
var counter = 1
words.filter { word in
    word.count >= 2 && !wordsToFilter.contains(word)
}.reduce(into: [:]) { counts, word in
    counts[word, default: 0] += 1
}.sorted(by: { lhs, rhs in
    lhs.value > rhs.value
}).prefix(topListSize).forEach { key, value in
    print("\(String(counter).rightJustified(width: 3)). \(key.leftJustified(width: 20, fillChar: ".")) \(value)")
    counter += 1
}
```
With my test book file of 17.1 MB, containing 2 378 668 words of which 97 115 are unique, the code above takes 1.226099 secs to process the file. The time includes reading the text files and splitting their contents into the word arrays. For details of the measuring, see the end of this post.
Could it be faster if using the Swift async tasks? Let’s try and see!
Below is the code doing the same in eight async tasks. Code for printing out the result is omitted, shown later below.

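The post's listing is shown as a screenshot; as a rough sketch of the same approach, the task-group version can look like the following (the function name and parameters here are my own illustration, and the lines will not match the line numbers discussed next):

```swift
// A sketch, not the post's exact listing: count word frequencies over
// eight slices of the array using a Swift task group.
func countWordsConcurrently(_ words: [String],
                            filtering wordsToFilter: Set<String>,
                            tasks taskCount: Int = 8) async -> [String: Int] {
    guard !words.isEmpty else { return [:] }
    // Divide the array into taskCount slices of (roughly) equal size.
    let sliceSize = (words.count + taskCount - 1) / taskCount
    return await withTaskGroup(of: [String: Int].self) { group in
        for start in stride(from: 0, to: words.count, by: sliceSize) {
            let slice = words[start..<min(start + sliceSize, words.count)]
            group.addTask {
                // Each task reads only its own slice; no shared mutable state.
                var counts = [String: Int]()
                for word in slice where word.count >= 2 && !wordsToFilter.contains(word) {
                    counts[word, default: 0] += 1
                }
                return counts
            }
        }
        // Merge the partial dictionaries as the tasks finish.
        var wordCounts = [String: Int]()
        for await partial in group {
            wordCounts.merge(partial) { $0 + $1 }
        }
        return wordCounts
    }
}
```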
In the code, the slice size is first calculated at line 66. For example, if the array has 1000 words, it is divided into eight slices, each containing 125 words. Then, in a for loop, a task group with eight async tasks executes (lines 79-85). Each async task calculates the word frequencies of its own slice of the array and returns a dictionary to the task group. The dictionary contains the word/frequency-count pairs for that slice.
No thread locking for data synchronisation is needed, since all concurrent tasks only read from the array, and each of them reads from its own slice.
In lines 88-96, the task group awaits the tasks' completion. As each task finishes, the task group merges the partial-result dictionary it returned into the task group's wordCounts dictionary. This merging happens on a single thread, so no data corruption can occur: the async tasks themselves never write to the final dictionary that collects all the word/frequency pairs.
Finally the result is sorted and printed out from the wordCounts dictionary, after the task group has merged the results from the tasks:

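This final step is again sequential. A minimal sketch (with toy data in place of the real merged wordCounts, and without the post's rightJustified/leftJustified padding helpers):

```swift
// Toy stand-in for the dictionary merged from the task group's results.
let wordCounts = ["banana": 5, "apple": 3, "cherry": 1]
let topListSize = 100

// Sort by frequency, descending, and keep the top entries.
let topWords = wordCounts
    .sorted { $0.value > $1.value }
    .prefix(topListSize)

// Print a ranked list of word / count pairs.
for (index, entry) in topWords.enumerated() {
    print("\(index + 1). \(entry.key) \(entry.value)")
}
```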
Why the semaphore? This is a console app, and after the async tasks were launched, the main thread would continue on its way. What would happen in the end? The main thread would run past the end of the function, return to the main function, and finish and quit the process, while the async tasks were still executing. Not good.
So to avoid that, 1) the main thread stops to wait on the semaphore, and 2) the task group signals the same semaphore when it has finished working. The main thread then proceeds to finish.
So, is this any faster? Any use at all in having this more complicated code?
Executing with the same files as above, the execution now takes 0.694983 secs. That is 57% of the original execution time of the single threaded implementation!
Though the absolute times and time differences are not large here, the relative difference is very significant. Consider data sizes hundreds or thousands of times larger than this test file, or this processing done repeatedly over thousands of files, continuously. Then the difference would be significant in absolute time too, not only relatively, even if the individual files were smaller.
When you take a look at the time profiler view in Xcode Instruments, it is easy to see where the speed difference comes from:

As you can see, all the work that was earlier done in sequence is now executed in parallel, asynchronously.
So the answer to the question “Could it be faster if using the Swift async tasks?”, is: yes, absolutely.
The measurements were taken on an Apple Mac Mini M1 (Apple Silicon) with 16GB of RAM and 1 TB of SSD storage.
Why slice the array into eight? The M1 has eight processor cores, and each one is put to work. As you can see, the OS and other processes also need the cores, so they are not running this process' threads at 100% all the time.
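Rather than hard-coding eight, the task count could also be derived from the machine the code runs on (an alternative, not what the post's code does):

```swift
import Foundation

// Query the number of active processor cores at runtime.
let taskCount = ProcessInfo.processInfo.activeProcessorCount
print("Using \(taskCount) concurrent tasks")
```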
The code can be found in my BooksAndWords repository on GitHub. The single-threaded implementation is in the Functional directory, while the async one is in the FunctionalParallel directory.