A few days ago, Heinz Kabutz tweeted:
"Computational time complexity trumps brute force parallelism" — Heinz Kabutz (@heinzkabutz) December 16, 2016
Most of his 7000 Twitter followers are programmers. But the tweet also automatically appeared on his Facebook timeline, where it caused a lot of confusion among his non-programmer friends. He then explained it in non-technical terms to his aunts and friends far and wide. Since this is very relevant to anybody who writes computer programs, irrespective of the language used, we thought we'd make a tutorial out of it. Here is the text from his Facebook page:
OK, for my non-CS friends: Imagine that you want to find a name in the phone book (yeah, they used to print these out). And imagine that the only way you knew how to do that was to start with the first name and keep on going until you either found the name or reached the end. We call this "linear". So if you are in my village Chorafakia, you'll get through that list pretty quickly. But if you are in Cape Town, it could take you a long time.
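In code, the linear search looks like this — a minimal sketch, assuming our "phone book" is simply an array of names:

```java
public class LinearSearch {
    // Start at the first entry and keep going until we find the
    // name or run out of entries - this is the "linear" search.
    static int linearSearch(String[] phoneBook, String name) {
        for (int i = 0; i < phoneBook.length; i++) {
            if (phoneBook[i].equals(name)) {
                return i; // found it at position i
            }
        }
        return -1; // reached the end without finding the name
    }

    public static void main(String[] args) {
        String[] phoneBook = {"Anna", "Heinz", "Maria", "Nikos", "Zoe"};
        System.out.println(linearSearch(phoneBook, "Nikos"));  // 3
        System.out.println(linearSearch(phoneBook, "Dmitri")); // -1
    }
}
```

In the worst case (the name is last, or missing) we look at every single entry, which is why a big phone book takes so long.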
Now, imagine that you didn't have all day to search for the name. How could you speed this up? Well, you could get a bunch of friends to help. Each takes a part of the phone book and goes through it. You would agree that it would go quicker, right? You might lose some friends in the process though. The "computational time complexity" is still linear, but we have tried to make things faster with "brute force parallelism". The speedup is at best equal to the number of friends, minus the time it takes to get everyone together, tear up the phone book, explain what to do, etc. The more friends, the more trouble you'll have coordinating everything.
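One way to sketch the "friends" approach in Java is with a parallel stream, where the common fork/join pool plays the role of the friends, each scanning its own chunk of the array. This is an illustrative sketch, not the only way to parallelize a search:

```java
import java.util.stream.IntStream;

public class ParallelSearch {
    // Brute force parallelism: the work is still linear overall,
    // we just share it out among several threads ("friends").
    static int parallelSearch(String[] phoneBook, String name) {
        return IntStream.range(0, phoneBook.length)
                .parallel() // split the index range across threads
                .filter(i -> phoneBook[i].equals(name))
                .findAny()  // any match will do; order doesn't matter
                .orElse(-1);
    }

    public static void main(String[] args) {
        String[] phoneBook = {"Anna", "Heinz", "Maria", "Nikos", "Zoe"};
        System.out.println(parallelSearch(phoneBook, "Maria")); // 2
    }
}
```

Note that for a tiny array like this, the coordination overhead far outweighs the parallel speedup — exactly the "getting everyone together" cost described above.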
You're probably wondering why anyone would go sequentially through a phone book, when everyone knows that they are ordered alphabetically? Well, yes. And because the book is sorted, we can do something called a "binary search". Here, we divide the phone book in half and see whether the name is on the left or on the right. If it's on the right, we divide the right portion in half again. We keep on going this way, and we can find the name in a maximum of log2(n) lookups.
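Here is a short sketch of that halving process, assuming the phone book array is sorted alphabetically (in practice you could also just call `java.util.Arrays.binarySearch()`):

```java
public class BinarySearch {
    // Binary search: halve the remaining range on every lookup.
    // Requires the phone book to be sorted, which it is.
    static int binarySearch(String[] phoneBook, String name) {
        int low = 0, high = phoneBook.length - 1;
        while (low <= high) {
            int mid = (low + high) >>> 1; // middle of the current half
            int cmp = phoneBook[mid].compareTo(name);
            if (cmp < 0) {
                low = mid + 1;  // name is in the right half
            } else if (cmp > 0) {
                high = mid - 1; // name is in the left half
            } else {
                return mid;     // found it
            }
        }
        return -1; // range shrank to nothing - name is not there
    }

    public static void main(String[] args) {
        String[] phoneBook = {"Anna", "Heinz", "Maria", "Nikos", "Zoe"};
        System.out.println(binarySearch(phoneBook, "Heinz")); // 1
        System.out.println(binarySearch(phoneBook, "Bob"));   // -1
    }
}
```

Each iteration of the loop discards half of the remaining entries, which is where the log2(n) bound comes from.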
Let's say that the phone book has one million names in it. With our first one-person linear search, you would have to look at one million names in the worst case. With our parallel solution (using your poor friends to help), each person has to look at 1000000/numberOfFriends names. So say you get 10 people together, each one looks at one hundred thousand names. If you get 1000 people, everyone only needs to look at a thousand names, but it's going to be expensive getting everyone together at the same time, and tearing the phone book into 1000 pieces is also not easy. The last solution is the binary search, where we divide in half every time. Even with a phone book of one million entries, you need to look at a *maximum* of 20 names. Amazing, isn't it? That's the power of finding a better "algorithm" to solve a problem.
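We can let Java do the arithmetic for us. The program below prints the worst-case lookup counts for the three approaches, using the counts of friends from the paragraph above:

```java
public class LookupCounts {
    public static void main(String[] args) {
        int n = 1_000_000; // entries in the phone book

        // Linear search, alone: worst case is every entry.
        System.out.println("linear, alone:        " + n);

        // Linear search shared among friends: n / numberOfFriends each.
        System.out.println("linear, 10 friends:   " + n / 10);
        System.out.println("linear, 1000 friends: " + n / 1000);

        // Binary search: number of halvings until one entry remains,
        // i.e. ceil(log2(n)) - for one million that is 20,
        // since 2^20 = 1,048,576.
        int binary = 32 - Integer.numberOfLeadingZeros(n - 1);
        System.out.println("binary search:        " + binary);
    }
}
```

Running it confirms the numbers in the text: 1,000,000 versus 100,000 versus 1,000 versus just 20 lookups.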
Would you like to receive our monthly Java newsletter, with lots of interesting tips and tricks that will make you a better Java programmer?