Part 4 — Data Structures and Algorithms in Plain English | Time Complexities
Hello again, so nice to see you today through the logarithmic eyes of the internet. Don’t worry, I can’t actually see you, but one thing I can see is your optimism for today’s topic, which is why I’m going to cut the chit-chat and dive right in.
In Part 3 of this series, we learned about data structures and their classifications, from which you must have realized that there is more to data storage than just arrays and objects. In that part of the series, we also talked about selecting the right data structure for a particular problem, as this allows for easier and faster operations like insertion, deletion, and search.
Keeping in mind the amount of time your algorithm will take to execute a defined set of instructions is a pivotal part of writing clean and efficient algorithms. And in order for us to efficiently calculate this time, several methods have been considered, and one has been chosen to rule above the others. This method is what we generally refer to as Time Complexity.
Before we give it a definition, let’s consider a few other things you might possibly think of when trying to measure the running time of an algorithm.
1. Execution time
If you decide to measure the running time of an algorithm by the amount of time it takes to run, then you will face huge inconsistency problems, as the same algorithm doesn’t take the same amount of time to run even on the same computer. This means that if you write a computer algorithm to add two numbers together and it takes 3ms (three milliseconds) to run, running that algorithm on your computer a second time doesn’t guarantee another 3ms execution time, as it might even turn out to take less time.
The execution time depends on the computer’s specifications and the language used to write the algorithm. It also depends on other factors, which is why we can’t use this method for measuring the running time of algorithms.
2. Executed statements
Now, what about the number of statements an algorithm executes before stopping? Right away, you start to see that this will result in huge variations, as an algorithm I write in 3 lines can be written in 5 lines by another programmer. And with different programming languages taking on different syntaxes, this measure would also depend on the language being used.
We want something that isn’t dependent on hardware or language, a method that only considers the input in order to properly calculate the time taken to produce the output, and this is what Time Complexity solves. By definition,
Time Complexity is the amount of time an algorithm takes to run as a function of the size of the input.
Putting it in even simpler terms, time complexity describes how long it takes for an algorithm to run by only considering the number of iterations the algorithm performs on the input data. This means that, supposing an algorithm takes 2ms to execute one iteration, it will take 2ms × 20 when the input data is increased to 20 items. This is independent of computer or programming language, as we only consider the input.
Basically, 2ms is just an assumption and will vary from computer to computer, considering that a slower computer will take longer for each iteration. Using our knowledge of algebra, we can say that the amount of time taken to run on 20 data inputs will be c × 20, or 20c, where c depends on the computer’s speed. See how flexible this is? With this method, anyone can easily measure the running time of an algorithm without having to worry about the computer that will actually run it.
In the example above, we created a generalization where we defined the running time of the algorithm with 20 data inputs as 20c. If the input becomes 50, then the running time becomes 50c. Easy, right? Absolutely! But notice that the data input will always vary (change), and as such we can change our 50c to nc, where n is the number of inputs (the input size). This is easier to remember, and we can give anyone this simple algebraic equation for them to accurately estimate the time their algorithm will take to run as their input size changes. The little sketch below shows the idea in action.
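Here is a rough sketch in JavaScript (the loop body and the input sizes are my own assumptions, purely for illustration). The absolute times will differ from machine to machine, and that difference is the c. But the ratio between the two runs should roughly match the ratio of the input sizes:

```javascript
// Time the same simple loop for two input sizes.
// The absolute numbers depend on the machine (that's "c"),
// but doubling n should roughly double the time (time ≈ nc).
function runLoop(n) {
  const start = performance.now();
  let sum = 0;
  for (let i = 0; i < n; i++) {
    sum += i;
  }
  const elapsed = performance.now() - start;
  console.log(`n = ${n}: ${elapsed.toFixed(2)}ms`);
  return sum; // returned so the work isn't optimized away
}

runLoop(10_000_000);
runLoop(20_000_000); // roughly 2x the time of the first run
```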
As clean as our generalization (nc) is, we can do even better. When comparing algorithms, you don’t actually need to factor in the computer’s speed at all; all you need is the input size, n, and this is where the Big O notation comes in.
What the Heck is Big O Notation?
Big O notation is simply a mathematical notation that describes the limiting behavior of an algorithm as the input size, n, grows without bound.
Where n is just one of the integral values 1, 2, 3, 4, 5, and so on. Yes! n, sometimes, can grow as high as never-ending.
Big O notation gives us the worst-case scenario of the running time of an algorithm. That is, it allows us to know the most time an algorithm could take to run completely.
With the example we gave above (nc) for representing our algorithm’s running time, understanding Big O should no longer be difficult, as we simply need to take away c from nc and wrap what is left in Big O.
nc → Big O of n → O(n)
It’s that easy! We just broke down Time Complexities and Big O notation. The above Big O representation is called Linear Time Complexity and it is just one of many.
There are several other types of Time Complexities, and they include the following (a couple of quick code illustrations come right after the list)…
- Constant Time Complexities → O(1)
- Linear Time Complexities → O(n)
- Logarithmic Time Complexities → O(log n)
- Quadratic Time Complexities → O(n²)
- Exponential Time Complexities → O(2^n)
- Factorial Time Complexities → O(n!)
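As a quick taste (these snippets are my own illustrations, not official definitions), here is what constant and quadratic time look like in code. We will look at linear time properly in a moment:

```javascript
const items = [2, 5, 7, 9, 11, 13];

// O(1), constant time: grabbing one element by index is a single
// step no matter how big the array is.
console.log(items[0]);

// O(n²), quadratic time: a loop inside a loop means that for every
// element we touch every element again (6 × 6 = 36 steps here).
for (let i = 0; i < items.length; i++) {
  for (let j = 0; j < items.length; j++) {
    console.log(items[i], items[j]);
  }
}
```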
We said O(n) is Linear Time Complexity, but why exactly is it linear? Well, can you still remember the 20c, 50c, nc examples? Notice that this algebraic expression changes in result with respect to whatever value of n we use, be it 20, 30, 35, 50, 56, or just about any integral value. This means that if we plot a graph for this using a mathematical expression like y = mx + c (you don’t have to worry about this), what we will get is a straight-line graph. And in mathematics, straight-line graphs are linear graphs.
An example of an algorithm with Linear Time Complexity is an algorithm that prints out all the elements of an array (list).
Below is a coded algorithm using the JavaScript programming language (the next algorithm might be Python 😎).
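Here is a minimal sketch of that algorithm, four ways (the values in the nums array are my own assumption):

```javascript
const nums = [2, 5, 7, 9, 11, 13];

// Method 1: the classic for loop
for (let i = 0; i < nums.length; i++) {
  console.log(nums[i]);
}

// Method 2: a while loop
let i = 0;
while (i < nums.length) {
  console.log(nums[i]);
  i++;
}

// Method 3: the for...of loop
for (const num of nums) {
  console.log(num);
}

// Method 4: forEach, the whole algorithm in a single executed statement
nums.forEach((num) => console.log(num));
```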
We just solved the same problem using four different methods in the same programming language. Notice how we were able to limit the executed statements to just a single line in the case of forEach. Now do you see why executed statements aren’t a good measurement scale?
Anyway, the above algorithm will perform 6 iterations before coming to a stop since we have 6 elements in our nums array. 6, in this case, is the input size of the algorithm. If we include more elements in the array, the number of iterations increases one-for-one with the number of elements added to the nums array, which is why its running time is O(n). And that exactly is what makes this algorithm Linear!
It’s easy to see that linear algorithms are slow, since they will take more iterations and, invariably, more time as the input size grows. What if we don’t want the above algorithm to be linear? Do you think there’s anything we can do? No! There’s nothing. Printing out all the elements of an array can only be done linearly. However, some algorithms that can be executed linearly can also be re-written such that their Time Complexity becomes faster. An example of such an algorithm is one where we search through the above array for a certain number.
Let’s say we want to search for 13 in the nums array we defined above. What is the solution that first comes to mind? Below is the first solution you would probably think of.
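Here is a minimal sketch of that first solution (the function name findNumber is my own). It simply walks through the array from the start, comparing each element with the number we want:

```javascript
const nums = [2, 5, 7, 9, 11, 13]; // the same array from above

// Linear search: check every element, one after the other,
// until we find the target or run out of elements.
function findNumber(arr, target) {
  for (let i = 0; i < arr.length; i++) {
    if (arr[i] === target) {
      return i; // found after i + 1 iterations
    }
  }
  return -1; // the target isn't in the array
}

console.log(findNumber(nums, 13)); // 5, found on the 6th iteration
```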
Since the lost number is 13 and this is the last element of the array, the solution above will take 6 iterations before finding the lost number. If we were to increase the nums array to 500 elements and our lost number were still the last element, our algorithm would take 500 iterations before finding the number.
Terrible, right? Do you think we can make this faster? Yes! We can. By using an algorithm with a Logarithmic Time Complexity, we can make the search way faster, such that we can find our lost number in just 9 iterations even when the input size grows to 500 elements.
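Where does the 9 come from? A logarithmic search cuts the remaining search space in half on every iteration (assuming the array is sorted), so the number of iterations needed is roughly log₂(500). A quick sanity check:

```javascript
// Halving 500 elements repeatedly (500 → 250 → 125 → ... → 1)
// takes about log2(500) ≈ 8.97 steps, which rounds up to 9.
console.log(Math.ceil(Math.log2(500))); // 9
```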
Before we write the algorithm, take the time to let this sink in. I know it’s already a mouthful 😎. Go over it, and if you need to ask questions, feel free to reach me here https://abbrefy.xyz/techmovers or via my portfolio at https://samperfect.me
Thank you for reading and be sure to clap up if you found this enlightening. I’ll see you in Part 5. Till then… bye 👋.