A Journey into Time Complexity Analysis.
Unlocking the secrets to crafting efficient algorithms for any programming task.
Overview
Writing efficient algorithms is an important skill for any programmer. Usually it is easy to design an algorithm that solves a problem; the real challenge is to make it efficient and to know how fast it will run. The time complexity of an algorithm estimates how much time the algorithm will use for a given input. By calculating the time complexity, we can find out how fast an algorithm works before implementing it.
Computation Guidelines
The time complexity of an algorithm is denoted O(···), where the three dots represent some function. Usually the variable n denotes the input size: if the input is an array of numbers, n is the size of the array, and if the input is a string, n is the length of the string.
Magnitude Scale
A common reason for a slow algorithm is that it contains many loops that go through the input. If there are k nested loops over the input, the time complexity is O(n^k).
For example, the time complexity of the following code is O(n).
for (int i = 1; i <= n; i++) {
// other code
}
And the time complexity of the following code is O(n²).
for (int i = 1; i <= n; i++) {
for (int j = 1; j <= n; j++) {
// other code
}
}
Time complexity doesn’t give the exact number of times the code inside a loop runs; it only describes the order of growth. In the examples below, the code inside the loop executes 3n, n+5, and ⌈n/2⌉ times, yet the time complexity of each is O(n).
for (int i = 1; i <= 3*n; i++) {
// other code
}
for (int i = 1; i <= n+5; i++) {
// other code
}
for (int i = 1; i <= n; i += 2) {
// other code
}
The time complexity of the following code is also O(n²), because the inner loop runs between 0 and n−1 times depending on i, and the total number of iterations, n(n−1)/2, still grows quadratically.
for (int i = 1; i <= n; i++) {
for (int j = i+1; j <= n; j++) {
// other code
}
}
Phases
When an algorithm consists of consecutive phases, the total time complexity is the largest time complexity of a single phase, because the slowest phase is usually the bottleneck.
The following code consists of three phases with time complexities O(n), O(n²), and O(n). Thus, the total time complexity is O(n²).
for (int i = 1; i <= n; i++) {
// other code
}
for (int i = 1; i <= n; i++) {
for (int j = 1; j <= n; j++) {
// other code
}
}
for (int i = 1; i <= n; i++) {
// other code
}
Multiple Factors
Sometimes the time complexity depends on several factors, and the formula contains several variables. For example, the time complexity of the following code is O(nm).
for (int i = 1; i <= n; i++) {
for (int j = 1; j <= m; j++) {
// other code
}
}
Recursion
The time complexity of a recursive function depends on the number of times the function is called and the time complexity of a single call. The total time complexity is the product of these values.
Consider the following function:
void func(int n) {
if (n == 1) return;
func(n-1);
}
The call func(n) causes n function calls in total, and each call takes O(1) time, so the total time complexity is O(n).
As another example:
void func(int n) {
if (n == 1) return;
func(n-1);
func(n-1);
}
In this case each function call generates two other calls, except for n = 1. The table below shows the calls produced by a single call func(n):

function call    number of calls
func(n)          1
func(n−1)        2
func(n−2)        4
···              ···
func(1)          2^(n−1)

Based on this, the total number of calls is 1 + 2 + 4 + ··· + 2^(n−1) = 2^n − 1, so the time complexity is O(2^n).
Big O Complexity Chart
Big O notation is a tool that describes an algorithm’s performance and complexity in relation to input size. It helps programmers reason about worst-case behavior, execution time, and memory usage. Big O complexity is often illustrated with a graph, commonly referred to as the Big O chart.
In such a chart, O(1) means constant time, which is ideal for straightforward, efficient algorithms. O(log n) is also very good, especially when handling larger inputs.
Complexity Classes
In Big O notation, there are seven primary complexity categories:
- Constant time: O(1) - Excellent/Best.
- Logarithmic time: O(log n) - Good.
- Linear time: O(n) - Fair.
- Linearithmic time: O(n log n) - Bad.
- Quadratic time: O(n²) - Worst.
- Exponential time: O(2^n) - Worst.
- Factorial time: O(n!) - Worst.
Now the natural follow-up question is how to determine the time complexity of a given algorithm. We discussed this with the earlier examples, but here is a brief summary:
- When the running time does not depend on the input size, the time complexity is constant, O(1).
- When the input size is halved at each step, during iteration or recursion, the time complexity is logarithmic, O(log n).
- When there is a single loop over the input, the time complexity is linear, O(n).
- When there are nested loops (a loop inside a loop) over the input, the time complexity is quadratic, O(n²).
- When the algorithm’s runtime doubles with each increment of the input size, the time complexity is exponential, O(2^n).
Assessing Performance
Before implementing an algorithm, it’s possible to check whether its time complexity is acceptable. Let’s consider a problem with a time limit of one second and an input size of n = 10⁵. If the time complexity of an algorithm is O(n²), it would perform approximately (10⁵)² = 10¹⁰ operations. Assuming a modern computer performs on the order of 10⁹ simple operations per second, this would take about ten seconds, which is too slow. Thus, from the input size we can predict what time complexity is required to solve the problem.
According to the above, if the input size is n = 10⁵, the time complexity of the algorithm is expected to be around O(n) or O(n log n). This kind of estimate makes it easier to design an efficient algorithm before writing any code.
Conclusion
This article covered the core concept of time complexity, which quantifies algorithm efficiency. We examined the common time complexities with practical examples. Understanding these complexities helps programmers make informed choices when designing algorithms. With this knowledge, we are better prepared to optimize our code and handle real-world programming tasks.