An algorithm is a method or procedure for accomplishing a specific task that is sufficiently precise to be programmed on a computer. In Computer Science, it is important to measure the efficiency of algorithms before applying them on a large scale, i.e., on bulk data. The quality, or performance, of an algorithm depends on many internal and external factors.
Internal factors specify the algorithm's efficiency in terms of:
- Time required to run
- Space required to run
External Factors affect the algorithm's performance. These include:
- Size of the input to the algorithm
- Speed of the computer on which it is run
- Quality of the compiler
Since external factors are beyond the algorithm designer's control, mainly internal factors are studied and measured in order to determine an algorithm's efficiency, or complexity. The complexity of an algorithm is determined by studying and measuring the internal factors affecting it. This chapter introduces how you can determine, or measure, the efficiency of an algorithm in terms of computational complexity.
What Is Computational Complexity?
The term 'computational complexity' is made of two words: 'computation' and 'complexity'. Computation involves the problems to be solved and the algorithms to solve them. Complexity involves the study of how much of each resource is sufficient or necessary for an algorithm to run efficiently (its performance).
The resources generally include:
- The time to run the algorithm (time complexity).
- The space (or the memory/storage) needed to run the algorithm (Space Complexity).
The first thing to take into account is the difference between effectiveness and efficiency. Effectiveness means that the algorithm carries out its intended function correctly. Efficiency means that the algorithm is not only correct but also delivers the best possible performance. To measure efficiency, we determine complexity. The complexity of an algorithm quantifies the resources needed as a function of the amount of data processed.
Informally, we can define the time complexity of a program (for a given input) as the number of elementary instructions that the program executes. This number is computed with respect to the size n of the input data.
Estimating Complexity Of Algorithms
As mentioned, algorithms are usually compared along two dimensions: the amount of space (that is, memory) used and the time taken. Of the two, time is usually considered the more important. The motivation for studying time complexity is to compare different algorithms and use the one that is most efficient in a particular situation.
Actual run time on a particular computer (external factors) is not a good basis for comparison since it depends heavily on the speed of the computer, the total amount of RAM in the computer, the OS running on the system and the quality of the compiler used. So we need a more abstract way to compare the time complexity of algorithms.
Informally, we can define the time complexity of a program (for a given input) as the number of elementary instructions that the program executes. This number is computed with respect to the size of the input data.
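The idea of counting elementary instructions can be sketched by instrumenting a simple summation loop (the function name and the exact counting scheme below are our own illustrative choices, not a standard):

```python
def sum_list(data):
    """Sum a list while counting elementary steps (a rough cost model)."""
    steps = 0
    total = 0
    steps += 1            # initialising total counts as one step
    for x in data:
        total += x
        steps += 1        # one addition per element
    steps += 1            # returning the result counts as one step
    return total, steps

# The step count grows linearly with the input size n: here n + 2 steps.
total, steps = sum_list([5, 3, 8, 1])
print(total, steps)       # 17 and 6 steps for n = 4
```

However we tweak the counting scheme, the count stays proportional to n, which is exactly what time complexity captures.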
Big-O Notation
The Big-O notation is used to depict an algorithm's growth rate. The growth rate determines the algorithm's performance as its input size grows. Through Big-O, an upper bound on the algorithm's performance is specified. For example, if we say an algorithm takes O(N²) time, this means that the algorithm will carry out its task in at most (a constant times) N² steps for input size N.
The Big-O notation is very useful for comparing the performance of two or more algorithms. For instance, suppose we have two algorithms for solving a problem; one has complexity O(N²), and the other has complexity O(2^N).
The more steps an algorithm takes, the more time it takes and the lower its performance. For N = 100, an algorithm with O(2^N) takes about 1.26 × 10^30 steps, compared with just 10,000 (i.e., 10^4) steps for an algorithm with O(N²). Since 1.26 × 10^30 is vastly more than 10^4, you can say that the performance of an algorithm is inversely related to the time it takes for a given input size.
Because O(2^N)'s 1.26 × 10^30 steps vastly exceed O(N²)'s 10^4, the O(N²) algorithm clocks comparatively less time. In other words, we can say that the algorithm with complexity O(N²) is better than the algorithm with complexity O(2^N) for solving the same problem.
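The gap between the two growth rates can be seen directly by tabulating N² against 2^N for a few input sizes (a minimal sketch; the chosen sizes are arbitrary):

```python
# Compare the growth of N**2 and 2**N as the input size N increases.
for n in (10, 20, 50, 100):
    print(f"N={n:>3}  N^2={n**2:<6}  2^N={2**n:.3e}")
```

At N = 10 the two are still comparable (100 vs 1024), but by N = 100 the exponential term has reached roughly 1.27 × 10^30 against the quadratic's 10^4.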
Dominant Term
Big-O notation indicates the growth rate. It is the class of mathematical formula that best describes an algorithm's performance, and is discovered by looking inside the algorithm.
Big-O is a function with parameter N, where N is usually the size of the input to the algorithm. The larger the input size, the greater its impact on the growth rate of the algorithm. However, while describing the growth rate of an algorithm, we consider only the term that affects the algorithm's performance the most. This term is known as the dominant term. For example, if an algorithm has performance aN² + bN + c (for constants a, b, c), then the term aN² will have the maximum impact on the algorithm's performance. So, while describing the complexity of the algorithm, we can ignore the rest of the terms and simply say that the algorithm has performance O(N²). In other words, for large N, the N² term dominates. Only the dominant term is included in Big-O.
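The dominance of the aN² term can be checked numerically. In the sketch below, the constants a, b, c are arbitrary values chosen for illustration only:

```python
# For f(n) = a*n**2 + b*n + c, the n**2 term dominates as n grows.
a, b, c = 3, 100, 1000   # arbitrary illustrative constants

for n in (10, 100, 10000):
    f = a * n**2 + b * n + c
    share = a * n**2 / f  # fraction of f(n) contributed by the dominant term
    print(f"n={n:>6}  f(n)={f:>12}  n^2 share={share:.4f}")
```

As n grows, the share contributed by aN² approaches 1, which is why the lower-order terms bN and c can be dropped from the Big-O description.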
Guidelines for Computing Complexity
After talking about some basic terms, let us now discuss how you can actually compute the complexity of an algorithm.
The basic steps for computing complexity of an algorithm are:
- Select the computational resource you want to measure. Normally we have two options: time and memory. Other studies can also be undertaken, e.g., network traffic. Here, however, we are mainly interested in measuring time complexity.
- Look at the algorithm and pay attention to loops and recursion. Try to identify the variables and conditions that make the algorithm work more or less; sometimes it is one variable, sometimes several. This will be our input size. Remember that with complexity analysis, we are interested in obtaining a function that relates the input size to the computational resource.
- Once we have the input size of the algorithm, try to see if there are different cases inside it, such as: when the algorithm gives its best performance, i.e., takes the shortest possible time (best case); when the algorithm gives its worst performance, i.e., takes the maximum possible time (worst case); and when the algorithm performs in between the two, i.e., performs better than the worst case but does not give the best performance (average case).
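The best/worst/average distinction is easiest to see in linear search, where the number of comparisons depends on where (or whether) the target occurs. The function below is our own sketch that reports the comparison count alongside the result:

```python
def linear_search(items, target):
    """Return (index, comparisons made); index is -1 if target is absent."""
    comparisons = 0
    for i, item in enumerate(items):
        comparisons += 1
        if item == target:
            return i, comparisons
    return -1, comparisons

data = [7, 2, 9, 4, 6]
print(linear_search(data, 7))   # best case: target at the front, 1 comparison
print(linear_search(data, 6))   # worst case (present): last position, 5 comparisons
print(linear_search(data, 1))   # worst case (absent): all 5 compared, not found
```

On average, a target that is present is found about halfway through, so the average case makes roughly n/2 comparisons, which is still O(n).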
Calculating Complexity
Five guidelines for finding the time complexity of a piece of code are (assumption: all steps take precisely the same time to execute):
- Loops
- Nested loops
- Consecutive statements
- if-then-else statement
- Logarithmic complexity
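The typical cost pattern behind each of the five guidelines can be sketched as follows (the function names and step counters are our own illustrations, not standard code):

```python
def loop(n):                  # single loop: O(n)
    steps = 0
    for _ in range(n):
        steps += 1
    return steps

def nested(n):                # nested loops: O(n**2)
    steps = 0
    for _ in range(n):
        for _ in range(n):
            steps += 1
    return steps

def consecutive(n):           # consecutive statements add: O(n) + O(n**2) = O(n**2)
    return loop(n) + nested(n)

def branch(n, condition):     # if-then-else: the costlier branch bounds the cost
    return nested(n) if condition else loop(n)

def halving(n):               # input halved each pass: O(log n)
    steps = 0
    while n > 1:
        n //= 2
        steps += 1
    return steps

print(loop(8), nested(8), consecutive(8), halving(8))   # 8 64 72 3
```

For consecutive statements the sum is dominated by its largest term, and for if-then-else the worst-case bound comes from whichever branch is more expensive.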
In conclusion, understanding and optimizing computational complexity is crucial for developing efficient algorithms that can tackle real-world problems with speed and effectiveness. We will look at how to calculate the computational complexity of individual parts of a code in the following sessions.