The Self-Taught Computer Scientist. Cory Althoff.

Author: Cory Althoff
Publisher: John Wiley & Sons Limited
Genre: Computer science (foreign literature)
ISBN: 9781119724339
complexity involving trying to guess a numerical password consisting of n decimal digits by testing every possible combination is O(10**n).

      Here is an example of password guessing with O(10**n) complexity:

      pin = "931"          # store the PIN as a string so we can count its digits
      n = len(pin)
      for i in range(10**n):
          if i == int(pin):
              print(i)

      The number of steps this algorithm takes to complete grows incredibly fast as n gets larger. When n is 1, this algorithm takes 10 steps. When n is 2, it takes 100 steps. When n is 3, it takes 1,000 steps. At first, an exponential algorithm looks like it doesn't grow very quickly. Eventually, though, its growth explodes. Guessing a password with 8 decimal digits takes 100 million steps, and guessing a password with 10 decimal digits takes 10 billion steps. Exponential scaling is why it is so important to create long passwords. If someone tries to guess your password using a program like this, they can easily succeed when your password is four digits. If your password is 20 digits, however, it is effectively impossible to crack because the program would take longer to run than a human life span.
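To make that growth concrete, here is a quick sketch (my own illustration, not from the text) that tabulates 10**n, the number of combinations a brute-force search must try, for a few password lengths:

```python
# Brute-force guessing of an n-digit decimal password must try up to
# 10**n combinations; tabulate that count for a few values of n.
steps = {n: 10**n for n in (1, 2, 3, 8, 10)}
for n, count in steps.items():
    print(f"{n} digits -> {count:,} steps")
```

Each extra digit multiplies the work by 10, which is why password length matters so much more than any constant-factor speedup in the guessing program.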

      This solution to guessing a password is an example of a brute-force algorithm. A brute-force algorithm is a type of algorithm that tests every possible option. Brute-force algorithms are not usually efficient and should be your last resort.

Figure: Big O complexity chart

      An algorithm's performance can change depending on different factors, such as the type of data you are working with. Because of this, when you evaluate an algorithm's performance, you need to consider its best-, worst-, and average-case complexities. An algorithm's best-case complexity is how it performs with ideal input, and an algorithm's worst-case complexity is how it performs in the worst possible scenario for it. An algorithm's average-case complexity is how it performs on average.

      For example, if you have to search one by one through a list, you may get lucky and find what you are looking for after checking the first item in your list. That would be the best-case complexity. However, if the item you are looking for isn't in the list, you would have to search the entire list, which is the worst-case complexity.

      If you search through a list one by one many times, on average you will find what you are looking for about halfway through the list, which is O(n/2) time; that is the same as O(n) time in big O notation. When comparing algorithms, you often start by looking at the average-case complexity. If you want to do a deeper analysis, you can also compare their best-case and worst-case complexities.
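The cases above can be sketched with a simple linear search (the function name linear_search is my own, not from the text):

```python
def linear_search(a_list, target):
    """Return the index of target in a_list, or -1 if it is absent."""
    for index, item in enumerate(a_list):
        if item == target:
            return index       # best case: target is the very first item
    return -1                  # worst case: every item was checked in vain

numbers = [5, 3, 9, 1, 7]
print(linear_search(numbers, 5))   # best case: found at index 0
print(linear_search(numbers, 7))   # found only at the last index, 4
print(linear_search(numbers, 8))   # worst case: not in the list, -1
```

The same algorithm takes one step in the best case and n steps in the worst case; only the input changes.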

      You can apply the time complexity concepts you learned earlier to space complexity. For example, you can calculate a factorial of n (a product of all positive integers less than or equal to n) using an algorithm that has a constant, O(1), space complexity:

      x = 1
      n = 5
      for i in range(1, n + 1):
          x = x * i

      The space complexity is constant because the amount of space your algorithm needs does not grow as n gets larger. If you decided to store all the factorials up to n in a list, your algorithm would have a linear space complexity, O(n):

      x = 1
      n = 5
      a_list = []
      for i in range(1, n + 1):
          a_list.append(x)
          x = x * i

      Your algorithm's space complexity is O(n) because the amount of space it uses grows at the same pace as n.

      Like with time complexity, the acceptable level of space complexity for an algorithm depends on the situation. In general, though, the less space an algorithm requires, the better.

      As a computer scientist, you need to understand the different orders of magnitude to optimize your algorithms. When you are trying to improve an algorithm, you should focus on changing its order of magnitude instead of improving it in other ways. For example, say you have an O(n**2) algorithm that uses two for loops. Instead of optimizing what happens inside your loops, it is much more important to determine whether you can rewrite your algorithm so that it doesn't have two nested for loops and thus has a smaller order of magnitude.
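As an illustration of such a rewrite (my own example, not from the text), consider checking a list for duplicate values. Two nested for loops give an O(n**2) algorithm, but restructuring around a set brings it down to O(n):

```python
def has_duplicates_quadratic(items):
    # O(n**2): compare every pair of items with two nested loops
    for i in range(len(items)):
        for j in range(i + 1, len(items)):
            if items[i] == items[j]:
                return True
    return False

def has_duplicates_linear(items):
    # O(n): one pass over the list, remembering what we have seen
    seen = set()
    for item in items:
        if item in seen:
            return True
        seen.add(item)
    return False

print(has_duplicates_quadratic([1, 2, 3, 2]))  # True
print(has_duplicates_linear([1, 2, 3, 4]))     # False
```

No amount of tuning inside the nested loops would match this improvement: changing the order of magnitude beats micro-optimizing the loop body. Note the linear version trades space for time, since the set can grow to O(n) space.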

      The decisions you make about algorithms can have enormous consequences in the real world. For example, say you are a web developer responsible for writing an algorithm that responds to a customer's web request. Whether you write a constant or a quadratic algorithm could make the difference between your website loading in under a second, keeping your customer happy, and taking more than a minute, losing customers before the request even finishes.

       algorithm: A sequence of steps that solves a problem.

       run time: The amount of time it takes your computer to execute an algorithm written in a programming language like Python.

       size of the problem: The variable n in an equation that describes the number of steps in an algorithm.

       big O notation: A mathematical notation that describes how an algorithm's time or space requirements increase as the size of n increases.

       order of magnitude: A class in a classification system where each class is many times greater or smaller than the one before.

       time complexity: The maximum number of steps an algorithm takes to complete as n gets larger.

       constant time: An algorithm runs in constant time when it requires the same number of steps regardless of the problem's size.

       logarithmic time: An algorithm runs in logarithmic time when its run time grows in proportion to the logarithm of the input size.

       linear time: An algorithm runs in linear time when it grows at the same rate as the problem's size.

       log-linear time: An algorithm runs in log-linear time when it grows as a combination (multiplication) of logarithmic and linear time complexities.

       quadratic time: An algorithm runs in quadratic