One of the most important aspects of an algorithm is how fast it is. It
is often easy to come up with an algorithm to solve a problem, but if
the algorithm is too slow, it is of little practical use. Because the
exact speed of an algorithm depends on where it is run, as well as on
the details of its implementation, computer scientists instead talk
about the runtime relative to the size of the input. For
example, if the input consists of N integers, an algorithm might have a
runtime proportional to N^2, represented as O(N^2). This means that if
you were to run an implementation of the algorithm on your computer
with an input of size N, it would take roughly C*N^2 seconds, where C
is some constant that doesn't change with the size of the input.
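As a concrete sketch of an O(N^2) algorithm (the function name and data here are hypothetical, chosen only for illustration), consider counting pairs of equal values with two nested loops. The inner loop runs up to N times for each of the N outer iterations, so doubling N roughly quadruples the work:

```python
def count_equal_pairs(values):
    """Count pairs (i, j) with i < j where values[i] == values[j]."""
    n = len(values)
    count = 0
    for i in range(n):              # outer loop: runs N times
        for j in range(i + 1, n):   # inner loop: up to N times per pass
            if values[i] == values[j]:
                count += 1
    return count

# For example, [1, 2, 1, 3, 2, 1] has three equal pairs of 1s
# and one equal pair of 2s, so four pairs in total.
print(count_equal_pairs([1, 2, 1, 3, 2, 1]))  # prints 4
```

The nested-loop structure is what makes the runtime quadratic: the number of comparisons performed is N*(N-1)/2, which is proportional to N^2.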
However, the execution time of many complex algorithms can vary due to
factors other than the size of the input. For example, a sorting
algorithm may run much faster when given a set of integers that are
already sorted than it would when given the same set of integers in a
random order. As a result, you often hear about the
worst-case runtime, or the average-case runtime. The worst-case runtime
is how long it would take for the algorithm to run if it were given the
most insidious of all possible inputs. The average-case runtime is how
long the algorithm would take on average, taken over all possible
inputs. Of the two, the worst-case is often easier to
reason about, and therefore is more frequently used as a benchmark for a
given algorithm. The process of determining the worst-case and
average-case runtimes for a given algorithm can be tricky, since it is
usually impossible to run an algorithm on all possible inputs.
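Instead of running an algorithm on every input, one can often reason about the number of operations directly. As a small hypothetical sketch: for linear search over N distinct values, the worst case (the target is absent) examines all N elements, while the average over all present targets is about (N+1)/2 comparisons:

```python
def linear_search_comparisons(values, target):
    """Return how many comparisons linear search makes for this input."""
    comparisons = 0
    for v in values:
        comparisons += 1
        if v == target:
            break
    return comparisons

data = [3, 1, 4, 5, 9, 2, 6, 8]  # N = 8 distinct values

# Worst case: the target is absent, so every element is examined.
worst = linear_search_comparisons(data, 7)  # 8 comparisons

# Average case over all present targets: (1 + 2 + ... + N) / N
# = (N + 1) / 2 comparisons.
avg = sum(linear_search_comparisons(data, t) for t in data) / len(data)
print(worst, avg)  # prints 8 4.5
```

Counting operations this way sidesteps machine- and implementation-specific details, which is exactly why runtimes are stated in terms of N rather than in seconds.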