Big-O notation, and other notations, come in!
asymptotic notation that describes how an algorithm grows in
space in relation to the
input. This is called its
Space Complexity. That may sound like a load of waffle, but what it means is: "If I make n so many times larger, how much longer or bigger does the algorithm get?" On some sites where you train your coding skills against challenges, you may find people commenting on other people's solutions and throwing
O(n^2) or some other
O terminology about; that's what Big O is. In our previous lesson, if you did a bit of reading on the algorithms mentioned, you'll more than likely have found this kind of notation.
unconfined). Say you've written some function, and at worst it takes a very specific number of operations to tackle a problem of size n, for example:
Big O, our worst-case growth with input n:
Big O just by looking at the (pseudo)code you've written. We shall walk through a few examples of different
Big O sizes as far as
time is concerned, how to recognise them, and what they mean:
constant. Let's write a bit of
pseudocode to demonstrate this:
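A minimal sketch of such a constant-time function, written here in Python (the name `get_first` is my own, not from the lesson):

```python
# A constant-time operation: no matter how large the list is,
# we do exactly one step, returning the first element.
def get_first(items):
    return items[0]

print(get_first([7, 2, 9]))               # 7, one step
print(get_first(list(range(3_000_000))))  # 0, still one step
```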
all we need to do is return the first element. If we take an array of length 3,000,000,
all we need to do is return the first element. Notice how our process of getting the answer has not changed at all? That means the work the algorithm does has not changed, making it constant.
constant space, depending on which metric you're measuring.
time. Take this simple example:
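That example can be sketched as a simple linear search (the function name `contains` is an assumption of mine):

```python
# Linear search: check each element in turn.
# Worst case: the target sits at the very end, or is missing entirely,
# so we perform n comparisons for an array of n elements.
def contains(items, target):
    for item in items:
        if item == target:
            return True
    return False

print(contains([1, 4, 7], 7))  # True
print(contains([1, 4, 7], 5))  # False, after checking all three elements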
Does this element match? No →
Does this element match? No →
…and this continues for up to n array elements. So what's our
Big O? We think about the worst case. What is the worst case? Our value is either sitting at the very end of the array, or it doesn't exist at all. Here, if we have an array of size
n, the function does
n checks before it gives us an answer. So our function grows at the same rate that
n grows, which gives us
linear time. The algorithm makes
n − 1 comparisons for each element in our input array, and the array has
n elements. So we can say:
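As a quick sanity check of that arithmetic, here is a small counting sketch (the function `count_comparisons` is hypothetical, written only to verify the n × (n − 1) claim):

```python
# Count the comparisons made when every element of an n-element
# array is compared with every *other* element.
def count_comparisons(n):
    count = 0
    for i in range(n):        # n elements...
        for j in range(n):
            if i != j:        # ...each compared with the other n - 1
                count += 1
    return count

print(count_comparisons(5))   # 20, i.e. 5**2 - 5
```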
So how does it work? section, we can ignore the
− n, and we're left with a
quadratic time algorithm. It belongs to a larger family of algorithms,
O(n^k), which are known as polynomial time algorithms.
their O(n^2) counterparts. What
O(log(n)) means is that for the algorithm to grow in
multiples, the input needs to grow in
powers. So the input grows much faster than the work the algorithm does. Could we find an example of this? Let's give it a try:
point at each end of the array, and one point in the
middle. If our
middle point doesn't have the desired value, we check whether the value we're looking for is
less than our middle value. Depending on this answer, we
shift one of our
end points to our
middle point, then find a new middle point. Here is an example of our code in action, with the array [ 1, 4, 7, 8, 9, 14, 17, 32, 69 ], and we're searching for 32:
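The steps described above can be sketched as a binary search (this assumes the array is sorted; the names and the returned step count are my own additions for illustration):

```python
# Binary search: keep a point at each end, probe the middle,
# and shift one end point to the middle after each comparison.
def binary_search(items, target):
    low, high = 0, len(items) - 1
    steps = 0
    while low <= high:
        mid = (low + high) // 2
        steps += 1
        if items[mid] == target:
            return mid, steps      # found: index and comparisons used
        if target < items[mid]:
            high = mid - 1         # discard the upper half
        else:
            low = mid + 1          # discard the lower half
    return -1, steps               # not found

print(binary_search([1, 4, 7, 8, 9, 14, 17, 32, 69], 32))  # (7, 3)
```

Searching for 32 probes 9, then 17, then lands on 32: three comparisons in a nine-element array.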
half our problem space, and only make one comparison in that problem space? This allows us to quickly look through vast quantities of data by
ignoring comparisons that we know will not be successful. With our array above, at worst we could do 4 searches before we either find the value or fail. So how would this perform in a space of, say...
Linear time function would take to go through twenty elements.
Big O notation is an accurate quantifier for performance at a large scale. There exist some cases where, although an algorithm has a more efficient
Big O, in reality that efficiency only kicks in for
impossibly big data sets. So care should still be taken to properly research an algorithm if you intend to employ it.
Big O is theoretical. Again, it should mostly hold in the real world, but there are other tricks you will learn throughout your career to further optimise your code if necessary.