
Here is my thought:

You have N elements to begin with.

But you aren't actually going through each one, are you?

Say I have

{1, 2, 3, 4, 5, 6, 7, 8, 9} and I want to find 4.

The first step is looking at the middle element, 5; we see 4 < 5, so we are left with {1, 2, 3, 4, 5}. The middle of that is 3; we see 4 > 3, thus we are left with {3, 4, 5}.

Now the middle is 4, so we have found it.

But that was only 3 steps! I don't understand why the first search is counted as N elements when we only look at the (N-1)/2-th element, which is one step.
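
To make the walkthrough concrete, here is a small Python sketch (mine, not part of the original post) of a textbook binary search that prints every element it actually reads. This variant discards the compared middle element, so it needs 4 probes here instead of the 3 in the hand-trace above, but the point is the same: each step reads only a single element.

```python
def binary_search(arr, target):
    # Textbook binary search that prints each element it actually reads.
    lo, hi = 0, len(arr) - 1
    step = 0
    while lo <= hi:
        mid = (lo + hi) // 2
        step += 1
        print(f"step {step}: read arr[{mid}] = {arr[mid]}, "
              f"search space = {arr[lo:hi + 1]}")
        if arr[mid] == target:
            return mid           # found it
        elif target < arr[mid]:
            hi = mid - 1         # keep only the left half
        else:
            lo = mid + 1         # keep only the right half
    return -1                    # not present

binary_search([1, 2, 3, 4, 5, 6, 7, 8, 9], 4)
```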

EDIT!!!

Here is what I was taught:

search 1: n elements in search space

search 2: n/2 elements in search space

search 3: n/4 elements in search space

... search i: 1 element in search space.

Search i has n/2^(i-1) elements in its search space, so you set n/2^(i-1) = 1 and solve for i, which gives

i = log2(n) + 1.
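
Written out as a short worked derivation (my restatement of that last step, using log base 2 as the comments below point out):

```latex
% Solve for the step i at which the search space has shrunk to one element.
\[
\frac{n}{2^{\,i-1}} = 1
\quad\Longrightarrow\quad
2^{\,i-1} = n
\quad\Longrightarrow\quad
i - 1 = \log_2 n
\quad\Longrightarrow\quad
i = \log_2 n + 1 .
\]
```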

What I don't understand:

You have n elements, I agree, but you aren't searching through all of them; you only look at 1 element, so why do you count all n?

  • In a sorted array you don't need to go through every number; you only need to check O(log2 N) of them to find whether the element is in the array or not. Commented Sep 17, 2016 at 20:21
  • I don't understand your last sentence. Can you reformulate/explain it in more detail? Commented Sep 17, 2016 at 20:21
  • O(.. lg n) is the basis for operations that "divide the problem in half each loop"; in this case that is simply the mathematical representation - note that lg in such cases is log to base 2. A binary search (which requires a sorted sequence at the start) works very nicely as an example here in concrete numbers: it would take lg(1000), or ~10, loops / item reads to search 1000 elements (a small sketch after these comments illustrates this). 10 < 1000, and the difference grows as n -> infinity, as described by O. Commented Sep 17, 2016 at 20:21
  • @user2864740 "lg" often also represents the base-10 logarithm, so that might be confusing. However, it doesn't matter here: O(log_a n) = O(log_b n) for all a, b > 1. Commented Sep 17, 2016 at 20:25
  • Possible duplicate of how to calculate binary search complexity Commented Sep 17, 2016 at 20:38
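
As a quick check of the ~10-reads claim for 1000 elements in the comment above, here is a small Python sketch (mine, not from the thread) that counts how many middle elements a binary search actually reads in the worst case:

```python
import math

def probes_needed(n, target):
    # Count how many elements a binary search over [0, n) actually reads.
    arr = list(range(n))
    lo, hi, probes = 0, n - 1, 0
    while lo <= hi:
        mid = (lo + hi) // 2
        probes += 1
        if arr[mid] == target:
            break
        elif target < arr[mid]:
            hi = mid - 1
        else:
            lo = mid + 1
    return probes

worst = max(probes_needed(1000, t) for t in range(1000))
print(worst)                       # 10 reads in the worst case
print(math.ceil(math.log2(1000)))  # 10, since log2(1000) ~ 9.97
```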

1 Answer


The main reason why binary search (which requires sorted data in a data structure with O(1) random access reads) is O(log N) is that for any given data set, we start by looking at the middle-most element.

If that is larger than the element we're looking for, we know that we can ignore everything from that point to the end. If it is smaller, we can ignore everything from the beginning up to that element. This means that at every step, we're effectively cutting the size of the remaining search space in half.

By cutting the problem set in half at every step, we can (relatively) easily see that getting from N elements down to a single element takes O(log N) steps.
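
To see that in numbers, here is a tiny sketch (mine, not part of the original answer) that simply counts how many times a search space of N elements can be halved before a single element is left:

```python
import math

def halving_steps(n):
    # Count how many halvings it takes to go from n elements down to one.
    steps = 0
    while n > 1:
        n //= 2
        steps += 1
    return steps

for n in (8, 1000, 1_000_000):
    # The halving count matches floor(log2(n)): 3, 9, 19.
    print(n, halving_steps(n), math.floor(math.log2(n)))
```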

The reason why we don't terminate earlier in your example is that even though it looks as if we're scanning the first elements, the only things we actually do are "get the length of the array" and "access the middle-most element" (so, until we have narrowed down to it, we never know whether the element we're looking for is contained in the array at all).


2 Comments

Hello, that is the confusion for me: how do we see that going from N to 1 takes log(N) steps? Getting from N to N/2 is 1 step, then N/2 to N/4 is 1 step, ... so how do we calculate it?
@garchyguy Log is the anti-exponent. If you have an original problem that's X elements large, it takes S steps, and the size of the problem halves every step, then X must have been O(2 ** S). So if you have an initial problem that's N elements large, finding the solution must take O(log N) steps.
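
As a quick illustration of the "anti-exponent" point above (my own sketch, not from the thread): if the halving takes S steps, then N is squeezed between 2 ** S and 2 ** (S + 1), which is exactly the statement S = O(log N).

```python
def steps_to_one(n):
    # Number of halvings needed to shrink a search space of n elements to one.
    steps = 0
    while n > 1:
        n //= 2
        steps += 1
    return steps

for n in (16, 1000, 10**6):
    s = steps_to_one(n)
    print(f"N = {n}: S = {s} steps, 2 ** {s} = {2 ** s} <= N < {2 ** (s + 1)}")
```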
