public static int findMin(int[] numbers, int startIndex, int endIndex) {
    if (startIndex == endIndex) {
        // Base case: the range holds a single element, so it is the minimum of that range
        return numbers[startIndex];
    } else {
        int midIndex = (startIndex + endIndex) / 2;
        int leftMin = findMin(numbers, startIndex, midIndex);      // minimum of the left half
        int rightMin = findMin(numbers, midIndex + 1, endIndex);   // minimum of the right half
        if (leftMin < rightMin)
            return leftMin;
        else
            return rightMin;
    }
}
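For reference, this is roughly how I call it (this snippet is my own test code, not part of the assignment):

    int[] numbers = {5, 3, -5, 8};
    // Search the whole array: startIndex = 0, endIndex = numbers.length - 1 = 3
    int min = findMin(numbers, 0, numbers.length - 1);
    System.out.println(min);   // prints -5, so the method itself works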
I'm really having trouble understanding this findMin recursion. The method is supposed to find the minimum number in an array.
Here is how I understand it so far.
Suppose I have an array {5, 3, -5, 8}, with startIndex = 0 and endIndex = 3.
The first time, midIndex = (0 + 3) / 2 = 1, so the array is split between 3 and -5.
Then the left call is findMin(numbers, 0, 1).
In that call, midIndex = (0 + 1) / 2 = 0, which leads to findMin(numbers, 0, 0).
Since startIndex (0) equals endIndex (0), it returns numbers[startIndex], which is 5?
I can't really figure out how this method finds the minimum number. Since startIndex is always 0, why does it need to return numbers[startIndex]?
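In case it helps, here is what I see when I add a print statement at the top of the method (the println line is my own addition, not part of the original code):

    public static int findMin(int[] numbers, int startIndex, int endIndex) {
        // My debugging addition: show which range each call works on
        System.out.println("findMin(" + startIndex + ", " + endIndex + ")");
        if (startIndex == endIndex) {
            return numbers[startIndex];
        } else {
            int midIndex = (startIndex + endIndex) / 2;
            int leftMin = findMin(numbers, startIndex, midIndex);
            int rightMin = findMin(numbers, midIndex + 1, endIndex);
            if (leftMin < rightMin)
                return leftMin;
            else
                return rightMin;
        }
    }

For {5, 3, -5, 8}, called as findMin(numbers, 0, 3), it prints:

    findMin(0, 3)
    findMin(0, 1)
    findMin(0, 0)
    findMin(1, 1)
    findMin(2, 3)
    findMin(2, 2)
    findMin(3, 3)

and the final result is -5, so the method does work; I just can't follow how the pieces fit together.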