I'll give you a simple example.
Suppose you have some floating-point numbers to add together. We'll assume they're all non-negative so that cancellation isn't an issue.
For the purpose of this example, I'm going to use decimal with four significant digits of precision just to illustrate the point.
The numbers are:
1.000e-4
1.000e-4
1.000e-4
1.000e-4
1.000e-4
1.000e-4
1.000e-4
1.000e-4
1.000e-4
1.000e-4
1.000e+0
That is, ten copies of 1e-4 and one copy of 1.
This sum can be perfectly represented in our toy floating-point format:
1.001e+0
But the answer you get depends on the order in which you do the additions! If you start at the top of the list and work your way down, the partial sums are:
1.000e-4
2.000e-4
3.000e-4
4.000e-4
5.000e-4
6.000e-4
7.000e-4
8.000e-4
9.000e-4
1.000e-3
1.001e+0
But if you start at the bottom and work your way to the top, the partial sums are:
1.000e+0
1.000e+0
1.000e+0
1.000e+0
1.000e+0
1.000e+0
1.000e+0
1.000e+0
1.000e+0
1.000e+0
1.000e+0
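You can reproduce this toy format with Python's decimal module by setting the context precision to four significant digits. This is a sketch of the experiment above (variable names are mine):

```python
from decimal import Decimal, getcontext

# Reproduce the toy format: decimal with four significant digits.
getcontext().prec = 4

numbers = [Decimal("1.000e-4")] * 10 + [Decimal("1.000e+0")]

top_down = Decimal(0)
for x in numbers:            # small numbers first: they accumulate to 1e-3
    top_down += x

bottom_up = Decimal(0)
for x in reversed(numbers):  # big number first: each 1e-4 is rounded away
    bottom_up += x

print(top_down)   # 1.001
print(bottom_up)  # 1.000
```

The same effect occurs in binary floating-point; decimal just makes the rounding easy to see by eye.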
Clearly, you need to add the small numbers first, so that they can accumulate into something large enough to survive being added to 1: adding a single 1.000e-4 to 1.000e+0 produces 1.0001, which rounds back to 1.000e+0. So here are two possible algorithms.
- Sort the numbers into ascending order, then sum them in that order.
- Insert the numbers into a priority queue. Remove the two smallest numbers, add them, and insert the sum back into the priority queue. Repeat until there is one number left.
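Both algorithms can be sketched in a few lines of Python, again using decimal at four-digit precision to match the toy format (the function names are mine, and `heapq` serves as the priority queue):

```python
import heapq
from decimal import Decimal, getcontext

getcontext().prec = 4

def sorted_sum(xs):
    """Sort ascending, then sum in that order."""
    total = Decimal(0)
    for x in sorted(xs):
        total += x
    return total

def heap_sum(xs):
    """Repeatedly remove the two smallest values, add them,
    and push the sum back until one number remains."""
    heap = list(xs)
    heapq.heapify(heap)
    while len(heap) > 1:
        a = heapq.heappop(heap)
        b = heapq.heappop(heap)
        heapq.heappush(heap, a + b)
    return heap[0]

xs = [Decimal("1.000e-4")] * 10 + [Decimal("1.000e+0")]
print(sorted_sum(xs))  # 1.001
print(heap_sum(xs))    # 1.001
```

On this particular input both algorithms produce the exact answer; the exercises below ask whether that is always the case.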
Exercises:
- Is there a set of numbers for which these two algorithms return different answers?
- Is one of these algorithms better than the other, in the sense of always returning a more accurate final answer?
- Can we do even better?