I realized that I didn’t understand exactly how it worked. I mean, I know floating point calculations are inexact, and I know that you can’t exactly represent `0.1` in binary, but: there’s a floating point number that’s closer to 0.3 than `0.30000000000000004`! So why do we get the answer `0.30000000000000004`?
If you don’t feel like reading this whole post with a bunch of calculations, the short answer is that `0.1000000000000000055511151231257827021181583404541015625 + 0.200000000000000011102230246251565404236316680908203125` lies exactly between 2 floating point numbers, `0.299999999999999988897769753748434595763683319091796875` (usually printed as `0.3`) and `0.3000000000000000444089209850062616169452667236328125` (usually printed as `0.30000000000000004`). The answer is `0.30000000000000004` (the second one) because its significand is even.
So let’s use these rules to calculate 0.1 + 0.2. I just learned how floating point addition works yesterday so it’s possible I’ve made some mistakes in this post, but I did get the answers I expected at the end.
These really are the exact values: because floating point numbers are in base 2, you can represent them all exactly in base 10. You just need a lot of digits sometimes :)
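You can see those exact values in Python with the `decimal` module, since converting a float to a `Decimal` is exact:

```python
from decimal import Decimal

# Decimal(float) is exact: it shows the float's binary value in base 10
print(Decimal(0.1))  # 0.1000000000000000055511151231257827021181583404541015625
print(Decimal(0.2))  # 0.200000000000000011102230246251565404236316680908203125
```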
So the exact sum of those two floating point numbers is `0.3000000000000000166533453693773481063544750213623046875`
This isn’t our final answer though because `0.3000000000000000166533453693773481063544750213623046875` isn’t a 64-bit float.
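One way to check that exact sum is with the `decimal` module, raising the context precision first so that the addition itself isn't rounded:

```python
from decimal import Decimal, getcontext

getcontext().prec = 60  # the default is 28 digits, which would round the sum
print(Decimal(0.1) + Decimal(0.2))
# 0.3000000000000000166533453693773481063544750213623046875
```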
#### step 3: look at the nearest floating point numbers
Now, let’s look at the floating point numbers around `0.3`. Here’s the closest floating point number to `0.3` (usually written as just `0.3`, even though that isn’t its exact value):
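One way to see its exact value is the `decimal` module again (converting a float to a `Decimal` is exact):

```python
from decimal import Decimal

# the exact value of the float closest to 0.3
print(Decimal(0.3))
# 0.299999999999999988897769753748434595763683319091796875
```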
We can figure out the next floating point number after `0.3` by serializing `0.3` to 8 bytes with `struct.pack`, adding 1, and then using `struct.unpack`:
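Here's a sketch of that trick, interpreting the 8 bytes as a little-endian unsigned integer:

```python
import struct

# reinterpret 0.3's 8 bytes as an integer, add 1, and reinterpret back
(bits,) = struct.unpack("<Q", struct.pack("<d", 0.3))
(next_float,) = struct.unpack("<d", struct.pack("<Q", bits + 1))
print(next_float)  # 0.30000000000000004
```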
In the binary representation of a floating point number, there’s a number called the “significand”. In cases like this (where the result is exactly in between 2 successive floating point numbers), it’ll round to the one with the even significand.

The last digit of the big endian hex representation of `0.30000000000000004` is `4`, so that’s the one with the even significand (because the significand is at the end).
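You can check this with `struct.pack` (`bytes.hex` gives the big endian hex string):

```python
import struct

# big endian bytes of each float, as hex; the significand bits are at the end
print(struct.pack(">d", 0.3).hex())                  # 3fd3333333333333
print(struct.pack(">d", 0.30000000000000004).hex())  # 3fd3333333333334
```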
Above we did the calculation in decimal, because that’s a little more intuitive to read. But of course computers don’t do these calculations in decimal – they’re done in a base 2 representation. So I wanted to get an idea of how that worked too.
I don’t think this binary calculation part of the post is particularly clear, but it was helpful for me to write out. There are really a lot of numbers and it might be terrible to read.
I’m ignoring the sign bit (the first bit) because we only need these functions to work on two numbers (0.1 and 0.2) and those two numbers are both positive.
(you might legitimately be worried about floating point accuracy issues with this calculation, but in this case I’m pretty sure it’s fine because these numbers by definition don’t have accuracy issues – the floating point numbers starting at `2**-4` go up in steps of `1/2**(52 + 4)`)
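If you want to double-check that spacing claim, `math.ulp` (Python 3.9+) reports the gap between a float and the next one:

```python
import math

# the gap between consecutive floats near 0.1 really is 1/2**(52 + 4),
# and near 0.2 it's 1/2**(52 + 3)
print(math.ulp(0.1) == 1 / 2**56)  # True
print(math.ulp(0.2) == 1 / 2**55)  # True
```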
So we need to add together `2702159776422298` and `3602879701896397`
```
>>> 2702159776422298 + 3602879701896397
6305039478318695
```
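In case you want to see where those two integers come from, here's a sketch (`fraction_bits` is a hypothetical helper, not something from earlier in the post): `2702159776422298` is 0.2's low 52 bits, and `3602879701896397` is 0.1's full significand (with the implicit leading 1), shifted right one place to line up with 0.2's exponent.

```python
import struct

def fraction_bits(x):
    # low 52 bits of the IEEE 754 representation (hypothetical helper)
    (bits,) = struct.unpack(">Q", struct.pack(">d", x))
    return bits & (2**52 - 1)

# 0.2 = 2**-3 + fraction_bits(0.2) / 2**55
print(fraction_bits(0.2))                 # 2702159776422298
# 0.1's full significand, aligned with 0.2's exponent
print((2**52 + fraction_bits(0.1)) // 2)  # 3602879701896397
```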
Cool. But `6305039478318695` is more than 2**52 - 1 (the maximum value for a significand), so we have a problem:
```
>>> 6305039478318695 > 2**52
True
```
#### step 4: increase the exponent
Right now our answer is
```
2**-3 + 6305039478318695 / 2**(52 + 3)
```
First, let’s subtract 2**52 to get
```
2**-2 + 1801439850948199 / 2**(52 + 3)
```
This is almost perfect, but the `2**(52 + 3)` at the end there needs to be a `2**(52 + 2)`.
So we need to divide 1801439850948199 by 2. This is where we run into inaccuracies: `1801439850948199` is odd!
```
>>> 1801439850948199 / 2
900719925474099.5
```
It’s exactly in between two integers, so we round to the nearest even number (which is what the floating point specification says to do). So our final floating point result is:
```
>>> 2**-2 + 900719925474100 / 2**(52 + 2)
0.30000000000000004
```
That’s the answer we expected:
```
>>> 0.1 + 0.2
0.30000000000000004
```
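Incidentally, Python’s built-in `round` uses the same round-half-to-even rule (“banker’s rounding”), so you can watch ties break toward the even neighbor:

```python
# ties round to the even neighbor, not always up
print(round(900719925474099.5))                  # 900719925474100
print([round(x) for x in (0.5, 1.5, 2.5, 3.5)])  # [0, 2, 2, 4]
```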
#### this probably isn’t exactly how it works in hardware
The way I’ve described the operations here isn’t literally what happens when you do floating point addition (hardware isn’t “solving for X”, for example); I’m sure there are a lot of efficient tricks. But I think it’s about the same idea.
The computer isn’t actually printing out the exact value of the number, instead it’s printing out the _shortest_ decimal number `d` which has the property that our floating point number `f` is the closest floating point number to `d`.
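For example (using `Decimal` to show the exact value):

```python
from decimal import Decimal

# the float written "0.3" isn't exactly 0.3...
print(Decimal(0.3))  # 0.299999999999999988897769753748434595763683319091796875
# ...but "0.3" is the shortest decimal string that round-trips back to it
print(0.3)                  # 0.3
print(float("0.3") == 0.3)  # True
```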
It turns out that doing this efficiently isn’t trivial at all, and there are a bunch of academic papers about it like [Printing Floating-Point Numbers Quickly and Accurately][1] or [How to print floating point numbers accurately][2].
#### would it be more intuitive if computers printed out the exact value of a float?
Rounding to a nice clean decimal value is nice, but in a way I feel like it might be more intuitive if computers just printed out the exact value of a floating point number – it might make it seem a lot less surprising when you get weird results.
Someone in the comments somewhere pointed out that `<?php echo (0.1 + 0.2 );?>` prints out `0.3`. Does that mean that floating point math is different in PHP?
But if I run `<?php echo ((0.1 + 0.2) - 0.3);?>` on [this page][3], I get the exact same answer as in Python: `5.5511151231258E-17`. So it seems like the underlying floating point math is the same.
I think the reason that `0.1 + 0.2` prints out `0.3` in PHP is that PHP’s algorithm for displaying floating point numbers is less precise than Python’s – it’ll display `0.3` even though the number it’s printing isn’t the closest floating point number to 0.3.
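PHP’s default `precision` ini setting is 14 significant digits (versus the 17 needed to uniquely identify a double), and you can mimic that display in Python:

```python
# formatting to 14 significant digits (PHP's default "precision" setting)
# hides the difference; 17 digits shows it
print(format(0.1 + 0.2, ".14g"))  # 0.3
print(format(0.1 + 0.2, ".17g"))  # 0.30000000000000004
```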
I kind of doubt that anyone had the patience to follow all of that arithmetic, but it was helpful for me to write down, so I’m publishing this post anyway. Hopefully some of this makes sense.