
I'm writing code to convert binary numbers between 0 and 1 to decimal. I tested it with 0.1 (equivalent to 0.5 in decimal) and it worked. When I tested it with 0.01 and 0.001 I got wrong (albeit close) answers. Stepping through it in Python Tutor, I found that on the second iteration it failed to turn the 0.1 float into the string "0.1"; it returned "0.09999999999999964" instead. Is there another way to make this conversion?

This is an algorithm from a numerical methods course.
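The drift described above is easy to reproduce without the full conversion loop (this is a minimal sketch of the symptom, not the asker's exact code): subtracting two floats that look exact in decimal yields a double whose string form is not the expected "0.1".

```python
# Neither 0.3 nor 0.2 is exactly representable in binary floating
# point, so their difference is not the double nearest to 0.1.
diff = 0.3 - 0.2
print(str(diff))    # not '0.1'
print(diff == 0.1)  # False: the two doubles differ in the last bit
```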

  • Read about numerical representations, and specifically about floating point. You can't expect a mathematically perfect representation of all numbers, so you'll have to decide how to handle rounding errors and deal with them explicitly. Commented Jul 6, 2019 at 22:04
  • This thread will answer all your questions stackoverflow.com/questions/21895756/… Commented Jul 6, 2019 at 23:34

1 Answer


The error is caused by floating-point rounding. You can round when building your strings by using format:

str(0.1 + 0.2)
# => '0.30000000000000004'

'{:.10f}'.format(0.1 + 0.2)
# => '0.3000000000'

The format string .10f tells format to render the float with 10 digits after the decimal point.
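Fixed-precision formatting leaves trailing zeros, which you may not want in a converter's output. A small cleanup helper can strip them afterwards (the `clean` name and the 10-place default are illustrative choices, not part of the original answer):

```python
def clean(x, places=10):
    """Round a float to `places` decimal places, then drop trailing
    zeros and a dangling decimal point."""
    return '{:.{}f}'.format(x, places).rstrip('0').rstrip('.')

print(clean(0.1 + 0.2))  # '0.3'
print(clean(2.0))        # '2'
```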


Alternatively, you can use an exact representation such as the decimal module, which stores decimal fractions without rounding error:

from decimal import Decimal
str(Decimal('0.1') + Decimal('0.2'))
# => '0.3'

Notice that 0.1 and 0.2 are written as strings, so they are never converted to binary floats and never pick up rounding error.
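Applying this to the original task, here is a hedged sketch of a converter built on Decimal (the function name is hypothetical). It relies on the fact that a binary fraction of k bits always terminates within k decimal places, so setting the precision a little above k keeps every intermediate step exact:

```python
from decimal import Decimal, getcontext

def binary_fraction_to_decimal(bits):
    """Sketch: convert a binary fraction string between 0 and 1
    (e.g. '0.101') to an exact decimal string."""
    frac = bits.split('.')[1]
    # k fractional bits need at most k decimal places; add margin.
    getcontext().prec = len(frac) + 5
    # Weighted sum of the bits: bit i contributes 2**-i, computed
    # in Decimal so no binary rounding ever occurs.
    value = sum(int(b) * Decimal(2) ** -i
                for i, b in enumerate(frac, start=1))
    return str(value)

print(binary_fraction_to_decimal('0.1'))    # '0.5'
print(binary_fraction_to_decimal('0.001'))  # '0.125'
```

Because every step stays in decimal arithmetic, the 0.01 and 0.001 inputs from the question convert exactly, with no drift on later iterations.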
