• Question: Does floating-point precision hold up in science, and what do you do when it eventually loses accuracy and falls apart?

    Asked by anon-230014 to Sarah, Isaac, Hira, Elena, Anisha, Alex on 15 Nov 2019.
    • Photo: Alex Leide

      Alex Leide answered on 15 Nov 2019:


      This is a really complicated question, and I don’t fully understand it myself! I think floating-point precision causes problems in computational calculations, where numbers like pi have infinitely many digits and so can never be written down exactly. As an experimentalist, the errors in our equipment and experiment design will hide any problems with floating-point precision; we never get anywhere near the kind of accuracy where it becomes an issue. I think most theories and calculations are limited by the assumptions you make to simplify them, so again floating-point precision doesn’t cause too much of a problem. It is more of an issue in computer science and engineering, where calculations need to be exact and a tiny rounding error really can make things fall apart. In my part of science we can never get to that kind of precision for other reasons.
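
      To make the point about pi concrete, here is a tiny Python sketch (not from Alex's answer, just a standard illustration of how a computer stores pi):

      ```python
      import math

      # A double-precision float holds 53 bits, roughly 15-17 significant
      # decimal digits, so math.pi is only a rounded snapshot of pi.
      print(math.pi)  # 3.141592653589793 -- the stored digits stop here

      # The real pi continues forever (3.14159265358979323846...), so any
      # calculation using math.pi starts from a slightly rounded value.
      ```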

    • Photo: Anisha Wijeyesekera

      Anisha Wijeyesekera answered on 17 Nov 2019:


      Thanks Alex! Sorry Comrade Samuel, I haven't a clue!

    • Photo: Elena Maters

      Elena Maters answered on 17 Nov 2019:


      I have to admit this question has stumped me too! Thanks Alex for sharing your thoughts 🙂

    • Photo: Sarah Knight

      Sarah Knight answered on 18 Nov 2019:


      Yup, this is a very complicated question! 🙂 My understanding of floating-point numbers is as follows: computer memory is limited, so we store numbers in a compact format called “floating point”. A floating-point number consists of two parts: the actual digits (called the “significand”), and something to tell you where within that string of digits the decimal point should be placed (the “exponent”). This means you can write very large or very small numbers efficiently, to the same relative accuracy, and use them in calculations. So far so good! However, computers work in binary (they use 1s and 0s), and many everyday decimal numbers, like 0.1, cannot be represented exactly in binary; in other words, precision is limited.

      As Alex says, this isn’t really a problem for many types of science. I’m a psychologist, and the data I collect from human participants is far, far messier than any issues to do with floating-point precision! However, I have come across this problem when coding 🙂 I sometimes set up experiments using Python, and I have to remember not to do things like test whether two floating-point numbers are equal to each other — because even if they are the same as each other “in real life”, it’s likely that the computer won’t have stored them as *exactly* the same thing! Comparing the values of two integers works fine, though, because those can be stored precisely. So yes, I have come across this problem in some sense!
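
      To see the pitfall Sarah describes, here is a minimal sketch in Python (the language she mentions). The snippet is a standard illustration rather than anything from her answer:

      ```python
      import math

      # 0.1, 0.2 and 0.3 have no exact binary representation, so the stored
      # values are very close to, but not exactly, the decimals we typed in.
      a = 0.1 + 0.2
      b = 0.3

      print(a == b)       # False: the two floats differ in their last bits
      print(f"{a:.20f}")  # 0.30000000000000004441
      print(f"{b:.20f}")  # 0.29999999999999998890

      # The usual fix: compare with a tolerance instead of exact equality.
      print(math.isclose(a, b))  # True

      # Integers, as Sarah notes, are stored exactly, so == is safe for them.
      print(1 + 2 == 3)   # True
      ```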
