
That strikes me as an unnecessarily elitist answer that holds us back.

I'm hardly a mathematician, or even college educated, but AFAICT this all boils down to the fact that I can type a number into the computer, and it can't represent it exactly internally, so it misrepresents it, silently.
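To make the complaint concrete: here is a minimal sketch in Python (the same thing happens in most languages) of typing in a decimal number and getting the nearest binary float back, silently.

```python
from decimal import Decimal

# The literal 0.1 cannot be represented exactly in binary floating
# point; the nearest representable double is stored instead.
# Decimal(float) reveals the exact value actually held in memory:
print(Decimal(0.1))
# 0.1000000000000000055511151231257827021181583404541015625

# The tiny errors compound, so "obvious" equalities fail:
print(0.1 + 0.2 == 0.3)  # False
```

Nothing warns the user that the stored value differs from what they typed, which is exactly the "silent misrepresentation" being described.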

Where I come from, that's called "a bug", regardless of cause.

Non-mathematicians (and even non-accountants and non-financiers and the like) have to do math on money all the time. They do it in daily life. Some of them even write programs to do it, because they know enough about computers to do that.

They don't expect that their expensive smartphone is going to screw up the calculation due to some esoteric representational reason that they need four or eight years of college to be aware of, let alone to understand or explain.

And they shouldn't need to!

I would argue that if computers can't do the job correctly in every case without the user jumping through hoops, then we should be continuing to develop methods to make it better.



That's not what I'm talking about. There are legal frameworks for how to compute these things (which I only vaguely know exist, so I know enough that I personally should not be implementing any of this without significant outside input and expertise), and if you get them wrong, people die. This has been a major issue in at least two countries (the UK with the Post Office and Australia with Robodebt). I'd argue it's more elitist to think "I can program numbers into a computer, so I can do anything involving numbers" than to defer to non-software-developer experts.

On needing years of college to learn this: the "weirdness" of floats should be covered in high school science (it was for me), and is drummed in again in first-year science labs. Any time you actually need to work with numbers (rather than, say, checking whether a group is abelian), you're dealing with how to compute, and these rules predate electronics.


That’s because you haven’t looked at what floats are designed for. They were created for the purpose of high performance scientific computing, so they quite deliberately, and explicitly, trade off perfect accuracy for much greater performance.

Computers are perfectly good at providing arbitrary-precision numbers and doing arithmetic on them, and most languages expose explicit number types for that purpose. But there's a reason why floats are called floats, and not numbers. It's because floats aren't numbers (at least not base-10 numbers)! They're a pretty accurate, but highly performant, approximation of base-10 numbers.
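As a sketch of those explicit number types, Python ships two in its standard library: `decimal.Decimal` (arbitrary-precision base-10 arithmetic) and `fractions.Fraction` (exact rationals). Constructed from strings or integers, they avoid the binary approximation entirely:

```python
from decimal import Decimal
from fractions import Fraction

# Exact base-10 arithmetic; the equality that fails for floats holds here:
print(Decimal("0.1") + Decimal("0.2") == Decimal("0.3"))        # True

# Exact rational arithmetic; no representation error at all:
print(Fraction(1, 10) + Fraction(2, 10) == Fraction(3, 10))     # True
```

The trade-off is speed: these types are orders of magnitude slower than hardware floats, which is precisely the bargain floats were designed to make in the other direction.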



