Herbie: Automatically rewrites expressions to minimize floating point error (github.com/uwplse)
117 points by santaclaus on Jan 24, 2016 | hide | past | favorite | 19 comments


The OP copied only half my tweet; the full title should be "...to minimize floating-point precision errors". This tool is specifically for improving the numerical accuracy of equations when translated into floating-point code: "minimizing errors" makes no sense without the qualifiers.

I do a lot of procedural graphics on the GPU and am constantly plagued by precision issues (you cannot copy equations verbatim into code; you must account for overflow/underflow and other numerical issues). Herbie is pretty magical in that it can _automatically_ translate math expressions into a form that's friendlier for computers.
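For readers unfamiliar with the problem: the classic illustration is catastrophic cancellation, where subtracting two nearly equal values destroys most of the significant digits. The snippet below shows the textbook rewrite of sqrt(x+1) - sqrt(x) via its conjugate; this is the *kind* of transformation Herbie finds automatically, not necessarily its literal output for this input.

```python
import math

def naive(x):
    # Direct translation of the formula. For large x, sqrt(x + 1)
    # and sqrt(x) are nearly equal, so the subtraction cancels
    # most of the significant bits.
    return math.sqrt(x + 1) - math.sqrt(x)

def rewritten(x):
    # Algebraically identical (multiply by the conjugate), but with
    # no subtraction of nearly equal values, so it stays accurate
    # across the whole range of large x.
    return 1.0 / (math.sqrt(x + 1) + math.sqrt(x))

x = 1e12
print(naive(x))      # only a few digits are trustworthy here
print(rewritten(x))  # close to the true value ~ 1 / (2 * sqrt(x))
```

Both functions compute the same real-valued expression; only the floating-point behavior differs, which is exactly the gap Herbie targets.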

Also, the GitHub source was linked because the website [1] crashed under traffic earlier today (the full site contains an interactive solver and examples).

[1] http://herbie.uwplse.org/


Do you have an example of a graphics algorithm that you were helped with by this tool / a description of how that went?


There's actually a great write-up on how Herbie was used in the real world to fix and improve math.js: https://pavpanchekha.com/blog/casio-mathjs.html

I've personally used it for improving some distance-field approximation code on the GPU (made it easier to translate the math into numerically-stable code).


Adam, I'm one of the authors of Herbie. I'd love to hear about the distance-field approximation code you improved using Herbie. Is there any chance you can email me some details? You can use my personal email (on my profile) or the [email protected] mailing list.


Ok, we changed the title from "Herbie: A tool to automatically rewrite arithmetic expressions to minimize error" to be closer to what you've said here. If anyone can suggest a better (more accurate and neutral) title, we can change it again.


Wouldn't it need to know the probable value ranges of the input variables?
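It would; which rewrite is "more accurate" can depend entirely on where the inputs live. A small sketch (hypothetical illustration, not Herbie's actual sampling machinery) using log(1 + x): for tiny x the naive form is catastrophically wrong, while for moderate x both forms agree.

```python
import math

x = 1e-16
# In double precision, 1 + 1e-16 rounds to exactly 1.0, so the
# naive expression returns log(1.0) == 0.0: total loss of accuracy.
naive = math.log(1 + x)

# log1p computes log(1 + x) without forming 1 + x explicitly,
# so it remains accurate for x near zero.
accurate = math.log1p(x)
print(naive, accurate)

# For inputs away from zero, the two forms agree closely, so the
# "right" expression depends on the expected input range.
print(math.log(1 + 0.5), math.log1p(0.5))
```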


I think it would be interesting if something like this could be offered as a compiler optimization.



Looks quite interesting. Is there a paper/tech report that describes the approach? The website [1] seems to be down at the moment.

[1] http://herbie.uwplse.org/


Yes, this was a best paper award at PLDI '15: http://homes.cs.washington.edu/~jrw12/herbie.pdf



The heck is Racket?


The language that this site is written in.


Huh? Isn't HN written in Arc?


Right, meant to say "the platform that powers this site".

Arc itself is an interpreter written in Racket. The site runs on Racket.


The Arc compiler converts Arc expressions into Scheme (in the mzscheme dialect) as implemented by Racket.


Lisp people like to refer to a language modification as a "language", partly because the lexical structure is so simple and the metaprogramming facilities make such modifications easy. But yeah, Racket and Arc are only similar in surface syntax.



A Scheme dialect formerly known as PLT Scheme (its IDE was DrScheme, now DrRacket)



