
The real problem is: how do you actually design a test that will catch these issues? If you have 3 crashes in millions of miles, you are pretty unlikely to reproduce those failures during testing.
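A quick back-of-envelope Poisson calculation makes the scale of the problem concrete. The numbers are assumptions for illustration only (reading "3 crashes in millions of miles" as roughly 3 per 100 million miles, a denominator I'm picking arbitrarily; Tesla hasn't published failure rates at this granularity):

    import math

    rate = 3 / 100_000_000   # assumed failures per mile (illustrative)
    confidence = 0.95        # desired chance of seeing >= 1 failure in testing

    # Poisson model: P(at least one failure in n miles) = 1 - exp(-rate * n)
    miles_needed = -math.log(1 - confidence) / rate
    print(f"{miles_needed:,.0f} test miles needed")  # ~100 million miles

At that assumed rate you'd need on the order of 100 million test miles just to have a 95% chance of seeing the failure even once, which is why rare failure modes are so hard to catch before release.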

I wish Tesla were more open about how they actually test their Autopilot software.



While automated driving is a relatively new field, the science and data of traffic conditions and accidents is quite deep. You don't have to blindly wait until your beta users get into accidents. In the case of the first fatality, how is it possible that Tesla engineers were unaware that a light-colored vehicle can blend into a bright sky?

Of course they were aware of that situation. They chose to deprioritize it to reduce the incidence of false positives from overhead highway signs. There's nothing wrong with attempting to squash false positives, which can lead to deadly situations themselves. But nothing prevented Tesla from doing adequate testing to determine what unintended consequences their software modifications would entail. They didn't have to wait for someone to be decapitated to realize there's a trade-off in reducing false positives from overhead signs.
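To make that trade-off concrete, here's a toy sketch with entirely made-up numbers (this is not Tesla's actual detection pipeline, just an illustration of threshold tuning): detection scores for overhead signs and real obstacles overlap, so any threshold raised to cut phantom braking also raises the missed-obstacle rate.

    import random

    random.seed(0)
    # Hypothetical detection-confidence scores: overhead signs tend to
    # score lower than real obstacles, but the distributions overlap.
    signs     = [random.gauss(0.4, 0.15) for _ in range(10_000)]  # ignore these
    obstacles = [random.gauss(0.7, 0.15) for _ in range(10_000)]  # brake for these

    for threshold in (0.3, 0.5, 0.7):
        fp = sum(s >= threshold for s in signs) / len(signs)         # phantom brakes
        fn = sum(o < threshold for o in obstacles) / len(obstacles)  # missed obstacles
        print(f"threshold={threshold}: phantom-brake rate={fp:.1%}, "
              f"missed-obstacle rate={fn:.1%}")

Sweeping the threshold from 0.3 to 0.7 drives the phantom-brake rate down while the missed-obstacle rate climbs; measuring exactly that curve against recorded edge cases is the "adequate testing" in question.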



