Hacker News

To be clear, information on the internet has always been assumed unreliable. It isn't like you typically click only the very first Google link, because 1) Google is that good (they aren't) or 2) the data is reliable without corroboration.


> It isn't like you typically click on only the very first Google link because 1) Google is that good (they aren't)

I know it's popular to hate Google around here, but yes, they are. It's their core competency. You can argue that they're doing a bad job of it, or get bogged down in an argument about SEO, or the morality and economics of AdWords, but outside of our bubble here, there are billions of people who type Facebook into Google to get to the Facebook login screen, and pick that first result. Or Bank of America, or $city property taxes. (Probably not those, specifically, because the majority of the world's population speaks languages other than English.)


It's not a binary reliable/unreliable.

AI just introduces another layer of mistrust to a system with a lot of perverse incentives.

In other words, even if the information was also unreliable in the past, that doesn't mean it can't get much worse in the future.

At some point, even experts will be overwhelmed with the amount of data to sift through, because the generated data is going to be optimized for "looking" correct, not "being" correct.


This is a matter of signal-noise. What people are saying when they complain about this is that the cost of producing noise that looks like signal has gone down dramatically.


It depends on what your personal filters are - I've always felt like a large amount of what I see on the internet is clearly shaped in some artificial way.

Either by a "raid" from some organized group seeking to shape discourse, or just accidentally by someone creating the right conditions via entertainment. With enough digging into names/phrases you can backtrack to the source.

LLMs trained on these sources are going to have the same biases inherently. And that's before considering that the people training these things could just obfuscate a particularly biased node and claim innocence.



