Hacker News
Security lapse exposed Clearview AI source code (techcrunch.com)
145 points by jbegley on April 16, 2020 | hide | past | favorite | 35 comments


Can we please not do this? "Hussein said that he found some 70,000 videos in one of Clearview’s cloud storage buckets, taken from a camera installed at face-height in the lobby of a residential building. The videos show residents entering and leaving the building.

Ton-That explained that, “as part of prototyping a security camera product we collected some raw video strictly for debugging purposes, with the permission of the building management.”"


> with the permission of the building management

It makes me angry that's all the permission they think they need - and even more so that it's all they are probably legally required to need.


That's why we need laws for facial recognition and biometric rights, much like the ones some states are starting to pass.


> with the permission of the building management

Hey building manager! Wanna make some money with no additional work? We will collect anonymized data from your cameras, but that's it!


I'm curious whether the repo supports the recent story that Clearview's early programmers came from an alt-right social circle. They've publicly denied those links, which the journalists seemed to document quite well.

https://www.huffpost.com/entry/clearview-ai-facial-recogniti...

Does anyone know the security researcher, so we can ask them to run this?

  git log --format='%aN' | sort | uniq -c | sort -rn
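For anyone who does end up with a clone, here's what that one-liner reports; the throwaway repo, author names, and commits below are invented purely to demonstrate the pipeline:

```shell
# Build a throwaway repo with two fake authors, then count
# commits per author name, most prolific first.
set -e
tmp=$(mktemp -d) && cd "$tmp"
git init -q .
git -c user.name='Alice' -c user.email='a@example.com' \
    commit -q --allow-empty -m 'first'
git -c user.name='Alice' -c user.email='a@example.com' \
    commit -q --allow-empty -m 'second'
git -c user.name='Bob' -c user.email='b@example.com' \
    commit -q --allow-empty -m 'third'
# %aN prints each commit's author name (honoring .mailmap);
# sort | uniq -c | sort -rn turns that into a ranked tally.
git log --format='%aN' | sort | uniq -c | sort -rn
```

Against a real checkout you'd just run the final `git log` line from the repo root.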


Clearview is bad. And I haven't dug into the supporting materials for this story at all. But it's disquieting that this story appears to include sensitive private information obtained through security research and released directly to a media outlet, including camera footage apparently taken from a compromised cloud storage bucket. That's not how security research works.


It is perfect irony! They left their code wide open out on the web, the same way they went out scraping for people's pictures on the wide open web.


>That's not how security research works.

They contacted Clearview and made sure that the issue was fixed before sharing the information they found with the press. What problem do you see with this? The researchers have no responsibility to keep Clearview's failures secret.


> They contacted Clearview and made sure that the issue was fixed before sharing the information they found with the press. What problem do you see with this? The researchers have no responsibility to keep Clearview's failures secret.

No problem with disclosing the vulnerability, but it's definitely problematic making the video footage publicly available—sure, anonymised, but we know how reliable that is, even if done with the best of intentions. At least, that's how I read your parent:

> But it's disquieting that this story appears to include sensitive private information obtained through security research and released directly to a media outlet, including camera footage apparently taken from a compromised cloud storage bucket. That's not how security research works.


>but it's definitely problematic making the video footage publicly available

Which is why they informed the press that Clearview kept that footage and made it publicly available.

I understand the point you're trying to make, but a small amount of proof is necessary to verify such claims. A few seconds showing that some people were at an apartment building is unlikely to be sensitive, and it's entirely possible they contacted the people before publishing.


> I understand the point you're trying to make, but a small amount of proof is necessary to verify such claims.

I wasn't trying to make the point, only to emphasise my grandparent's point.

But, for what it's worth, I agree with that point: the journalists probably needed to see the video footage as proof, but there was no need for the journalists to publish the video footage. If I don't trust reporters when they tell me that they've seen the footage themselves, then why would I trust them when they show me footage that could have come from anywhere but that they say came from a data leak?


I am most uneasy with the legality and normalization of the lobby footage. I wonder if the residents of that building have been alerted to their exposure on this?

The residents may have signed away their rights to surveillance footage being shared with third-party "partners", as in many privacy agreements in the U.S. If that's the case, it probably wasn't clear to them what they were signing or how the footage would be used.

Now that this has come to light, I wonder if the next chapter involves some legal action from these residents? Would they have a case?

NYC has really progressive renters' rights laws and many public advocacy groups who may want to study the rental agreements that make this stuff possible.


Whenever I read about these kinds of horror stories, I feel so lucky that we have the GDPR in Europe.

AFAIK under GDPR it would be impossible to share that data without the explicit, free, informed consent of all affected persons [1]. Even if it were shared, you have a right to know with whom the data was shared (and for which purpose) and a right to demand erasure.

[1] https://en.wikipedia.org/wiki/General_Data_Protection_Regula...


If it's in their database, why can't it be in our database? It's all public and out there, after all.


That’s a form of No True Scotsman. You have an idea of what “security research” is supposed to mean based on the kinds that have been profitable to you and the acquaintances you keep etc. As someone without skin in this game, I appreciate traditional security research but also muckrakers, immediate disclosure, and many other forms of holding bad actors to account. I don’t care what this is called but I’m glad that Hussein did it.


I couldn't give a rat's ass about coordinated disclosure. Immediate disclosure is fine. You've misconstrued the distinction I'm drawing. It's not between notifying and not notifying; it's between disclosing vulnerabilities and disclosing data obtained through vulnerabilities.


I think it's the kind of call an investigative journalist (which is what a security researcher is akin to) can make using their best judgment. I think the video, right there, of people walking into their apartment, makes the whole nefarious agenda so much more visceral. OK, maybe they are a cheating couple who will be discovered by their partners, or whatever... but I think the tradeoffs are worth it, especially given that the media also blurred the faces.


That's all public data, if you ask Clearview.


Point being? It's not and nobody in their right mind is asking Clearview for their opinion on morals or privacy.


Point being that if they can scrape the web, collect and correlate PII under a claim that's basically "finders keepers!", and sell that information, then anyone else gaining unrestricted access to that same information can make the same claim of it being publicly available, and thus fair game to do with as they will.


> Hussein, who has previously reported security issues at several startups, including MoviePass, Remine and Blind, said he reported the exposure to Clearview but declined to accept a bounty, which he said if signed would have barred him from publicly disclosing the security lapse.

seems grey to me


The "accept the money and keep quiet" part? Yep, very gray.


I like this Hussein guy though. Glad he acted selflessly. Need more like him.


I like Hussein as well!


That's not grey. Declining to sign a binding contract that says you have to keep your mouth shut is everyone's right.


eh? That's done literally all the time. But before the fact, not after.

I believe ptacek was complaining about unilateral disclosure. I'm arguing that the researcher was pushed into it. However, it's still grey: just because you're presented with an option you don't like doesn't mean you leak actual data. You can still disclose the breach without exposing the data. It's very grey.


The way the report was written, I suspect this was another open GitLab server.

Most people who self-host GitLab don't realize that, between the default self-registration and the "Explore" button at the bottom, it's possible for entirely random individuals to gain enormous access.

I have written the Naval Postgraduate School several times since December about their open GitLab server (though maybe it is supposed to be open), which seems exposed via the "Explore" tab at the bottom: https://204.102.228.54/users/sign_in
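For anyone administering a self-hosted instance, the default self-registration described above can be switched off. A hedged sketch for an Omnibus install (the setting name is taken from the stock gitlab.rb template; check it against your GitLab version):

```ruby
# /etc/gitlab/gitlab.rb -- disable open self-registration so that
# random visitors can't create accounts and browse via "Explore".
gitlab_rails['gitlab_signup_enabled'] = false
```

After editing, apply it with `sudo gitlab-ctl reconfigure`. Note this only stops new sign-ups; the visibility (public/internal/private) of existing projects still needs to be audited separately.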


Please send me an email at harlan -at- dds.mil or file a report on https://hackerone.com/deptofdefense and we'll get it closed.



Browsing through it, there are some references in README files to cloning repos without an account, so I think the users are aware.


Probably not supposed to be open. Although technically anything developed by the govt is public domain...


Does anyone have a copy of their source code?


"Inside those buckets, Clearview stored copies of its finished Windows, Mac and Android apps, as well as its iOS app, which Apple recently blocked for violating its rules. The storage buckets also contained early, pre-release developer app versions that are typically only for testing, Hussein said."

Smartphone apps for interfacing with a SaaS are now "Clearview AI source code"?


The company is called "Clearview AI". Not "Clearview" source code for AI.

Directly before what you quote:

> The repository contained Clearview’s source code, which could be used to compile and run the apps from scratch. The repository also stored some of the company’s secret keys and credentials, which granted access to Clearview’s cloud storage buckets. Inside those buckets, ...


Is anyone aware of any attempts to pollute their database? It seems like there should be a way of injecting bad data into their system. Maybe make a FB account, post some mis-tagged pictures, and 3 months later file a records request to see if they've been vacuumed up?



