
0x0 ,

Minority Report vibes...

Vipsu ,
@Vipsu@lemmy.world avatar

Can't wait for something like this to get hacked.
There'll be a lot of explaining to do.

dogsnest ,
@dogsnest@lemmy.world avatar

I read this in a Ricky Ricardo voice.

Jayjader , (edited )

Still, I think the only way that would result in change is if the hack specifically went after someone powerful like the mayor or one of the richest business owners in town.

Dasnap ,
@Dasnap@lemmy.world avatar

Did the post office let them borrow their tech?

PseudorandomNoise ,
@PseudorandomNoise@lemmy.world avatar

Despite concerns about accuracy and potential misuse, facial recognition technology seems poised for a surge in popularity. California-based restaurant CaliExpress by Flippy now allows customers to pay for their meals with a simple scan of their face, showcasing the potential of facial payment technology.

Oh boy, I can’t wait to be charged for someone else’s meal because they look just enough like me to trigger a payment.

0x0 ,

Just go to a restaurant where public figures go and use a photo of their face.

mPony ,

like Deadpool did, with a stapler

Cethin ,

I have an identical twin. This stuff is going to cause so many issues even if it worked perfectly.

xavier666 ,
@xavier666@lemm.ee avatar

Sudden resurgence of the movie "Face Off"

AtariDump ,
xavier666 ,
@xavier666@lemm.ee avatar

Hell yeah brother!

Thassodar ,

I don't see this as a negative.

CeeBee ,

Ok, some context here from someone who built and worked with this kind of tech for a while.

Twins are no issue. I'm not even joking, we tried for multiple months in a live test environment to get the system to trip over itself, but it just wouldn't. Each twin was detected perfectly every time. In fact, I myself could only tell them apart by their clothes. They had very different styles.

The reality with this tech is that, just like everything else, it can't be perfect (at least not yet). For all the false detections you hear about, there have been millions upon millions of correct ones.

BeardedBlaze ,
@BeardedBlaze@lemmy.world avatar

Twins are no issue. Random ass person however is. Lol

CeeBee ,

Yes, because like I said, nothing is ever perfect. There can always be a billion little things affecting each and every detection.

A better statement would be "only one false detection out of 10 million"

Zron ,

You want to know a better system?

What if each person had some kind of physical passkey that linked them to their money, and they used that to pay for food?

We could even have a bunch of security put around this passkey that makes it’s really easy to disable it if it gets lost or stolen.

As for shoplifting, what if we had some kind of societal system that levied punishments against people by providing a place where the victim and accused can show evidence for and against the infraction, and an impartial pool of people decides if they need to be punished or not.

CeeBee ,

100%

I don't disagree with a word you said.

FR for a payment system is dumb.

fishpen0 ,

Another way to look at that is ~810 people having an issue with a different 810 people every single day, assuming only one scan per day. That's nearly 600,000 people having a huge fucking problem at least once every single year (sketched below).

I have this problem with my face in the TSA pre and passport system, and every time I fly it gets worse, because their confidence that it is correct keeps going up and their trust in my actual fucking ID keeps going down
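
A back-of-the-envelope sketch of that arithmetic, assuming the 1-in-10-million false match rate quoted above, one scan per person per day, and the whole world enrolled (all assumptions, not measurements):

```python
# Daily and yearly people caught up in false matches at an FMR of 1e-7,
# with one scan per person per day worldwide. Upper-bound arithmetic:
# it assumes a different pair of people is affected every time.
world_population = 8.1e9
fmr = 1e-7  # "one false detection out of 10 million" from the comment above

false_matches_per_day = world_population * fmr  # ~810 bad scans per day
people_per_day = 2 * false_matches_per_day      # the scanned person plus their lookalike
people_per_year = people_per_day * 365

print(f"~{false_matches_per_day:.0f} false matches per day")   # ~810
print(f"~{people_per_year:,.0f} people affected per year")     # ~591,300
```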

CeeBee , (edited )

I have this problem with my face in the TSA pre and passport system

Interesting. Can you elaborate on this?

Edit: downvotes for asking an honest question. People are dumb

MonkderDritte ,

it can't be perfect (at least not yet).

Or ever, because it locks you out after a drunken night otherwise.

CeeBee ,

Or ever because there is no such thing as 100% in reality. You can only add more digits at the end of your accuracy, but it will never reach 100.

Cethin ,

This tech (AI detection) or purpose built facial recognition algorithms?

boatswain ,

In fact, I myself could only tell them apart by their clothes. They had very different styles.

This makes it sound like you only tried one particular set of twins--unless there were multiple sets, and in each set the two had very different styles? I'm no statistician, but a single set doesn't seem statistically significant.

CeeBee ,

What I'm saying is we had a deployment in a large facility. It was a partnership with the org that owned the facility to allow us to use their location as a real-world testing area. We're talking about multiple buildings, multiple locations, and thousands of people (all aware of the system being used).

Two of the employees were twins. It wasn't planned, but it did give us a chance to see if twins were a weak point.

That's all I'm saying. It's mostly anecdotal, as I can't share details or numbers.

boatswain ,

Two of the employees were twins. It wasn't planned, but it did give us a chance to see if twins were a weak point.

No, it gave you a chance to see if that particular set of twins was a weak point.

CeeBee ,

With that logic we would need to test the system on every living person to see where it fails.

The system had been tested ad nauseum in a variety of scenarios (including with twins and every other combination you can think of, and many you can't). In this particular situation, a real-world test in a large facility with many hundreds of cameras everywhere, there happened to be twins.

It's a strong data point regardless of your opinion. If it was the only one then you'd have a point. But like I said, it was an anecdotal example.

techt ,

Can you please start linking studies? I think that might actually turn the conversation in your favor. I found a NIST study (pdf link), on page 32, in the discussion portion of 4.2 "False match rates under demographic pairing":

The results above show that false match rates for imposter pairings in likely real-world scenarios are much higher than those measured when imposters are paired with zero-effort.

This seems to say that the false match rate gets higher and higher as the subjects are more demographically similar; the highest error rate on the heat map below that is roughly 0.02.

Something else no one here has talked about yet -- no one is actively trying to get identified as someone else by facial recognition algorithms yet. This study was done on public mugshots, so no effort to fool the algorithm, and the error rates between similar demographics are atrocious.

And my opinion: Entities using facial recognition are going to choose the lowest bidder for their system unless there's a higher security need than, say, a grocery store. So, we have to look at the weakest performing algorithms.

CeeBee ,

My references are the NIST tests.

https://pages.nist.gov/frvt/reports/1N/frvt_1N_report.pdf

That might be the one you're looking at.

Another thing to remember about the NIST tests is that they try to use a standardized threshold across all vendors. The point is to compare the results in a fair manner across systems.

The system I worked on was tested by NIST with an FMR of 1e-5. But we never used that threshold and always used a threshold that equated to 1e-7, which is orders of magnitude more accurate.
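
A minimal sketch of what "a threshold that equates to 1e-7" means in practice: the operating threshold is calibrated so that only the target fraction of known non-match (impostor) comparisons score above it. The score distribution below is a synthetic stand-in, not vendor or NIST data:

```python
import numpy as np

# Calibrate a similarity threshold against impostor comparison scores so the
# empirical exceedance rate approximates a target FMR. Real calibration uses
# far larger labelled datasets; this beta distribution is just illustrative.
rng = np.random.default_rng(0)
impostor_scores = rng.beta(2, 8, size=10_000_000)  # hypothetical non-match scores in [0, 1]

def threshold_for_fmr(scores: np.ndarray, target_fmr: float) -> float:
    # The (1 - target_fmr) quantile: exceeded by roughly target_fmr of impostors.
    return float(np.quantile(scores, 1.0 - target_fmr))

print(f"NIST-style comparison point (FMR 1e-5): {threshold_for_fmr(impostor_scores, 1e-5):.4f}")
print(f"Stricter operating point (FMR 1e-7):    {threshold_for_fmr(impostor_scores, 1e-7):.4f}")
```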

And my opinion: Entities using facial recognition are going to choose the lowest bidder for their system unless there's a higher security need than, say, a grocery store. So, we have to look at the weakest performing algorithms.

This definitely is a massive problem and likely does contribute to poor public perception.

techt ,

Thanks for the response! It sounds like you had access to a higher quality system than the worst, to be sure. Based on your comments I feel that you're projecting the confidence in that system onto the broader topic of facial recognition in general; you're looking at a good example and people here are (perhaps cynically) pointing at the worst ones. Can you offer any perspective from your career experience that might bridge the gap? Why shouldn't we treat all facial recognition implementations as unacceptable if only the best -- and presumably most expensive -- ones are?

A rhetorical question aside from that: is determining one's identity an application where anything below the unachievable success rate of 100% is acceptable?

CeeBee ,

Based on your comments I feel that you're projecting the confidence in that system onto the broader topic of facial recognition in general; you're looking at a good example and people here are (perhaps cynically) pointing at the worst ones. Can you offer any perspective from your career experience that might bridge the gap? Why shouldn't we treat all facial recognition implementations as unacceptable if only the best -- and presumably most expensive -- ones are?

It's a good question, and I don't have the answer to it. But a good example I like to point at is the ACLU's announcement of their test on Amazon's Rekognition system.

They tested the system using the default value of 80% confidence, and their test resulted in 20% false identification. They then boldly claimed that FR systems are all flawed and no one should ever use them.

Amazon even responded saying that the ACLU's test with the default values was irresponsible, and Amazon's right. This was before such public backlash against FR, and the reasoning for a default of 80% confidence was the expectation that most people using it would do silly stuff like celebrity lookalikes. That being said, it was stupid to set the default to 80%, but that's just hindsight speaking.

My point here is that, while FR tech isn't perfect, the public perception is highly skewed. If there was a daily news report detailing the number of correct matches across all systems, these few showing a false match would seem ridiculous. The overwhelming vast majority of news reports on FR are about failure cases. No wonder most people think the tech is fundamentally broken.

A rhetorical question aside from that: is determining one's identity an application where anything below the unachievable success rate of 100% is acceptable?

I think most systems in use today are fine in terms of accuracy. The consideration becomes "how is it being used?" That isn't to say that improvements aren't welcome, but in some cases it's like trying to use the hook on the back of a hammer as a screwdriver. I'm sure it can be made to work, but fundamentally it's the wrong tool for the job.

FR in a payment system is just all wrong. It's literally forcing the use of a tech where it shouldn't be used. FR can be used for validation if increased security is needed, like accessing a bank account. But never as the sole means of authentication. You should still require a bank card + pin, then the system can do FR as a kind of 2FA. The trick here would be to, first, use a good system, and second, lower the threshold to something that borders on "fairly lenient". That way you eliminate any false rejections while still maintaining an incredibly high level of security. In that case the chances of your bank card AND pin being stolen by someone who looks so much like you that it tricks FR is effectively impossible (but it can never be truly zero). And if that person is being targeted by a threat actor who can coordinate such things, then they'd have the resources to just get around the cyber security of the bank from the comfort of anywhere in the world.
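
A minimal sketch of that flow under the assumptions above: card + PIN stay mandatory, and FR is only a lenient second check. The names and the 0.6 threshold are illustrative, not any vendor's API:

```python
from dataclasses import dataclass

# FR as a second factor, never the sole factor. Because card + PIN carry the
# authentication, the face check only needs to catch gross mismatches, so its
# threshold is deliberately lenient to avoid false rejections of the real holder.
LENIENT_FR_THRESHOLD = 0.60  # assumed similarity score in [0, 1]

@dataclass
class AuthResult:
    ok: bool
    reason: str

def authorize(card_valid: bool, pin_valid: bool, fr_score: float) -> AuthResult:
    if not (card_valid and pin_valid):
        return AuthResult(False, "card/PIN failed")   # primary factors are mandatory
    if fr_score < LENIENT_FR_THRESHOLD:
        return AuthResult(False, "face mismatch, escalate to a human")
    return AuthResult(True, "authorized")

print(authorize(card_valid=True, pin_valid=True, fr_score=0.91))
```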

Security in every single circumstance is a trade-off with convenience. Always, and in every scenario.

FR works well with existing access control systems. Swipe your badge card, then it scans you to verify you're the person identified by the badge.

FR also works well in surveillance, with the incredibly important addition of human-in-the-loop. For example, the system I worked on simply reported detections to a SoC (with all the general info about the detection including the live photo and the reference photo). Then the operator would have to look at the details and manually confirm or reject the detection. The system made no decisions, it simply presented the info to an authorized person.

This is the key portion that seems to be missing in all news reports about false arrests and whatnot. I've looked into all the FR related false arrests and from what I could determine none of those cases were handled properly. The detection results were simply taken as gospel truth and no critical thinking was applied. In some of those cases the detection photo and reference (database) photo looked nothing alike. It's just the people operating those systems are either idiots or just don't care. Both of those are policy issues entirely unrelated to the accuracy of the tech.
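
A sketch of the human-in-the-loop shape being described (illustrative names, not the actual product's interfaces):

```python
from dataclasses import dataclass
from typing import Callable

# The system never acts on a detection by itself: it packages the evidence and
# hands it to an operator, whose explicit confirmation is the only thing that
# makes the match actionable. A rejection simply discards the detection.
@dataclass
class Detection:
    live_photo: bytes
    reference_photo: bytes
    similarity: float
    camera_id: str

def report_to_soc(d: Detection, review: Callable[[Detection], bool]) -> bool:
    return review(d)  # a human compares the live photo against the reference photo

# Stub operator: in reality this is a person looking at both photos.
def operator_review(d: Detection) -> bool:
    return d.similarity > 0.90

print(report_to_soc(Detection(b"", b"", 0.97, "lobby-3"), operator_review))
```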

techt ,

The mishandling is indeed what I'm concerned about most. I now understand far better where you're coming from, sincere thanks for taking the time to explain. Cheers

hazeebabee ,

Super interesting to read your more technical perspective. I also think facial recognition (and honestly most AI use cases) are best when used to supplement an existing system. Such as flagging a potential shoplifter to human security.

Sadly most people don't really understand the tech they use for work. If the computer tells them something they just kind of blindly believe it. Especially in a work environment where they have been trained to do what the machine says.

My guess is that the people were trained on how to use the system at a very basic level. Troubleshooting and understanding the potential for error typically isn't covered in 30min corporate instructional meetings. They just get a little notice saying a shoplifter is in the store and act on that without thinking.

Telodzrum ,

If it works anything like Apple's Face ID, twins don't actually map all that similarly. In the general population, the probability of a matching mapping of the underlying facial structure is approximately 1:1,000,000. It is somewhat higher for identical twins, and higher again for prepubescent identical twins.

MonkderDritte ,

Meaning, 8'000 potential false positives per user globally. About 300 in US, 80 in Germany, 7 in Switzerland.

Might be enough for Iceland.
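
Spelling that arithmetic out (the 1:1,000,000 rate is the figure quoted above, not a measured one; populations are rounded assumptions):

```python
# Expected number of people whose facial mapping could collide with yours,
# at the quoted 1-in-1,000,000 rate. Populations are rough 2024 figures.
match_rate = 1e-6
populations = {"world": 8.1e9, "US": 3.3e8, "Germany": 8.4e7, "Switzerland": 8.8e6}

for region, population in populations.items():
    print(f"{region}: ~{population * match_rate:,.0f} potential lookalikes")
# world: ~8,100 | US: ~330 | Germany: ~84 | Switzerland: ~9 (same ballpark as above)
```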

Telodzrum ,

Yeah, which is a really good number and allows for near complete elimination of false matches along this vector.

MonkderDritte ,

If used as login, together with some other method of access restriction?

Telodzrum ,

Yeah, exactly.

4am ,

I promise bro it’ll only starve like 400 people please bro I need this

Telodzrum ,

Who’s getting starved because of this technology?

awesome_lowlander ,

A single mum with no support network who can't walk into any store without getting physically ejected, maybe?

uis ,
@uis@lemm.ee avatar

Let me rephrase it. "Who's getting suffocated because of gas chambers?"

awesome_lowlander ,

You're perfectly OK with 8000 people worldwide being able to charge you for their meals?

Telodzrum ,

No you misunderstood. That is a reduction in commonality by a literal factor of one million. Any secondary verification point is sufficient to reduce the false positive rate to effectively zero.
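
A one-liner shows why, assuming the two checks fail independently (the 1e-6 face figure is the one quoted above; the 1-in-10,000 second factor is a pure assumption, e.g. guessing a 4-digit PIN):

```python
# Independent checks multiply, so even a weak second factor crushes the
# combined false positive rate.
p_face_false_match = 1e-6  # quoted facial-structure collision rate
p_pin_guess = 1e-4         # assumed: blind guess at a 4-digit PIN

print(f"combined: {p_face_false_match * p_pin_guess:.0e}")  # 1e-10, "effectively zero"
```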

awesome_lowlander ,

secondary verification point

Like, running a card sized piece of plastic across a reader?

It'd be nice if they were implementing this to combat credit card fraud or something similar, but that's not how this is being deployed.

BassTurd ,

Which means the face recognition was never necessary. It's a way for companies to build a database that will eventually get exploited. 100% guarantee.

starchylemming ,

No, people in Iceland are so genetically homogeneous they'd probably all match, thanks to everyone being so related.

ramjambamalam ,

I can already imagine the Tom Clancy thriller where some Joe Nobody gets roped into helping crack a terrorist's locked phone because his face looks just like the terrorist's.

Cethin ,

Yeah, people with totally different facial structures get identified as the same person all the time with the "AI" facial recognition, especially if you're darker skinned. Luckily (or unluckily) I'm white as can be.

I'm assuming Apple's software is a purpose-built algorithm that detects facial features and compares them, rather than the black box AI where you feed in data and it returns a result. That's the smart way to do it, but it takes more effort.

CeeBee ,

people with totally different facial structures get identified as the same person all the time with the "AI" facial recognition

All the time, eh? Gonna need a citation on that. And I'm not talking about just one news article that pops up every six months. And nothing that links back to the ACLU's 2018 misleading "report".

I'm assuming Apple's software is a purpose built algorithm that detects facial features and compares them, rather than the black box AI where you feed in data and it returns a result.

You assume a lot here. People have this conception that all FR systems are trained blackbox models. This is true for some systems, but not all.

The system I worked with, which ranked near the top of the NIST FRVT reports, did not use a trained AI algorithm for matching.

Cethin ,

I'm not doing a bunch of research to prove the point. I've been hearing about them being wrong fairly frequently, especially on darker skinned people, for a long time now. It doesn't matter how often it is. It sounds like you have made up your mind already.

I'm assuming that of Apple because it's been around for a few years longer than the current AI craze has been going on. We've been doing facial recognition for decades now, with purpose-built algorithms. It's not much of a leap to assume that's what they're using.

CeeBee ,

I've been hearing about them being wrong fairly frequently, especially on darker skinned people, for a long time now.

I can guarantee you haven't. I've worked in the FR industry for a decade and I'm up to speed on all the news. There's a story about a false arrest from FR at most once every 5 or 6 months.

You don't see any reports from the millions upon millions of correct detections that happen every single day. You just see the one off failure cases that the cops completely mishandled.

I'm assuming that of apple because it's been around for a few years longer than the current AI craze has been going on.

No it hasn't. FR systems have been around a lot longer than Apple devices doing FR. The current AI craze is mostly centered around LLMs; object detection and FR systems have been evolving for more than 2 decades.

We've been doing facial recognition for decades now, with purpose built algorithms. It's not mucb of leap to assume that's what they're using.

Then why would you assume companies doing FR longer than the recent "AI craze" would be doing it with "black boxes"?

I'm not doing a bunch of research to prove the point.

At least you proved my point.

Cethin ,

You don't see any reports from the millions upon millions of correct detections that happen every single day. You just see the one off failure cases that the cops completely mishandled.

Obviously. I don't have much of an issue with it when it's working properly (although I do still absolutely have an issue with it). It being wrong fairly frequently is a pretty big issue, and every 5 or 6 months is frequent (and that's a low number: just the failures that make the news) given it isn't widely deployed yet. Scale that up by several orders of magnitude if it's widely adopted and the errors will be constant.

No it hasn't. FR systems have been around a lot longer than Apple devices doing FR. The current AI craze is mostly centered around LLMs; object detection and FR systems have been evolving for more than 2 decades.... Then why would you assume companies doing FR longer than the recent "AI craze" would be doing it with "black boxes"?

You're repeating what I said. Apple's FR tech is a few years older than the machine learning tech that we have now. FR in general is several decades old, and it's not ML based. It's not a black box. You can actually know what it's doing. I specifically said they weren't doing it with black boxes. I said the AI models are. Please read again before you reply.

At least you proved my point.

You assuming I said something that is actually the opposite of what I said is the reason I'm not putting in the effort. You've made up your mind. I'm not going to change it, so I'm not putting in the effort it would take to gather the data, just to throw it into the wind. It sounds like you are already aware of some of it, but somehow think it's not bad.

4am ,

And yet this woman was mistaken for a 19-year-old 🤔

Telodzrum ,

Shitty implementation doesn’t mean shitty concept, you’d think a site full of tech nerds would understand such a basic concept.

Hawk ,

Pretty much everyone here agrees that it's a shitty concept. Doesn't solve anything and it's a privacy nightmare.

Telodzrum ,

Well I guess we’re lucky that no one on Lemmy has any power in society.

chiisana ,
@chiisana@lemmy.chiisana.net avatar

I think from a purely technical point of view, you’re not going to get FaceID kind of accuracy on theft prevention systems. Primarily because FaceID uses IR array scanning within arm’s reach from the user, whereas theft prevention is usually scanned from much further away. The distance makes it much harder to get the fidelity of data required for an accurate reading.

uis ,
@uis@lemm.ee avatar

Shorter answer: physics

sugar_in_your_tea ,

Yup, it turns out if you have millions of pixels to work with, you have a better shot at correctly identifying someone than if you have dozens.
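
A rough sketch of that pixel budget, using assumed round-number camera parameters (1080p-class sensor, 90 degree horizontal field of view, 15 cm face width), not any product's specs:

```python
import math

# Approximate pixels landing on a face at arm's length vs. across a store.
def pixels_on_face(distance_m: float, h_res: int = 1920,
                   hfov_deg: float = 90.0, face_width_m: float = 0.15) -> float:
    scene_width_m = 2 * distance_m * math.tan(math.radians(hfov_deg / 2))
    px_per_m = h_res / scene_width_m
    return (face_width_m * px_per_m) ** 2  # face approximated as a square patch

print(f"{pixels_on_face(0.4):,.0f} pixels at arm's length (~0.4 m)")  # ~130,000
print(f"{pixels_on_face(8.0):,.0f} pixels across a store (~8 m)")     # ~320
```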

CeeBee ,

I think from a purely technical point of view, you’re not going to get FaceID kind of accuracy on theft prevention systems. Primarily because FaceID uses IR array scanning within arm’s reach from the user, whereas theft prevention is usually scanned from much further away. The distance makes it much harder to get the fidelity of data required for an accurate reading.

This is true. The distance definitely makes a difference, but there are systems out there that get incredibly high accuracy even with surveillance footage.

Maggoty ,

Not uh! All you and your twin have to do is write the word Twin on your forehead every morning. Just make sure to never commit a crime with it written where your twin puts their sign. Or else, you know... You might get away with it.

Nope no obvious problems here at all!

SlopppyEngineer ,

And a lot of these face recognition systems are notoriously bad with dark skin tones.

CeeBee ,

No they aren't. This is the narrative that keeps getting repeated over and over. And the citation for it is usually the ACLU's test on Amazon's Rekognition system, which was deliberately flawed to produce this exact outcome (people years later still saying the same thing).

The top FR systems have no issues with any skin tones or complexions.

awesome_lowlander ,

There are like a thousand independent studies on this, not just one

CeeBee , (edited )

I promise I'm more aware of all the studies, technologies, and companies involved. I worked in the industry for many years.

The technical studies you're referring to show that the difference between a white man and a black woman (usually polar opposite in terms of results) is around 0.000001% error rate. But this usually gets blown out of proportion by media outlets.

If you have white men at 0.000001% error rate and black women at 0.000002% error rate, then what gets reported is "facial recognition for black women is 2 times worse than for white men".

It's technically true, but in practice it's a misleading and disingenuous statement.

Edit: here's the actual technical report if anyone is interested

https://pages.nist.gov/frvt/reports/1N/frvt_1N_report.pdf

awesome_lowlander ,

Would you kindly link some studies backing up your claims, then? Because nothing I've seen online has similar numbers to what you're claiming

CeeBee , (edited )

https://pages.nist.gov/frvt/reports/1N/frvt_1N_report.pdf

It's a ~~481~~ 443 page report directly from the body that does the testing.

Edit: mistyped the number of pages

Edit 2: as I mentioned in another comment. I've read through this document many times. We even paid a 3rd party to verify our interpretations.

ricdeh ,
@ricdeh@lemmy.world avatar

It saddens me that you are being downvoted for providing a detailed factual report from an authoritative source. I apologise in the name of all Lemmy for these ignorant people

CeeBee ,

Ya, most upvotes and downvotes are entirely emotionally driven. I knew I would get downvoted for posting all this. It happens on every forum, Reddit post, and Lemmy post. But downvotes don't make the info I share wrong.

msage ,

Just post the sources first, arguing emotionally with 'trust me bro' should get the exact response it's gotten.

CeeBee ,

I posted my sources across many comments. But the same argument applies to everyone saying the opposite.

awesome_lowlander ,

Thanks! Appreciate it, will take a look when I have time

CeeBee ,

Np.

As someone else pointed out in another comment, I've been saying the x% accuracy number incorrectly. It's just a colloquial way of conveying the accuracy. The truth is that no one in the industry uses "percent accuracy"; instead they use FMR (false match rate) and FNMR (false non-match rate), as well as some other metrics.
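
For the curious, a minimal sketch of those two metrics (the score distributions are synthetic stand-ins, not data from any real system):

```python
import numpy as np

# FMR: fraction of different-person comparisons scoring at/above the threshold.
# FNMR: fraction of same-person comparisons scoring below it.
rng = np.random.default_rng(1)
genuine_scores = rng.normal(0.85, 0.05, 100_000)   # hypothetical same-person scores
impostor_scores = rng.normal(0.30, 0.10, 100_000)  # hypothetical different-person scores

def fmr(impostor: np.ndarray, threshold: float) -> float:
    return float((impostor >= threshold).mean())

def fnmr(genuine: np.ndarray, threshold: float) -> float:
    return float((genuine < threshold).mean())

threshold = 0.70
print(f"FMR  @ {threshold}: {fmr(impostor_scores, threshold):.1e}")
print(f"FNMR @ {threshold}: {fnmr(genuine_scores, threshold):.1e}")
# Raising the threshold pushes FMR down and FNMR up; vendors pick the trade-off.
```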

goldenbug ,

Fair. But you are asking us to trust your word when you could provide us with some links.

CeeBee ,
Resonosity ,

Yep, classic fallacy (bias?) of considering relative change over absolute scale.

Here are some sources that speak about the difference between the two, and how different interpreters of data can use either one to further an argument:

nyan ,

Technically, there's a tendency for them to be trained on datasets that don't include nearly enough dark-skinned people. As a result, they don't learn to make the necessary distinctions. I'd like to think that the selection of datasets for training facial recognition AI has improved since the most egregious cases of that. I'm not willing to bet on it, though.

Thassodar ,

Shit even the motion sensors on the automated sinks have trouble recognizing dark skinned people! You have to show your palm to turn the water on most times!

TrickDacy ,

Are we assuming there is no pin or any other auth method? That would be unlike any other payment system I'm aware of. I have to fingerprint scan on my phone to use my credit cards even though I just unlocked my phone to attempt it

Liz ,

I have come across a stranger online who looks exactly like me. We even share the same first name. We even live in the same area. I'm so excited for this wonderful new technology...

ಠ⁠_⁠ಠ

ChaoticEntropy ,
@ChaoticEntropy@feddit.uk avatar

Who the fuck wants this...? Besides the company raking in venture capital money.

explodicle ,

I'm going to have a field day with this. I've got an extremely common-looking face in a major city.

MajorHavoc ,

I've got an extremely common-looking face in a major city.

Indeed, it's likely to be a problem, if you stick with committing few or no crimes.

The good news is that, should you choose to commit an above-average number of crimes, then the system will be prone to under-report you.

So that's nice. /sarcasm, I'm not actually advocating for more crimes. Though I am pointing out that maybe the folks installing these things aren't incentivising the behavior they want.

explodicle ,

I'm advocating more crimes, but unfortunately am still below average.

rottingleaf ,

So cute.

For old stuff things like minority rights and all other principles about making people comfortable apply, and reliability standards with a lot of nines have to be met.

For new stuff - "if it fails 1/100 times, then it's fine, so screw you".

See, everybody (or at least people whose voices are heard, not us dumb fucks, authentic Zuck quote btw) wants all this tech bro surveillance centralized obscure blackbox ambiguous crap so fucking badly that other things don't matter.

Boeing planes dropping outta sky? Wait till "AI" reaches nuclear energy. Or until autonomous armed police drones roam your area, as something easier to imagine. (I've just remembered that in Star Wars police drones on Coruscant are unarmed, both under Republic and under Empire. EU writers couldn't imagine our times' degree of stupidity EDIT: so I'm imagining it now.)

TrickDacy ,

Fear mongering

ABCDE ,

How so? India doesn't recognise people in some unusual but not impossible circumstances, such as those without fingerprints.

DarkThoughts ,

Look at China.

TrickDacy ,

They are living in the same timeline as us. The difference is level of authoritarianism, not that they have warped into the future.

DarkThoughts ,

And what do you think those mass surveillance tools are supposed to be used for?

CeeBee ,

For new stuff - "if it fails 1/100 times, then it's fine, so screw you".

The FMR (false match rate) for these systems is 1e-6. Which translates to about 0.000001% error rate or 99.999999% accuracy. And that number was from about 3 or 4 years ago. They are much better today.

How do I know? I worked in that industry building that kind of tech and read through the 500+ page NIST report on testing results for all the various Face Recognition vendors.

NeoNachtwaechter ,

How do I know? I worked in that industry building that kind of tech

Thank you for pleading guilty :)

CeeBee ,

LMAO. You have no idea what I built the system for, and I have no skin in the game anymore as I moved on to a completely different industry that doesn't even use AI at all.

The implication of your argument is the same as with flat earthers, where they demand photographic proof of a spherical Earth, but when they are shown photos from space they simply say it's fake and NASA is in on the lie.

Sometimes you just can't get past people's preconceived biases regardless of the facts.

notabot ,

It doesn't really matter whether the FMR is one in a hundred or one in a million; for the uses it's being put to, it's still too high. If it was only being used as one factor for authenticating someone (i.e. the "thing you are") but still required the other factor(s) (the "thing you know" and the "thing you have") then it'd be adequate.

As it stands, when it's being used either to pick someone out without further scrutiny, or to make payments with no further checks, it's simply unacceptable. There's good arguments to say it's not just the error rate is unacceptable, but that the technology simply shouldn't be used in those scenarios as it's entirely inappropriate, but that's a separate discussion.

CeeBee ,

The truth is the numbers I cited are the 1:N stats. The 1:1 numbers are far higher, because you can immediately control for distance, lighting, angle, and gaze direction.

The system I worked with had a 1:1 match rate that was statistically perfect (but nothing is ever 100%).

the technology simply shouldn't be used in those scenarios as it's entirely inappropriate, but that's a separate discussion.

Agreed. Its use as a payment system is just ridiculous.
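
A sketch of why 1:N numbers are necessarily worse than 1:1 numbers: every identity in the searched gallery is one more chance to false-match. First-order approximation, assuming independent comparisons and the 1e-7 per-comparison rate mentioned above:

```python
# Effective false match rate of a 1:N search against a gallery of N identities,
# given a per-comparison (1:1) FMR. Assumes independent comparisons.
fmr_1to1 = 1e-7

for n in (1, 1_000, 1_000_000):
    fmr_search = 1 - (1 - fmr_1to1) ** n
    print(f"gallery of {n:>9,}: effective FMR ~ {fmr_search:.1e}")
# 1 -> 1.0e-07, 1,000 -> 1.0e-04, 1,000,000 -> 9.5e-02: gallery size dominates.
```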

GluWu , (edited )

[Thread, post or comment was deleted by the author]

    CeeBee ,

    That was a really garbage system then. Like disgracefully bad Fisher Price quality.

    The reality is that there are more crap systems than really good ones out there. And there are as many algorithms and different ways of doing it as there are companies.

    The system I developed was so good, even when we tried all kinds of shenanigans to trip it up, we just couldn't do it.

    palordrolap ,

    Ay, there's the rub. Almost no-one's going to pay for the top-notch system, and will instead go for the lowest bidder.

    CeeBee ,

    We were cheaper on both hardware and software costs than just about anyone else, and we placed easily in the top 5 for performance and accuracy.

    The main issue was that COVID came around, and since we're not a US company and the vast majority of interest was in the US, we were dead in the water.

    What I've learned through the years is that the best rarely costs the most. Most corporate/vendor software out there is chosen on just about every consideration aside from quality.

    rottingleaf ,

    You know what overfitting is, right?

    Other than that, if this system of yours makes 1 error in a million scans, that's still not very good if that's treated as "virtually no errors", as in no talking to a manager, no showing ID as a fallback, and so on. Say, if it were employed in the Moscow subway, that'd mean a few unpleasant errors every day preventing people from getting where they need to go.

    CeeBee ,

    You know what overfitting is, right?

    As you reply to someone who spent a decade in the AI industry.

    This has nothing to do with overfitting. Particularly because our matching algorithm isn't trained on data.

    The face detection portion is, but that's simply finding the face in an image.

    The system I worked with used a threshold value that equates to an FMR of 1e-07. And it wasn't used in places like subways or city streets. The point I'm making is that in the few years of real world operation (before I left for another job) we didn't see a single false detection. In fact, one of the facility owners asked us to lower the threshold temporarily to verify the system was actually working properly.

    rottingleaf ,

    This has nothing to do with overfitting. Particularly because our matching algorithm isn’t trained on data.

    Good to know.

    The face detection portion is, but that’s simply finding the face in an image.

    So you are saying yourself that your argument has nothing to do with what's in the article?..

    CeeBee ,

    So you are saying yourself that your argument has nothing to do with what's in the article?..

    OP said "reliability standards with a lot of nines have to be met". All I'm saying is that we're already there.

    rottingleaf ,

    Well, the place you worked at is already there. Those stores - possibly not.

    Also I said that about new and shiny stuff like what they call "AI".

    CeeBee ,

    Fair enough

    ricdeh ,
    @ricdeh@lemmy.world avatar

    A probability of 10^-6^ corresponds to 10^-4^ %, not 10^-6^ %.
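
    Spelled out, since probabilities and percentages keep getting mixed up in this thread:

    ```latex
    % probability -> percentage: multiply by 100
    10^{-6} = 10^{-6} \times 100\,\% = 10^{-4}\,\%
    % so an FMR of 10^{-6} means 99.9999% accuracy, not 99.999999%
    ```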

    CeeBee , (edited )

    Ok, sure

    Edit: the truth is that saying x% accuracy isn't entirely correct, because the numbers just don't work that way. It's just a way to convey the data to the average person. I can't count the number of times I've been asked "ok, but what does that mean in terms of accuracy? What's the accuracy percentage?"

    And I understand what you're saying now. Yes, I did have the number written down incorrectly as a percentage. I've been on mobile this whole time doing a hundred other things. I added two extra digits.

    NeoNachtwaechter ,

    This raises questions about how 'good' this technology is.

    But it also raises the question of how well your police can deal with false suspicions and false accusations.

    CeeBee ,

    This raises questions about how 'good' this technology is.

    No it doesn't. For every 10 million good detections you only hear about the 1 or 2 false detections. The issue here are the policies around detections and how to verify them. Some places are still taking a blind faith approach to the detections.

    NeoNachtwaechter ,

    For every 10 million good detections you only hear about the 1 or 2 false detections.

    Considering the impact of these faults, it is obviously not good enough.

    CrayonMaster ,

    But that really says more about the user than the tech. The issue here isn't that the tech has too many errors, it's that stores use it and it alone to ban people despite it having a low but well known error rate.

    NeoNachtwaechter ,

    says more about the user than the tech.

    "You need to have a suitable face for our face recognition?" ;-)

    stores use it and it alone to ban people

    No. Read again. The stores did not use the technology themselves, they used the services of that tech company.

    nyan ,

    stores use it and it alone to ban people despite it having a low but well known error rate.

    And it is absolutely predictable that some stores would do that, because humans. At the very least, companies deploying this technology need to make certain that all the store staff are properly trained on what it does and doesn't mean, including new hires who arrive after the system is put in. Forcing that is going to require that a law be passed.

    CeeBee ,

    Considering the impact of these faults, it is obviously not good enough.

    I was throwing a rough number out there, but the true error rate is lower than what I said. And even with those numbers, this tech is statistically safer than driving a car.

    The other half of the equation is policy management. Every single one of these systems should operate with human-in-the-loop. Meaning after a detection is made, it goes over to a person to make a "real" determination for accuracy.

    explodicle ,

    That doesn't sound as cost-effective as just losing every millionth customer.

    CeeBee ,

    Because using FR for a payment system is dumb.

    Everyone here seems to be hyperfixated on the payment system aspect.

    I'm talking purely in the context of FR tech being vastly better than what people think, since everyone has this idea that FR tech doesn't work.

    MajorHavoc ,

    We're also worried that it does work.

    As another person said, it feels like there are a lot more use cases for rampant authoritarian control than positive benefits to society.

    Recognizing sociopaths, sure. We do that already with wanted posters, and with political office advertisements.

    But for everyone else who is just trying to live their life, this can be extremely invasive technology.

    CeeBee , (edited )

    But for everyone else who is just trying to live their life, this can be extremely invasive technology.

    Now here's where I drop what seems like a whataboutism.

    You already have an incredibly invasive system tracking you. It's the phone in your pocket.

    There's almost nothing a widespread FR system could do to a person's privacy that isn't already covered by phone tracking.

    Edit: and including already existing CCTV systems that have existed for decades now. /edit

    Even assuming a FR system is on every street corner, at best it knows where you are, when you're there, and maybe who you interact with. That's basically it. And that will be limited to just the company/org that operates that system.

    Your cellphone tracks where you are, when you're there, who's with you, the sites you visit, your chats, when you're at home or a friend's place, how long you're there, can potentially listen to your conversations, activate your camera. On and on.

    On the flip side a FR system can notify stores and places when a previous shoplifter or violent person arrives at the store (not trying to fear monger, just an example) and a cellphone wouldn't be able to do that.

    The boogeyman that everyone sees with FR and privacy doesn't exist.

    Edit 2: as an example, there was an ad SDK from a number of years ago where, when you had an app open that used that SDK, it would listen to your microphone and pick up high-pitched tones from TV ads to identify which ads were playing nearby or on your TV.

    https://arstechnica.com/tech-policy/2015/11/beware-of-ads-that-use-inaudible-sound-to-link-your-phone-tv-tablet-and-pc/

    https://www.forbes.com/sites/thomasbrewster/2015/11/16/silverpush-ultrasonic-tracking/

    MajorHavoc ,

    You already have an incredibly invasive system tracking you. It's the phone in your pocket.

    As a Cybersecurity expert running well configured GrapheneOS, I actually don't.

    So I, personally, have a lot more privacy to lose from facial recognition technology. Since my only path to reasonable mitigation is a socially ostracizing face paint pattern. (It would play well with my professional colleagues, who understand the risks, I suppose. But I have a feeling it wouldn't play out so nice at my local grocery store...)

    But I do take your point, that for most folks, it's not a huge change.

    A key difference is that, while it's a lot of work, I can, and have, opted out of the phone tracking.

    CeeBee ,

    GrapheneOS isn't a complete solution, especially if you still use things like Facebook and Whatsapp. Although it is a massive plus to privacy.

    Quick question. I've been hesitating to jump to Graphene for a little while now. The two things that have held me back are losing access to Google Camera and Android Pay (or Google Pay, or Google Wallet, or Android Wallet. Whatever Google's calling it these days).

    The Google Wallet feature I think has taken care of itself. They pushed an update that requires you to re-authenticate after the initial tap for "security". Which means half the time the transaction fails and the cashier has to redo the payment process. So I just gave up and have gone back to tapping with my cards directly for the past month.

    So that just leaves the Google Camera. How's the quality with Graphene?

    MajorHavoc , (edited )

    especially if you still use things like Facebook and Whatsapp.

    Yeah.... speaking of making myself a social outcast by painting my face crazy colors - I figure I am at least 20% of the way there by not using Facebook or Whatsapp.

    I'm joking...mostly. But it really can feel isolating not to have either of those apps.

    The Google Wallet feature I think has taken care of itself.

    My experience matches. I did miss Google Pay for a few months after switching to GrapheneOS, until tap-to-pay reached all my favorite stores. Now I'm just mildly annoyed to carry a card to do something my phone ought to do.

    So that just leaves the Google Camera. How's the quality with Graphene?

    I was very annoyed with how slow the Google camera app loaded, on my previous phone.

    My Pixel with GrapheneOS is the best camera I have had in about a decade, because the stock camera app opens almost instantly. I had a big problem with the camera taking a couple seconds to open on my previous two or three Android phones. Somehow it got worse with each generation of phone, while I paid more for a stronger CPU and worse battery life.

    I am vaguely aware that I maybe gave up some clever camera features that some of my phone vendors added, but I don't miss them since I wasn't using them. One had a 3D photo feature that I used exactly once, if I recall.

    But compared to stock (Pixel) Android, it's literally apparently the same camera app, except I swear it loads much faster. (I'm wrong, it's not the same app.) The privacy implications of the load time difference I perceived freak me out a little, honestly. I hope I'm just wrong about that bit. (Thankfully, yes. I'm wrong.)

    I also missed Google Photos for backup, until I bought a Synology Network Attached storage device.

    CeeBee ,

    I'm using a Pixel 7 right now, and I love the camera. I'm not sure I'll be happy if I lose all the camera features.

    Thanks for replying

    MajorHavoc ,

    Thanks for replying

    Sure! Incidentally, it looks like you can now install the GrapheneOS camera through Google Play, if you want to give it a test run without going full GrapheneOS.

    https://play.google.com/store/apps/details?id=app.grapheneos.camera.play&hl=en&gl=US

    CeeBee ,

    Ok, now that's awesome! I'm installing it now. Thanks!

    DarkThoughts ,

    Good as in ethical, not in capability. Facial recognition (and similar mass surveillance tech) is simply a tool for authoritarianism and should be banned for general usage. There's literally no good reason why this should be widely used at all.

    MentalEdge ,
    @MentalEdge@sopuli.xyz avatar

    Even if someone did steal a Mars bar... Banning them from all food-selling establishments seems... Disproportionate.

    Like if you steal out of necessity, and get caught once, you then just starve?

    Obviously not all grocers/chains/restaurants are that networked yet, but are we gonna get to a point where hungry people are turned away at every business that provides food, once they are on "the list"?

    DivineDev ,

    No no, that would be absurd. You'll also be turned away if you are not on the list if you're unlucky.

    FuryMaker ,

    get caught once, you then just starve?

    Maybe they send you to Australia again?

    The world hasn't changed has it.

    theOneTrueSpoon ,

    Sure it has. They send you to Rwanda now

    mPony ,

    it's like a no-fly list, but for food

    MentalEdge ,
    @MentalEdge@sopuli.xyz avatar

    it's like a no-fly list, but for being alive

    ftfy

    WeirdGoesPro ,
    @WeirdGoesPro@lemmy.dbzer0.com avatar
    InternetPerson ,

    This becomes even more ridiculous if you consider that we wasted about 1.05 billion tonnes of food worldwide in 2022 alone. (UNEP Food Waste Index Report 2024 Key Messages)

    But no. Supermarkets will miss out on profits if they ban people from their stores who can't pay.

    Seems illogical? Because it is.

    intensely_human ,

    If that case ever does exist (god forbid), I hope that there’s something like a free-entry market so they can set up their own food solutions instead of being forced to starve.

    If it’s a free market, and every existing business is coordinating to refuse to sell food to this person, then there’s a profit opportunity in getting food to them. You could even charge them double for food, and make higher profits selling to the grocery-banned class, while saving their lives.

    That may sound cold-hearted, but what I’m trying to point out is that in this scenario, the profit motive is pulling that food to those people who need it. It’s incentivizing people who otherwise wouldn’t care, and enabling people who do care, to feed those hungry people by channeling money toward the solution.

    And that doesn't require anything specific about food to be in the code that runs the marketplace. All you need is a policy that new entrants to the market are allowed, without any lengthy waiting process for a permit or whatever. You need a rule that says "You can't stop other players from coming in and competing with you", which is the kind of rule you need to run a free market, and then the rest of the problem is solved by people's natural inclinations.

    I know I'm piggybacking here. I'm just saying that a situation in which only some finite cartel of providers gets to decide who can buy food is an example of a massive violation of free market principles.

    People think “free market” means “the market is free to do evil”. No. “Free market” just means the people inside it are free to buy and sell what they please, when they please.

    Yes it means stores can ban people. But it also means other people can start stores that do serve those people. It means “I don’t have to deal with you if I don’t want to, but I also can’t control your access to other people”.

    A pricing cartel or a blacklisting cartel is a form of market disease. The best prevention and cure is to ensure the market is a free one - one which new players can enter at will - which means you can’t enforce that cartel reliably since there’s always someone outside the blacklisting cartel who could benefit from defecting from the blacklist.

    MentalEdge ,
    @MentalEdge@sopuli.xyz avatar

    That is some serious "capitalism can solve anything and therefore will, if only we let it"-type brain rot.

    This "solution" relies on so many assumptions that don't even begin to hold water.

    Of course any utopian framework for society could deal with every conceivable problem... But in practice they don't, and always require intentional regulation to a greater or lesser extent in order to prevent harm, because humans are humans.

    This particular potential problem is almost certainly not the kind that simply "solves itself" if you let it.

    And IMO suggesting otherwise is an irresponsible perpetuation of the kind of thinking that has led human civilization to the current reality of millions starving in the next few decades, due to the predictable environmental destruction of arable land in the near future.

    Zagorath ,
    @Zagorath@aussie.zone avatar

    Wtf kind of Randian hellscape nonsense is this? They should be allowed to charge double to exploit people who are already disadvantaged by the way other companies are treating them? Fuck this nonsense.

    Go create your own bear-infested village somewhere nobody with any morals has to live near you. But this time do it from scratch rather than ruining things for the people already living there.

    lolcatnip , (edited )

    They've essentially created their own privatized law enforcement system. They aren't allowed to enforce their rules the same way a government would be, but punishment like banning a person from huge swaths of economic life can still be severe. The worst part is that private legal systems almost never have any concept of rights or due process, so there is absolutely nothing stopping them from being completely arbitrary in how they apply their punishments.

    I see this kind of thing as being closely aligned with right wingers' desire to privatize everything, abolish human rights, and just generally turn the world into a dystopian hellscape for anyone who isn't rich and well connected.

    Sizzler ,

    I can see this being used against ex-employees.

    MacNCheezus ,
    @MacNCheezus@lemmy.today avatar

    Like if you steal out of necessity, and get caught once, you then just starve?

    I mean... you could try getting on food stamps or whatever sort of government assistance is available in your country for this purpose?

    In pretty much all civilized western countries, you don't HAVE to resort to becoming a criminal simply to get enough food to survive. It's really more of a sign of antisocial behavior, i.e. a complete rejection of the system combined with a desire to actively cause harm to it.

    Or it could be a pride issue, i.e. people not wanting to admit to themselves that they are incapable of taking care of themselves on their own and having to go to a government office in order to "beg" for help (or panhandle outside the supermarket instead).

    otter ,
    @otter@lemmy.ca avatar

    This makes me think of people who have trouble in airports because their name is similar to someone else's.

    Only this is going to be much harder to deal with

    Buffalox ,

    This is why some UK leaders wanted out of EU, to make their own rules with way less regard for civil rights.

    yournamehere ,

    nah i think the main thing was a super fragile identity. i mean they have been shit all the time since before the EU.
    when talks between france, germany and the uk took place, they insisted on taking control of the EU.

    if you live on an island for generations with limited new genetic input... well, that's where you end up.

    sailingbythelee ,

    We humans have these things called "boats" that have enabled the British Isles to receive regular inputs of new genetic material. Pretty useful things, these boats, and somewhat pivotal in the history of the UK.

    yournamehere ,

    sure

    FooBarrington ,

    I don't understand the tendency to attribute harmful behaviours of the rich and powerful to these strange, irrational reasons. No, UK leaders didn't spend millions upon millions on propaganda because they have a fragile identity. They did it because they'll make money off of it, and will be able to move the legislation towards their own goals.

    It's the same when people say Putin invaded Ukraine because he wants to restore the glory of the Soviet Union. No, he doesn't care about any of that, he cares about staying in power and becoming more powerful. One of the best ways to do so is to invade other countries, as long as you don't lose.

    uis ,
    @uis@lemm.ee avatar

    It's the same when people say Putin invaded Ukraine because he wants to restore the glory of the Soviet Union. No, he doesn't care about any of that, he cares about staying in power and becoming more powerful. One of the best ways to do so is to invade other countries, as long as you don't lose.

    Thank you. I see so many people who don't get it. I'm happy some people understand it without me sending them a link to one of the few Ekaterina Shulman lectures in English.

    FooBarrington ,

    Thank you for the validation, sometimes I feel like I'm going crazy with how often these things are repeated.

    But those lectures do sound interesting - would you mind linking them when you have the time?

    uis ,
    @uis@lemm.ee avatar

    This is not the lecture I originally intended to post. Also, a small correction: at 1:00:02 the first answer in the poll should be translated as "social fairness".

    If you find the lecture where she talks about "dealing with internal problems by external means" and "dropping a concrete slab on the nation's head" - that is the one I intended to link, but I'm still searching for which one it is.

    FooBarrington ,

    Awesome, thank you!

    Maggoty ,

    Nah the core drivers wanted their own little neoliberal haven where they didn't have to listen to the EU. They'd have been rich either way, but this way they get more power.

    TheGrandNagus ,

    if you live on an island for generations with limited new genetic input...well, thats where you end up.

    Literally the most diverse country in Europe lol

    AFC1886VCC ,

    It's the Tory way. Authoritarianism, culture wars, fucking over society's poorest.

    retrospectology ,
    @retrospectology@lemmy.world avatar

    This can't be true. I was told that if she has nothing to hide she has nothing to worry about!

    Fleppensteijn , (edited )
    @Fleppensteijn@feddit.nl avatar

    Reminds me of when I joined some classmates on a trip to the supermarket. We got kicked out while waiting in line because they didn't want middle schoolers there, because we're all thieves anyway. So most of the group walked out without paying.

    muntedcrocodile ,

    We have so many dystopian futures and we decided to invent a new one.

    Lith ,
    @Lith@lemmy.sdf.org avatar

    Actually this one feels pretty similar to watch_dogs. Wasn't this the plot to watch_dogs 2?

    sfxrlz , (edited )

    Now I’m interested in the plot of watch dogs 2…

    Edit: it’s indeed the plot of watch dogs 2

    https://en.m.wikipedia.org/wiki/Watch_Dogs_2

    In 2016, three years after the events of Chicago, San Francisco becomes the first city to install the next generation of ctOS – a computing network connecting every device together into a single system, developed by technology company Blume. Hacker Marcus Holloway, punished for a crime he did not commit through ctOS 2.0 …

    rmuk ,

    Also, a kickass soundtrack by Hudson Mohawke.

    Jackthelad ,

    Well, this blows the "if you've not done anything wrong, you have nothing to worry about" argument out of the water.

    raspberriesareyummy ,

    That argument was only ever made by dumb fucks or evil fucks. The article reports about an actual occurrence of one of the problems of such technology that we (people who care about privacy) have warned about from the beginning.

    refalo ,

    the way I like to respond to that:

    "ok, pull down your pants and hand me your unlocked phone"

    Karyoplasma ,

    I'm stealing this.

    HonoraryMancunian ,

    And never close a cubicle door

    UnderpantsWeevil ,
    @UnderpantsWeevil@lemmy.world avatar

    Gotta say, I don't think Officer Chauvin is going to take well to your request.

    Maggoty ,

    The US killed that argument a long time ago. We shot it in the back and claimed it had a gun.

    EngineerGaming ,
    @EngineerGaming@feddit.nl avatar

    Are you prohibited from covering your face in the stores like this?

    sugar_in_your_tea ,

    Idk, but as someone who has a fair skin tone and thus likely wouldn't trigger a false positive, I still refuse to enter any store based purely on the fact that they use face recognition software. Screw that.

    EngineerGaming ,
    @EngineerGaming@feddit.nl avatar

    I mean wouldn't the prohibited person just have to wear some mask to avoid triggering? And if this is prohibited - wouldn't that be problematic when it comes to hijabs?

    sugar_in_your_tea ,

    I'm guessing they use eye tracking, so they'd probably need something to block IR facial recognition. Another user mentioned Reflecticles, which should work but are a bit pricey.

    MajorHavoc ,

    Cyberpunk face paint isn't illegal...yet.

    EngineerGaming ,
    @EngineerGaming@feddit.nl avatar

    But it looks so much more sus than a mask you might wear when you're sick. Or a hijab. And also very algorithm-dependent.

    MajorHavoc ,

    I'm willing to subscribe to a per-algorithm daily updated face paint advising Patreon account, if it comes to that.

    Face painting actually sounds like the most fun part of the dystopia we often feel headed towards.

    EngineerGaming ,
    @EngineerGaming@feddit.nl avatar

    But it makes you much more flaggable by human staff. And again - how would an advisory work? There is no one unified algorithm used in every surveillance system by every manufacturer.

    MajorHavoc , (edited )

    Yeah. I mean, it's not going to work, but it would be a hell of a social commentary, and would show solidarity with a lot of folks who already can't hide in a crowd.

    Also, there's more than one operating system and more than one antivirus, but virus writers are getting by. Algorithm and anti-algorithm tech escalation exists today in an uneasy balance.

    And, at the point that someone has been flagged and thrown out of the civil system, whether fair or not, they're going to say, "fuck it" and paint their face and open carry the largest weapon they can get their hands on. I'm guessing they'll stop tipping, too.

    If this continues to escalate, we really may see face paint and LED hats come out in force, in particularly troubled areas.

    I'm hoping for the version of the future where we regulate this facial recognition crap instead.

    feedum_sneedson ,

    As an American, I don't know what opinion to have about this without knowing the woman's race.

    zcd ,
    feedum_sneedson ,

    Literally what people on Lemmy think.

    erwan ,

    If she's been flagged as shoplifter she's probably black!

    feedum_sneedson ,

    Why, because all shoplifters are black? I don't understand. She's being mistaken for another person, a real person on the system.

    I used to know a smackhead that would steal things to order, I wonder if he's still alive and whether he's on this database. Never bought anything off him but I did buy him a drink occasionally. He'd had a bit of a difficult childhood.

    Dudewitbow ,

    i think it's more that all facial recognition systems have a harder time picking up faces with darker complexions (same idea as why phone cameras are bad at it, for example). this failure leads to a bunch of false flags. it's hilariously evident when you see police jurisdictions using it and arresting innocent people.

    not saying the woman is of a darker-skinned background, but it would explain why it might be triggering.

    Maggoty ,

    Because facial recognition systems infamously cannot tell black people apart.

    UnderpantsWeevil ,
    @UnderpantsWeevil@lemmy.world avatar

    Why, because all shoplifters are black?

    All the ones that get caught. When you're white, you can steal whatever you want.

    erwan ,

    Because the algorithm trying to identify shoplifters is probably trained on a biased dataset

    feedum_sneedson ,

    That's not how it works! It has mistakenly identified her as a specific individual who was already convicted of theft.

    KonalaKoala ,
    @KonalaKoala@lemmy.world avatar

    Congratulations, you just identified yourself as a racist and need to understand you can't just judge someone without first getting to know them.
