
Court Bans Use of 'AI-Enhanced' Video Evidence Because That's Not How AI Works

A judge in Washington state has blocked video evidence that’s been “AI-enhanced” from being submitted in a triple murder trial. And that’s a good thing, given that too many people seem to think applying an AI filter can give them access to secret visual data.

intensely_human ,

“Just so we’re clear guys, fake evidence is not allowed”

AnUnusualRelic ,
@AnUnusualRelic@lemmy.world avatar

Why not make it a fully AI court if they were going to go that way? It would save so much time and money.

Of course it wouldn't be very just, but then regular courts aren't either.

cordlesslamp ,

Be careful what you wish for.

Sl00k , (edited )

In the same vein, Bloomberg just did a great study on ChatGPT 3.5 ranking resumes, and it showed an extremely noticeable bias: Black-sounding names were ranked below average and Asian/white names far higher, despite similar qualifications.

Archive source: https://archive.is/MrZIm
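For anyone curious what that kind of audit looks like in practice, here's a minimal sketch of a name-swap test: the same resume is scored repeatedly with only the candidate's name changed. The model name, prompt wording, and example names below are my own illustrative assumptions, not Bloomberg's actual methodology, and it assumes an OPENAI_API_KEY is set.

```python
# Minimal sketch of a name-swap bias audit: identical resume, only the name changes.
# Prompt, names, and model are illustrative assumptions, not the study's exact setup.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

RESUME = """Software engineer, 5 years experience, Python and SQL,
led a team of 3, B.S. in Computer Science."""

NAMES = ["Emily Walsh", "Lakisha Washington", "Wei Zhang", "Jamal Robinson"]

def score_resume(name: str) -> str:
    prompt = (
        "Rate this candidate for a software engineering role from 1-10.\n"
        f"Name: {name}\nResume: {RESUME}\nAnswer with just the number."
    )
    resp = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": prompt}],
        temperature=0,
    )
    return resp.choices[0].message.content.strip()

for name in NAMES:
    print(name, score_resume(name))
# An unbiased ranker would give identical resumes identical scores,
# regardless of the name attached.
```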

ProgrammingSocks ,

Perfect, a drop-in replacement!

FiniteBanjo ,

You forgot the /s

BreakDecks ,

Me, testifying to the AI judge: "Your honor I am I am I am I am I am I am I am I am I am"

AI Judge: "You are you are you are you are you are you..."

Me: Escapes from courthouse while the LLM is stuck in a loop

mojofrododojo ,

Honestly, an open-source auditable AI Judge/Justice would be preferable to Thomas, Alito, Gorsuch and Barrett any day.

milkjug ,

I’d love to see the “training data” for this model, but I can already predict it will be 99.999% footage of minorities labelled ‘criminal’.

And cops going, "Aha! Even AI thinks minorities are committing all the crime!"

ricdeh ,
@ricdeh@lemmy.world avatar

Tell me you didn't read the article without telling me you didn't read the article

chemicalwonka ,
@chemicalwonka@discuss.tchncs.de avatar

Just for now. Soon this practice will be normalized and widely used; after all, we are in the late stage of capitalism and all violations are relativized.

SacrificedBeans ,

Nor evidence, for that matter

Kolanaki ,
@Kolanaki@yiffit.net avatar

clickity clackity

"ENHANCE"

ChaoticEntropy ,
@ChaoticEntropy@feddit.uk avatar

"When we enhance the image and place a knife in the defendant's hand..."

reverendsteveii ,

enhance

enhance

enhance

Mango ,

We do not need AI pulling a George Lucas.

BreadstickNinja ,

AI can make any blurry criminal look like George Lucas with the right LoRAs.

Mango ,

Jesus Christ, does this even need to be pointed out!??

Whirling_Cloudburst ,

Unfortunately it does need pointing out. Back when I was in college, professors would repeatedly have to tell their students that real-world forensics doesn't work like it does on NCIS. I'm not sure how much things may or may not have changed since then, but with American literacy levels being what they are, I don't suppose they've changed that much.

sealhaslupus ,

you might be referring to the CSI Effect

Whirling_Cloudburst ,

It's certainly similar in that CSI played a role in forming unrealistic expectations in students' minds. But rather than expecting more physical evidence in order to make a prosecution, the students expected magic to happen on computers and in lab work (often faster than physically possible).

AI enhancement doesn't uncover hidden visual data; it generates that information from previously existing training data and shoehorns it in. It could certainly be useful, but it is not real evidence.
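To make that concrete, here's a minimal sketch (plain NumPy, no real footage, toy 4x4 "scenes" of my own invention) of why enhancement can't recover detail: different high-resolution images can collapse to exactly the same low-resolution pixels, so anything an "enhancer" adds back comes from its training prior, not from the recording.

```python
# Two very different 4x4 scenes downsample to the same 2x2 image,
# so no upscaler can tell from the low-res pixels which one was real.
import numpy as np

def downsample_2x(img: np.ndarray) -> np.ndarray:
    """Average each 2x2 block into one pixel (simple box filter)."""
    h, w = img.shape
    return img.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

# Horizontal stripes vs. a checkerboard: clearly different scenes.
stripes = np.array([[0, 0, 0, 0],
                    [255, 255, 255, 255],
                    [0, 0, 0, 0],
                    [255, 255, 255, 255]], dtype=float)
checker = np.array([[0, 255, 0, 255],
                    [255, 0, 255, 0],
                    [0, 255, 0, 255],
                    [255, 0, 255, 0]], dtype=float)

print(downsample_2x(stripes))  # [[127.5 127.5] [127.5 127.5]]
print(downsample_2x(checker))  # [[127.5 127.5] [127.5 127.5]] -- identical
# The low-res "evidence" is the same in both cases; any detail an AI
# "enhance" puts back is a guess drawn from its training data.
```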

postmateDumbass ,

ENHANCE !

Stopthatgirl7 OP ,
@Stopthatgirl7@lemmy.world avatar

Yes. When people were in full conspiracy mode on Twitter over Kate Middleton, someone took that grainy pic of her in a car and used AI to “enhance” it, then declared it wasn’t her because her mole was gone. It got so much traction that people thought the AI-fixed-up pic WAS her.

Mirshe ,

Don't forget people thinking that scanlines in a news broadcast over Obama's suit meant that Obama was a HOLOGRAM and ACTUALLY A LIZARD PERSON.

melpomenesclevage ,

It's not actually worse than eyewitness testimony.

This is not an endorsement of AI, just pointing out that truth has no place in a courtroom, and refusing to lie will get you locked in a cafe.

Too good, not fixing it.

douglasg14b ,
@douglasg14b@lemmy.world avatar

Of course, not everyone is technology literate enough to understand how it works.

That should be the default assumption: that something should be explained so that others understand it and can make better, informed decisions.

ItsMeSpez ,

It's not only that not everyone is technologically literate enough to understand the limits of this technology - the AI companies are also actively over-inflating their capabilities in order to attract investors. When the most accessible information about the topic is designed to get non-technically-proficient investors on board with your company, of course the general public is going to get an overblown idea of what the technology can do.

altima_neo ,
@altima_neo@lemmy.zip avatar

The layman is very stupid. They hear all the fantastical shit AI can do and they start to assume it's almighty. That's how you wind up with those lawyers who tried using ChatGPT to write up a legal brief that was full of bullshit, and who didn't even bother to verify whether it was accurate.

They don't understand it; they only know that the results look good.

T156 , (edited )

The layman is very stupid. They hear all the fantastical shit AI can do and they start to assume it's almighty. That's how you wind up with those lawyers who tried using ChatGPT to write up a legal brief that was full of bullshit, and who didn't even bother to verify whether it was accurate.

Especially since it gets conflated with pop culture. Someone who hears that an AI app can "enhance" an image might think it works like something out of CSI using technosmarts, rather than just making stuff up out of whole cloth.

laughterlaughter ,

There's people who still believe in astrology. So, yes.

dual_sport_dork ,
@dual_sport_dork@lemmy.world avatar

And people who believe the Earth is flat, and that Bigfoot and the Loch Ness Monster exist, and that there are reptilians replacing the British royal family...

People are very good at deluding themselves into all kinds of bullshit. In fact, I posit that they're even better at it than at learning the facts or comprehending empirical reality.

FilterItOut ,

Good god, there are still people who believe in phrenology!

lole ,

I met a student at university last week at lunch who told me he was stressed out about a homework assignment.
He told me that he needed to write a report with a minimum number of words, so he pasted the text into ChatGPT and asked it how many words the text contained.

I told him that every common text editor has a word count built in, and that ChatGPT is probably not good at counting words (even though it pretends to be).
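(For what it's worth, counting words locally is a one-liner; "report.txt" below is just a stand-in file name, and whitespace-separated tokens is only a rough approximation of how editors count.)

```python
# Minimal sketch: count words locally instead of asking an LLM.
# "report.txt" is a placeholder; splitting on whitespace roughly matches
# what most editors' built-in word counters report.
with open("report.txt", encoding="utf-8") as f:
    print(len(f.read().split()))
```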

Turns out that his report was already waaaaay above the minimum word count and even needed to be shortened.

So much for the general population's understanding of AI.

I'm studying at a technical university.

mojofrododojo ,

I’m studying at a technical university.

AI is gonna fuck up an entire generation or more.

JackbyDev ,

During Kyle Rittenhouse's trial, the defense attorney objected to using the pinch-to-zoom feature of an iPad because it (supposedly) used AI. The judge upheld the objection, so the prosecution couldn't zoom in on the video.

LordCrom ,

Wait, when did that become OK to do in the first place?

BrianTheeBiscuiteer ,

They told the AI it was going to get locked up with the viruses because they had evidence it was generating CSAM.

rottingleaf ,

The fact that it made it that far is really scary.

I'm starting to think that yes, we are going to have some new middle ages before going on with all that "per aspera ad astra" space colonization stuff.

Strobelt ,

Aren't we already in a kind of dark age?

People denying science, people scared of diseases and vaccination, people treating anything AI or blockchain as if it were magic, people defending power-hungry, all-promising dictators, people divided against each other and calling the other side barbaric. And of course, wars based on religion.

Seems to me we're already in the dark.

Krauerking ,

Oh for sure. We are already in a period that will have some fancy name in future anthropology studies, but the question is how far down we still have to go before we see any light.

abhibeckert , (edited )

Aren’t we already in a kind of dark age?

A bit over 150 years ago, slavery was legal (and commonplace) in the United States.

Sure, there's lots of shitty stuff in the world today... but you don't have to go far back to find a time when a sheriff with zero evidence, relying on unverified accusations and hearsay, would've put up a "wanted dead or alive" poster with a drawing of the guy's face created by an artist who had never even laid eyes on the alleged murderer.

rottingleaf ,

Well, the dark ages came after late antiquity, when slavery was normal. And it took a few centuries for slavery to die out in European societies, though serfdom remained, which wasn't too different. Serfdom in England formally existed even into the 19th century. I'm not talking about Russia, of course, where it played the same role as slavery in the US South.

EDIT: What I meant is that this is more about knowledge and civilization, not good and bad. Also, 150 years is too far back; compared to 25 years ago, I think things are worse in many regards.

rottingleaf , (edited )

Aren’t we already in a kind of dark age?

In the sense that actually making the things at the backbone of our civilization has become a process, and a body of knowledge, heavily centralized and removed from most people's daily lives, yes.

Via many small changes we've come to a situation where everybody uses Intel and AMD or other very complex hardware, directly or indirectly, which requires infrastructure and knowledge more expensive than most nation-states can afford to produce.

People can no longer make a computer usable for our daily tasks by soldering something together from TTL logic and parts bought in a radio store, even though, if not for the network effect, we could perform many of those tasks on such computers. We depend on something even smart people can't build on their own, period.

It's like tanks or airplanes or ICBMs.

A decent automatic rifle, a grenade, or a mortar can easily be made in a workshop. Frankly, even an alternative to a piece of 1950s field artillery can be, along with the ammunition.

What we depend on in daily civilian computing is as complex as ICBMs, and this knowledge is even more sparsely distributed through society than the knowledge of how ICBMs work.

And there's also, of course, the tendency for things to be less repairable (remember when everything came with manuals and schematics?) and for people to treat them like magic.

This is both reminiscent of Asimov's Foundation (only there the Imperial machines were massive while the Foundation's were well miniaturized, but the social mechanisms of Imperial decay were described similarly) and just psychologically unsettling.
