The White House wants to 'cryptographically verify' videos of Joe Biden so viewers don't mistake them for AI deepfakes

Biden's AI advisor Ben Buchanan said a method of clearly verifying White House releases is "in the works."

FrostKing ,

Can someone try to explain, relatively simply, what cryptographic verification actually entails? I've never really looked into it.

0xD ,

I'll be talking about digital signatures, which are the basis for such things. I assume a basic understanding of asymmetric cryptography and hashing.

Basically, you hash the content you want to verify with a secure hashing function and encrypt the value with your private key. You can now append this encrypted value to the content or just release it alongside it.

To now verify this content they can use your public key to decrypt your signature and get the original hash value, and compare it to their own. To get that, they just need to hash the content themselves with the same function.
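A minimal sketch of that hash-then-sign flow in Python. This is textbook RSA with toy primes, purely for illustration; real deployments use 2048-bit+ keys and a padding scheme like RSASSA-PSS, and every name here is invented:

```python
import hashlib

# Toy RSA signing over a SHA-256 digest -- illustration only.
# These textbook-sized primes are trivially breakable.
p, q = 61, 53
n = p * q                           # public modulus (3233)
e = 17                              # public exponent
d = pow(e, -1, (p - 1) * (q - 1))   # private exponent

def sign(message: bytes) -> int:
    # Hash the content, then apply the private-key operation to the hash.
    digest = int.from_bytes(hashlib.sha256(message).digest(), "big") % n
    return pow(digest, d, n)

def verify(message: bytes, signature: int) -> bool:
    # Recompute the hash and compare it to the value recovered
    # with the public key.
    digest = int.from_bytes(hashlib.sha256(message).digest(), "big") % n
    return pow(signature, e, n) == digest

video = b"official White House video bytes"
sig = sign(video)
print(verify(video, sig))  # True
```

Anyone holding only the public pair (n, e) can run `verify`; only the holder of d can produce signatures that pass it.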

So by signing their videos with the White House private key and publishing the public key somewhere, anyone can verify a video's authenticity like that.

For a proper understanding check out DSA :)

Natanael ,

Only RSA uses a function equivalent to encryption when producing signatures, and only when used in one specific scheme. Every other algorithm has a unique signing function.

abhibeckert , (edited )

Click the padlock in your browser, and you'll be able to see that this webpage (if you're using lemmy.world) was encrypted by a server that has been verified by Google Trust Services to be a server which is controlled by lemmy.world. In addition, your browser will remember that... and if you get a page from the same server that has been verified by another cloud provider, the browser (should) flag that and warn you it might be an impostor.

The idea is you'll be able to view metadata on an image and see that it comes from a source that has been verified by a third party such as Google Trust Services.

How it works, mathematically... well, look up "asymmetric cryptography and hashing". It gets pretty complicated and there are a few different mathematical approaches. Basically though, the white house will have a key, that they will not share with anyone, and only that key can be used to authorise the metadata. Even Google Trust Services (or whatever cloud provider you use) does not have the key.
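For the padlock part, here's a short Python sketch of asking a server for its certificate the way a browser does (the hostname is just an example):

```python
import socket
import ssl

def peek_certificate(hostname: str) -> dict:
    """Fetch and validate the certificate a server presents on port 443."""
    # create_default_context() trusts the system's CA roots and
    # requires both a valid certificate and a matching hostname.
    ctx = ssl.create_default_context()
    with socket.create_connection((hostname, 443), timeout=10) as sock:
        with ctx.wrap_socket(sock, server_hostname=hostname) as tls:
            return tls.getpeercert()

# peek_certificate("lemmy.world")["issuer"] names the CA that vouched
# for the server, much like the padlock dialog does.
```

If the certificate doesn't check out against a trusted CA, `wrap_socket` raises an error instead of returning, which is exactly the browser-warning behaviour described above.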

There's been a lot of effort to detect fake images, but that's really never going to work reliably. Proving an image is valid, however... that can be done with pretty good reliability. An attack would be at home on Mission Impossible. Maybe you'd break into a White House photographer's home at night, put their finger on the fingerprint scanner of their laptop without waking them, then use their laptop to create the fake photo... delete all traces of evidence and GTFO. Oh, and everyone would know which photographer supposedly took the photo, ask them how they took that photo of Biden acting out of character, and the real photographer will immediately say they didn't take the photo.

FrostKing ,

Thanks a lot, that helped me understand. Seems like a good idea

Thirdborne ,

When it comes to misinformation, I always remember when I was a kid in the early 90s: another kid told me confidently that the USSR had landed on Mars, gathered rocks, filmed it and returned to Earth (it now occurs to me that this homeschooled kid was confusing the real moon landing). I remember knowing it was bullshit but not having a way to check the facts. The Internet solved that problem. Now, by God, the Internet has recreated the same problem.

Snapz ,

We need something akin to the simplicity and ubiquity of Google that does this, government funded and with transparent oversight. We're past the point of your aunt needing a way to quickly check if something is obvious bullshit.

Call it something like Exx-Ray, the two Xs mean double check - "That sounds very unlikely that they said that Aunt Pat... You need to Exx-Ray shit like that before you talk about it at Thanksgiving"

Or same thing, but with the word Check, CHEXX - "No that sounds like bullshit, I'm gonna CHEXX it... Yup that's bullshit, Randy."

csm10495 ,
@csm10495@sh.itjust.works avatar

Man some Chex mix sounds good right now. They have this one that has chocolate pieces now.

cooopsspace ,

You mean to tell me that cryptography isn't the enemy and that instead of fighting it in the name of "terrorism and child protection" that we should be protecting children by having strong encryption instead??

helenslunch ,
@helenslunch@feddit.nl avatar

I mean they could just create a highly-secure official Fediverse server/account?

stockRot ,

What problem would that solve?

helenslunch ,
@helenslunch@feddit.nl avatar

An official channel to post and review deepfakes for accuracy.

otl ,
@otl@hachyderm.io avatar

A link to the video could be shared via ActivityPub.
The video would be loaded over HTTPS; we can verify that the video is from the white house, and that it hasn't been modified in-transit.

A big issue is that places don't want to share a link to an independently verifiable video, they want you to load a copy of it from their website/app. This way they build trust in the brand (e.g. New York Times), and we spend more time looking at ads or subscribing.
@stockRot @technology

stockRot ,

A big issue is that places don't want to share a link to an independently verifiable video, they want you to load a copy of it from their website/app.

Exactly. This "solution" doesn't take into account how people actually use the Internet. Unless we expect billions of people to change their behavior, this is just a pointless comment.

otl ,
@otl@hachyderm.io avatar

Might be closer than you think. The White House is just using Instagram right now: https://www.whitehouse.gov
(See section “featured media”)

@stockRot @technology

hyperhopper ,

Just because you're writing this on the fediverse doesn't mean it's the answer to everything. It's certainly not the answer to this.

helenslunch ,
@helenslunch@feddit.nl avatar

Sick Strawman bro

recapitated ,

It's a good idea. And I hope to see more of this in other types of communications.

VampyreOfNazareth ,

Government also puts backdoor in said math, gets hacked, official fakes released

Squizzy ,

Or more likely they will only discredit fake news and not verify actual footage that is a poor reflection. Like a hot mic catching someone calling someone a jackass: the White House says no comment.

HawlSera ,

This is sadly necessary

surewhynotlem ,

Fucking finally. We've had this answer to digital fraud for ages.

BrianTheeBiscuiteer ,

Sounds like a very Biden thing (or for anyone well into their Golden Years) to say, "Use cryptography!" but it's not without merit. How do we verify file integrity? How do we digitally sign documents?

The problem we currently have is that anything that looks real tends to be accepted as real (or authentic). We can't rely on humans to verify authenticity of audio or video anymore. So for anything that really matters we need to digitally sign it so it can be verified by a certificate authority or hashed to verify integrity.

This doesn't magically fix deep fakes. Not everyone will verify a video before distribution and you can't verify a video that's been edited for time or reformatted or broadcast on the TV. It's a start.

SpaceCowboy ,
@SpaceCowboy@lemmy.ca avatar

The President's job isn't really to be an expert on everything, the job is more about being able to hire people who are experts.

If this was coupled with a regulation requiring social media companies to do the verification and indicate that the content is verified then most people wouldn't need to do the work to verify content (because we know they won't).

It obviously wouldn't solve every problem with deepfakes, but at least it couldn't be content claiming to be from CNN or whoever. And yes someone editing content from trusted sources would make that content no longer trusted, but that's actually a good thing. You can edit videos to make someone look bad, you can slow it down to make a person look drunk, etc. This kind of content should not considered trusted either.

Someone doing a reaction video going over news content or whatever could have their stuff be considered trusted, but it would be indicated as being content from the person that produced the reaction video, not as content coming from the original news source. So if you see a "news" video that has its verified source as "xXX_FlatEarthIsReal420_69_XXx" rather than CNN, AP News, NY Times, etc., you kinda know what's up.

go_go_gadget ,

We've had this discussion a lot in the Bitcoin space. People keep arguing it has to change so that "grandma can understand it", but I think that's unrealistic. Every technology has some inherent complexities that cannot be removed, and people have to learn them if they want to use it. And people will use it if the motivation is there. Wifi has some inherent complexities people have become comfortable with. People know how to look through lists of networks, find the right one, enter the passkey or go through the sign-on page. Some non-technical people know enough about how Wifi should behave to know the internet connection might be out or the router might need a reboot. None of this knowledge was commonplace 20 years ago. It is now.

The knowledge required to leverage the benefits of cryptographic signatures isn't beyond the reach of most people. The general rules are pretty simple. The industry just has to decide to make the necessary investments to motivate people.

nxdefiant ,

The number of 80 year olds that know what cryptography is AND know that it's a proper solution here is not large. I'd expect an 80 year old to say something like "we should only look at pictures sent by certified mail" or "You can't trust film unless it's an 8mm and the can was sealed shut!"

npaladin2000 ,
@npaladin2000@lemmy.world avatar

If the White House actually makes the deep fakes, do they count as "fakes?"

pineapplelover ,

Huh. They actually do something right for once instead of spending years trying to ban AI tools. I'm pleasantly surprised.

PhlubbaDubba ,

I mean banning use cases is deffo fair game, generating kiddy porn should be treated as just as heinous as making it the "traditional" way IMO

General_Effort ,

Yikes! The implication is that it does not matter if a child was victimized. It's "heinous", not because of a child's suffering, but because... ?

PhlubbaDubba ,

Man imagine trying to make "ethical child rape content" a thing. What were the lolicons not doing it for ya anymore?

As for how it's exactly as heinous, it's the sexual objectification of a child, it doesn't matter if it's a real child or not, the mere existence of the material itself is an act of normalization and validation of wanting to rape children.

Being around at all contributes to the harm of every child victimised by a viewer of that material.

General_Effort ,

I see. Since the suffering of others does not register with you, you must believe that any "bleeding heart liberal" really has some other motive. Well, no. Most (I hope, but at least some) people are really disturbed by the suffering of others.

I take the "normalization" argument seriously. But I note that it is not given much credence in other contexts; violent media, games, ... Perhaps the "gateway drug" argument is the closest parallel.

In the very least, it drives pedophiles underground where they cannot be reached by digital streetworkers, who might help them not to cause harm. Instead, they form clandestine communities that are already criminal. I doubt that makes any child safer. But it's not about children suffering for you, so whatever.

PhlubbaDubba ,

Man imagine continuing to try and argue Ethical Child Rape Content should be a thing.

If we want to make sweeping attacks on character, I'd rather be on the "All Child Rape Material is Bad" side of the argument but whatever floats ya boat.

Fly4aShyGuy ,
@Fly4aShyGuy@lemmy.one avatar

I don't think he's arguing that, and I don't think you believe that either. Doubt any of us would consider that content ethical, but what he's saying is it's not nearly the same as actually doing harm (as opposed to what you said in your original post).

You implying that anyone who disagrees with you is somehow into those awful things is extremely poor taste. I'd expect so much more on Lemmy, that is a Reddit/Facebook level debate tactic. I guess I'm going to get accused of that too now?

I don't like to give any of your posts any credit here, but I can somewhat see the normalization argument. However, where is the line drawn regarding other content that could be harmful because it's normalized? What about adult non-consensual-type porn, violence on TV and in video games, etc.? It's a sliding scale, and everyone might draw the line somewhere else. There's good reason why thinking about an awful thing (or writing, drawing, creating fiction about it) is not the same as doing an awful thing.

I doubt you'll think much of this, but please really try to be better. It's 2024; time to leave calling anyone you disagree with a pedo back on Facebook in the 90s.

TheGrandNagus , (edited )

Idk, making CP where a child is raped vs making CP where no children are involved seem on very different levels of bad to me.

Both utterly repulsive, but certainly not exactly the same.

One has a non-consenting child being abused, a child that will likely carry the scars of that for a long time, the other doesn't. One is worse than the other.

E: do the downvoters like... not care about child sexual assault/rape or something? Raping a child and taking pictures of it is very obviously worse than putting parameters into an AI image generator. Both are vile. One is worse. Saying they're equally bad is attributing zero harm to the actual assaulting children part.

PhlubbaDubba ,

Man imagine trying to make the case for Ethical Child Rape Material.

You are not going to get anywhere with this line of discussion, stop now before you say something that deservedly puts you on a watchlist.

TheGrandNagus , (edited )

I'm not making the case for that at all, and I find you attempting to make out that I am into child porn a disgusting debate tactic.

"Anybody who disagrees with my take is a paedophile" is such a poor argument and serves only to shut down discussion.

It's very obviously not what I'm saying, and anybody with any reading comprehension at all can see that plainly.

You'll notice I called it "utterly repulsive" in my comment - does that sound like the words of a child porn advocate?

The fact that you apparently don't care at all about the child suffering side of it is quite troubling. If a child is harmed in its creation, then that's obviously worse than some creepy fuck drawing loli in Inkscape or typing parameters into an AI image generator. I can't believe this is even a discussion.

CyberSeeker ,

Bingo. If, at the limit, the purpose of a generative AI is to be indistinguishable from human content, then watermarking and AI detection algorithms are absolutely useless.

The ONLY means to do this is to have creators verify their human-generated (or vetted) content at the time of publication (providing positive proof), as opposed to retroactively trying to determine if content was generated by a human (proving a negative).

Deello ,

So basically Biden ads on the blockchain.

TheGrandNagus ,

...no

Think of generating an md5sum to verify that the file you downloaded online is what it should be and hasn't been corrupted during the download process or replaced in a Man in the Middle attack.
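That checksum idea, sketched in Python's hashlib (sha256 shown rather than md5, since md5 collisions are practical these days; the filename and published checksum in the usage comment are made up):

```python
import hashlib

def file_digest(path: str, algo: str = "sha256") -> str:
    """Hash a file in chunks so large downloads don't need to fit in RAM."""
    h = hashlib.new(algo)
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

# Compare the result against the checksum the site published:
# file_digest("video.mp4") == "ab3f..."  -> file is intact and unmodified
```

If even one byte of the file changes in transit, the digest changes completely, which is what makes the comparison meaningful.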

brbposting ,

generating an md5sum to verify that the file you downloaded online

https://sh.itjust.works/pictrs/image/fb788092-0889-4dcb-8500-5f0265f51f96.jpeg

Muehe ,

Cryptography ⊋ Blockchain

A blockchain is cryptography, but not all cryptography is a blockchain.

circuitfarmer ,
@circuitfarmer@lemmy.world avatar

I'm sure they do. AI regulation probably would have helped with that. I feel like Congress was busy with shit that doesn't affect anything.

ours ,

I salute whoever has the challenge of explaining basic cryptography principles to Congress.

Spendrill ,

Might just as well show a dog a card trick.

wizardbeard ,
@wizardbeard@lemmy.dbzer0.com avatar

That's why I feel like this idea is useless, even for the general population. Even with some sort of visual/audio-based hashing, so that the hash is independent of minor changes like video resolution which don't change the content, and with major video sites implementing a way for the site to verify that the hash matches one from a trustworthy keyserver equivalent...

The end result for anyone not downloading the videos and verifying it themselves is the equivalent of those old "✅ safe ecommerce site, we swear" images. Any dedicated misinformation campaign will just fake it, and that will be enough for the people who would have believed the fake to begin with.
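To make the "visual hashing" idea concrete, here's a toy average-hash in Python. Real perceptual hashes (pHash and friends) are far more robust; the 8x8 "thumbnail" below is fabricated for illustration:

```python
def average_hash(pixels):
    """Toy perceptual hash: 64 grayscale values -> 64-bit fingerprint.
    Pixels brighter than the mean become 1-bits."""
    mean = sum(pixels) / len(pixels)
    return sum(1 << i for i, p in enumerate(pixels) if p > mean)

def hamming(a: int, b: int) -> int:
    """Number of differing bits between two fingerprints."""
    return bin(a ^ b).count("1")

frame = [10 * (i % 16) for i in range(64)]   # fake 8x8 grayscale thumbnail
reencoded = [p + 3 for p in frame]           # slight brightness shift
print(hamming(average_hash(frame), average_hash(reencoded)))  # 0
```

Unlike a cryptographic hash, where the brightness shift would change every output bit, the perceptual fingerprint stays close (here, identical), which is what lets it survive re-encoding but also makes it easier for an attacker to game.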

brbposting ,
johnyrocket ,

Should probably start out with the colour mixing one. That was very helpful for me to figure out public key cryptography. The difficulty comes in when they feel like you are treating them like toddlers, so they start behaving more like toddlers. (Which they are 99% of the time)

lemmyingly ,

I see no difference between creating a fake video/image with AI and Adobe's packages. So to me this isn't an AI problem, it's a problem that should have been resolved a couple of decades ago.

andrew_bidlaw ,
@andrew_bidlaw@sh.itjust.works avatar

Why not just use official channels of information, e.g. a White House Mastodon instance with politicians' accounts, government-hosted and auto-mirrored by third parties?

long_chicken_boat ,

what if I meet Joe and take a selfie of both of us using my phone? how will people know that my selfie is an authentic Joe Biden?

PhlubbaDubba ,

Probably a signed comment from the Double-Cone Crusader himself, basically free PR so I don't see why he or any other president wouldn't at least have an intern give you a signed comment fist bump of acknowledgement

fidodo ,

That's the big question. How will we verify anything as real?

cynar ,

Ultimately, reputation-based trust combined with cryptographic keys is likely the best we can do. You (semi-automatically) sign the photo and upload its stamp to a 3rd party. They can verify that they received the stamp from you, and at what time. That proves the image existed at that time, and that it's linked to your reputation. Anything more is just likely to leak, security-wise.
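A minimal sketch of that stamp-and-lookup flow (every name here is invented; a real registry would itself sign its log entries and use monotonic, auditable timestamps):

```python
import hashlib
import time

class StampRegistry:
    """Toy third-party registry: records who stamped which content, and when."""

    def __init__(self):
        self._log = {}  # sha256 digest -> (uploader, first-seen timestamp)

    def submit(self, uploader: str, content: bytes) -> str:
        digest = hashlib.sha256(content).hexdigest()
        # Only the first submission counts -- later copies can't back-date it.
        self._log.setdefault(digest, (uploader, time.time()))
        return digest

    def lookup(self, content: bytes):
        return self._log.get(hashlib.sha256(content).hexdigest())

registry = StampRegistry()
photo = b"raw photo bytes"
registry.submit("wh-photographer", photo)
print(registry.lookup(photo))        # (uploader, first-seen time)
print(registry.lookup(b"altered"))   # None: edited content has no stamp
```

Note that only the digest leaves the photographer's machine, which is the "anything more is likely to leak" point: the registry can vouch for when a photo existed without ever holding the photo itself.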
