
The White House wants to 'cryptographically verify' videos of Joe Biden so viewers don't mistake them for AI deepfakes

Biden's AI advisor Ben Buchanan said a method of clearly verifying White House releases is "in the works."

pineapplelover ,

Huh. They actually do something right for once instead of spending years trying to ban AI tools. I'm pleasantly surprised.

PhlubbaDubba ,

I mean banning use cases is deffo fair game, generating kiddy porn should be treated as just as heinous as making it the "traditional" way IMO

General_Effort ,

Yikes! The implication is that it does not matter if a child was victimized. It's "heinous", not because of a child's suffering, but because... ?

PhlubbaDubba ,

Man imagine trying to make "ethical child rape content" a thing. What were the lolicons not doing it for ya anymore?

As for how it's exactly as heinous, it's the sexual objectification of a child, it doesn't matter if it's a real child or not, the mere existence of the material itself is an act of normalization and validation of wanting to rape children.

Being around at all contributes to the harm of every child victimised by a viewer of that material.

General_Effort ,

I see. Since the suffering of others does not register with you, you must believe that any "bleeding heart liberal" really has some other motive. Well, no. Most (I hope, but at least some) people are really disturbed by the suffering of others.

I take the "normalization" argument seriously. But I note that it is not given much credence in other contexts; violent media, games, ... Perhaps the "gateway drug" argument is the closest parallel.

At the very least, it drives pedophiles underground where they cannot be reached by digital streetworkers, who might help them not to cause harm. Instead, they form clandestine communities that are already criminal. I doubt that makes any child safer. But it's not about children suffering for you, so whatever.

PhlubbaDubba ,

Man imagine continuing to try and argue Ethical Child Rape Content should be a thing.

If we want to make sweeping attacks on character, I'd rather be on the "All Child Rape Material is Bad" side of the argument but whatever floats ya boat.

Fly4aShyGuy ,
@Fly4aShyGuy@lemmy.one avatar

I don't think he's arguing that, and I don't think you believe that either. Doubt any of us would consider that content ethical, but what he's saying is that it's not nearly the same as actually doing harm (as opposed to what you said in your original post).

Implying that anyone who disagrees with you is somehow into those awful things is in extremely poor taste. I'd expect so much more on Lemmy; that is a Reddit/Facebook-level debate tactic. I guess I'm going to get accused of that too now?

I don't like to give any of your posts credit here, but I can somewhat see the normalization argument. However, where is the line drawn regarding other content that could be harmful through normalization? What about non-consensual-style adult porn, violence on TV and in video games, etc.? It's a sliding scale, and everyone might draw the line somewhere else. There's good reason why thinking about an awful thing (or writing, drawing, creating fiction about it) is not the same as doing an awful thing.

I doubt you'll think much of this, but please really try to be better. It's 2024; it's time to leave calling anyone you disagree with a pedo back on Facebook in the 90s.

TheGrandNagus , (edited )

Idk, making CP where a child is raped vs making CP where no children are involved seem on very different levels of bad to me.

Both utterly repulsive, but certainly not exactly the same.

One has a non-consenting child being abused, a child that will likely carry the scars of that for a long time, the other doesn't. One is worse than the other.

E: do the downvoters like... not care about child sexual assault/rape or something? Raping a child and taking pictures of it is very obviously worse than putting parameters into an AI image generator. Both are vile. One is worse. Saying they're equally bad is attributing zero harm to the actual assaulting children part.

PhlubbaDubba ,

Man imagine trying to make the case for Ethical Child Rape Material.

You are not going to get anywhere with this line of discussion, stop now before you say something that deservedly puts you on a watchlist.

TheGrandNagus , (edited )

I'm not making the case for that at all, and I find you attempting to make out that I am into child porn a disgusting debate tactic.

"Anybody who disagrees with my take is a paedophile" is such a poor argument and serves only to shut down discussion.

It's very obviously not what I'm saying, and anybody with any reading comprehension at all can see that plainly.

You'll notice I called it "utterly repulsive" in my comment - does that sound like the words of a child porn advocate?

The fact that you apparently don't care at all about the child suffering side of it is quite troubling. If a child is harmed in its creation, then that's obviously worse than some creepy fuck drawing loli in Inkscape or typing parameters into an AI image generator. I can't believe this is even a discussion.

CyberSeeker ,

Bingo. If, at the limit, the purpose of a generative AI is to be indistinguishable from human content, then watermarking and AI detection algorithms are absolutely useless.

The ONLY means to do this is to have creators verify their human-generated (or vetted) content at the time of publication (providing positive proof), as opposed to retroactively trying to determine whether content was generated by a human (proving a negative).

CyberSeeker ,

Digital signatures as a means of non-repudiation are exactly the way this should be done. Any official docs or releases should be signed and easily verifiable by any public official.

otter ,
@otter@lemmy.ca avatar

Would someone have a high-level overview or ELI5 of what this would look like, especially for the average user? Would we need special apps to verify it? How would it work for stuff posted to social media?

linking an article is also ok :)

pupbiru ,

it would potentially be associated with a law that states that you must not misrepresent a “verified” UI element like a check mark etc, and whilst they could technically add a verified mark wherever they like, the law would prevent that - at least for US companies

it may work in the same way as hardware certifications - i believe that HDMI has a certification standard that cables and devices must be manufactured to certain specifications to bear the HDMI logo, and the HDMI logo is trademarked so using it without permission is illegal… it doesn’t stop cheap knock offs, but it means if you buy things in stores in most US-aligned countries that bear the HDMI mark, they’re going to work

LodeMike ,

There’s already some kind of legal structure for what you’re talking about: trademark. It’s called “I’m Joe Biden and I approve this message.”

If you’re talking about HDCP you can break that with an HDMI splitter so IDK.

captain_aggravated ,
@captain_aggravated@sh.itjust.works avatar

Relying on trademark law to combat deepfake disinformation campaigns has the same energy as "Murder is already illegal, we don't need gun control."

LodeMike ,

Agreed

pupbiru ,

kinda… trademark law and copyright is pretty tightly controlled on the big social media platforms, and really that’s the target here

pupbiru ,

TLDR: trademark law yes, combined with a cryptographic signature in the video metadata… if a platform sees and verifies the signature, they are required to put the verified logo prominently around the video

i’m not talking about HDCP no. i’m talking about the certification process for HDMI, USB, etc

(random site that i know nothing about):
https://www.pacroban.com/en-au/blogs/news/hdmi-certifications-what-they-mean-and-why-they-matter

you’re right; that’s trademark law. basically you’re only allowed to put the HDMI logo on products that are certified as HDMI compatible, which has specifications on the manufacturing quality of cables etc

in this case, you’d only be able to put the verified logo next to videos that are cryptographically signed in the metadata as originating from the whitehouse (or probably better, some federal election authority who signs any campaign videos as certified/legitimate: in australia we have the AEC - australian electoral commission - a federal body that runs our federal elections and investigates election issues, etc)

now this of course wouldn’t work for sites outside of US control, but it would at least slow the flow of deepfakes on facebook, instagram, tiktok, the platform formerly known as twitter… assuming they implemented it, and assuming the govt enforced it

brbposting ,

Once an original video is cryptographically signed, could future uploads be automatically verified based on pixels plus audio? Could allow for commentary to clip the original.

Might need some kind of minimum length restriction to prevent deceptive editing which simply (but carefully) scrambles original footage.

pupbiru ,

not really… signing is only possible on exact copies (like byte exact; not even “the same image” but the same image, formatted the same, without being resized, etc)… there are things called perceptual hashes, and ways of checking if images are similar, but cryptography wouldn’t really help there
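a tiny sketch of the difference in python, using a toy “average hash” over raw grayscale grids (real perceptual hashes resize with an image library and are more sophisticated; this is just to show the contrast with a byte-exact hash):

```python
import hashlib

def average_hash(pixels):
    """Toy perceptual hash: one bit per pixel, set if the pixel is above the mean."""
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    return "".join("1" if p > mean else "0" for p in flat)

original = [
    [200, 200,  10,  10],
    [200, 200,  10,  10],
    [ 30,  30, 220, 220],
    [ 30,  30, 220, 220],
]
# The "same" image after a lossy re-encode: every value nudged slightly
reencoded = [[p + 3 for p in row] for row in original]

# A byte-exact hash sees two entirely different files...
assert hashlib.sha256(bytes([p for r in original for p in r])).hexdigest() != \
       hashlib.sha256(bytes([p for r in reencoded for p in r])).hexdigest()

# ...but the perceptual hash is unchanged, because the coarse structure is the same
assert average_hash(original) == average_hash(reencoded)
```

note the perceptual hash survives the re-encode while sha-256 doesn’t - which is also exactly why perceptual hashes are easy to forge and can’t serve as cryptographic proof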

AbouBenAdhem , (edited )

Depending on the implementation, there are two cryptographic functions that might be used (perhaps in conjunction):

  • Cryptographic hash: An arbitrary amount of data (like a video file) is used to create a “hash”—a shorter, (effectively) unique text string. Anyone can run the file through the same function to see if it produces the same hash; if even a single bit of the file is changed, the hash will be completely different and you’ll know the data was altered.

  • Public key cryptography: A pair of keys are created, one of which can only encrypt data (but can’t decrypt its own output), and the other, “public” key can only decrypt data that was encrypted by the first key. Users (like the White House) can post their public key on their website; then if a subsequent message purporting to come from that user can be decrypted using their public key, it proves it came from them.
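Both primitives can be sketched in a few lines of Python. The hash half uses the standard library’s hashlib; the signature half is deliberately textbook RSA with tiny primes, shown only to illustrate the sign/verify round trip (real deployments use vetted libraries and much larger keys, e.g. Ed25519 or RSA-2048):

```python
import hashlib

# --- Cryptographic hash: any change produces a completely different digest ---
video_a = b"official White House video bytes..."
video_b = b"official White House video bytes,.."  # one character altered

digest_a = hashlib.sha256(video_a).hexdigest()
digest_b = hashlib.sha256(video_b).hexdigest()
assert digest_a != digest_b  # the alteration is immediately detectable

# --- Public-key signing: textbook RSA with tiny primes (INSECURE, demo only) ---
p, q = 61, 53                        # real keys use primes hundreds of digits long
n = p * q                            # public modulus
e = 17                               # public exponent (published)
d = pow(e, -1, (p - 1) * (q - 1))    # private exponent (kept secret)

def sign(message: bytes) -> int:
    """Hash the message, then transform the hash with the private key."""
    h = int.from_bytes(hashlib.sha256(message).digest(), "big") % n
    return pow(h, d, n)

def verify(message: bytes, signature: int) -> bool:
    """Anyone holding the public key (n, e) can check the signature."""
    h = int.from_bytes(hashlib.sha256(message).digest(), "big") % n
    return pow(signature, e, n) == h

sig = sign(video_a)
assert verify(video_a, sig)  # the genuine release checks out
# verify(video_b, sig) would fail: the altered file's hash no longer matches
```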

Serinus ,

a shorter, (effectively) unique text string

A note on this. There are other videos that will hash to the same value as a legitimate video. Finding one that is coherent is extraordinarily difficult. Maybe a state actor could do it?

But for practical purposes, it'll do the job. Hell, if a doctored video with the same hash comes out, the White House could just say no, the one we published is this one, and that alone would be remarkable.

CyberSeeker ,

There are other videos that will hash to the same value

This concept is known as ‘collision’ in cryptography. While technically true for weaker key sizes, there are entire fields of mathematics dedicated to provably ensuring collisions are cosmically unlikely. MD5 and SHA-1 have a small enough key space for collisions to be intentionally generated in a reasonable timeframe, which is why they have been deprecated for several years.

To my knowledge, SHA-2 with sufficiently large key size (2048) is still okay within the scope of modern computing, but beyond that, you’ll want to use Dilithium or Kyber CRYSTALS for quantum resistance.

Natanael ,

SHA family and MD5 do not have keys. SHA1 and MD5 are insecure due to structural weaknesses in the algorithm.

Also, 2048 bits apply to RSA asymmetric keypairs, but SHA1 is 160 bits with similarly sized internal state and SHA256 is as the name says 256 bits.

ECC is a public key algorithm which can have 256 bit keys.

Dilithium is indeed a post quantum digital signature algorithm, which would replace ECC and RSA. But you'd use it WITH a SHA256 hash (or SHA3).
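The output sizes are easy to check from Python's hashlib; digest length is a fixed property of each function, not a configurable "key size":

```python
import hashlib

# Output length is a fixed property of each hash function
print(hashlib.sha1(b"x").digest_size * 8)    # 160 bits
print(hashlib.sha256(b"x").digest_size * 8)  # 256 bits
print(hashlib.sha512(b"x").digest_size * 8)  # 512 bits
```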

CyberSeeker ,

Good catch, and appreciate the additional info!

AbouBenAdhem , (edited )

Finding one that is coherent is extraordinarily difficult.

You’d need to find one that was not just coherent, but that looked convincing and differed in a way that was useful to you—and that likely wouldn’t be guaranteed, even theoretically.

Natanael ,

Pigeonhole principle says one exists for any file substantially longer than the hash value length, but it's going to be hard to find

ReveredOxygen ,
@ReveredOxygen@sh.itjust.works avatar

Even for a 4096-bit hash (which isn't used afaik; usually only 1024-bit is used (but this could be outdated)), you only need to change 4096 bits on average. Take a 1080p still image: that's 1920x1080 pixels. If you change the least significant bit of each color channel, you get 6,220,800 bits you can change without anyone noticing. That means there are on average 1,518 identical-looking variations of any image with a given 4096-bit hash.
This goes down a lot when you factor in compression: those least significant bits aren't going to stay the same. But using a video brings it up by orders of magnitude: rather than one image, you can tweak colors in every frame.
The difficulty doesn't come from existence; it comes from the fact that you'd need to check 2⁵¹² ≈ 10¹⁵⁴ different images to guarantee you'll find a match. Hash functions are designed to take a while to compute, so you'd have to run a supercomputer for an extremely long time to brute-force a hash collision.
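The least-significant-bit part is easy to demo, hashing a raw byte buffer as a stand-in for pixel data:

```python
import hashlib

# A tiny "image": one byte per colour-channel value
image = bytes(range(64))

# Flip the least significant bit of every byte: visually imperceptible,
# yet a different file with a completely different hash
variant = bytes(b ^ 1 for b in image)
assert hashlib.sha256(image).digest() != hashlib.sha256(variant).digest()

# Number of individually tweakable bits in a 1080p image with 3 channels;
# any subset can be flipped, giving 2**n_bits near-identical variants
n_bits = 1920 * 1080 * 3
print(n_bits)  # 6220800
```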

Natanael ,

Most hash functions are 256-bit (they're symmetric functions; they don't need more in most cases).

There are arbitrary-length functions (called XOFs instead of hashes) which are built similarly (used when you need to generate longer random-looking outputs).

Other than that, yeah, the math shows you don't need to change more data in the file than the length of the hash function's internal state or output length (whichever is less) to create a collision. The reason they're still secure is that it's still extremely difficult to reverse the function or brute-force 2^256 possible inputs.

ReveredOxygen ,
@ReveredOxygen@sh.itjust.works avatar

Yeah, I was using a high length at first because even if you overestimate, that's still a lot. I did 512 for the second because I don't know a ton about cryptography, but that's the largest SHA output.

Natanael ,

Public key cryptography would involve signatures, not encryption, here.

AtHeartEngineer ,
@AtHeartEngineer@lemmy.world avatar

The best way this could be handled is a green check mark near the video. You could click on it and it would give you all the metadata of the video (location, time, source, etc.) along with a digital signature (what would look like a random string of text). Clicking on that would have your browser show you the chain of trust: where the signature came from, that it's valid, probably the manufacturer of the equipment it was recorded on, etc.

ulterno ,
@ulterno@lemmy.kde.social avatar

Just make sure the check mark is outside the video.

Natanael ,

Browser controlled modal.

wizardbeard ,
@wizardbeard@lemmy.dbzer0.com avatar

The issue is making that green check mark hard to fake for bad actors. Https works because it is verified by the browser itself, outside the display area of the page. Unless all sites begin relying on a media player packed into the browser itself, if the verification even appears to be part of the webpage, it could be faked.

brbposting ,

Hope verification gets built in to operating systems as compromised applications present a risk too.

But I’m sure a crook would build a MAGA Verifier since you can’t trust liberal Apple/Microsoft technology.

dejected_warp_core , (edited )

The only thing that comes to mind is something that forces interactivity outside the browser display area; out of the reach of Javascript and CSS. Something that would work for both mobile and desktop would be a toolbar icon that is a target for drag-and-drop. Drag the movie or image to the "verify this" target, and you get a dialogue or notification outside the display area. As a bonus, it can double for verifying TLS on hyperlinks while we're at it.

Edit: a toolbar icon that's draggable to the image/movie/link should also work the same. Probably easier for mobile users too.

Natanael ,

If you set the download manager icon in the browser as permanently visible, then dragging it there could trigger the verification to also run if the metadata is detected, and to then also show whichever metadata it could verify.

dejected_warp_core ,

That's a tad obscure, but makes it much easier to code up a prototype. I like it.

Natanael ,

Do not show a checkmark by default! This is why cryptographers kept telling browsers to de-emphasize the lock icon on TLS (HTTPS) websites. You want to display the claimed author and if you're able to verify keypair authenticity too or not.

AtHeartEngineer ,
@AtHeartEngineer@lemmy.world avatar

Fair point, I agree with this. There should probably be another icon in the browser that shows if all, some, or none of the media on a page has signatures that can be validated. Though that gets messy as well, because what is "media"? Things can be displayed in a web canvas or SVG that appears to be a regular image, when in reality it's rendered on the fly.

Security and cryptography UX is hard. Good point, thanks for bringing that up! Btw, this is kind of my field.

Natanael ,

I run /r/crypto at reddit (not so active these days due to needing to keep it locked because of spam bots, but it's not dead yet), usability issues like this are way too common

AtHeartEngineer ,
@AtHeartEngineer@lemmy.world avatar

I ran /r/cryptotechnology for years, and am good friends with the /r/cc mods. Reddit is a mess though, especially in the crypto areas.

PhlubbaDubba ,

Probably you'd notice a bit of extra time posting for the signature to be added, but that's about it. The responsibility for verifying the signature would fall to the owners of the social media site. In the circumstances where someone asks for a verification, basically imagine it as a libel case on fast forward: you file a claim saying "I never said that", they check signatures, they shrug and press the delete button, and erase the post, crossposts, and (if it's really good) screencap posts and those crossposts of the thing you did not say but that is still being attributed falsely to your account or person.

It basically gives absolute control of a person's own image and voice to themself: unless a piece of media is provably made with that person's consent, or by that person themself, it can be wiped from the internet no trouble.

When it comes to second-party posters, news agencies and such, it'd be more complicated but more or less the same, with the added step that a news agency may be required to provide some supporting evidence that what they said is not some kind of misrepresentation, as the offended party filing the takedown might be trying to insist for the sake of their public image.

Of course there could still be a YouTube "Stats for Nerds"-esque addin to the options tab on a given post that allows you to sign-check it against the account it's attributing something to, and a verified account system could be developed that adds a layer of signing that specifically identifies a published account, like say for prominent news reporters/politicians/cultural leaders/celebrities, that get into their own feed so you can look at them or not depending on how ya be feelin' that particular scroll session.

General_Effort , (edited )

For the average end-user, it would look like "https". You would not have to know anything about the technical background. Your browser or other media player would display a little icon showing that the media is verified by some trusted institution, and you could learn more with a click.

In practice, I see some challenges. You could already go to the source via https, e.g. whitehouse.gov, and verify it that way. An additional benefit exists only if you can verify media that have been re-uploaded elsewhere. Now the user needs to check not just that the media was signed by someone (e.g. whitehouse.gov.ru), but that it was really signed by the right institution.
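In code, that check amounts to a pinned trust store rather than "any valid signature". A hypothetical Python sketch (the domains and key bytes are invented for illustration):

```python
import hashlib

# Hypothetical pinned trust store: institution -> fingerprint of its public key
TRUSTED_SIGNERS = {
    "whitehouse.gov": hashlib.sha256(b"whitehouse-public-key").hexdigest(),
}

def is_trusted(claimed_domain: str, public_key: bytes) -> bool:
    """A signature only means something if the key belongs to who we expect."""
    expected = TRUSTED_SIGNERS.get(claimed_domain)
    return expected is not None and hashlib.sha256(public_key).hexdigest() == expected

assert is_trusted("whitehouse.gov", b"whitehouse-public-key")
# A lookalike domain fails, even if it has a perfectly valid signature of its own:
assert not is_trusted("whitehouse.gov.ru", b"scammer-public-key")
```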

TheKingBee ,
@TheKingBee@lemmy.world avatar

As someone points out above, this just gives them the power to not authenticate real videos that make them look bad...

General_Effort ,

Videos by third parties, like Trump's pussy grabber clip, would obviously have to be signed by them. After having thought about it, I believe this is a non-starter.

It just won't be as good as https. Such a signing scheme only makes sense if the media is shared away from the original website. That means you can't just take a quick look at the address bar to make sure you are not getting phished. That doesn't work if it could be any news agency. You have to make sure that the signer is really a trusted agency and not some scammy lookalike. That takes too much care for casual use, which defeats the purpose.

Also, news agencies don't have much of an incentive to allow sharing their media. Any cryptographic signature would only make sense for them if it directs users to their site, where they can make money. Maybe the potential for more clicks - basically a kind of clickable watermark on media - could make this take off.

dejected_warp_core ,

I honestly feel strategies like this should be mitigated by technically savvy journalism, or even citizen journalism. 3rd parties can sign and redistribute media in the public domain, vouching for their origin. While that doesn't cover all the unsigned copies in existence, it provides a foothold for more sophisticated verification mechanisms like a "tineye" style search for media origin.

Starbuck ,

Adobe is actually one of the leading actors in this field, take a look at the Content Authenticity Initiative (https://contentauthenticity.org/)

Like the other person said, it’s based on cryptographic hashing and signing. Basically the standard would embed metadata into the image.

dejected_warp_core , (edited )

TL;DR: one day the user will see an overlay or notification that shows an image/movie is verified as from a known source. No extra software required.

Honestly, I can see this working great in future web browsers. Much like the padlock in the URL bar, we could see something on images that are verified. The image could display a padlock in the lower-left corner or something, along with the name of the source, demonstrating that it's a securely verified asset. "Normal" images would be unaffected. The big problem is how to put something on the page that cannot be faked by other means.

It's a little more complicated for software like phone apps for X or Facebook, but doable. The problem is that those products must choose to add this feature. Hopefully, losing reputation to being swamped with unverifiable media will be motivation enough to do so.

The underlying verification process is complex, but should be similar to existing technology (e.g. GPG). The key is that images and movies typically contain a "scratch pad" area in the file for miscellaneous stuff (metadata). This is where the image's author can add a cryptographic signature for the file itself. The user would never even know it's there.
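For instance, a minimal sketch of the idea in Python, with the file modeled as content bytes plus a metadata dict (the digest field name is made up, and a real scheme would sign the digest with the author's private key rather than store it bare):

```python
import hashlib

def publish(pixel_data: bytes) -> dict:
    """Author side: store a digest of the content in the file's metadata
    "scratch pad". (A real scheme signs this digest with a private key.)"""
    return {
        "pixels": pixel_data,
        "metadata": {"x-content-digest": hashlib.sha256(pixel_data).hexdigest()},
    }

def verify(file: dict) -> bool:
    """Viewer side: recompute the digest over the content only and compare."""
    claimed = file["metadata"]["x-content-digest"]
    return hashlib.sha256(file["pixels"]).hexdigest() == claimed

original = publish(b"\x89PNG...pixel data...")
assert verify(original)

# Doctoring the content without updating the metadata is detectable:
tampered = dict(original, pixels=b"\x89PNG...doctored data...")
assert not verify(tampered)
```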

Cocodapuf ,

It needs some kind of handler, but we mostly have those in place. A web browser could be the handler for instance. A web browser has the green dot on the upper left, telling you a page is secure, that https is on and valid. This could work like that, the browser can verify the video and display a green or red dot in the corner, the user could just mouse over it/tap on it to see who it's verified to be from. But it's up to the user to mouse over it and check if it says whitehouse.gov or dr-evil-mwahahaha.biz

pupbiru ,

i wouldn’t say signature exactly, because that ensures that a video hasn’t been altered in any way: not re-encoded, resized, cropped, trimmed, etc… platforms almost always do some of these things to videos, even if it’s not noticeable to the end-user

there are perceptual hashes, but i’m not sure if they work in a way that covers all those things or if they’re secure hashes. i would assume not

perhaps platforms would read the metadata in a video for a signature and have to serve the video entirely unaltered if it’s there?

AbouBenAdhem ,

Rather than using a hash of the video data, you could just include within the video the timestamp of when it was originally posted, encrypted with the White House’s private key.

Natanael ,

That doesn't prove that the data outside the timestamp is unmodified

AbouBenAdhem , (edited )

It does if you can also verify the date of the file, because the modified file will be newer than the timestamp. An immutable record of when the file was first posted (on, say, YouTube) lets you verify which version is the source.

Natanael ,

No it does not, because you can cut the timestamp out and put it into anything if the timestamp doesn't encode anything about the frame contents.

It is always possible to backdate file edits.

Sure, public digital timestamping services exist, but most people will not check. Also, once again, an older timestamp can simply be cut out of one file and pasted into another file.

You absolutely must embed something which identifies what the media file is, which can be used to verify ALL of the contents with cryptographic signatures. This may additionally refer to a verifiable timestamp at some timestamping service.

thantik ,

You don't need to bother with cryptographically verifying downstream videos, only the source video needs to be able to be cryptographically verified. That way you have an unedited, untampered cut that can be verified to be factually accurate to the broadcast.

The White House could serve the video themselves if they so wanted to. Just use something similar to PGP for signature validation and voila. Studios can still do all the editing, cutting, etc - it shouldn't be up to the end user to do the footwork on this, just for the studios to provide a kind of 'chain of custody' - they can point to the original verification video for anyone to compare to; in order to make sure alterations are things such as simple cuts, and not anything more than that.

pupbiru , (edited )

you don’t even need to cryptographically verify in that case because you already have a trusted authority: the whitehouse… if the video is on the whitehouse website, it’s trusted with no cryptography needed

the technical solutions only come into play when you’re trying to modify the video and still accurately show that it’s sourced from something verifiable

heck you could even have a standard where if a video adds a signature to itself, editing software will add the signature of the original, a canonical immutable link to the file, and timestamps for any cuts to the video… that way you (and by you i mean anyone; likely hidden from the user) can load up a video and be able to link to the canonical version to verify

in this case, verification using ML would actually be much easier because you (servers) just download the canonical video, cut it as per the metadata, and compare what’s there to what’s in the current video
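a rough sketch of that server-side check over lists of frame bytes (the edit-metadata schema here is made up):

```python
import hashlib

def frame_hashes(frames):
    return [hashlib.sha256(f).hexdigest() for f in frames]

# Canonical video hosted at the source of truth
canonical = [b"frame0", b"frame1", b"frame2", b"frame3", b"frame4"]

# Hypothetical metadata a compliant editor would embed alongside its cut
edit = {"source": "https://example.gov/canonical.mp4", "keep": [(1, 4)]}

clipped = canonical[1:4]  # what the edited upload actually contains

def verify_clip(clip, edit, canonical):
    """Server side: cut the canonical video per the metadata, compare hashes."""
    expected = []
    for start, end in edit["keep"]:
        expected.extend(frame_hashes(canonical[start:end]))
    return frame_hashes(clip) == expected

assert verify_clip(clipped, edit, canonical)       # honest cut verifies
assert not verify_clip([b"fake frame"], edit, canonical)  # substitution fails
```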

Natanael ,

Apple's scrapped on-device CSAM scanning was based on perceptual hashes.

The first collision demo breaking them showed up in hours with images that looked glitched. After just a week the newest demos produced flawless images with collisions against known perceptual hash values.

In theory you could create some ML-ish compact learning algorithm and use the compressed model as a perceptual hash, but I'm not convinced this can be secure enough unless it's allowed to be large enough, as in some % of the original's file size.

pupbiru ,

you can definitely produce perceptual hashes that collide, but really you’re not just talking about a collision, you’re talking about a collision that’s also useful in subverting an election, AND that’s been generated using ML, which is something that’s still kinda shaky to start with

Natanael ,

Perceptual hash collision generators can take arbitrary images and tweak them in invisible ways to make them collide with whichever hash value you want.

pupbiru ,

from the comment above, it seems like it took a week for a single image/frame though… it’s possible sure but so is a collision in a regular hash function… at some point it just becomes too expensive to be worth it, AND the phash here isn’t being used as security because the security is that the original was posted on some source of truth site (eg the whitehouse)

Natanael ,

No, it took a week to refine the attack algorithm, the collision generation itself is fast

The point of perceptual hashes is to let you check if two things are similar enough after transformations like scaling and reencoding, so you can't rely on that here

pupbiru ,

oh yup that’s a very fair point then! you certainly wouldn’t use it for security in that case, however there are a lot of ways to implement this that don’t rely on the security of the hash function, but just uses it (for example) to point to somewhere in a trusted source to manually validate that they’re the same

we already have the trust frameworks; that’s unnecessary… we just need to automatically validate (or at least provide automatic verifiability) that a video posted on some 3rd party - probably friendly or at least cooperative - platform represents reality

Natanael ,

I think the best bet is really video formats with multiple embedded streams carrying complementary frame data (already exists) so you decide video quality based on how many streams you want to merge in playback.

If you then hashed the streams independently and signed the list of hashes, then you have a video file which can be "compressed" without breaking the signature by stripping out some streams.
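A sketch of that manifest idea in Python, with per-stream SHA-256 hashes. The signature over the manifest is simulated here by pinning its digest; a real scheme would sign it with a private key:

```python
import hashlib

streams = {
    "base":        b"low-res frame data",
    "enhance-720": b"detail layer for 720p",
    "enhance-4k":  b"detail layer for 4k",
}

# Author side: manifest of per-stream hashes, plus a pinned digest standing in
# for a signature over the manifest
manifest = {name: hashlib.sha256(data).hexdigest() for name, data in streams.items()}
signed_digest = hashlib.sha256(repr(sorted(manifest.items())).encode()).hexdigest()

def verify(present_streams, manifest, pinned_digest):
    """Each surviving stream must match the signed manifest exactly."""
    if hashlib.sha256(repr(sorted(manifest.items())).encode()).hexdigest() != pinned_digest:
        return False  # the manifest itself was tampered with
    return all(
        hashlib.sha256(data).hexdigest() == manifest[name]
        for name, data in present_streams.items()
    )

# "Compressing" by stripping the 4k layer still verifies:
stripped = {k: v for k, v in streams.items() if k != "enhance-4k"}
assert verify(stripped, manifest, signed_digest)

# Re-encoding any remaining stream breaks verification:
tampered = dict(stripped, base=b"re-encoded base layer")
assert not verify(tampered, manifest, signed_digest)
```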

mods_are_assholes ,

Maybe deepfakes are enough of a scare that this becomes standard practice, and protects encryption from getting government backdoors.

mods_are_assholes ,

Hey, congresscritters didn't give a shit about robocalls till they were the ones getting robocalled.

We had a do not call list within a year and a half.

That's the secret, make it affect them personally.

Daft_ish ,

Doesn't that prove that government officials lack empathy? We see it again and again but still we keep putting these unfeeling bastards in charge.

mods_are_assholes ,

Well sociopaths are really good at navigating power hierarchies and I'm not sure there is an ethical way of keeping them from holding office.

Natanael ,

It really depends on their motivation. The ones we need to keep out are the ones who enjoy hurting others or don't care at all.

autotldr Bot ,

This is the best summary I could come up with:


The White House is increasingly aware that the American public needs a way to tell that statements from President Joe Biden and related information are real in the new age of easy-to-use generative AI.

Big Tech players such as Meta, Google, Microsoft, and a range of startups have raced to release consumer-friendly AI tools, leading to a new wave of deepfakes — last month, an AI-generated robocall attempted to undermine voting efforts related to the 2024 presidential election using Biden's voice.

Yet, there is no end in sight for more sophisticated new generative-AI tools that make it easy for people with little to no technical know-how to create fake images, videos, and calls that seem authentic.

Ben Buchanan, Biden's Special Advisor for Artificial Intelligence, told Business Insider that the White House is working on a way to verify all of its official communications due to the rise in fake generative-AI content.

While last year's executive order on AI created an AI Safety Institute at the Department of Commerce tasked with creating standards for watermarking content to show provenance, the effort to verify White House communications is separate.

Ultimately, the goal is to ensure that anyone who sees a video of Biden released by the White House can immediately tell it is authentic and unaltered by a third party.


The original article contains 367 words, the summary contains 218 words. Saved 41%. I'm a bot and I'm open source!
