Welcome to Incremental Social! Learn more about this project here!
Check out lemmyverse to find more communities to join from here!

vegeta ,

Bless the maker and his water

NoRodent ,
@NoRodent@lemmy.world avatar

Type without rhythm and it won't attract the worm.

YarHarSuperstar ,
@YarHarSuperstar@lemmy.world avatar

This doesn't sound like it could have any negative consequences or anything.

autotldr Bot ,

This is the best summary I could come up with:


Startups and tech companies are building AI agents and ecosystems on top of the systems that can complete boring chores for you: think automatically making calendar bookings and potentially buying products.

The research, which was undertaken in test environments and not against a publicly available email assistant, comes as large language models (LLMs) are increasingly becoming multimodal, being able to generate images and video as well as text.

While generative AI worms haven’t been spotted in the wild yet, multiple researchers say they are a security risk that startups, developers, and tech companies should be concerned about.

To show how the worm can work, the researchers created an email system that could send and receive messages using generative AI, plugging into ChatGPT, Gemini, and open source LLM, LLaVA.

Despite this, there are ways people creating generative AI systems can defend against potential worms, including using traditional security approaches.

There should be a boundary there.” For Google and OpenAI, Swanda says that if a prompt is being repeated within its systems thousands of times, that will create a lot of “noise” and may be easy to detect.


The original article contains 1,239 words, the summary contains 186 words. Saved 85%. I'm a bot and I'm open source!

  • All
  • Subscribed
  • Moderated
  • Favorites
  • technology@lemmy.world
  • incremental_games
  • meta
  • All magazines