
mkhoury (OP):

I agree that with the current state of tooling around LLMs, this is very inadvisable. But I think we can develop the right tools.

  • We can have tools that generate the context and info submitters need to understand what was done, explain the choices they're making, discuss edge cases, and so on. This includes capturing screenshots as the submitter uses the app, and enforcing a testing period (requiring X amount of time of the submitter actually using their feature and smoothing out the experience). A sketch of such a check follows this list.

  • We can have tools at the repo level that scan a submission and analyze its effects. They can also isolate each submitted feature, so that others can toggle it off or modify it if it's not to their liking (a feature-toggle sketch also follows below). Similarly, you can have lots of LLMs impersonate typical users and exercise the modifications to make sure they work, with humans in the loop at the appropriate points.
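
For the first idea, here's a minimal sketch of what a submitter-side evidence check could look like. Everything here is hypothetical: the `evidence.json` layout, the field names (`usage_hours`, `design_notes`), and the 2-hour threshold are just illustrations, not an existing tool.

```python
"""Sketch: validate that a submission ships with usage evidence.
The bundle layout and thresholds below are hypothetical."""
import json
import sys
from pathlib import Path

MIN_USAGE_HOURS = 2.0  # hypothetical "testing period" requirement


def check_evidence(bundle_dir: str) -> list[str]:
    """Return a list of problems with the submitter's evidence bundle."""
    problems = []
    bundle = Path(bundle_dir)

    manifest_path = bundle / "evidence.json"
    if not manifest_path.exists():
        return ["missing evidence.json manifest"]
    manifest = json.loads(manifest_path.read_text())

    # Require screenshots captured while the submitter used the feature.
    screenshots = list(bundle.glob("screenshots/*.png"))
    if not screenshots:
        problems.append("no screenshots of the feature in use")

    # Require a minimum amount of hands-on usage time.
    hours = float(manifest.get("usage_hours", 0))
    if hours < MIN_USAGE_HOURS:
        problems.append(
            f"only {hours:.1f}h of recorded usage; need {MIN_USAGE_HOURS:.1f}h"
        )

    # Require a written explanation of the choices the LLM made.
    if not manifest.get("design_notes"):
        problems.append("no design notes explaining the generated changes")

    return problems


if __name__ == "__main__":
    issues = check_evidence(sys.argv[1] if len(sys.argv) > 1 else ".")
    for issue in issues:
        print(f"FAIL: {issue}")
    sys.exit(1 if issues else 0)
```

A CI job could run this against the submission and refuse to open the review until it passes, so reviewers only ever see contributions the submitter has actually lived with.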
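
And for the second idea, a minimal sketch of isolating submitted features behind toggles, assuming a simple in-repo flag registry. The names (`FEATURE_FLAGS`, `csv_export`, `dark_mode`) are illustrative, not a real system.

```python
"""Sketch: register each LLM-submitted feature behind a flag so
maintainers or users can disable it without touching the rest of
the code. Flag names and state here are hypothetical."""
from typing import Callable, Dict

# Hypothetical flag state; a real tool would load this from repo config.
FEATURE_FLAGS: Dict[str, bool] = {
    "dark_mode": True,      # submitted feature, currently enabled
    "csv_export": False,    # submitted feature, toggled off by a maintainer
}


def feature(name: str, fallback: Callable = lambda *a, **k: None):
    """Route calls to the submitted implementation only if its flag is on."""
    def decorator(fn: Callable) -> Callable:
        def wrapper(*args, **kwargs):
            if FEATURE_FLAGS.get(name, False):
                return fn(*args, **kwargs)
            return fallback(*args, **kwargs)
        return wrapper
    return decorator


@feature("csv_export")
def export_csv(rows):
    # Body of the LLM-submitted feature lives here, isolated behind a flag.
    return "\n".join(",".join(map(str, r)) for r in rows)


if __name__ == "__main__":
    # With csv_export toggled off, the call falls back to a no-op.
    print(export_csv([[1, 2], [3, 4]]))  # -> None while the flag is False
```

The same registry is a natural hook for the simulated-user testing: spin up LLM "users" against the app with each flag on and off, and flag regressions before a human ever reviews the change.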

People are submitting LLM-generated code they don't understand right now. How do we protect repos? How do we welcome these contributions while lowering the risk? I think that with the right engineering effort, this can be done.
