
variaatio

@variaatio@sopuli.xyz


variaatio (edited)

The main issue comes from the GDPR. When one uses consent as the legal basis for collecting and using information, the consent has to be freely given. Thus one can't offer "pay us and we collect less information about you". Hence "pay or consent" is blatantly illegal. Showing ads in general? You don't need consent for that; the consent there is "I vote with my browser address bar". The thing is, nobody wants to serve untracked ads anymore.....

So in this case DMA Article 5(2) is basically just reinforcement and emphasis of an existing GDPR principle. From The Verge:

“exercise their right to freely consent to the combination of their personal data.”

From the regulation:

  2. The gatekeeper shall not do any of the following:
    (a) process, for the purpose of providing online advertising services, personal data of end users using services of third parties that make use of core platform services of the gatekeeper;
    (b) combine personal data from the relevant core platform service with personal data from any further core platform services or from any other services provided by the gatekeeper or with personal data from third-party services;
    (c) cross-use personal data from the relevant core platform service in other services provided separately by the gatekeeper, including other core platform services, and vice versa; and
    (d) sign in end users to other services of the gatekeeper in order to combine personal data,

unless the end user has been presented with the specific choice and has given consent within the meaning of Article 4, point (11), and Article 7 of Regulation (EU) 2016/679.

Surprise: Regulation (EU) 2016/679 is..... the GDPR. So yes, it's a new violation, but it pretty much amounts to "gatekeepers are under extra scrutiny for GDPR matters. If you violate, we can charge you for both the GDPR violation and the DMA violation, plus the DMA adds some extra rules and explicitness."

I think technically the GDPR already bans combining data without permission, since the GDPR demands consent for every use case under consent-based processing. There must be consent for processing; combining is processing, so it needs consent. However, that is an interpretation of the GDPR's general principle. The DMA just makes it explicit: "oh, these specific kinds of processing, yes, these need consent per the GDPR". It also rules out the business trying to argue the "legitimate interest" legal basis for processing, explicitly ruling that these types of processing don't fall under legitimate interest for these companies; they are only and explicitly consent-based actions.

variaatio

That is just its core function doing its thing: transforming inputs to outputs based on learned pattern matching.

It may not have been trained on translation explicitly, but it very much has been trained on "these things correspond" via its training material. You know what its training set most likely contained..... dictionaries. Which is as good as asking it to learn translation. Other things most likely in the training data: language course books, with matching translated sentences in them. Again, you didn't explicitly tell it to learn to translate, but in practice the training data selection did it for you.
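As a deliberately tiny sketch of that mechanism (all data invented here): dictionary-style parallel lines in a corpus define a translation mapping on their own, even though nobody labeled the task "translation":

```python
# Toy "training set": dictionary-style parallel lines, as a corpus might contain.
training_lines = [
    "cat = chat",
    "dog = chien",
    "house = maison",
]

# "Training": harvest the co-occurring pairs, much like statistical machine
# translation once mined parallel text.
pairs = dict(line.split(" = ") for line in training_lines)

def translate(word: str) -> str:
    # "Inference": pure pattern matching against what the data contained.
    return pairs.get(word, "<unknown>")

print(translate("dog"))  # -> chien; translation falls out of the data alone
```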

variaatio

Well it could also be a lever or a switch.

variaatio

Never take Klarna's word for anything. They are the fine and dandy company whose business model routinely involved fishing for customers' bank authorization credentials.

variaatio

Well, the difference is you have to know coding to tell whether the AI produced what you actually wanted.

Anyone can read the letter and tell whether the AI hallucinated or actually produced what you wanted.

On code: it might produce code that on the first try does what you ask. However, it turns out the AI hallucinated a bug into the code for some edge or specialty case.
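A hypothetical illustration of that kind of bug (invented for this comment, not actual model output): something that sails through a casual first try but fails on the specialty cases:

```python
def is_leap_year(year: int) -> bool:
    """Plausible-looking: passes a casual first try with 2023, 2024, etc."""
    return year % 4 == 0  # bug: misses the century rule, so 1900 comes out True

# What the edge cases actually require (the full Gregorian rule):
def is_leap_year_correct(year: int) -> bool:
    return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)
```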

Hallucinating is not a minor hiccup or a minor bug; it is a fundamental feature of LLMs, since an LLM isn't actually smart. It is a stochastic regurgitator. It doesn't know what you asked or understand what it is actually doing; it is matching prompt patterns to output. With enough training patterns to match, it statistically usually ends up about right. However, this is not guaranteed, and that is the main weakness of the system. More good training data makes it more likely to produce good results more often. But for business-critical work, for example, you aren't interested in whether it got it about right the other 99 times. It has to get it 100% right this one time, since this code goes into a production business deployment.

I guess one can write a comprehensive enough verified testing suite, including all the edge cases, and verify the result with that. However, now you have just shifted the job: instead of a programmer programming the program, you have a programmer programming the very, very comprehensive testing routines. Which can't be done by the LLM, since the whole point of the testing routines is to check for the inherent unreliability of the LLM output.
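As a minimal sketch of that shifted job, reusing the hypothetical leap-year example from above: the human-written tests encode the edge cases, and they are exactly what catches the hallucinated bug:

```python
import unittest

def is_leap_year(year: int) -> bool:
    # The plausible-but-buggy sketch from above, repeated so this runs alone.
    return year % 4 == 0

class LeapYearTests(unittest.TestCase):
    def test_typical_years(self):
        self.assertTrue(is_leap_year(2024))
        self.assertFalse(is_leap_year(2023))

    def test_century_edge_cases(self):
        # The specialty cases: these fail and expose the hallucinated bug.
        self.assertFalse(is_leap_year(1900))  # divisible by 100 but not 400
        self.assertTrue(is_leap_year(2000))   # divisible by 400

if __name__ == "__main__":
    unittest.main()
```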

It's a nice toy for someone wanting to make quick and dirty test code (maybe) to do thing X, and then try to find out: does this actually do what I asked, or does it have unforeseen behavior? Since I didn't write the code, I don't know what its behavior was designed to be. Good for toying around and maybe for quick and dirty brainstorming; not good enough for anything critical that has to be guaranteed to work under the promise of a service contract and so on.

So the real big job of the future will not be prompt engineers, but quality assurance and testing engineers, who have to be around to guard against hallucinating LLMs and similar AIs. Prompts can be gotten from anyone; what is harder is finding out whether the prompt actually produced what it was supposed to produce.
