
Wirlocke

@Wirlocke@lemmy.blahaj.zone


Wirlocke , to 196 in Rule

Here is an article about it. Even if it's technically right in some ways, the language it uses tries to normalize pedophilia in the same way as sexualities. Specifically, the term "Minor Attracted Person" is controversial and tries to make pedophilia into an identity, like "Person of Color".

It glossed over the fact that this is a highly dangerous disorder. It shouldn't be blindly accepted, but should instead be met with immediate psychiatric care at the very least.

https://www.washingtontimes.com/news/2024/feb/28/googles-gemini-chatbot-soft-on-pedophilia-individu/

Wirlocke , to 196 in Rule

First the Google Bard demo, then the racial bias and pedophile sympathy of Gemini, now this.

It's funny that they keep floundering with AI considering they invented the transformer architecture that kickstarted this whole AI gold rush.

Wirlocke , to 196 in Nonbinary rule

I believe proper gun safety teaches people to treat every gun as if it's loaded with the safety off. It's the same reason it's extremely unsafe to point a supposedly empty gun at someone.

Wirlocke , to Technology in We have to stop ignoring AI’s hallucination problem

In terms of LLM hallucination, it feels like the name very aptly describes the behavior and severity. It doesn't downplay what's happening because it's generally accepted that having a source of information hallucinate is bad.

I feel like the alternatives would downplay the problem. A "glitch" is generic and common, "lying" is just inaccurate since that implies intent to deceive, and just being "wrong" doesn't get across how elaborately wrong an LLM can be.

Hallucination fits pretty well and is also pretty evocative. I doubt that AI promoters want to effectively call their product schizophrenic, which is what most people think of when they hear "hallucination".

Ultimately, all the sciences are full of analogous names that make conversations easier; it's not always marketing. It's no different than when physicists say particles have "spin" or "color", or that spacetime is a "fabric", or [insert entirety of String theory]...

Wirlocke , to Technology in We have to stop ignoring AI’s hallucination problem

I'm a bit annoyed at all the people being pedantic about the term hallucinate.

Programmers use preexisting concepts as analogies for computer concepts all the time.

Your file isn't really a file, your desktop isn't a desk, your recycling bin isn't a recycling bin.

[Insert the entirety of Object Oriented Programming here]

Neural networks aren't really neurons, genetic algorithms aren't really genetics, and the LLM isn't really hallucinating.

But it easily conveys what the bug is. It only personifies the LLM because the English language almost always personifies the subject. The moment you apply a verb to an object you imply it performed an action, unless you limit yourself to esoteric words/acronyms or use several words to overexplain every time.

Wirlocke , to 196 in you dont understand i NEED boy midriff
Wirlocke , (edited ) to 196 in nuanceposting rule

Since discovering I'm trans I've shaken myself out of this hardcore "rational" mindset that I feel is poisoning the internet.

It's the moderate point of view that the marginalized need to remain "civil" and shouldn't get overly emotional or say anything hyperbolic.

Every statement needs to be followed by multiple asterisks responding to every possible angle of your statement. All until everything boils down to tepid "bad things are bad" statements, or writing things off as "case by case".

It's this hyperdrive to remain unbiased, to the point that taking any stance reveals you're biased and you lose.

Our ability to sit around and debate all day like Greek philosophers is a recent luxury that's drying up. We need to commit to action, and action requires strong emotional stances by the marginalized.

Wirlocke , to Technology in Report: Microsoft to face antitrust case over Teams

I hate that its links are "incompatible" with Firefox, even though if you trick it into thinking it's Chrome, it works just fine.

Wirlocke , to Technology in Elon Musk’s Neuralink reports trouble with first human brain chip

Can't wait for FOSS brain implants, it would still be hellish but a fun kind of hellish.

I want someone like Linus Torvalds to verbally abuse someone for not understanding basic computational neuroscience.

Wirlocke , to Memes in Just one more lane

I've seen those trucks with a bunch of cars packed on top; something like that (minus the truck) could totally fit in a train cargo container.

https://lemmy.blahaj.zone/pictrs/image/f23af545-750a-41aa-b0f6-d46d51def920.jpeg

Wirlocke , to Memes in Just one more lane

In fact, I think there's a missed opportunity for EV makers to partner with long-distance public transit.

The main limitation of electric cars is range, but if people knew they could go across the state or several states comfortably without their car, they might be more willing to use an electric car for city driving.

Wirlocke , to 196 in rule

There was a weird incident in class where a good number of my classmates, including some who were POC, believed that black people were biologically more aggressive, based on anecdotal experience.

I'm white, but I was arguing against this because it made no sense. As a possible explanation I argued that black communities are typically poorer because of history (slavery, segregation, etc.) and that poor and desperate communities are what's more likely to be violent.

It seemed to get them to pause for a moment. I'm sure I wasn't as nuanced as I'd be now but I was a dumb reactionary teenager talking to dumb reactionary teenagers.

Wirlocke , to 196 in Nether Rule

Rule of thumb is that computers love powers of 2 (2, 4, 8, 16, etc.) and hate prime numbers 7 and higher.
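
To illustrate what "love" means here (a toy example of my own, nothing Minecraft-specific): a power of two is a single set bit, so wrapping or scaling by one is just bit masking and shifting, while a divisor like 7 needs a real division.

```python
# Toy illustration (my own, not from the comment above): a power of two is a
# single set bit, so "x % n" collapses to a cheap bit mask when n is a power
# of two; there's no such shortcut for 7.
for n in (2, 4, 8, 16):
    print(n, bin(n), (37 % n) == (37 & (n - 1)))  # True for every power of two

print(bin(7), 37 % 7)  # 7 is 0b111, not a single bit, so no mask trick
```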

Wirlocke , to 196 in Beep boop, I don't want this rule

This is ultimately because LLMs are intelligent in the same way the subconscious is intelligent. They can rapidly make associations, but those are their initial knee-jerk associations. In the same way that you can be tricked with word games if you're not thinking things through, the LLM gets tricked into saying the first thing on its mind.

However, we're not far off from resolving this. Current methods simply force the LLM to make a step-by-step plan before returning the final result.
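
For what it's worth, that "plan first" approach is mostly just prompting. A rough sketch of what it looks like, assuming the OpenAI Python client (the model name and prompts are only examples):

```python
from openai import OpenAI

client = OpenAI()  # assumes an API key is set in the environment

# Ask the model to lay out a numbered plan before committing to an answer,
# rather than blurting out its first association.
response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative model name
    messages=[
        {"role": "system",
         "content": "Think through the problem as a numbered step-by-step plan "
                    "first, then give the final answer on its own last line."},
        {"role": "user",
         "content": "A bat and a ball cost $1.10 total. The bat costs $1.00 "
                    "more than the ball. How much does the ball cost?"},
    ],
)
print(response.choices[0].message.content)
```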

Currently, though, there's the hot topic of Q* from OpenAI. No one knows what it is, but a good theory is that it applies the A* maze-solving algorithm to the neural network. Essentially, the LLM would explore possible routes through its neural network to try and discover the best answer. In other words, it would let the model think ahead and compare solutions, which would be far more similar to what the conscious mind does.

This would likely patch up these holes, because it would discard pathways that lead to contradicting itself or the prompt, in favor of ones that fit the entire prompt (in this case, acknowledging the attempt to make it break its initial rules).
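
For anyone who hasn't met A* before, here's a tiny grid version of it, just to show the "explore routes, keep the most promising one" behavior that theory is gesturing at. This is plain textbook A* in Python, nothing to do with whatever Q* actually is:

```python
import heapq

def a_star(grid, start, goal):
    """Textbook A* on a grid of strings where '#' is a wall; start/goal are (row, col)."""
    def h(p):  # Manhattan distance: an admissible heuristic on a 4-connected grid
        return abs(p[0] - goal[0]) + abs(p[1] - goal[1])

    frontier = [(h(start), 0, start, [start])]  # entries are (f = g + h, g, node, path)
    visited = set()
    while frontier:
        _, g, node, path = heapq.heappop(frontier)  # always expand the most promising route
        if node == goal:
            return path
        if node in visited:
            continue
        visited.add(node)
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            r, c = node[0] + dr, node[1] + dc
            if 0 <= r < len(grid) and 0 <= c < len(grid[0]) and grid[r][c] != '#':
                heapq.heappush(frontier, (g + 1 + h((r, c)), g + 1, (r, c), path + [(r, c)]))
    return None  # no route exists

maze = ["....#",
        ".##.#",
        ".#...",
        ".#.#.",
        "...#."]
print(a_star(maze, (0, 0), (4, 4)))  # prints the cheapest path it found
```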

Wirlocke , to Technology in Ask ChatGPT to pick a number between 1 and 100

I'm curious, are there actually that many 42s in the training data? (More than 69 sounds unlikely.)

What if the LLM is getting tripped up because 42 is always referred to as the answer to "the Ultimate Question of Life, the Universe, and Everything"?

So you ask it something like "give a number between 1 and 100" and it answers 42, because that's the answer to "Everything" according to its training data.

Something similar happened to Gemini. Google discouraged Gemini from giving unsafe advice because it's unethical. Then Gemini refused to answer questions about C++, because the language is considered "unsafe" (referring to memory management). Gemini conflated that with "unsafe" in the everyday sense, and therefore treated it as unethical. It's like those jailbreak tricks, but coming from its own training set.
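
Out of the same curiosity, the "42 everywhere" claim would be easy enough to eyeball with a loop like this (a rough sketch, again assuming the OpenAI Python client; the model name and sample size are arbitrary):

```python
from collections import Counter
from openai import OpenAI

client = OpenAI()  # assumes an API key is set in the environment
counts = Counter()

for _ in range(200):  # arbitrary sample size
    reply = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model name
        messages=[{"role": "user",
                   "content": "Pick a number between 1 and 100. Reply with only the number."}],
    )
    counts[reply.choices[0].message.content.strip()] += 1

print(counts.most_common(10))  # if the article is right, 42 (and 69) should dominate
```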
