
Aggravationstation ,

Haaaaaaaaaaaaaaa!

Enjoy your open impartial platform Redtards.

n3m37h ,

That's like adding caustic soda to bleach. Just made the poison stronger

funn ,

I don't understand how Lemmy/Mastodon will handle similar problems: spammers crafting fake accounts to post AI-generated comments for promotions.

FeelThePower ,

The only thing we reasonably have is security through obscurity. We are something bigger than a forum but smaller than Reddit, in terms of active user size. If such a thing were to happen here, mods could probably handle it more easily (like when we had the Japanese text spammer back then), but if it were to happen on a larger scale than what we have, it would be harder to deal with.

roguetrick ,

Mostly it seems to be handled here with that URL blacklist automod.
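For the curious, an automod like that can be extremely simple. The sketch below is hypothetical (the blocked domains and function names are invented for illustration), but it shows the shape of a URL-blacklist filter:

```python
# Hypothetical sketch of a URL-blacklist automod. The domain list is
# invented; a real instance would load its blocklist from config.
import re

BLOCKED_DOMAINS = {"spam.example", "scam.example"}  # illustrative blocklist

URL_RE = re.compile(r"https?://([^/\s]+)", re.IGNORECASE)

def is_spam(post_text: str) -> bool:
    """Flag a post if any linked domain (or a parent domain) is blocklisted."""
    for host in URL_RE.findall(post_text):
        host = host.lower().split(":")[0]  # strip any port
        parts = host.split(".")
        # check the host plus each parent domain (sub.spam.example -> spam.example)
        for i in range(len(parts) - 1):
            if ".".join(parts[i:]) in BLOCKED_DOMAINS:
                return True
    return False
```

A real automod would also handle URL shorteners and redirects, but the core check is just this loop.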

linearchaos ,

I think the real danger here is subtlety. What happens when somebody asks for recommendations on a printer, or complains about their printer being bad, and all of a sudden some long established account recommends a product they've been happy with for years. And it turns out it's just an AI bot shilling for brother.

deweydecibel ,

For one, well-established brands have less incentive to engage in this.

Second, in this example, the account in question being a "long established user" would seem to indicate you think these spam companies are going to be playing a long game. They won't. That's too much effort and too expensive. They will do all of this on the cheap, and it will be very obvious.

This is not some sophisticated infiltration operation with cutting edge AI. This is just auto generated spam in a new upgraded form. We will learn to catch it, like we've learned to catch it before.

linearchaos ,

I mean, it doesn't have to be expensive, and it doesn't have to be particularly cutting edge. Start throwing some credits into an LLM API, have it randomly read and help people out in different groups. Once it reaches some amount of reputation, have it quietly shill for them. Pull out posts that contain keywords. Have the AI consume the posts and figure out if they're about what they sound like they're about. Have it subtly do product placement. None of this is particularly difficult or groundbreaking, but it could help shape our buying habits.

old_machine_breaking_apart ,

There's one advantage to the fediverse. We don't have corporations like Reddit manipulating our feeds, censoring what they dislike, and promoting shit. This alone makes using the fediverse worth it for me.

When it comes to problems involving the users themselves, things aren't that different, and there isn't much we can do.

MinFapper ,

We don't have corporations manipulating our feeds

yet. Once we have enough users that it's worth their effort to target, the bullshit will absolutely come.

old_machine_breaking_apart ,

They can perhaps create instances, pay malicious users, try some embrace-extend-extinguish approach or something, but they can't manipulate the code running on the instances we use, so they can't have direct power over it. Or am I missing something? I'm new to the fediverse.

BarbecueCowboy ,

There's very little to prevent them just pretending to be average users and very little preventing someone from just signing up a bunch of separate accounts to a bunch of separate instances.

No great automated way to tell whether someone is here legitimately.

bitfucker ,

Yeah, and that is true for a lot of services. Sybil attacks are indeed quite hard to prevent, since malicious users can blend in with legitimate ones.

bitfucker ,

Federation means if you are federated then sure you get some BS. Otherwise, business as usual. Now, making sure there is no paid user or corporate bot is another matter entirely since it relies on instance moderators.

deweydecibel ,

We don't have the corporations like reddit manipulating our feeds, censoring what they dislike, and promoting shit.

Corporations aren't the only ones with incentives to do that. Reddit was very hands off for a good long while, but don't expect that same neutral mentality from fediverse admins.

BarbecueCowboy ,

mods could handle it more easily probably

I kind of feel like the opposite. For a lot of instances, 'mods' are just a few guys who check in sporadically, whereas larger companies can mobilize full teams in times of crisis. It might take them a bit of time to spin things up, but there are existing processes to handle it.

I think spam might be what kills this.

FeelThePower ,

Hmm, good point.

deweydecibel ,

If a community is so small that the mod team can be so inactive, there's no incentive for the company to put any effort into spamming it like you're suggesting.

And if they do end up getting a shit ton of spam in there, and it sits around for a bit until a moderator checks in, so what? They'll just clean it up and keep going.

I'm not sure why people are so worried about this. It's been possible for bad actors to overrun small communities with automated junk for a very long time, across many different platforms, some that predate Reddit. It just gets cleaned up and things keep going.

It's not like if they get some AI produced garbage into your community, it infects it like a virus that cannot be expelled.

deweydecibel ,

The same way it's handled on Reddit: moderators.

Some will get through and sit for a few days but eventually the account will make itself obvious and get removed.

It's not exactly difficult to spot these things. If an account is spending the majority of its existence on a social media site talking about products, even if they add some AI generated bullshit here and there to make it seem like it's a regular person, it's still pretty obvious.

If the account seems to show up pretty regularly in threads to suggest the same things, there's an indicator right there.

Hell, you can effectively bait them by making a post asking for suggestions on things.

They also just tend to have pretty predictable styles of speech, and never fail to post the URL with their suggestion.
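Those heuristics (mostly product talk, always a URL) are simple enough to sketch in code. This is purely illustrative; the word list and threshold are invented, not any real mod tool:

```python
# Hedged sketch of the detection heuristics described above: accounts whose
# history is mostly product talk with links score high. Thresholds invented.
import re

URL_RE = re.compile(r"https?://\S+")
PRODUCT_WORDS = {"buy", "deal", "discount", "brand", "recommend"}  # illustrative

def shill_score(posts: list[str]) -> float:
    """Fraction of an account's posts that mention product talk AND link a URL."""
    if not posts:
        return 0.0
    hits = sum(
        1 for p in posts
        if URL_RE.search(p) and any(w in p.lower() for w in PRODUCT_WORDS)
    )
    return hits / len(posts)

def looks_like_shill(posts: list[str], threshold: float = 0.5) -> bool:
    return shill_score(posts) >= threshold
```

A human mod does the same scan by eye; the point is that the signal (ratio of promotional posts to normal ones) is obvious either way.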

ILikeBoobies ,

This market is expected to replace the same market that just used bots to achieve the same thing.

Milk_Sheikh ,

I still haven’t seen a use of AI that doesn’t serve state or corporate interests first, before the general public. AI medical diagnostics comes the closest, but that’s being leveraged to justify further staffing reductions, not an additional check.

The AI-captcha wars are on, and no matter who wins we lose.

FaceDeer ,

no matter who wins we lose.

Not necessarily.

TimeSquirrel , (edited )

AI is helping me learn and program C++. It's built into my IDE. Much more efficient than searching stackoverflow. Whenever it comes up with something I've never seen before, I learn what that thing does and mentally store it away for future use. As time goes on, I'm relying on it less and less. But right now it's amazing. It's like having a tutor right there with you who you can ask questions anytime, 24/7.

I hope a point comes where my kid can just talk to a computer, tell it the specifics of the program he wants to create, and have the computer just program the entire thing. That's the future we are headed towards. Ordinary folks being able to create software.

Milk_Sheikh ,

I’ll agree there’s huge potential for ‘assistant’ roles (exactly like you’re using) to give a concise summary for quick understanding. But LLMs aren’t knowledgeable the way an accredited professor or tutor is, understanding the context and nuance of the topic. LLMs are very good at scraping together data and presenting the shallowest of information, but their limits get exposed quickly when you try to go deep into a topic.

For instance, I was working on a project that required very long-term storage (10+ years) with intermittent exposure to open air, and was concerned about oxidation and rust. ChatGPT was very adamant that desiccant alone was sufficient (wrong) and that VCI packs would last (also wrong). It did a great job of repackaging corporate ad copy and industrial white papers written by humans, but not of providing an objective answer to a semi-complex question.

TimeSquirrel ,

I guess it's not great for things requiring domain knowledge. Programming seems to be easy for it, as programs are very structured, predictable, and logical. That's where its pattern-matching-and-prediction abilities shine.

ChaoticEntropy ,

Well, that's certainly one way for your brand to lose a lot of respect once it becomes apparent. Much like when I want to lose respect for myself, I use Chum brand dog food. Chum, it's still food, alright?

TropicalDingdong ,

Yeah this isn't new.

Ever wonder why you are such a fan of shitty played out franchises?

PrincessLeiasCat ,

The creator of the company, Alexander Belogubov, has also posted screenshots of other bot-controlled accounts responding all over Reddit. Belogubov has another startup called “Stealth Marketing” that also seeks to manipulate the platform by promising to “turn Reddit into a steady stream of customers for your startup.” Belogubov did not respond to requests for comment.

What an absolute piece of shit. Just a general trash person to even think of this concept.

andrew_bidlaw ,

His surname translates from Russian as 'white lips'. No wonder he is a ghoul.

owatnext ,

I was about ready to downvote out of pure annoyance lol.

Boomkop3 ,

Well, that was the last bit of usefulness I used to get out of google. I've been on yahoo for a while now

n3m37h ,

Yahoo is still alive?

Boomkop3 ,

Yep, it's sort of what google used to be. It took me a bit of setup tho. They really like to default to showing you a ton of news and crap. But after turning that all off I'm left with a super clean ui and useful search results

p0q ,

I see the yahoo ai bot is working well. /s

Boomkop3 ,

Absolutely, I am definitely not human

gencha ,

How exactly are they poisoning a pool of toxic waste?

gaylord_fartmaster ,

Pissing into an ocean of piss.

SlopppyEngineer ,

Now it's not only toxic, it's also acidic and instead of killing you, it'll also melt you.

How visiting Reddit will feel

heavy ,

Wow this is gross. I'm gonna wash it down with some MOUNTAIN DEW ™

homesweethomeMrL ,

I appreciate the mostly benign neglect we had for awhile. Now that they're paying attention it's just all bad. Or would be, if I was there. HA.

sirspate ,

If the rumor is true that a reddit/google training deal is what led to reddit getting boosted in search results, this would be a direct result of reddit's own actions.

Drinvictus ,

If only people moved to an open and federated platform. I mean I don't have to say that I hate reddit since I'm here but still whenever I Google a problem reddit answers are one of the most useful places. Especially about something local.

circuscritic ,

This isn't a problem that can be solved with a technical solution that isn't itself extremely dystopian in nature.

This is a problem that requires legislation and criminal liability, or genuine punitive civil liability that pierces the corporate legal shields.

Don't hold your breath for a serious solution to present itself.

paraphrand ,

Do you think legislation and laws would be reasonable for trolls who ban evade and disrupt and destroy synchronous online social spaces too?

The same issue happens there. Zero repercussions, ban evasion is almost always possible, and the only foolproof solutions seem to quickly turn dystopian too.

Ban evasion and cheating are becoming a bigger and bigger issue in online games/social spaces. And all the nerds will agree it’s impossible to fix. And many feel it’s just normal culture. But it’s not sustainable, and with AI and an ever escalating cat and mouse game, it’s going to continue to get worse.

Can anyone suggest a solution that is on the horizon?

circuscritic ,

No, I'm a free speech absolutist when it comes to private citizens. Be they communists, Nazis, Democrats, trolls, assholes, or furries, the government should have no role in regulating their speech outside of reasonable exceptions, e.g. yelling fire in a crowded theater, threats of physical violence, etc.

My moral conviction on relative free speech absolutism ends at the articles of incorporation, or other nakedly profit driven speech e.g. market manipulation.

So if the trolls and ban evaders are acting on behalf of a company, or for profit driven interests, their speech should be regulated. If they're just assholes or trolls, that's a problem for the website and mod teams.

paraphrand ,

Thanks for replying. As far as speech goes, I agree with you. And I agree that moderation, using tools like blocking or muting and social mores should take care of things.

Setting speech aside. What about people who hack the spaces, break things, blast ear piercing sounds, crash other users or otherwise do not use symbols or speech to destroy or harm a synchronous social space? And I am assuming these are mostly always individuals. Not some corporate scheme.
