I gave up reporting abuse on major sites. Stuff that, if you said it in public in front of witnesses, would get you investigated. Twitter was also bad for responding to reports with "this doesn't break our rules" when a) it clearly did and b) it probably broke a few laws too.
I gave up after I was told that people DMing me photographs of people committing suicide was not harassment, but me referencing Yo La Tengo's album "I Am Not Afraid Of You And I Will Beat Your Ass" was worth a 30-day ban.
On YouTube I had a persistent one who only stopped threatening to track me down and kill me (over a road safety video) when I posted the address of a local police station and said "pop in, any time!"
That's true, but a lot of things are illegal everywhere. Sexual harassment or death threats will get you sued in probably every single country in the world.
Are the platforms guilty or are the users that supplied the radicalized content guilty? Last I checked, most of the content on YouTube, Facebook and Reddit is not generated by the companies themselves.
most of the content on YouTube, Facebook and Reddit is not generated by the companies themselves
It's their job to block that content before it reaches an audience, but since that's how they make their money, they don't or won't do that. The monetization of evil is the problem, and those platforms are the biggest perpetrators.
It's their job to block that content before it reaches an audience
The problem is (or isn't, depending on your perspective) that it is NOT their job. Facebook, YouTube, and Reddit are private companies that have the right to develop and enforce their own community guidelines or terms of service, which dictate what type of content can be posted on their platforms. This includes blocking or removing content they deem harmful, objectionable, or radicalizing. While these platforms are protected under Section 230 of the Communications Decency Act (CDA), which provides immunity from liability for user-generated content, this protection does not extend to knowingly facilitating or encouraging illegal activities.
There isn't specific U.S. legislation requiring social media platforms like Facebook, YouTube, and Reddit to block radicalizing content. However, many countries, including the United Kingdom and Australia, have enacted laws that hold platforms accountable if they fail to remove extremist content. In the United States, there have been proposals to amend or repeal Section 230 of the CDA to make tech companies more responsible for moderating the content on their sites.
The argument could be made (and probably will be) that they promote those activities by allowing their algorithms to promote that content. It's a dangerous precedent to set, but not unlikely given the recent rulings.
Any precedent here regardless of outcome will have significant (and dangerous) impact, as the status quo is already causing significant harm.
For example, Meta/Facebook used to prioritize content that generates an angry-face reaction over content that gets a "like", as it results in more engagement and revenue.
However, the problem still exists. If you combat problematic content with a reply of your own (because you want to push back against hatred, misinformation, or disinformation), then they have even more incentive to show similar content. And they justify it by saying "if you engaged with content, then you've clearly indicated that you WANT to engage with content like that".
The financial incentives as they currently exist run counter to the public good
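The weighting described above can be sketched as a toy scoring function. This is purely illustrative: the 5x weight echoes reporting on Facebook's internal documents, but the function name, weights, and example posts are all made up.

```python
# Illustrative only: a feed score that weights emoji reactions (including
# "angry") more heavily than a plain like, so rage-bait outranks calm posts.
# The weights are made-up placeholders, not Meta's real values.
REACTION_WEIGHTS = {"like": 1, "angry": 5, "love": 5, "haha": 5}

def engagement_score(reactions):
    """Sum weighted reaction counts; unknown reaction kinds default to 1."""
    return sum(REACTION_WEIGHTS.get(kind, 1) * count
               for kind, count in reactions.items())

calm_post = {"like": 100}            # 100 * 1 = 100
rage_post = {"angry": 30, "like": 10}  # 30 * 5 + 10 * 1 = 160
print(engagement_score(calm_post), engagement_score(rage_post))  # 100 160
```

With a weighting like this, a post that angers 30 people outranks one that 100 people simply liked, which is the incentive problem the comment describes.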
Yeah, I have made that argument before. By pushing content via recommendation lists and autoplay, YouTube becomes a publisher and needs to be held accountable.
Not how it works. Also your use of "becomes a publisher" suggests to me that you are misinformed - as so many people are - that there is some sort of a publisher vs platform distinction in Section 230. There is not.
Oh no, I am aware of that distinction. I just think it needs to go away and be replaced.
Currently Section 230 treats websites as not responsible for user-generated content. For example, if I made a video defaming someone, I'd get sued, but YouTube would be in the clear. But if The New York Times publishes an article defaming someone, they get sued, not just the writer.
Why? Because the NYT published that article, while YouTube just hosts it. This publisher/platform distinction isn't stated in Section 230, but it is part of U.S. law.
This is frankly bizarre. I don't understand how you can even write that and reasonably think that the platform hosting the hypothetical defamation should have any liability there. Like this is actually a braindead take.
Repealing Section 230 would actually have the opposite effect, and lead to less moderation as it would incentivize not knowing about the content in the first place.
I can't see that. Not knowing about it would be an impossible position to maintain, since you would be getting reports. Now you might say they'll disable reports, which they might try, but they have to do business with other companies who will require that they keep them. Apple isn't going to let your social media app in the App Store if people are yelling at Apple about the child porn and bomb threats on it; AWS will kick you off as well; even Cloudflare might decide you're not worth the legal risk. This has already happened multiple times, even with Section 230 providing a lot of immunity to these companies. Without that immunity they would be even more likely to block.
Those sites determine what they promote. Such sites often promote extreme views as it gets people to watch or view the next thing. Facebook for instance researched this outcome, then ignored that knowledge.
I never liked that logic. It's basically "success has many fathers, but failure is an orphan" applied.
Are you involved with something immoral? The extent of your involvement is the extent of how immoral your actions are. Same goes for doing the right thing.
They're appealing the denial of the motion to dismiss, huh? I agree that this case really doesn't have legs, but I didn't know that was an interlocutory appeal they could take. They'd win at summary judgment regardless.
I don't understand the comments suggesting this is "guilty by proxy". These platforms have algorithms designed to keep you engaged and through their callousness, have allowed extremist content to remain visible.
Are we going to ignore all the anti-vaxxer groups who fueled vaccine hesitancy which resulted in long dead diseases making a resurgence?
To call Facebook anything less than complicit in the rise of extremist ideologies and conspiratorial beliefs, is extremely short-sighted.
"But Freedom of Speech!"
If that speech causes harm, like convincing a teenager that walking into a grocery store and gunning people down is a good idea, you don't deserve to have that speech. Sorry, you've violated the social contract, and those people's blood is on your hands.
This may seem baseless, but I have seen this over years of experience in online forums. You don't have to take it seriously, but maybe you can relate. We have seen time and time again that if there is no moderation, the shit floats to the top. The reason is that when people can't post something creative or fun, but they still want the attention, they will post negativity. It's the loud minority, but it's a very dedicated loud minority. Let's say we have 5 people and 4 of them are very creative and funny, but 1 of them complains all the time. If they all post to the same community, there is a very good chance that the one negative person will make a lot more posts than the 4 creative types.
Not just "remain visible" - actively promoted. There's a reason people talk about YouTube's right-wing content pipeline. If you start watching anything male-oriented, YouTube will slowly promote more and more right-wing content to you until you're watching Ben Shapiro and Andrew Tate.
YouTube is really bad about trying to show you right wing crap. It's overwhelming. The shorts are even worse. Every few minutes there's some new suggestion for some stuff that is way out of the norm.
TikTok doesn't have this problem, and it's the one being attacked by politicians?
I got into painting mini Warhammer 40k figurines during covid, and thought the lore was pretty interesting.
Every time I watch a video, my suggested feed goes from videos related to my hobbies to entirely replaced with red pill garbage. The right wing channels have to be highly profitable to YouTube to funnel people into, just an endless tornado of rage and constant viewing.
The algorithm is, after all, optimized for nothing other than ads served per unit of time. So long as the algorithm believes a video suggestion will keep you on the site for a minute more, it will suggest it. I occasionally wonder about the implications of one topic leading to another. Is WH40k feeding the pipeline by demographics alone, or something more?
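A minimal sketch of that objective, entirely hypothetical: the titles, numbers, and function names below are invented for illustration, not YouTube's actual system. The point is that a ranker scoring only on predicted watch time surfaces whatever keeps you watching, with no notion of topic or quality.

```python
# Toy recommender whose ONLY objective is predicted watch time.
from dataclasses import dataclass

@dataclass
class Video:
    title: str
    predicted_watch_seconds: float  # model's guess for this user

def rank_suggestions(candidates, top_n=3):
    """Rank purely by predicted watch time -- no content-quality signal."""
    return sorted(candidates,
                  key=lambda v: v.predicted_watch_seconds,
                  reverse=True)[:top_n]

candidates = [
    Video("WH40k painting tutorial", 240.0),
    Video("Lore deep-dive", 300.0),
    Video("Outrage rant (high engagement)", 480.0),
]
print(rank_suggestions(candidates, top_n=1)[0].title)
```

Under this objective, the rage content wins the top slot purely because its predicted watch time is highest, which is the "one minute more" dynamic described above.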
Irritation at suggestions was actually what originally led me to invidious. I just wanted to watch a speech without hitting the “____ GETS DUNKED ON LIKE A TINY LITTLE BITCH” zone. Fuck me for trying to verify information.
One thing to consider is that conservatives are likely paying for progressives to see their content, and geeks tend to have liberal views and follow the harm principle without many conditions.
Otherwise, it really shows the demographics of the people who play Warhammer. Before my sister transitioned, she played Warhammer and was a socialist but had a lot of really wehraboo interests. She has been talking about getting back into it, but she passes really well and imagines how it would go with the neckbeards.
YouTube will actually take action and has done in most instances. I won't say they're the fastest but they do kick people off the platform if they deem them high risk.
If that speech causes harm like convincing a teenager walking into a grocery store and gunning people down is a good idea, you don't deserve to have that speech.
In Germany we have a very good rule for this (it's not written down, but it's something you can usually count on): your freedom ends where it violates the freedom of others. For example: everyone has the right to live a healthy life, and everyone has the right to walk wherever they want. If I use my right to walk wherever I want to cause a car accident in which people get hurt (and it was entirely my fault), my freedom violated the injured person's right to live a healthy life. That's not freedom.
In Canada, they have an idea called "right to peace". It means that you can't stand outside of an abortion clinic and scream at people because your right to free speech doesn't exceed that person's right to peace.
I don't know if that's 100% how it works so someone can sort me out, but I kind of liked that idea
Can we stop letting the actions of a few bad people be used to curtail our freedom on platforms we all use?
I don't want the internet to end up being policed by corporate AIs and poorly implemented bots (looking at you auto-mod).
The internet is already a husk of what it used to be, of what it could be. It used to be personal, customisable... dare I say it: messy and human...
.... maybe that was serving a need that people now feel alienated from. Now we live as corporate avatars who risk being banned every time we comment anywhere.
Facebook and others actively promote harmful content because they know it drives interactions, I believe it's possible to punish corps without making the internet overly policed.
I agree with you in spirit. The most common sentiment I see among the comments is not to limit what people can share but how actively platforms move people down rabbit holes. If there is not action on the part of the platforms to correct for this, they risk regulation which in turn puts freedom of speech at risk.
Facebook will have actively pushed this stuff. Reddit will have just ignored it, and YouTube just feeds your own bubble back to you.
YouTube doesn't radicalize people; it only amplifies their existing radicalization. The process must start elsewhere. And to be completely fair, they do put warnings and links to further information at the bottom of questionable videos, and they delist quite a lot of stuff as well.
I don't know what's better: to completely block conspiracy theory videos, or to allow them and then have other people mock them.
Well, I don't know who that is, which is my point really. I'm assuming he's some right-wing conspiracy theorist, but because I'm not already predisposed to listen to that kind of stuff, I don't get it in my recommendations.
Meanwhile Facebook would actively promote that stuff.
Yeah, I feel like people are missing my point: I don't know who it is, and I don't get recommended his content.
The only people who get recommended his content are people who are already going to be thinking along those lines and watching videos along those lines.
YouTube does not radicalize people; they do it to themselves.
Why do you believe "the process must start elsewhere"? I've literally had YouTube start feeding me this sort of content, which I have no interest in at all and actively try to avoid. It seems very obvious that YouTube is a major factor in inculcating these belief systems in people who would otherwise not be exposed to them without YouTube ensuring they reach an audience.
Completely different cases, questionable comparison.
Social media are the biggest cultural industry at the moment, albeit a silent and unnoticed one. Cultural industries like this are means of propaganda, information, and socialization, all of which is impactful and heavily personalised to everyone's opinions.
Thus the role of such an impactful business is huge: it can move opinions and whole movements, and the choices people make are driven by their media consumption and the communities they take part in.
In other words, policy, algorithms, and GUI are all factors that drive users to engage in specific ways with harmful content.
I wish you guys would stop making me defend corporations. Doesn't matter how big they are, doesn't matter their influence, claiming that they are responsible for someone breaking the law because someone else wrote something that set them off and they, as overlords, didn't swoop in to stop it is batshit.
Since you don't like those comparisons, I'll do one better. This is akin to a man shoving someone over a railing and trying to hold the landowners responsible for not having built a taller railing or more gradual drop.
You completely fucking ignore the fact that what would otherwise be a completely safe platform only became harmful because another party found a way to make it so.
policy and algorithms are factors that drive users to engage
Yes. Engage. Not in harmful content specifically, that content just so happens to be the content humans react to the strongest. If talking about fields of flowers drove more engagement, we'd never stop seeing shit about flowers. It's not them maliciously pushing it, it's the collective society that's fucked.
The solution is exactly what it has always been. Stop fucking using the sites if they make you feel bad.
Again, there's no such thing as a neutral space or platform. Case in point: reddit, with its gated communities and its lack of control over what people do with the platform, is in fact creating safe spaces for these kinds of things. This may not be intentional, but it ultimately leads toward the radicalization of many people. It's a design choice, followed by the internal policy of the admins, who can decide to let these communities exist on one of the mainstream websites. If you're unsure about what to think, delving deep into these subreddits has the effect of radicalising you, whereas in a normal space you wouldn't be able to do it as easily. Since this counts as engagement, reddit can suggest similar forums, leading via algorithms down a path of radicalisation. This is why a site that claims to be neutral isn't truly neutral.
This is an example of the alt-right pipeline that reddit successfully mastered:
The alt-right pipeline (also called the alt-right rabbit hole) is a proposed conceptual model regarding internet radicalization toward the alt-right movement. It describes a phenomenon in which consuming provocative right-wing political content, such as antifeminist or anti-SJW ideas, gradually increases exposure to the alt-right or similar far-right politics. It posits that this interaction takes place due to the interconnected nature of political commentators and online communities, allowing members of one audience or community to discover more extreme groups (*https://en.wikipedia.org/wiki/Alt-right_pipeline*)
And yet you keep comparing cultural and media consumption to physical infrastructure, which is regulated precisely to prevent what you mentioned, like unsafe management of the terrain. So, taking your examples as you intended, you may just prove that regulations can in fact exist and that private companies or citizens are supposed to follow them. Since social media started using personalisation and predictive algorithms, they also behave as editors, handling and selecting the content that users see. Why would they not be partly responsible, based on your own argument?
They can suggest similar [communities] so it can't be neutral
My guy, what? If all you did was look at cat pictures you'd get communities to share fucking cat pictures. These sites aren't to blame for "radicalizing" people into sharing cat pictures any more than they are to actually harmful communities. By your logic, lemmy can also radicalize people. I see anarchist bullshit all the time, had to block those communities and curate my own experience. I took responsibility and instead of engaging with every post that pissed me off, removed that content or avoided it. Should the instance I'm on be responsible for not defederating radical instances? Should these communities be made to pay for radicalizing others?
Fuck no. People are not victims because of the content they're exposed to, they choose to allow themselves to become radical. This isn't a "I woke up and I really think Hitler had a point." situation, it's a gradual decline that isn't going to be fixed by censoring or obscuring extreme content. Companies already try to deal with the flagrant forms of it but holding them to account for all of it is truly and completely stupid.
Nobody should be responsible because cat pictures radicalized you into becoming a furry. That's on you. The content changed you and the platform suggesting that content is not malicious nor should it be held to account for that.
As neutral platforms that will push cat pictures as readily as far-right extremism, where the only difference is how much the user personally engages with it?
Whatever you say, CopHater69. You're definitely not extremely childish and radical.
I doubt you could engineer a plug into your own asshole but sure, I'll take your word that you're not just lying and have expert knowledge on this field yet still refused to engage with the point to sling insults instead.
I've literally watched friends of mine descend into far-right thinking, and I can point to the moment when algorithms started suggesting content that put them down a "rabbit hole".
Like, you're not wrong, they were right-wing initially, but they became the "lmao I'm an unironic fascist and you should be pilled like me" variety over a period of six months or so. Started stockpiling guns, etc.
This phenomenon is so commonly reported that it makes you start to wonder where all these people who decided to "radicalize themselves" all at once came from in such droves.
Additionally, these companies are responsible for their content-serving algorithms. If those algorithms didn't matter for shaping users' thinking, why would nation-state propaganda efforts work to get their narratives and interests appearing within them? Did we forget the spawning and ensuing fallout of the Arab Spring?
Please let me know if you want me to testify that reddit actively protected white supremacist communities and even banned users who engaged in direct activism against these communities
I have noticed a massive drop in the quality of posting in Reddit over the last year. It was on a decline, but there was a massive drop off.
It’s anecdotal to what I have read off Lemmy, but a lot of high Karma accounts have been nuked due to mods and admins being ridiculously over zealous in handing out permabans.
How people who supposedly care about children's safety are willing to ignore science and instead raise a hue and cry about bullshit they perceive (or are told by their favourite TV personality to perceive) as evil.
Have you got it now? Or should I explain it further?
Didn't expect Lemmy to have people who lack reading comprehension.
People don't appreciate having spurious claims attached to their legitimate claims, even in jest. It invokes the idea that since the previous targets of blame were false that these likely are as well.
They're all external factors. Music and video games have been (wrongly, imo) blamed in the past. Media, especially nowadays, is probably more "blameable" than music and games, but I still think it's BS to use external factors as an excuse to justify mass shootings.
Sweet, I'm sure this won't be used by AIPAC to sue all the tech companies for causing October 7th somehow, like UNRWA, and force them to shut down or suppress all talk of Palestine. People hearing about a genocide happening might radicalize them; maybe we could get away with allowing discussion, but better safe than sorry, to the banned words list it goes.
This isn't going to end in the tech companies hiring a team of skilled moderators who understand the nuance between passion and radical intention trying to preserve a safe space for political discussion, that costs money. This is going to end up with a dictionary of banned and suppressed words.
It's already out there. For example you can't use the words "Suicide" or "rape" or "murder" in YouTube, TikTok etc. even when the discussion is clearly about trying to educate people. Heck, you can't even mention Onlyfans on Twitch...