

theluddite

@theluddite@lemmy.ml

I write about technology at theluddite.org


theluddite ,

Oh hey I wrote that lol.

Not all protests for Gaza were meant to gain engagement; many were organized to cause direct economic disruption to those who profit from the war. That is a goal.

I totally agree with you. I should've been more careful in the text to distinguish between those two very different kinds of actions. I really, really like things that disrupt those who profit, but those are not nearly as common as going to the local park or whatever. I might throw in a footnote to clarify.

theluddite ,

That's kind of a weird critique, because it's actually consistent with the book. He spends a lot of time talking about how wildly different every person's interpretation of the event is, and that's kind of the problem. It's part of why these movements are illegible to power. He's very clear that this is his interpretation, based on his own contacts, experience, and extensive research, but that it's not going to be the same as everyone else's.

The same is true of the moniker. Whether or not the people on the ground felt that way about it, that story, fabricated without input from those on the ground, is what ended up creating meaning out of the movement, at least insofar as power is concerned. That's the core thesis of the book: the problem with that wave of protests was that they were not able to assert their own meaning over their actions. The meaning was created for them by people like the western media, and they weren't able to organize their own narrative, choose their own representatives, etc.

edit to add: IIRC, he even specifically discusses how the different people in the core group of Brazilian organizers disagree on what happened.

theluddite , (edited )

I don’t think it’s weird to critique someone for having a wildly different interpretation of what happened than multiple others who were directly involved, and then taking this very peculiar subjective interpretation to make sweeping (and IMHO wrong) conclusions about what we should learn from it.

It is, because that's literally what the book is about. The book addresses that very phenomenon as its core thesis. That's exactly what he's talking about when he says that the protests are illegible. If someone says "people disagree a lot about what happened, and that's a problem," responding with "I disagree about what happened" isn't really engaging with the argument.

My impression is that Bevins started out with a preconceived notion and then kinda made up a retrospective narrative of these protests to fit it.

I'm sorry but I don't think that anyone who has actually read the book in good faith can come to that conclusion.

edit: added more explanation

theluddite ,

Yeah, again, I take pretty strong issue with your characterization of Bevins's stance. Have you actually read the book? I think that this is an interesting and worthwhile discussion, but I also don't want to go in circles if you haven't...

When he says that they're illegible to state power, he doesn't mean that they want to appeal to the people currently in power (and maybe this is a conflation that I accidentally invite in my own write-up). He means that they cannot participate in state power as an institutional apparatus, be it as reformists or revolutionaries.

I get what you're saying, and I agree with a lot of it (but not all of it), but you're just not responding to an argument that Bevins makes, at least in how I read him. You are responding to one that many in western media did in fact make, and I agree with you in that context, but that was just not my reading of Bevins at all.

theluddite ,

I once again disagree with your characterization of the book.

You realize how funny it is that you post this in an Anarchist community?

That's stupid. Anarchist revolutionary theory and historical practice are full of ideas that are perfectly compatible with this analysis, even if Bevins himself is clearly not an anarchist. There is no more legible act to the state than organized violence, for example.

I'm not sure why you've taken this unpleasant posture towards me. I'm genuinely here for a discussion, but this is my last response if you keep acting like I'm some sort of uncultured idiot that needs you "to start from the basics 😒"

theluddite ,

To be clear, I wasn't advocating for organized violence as a good tactic. I was just picking a simple example.

I still think that Bevins's history and analysis have merit, even if you disagree with his conclusions. I've read at least two books by anarchists that put forth similar concepts of legibility: Graeber's "The Utopia of Rules" and James Scott's "Seeing Like a State" (which I actually read to write this post and have a bajillion opinions about, but that's a post for another day). Regardless of your stance on whether your movement should or shouldn't be legible, you have to understand legibility, both to the state and to other capitalist powers like, say, social media (to pick one at random 😉).

theluddite ,

Your comment perfectly encapsulates one of the central contradictions in modern journalism. You explain the style guide, and the need to communicate information in a consistent way, but then explain that the style guide is itself guided by business interests, not by some search for truth, clarity, or meaning.

I've been a long-time reader of FAIR.org, and I highly recommend them to anyone in this thread who can tell that something is up with journalism but has never done a dive into what exactly it is. Modern journalism has a very clear ideology (in the sorta Žižek sense; I'm not claiming that journalists do it nefariously). Once you learn to see it, it's everywhere.

theluddite ,

The purpose of a system is what it does

According to the cybernetician, the purpose of a system is what it does. This is a basic dictum. It stands for bald fact, which makes a better starting point in seeking understanding than the familiar attributions of good intention, prejudices about expectations, moral judgment, or sheer ignorance of circumstances.

The AI is "supposed" to identify targets, but in reality, the system's purpose is to justify indiscriminate murder.

theluddite , (edited )

Props to her, and this is intended as a friendly comment between people on the same side, but I think this can be dangerous.

Chomsky famously noted that brevity is inherently conservative, and that's a pretty profound observation. Any time that you are brief to an audience that doesn't have much context, your message is going to pick up conservative baggage. Just imagine debating someone on how American imperialism is bad in front of a crowd that has never questioned the USA as the bastion of freedom and democracy in the world. Your opponent just has to say "freedom" and "support the troops" and "9/11" as pre-canned concepts with a lot of power and imagery, whereas you're going to have to spend a ton of words unpacking all that. Any time that you say freedom, you're going to have to explain what you mean, or the audience will interpret it as the canned American concept of Freedom(tm). This is something that the 19th- and early-20th-century anarchists and communists understood intuitively and talked about quite a lot, even if they didn't articulate it quite as succinctly (lol) as Chomsky did. It's everywhere in their revolutionary theories.

So, while I do think that it's important to create effective and engaging short-form agitation and propaganda materials, they should be part of a larger messaging apparatus that leads you to some sort of more profound relationship with politics. Getting the entirety of your politics from short form video will necessarily lead to a shallow and mostly aesthetic understanding of politics, easily exploitable by reactionaries. It's how you end up with the Red Scare podcast, or MAGA communism, or any of these other aesthetically pseudo-leftist but actually deeply conservative discombobulated ideologies.

edit: also meant to say that it was not a great interview lol.

theluddite , (edited )

I completely and totally agree with the article that the attention economy in its current manifestation is in crisis, but I'm much less sanguine about the outcomes. The problem with the theory presented here, to me, is that it's missing a theory of power. The attention economy isn't an accident, but the result of the inherently political nature of society. Humans, being social animals, gain power by convincing other people of things. From David Graeber (who I'm always quoting lol):

Politics, after all, is the art of persuasion; the political is that dimension of social life in which things really do become true if enough people believe them. The problem is that in order to play the game effectively, one can never acknowledge this: it may be true that, if I could convince everyone in the world that I was the King of France, I would in fact become the King of France; but it would never work if I were to admit that this was the only basis of my claim.

In other words, just because algorithmic social media becomes uninteresting doesn't mean the death of the attention economy as such, because the attention economy is, in some form, innate to humanity. Today it's algorithmic feeds, but 500 years ago it was royal ownership of printing presses.

I think we already see the beginnings of the next round. As an example, the YouTuber Veritasium has been doing educational videos about science for over a decade, and he's by and large good and reliable. Recently, he did a video about self-driving cars, sponsored by Waymo, which was full of (what I'll charitably call) problematic claims that were clearly written by Waymo, as fellow YouTuber Tom Nicholas pointed out. Veritasium is a human that makes good videos. People follow him directly, bypassing algorithmic shenanigans, but Waymo was able to leverage its resources to get into that trusted, no-algorithm space. We live in a society that commodifies everything, and as human-made content becomes rarer, more people like Veritasium will be presented with more and increasingly lucrative opportunities to sell bits and pieces of their authenticity for manufactured content (be it by AI or a marketing team), while new people that could be like Veritasium will be drowned out by the heaps of bullshit clogging up the web.

This has an analogy in our physical world. As more and more of our physical world looks the same, as a result of the homogenizing forces of capital (office parks, suburbia, generic blocky buildings, etc.), the fewer remaining parts that are special, like, say, Venice, become too valuable for their own survival. They become "touristy," which is itself a sort of ironically homogenized, commodified authenticity.

edit: oops I got Tom's name wrong lol fixed

theluddite ,

Haha I was actually paraphrasing myself from last year, but I've seen that because lots of readers sent me that article when it came out a few months later, for obvious reasons!

theluddite ,

Vermont has several towns with as few as a thousand people that have fiber internet thanks to municipal cooperatives like ECFiber. Much of the state is a connectivity wasteland, but it's really cool to see some towns working together to sort it out.

theluddite ,

I will always upvote Astra Taylor, and everyone with debt should join the Debt Collective!

theluddite ,

I cannot handle the fucking irony of that article being in Nature, one of the organizations most responsible for fucking it up in the first place. Nature is a peer-reviewed journal that charges people thousands upon thousands of dollars to publish (that's right, charges, not pays), asks peer reviewers to volunteer their time, and then charges the very institutions that produced the knowledge rent to access it. It's all upside. Because they're the most prestigious journal (or maybe one of two or three), they can charge rent on that prestige, then leverage it to buy and start other subsidiary journals. Now they have this beast of an academic publishing empire that is a complete fucking mess.

theluddite ,

Yeah, it's grotesque. Doubly so when you consider that it's often public money that funds the research that they get to paywall. I've been really ragging on them lately for their role in the AI hype, too, which you can read about here and here if that sort of thing interests you.

theluddite ,

I'm suspicious of this concept of editorial independence. I think it's a smoke screen that lets companies have their cake and eat it too. As far as I'm concerned, whoever cashes the checks also gets the blame: either ownership means something, in which case the concept exists to obfuscate that, or it doesn't, in which case why is Nature buying up other journals?

Google suspends Gemini from making AI images of people after a backlash complaining it was 'woke' (www.businessinsider.com)

After users complained Google's Gemini had gone "woke," the company said it will pause the image-generating feature of people while working on fixes.

theluddite ,

My two cents, but the problem here isn't that the images are too woke. It's that the images are a perfect metaphor for corporate DEI initiatives in general. Corporations like Google are literally unjust power structures, and when they do DEI, they update the aesthetics of the corporation such that they can get credit for being inclusive but without addressing the problem itself. Why would they when, in a very real way, they themselves are the problem?

These models are trained on past data and will therefore replicate its injustices. This is a core structural problem. Google is trying to profit off generative AI while not getting blamed for these baked-in problems by updating the aesthetics. The results are predictably fucking stupid.

theluddite ,

I just wanted to point out why I think that people are reacting to it the way that they are, not necessarily because I want anything else from Google (other than their dissolution as an illegal monopoly). Personally, I think the entire AI hype is absurd and tedious.

theluddite ,

This has been ramping up for years. The first time that I was asked to do "homework" for an interview was probably in 2014 or so. Since then, it's gone from "make a quick prototype" to assignments that clearly take several full work days. The last time I job hunted, I'd politely accept the assignment and ask them if $120/hr is an acceptable rate, and if so, I can send over the contract and we can get started ASAP! If not, I refer them to my thousands upon thousands of lines of open source code.

My experience with these interactions is not that they're looking for the most qualified applicants, but that they're filtering for compliant workers who will unquestioningly accept the conditions offered in exchange for generally lucrative salaries. Those are the employees they need in order to keep up their internal corporate identity of being the good guys as tech goes from universally beloved to generally reviled.

theluddite , (edited )

I have worked at two different start ups where the boss explicitly didn't want to hire anyone with kids and had to be informed that there are laws about that, so yes, definitely anti-parent. One of them also kept saying that they only wanted employees like our autistic coworker when we asked him why he had spent weeks rejecting every interviewee that we had liked. Don't even get me started on people that the CEO wouldn't have a beer with, and how often they just so happen to be women or foreigners! Just gross shit all around.

It's very clear when you work closely with founders that they see their businesses as a moral good in the world, and as a result, they have a lot of entitlement about their relationship with labor. They view laws about it as inconveniences on their moral imperative to grow the startup.

theluddite ,

I've posted this here before, but this phenomenon isn't unique to dating apps, though dating apps are a particularly good example. The problem is that capitalism uses computers backwards.

theluddite ,

I've had similar experiences to what troyunrau@lemmy.ca describes. The problem comes more from the expectations that users have as consumers, which they bring with them to open source projects from general culture, not necessarily the existence of the users themselves. Some of those users for big open source projects are often corporations, to boot.

theluddite ,

Maybe this is a hot take, but it's really unfortunate that only the unhinged conservative lunatics are willing to have this discussion. I actually think that it'd be really healthy in a democracy to come together and exercise some agency in how we allow tech companies to access our children, if at all, but American liberals seem committed to some very broken notions of technocratic progress paired with free speech, while American conservatives are happy to throw all that away in order to have total control over their children, arriving closer to the right place for very dangerous reasons.

Poisoned AI went rogue during training and couldn't be taught to behave again in 'legitimately scary' study (www.livescience.com)

AI researchers found that widely used safety training techniques failed to remove malicious behavior from large language models — and one technique even backfired, teaching the AI to recognize its triggers and better...

theluddite ,

[…] AI systems in the future, since it helps us understand how difficult they might be to deal with," lead author Evan Hubinger, an artificial general intelligence safety research scientist at Anthropic, an AI research company, told Live Science in an email.

The media needs to stop falling for this. This is a "pre-print," aka a non-peer-reviewed paper, published by the AI company itself. These companies are quickly learning that, with the AI hype, they can get free marketing by pretending to do "research" on their own product. It doesn't matter what the conclusion is, whether it's very cool and going to save us or very scary and we should all be afraid, so long as it's attention-grabbing.

If the media wants to report on it, fine, but don't legitimize it by pretending that it's "researchers" when it's the company itself. The point of journalism is to speak truth to power, not regurgitate what the powerful say.

theluddite ,

It's very obviously media bait, and Keumars Afifi-Sabet, a self-described journalist, is the most gullible fucking idiot imaginable and gobbled it up without a hint of suspicion. Joke is on us though, because it probably gets hella clicks.

theluddite ,

When you’re creating something new, production is research. We can’t expect Dr. Frankenstein to be unbiased, but that doesn’t mean he doesn’t have insights worth knowing.

Yes and no. It's the same word, but it's a different thing. I do R&D for a living. When you're doing R&D and you want to communicate your results, you write something like a whitepaper or a report, but not a journal article. It's not a perfect distinction, and there are some real places where there's bleed-through, but this thing where companies have decided that their employees are just regular scientists publishing their internal research on arXiv is an abuse of that service.

LLMs are pretty new; how many experts even exist outside of the industry?

... a lot, actually? I happen to be married to one. Her lab is at a university, where there are many other people who are also experts.

theluddite ,

I'm deeply concerned that as a society we're becoming unable to distinguish between science, aka the search for knowledge, and corporate product development. More concerning still is the blurred distinction between a scientific paper, which exists to communicate experimental findings such that they can be reproduced, and what is functionally advertising of proprietary products masquerading as such. No one can reproduce that "paper" cited there, because it's being done in-house at a company. That's antithetical to science.

theluddite ,

It's probably either waiting for approval to sell ads or was denied and they're adding more stuff. Google has a virtual monopoly on ads, and their approval process can take 1-2 weeks. Google's content policy basically demands that your site be full of generated trash to sell ads. I did a case study here, in which Google denied my popular and useful website for ads until I filled it with the lowest-quality generated trash imaginable. That might help clarify what's up.

theluddite ,

Dates could be made up, too. The blog posts that I generated for my site included made-up dates in the past. The Internet Archive says it has a snapshot from March of 2023, but when I click it, it says it doesn't, so I have no way of verifying. The theory about parking real estate hoping to sell it also seems pretty plausible to me. Who knows what dumb shit they're up to.

theluddite ,

My editor is an actual saint. Imagine all the shit that she has to put up with that gets cut if that made it through!

"The Airbnb-ification of the arts." How social media algorithms are gently nudging the art world towards sterility, comfort, and predictability (www.staygrounded.online)

This is an essay I wrote in 2022, inspired by Kyle Chayka's 2016 viral essay, "Welcome to AirSpace". After seeing an excerpt from Kyle's new book on the front of /c/Technology, I thought y'all might be interested in reading this piece of mine, which is less about the design of physical spaces, and more about The Algorithm™'s...

theluddite ,

I actually think that this is part of a larger phenomenon. It's something that Adorno and Horkheimer identified all the way back in the 1940s (in "Dialectic of Enlightenment," especially in the chapter "The Culture Industry") that is now greatly accelerating because of computers. The result is what I call The Tyranny of Data. The essay isn't that long and most of the length comes from examples, but I'll try to do a super quick tl;dr of my argument. Here are some Adorno and Horkheimer quotes that I cite:

For enlightenment, anything which does not conform to the standard of calculability and utility must be viewed with suspicion.

and

Bourgeois society is ruled by equivalence. It makes dissimilar things comparable by reducing them to abstract quantities. For the Enlightenment, anything which cannot be resolved into numbers, and ultimately into one, is illusion[.]

Basically, modern society culturally values arguments presented in numbers, especially when expressed in units of currency. I argue that now that we have computers, aka machines capable of turning everything into numbers, we can collapse everything into units of currency with ease. This is a homogenizing and conservative (as in change-averse) force (quoting myself):

You can measure how people feel about another Marvel movie, or a politician they already know, or whether they prefer this version or that version of a product. It's much harder to measure interest in a brand new movie idea, or an unknown politician, or a radically new invention. The bigger the change, the harder it is to measure.

Because it's so easy to turn things into numbers now, and because we culturally value data-based arguments as superior to other kinds, like moral or ideological, our collective ability to think in other ways is atrophying. As a result, we struggle to take the necessarily irrational risks that we need to take to make real progress, be it social progress, artistic progress, or whatever.

I go through a bunch of examples, like Joe Biden, who I call "a statistically generated median in corporeal form. He's literally a franchise reboot, the single most derivative but fiscally sound cultural product." I specifically talk about digital media too:

When deciding how much to value websites or podcasts or any other online media, we simply add up the number of downloads. No one actually thinks that's a good way to decide the value of art, writing, journalism, story-telling, lascivious true crime blogs, or reality TV rewatch podcasts. It's just the first number that fell out of a computer. Just like that, a complex social situation was transmuted into a number.

theluddite ,

Yes absolutely! Debord comes up a lot on my blog too. I fucking love the Situationists. A lot of these theorists that lived through the earlier days of mass media saw it with such clarity for exactly what it is in a way that those of us born later I think would struggle to see were it not for their writing, not that we bothered to heed their warnings.

theluddite ,

Totally agreed. In fact, I've written about almost exactly that.

theluddite ,

Couldn't agree more! We shouldn't outsource planning the world that we want to make to oversimplified heuristics, including "whatever is cheapest."

‘The tide has turned’: why parents are suing US social media firms after their children’s death (www.theguardian.com)

While social media firms have long faced scrutiny from Congress and civil rights organizations over their impact on young users, the new wave of lawsuits underscores how parents are increasingly leading the charge, said Jim Steyer, an attorney and founder of Common Sense Media, a non-profit that advocates for children’s online...

theluddite ,

Whenever one of these stories comes up, there's always a lot of discussion about whether these suits are reasonable or fair or whether it's really legally the companies' fault and so on. If that's your inclination, I propose that you consider it from the other side: Big companies use every tool in their arsenal to get what they want, regardless of whether it's right or fair or good. If we want to take them on, we have to do the same. We call it a justice system, but in reality it's just a fight over who gets to wield the state's monopoly on violence to coerce other people into doing what they want, and any notions of justice or fairness are window dressing. That's how power actually works. It doesn't care about good faith vs bad faith arguments, and we can't limit ourselves to only using our institutions within their veneer of rule of law when taking on powerful, exclusively self-interested, and completely antisocial institutions with no such scruples.

theluddite ,

When the writer Ryan Broderick joined Substack in 2020, it felt, he told me, like an “oasis.” The email-newsletter platform gave him a direct line to his readers.

Everyone is going to be so pumped when they learn about websites. The media has reported on Substack this way since it began, and it's so fucking stupid. It's a website with an email list as a service. Substack is nothing.

theluddite ,

It's not a solution, but as a mitigation, I'm trying to push the idea of an internet right of way into the public consciousness. Here's the thesis statement from my write-up:

I propose that if a company wants to grow by allowing open access to its services to the public, then that access should create a legal right of way. Any features that were open to users cannot then be closed off so long as the company remains operational. We need an Internet Rights of Way Act, which enforces digital footpaths. Companies shouldn't be allowed to create little paths into their sites, only to delete them, forcing guests to pay if they wish to maintain access to the networks that they built, the posts that they wrote, or whatever else it is that they were doing there.

As I explain in the link, rights of way already exist for the physical world, so the concept is easily explained even to the less technically inclined, and it gives us a useful legal framework for how digital rights of way should work.

theluddite ,

Yeah, as always, the devil is in the details. For now I think that we need a simple and clear articulation of the main idea. In the exceedingly unlikely event that it ever gets traction, I look forward to hammering out the many nuances.

theluddite ,

If I may be so bold, I and a few others write about tech at https://theluddite.org/.

I focus on the intersection between technology and human decisions. A lot of tech coverage has a techno-optimist, or tech-as-progress default perspective, where tech is almost this inexorable, inevitable, and apolitical force of nature. I strongly disagree with this perspective, which I think is convenient for the powers that be because it obscures that, right now, a few rich humans are making all our tech decisions.

I also write code for a living, which shockingly few tech writers and commentators have ever done. That makes it possible for me to write stuff like this.

theluddite ,

I'm becoming increasingly skeptical of the "destroying our mental health" framework that we've become obsessed with as a society. "Mental health" is so all-encompassing in its breadth (it's basically our entire subjective experience of the world), but at the same time, it's actually quite limiting in the solutions it implies, as if there were specific ailments or exercises or medications.

We're miserable because our world is bad. The mental health crisis is probably better understood as all of us being sad as we collectively and simultaneously burn the world and fill it with trash, seemingly on purpose, and we're not even having fun. The mental health framework, by converting our anger, loneliness, grief, and sadness into medicalized pathologies, stops us from understanding these feelings as valid and actionable. It leads us to seek clinical or technical fixes, like whether we should limit smartphones or whatever.

Maybe smartphones are bad for our mental health, but I think reducing our entire experience of the world to mental health is the worst thing for our mental health.

theluddite ,

Or they're saying that regardless of whether or not heaven and hell are real, both carbon offsets and indulgences are a self-serving practice run by corrupt institutions allowing wealthy people to be publicly absolved from the harm they continue to do.
