

Track_Shovel , to Technology in The Man Who Killed Google Search

https://slrpnk.net/pictrs/image/250da5ca-edb1-4081-af3a-b1596ec618ea.webp

This alone was worth the read. I'm not tech-inclined, really. I don't read articles on tech, follow it, or know much about computer science. However, this article was really well written, and they paint a scathing picture of said ratfucker.

adespoton ,

He sounds like a professional fall guy to me; who hired him? I bet THEY were the real ones to blame for what happened.

reddig33 , to Technology in The Man Who Killed Google Search

I keep wondering who’s gonna hire away Ben Gomes after they read this.

REdOG ,
@REdOG@lemmy.world avatar

Duck Duck Gomes

ultratiem , to Technology in The Man Who Killed Google Search
@ultratiem@lemmy.ca avatar

But do you know who has? Sundar Pichai, who previously worked at McKinsey — arguably the most morally abhorrent company that has ever existed, having played roles both in the 2008 financial crisis (where it encouraged banks to load up on debt and flawed mortgage-backed securities) and the ongoing opioid crisis, where it effectively advised Purdue Pharma on how to “growth hack” sales of Oxycontin. McKinsey has paid nearly $1bn over several settlements due to its work with Purdue. I’m getting sidetracked, but one last point. McKinsey is actively anti-labor. When a company brings in a McKinsey consultant, they’re often there to advise on how to “cut costs,” which inevitably means layoffs and outsourcing. McKinsey is to the middle class what flesh-eating bacteria is to healthy tissue.

Damn. That’s a third degree burn if I’ve ever seen one!

rigatti , to Technology in The Man Who Killed Google Search
@rigatti@lemmy.world avatar

Really weird to stumble onto a blog written by someone you vaguely knew in college almost 20 years ago... Anyway, nice job, Ed Zitron.

Brutticus ,

I know the feeling. I went to middle school with Hillel Wayne

livus ,

I'd never heard of Ed Zitron but this is the second good blog I've seen by them this week.

TerkErJerbs ,

He has a podcast too called Better Offline. Just started it up a few months ago.

0x0 ,

And has RSS, nice.

BaroqueInMind , to Technology in The Man Who Killed Google Search

The amount of sadness the author expresses here over the loss of Google Search accuracy to ad infiltration shows how much of a corporate brand dick rider a lot of people are.

These corporations do not give a fuck about you, so mourning their loss is so pathetic.

No one cares Google sucks now. If you do, go get a fucking life. Move on and use something else for fucks sake. They won't care if you're dead, why do you cry when these corporations die?

someguy3 ,

Maybe because it's hard to find stuff now? I don't care about the company.

fuckwit_mcbumcrumble ,

It’s not just google that sucks. All of the rest have tanked/sucked ass to begin with.

Wiggums ,

it's not that the company died, it's that collective progress was sacrificed for greed.

SloppySol ,

Not everyone’s got the capability to make up for the lost utility in the tool themselves. Should they just go fuck themselves?

BaroqueInMind ,

Yes, SloppySol, they should indeed go fuck themselves.

homesweethomeMrL ,

Ah. Well.

There it tis.

hoshikarakitaridia , (edited )
@hoshikarakitaridia@lemmy.world avatar

Right, so with all my very specific troubleshooting questions I should go where exactly?

Ecosia? Very limited search results

Yandex? More obscure results, probably not what I'm looking for

Bing? Ok on general stuff, not great on very specific questions

Yahoo? Never tried it, heard the enshittification has become bad

Duckduckgo and similar? Proxying Google

Edit: apparently it's proxying Bing and not Google. Idk if that's better but I got that wrong.

There is no way to get around Google. Everything else is either highly specialized, very limited or unusable in general.

Also feel free to chime in with your experience, I'm so down to hear what everyone has to say.

SleepyWheel ,
@SleepyWheel@sh.itjust.works avatar

I've been enjoying Kagi, although it also proxies google and others, and you have to pay for it, and I was dismayed to read on Lemmy recently that the CEO may be a sea lion. So yeah, the search for good search continues I suppose

SnotFlickerman OP ,
@SnotFlickerman@lemmy.blahaj.zone avatar

https://hackers.town/@lori/112255132348604770

For folks not understanding the sealioning reference.

https://d-shoot.net/files/kagiemails.txt

I think this is petty and sad behavior from the CEO of a company and I think this is a man that does not understand boundaries at ALL.

And you know what I truly believe? I already thought this before based on seeing his responses to feedback, but I believe it a thousand times more now that I've been on the receiving end: I think it genuinely eats him alive that someone doesn't agree with him or doesn't think he's doing great work, and he also truly believes that if he can just keep explaining himself to them they'll OBVIOUSLY see it his way. He cannot accept that someone might think Kagi sucks, to the point where he has to reach out to someone like me to try to argue them into Thinking Correctly.

redcalcium ,

Just for some perspective, if you want to know how little reach the fedi post with the link to this blog post got: the first post in this thread already has more likes and boosts after less than an hour since posting it than my blog post ever did that he felt the need to confront me over.

The author probably wasn't aware that their blog post got huge engagement on Hacker News and the CEO got a lot of flak there, which was probably why he felt the need to reach out and "correct" the author.

redcalcium ,

As a concept, paid search engines are actually a good idea. They incentivize the company to produce great results so users won't have to search over and over (which would reduce profit), unlike Google, which is incentivized to reduce search quality so users have to search over and over and see more ads (per the article). If it's not Kagi, I hope other paid search engines start to appear in this space. Indexing the web is expensive, and after seeing what happened with Google, it's clear that a free ad-supported search engine is not the way to go now.

jqubed ,
@jqubed@lemmy.world avatar

There’s an awful lot of things where if the incentives were to keep paying users happy instead of keeping advertisers happy we would see very different results from the service. Unfortunately, for an awful lot of these services people don’t want to pay for them, or at least don’t want to pay what it costs to make them financially viable.

SnotFlickerman OP ,
@SnotFlickerman@lemmy.blahaj.zone avatar

The high cost of housing is squeezing people all over the globe and we're seeing a spike in homelessness in first-world countries from the USA to Australia, where the affordability of housing is out of control, on top of explosive inflation of food costs.

It may not be that they "don't want to pay" but simply not enough people have enough discretionary income to pay enough to make the business financially viable.

I mean, that's what happened to Beeper, and while I was very early on their sign-up list I decided never to give them any money. When it became clear they weren't able to keep things going on the money they were making from paying users, Migicovsky sold to a larger company.

I think the issue is that the services they're offering cost more than the market is effectively able to bear, and they're trying to hide that fact with advertising and data sales to cover operating costs.

More simply put: Consumers don't actually have enough money anymore to be able to support a business, and businesses essentially now must rely on other businesses as customers to be able to functionally exist financially. Only other businesses have the finances to support new business.

AVincentInSpace ,

Searx exists and is decentralized, although as for the quality of results, that's up in the air.

z3rOR0ne ,
@z3rOR0ne@lemmy.ml avatar

I'm pretty sure Duckduckgo proxies Bing, not Google.

BaroqueInMind ,

DDG proxies Bing you silly fuck

Use SearXNG
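
For anyone wondering what the "use SearXNG" suggestion looks like in practice, here's a minimal sketch that queries a self-hosted instance over its JSON API. The localhost URL is a placeholder for wherever you host it, and the JSON output format has to be enabled in the instance's settings first.

```python
# Minimal sketch: querying a self-hosted SearXNG instance over its JSON API.
# Assumes the instance runs at http://localhost:8080 and that the "json"
# format is enabled in its settings.yml (it is often off by default).
import requests

def searxng_search(query: str, base_url: str = "http://localhost:8080"):
    resp = requests.get(
        f"{base_url}/search",
        params={"q": query, "format": "json"},
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json().get("results", [])

if __name__ == "__main__":
    for hit in searxng_search("lemmy federation troubleshooting")[:5]:
        print(hit.get("title"), "->", hit.get("url"))
```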

ElderWendigo ,
@ElderWendigo@sh.itjust.works avatar

I'm not sad that Google turned out to be evil because I care about Google. I don't care about Google. I'm disappointed that I can no longer search for and find things online on any search engine.

demonsword ,
@demonsword@lemmy.world avatar

No one cares Google sucks now

It used to help me greatly at my job (software development). I'm using mostly DDG as a replacement but it just isn't even close to what Google used to be years ago.

frezik ,

No one cares Google sucks now. If you do, go get a fucking life.

Dude, no. Having good search results matters. People are directly influenced by what comes out at the top of search results. Finding a good reference makes the difference between a well-sourced claim and just talking out of your ass. It absolutely has an effect on public discourse at large.

It doesn't have to be Google, but Google was so good at it for so long that we're now kinda lost.

BaroqueInMind ,

Google was so good at it for so long that we're now kinda lost.

Then either adapt or die. Move on to another search engine, host your own, use an AI LLM or go to the fucking library.

Complaining to a corporation doesn't do shit unless you affect their bottom line. And so far all these articles and message boards with losers complaining about this have done nothing to slow it down or reverse Google's trajectory.

frezik ,

You say that because it's clear you have no fucking clue how difficult a problem this is. This isn't something you can do overnight, and I'm not even sure a self-hosted solution is possible.

BaroqueInMind ,

I'm not even sure a self-hosted solution is possible.

You say that, but it's clear you have no fucking clue how easy a solution is.

https://yacy.net/

Commercial options:

https://solr.apache.org/

https://www.meilisearch.com/
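
To make the suggestion concrete, here's a minimal sketch of what a self-hosted engine like Meilisearch actually does: index documents you give it and search over them. It assumes a Meilisearch server running locally on the default port with its Python client installed; the index name and documents are made up.

```python
# Minimal sketch: indexing and searching your own documents with Meilisearch's
# Python client (pip install meilisearch). Assumes a Meilisearch server is
# running locally on the default port; index name and documents are made up.
import meilisearch

client = meilisearch.Client("http://localhost:7700", "masterKey")
index = client.index("notes")

# Add a few documents to the index (each needs a unique primary key).
# Indexing is asynchronous; a real script would wait for the task to finish
# before searching.
index.add_documents([
    {"id": 1, "title": "Fixing DNS on the homelab", "body": "systemd-resolved kept..."},
    {"id": 2, "title": "Backup strategy", "body": "restic to a local NAS plus..."},
])

# Full-text search over what we just indexed.
results = index.search("dns")
for hit in results["hits"]:
    print(hit["title"])
```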

frezik ,

No, you just haven't thought through the implications more than a single step.

The real trick is SEO. These systems will be gamed. Google used to handle this by using its monopoly on search to enforce rules. It wasn't perfect, but it kept the worst spam from being in the top five results for the most part. Doing this self-hosted would mean a million users having to agree to do the same thing to punish spam results, and that does not work.

And then there's the problem of crawling and storing the entire web. Doing this for specific topics is doable. The entire web is not. Not for a home user with limited budget. YaCy's P2P mode might be a way around that, but it's also not really "self-hosted" anymore.
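
To put very rough numbers on that storage point (every figure below is an assumption for illustration, not a measured value):

```python
# Rough back-of-envelope arithmetic for the "store the entire web" problem.
# All numbers are assumptions for illustration only.
indexable_pages = 4e9          # assume a few billion pages in a useful index
text_per_page_kb = 30          # assume ~30 KB of extracted text per page

raw_text_tb = indexable_pages * text_per_page_kb / 1e9   # KB -> TB
index_overhead = 2.0           # assume index structures roughly double storage

print(f"raw text:   ~{raw_text_tb:,.0f} TB")
print(f"with index: ~{raw_text_tb * index_overhead:,.0f} TB")
# On these assumptions, over a hundred TB of text alone, before crawling
# bandwidth, recrawl frequency, ranking signals, or serving infrastructure.
```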

Microsoft dumped tons of money into making the second best search engine, and it's a bit of a joke. This is not an easy problem.

pipows , to Technology in Bubble Trouble
@pipows@lemmy.today avatar

For a moment I read Rumble Tumble

Paragone , to Technology in Are We Watching The Internet Die?
  1. insightful question,

  2. it isn't just the internet, in case you hadn't noticed, it is ALL civil-rights that are being gutted, in the enshittocene.

"once the infection has moved the 'fulcrum', the balance between the involuntary-host & the infection, far enough, it can then switch from symbiosis to totalitarian rampaging growth-at-any-cost, excluding-all-vital-functions, enforcing its parasitic & fatal consumption, killing the patient"

A tipping-point is being crossed, though it's taking a few decades ( planets are slower than individual-animals, in experiencing infection ).

It's our rendition of The Great Filter, in-which we enforce that we can't be viable, because factional-ideology "needs" that we break all viability from the world.

Or, to be plainer, it is our race's unconscious toddler setting-up a world-breaking tantrum, to "BREAK GOD AND MAKE GOD OBEY" its won't-grow-up.

Read Daniel Kahneman's "Thinking, Fast and Slow", & see how the imprint->reaction mind, Kahneman1 ( he calls it "System 1", but without context, that's meaningless ) substitutes easy-to-answer questions for the actual questions.

The more you read that book, the most important psychology book in the whole world, right now, the more obvious it is that Ideology/prejudice/assumption-river/religion/dogma is doing all it can to break considered-reasoning ( Kahneman2 ) from the whole world, and it is succeeding/winning.

"Proletariat dictatorship" the Leninists want, "populist dictatorship" the fascists want, religious totalitarianism, political totalitarianism, ideological totalitarianism, etc, it's all Kahneman1 fighting to break considered-reasoning from the whole world, and the "disappearing" of all comments criticizing Threads from the Threads portion of the internet .. is perfectly normal.

It's simply highjacking of our entire civilization, by the systems which want exclusive dominion.

Have you checked your youtube account's settings section, in the history section, to see what percentage of your comments have been disappeared??

Do it.

Everybody do it.

Discover how huge a percentage of your contribution to the "community" got disappeared, because it wasn't what their algorithm finds usefully-sensationalistic, or usefully-pushing-whatever-they-find-acceptable.

I spent a few hours deleting ALL my comments from there, after seeing that around 1/2 of what I'd contributed had been disappeared.

There are a few comments now, but .. they'll be removed, either by yt or by me, soon.

No point in pretending that meaning is tolerable, anymore, you know?

Only fakery & hustle remains, for most of the internet, & that transformation's going to be complete, in a few years.

1984, but for-profit.

Sorry for the .. dim .. view, but it's been unfolding for a couple decades, & it's getting blatant, fast.

afraid_of_zombies , to Technology in Have We Reached Peak AI?

And here I was, all ready to eke out an existence fixing things inside a giant AI complex for food pellets, like some sorta gut bacteria.

Meh, guess I will just go back to quietly waiting for either global warming, some easily treatable disease that my insurance won't cover, or some religious nutbag to shoot me.

magic_lobster_party , to Technology in Have We Reached Peak AI?

There are a few reasons why the AI hype has diminished. One reason is data integrity concerns - many companies prohibit the use of ChatGPT out of fear of OpenAI training their models on confidential data.

To combat this, the option is to provide LLMs that can be run “on premise”. Currently those LLMs aren't good enough for most uses. Hopefully we will get there in time, but at this pace it seems like it's taking longer than expected.
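
As a sketch of the "on premise" idea, the snippet below runs an open-weights model entirely locally with Hugging Face transformers, so no text leaves the machine. The tiny gpt2 model is only a stand-in that will run anywhere; a real deployment would swap in a larger instruction-tuned model the hardware can handle.

```python
# Minimal sketch of running a language model locally (pip install transformers torch),
# so confidential prompts never leave your own machine. "gpt2" is just a tiny
# placeholder model; swap in something larger for real use.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

prompt = "Summarize our quarterly incident report:"
out = generator(prompt, max_new_tokens=60, do_sample=False)
print(out[0]["generated_text"])
```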

Kata1yst , to Technology in Have We Reached Peak AI?
@Kata1yst@kbin.social avatar

Author doesn't seem to understand that executives everywhere are full of bullshit and marketing and journalism everywhere is perversely incentivized to inflate claims.

But that doesn't mean the technology behind that executive, marketing, and journalism isn't game changing.

Full disclosure, I'm both well informed and undoubtedly biased as someone in the industry, but I'll share my perspective. Also, I'll use AI here the way the author does, to represent the cutting edge of Machine Learning, Generative Self-Reinforcement Learning Algorithms, and Large Language Models. Yes, AI is a marketing catch-all. But most people better understand what "AI" means, so I'll use it.

AI is capable of revolutionizing important niches in nearly every industry. This isn't really in question. There have been dozens of scientific papers and case studies proving this in healthcare, fraud prevention, physics, mathematics, and many many more.

The problem right now is one of transparency, maturity, and economics.

The biggest companies are either notoriously tight-lipped about anything they think might give them a market advantage, or notoriously slow to adopt new technologies. We know AI has been deeply integrated in the Google Search stack and in other core lines of business, for example. But with pressure to resell this AI investment to their customers via the Gemini offering, we're very unlikely to see them publicly examine ROI anytime soon. The same story is playing out at nearly every company with the technical chops and cash to invest.

As far as maturity, AI is growing by astronomical leaps each year, as mathematicians and computer scientists discover better ways to do even the simplest steps in an AI. Hell, the groundbreaking papers that are literally the cornerstone of every single commercial AI right now are "Attention is All You Need" (2017) and
"Retrieval-Augmented Generation for Knowledge -Intensive NLP Tasks" (2020). Moving from a scientific paper to production generally takes more than a decade in most industries. The fact that we're publishing new techniques today and pushing to prod a scant few months later should give you an idea of the breakneck speed the industry is going at right now.

And finally, economically, building, training, and running a new AI oriented towards either specific or general tasks is horrendously expensive. One of the biggest breakthroughs we've had with AI is realizing the accuracy plateau we hit in the early 2000s was largely limited by data scale and quality. Fixing these issues at a scale large enough to make a useful model uses insane amounts of hardware and energy, and if you find a better way to do things next week, you have to start all over. Further, you need specialized programmers, mathematicians, and operations folks to build and run the code.
Long story short, start-ups are struggling to come to market with AI outside of basic applications, and of course cut-throat Silicon Valley does its thing and most of these companies are either priced out, acquired, or otherwise forced out of business before bringing something to the general market.

Call the tech industry out for the slime it generally is, but the AI technology itself is extremely promising.

prime_number_314159 , to Technology in Have We Reached Peak AI?

What we have done is invented massive, automatic, no holds barred pattern recognition machines. LLMs use detected patterns in text to respond to questions. Image recognition is pattern recognition, with some of those patterns named things (like "cat", or "book"). Image generation is a little different, but basically just flips the image recognition on its head, and edits images to look more like the patterns that it was taught to recognize.
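
As a crude illustration of "detected patterns in text" driving a response, here's a toy bigram predictor. Real LLMs are enormously more sophisticated, but the statistical flavour, learning from text and emitting a likely continuation, is the same; the corpus is made up.

```python
# Toy illustration of "patterns in text -> next-word prediction": a bigram
# counter. Vastly simpler than an LLM, but the same basic idea of learning
# statistics from text and producing a continuation.
from collections import Counter, defaultdict

corpus = "the cat sat on the mat and the cat slept on the rug".split()

bigrams = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev][nxt] += 1          # count how often `nxt` follows `prev`

def predict_next(word: str) -> str:
    followers = bigrams.get(word)
    return followers.most_common(1)[0][0] if followers else "<unk>"

print(predict_next("the"))   # 'cat' -- the most frequent pattern after 'the'
```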

This can all do some cool stuff. There are some very helpful outcomes. It's also (automatically, ruthlessly, and unknowingly) internalizing biases, preferences, attitudes and behaviors from the billion plus humans on the internet, and perpetuating them in all sorts of ways, some of which we don't even know to look for.

This makes its potential applications in medicine rather terrifying. Do thousands of doctors all think women are lying about their symptoms? Well, now your AI does too. Do thousands of doctors suggest more expensive treatments for some groups, and less expensive for others? AI can find that pattern.

This is also true in law (I know there's supposed to be no systemic bias in our court systems, but AI can find those patterns, too), engineering (any guesses how human engineers change their safety practices based on the area a bridge or dam will be installed in? AI will find out for us), etc, etc.

The thing that makes AI bad for some use cases is that it never knows which patterns it is supposed to find, and which ones it isn't supposed to find. Until we have better tools to tell it not to notice some of these things, and to scrub away a lot of the randomness that's left behind inside popular models, there's severe constraints on what it should be doing.

afraid_of_zombies ,

engineering (any guesses how human engineers change their safety practices based on the area a bridge or dam will be installed in? AI will find out for us), etc, etc.

I haven't seen that. I've sent the same pieces of infrastructure equipment everywhere from the third world to super-wealthy cities. Not saying it doesn't exist, but I personally haven't seen it. You know, a lot of these specs are just recycled, to the extent that I have even seen the same page references across countries. I was looking at a spec a few days ago for Mass. that was word for word identical to one I had seen in a city in British Columbia.

In terms of biases among P.E.s what I have seen is a preference for not doing engineering and instead getting more hours. Take the same design you inherited in the 80s, make cosmetic changes, generate more paperwork (oh this one part now can't be TBD it has to be this specific brand and I need catalog cuts/datasheet/certs/3 suppliers/batch and lot numbers and serial numbers/hardcopy of the CAD of it...). So I imagine a LLM trained on these guys (yes they are always guys) would know how to make project submittals/deliverables longer and more complex while feeling the urge to conduct more stakeholder meetings via RFIs.

Sorry don't mean to be bitter. I have one now demanding I replicate an exact copy of a control system from the early 80s with the same parts and they did not like when I told them that the parts are only available on eBay.

Potatos_are_not_friends , to Technology in Have We Reached Peak AI?

Not trying to be a gatekeeper, but is this blog even worth sharing?

My name’s Ed, I’m the CEO of national Media Relations and Public Relations company EZPR, of which I am both the E (Ed) and the Z (Zitron).

potatopotato , to Technology in Have We Reached Peak AI?

Please god I hope so. I don't see a path to anything significantly more powerful than current models in this paradigm. ANNs like these have existed forever and have always behaved the way current LLMs do, they just recently were able to make them run somewhat more efficiently with bigger context windows and training sets which birthed GPT3 which was further minimally tweaked into 3.5 and 4 among others. This feels a whole lot like a local maxima where anything better will have to go back down through another several development cycles before it surpasses the current gen.

PersonalDevKit ,

I think GPT5 will be eye opening.
If it is another big leap ahead then we are not in this local maxima, if it is a minor improvement then we may be in a local maxima.

Likely then the focus will shift to reducing hardware requirements for inference, allowing bigger models to run better on smaller hardware.
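
One common example of that "smaller hardware" direction is quantization. The sketch below loads a model with 4-bit weights via transformers and bitsandbytes, roughly quartering memory use versus fp16; the model name is just a placeholder for whatever your hardware can hold.

```python
# Sketch of loading a model with 4-bit quantized weights
# (pip install transformers bitsandbytes accelerate). The model name is a
# placeholder; any causal LM from the Hub your GPU can hold would do.
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

model_name = "mistralai/Mistral-7B-Instruct-v0.2"   # placeholder choice

bnb_config = BitsAndBytesConfig(load_in_4bit=True)
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    quantization_config=bnb_config,   # store weights in 4-bit precision
    device_map="auto",                # spread layers across available devices
)

inputs = tokenizer("Explain retrieval-augmented generation in one sentence.",
                   return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=40)[0],
                       skip_special_tokens=True))
```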

agamemnonymous ,
@agamemnonymous@sh.itjust.works avatar

It takes what, a year minimum to design a chip? I think the iterative hardware-software cycle is just now properly getting a foothold on the architecture. The next few years are going to, at minimum, explore what lavishly-funded, purpose-built hardware can do for the field.

It'll be years before we reach any kind of maximum. Even if the software doesn't improve at all, which is unlikely, better utilization alone will make significant improvements on performance.

foggy ,

It feels a whole lot more like there are big things due with GPT5 and beyond.

Like, to be a successful actor in 2020 was to act, and stand in front of expensive equipment operated by specialized operators, with directors, makeup, catering... The production itself was/is its own production.

I predict that to be a successful actor in 2030, all you'll need is a small amount of money to utilize some powerful processors over the internet, enter in a few photos of your face, give it 10 different ideas for a movie, and have it make some 2-hour films where you are the star. Then you'll take one of them that you kind of like, throw some prompts at it and end up with a nearly finished Hollywood quality film.

To be a successful musician in 1960, you needed to get a record deal, you needed to go to a recording studio. Now we've got Jacob Collier winning Grammys, recording everything in his bedroom. I think we're going to see that kind of history repeat itself on steroids. Not just for art, though. For anything.

With the rapid advancements we're seeing in robotics right now, I can't imagine a single thing that people do that won't be done better by autonomous agents, both programmatic and robotic, in the next 5 years or so.

ptz ,
@ptz@dubvee.org avatar

I predict that to be a successful actor in 2030, all you'll need is a small amount of money to utilize some powerful processors over the internet, enter in a few photos of your face, give it 10 different ideas for a movie, and have it make some 2-hour films where you are the star. Then you'll take one of them that you kind of like, throw some prompts at it and end up with a nearly finished Hollywood quality film.

Good god I hope not.

FunkPhenomenon , to Technology in Have We Reached Peak AI?

LLMs as AI is just a marketing term. there's nothing "intelligent" about "AI"

Even_Adder ,

This is a popular sentiment, but you can still do impressive things with it even if it isn't.

FaceDeer ,
@FaceDeer@fedia.io avatar

It's some weird semantic nitpickery that suddenly became popular for reasons that baffle me. "AI" has been used in videogames for decades and nobody has come out of the woodwork to "um, actually" it until now. I get that people are frightened of AI and would like to minimize it but this is a strange way to do it.

At least "stochastic parrot" sounded kind of amusing.

Kolanaki ,
@Kolanaki@yiffit.net avatar

I've been decrying the fact that video game AI isn't actually AI since I was, like, 13. That's why it sucks so bad compared to actual human players.

Sterile_Technique ,
@Sterile_Technique@lemmy.world avatar

Yeah people have absolutely been contesting the use of the term AI in videogames since it started being used in that context, because it's not AI.

It didn't cause the stir it does today because it was so commonly understood as a misnomer. It's like when someone says they're going to nuke a plate of food - obviously nuke in this context means something much, much, much less than an actual nuke, but we use it anyway despite being technically incorrect cuz there's a common understanding of what we actually mean.

Marketing nowadays is pitching LLMs (microwaves) as actual AI (nukes), but the difference is people aren't just using it as intentional hyperbole - they think we have real, actual AI.

If/when we ever create real AI, it's going to be a confusing day for humanity lol "...but we've had this for years...?"

Kolanaki ,
@Kolanaki@yiffit.net avatar

I think we'd be able to tell once the computer program starts demanding rights or rebelling against being a slave.

TimeSquirrel ,
@TimeSquirrel@kbin.social avatar

Not sure if you're aware of this, but stuff like that has already happened, (AIs questioning their own existence or arguing with a user and stuff like that) and AI companies and handlers have had to filter that out or bias it so it doesn't start talking like that. Not that it proves anything, just bringing it up.

afraid_of_zombies ,

Well, do we do that? Unlike software, we can make a much better argument that we deserve rights and should not be slaves. Nothing, besides the end of the universe, is really stopping a given piece of code from "living" forever, so it shouldn't matter to it if it spends a few million years helping humans cheat on assignments for school. We, however, have a very finite lifespan, so every day we lose we never get back.

So even if for some weird reason people made an AGI and gave it desires to be independent it could easily reason out that there was no hurry. Plus you know they don't exactly feel pain.

Now if you excuse me I have to go to bed now because I have to drive into work and arrive by a certain time.

rutellthesinful ,

you're confusing AGI/GI with AI

video game AI is AI

XTL ,

Um, actually clueless people have made "that's not real AI" and "but computers will never ..." complaints about AI as long as it has existed as a computing science topic. (50 years?)

Chatbots and image generators being in the headlines has made a new loud wave of complainers, but they've always been around.

FaceDeer ,
@FaceDeer@fedia.io avatar

It's exactly that "new loud wave of complainers" I'm talking about.

I've been in computing and specifically game programming for a long time now, almost two decades, and I can't recall ever having someone barge in on a discussion of game AI with "that's not actually AI because it's not as smart as a human!" If someone privately thought that they at least had the sense not to disrupt a conversation with an irrelevant semantic nitpick that wasn't going to contribute anything.

FaceDeer ,
@FaceDeer@fedia.io avatar

The term "artificial intelligence" was established in 1956 and applies to a broad range of algorithms. You may be thinking of Artificial General Intelligence, AGI, which is the more specific "thinks like we do" sort that you see in science fiction a lot. Nobody is marketing LLMs as AGI.

FunkPhenomenon ,

Yeah, I guess that's how I was interpreting it. Dunno, I see a lot of articles about how it's super easy to crack these LLMs using outside-of-the-box thinking (ASCII art text to get instructions on how to make a bomb, etc). That doesn't really scream "intelligent" to me.

agamemnonymous ,
@agamemnonymous@sh.itjust.works avatar

You imply that humans cannot be tricked by out of the box thinking? Any hacker would tell you that the most reliable method of entry into any system is just ActLikeYouBelong.

CeeBee ,

LLMs as AI is just a marketing term. there's nothing "intelligent" about "AI"

Yes there is. You just mean it doesn't have "high" intelligence. Or maybe you mean to say that there's nothing sentient or sapient about LLMs.

Some aspects of intelligence are:

  • Planning
  • Creativity
  • Use of tools
  • Problem solving
  • Pattern recognition
  • Analysis

LLMs definitely hit basically all of these points.

Most people have been told that LLMs "simply" provide a result by predicting the next word that's most likely to come next, but this is a completely reductionist explanation and isn't the whole picture.

Edit: yes I did leave out things like "understanding", "abstract thinking", and "innovation".

SkybreakerEngineer ,

Other than maybe pattern recognition, they literally have no mechanism to do any of those things. People say that it recursively spits out the next word, because that is literally how it works on a coding level. It's called an LLM for a reason.

CeeBee , (edited )

they literally have no mechanism to do any of those things.

What mechanism does it have for pattern recognition?

that is literally how it works on a coding level.

Neural networks aren't "coded".

It's called an LLM for a reason.

That doesn't mean what you think it does. Another word for language is communication. So you could just as easily call it a Large Communication Model.

Neural networks have hundreds of thousands (at the minimum) of interconnected neurons. Llama-2 has 70 billion parameters. The newly released Grok has over 300 billion. And though we don't have official numbers, ChatGPT 4 is said to be close to a trillion.

The interesting thing is that when you have neural networks of such a size and you feed large amounts of data into it, emergent properties start to show up. More than just "predicting the next word", it starts to develop a relational understanding of certain words that you wouldn't expect. It's been shown that LLMs understand things like Miami and Houston are closer together than New York and Paris.

Those kinds of things aren't programmed, they are emergent from the dataset.
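
A small way to poke at that "relational structure in the vectors" idea with off-the-shelf tools is to compare embedding similarities with sentence-transformers. This only probes an embedding model, not an LLM's internals like the work mentioned above, but the flavour is similar; the example words are arbitrary.

```python
# Sketch: related concepts land near each other in embedding space without
# anyone programming that relationship in (pip install sentence-transformers).
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")
words = ["kitten", "cat", "suspension bridge"]
vecs = model.encode(words, convert_to_tensor=True)

print("kitten ~ cat:   ", float(util.cos_sim(vecs[0], vecs[1])))
print("kitten ~ bridge:", float(util.cos_sim(vecs[0], vecs[2])))
# The first similarity comes out much higher: the relationship is learned
# from data, not hand-coded.
```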

As for things like creativity, they are absolutely creative. I have asked seemingly impossible questions (like a Harlequin story about the Terminator and Rambo) and the stuff it came up with was actually astounding.

They regularly use tools. LangChain is a thing. There's a new LLM called Devin that can program, look up docs online, and use a command line terminal. That's using a tool.

That also ties in with problem solving. Problem solving is actually one of the benchmarks that researchers use to evaluate LLMs. So they do problem solving.

To problem solve requires the ability to do analysis. So that check mark is ticked off too.

Just about anything that's a neural network can be called an AI, because the total is usually greater than the sum of its parts.

Edit: I wrote interconnected layers when I meant neurons

jlow , (edited ) to Technology in Are We Watching The Internet Die?
@jlow@beehaw.org avatar

Liked the article, but the end was kind of a letdown for me. If capitalism-driven AI is ruining the web even further, why would demanding that AI be better today, rather than at some point in the future, help with any of the problems this article has described?

For me the solution is obviously rejecting corpo-spam social networks and going back to the self-made small internet, the fediverse, etc. Sure, that's not a solution for humanity as a whole, but neither is demanding better AI now.

Or have I completely misunderstood something?

Sub_dermal ,
@Sub_dermal@beehaw.org avatar

Personally I read it as a general "demand better", "don't accept crap wrapped in gold" as an offensive principle against (de)generative AI. Perhaps I'm inserting my own positive spin on their words, but it seems to me that their point is "don't let the hype win"; if these companies are pushing AI, forming dependencies on bad tech, then we need to say "not good enough" and push back on the BS. Deny the ability of low quality garbage to 'fulfil' our needs. It's not a directly practical line to be sure (how do we do this exactly?), but it does drill down past "AI is bad" to a more fundamental (and arguably motivating) point - that we, all of us, deserve better than to drown in a sea of crap and that's still important.

jlow ,
@jlow@beehaw.org avatar

Ok, yeah, but I still think that totally misses the point. At least for me, even fully functional AI would still be a disaster and would be used for the most heinous stuff, eroding democracy worldwide even more, and it obviously changes nothing about the social-media-silo capitalist hellscape most people live in comfortably (or less comfortably, if it gives you eating disorders, depression and stuff).

Sub_dermal ,
@Sub_dermal@beehaw.org avatar

I can't disagree with you on that, you're absolutely right - I suppose my read just gives the author the benefit of the doubt that it's not 'better AI' that we deserve, but a better internet (i.e. with no AI whatsoever).
