

QuadratureSurfer

@QuadratureSurfer@lemmy.world


QuadratureSurfer ,

That's just a link to all datacenters and doesn't break out how much energy is going to AI vs how much energy is being used to stream Netflix.

You might as well say we should shut down the internet because it uses too much electricity.

QuadratureSurfer ,

OK... warning: wall of text incoming.

TL/DR: We end up comparing LLM executions with Google searches (a single prompt to ChatGPT uses about 10x as much electricity as a single Google search execution). How many Google searches and links do you need to click on vs requesting information from ChatGPT?
I also touch on different use cases beyond just the use of LLMs.

The true argument comes down to this:
Is the increase in productivity worth the boost in electricity?
Is there a better tool out there that makes more sense than using an AI Model?


For the first article:

The only somewhat useful number in here just says that Microsoft's emissions were 30% higher than its 2020 goals... that doesn't break down how much more energy AI is using, despite how much the article wants to blame the training of AI models.

The second article was mostly worthless, again pointing at numbers from all datacenters but conveniently putting 100% of the blame on AI throughout most of the article. At the very end, though, it finally included something a bit more specific, as well as an actual source:

AI could burn through 10 times as much electricity in 2026 as it did in 2023, according to the International Energy Agency.

Link to source: https://www.iea.org/reports/electricity-2024

A 170 page document by the International Energy Agency.
Much better.

Page 8:

Electricity consumption from data centres, artificial intelligence (AI) and the cryptocurrency sector could double by 2026.

Not a very useful number, since it lumps cryptocurrency in with all data centers and "AI".

Moreover, we forecast that electricity consumption from data centres in the European Union in 2026 will be 30% higher than 2023 levels, as new data facilities are commissioned amid increased digitalisation and AI computations.

Again, mixing AI numbers with all datacenters.

Page 35:

By 2026, the AI industry is expected to have grown exponentially to consume at least ten times its demand in 2023.

OK, I'm assuming this is where they got their 10x figure, but growth in the AI industry's demand is not necessarily the same thing as using 10x more electricity per task, especially if you're trying to compare traditional energy use for specific tasks to the energy required to execute a trained AI model.

Page 34:

When comparing the average electricity demand of a typical Google search (0.3 Wh of electricity) to OpenAI’s ChatGPT (2.9 Wh per request)

Link to source of that number:
https://www.sciencedirect.com/science/article/abs/pii/S2542435123003653?dgcid=author

It's behind a paywall, but if you're on a college campus or at certain libraries you might be able to access it for free.

Finally, we have some real numbers we can work with. Let's break this down: a single Google search uses a little more than 1/10th of the electricity of a single ChatGPT request.

So here's the thing: how many Google searches do you have to run to get the right answer? And how many links do you need to click on to be satisfied?
It's going to depend on what you're looking for.
For example, if I'm doing research or solving a problem, I'll probably end up with about 10-20 browser tabs open by the time I have all of the information I need.
And don't forget that each of those sites has to be clicked on and loaded to get more info.
However, when I'm finally done, I get the sweet satisfaction of closing all the tabs down.

Compare that to using an LLM: I get a direct answer to what I need, then do a little double-checking to verify that the answer is legitimate (maybe 1-2 Google-equivalent searches), and I'm good to go. Not only have I spent less time overall on the problem, but in some cases I might even have used less electricity after factoring everything in.
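Just to make that concrete, here's some rough napkin math in Python. The 0.3 Wh and 2.9 Wh figures are the IEA numbers from above; the per-page-load cost and the number of searches/tabs are my own assumptions, so treat this as a sketch rather than a measurement:

```python
# Back-of-the-envelope comparison of the two workflows described above.
# 0.3 Wh / 2.9 Wh come from the IEA report (page 34); everything else here
# (page-load cost, number of searches/tabs) is an assumption for illustration.

GOOGLE_SEARCH_WH = 0.3     # Wh per Google search
CHATGPT_REQUEST_WH = 2.9   # Wh per ChatGPT request
PAGE_LOAD_WH = 0.3         # assumed Wh to load one result page (placeholder)

# Workflow 1: traditional research session (lots of searches and tabs)
searches, pages_opened = 10, 15
traditional_wh = searches * GOOGLE_SEARCH_WH + pages_opened * PAGE_LOAD_WH

# Workflow 2: one LLM prompt plus a couple of verification searches
llm_wh = 1 * CHATGPT_REQUEST_WH + 2 * GOOGLE_SEARCH_WH

print(f"Traditional: {traditional_wh:.1f} Wh vs LLM-assisted: {llm_wh:.1f} Wh")
# Traditional: 7.5 Wh vs LLM-assisted: 3.5 Wh
```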

Let's try a different use case: Images.
I could spend hours working in Photoshop to create some image that I can use as my Avatar on a website.
Or I can take a few minutes generating a bunch of images through Stable Diffusion and then pick out one I like. Not only have I saved time in this task, but I have used less electricity.
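If you want to see roughly what that workflow looks like in code, here's a minimal sketch using the Hugging Face diffusers library (the model ID and prompt are just examples; swap in your own):

```python
# Minimal sketch: generate a few avatar candidates with Stable Diffusion via
# the diffusers library, then pick whichever one you like. Example values only.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

prompt = "minimalist geometric fox avatar, flat colors, centered"
for i in range(4):
    image = pipe(prompt, num_inference_steps=30, guidance_scale=7.5).images[0]
    image.save(f"avatar_candidate_{i}.png")
```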

In another example, I could spend time/electricity watching a video over and over again, trying to translate what someone said from one language to another, or I could use Whisper to translate and transcribe what was said in a matter of seconds.
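For reference, a transcription-plus-translation pass with the open-source whisper package is only a few lines (the filename and model size here are just examples):

```python
# Sketch: transcribe a clip and translate the speech to English in one pass
# using OpenAI's open-source whisper package. Filename/model size are examples.
import whisper

model = whisper.load_model("small")
result = model.transcribe("clip.mp4", task="translate")  # speech -> English text
print(result["text"])
```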

On the other hand, there are absolutely use cases where using some ML model is incredibly wasteful.
Take, for example, a rain sensor on your car.
Now, you could set up some AI model with a camera and computer vision to detect when to turn on your windshield wipers.
But why do that when you could use a little sensor that shines a small laser at the windshield and activates the wipers when it detects a change in how much of that light is reflected back?
The dedicated sensor with its low-power laser will use far less energy and be way more efficient for this use case.

Of course, we still need to factor in the electricity required to train and later fine-tune a model. Small models only need a few seconds to a few minutes to train; other models may need a month or more. Once training is complete, no further training electricity is required, and the model can be packaged up and distributed over the internet like any other file (electricity is used for that too, of course, but then you might as well complain about people streaming 8K video to their homes for entertainment).

So everything being said, it really comes down to this:
Does the increase in productivity warrant the bump in electricity usage?
Is there a better tool out there that makes more sense than using an AI Model?

QuadratureSurfer ,

Bitcoin difficulty chart - good point.

Effectiveness of AI powered search - Agreed, it is a very subjective topic. I don't use LLMs for the majority of my searches (who needs hallucinated dates and times for the movies playing at a cinema near me?) and it sounds like Google is trying to use their LLM with every search now... In my opinion we should have a button to activate the LLM on a search rather than have it respond every time (but I don't really use Google search anyway).

Translation/Transcription tech - It's incredibly useful for anyone who's deaf.
Your average person doesn't need this, although I'm sure they benefit from the auto-generated subtitles if they're trying to watch a video in a noisy environment (or with the volume off).
In my own personal use I've found it useful for cutting through the nonsense posted by both sides of either the Ukraine/Russia conflict or the Israel/Gaza conflict (in the case of misinformation targeting those who don't speak the language).

Generative AI - Yeah, this will be interesting to see how it plays out in the courts. I definitely see good points raised by both sides, although I'm personally leaning towards a ruling that would allow smaller startups/research groups to compete with larger corporations (which will be able to buy their way into training data).
It'll be interesting to see how these cases proceed on the text vs audio vs image/art fronts.

Wasteful AI - Agreed... too many companies are jumping in on the "AI" bandwagon without properly evaluating whether there's a better way to do something.

Anyway, thanks for taking the time to read through everything.

QuadratureSurfer ,

and more

I bet they included farming equipment in the exemption list...

The ugly truth behind ChatGPT: AI is guzzling resources at planet-eating rates (www.theguardian.com)

Despite its name, the infrastructure used by the “cloud” accounts for more global greenhouse emissions than commercial flights. In 2018, for instance, the 5bn YouTube hits for the viral song Despacito used the same amount of energy it would take to heat 40,000 US homes annually....

QuadratureSurfer ,

This article may as well be trying to argue that we're wasting resources by using "cloud gaming", or even by gaming on your own PC.

QuadratureSurfer ,

I'm going to assume that when you say "AI" you're referring to LLMs like ChatGPT. Otherwise I can easily point to tons of benefits that AI models provide to a wide variety of industries (and that are already in use today).

Even then, if we restrict your statement to LLMs, who are you to say that I can't use an LLM as a dungeon master for a quick round of DnD? That has about as much purpose as gaming does, so it's providing a real benefit to people in that respect.

Beyond gaming, LLMs can also be used for brainstorming ideas, summarizing documents, and even for help with generating code in every programming language. There are very real benefits here and they are already being used in this way.

And as far as resources are concerned, there are newer models being released all the time that are better and more efficient than the last. Most recently we had Llama 3 released (just last month), so I'm not sure how you're jumping to conclusions that we've hit some sort of limit in terms of efficiency with resources required to run these models (and that's also ignoring the advances being made at a hardware level).

Because of Llama 3, we're essentially able to have something like our own personal GLaDOS right now:
https://www.reddit.com/r/LocalLLaMA/comments/1csnexs/local_glados_now_running_on_windows_11_rtx_2060/

https://github.com/dnhkng/GlaDOS
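For anyone curious what "running Llama 3 locally" actually looks like, here's a minimal sketch using llama-cpp-python with a quantized GGUF file (the model path and system prompt are just examples of how you might set it up):

```python
# Minimal sketch of a local Llama 3 chat using llama-cpp-python.
# The GGUF path is whatever quantized model file you downloaded (example name).
from llama_cpp import Llama

llm = Llama(model_path="./Meta-Llama-3-8B-Instruct.Q4_K_M.gguf", n_ctx=4096)

out = llm.create_chat_completion(
    messages=[
        {"role": "system", "content": "You are GLaDOS. Be dry and sarcastic."},
        {"role": "user", "content": "Good morning."},
    ],
    max_tokens=128,
)
print(out["choices"][0]["message"]["content"])
```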

QuadratureSurfer ,

I think you're confusing "AI" with "AGI".

"AI" doesn't mean what it used to and if you use it today it encompasses a very wide range of tech including machine learning models:

Speech to text (STT), text to speech (TTS), generative AI for text (LLMs), images (Midjourney/Stable Diffusion), and audio (Suno), as well as upscaling and computer vision (object detection, etc.).

But since you're looking for AGI, there's nothing specific to point at, since that doesn't exist.

Edit: typo

QuadratureSurfer , (edited )

Edit: OK, it really doesn't help when you edit your comment to add clarification based on my reply, along with additional remarks.


I mean, that's kind of the whole point of why I was trying to nail down what the other user meant when they said "AI doesn't provide much benefit yet".

The definition of "AI" today is way too broad for anyone to make statements like that now.

And to make sure I understand your question, are you asking me to provide you with the definition of "AI"? Or are you asking for the definition of "AGI"?

Do bosses from video games count?

Count under the broad definition of "AI"?
Yes, when we talk about bosses from video games we talk about "AI" for NPCs. And no, this should not be lumped in with machine learning models unless the game devs actually trained a model to control that NPC's behaviour.

In either case, current NPC AI logic should not be classified as AGI by any means (which should go without saying, since AGI does not exist as far as we know).
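To illustrate the difference: classic NPC "AI" is usually just hand-written branching logic like the toy example below, with no trained model anywhere (purely illustrative):

```python
# Toy example of classic game "AI": a hand-written decision rule for a boss.
# No machine learning involved, just scripted branching logic.
def boss_next_action(health: float, player_distance: float) -> str:
    if health < 0.25:
        return "enrage"          # scripted phase change, not learned behaviour
    if player_distance < 2.0:
        return "melee_attack"
    if player_distance < 10.0:
        return "fireball"
    return "patrol"

print(boss_next_action(health=0.8, player_distance=5.0))  # "fireball"
```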

QuadratureSurfer ,

OK, first off, I'm a big fan of learning where new expressions come from and what they mean (how they came about, etc.). Could you please explain this one?:

well, you dance and jump over the fire in the bank's vault.

And back to the original topic:

It isn't resource efficient, simple as that.

It's not that simple at all and it all depends on your use case for whatever model you're talking about:

For example I could spend hours working in Photoshop to create some image that I can use as my Avatar on a website.
Or I can take a few minutes generating a bunch of images through Stable Diffusion and then pick out one I like. Not only have I saved time in this task, but I have used less electricity.

In another example, I could spend time/electricity watching a video over and over again, trying to translate what someone said from one language to another, or I could use Whisper to translate and transcribe what was said in a matter of seconds.

On the other hand, there are absolutely use cases where using some ML model is incredibly wasteful.
Take, for example, a rain sensor on your car.
Now, you could set up some AI model with a camera and computer vision to detect when to turn on your windshield wipers.
But why do that when you could use a little sensor that shines a small laser at the windshield and activates the wipers when it detects a change in how much of that light is reflected back?
The dedicated sensor with its low-power laser will use far less energy and be way more efficient for this use case.

Cheers on you if you found where to put it to work as I haven't and grown irritated over seeing this buzzword everywhere.

Makes sense; so many companies are jumping on this as a buzzword when they really need to stop and think about whether it's necessary to implement in the first place.
Personally, I have found them great as an assistant for programming, as well as for brainstorming ideas, or at least for pointing me in a good direction when I'm looking into something new. I treat their output as if someone were trying to remember something off the top of their head: anything coming from an LLM should be double-checked and verified before committing to it.

And I absolutely agree with your final paragraph, that's why I typically use my own local models running on my own hardware for coding/image generation/translation/transcription/etc. There are a lot of open source models out there that anyone can retrain for more specific tasks. And we need to be careful because these larger corporations are trying to stifle that kind of competition with their lobbying efforts.

QuadratureSurfer ,

I gave up on ChatGPT for help with coding.

But a local model that's been fine-tuned for coding? Perfection.

It's not that you use the LLM to do everything, but it's excellent for pseudocode. You can quickly get a useful response back to most of the same questions you would search for on Stack Overflow (but tailored to your own code). It's also useful when you're delving into a newer programming language and trying to port over some code, or looking at different ways of achieving the same result.

It's just another tool in your belt, nothing that we should rely on to do everything.

QuadratureSurfer ,

You know what's ironic? We're all communicating on a decentralized network, which is inefficient compared to a centralized one.

I'm sure we could nitpick and argue over what's the most efficient solution for every little thing, but at the end of the day we need to see if the pros outweigh the cons.

QuadratureSurfer ,

NAND - one of the 2 you listed, or they give up.

QuadratureSurfer , (edited )

You also have to keep in mind that the more you compress something, the more processing power you're going to need.

Whatever compression algorithm is proposed will also need to be able to handle the data in real time and at low power.

But you are correct that compression beyond 200x is absolutely achievable.

A more visual example of compression could be something like one of the Stable Diffusion AI/ML models. The model may only be a few gigabytes, but you could generate an insane number of images that go well beyond that initial model size. And as long as someone else is using the same model/input/seed, they can generate the exact same image.
So instead of having to transmit the entire 4K image itself, you just have to tell them the prompt, along with a few variables (the seed, the CFG scale, the number of steps, etc.), and they can generate the same 4K image on their own machine, looking exactly like the one you generated on yours.

So basically, for only a little over a kilobyte, you can convey 20+ MB worth of data this way. The drawback is that you need a powerful computer and a lot of energy to regenerate those images, which brings us back to the problem of conveying this data in real time while using low power.

Edit:

Some quick napkin math:

For transmitting the information to generate that image, you would need about 1KB to allow for 1k characters in the prompt (if you really even need that),
then about 2 bytes for the height,
2 for the width,
8 bytes for the seed,
less than a byte for the CFG and the Steps (but we'll just round up to 2 bytes).
Then, you would want something better than just a parity bit for ensuring the message is transmitted correctly, so let's throw on a 32 or 64 byte hash at the end...
That still only puts us a little over 1 KB (1,078 bytes)...
So for generating a 4K image (.PNG file) we get ~24 MB worth of lossless decompression.
That's 24,000,000 bytes, which gives us a compression ratio of roughly 20,000x.
But of course, that's still going to take time to decompress, as well as a decent spike in power consumption for about 30-60+ seconds (depending on hardware), which is far from anything "real-time".
Of course you could also be generating 8K images instead of 4K images... I'm not really stressing this idea to its full potential by any means.

So in the end you get compression by a factor of more than 20,000x using a method like this, but it won't be low power or anywhere near "real-time".
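To make the idea a bit more concrete, here's a rough sketch of the "send the recipe, not the picture" approach using diffusers. The prompt, seed and model are just example values, and it assumes both machines run the exact same model and software so the seed reproduces the same image:

```python
# Sketch: pack the generation "recipe" into a tiny payload, then regenerate the
# identical image on the receiving machine. Prompt/seed/model are examples only.
import json, hashlib
import torch
from diffusers import StableDiffusionPipeline

payload = {
    "prompt": "snow-covered mountain village at dusk, highly detailed",
    "width": 1024, "height": 1024,
    "seed": 1234567890,
    "cfg": 7.5, "steps": 30,
}
wire = json.dumps(payload).encode()
wire += hashlib.sha256(wire).digest()   # 32-byte integrity check
print(len(wire), "bytes on the wire")   # well under 1 KB

# Receiver side: same model + same payload -> same image (assuming identical setup).
pipe = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5").to("cuda")
gen = torch.Generator("cuda").manual_seed(payload["seed"])
image = pipe(
    payload["prompt"], width=payload["width"], height=payload["height"],
    guidance_scale=payload["cfg"], num_inference_steps=payload["steps"],
    generator=gen,
).images[0]
image.save("reconstructed.png")
```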

QuadratureSurfer ,

A job interview! (I wish I was joking).

The reward for developing this miraculous leap forward in technology? A job interview, according to Neuralink employee Bliss Chapman. There is no mention of monetary compensation on the web page.

QuadratureSurfer ,

Sure, but this is just a more visual example of how compression using an ML model can work.

The time you spend reworking the prompt, or tweaking the steps/cfg/etc. is outside of the scope of this example.

And if we're really talking about creating a good pic, it helps to use tools like ControlNet/inpainting/etc., which could still be communicated to the receiving machine, but then you start to lose some of the compression: roughly 1 KB more for every additional time you need to run the model to get the correct picture.

QuadratureSurfer ,

The first thing I said was, "the more you compress something, the more processing power you're going to need [to decompress it]"

I'm not removing the most computationally expensive part by any means and you are misunderstanding the process if you think that.

That's why I specified:

The drawback is that you need a powerful computer and a lot of energy to regenerate those images, which brings us back to the problem of making this data conveyed in real-time while using low-power.

And again

But of course, that's still going to take time to decompress as well as a decent spike in power consumption for about 30-60+ seconds (depending on hardware)

Those 30-60+ second estimates are based on someone using an RTX 4090, the top-end consumer-grade GPU of today. They could speed up the process with multiple GPUs or even enterprise-grade equipment, but that's why I mentioned that this depends on hardware.

So, yes, this very specific example is not practical for Neuralink (I said as much in my original example), but it still works very well for explaining a method that can achieve a compression ratio of over 20,000x.

Yes you need power, energy, and time to generate the original image, and yes you need power, energy, and time to regenerate it on a different computer. But to transmit the information needed to regenerate that image you only need to convey a tiny message.

QuadratureSurfer ,

Shout-out to Archive.org for all the awesome work they do to backup what they can from the internet.

(Especially when some stack overflow answer to a question is just a link to some website that has either changed or no longer exists).

CEO of Google Says It Has No Solution for Its AI Providing Wildly Incorrect Information (futurism.com)

You know how Google's new feature called AI Overviews is prone to spitting out wildly incorrect answers to search queries? In one instance, AI Overviews told a user to use glue on pizza to make sure the cheese won't slide off (pssst...please don't do this.)...

QuadratureSurfer ,

Journalists are also in a panic about LLMs; they feel their jobs are threatened by the technology's potential. This is why (in my opinion) we're seeing a lot of news stories that focus on any imperfections that can be found in LLMs.

QuadratureSurfer ,

What I mean is that journalists feel threatened by it in some way (whether I use the word "potential" here or not is mostly irrelevant).

In the end this is just a theory, but it makes sense to me.

I absolutely agree that management has greatly misunderstood how LLMs should be used. They should be used as a tool, but treated like an intern who's speaking out loud without citing any sources. All of their statements and work should be double checked.

QuadratureSurfer ,

Technically, generative AI will always give the same answer when given the same input. But what happens in practice is that a "seed" is mixed in to help randomize things, so that it can give different answers every time even if you ask it the same question.
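A toy illustration of that seed mechanism (the "model" here is just a fixed set of fake logits, so this is only meant to show where the randomness comes from):

```python
# Toy example: the "randomness" in generative AI comes from a seeded sampler.
# Same seed in -> exactly the same samples out.
import torch

fake_logits = torch.tensor([2.0, 1.0, 0.5, 0.1])  # stand-in for a model's output

def sample(seed: int, n: int = 8):
    gen = torch.Generator().manual_seed(seed)
    probs = torch.softmax(fake_logits, dim=-1)
    return torch.multinomial(probs, n, replacement=True, generator=gen).tolist()

print(sample(42))    # some sequence of indices
print(sample(42))    # exactly the same sequence again
print(sample(1234))  # a different sequence
```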

QuadratureSurfer ,

They still are. Giving a generative AI the same input and the same seed results in the same output every time.

QuadratureSurfer ,

OK, but we're discussing whether computers are "reliable, predictable, idempotent". Statements like this about computers are generally made when discussing the internal workings of a computer among developers or at even lower levels among computer engineers and such.

This isn't something you would say at a higher level for end-users because there are any number of reasons why an application can spit out different outputs even when seemingly given the "same input".

And while I could point out that Llama.cpp is open source (so you could just go in and test this by forcing the same seed every time...) it doesn't matter because your statement effectively boils down to something like this:

"I clicked the button (input) for the random number generator and got a different number (output) every time, thus computers are not reliable or predictable!"

If you wanted to make a better argument about computers not always being reliable/predictable, you're better off pointing at how radiation can flip bits in our electronics (which is one reason why we have implemented checksums and other tools to verify that information hasn't been altered over time or in transit). Take, for instance, what happened to some voting machines in Belgium in 2003:
https://www.businessinsider.com/cosmic-rays-harm-computers-smartphones-2019-7
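(For anyone unfamiliar with the checksum idea mentioned above, it's basically this, in miniature; the filename is just an example:)

```python
# Store a hash alongside the data and recompute it later: if a stray bit flip
# (cosmic ray or otherwise) altered the file, the hashes no longer match.
import hashlib

def sha256_of(path: str) -> str:
    with open(path, "rb") as f:
        return hashlib.sha256(f.read()).hexdigest()

recorded = sha256_of("ballots.db")   # example file; hash saved at write time
# ... later ...
if sha256_of("ballots.db") != recorded:
    print("Data no longer matches its checksum; it was altered or corrupted.")
```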

Anyway, thanks if you read this far, I enjoy discussing things like this.

QuadratureSurfer ,

If you think that "pretty much everything AI is a scam", then you're either setting your expectations way too high, or you're only looking at startups trying to get the attention of investors.

There are plenty of AI models out there today that are open source and can be used for a number of purposes: generating images (Stable Diffusion), transcribing audio (Whisper), audio generation, object detection, upscaling, downscaling, etc.

Part of the problem might be with how you define AI... it's a much broader term than what I think you're trying to convey.

QuadratureSurfer ,

Sure, but don't let that feed into the sentiment that AI = scams. It's way too broad of a term that covers a ton of different applications (that already work) to be used in that way.

And there are plenty of popular commercial AI products out there that work as well, so trying to say that "pretty much everything that's commercial AI is a scam" is also inaccurate.

We have:
Suno's music generation
NVIDIA's upscaling
Midjourney's image generation
OpenAI's ChatGPT
Etc.

So instead of trying to tear down everything and anything "AI", we should probably just point out that startups using a lot of buzzwords (like "AI") should be treated with a healthy dose of skepticism, until they can prove their product in a live environment.

QuadratureSurfer ,

Looks like a separate element that comes after the LLM summary which can be removed by ad blockers. That is, if you're still using Google search...

QuadratureSurfer ,

Actually, if this is the requirement, then this means our data isn't leaving the device at all (for this purpose) since everything is being run locally.

QuadratureSurfer ,

Since everything is being run in a local LLM, most likely this will be some extra RAM usage rather than SSD usage, but that is assuming that they aren't saving these images to file anywhere.

QuadratureSurfer ,

The whole thing is going to be run on a local LLM.
They don't have to upload that data anywhere for this to work (it will work offline). But considering what they already do, Microsoft is going to have to do a lot to prove that they aren't doing this.

QuadratureSurfer ,

Very true... what I meant to say was:
[...] then this means our data shouldn't need to leave the device at all [...]

QuadratureSurfer ,

Videography
Photography
Downloading Machine Learning Models
Data for Training ML Models
Training ML Models
Gaming (the games themselves or saving replays)
Backing up movies/videos/images etc.
Backing up music
NAS

Take your pick, feel free to mix and match or add on to the list.

QuadratureSurfer ,

I agree, but it's one thing if I post to public places like Lemmy or Reddit and it gets scraped.

It's another thing if my private DMs or private channels are being scraped and put into a database that will most likely get outsourced for prepping the data for training.

Not only that, but the trained model will have internal knowledge of things that are sure to give anxiety to any cybersecurity expert. If users know how to manipulate the AI model, they could cause it to divulge some of that information.

QuadratureSurfer ,

Feel free to educate us instead of just saying the equivalent of "you're wrong and I hate reading comments like yours".

But I think, in general, the alteration to Section 230 that they are proposing makes sense as a way to keep these companies in check for practices like shadowbanning, especially if those tools are abused for political purposes.

QuadratureSurfer ,

A very useful video that explains what Quantum Internet is... and what it isn't:

https://www.youtube.com/watch?v=u-j8nGvYMA8

TL/DW: A big misconception here has to do with quantum entanglement. Entanglement in the quantum internet doesn't mean that you can transfer data at speeds faster than light.

It's true that this kind of connection would be "ultra secure", but it would be very inefficient (slow) and unreliable in a noisy environment. It would probably be most useful for some sort of authentication protocol/key sharing.

QuadratureSurfer ,

Relevant video for explaining quantum internet as well as clearing up some misconceptions about what quantum internet can and can't do:

https://www.youtube.com/watch?v=u-j8nGvYMA8

QuadratureSurfer ,

Are you saying "No... let's not advance mathematics"?
Or... "No, let's not advance mathematics using AI"?

QuadratureSurfer ,

Just wait till someone creates a manically depressed chatbot and names it Marvin.

QuadratureSurfer ,

This would actually explain a lot of the negative AI sentiment I've seen that's suddenly going around.

Some YouTubers have hopped on the bandwagon as well. There was a video posted the other day where a guy attempted to discredit AI companies overall by saying their technology is faked. A lot of users were agreeing with him.

He then proceeded to point out stories about how Copilot/ChatGPT output information that was very similar to a particular travel website.
He also pointed out how Amazon Fresh stores required a large number of outsourced workers to verify shopping cart totals (implying that there was no AI model at all and not understanding that you need workers like this to actually retrain/fine-tune a model).

QuadratureSurfer ,

I don't think that "fake" is the correct term here. I agree a very large portion of companies are just running API calls to ChatGPT and then patting themselves on the back for being "powered by AI" or some other nonsense.

Amazon even has an entire business to help companies pretend their AI works by crowdsourcing cheap labor to review data.

This is exactly the point I was referring to before. Just because Amazon is crowdsourcing cheap labor to back up their AI doesn't mean that the AI is "fake".
Getting an AI model to work well takes a lot of man-hours of continual training and improvement, as well as making sure that it is performing well.

Amazon was doing something new (with their shopping cart AI) that no model had been trained on before. Training off of demo/test data doesn't get you the kind of data that you get when you actually put it into a real world environment.

In the end it looks like there are additional advancements needed before a model like this can be reliable, but even then someone should be asking if AI is really necessary for something like this when there are more reliable methods available.

QuadratureSurfer ,

After reading through that wiki, that doesn't sound like the sort of thing that would work well for what AI is actually able to do in real-time today.

Contrary to your statement, Amazon isn't selling this as a means to "pretend" to do AI work, and there's no evidence of this on the page you linked.

That's not to say that this couldn't be used to fake an AI; it's just not sold that way, and in many applications it wouldn't be able to compete with already existing ML models.

Can you link to any examples of companies making wild claims about their product where it's suspected that they are using this service?
(I couldn't find any after a quick Google search... but I didn't spend too much time on it).

I'm wondering if the misunderstanding here comes from the sections related to AI work. The kind of AI work you would do with Turkers is the work necessary to prepare data for training a machine learning model: things like labelling images, transcribing words from images, or (to put it in a way most of us have already experienced) solving captchas that ask you to find the traffic lights (so you can help train a self-driving car model).

QuadratureSurfer ,

But what app did you use to access OSM and download the maps for offline use... was it a web browser? OsmAnd? Vespucci?

QuadratureSurfer ,

Do you have a source for those scientists you're referring to?

I know that LLMs can be trained on data output by other LLMs, but you're basically diluting your results unless you do a lot of work to clean up the data.

I wouldn't say it's "impossible" to determine if content was generated by an LLM, but I agree that it will not be reliable.

Hello GPT-4o (openai.com)

GPT-4o (“o” for “omni”) is a step towards much more natural human-computer interaction—it accepts as input any combination of text, audio, and image and generates any combination of text, audio, and image outputs. It can respond to audio inputs in as little as 232 milliseconds, with an average of 320 milliseconds,...

QuadratureSurfer ,

The demo showcasing integration with BeMyEyes looks like an interesting way to help those who are blind.

https://vimeo.com/945587840

QuadratureSurfer ,

So raytracing will be supported in iPad apps now...

So far the M4 seems to have only been announced for the iPad.

QuadratureSurfer ,

It's worth pointing out that once Pokémon Go players found out about OSM, we saw a massive increase in new users as well as in contributions to OSM, so that the maps would better reflect the areas they played in.

https://www.researchgate.net/publication/334378297_How_an_augmented_reality_game_Pokemon_GO_affected_volunteer_contributions_to_OpenStreetMap

Unfortunately there are always a few that will try to game any system. In this case they're essentially vandalizing OSM for their own selfish reasons.

QuadratureSurfer ,

I get that Louis is against Sponsorblock and his personal feelings and morals influence the direction of the software too.

Louis may be against SponsorBlock, but SponsorBlock is supported in Grayjay, so at least he's not letting his personal feelings get too much in the way of what his userbase wants.

I hope Louis does well in case they go up against Google. I just hope they get a good judge that has a decent understanding of how the tech works and how a decision one way or another will really affect everything.

QuadratureSurfer ,

Right? I still have my brick and it still works fine... even after seeing how high I could throw it.

QuadratureSurfer ,

Tried to RMA a motherboard with Gigabyte and they will find any excuse to void the warranty.
