
yarr ,

This is a quote that should end in 'yet'. I am very confident in saying there will be an AAA game released that is designed and implemented 95%+ by a machine. I am less confident in providing a timeline. If you consider that machine learning is ~70 years old (in one sense; one can argue other dates) and you plot the advances from tic-tac-toe to what machines can do today (chess being a prime example), it doesn't take much vision to see that it's only a matter of time before this is a real thing.

FiniteBanjo ,

lmfao

This kid is not going places.

Cyyy ,

you're right, you won't.

Trollception ,

Sure, it may produce a game, but much of what makes a game good is making it fun and memorable. If we can eventually create a general AI, then absolutely, I think such a thing is possible. Otherwise it will be a copy-paste mishmash, and a cohesive, fluent design is a huge if.

hamid ,
@hamid@lemmy.world avatar

This was extremely ambitious. I am not a C# developer, but I was able to get an LLM to build a really simple app for me that takes input and fills out a table in an Azure storage account, then uses an Azure Function to update a document with the information to add to a firewall threat feed for a demo. It took me 3 hours, where learning and doing it the old-fashioned way would have taken me a few days. Cool stuff, but not magic; it was like working with a really smart idiot.

PiratePanPan ,
@PiratePanPan@lemmy.dbzer0.com avatar

Using AI to automate super tedious and repetitive tasks is great and everybody should start doing it

Pilferjinx ,

Yeah, current-gen AI is still very much a human tool - an assistant - maybe a companion if you stretch it to its edge. I for one welcome a personal AI buddy.

SpaceCowboy ,
@SpaceCowboy@lemmy.ca avatar

Yeah, there are many times I type "class for:" followed by a dump of SQL, JSON, XML, or whatever, and it'll make a class with correctly named properties of the right types. I still have to figure out tricky data relationships and that sort of thing, but the boring tasks of creating interfaces to databases and objects for serializing stuff go a lot faster now.

So a much larger percentage of my time is devoted to solving problems rather than doing all the boring grunt work usually involved with getting data in and out of the app.
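The "class for:" workflow described above can be sketched in a few lines. This is an illustration in Python, not the commenter's actual code; the JSON payload and the `Product` class are hypothetical examples:

```python
import json
from dataclasses import dataclass

# A hypothetical JSON dump of the kind you might paste after "class for:"
payload = '{"id": 42, "name": "Widget", "price": 9.99, "in_stock": true}'

# The boilerplate an LLM typically emits: a typed container whose
# properties are named and typed to match the fields in the dump.
@dataclass
class Product:
    id: int
    name: str
    price: float
    in_stock: bool

product = Product(**json.loads(payload))
print(product)  # Product(id=42, name='Widget', price=9.99, in_stock=True)
```

Trivial by itself, but multiplied across dozens of tables and API payloads it is exactly the grunt work being described.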

pythonoob ,

God I must be an idiot. I was trying to design my own db for simple flash cards with multiple tags, where tags can be stacked with each other to auto build flash card decks. Even with chatgpt helping me build out the interfaces and some functions I was not getting the functionality I wanted at all.
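For what it's worth, the tag-stacking part can be modeled with a plain many-to-many table. Here is a minimal sketch under my own assumptions (SQLite via Python; all table, column, and tag names are hypothetical, not from the comment above):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE cards (id INTEGER PRIMARY KEY, front TEXT, back TEXT);
CREATE TABLE tags  (id INTEGER PRIMARY KEY, name TEXT UNIQUE);
CREATE TABLE card_tags (              -- many-to-many: a card can hold several tags
    card_id INTEGER REFERENCES cards(id),
    tag_id  INTEGER REFERENCES tags(id),
    PRIMARY KEY (card_id, tag_id)
);
INSERT INTO cards VALUES (1, 'hola', 'hello'), (2, 'perro', 'dog');
INSERT INTO tags  VALUES (1, 'spanish'), (2, 'animals');
INSERT INTO card_tags VALUES (1, 1), (2, 1), (2, 2);
""")

def build_deck(tag_names):
    """Auto-build a deck: cards carrying ALL of the 'stacked' tags."""
    marks = ",".join("?" * len(tag_names))
    return con.execute(f"""
        SELECT c.id, c.front, c.back
        FROM cards c
        JOIN card_tags ct ON ct.card_id = c.id
        JOIN tags t       ON t.id = ct.tag_id
        WHERE t.name IN ({marks})
        GROUP BY c.id
        HAVING COUNT(DISTINCT t.name) = ?
    """, (*tag_names, len(tag_names))).fetchall()

print(build_deck(["spanish", "animals"]))  # [(2, 'perro', 'dog')]
```

The `HAVING COUNT(DISTINCT ...)` trick is what makes stacked tags act as an intersection rather than a union, which is the piece an LLM will often get subtly wrong.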

guacupado ,

Too many people see AI doing work as an either-or thing. AI won't replace people outright; it'll just reduce the number of people you need.

Trollception ,

Which in turn replaces people. What happens if a person is 50% more productive with AI? Is the company going to let them simply carry a lighter workload than before, or will it lay off the now-unneeded employees?

AnxiousOtter ,

Listen, you're not middle-managing hard enough. If AI tools can improve worker productivity by 50%, then you fire 75% of staff and overwork the remaining 25% while threatening to replace them with AI if they don't suck it up.

If in the future you're ever in doubt about how to tackle a problem, just think to yourself "what would an idiot with an MBA do?" and you'll be set.

systemglitch ,

I look forward to the day it can make a fully functioning game. The best games will mostly be AI created eventually

zalgotext ,

I respect your opinion, but it's one of the stupidest I've ever heard

systemglitch ,

You are confidently incorrect and I can respect that.

pup_atlas ,

No, he’s right.

stratoscaster ,

The reason that your favorite games are your favorite is because they aren't soulless cash grabs. They're made by people with imagination, passion, and ingenuity. AI simply can't create something brand new from existing parts, it can only give it a fresh coat of paint.

Furthermore, AI will always work like this, because that's how the models are trained. I don't think we'll have a model that learns to create on its own within any of our lifetimes, if ever.

andrewrgross ,
@andrewrgross@slrpnk.net avatar

I don't doubt that AI tools can be used to make great games, but I think part of the reason so many people disagree with you is because:

  1. You claim "The best games will mostly be AI created eventually", and I think most people question on what basis you think AI will produce overall better quality. If you said that it's faster, or that it can allow indie studios to compete with AAA, that would make sense. Attributing quality to it -- at this stage -- seems odd.
  2. It's unlikely, imo, that the best games will be created by AI as opposed to with AI.

I think using AI throughout the process so that one person can achieve the productivity of a whole team is a credible vision. But to say that games will be created "by AI" implies that a generative AI engine will generate the code for a complete game de novo. I think that's already possible, but it will be very, very hard for such a system to innovate newer games, because currently these tools rely on replicating features from their training data, so their ability to create quests that fit a new genre, or to generate dialogue that is funny in the context of the story, is going to be very impaired.

By and large, I think current evidence shows that Human-AI cooperation almost always improves upon AI performance alone, and this is particularly the case when creating things for humans to enjoy.

kromem ,

When it comes to AI there's a lot of people that are confidently incorrect, particularly on Lemmy.

But as for your original thesis - I'd counter that it's hybrid development and efforts that will be the biggest hits and most enjoyable to play.

At least until we have good enough classifiers for what gameplay is fun, what writing is engaging, what art direction is interesting and appealing, etc.

That said - it would be a very good time to be in the games telemetry business, as they're sitting on gold whether they are aware of it or not.

betz24 ,

Not yet able to replace talent

Andrenikous ,

They forgot to import the talent module.

johsny ,
@johsny@lemmy.world avatar

Duh.

Nusm ,
@Nusm@yall.theatl.social avatar
andrewrgross ,
@andrewrgross@slrpnk.net avatar

That's a great image.

RIP Norm.

IsThisAnAI ,

Folks really don't understand how AI will work. It's not going to be some big "we're dropping 1,000 people" moment.

It's going to reduce demand over time.

kameecoding ,

And in that regard it's no different from any other productivity tool or automation. I have seen software being bought that immediately eliminated 80-odd jobs.

Pyr_Pressure ,

It will start with going from 5 writers to 3, or going from 10 animators to 6.

Then 10 years from now as it gets more advanced we will be down to maybe 1 writer and 2 animators.

QuaternionsRock ,

going from 10 animators to 6

It’s still crazy to me that like half of Across the Spider-Verse was AI generated

deur ,

Folks really don't understand how AI will work. It's not going to be some big "we're dropping 1,000 people" moment.

dariusj18 ,

I've heard it as "No one is losing their job to AI, but they will lose their jobs to someone who is using AI."

Semi_Hemi_Demigod ,
@Semi_Hemi_Demigod@lemmy.world avatar

Case in point: I'm using ChatGPT to help me write cover letters. I make sure to proofread them and sometimes it hallucinates my expertise, but it makes it a lot faster.

expr ,

Also not going to happen. It's massively overrated.

ObsidianZed ,

I mean that's already happening at some big companies now.

Will it last? My guess is no, but they'll enjoy saving the money that they would have paid human beings in the meantime.

My hope is just that they'll suffer losses due to a drop in product quality and start struggling, but let's face it, the big tech companies are almost never the ones actually hurt by their decisions.

mods_are_assholes ,

If you compare LLM outputs to even just a year ago, you'll realize how stupid your statement is.

smackjack ,

Think of AI like computers and spreadsheet software in the early 80s. I bet a lot of accountants were pretty freaked out about what this new technology was going to mean for their jobs.

Did technology replace those accountants? No, but companies probably didn't need as many accountants as they did before. AI will likely reduce the number of programmers a company needs, but it won't eliminate them.

dariusj18 ,

Really, I think it's kind of the opposite. There are plenty of jobs awaiting higher-skilled labor. Just as Excel didn't hurt accounting, it let many people who weren't trained in accounting take on more tasks than they otherwise could have.

kromem ,

It's going to reduce demand over time.

At least in video games it's probably going to be more that scope increases while headcount stays the same.

If most of your budget is labor, and the cost of the good is fixed, with the number of units sold staying around the same, there's already an equilibrium.

So companies can either (a) reduce headcount to spend a few years making a game comparable to games today when it releases, or (b) keep the same headcount and release a game that reviews well and is what the market will expect in a few years.

So for example, you don't want to reduce the number of writers or voice actors and keep making a game with a handful of main NPCs and a bunch of filler NPCs, when you can keep the same number of writers and actors and extend their efforts to entire cities where every NPC has branching voiced dialogue generated from the writing and performances of that core team.

But you still need massive amounts of human generated content to align the generative AI to the world lore, character tone, style of writing, etc.

Pipelines will change and scope will increase, but the number of people used for a AAA title will largely stay the same and may even slightly grow.

mods_are_assholes ,

But that's not how corporations view it, because the people making the decisions aren't tech people but bean counters.

Some slick but ignorant C-suite gets the bright idea that AI is The Way and makes the call to lay off a bunch of people.

I BET that is what Hasbro is thinking for DnD, and I am absolutely certain some of their recent content is AI, and that's why they canned most of the real people involved.

IsThisAnAI ,

You've just said all corporations view it this way. Think about what you said. You have taken articles from a few businesses and applied that across the board.

mods_are_assholes ,

[Thread, post or comment was deleted by the moderator]

    turkishdelight ,

    give it a few more years.

    erwan ,

    Just like self driving! In 2010 it was almost there, just needed a few more years...

    realharo ,

    Yes actually (except more than a few years).

    Waymo is already operating a robotaxi service in 3 cities, now they just need to expand and find a way to make it not lose money.

    owen ,

    Sounds like they just need a few more years...

    realharo ,

    Until what? 100% replacement of human-driven cars? Being rolled out for areas covering 50% of the population? Where is the goal line here?

    We are already at the stage of commercial operation, with rides available to the general public - even though only in a few locations.

    Sure, it's far from being everywhere, but why pretend that progress has stalled, when it clearly hasn't?

    owen ,

    My point is that the 'give it a few more years' mantra gets repeated for decades

    Thorny_Insight ,

    Go watch videos of how well FSD v12 performs and you're in for a surprise. Full self-driving sucks until it doesn't. AIDRIVR puts up good content if you want recommendations.

    DAMunzy ,

    We'll have cold fusion in 10 more years‽

    KeenFlame ,

    I really don't think there's more examples of optimistic predictions than there are pessimistic ones.

    The discoveries made in recent years definitely point to an emergent, incredibly useful set of tools, and it would be remiss to pretend they won't eventually replace junior developers in different disciplines. It's just that without juniors there will never be any seniors, and someone needs to babysit those juniors. So what we get is not something that can replace an entire workforce for a long, long while, even if top brass would love that.

    Kissaki ,
    @Kissaki@feddit.de avatar

    I am astonished by an established, commercial website having good structure.

    It looks like a documentation website. Sidebar with clear categories and navigation. I really like it.

    Kissaki ,
    @Kissaki@feddit.de avatar

    The article doesn't say much, so I checked the source for more information. It doesn't say much more, but IMO says it in a much better way: two concise paragraphs in the intro of the sourced financial report:

    An example R&D initiative, sponsored by the Innovation team was Project Ava, where a team, initially from Electric Square Malta, attempted to create a 2D game solely using Gen AI. Over the six-month process, the team shared their findings across the Group, highlighting where Gen AI has the potential to augment the game development process, and where it lags behind. Whilst the project team started small, it identified over 400 tools, evaluating and utilising those with the best potential. Despite this, we ultimately utilised bench resource from seven different game development studios as part of the project, as the tooling was unable to replace talent.

    One of the key learnings was that whilst Gen AI may simplify or accelerate certain processes, the best results and quality needed can only be achieved by experts in their field utilising Gen AI as a new, powerful tool in their creative process. As a research project, the game will not be released to the public, but has been an excellent initiative to rapidly spread tangible learnings across the Group, provide insights to clients and it demonstrates the power and level of cross-studio collaboration that currently exists. Alongside Project Ava, the team is undertaking a range of Gen AI R&D projects, including around 3D assets, to ensure that we are able to provide current insights in an ever-evolving part of the market.


    The central quote and conclusion being:

    One of the key learnings was that whilst Gen AI may simplify or accelerate certain processes, the best results and quality needed can only be achieved by experts in their field utilising Gen AI as a new, powerful tool in their creative process.

    Which is obvious and expected for anyone familiar with the technology. Of course, experiments and confirming expectations have value too. And I'm certain that actually using the tools and finding out which ones they can use where is very useful to them specifically.

    0xD ,

    The overall point may be relatively obvious, but the details are not.

    Which steps of which processes is it good at, and which not? What can be easily integrated into existing tooling? Where is it best skipped completely?

    Kissaki ,
    @Kissaki@feddit.de avatar

    Yes, exactly. That's what I was referring to in my last sentence.

    FiniteBanjo ,

    Honestly it sounds extremely generous by saying the best results can be achieved by experts with GenAI. In my opinion the best results can be achieved without it entirely.

    Damage ,

    "House made entirely of cement is a failure because you still need doors and windows and stuff."

    JackGreenEarth ,
    @JackGreenEarth@lemm.ee avatar

    The game will not be released to the public as it was just a research project, and Keywords didn't provide any additional information about what type of 2D game it created.

    So we just have to trust them on this? Yeah, no.

    Feathercrown ,

    Well, I'm glad somebody did the experiment at least.

    coffinwood ,

    Add ", yet" to the headline and come back in a year or two.

    Currently AI may fail to produce a video game, but the same was true for images, videos, and text only a few years ago.

    Failure is a good thing because it's preceded by attempt.

    JackGreenEarth ,
    @JackGreenEarth@lemm.ee avatar

    Yeah. Just because it can't do it now doesn't mean it won't ever. Also, see my other comment for why this is a bad study: they didn't even provide any details on the game itself, let alone release it. But anyone can run a similar experiment at home, since AI is free to use!

    WallEx ,

    Huh, would you look at that.

    Rentlar ,

    "Replacing talent" is not what AI is meant for, yet it seems to be every penny-pinching, bean-counting studio's long-term goal for it.

    darthsid ,

    Yep AI at best can supplement talent, not replace it.

    9488fcea02a9 ,

    I'm not a developer, but I use AI tools at work (mostly LLMs).

    You need to treat AI like a junior intern. You give it a task, but you still need to check the output and use critical thinking. You can't just take some work from an intern, blindly incorporate it into your presentation, and then blame the intern if the work is shoddy.

    AI should be a time saver for certain tasks. It cannot (currently) replace a good worker.

    Gradually_Adjusting ,
    @Gradually_Adjusting@lemmy.ca avatar

    It's clutch for boring emails and tedious document summaries. Sometimes I get a day's work done in 4 hours.

    Automation can be great, when it comes from the bottom-up.

    isles ,

    Honestly, that's been my favorite - bringing in automation tech to help me in low-tech industries (almost all corporate-type office jobs). When I started my current role, I was working consistently 50 hours a week. I slowly automated almost all the processes and now usually work about 2-3 hours a day with the same outputs. The trick is to not increase outputs or that becomes the new baseline expectation.

    fidodo ,

    I am a developer and that's exactly how I see it too. I think AI will be able to write PRs for simple stories, but it will need a human to review them and give approval or feedback, or to manually intervene and tweak the output.

    Lmaydev ,

    As a developer I use it mainly for learning.

    What used to be a Google search followed by skimming a few articles or docs pages is now a question.

    It pulls the specific info I need, sources it and allows follow up questions.

    I've noticed the new juniors can get up to speed on new tech very quickly nowadays.

    As for code I don't trust it beyond snippets I can use as a base.

    FiniteBanjo , (edited )

    JFC they've certainly got the unethical shills out in full force today. Language Models do not and will never amount to proper human work. It's almost always a net negative everywhere it is used, final products considered.

    Lmaydev ,

    Then you're using it wrong.

    FiniteBanjo ,

    Its intended use is to replace human work in exchange for lower accuracy. There is no ethical use case scenario.

    Lmaydev ,

    It's intended to showcase its ability to generate text. How people use it is up to them.

    As I said it's great for learning as it's very accurate when summarising articles / docs. It even sources it so you can read up more if needed.

    FiniteBanjo ,

    It's been known to claim commands and documentation exist when they don't. It very commonly gets simple addition wrong.

    Lmaydev ,

    That's because it's a language processor not a calculator. As I said you're using it wrong.

    FiniteBanjo ,

    So the correct usage is to have documents incorrectly explained to you? I fail to see how that does any good.

    Lmaydev ,

    I know you do buddy.

    altima_neo ,
    @altima_neo@lemmy.zip avatar

    Not even that; it's a tool, the same way Photoshop or 3ds Max are tools. You still need the talent to use the tools.

    Thorny_Insight ,

    Current AI*

    I don't see any reason to expect this to be the case indefinitely. It has been getting better all the time, and lately at quite a rapid pace. In my view it's just a matter of time until it surpasses human capabilities; it can already do so in specific narrow fields. Once we reach AGI, all bets are off.

    thundermoose ,

    Maybe this comment will age poorly, but I think AGI is a long way off. LLMs are a dead-end, IMO. They are easy to improve with the tech we have today and they can be very useful, so there's a ton of hype around them. They're also easy to build tools around, so everyone in tech is trying to get their piece of AI now.

    However, LLMs are chat interfaces to searching a large dataset, and that's about it. Even the image generators are doing this, the dataset just happens to be visual. All of the results you get from a prompt are just queries into that data, even when you get a result that makes it seem intelligent. The model is finding a best-fit response based on billions of parameters, like a hyperdimensional regression analysis. In other words, it's pattern-matching.

    A lot of people will say that's intelligence, but it's different; the LLM isn't capable of understanding anything new, it can only generate a response from something in its training set. More parameters, better training, and larger context windows just refine the search results, they don't make the LLM smarter.

    AGI needs something new, we aren't going to get there with any of the approaches used today. RemindMe! 5 years to see if this aged like wine or milk.
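The "best-fit pattern-matching" point above is easiest to see in a toy model. The bigram generator below is a deliberately extreme simplification of my own (not how transformer LLMs are actually built): it can only ever emit word transitions that appeared in its training data, which is the limitation being described, taken to its logical end:

```python
import random
from collections import defaultdict

training = "the cat sat on the mat and the dog sat on the rug".split()

# Record every observed word -> next-word transition
model = defaultdict(list)
for prev, nxt in zip(training, training[1:]):
    model[prev].append(nxt)

def generate(start, n_words, seed=0):
    random.seed(seed)
    out = [start]
    for _ in range(n_words):
        options = model.get(out[-1])
        if not options:  # nothing in the training set follows this word
            break
        out.append(random.choice(options))
    return " ".join(out)

print(generate("the", 8))  # every adjacent word pair was seen in training
```

Real LLMs generalize far better than this, but the structural point stands: the output space is shaped entirely by the training distribution.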

    Thorny_Insight ,

    Yeah, LLMs might very well be a dead end when it comes to AGI, but just like ChatGPT seemingly came out of nowhere and took the world by surprise, that might just as well be the case with an actual AGI. My comment doesn't really make any claims about the timescale; it just tries to point out the inevitability of it.

    KeenFlame ,

    How does this amazing prediction-engine discovery, which basically works like our brain does, not fit into a larger solution?

    The way emergent world simulation can be found in the larger models definitely points to this being a cornerstone, as it provides functional value in both image and text recall.

    Never mind that tools like MemGPT don't fully satisfy long-term memory, and context windows don't properly satisfy attention functions; I need a much harder sell on LLM technology not proving to be an important piece of AGI.

    thundermoose ,

    I didn't say it wasn't amazing, nor that it couldn't be a component in a larger solution, but I don't think LLMs work like our brains, and I think the current trend of more tokens/parameters/training for LLMs is a dead end. They're simulating the language area of human brains, sure, but there's no reasoning or understanding in an LLM.

    In most cases, the responses from well-trained models are great, but you can pretty easily see the cracks when you spend extended time with them on a topic. You'll start to get oddly inconsistent answers the longer the conversation goes and the more branches you take. The best-fit line (a crude metaphor, but I don't think it's wrong) starts fitting less and less well until the conversation completely falls apart. That's generally called "hallucination," but I'm not a fan of that term because it implies a lot about the model that isn't really true.

    You may have already read this, but if you haven't: Stephen Wolfram wrote a great overview of how GPT works that isn't too technical. There's also a great sci-fi novel from 2006 called Blindsight that explores how facsimiles of intelligence can exist without consciousness or even understanding, and I've found it a really interesting way to think about LLMs.

    It's possible to build a really good Chinese room that can pass the Turing test, and I think LLMs are exactly that. More tokens/parameters/training aren't going to change that, they'll just make them better Chinese rooms.

    KeenFlame ,

    Thanks, I'll check those out. The entire point of your comment was that llm is a dead end. The branching as you call it is just more parameters which approach, in lower token models a collapse. Which is why more tokens and larger context does improve accuracy and why it does make sense to increase them. LLMs have also proven to in some cases have what you call reason and what many call reason but which is not a good word for the error. Larger models provide a way to stimulate the world which in turn gives us access to the sensing mechanism of our brain, which is to stimulate and then attend to disparages between the simulation and actual. This in turn gives access to action which unfortunately is not very well understood. Simulation, or prediction, is what our brains constantly do to be able to react and adapt to the world without massive timing failure and massive energy cost, for instance consider driving where you focus on unusual sensing and let action be an extension of purpose by just allowing constant prediction to happen where your muscles have already prepared to commit even precise movements due to enough practice with your "model" of how wheel and foot apply to the vehicle.

    KeenFlame ,

    *Simulate, not stimulate lol

    assassinatedbyCIA ,

    The problem is that the crazy valuations of AI companies are based on it replacing talent, and soon. Supplementing talent is far less exciting and far less profitable.

    Defaced ,

    https://www.cognition-labs.com/introducing-devin There are people out there deliberately working to make that vision a reality. Replacing software engineers is the entire point of Devin AI.

    time_fo_that ,

    I saw this the other day and I'm like, well fuck, might as well go to trade school before it gets saturated like what happened with tech in the last couple of years.

    Defaced ,

    Yeah, the sad thing about Devin AI is that they're clearly doing it for the money; they have absolutely no intention of bettering humanity, they just want to build this up and sell it off for that fat entrepreneur paycheck. If they really cared about bettering humanity they would open it up to everyone, but they're only accepting inquiries from businesses.

    brbposting ,

    One single comment when I posted this on the technology community:

    https://sh.itjust.works/pictrs/image/7efcbb97-4ec0-4b27-8cc2-41954229404f.png

    Rentlar ,

    I do think that, given time, AI can improve to the level where it can do nearly all of the same things junior-level people in many different sectors can.

    The problem, and the unfortunate thing for companies that I foresee, is that it can't turn juniors into seniors if the AI "replaces" juniors. That means companies will run out of seniors as people retire, or will have to pay piles and piles of cash just to hire the few non-AI people left with industry knowledge to babysit the AIs.

    Pyr_Pressure ,

    It's very short sighted, but capitalism doesn't reward long term thinking.

    gravitas_deficiency ,
    sed "s/studio's/tech industry c-suite's/"

    As an engineer, the amount of non-engineering idiots in tech corporate leadership trying to apply inappropriate technical solutions to something because it became a buzzword is just absurdly high.

    Ragnarok314159 ,

    Just make the modulus of elasticity more agile. Problem solved!

    ZILtoid1991 ,

    But that's pretty much why AI is developed.

    KeenFlame ,

    It was more like a scientific discovery

    FiniteBanjo ,

    Not really, no. All of the current models built to scale are being sold as products, especially by OpenAI, Microsoft, and Google. It was built with a purpose, and that purpose was to potentially replace expensive human assets.

    KeenFlame ,

    Yes, it was. Like all scientific discoveries several corporations started building proprietary products. You are wrong that it was built with that purpose.
