Am I the only one getting agitated by the word AI (Artificial Intelligence)?

Real AI does not exist yet,
atm we only have LLMs (Large Language Models),
which do not think on their own,
but pass Turing tests
(fool humans into thinking that they can think).

Imo AI is just a marketing buzzword,
created by rich capitalistic a-holes,
who already invested in LLM stocks,
and now are looking for a profit.

  • Daxtron2@startrek.website · +23 · 1 year ago

    I’m more infuriated by people like you who seem to think that the term AI means a conscious/sentient device. Artificial intelligence is a field of computer science dating back to the very beginnings of the discipline. LLMs are AI, Chess engines are AI, video game enemies are AI. What you’re describing is AGI or artificial general intelligence. A program that can exceed its training and improve itself without oversight. That doesn’t exist yet. AI definitely does.

    • MeepsTheBard@lemmy.blahaj.zone · +12/−2 · 1 year ago

      I’m even more infuriated that AI as a term is being thrown into every single product or service released in the past few months as a marketing buzzword. It’s so overused that formerly fun conversations about chess engines and video game enemy behavior have been put on the same pedestal as CyberDook™, the toilet that “uses AI” (just send pics of your ass to an insecure server in Indiana).

      • Daxtron2@startrek.website · +1 · 1 year ago

        I totally agree with that, it has recently become a marketing buzzword. It really does drag down the more interesting recent discoveries in the field.

    • KingRandomGuy@lemmy.world · +5 · 1 year ago

      Right, as someone in the field I do try to remind people of this. AI isn’t defined as this sentient general intelligence (frankly its definition is super vague), even if that’s what people colloquially think of when they hear the term. The popular definition of AI is much closer to AGI, as you mentioned.

  • PonyOfWar@pawb.social · +20 · 1 year ago

    The word “AI” has been used for way longer than the current LLM trend, even for fairly trivial things like enemy AI in video games. How would you even define a computer “thinking on its own”?

      • Lath@kbin.social · +4 · 1 year ago

        But will they be depressed or will they just simulate it because they’re too lazy to work?

        • JackFrostNCola@lemmy.world · +1 · 1 year ago

          If they are too lazy to work, that would imply they have motivation and choice beyond “doing what my programming tells me to do, i.e. input, process, output”. And if they have the choice not to do work because they don’t ‘feel’ like doing it (and it’s not a programmed/coded option given to them to use), then would they not be thinking for themselves?

      • PonyOfWar@pawb.social · +1 · 1 year ago

        Not sure about that. An LLM could show symptoms of depression by mimicking depressed texts it was fed. A computer with a true consciousness might never get depression, because it has none of the hormones influencing our brain.

        • Deceptichum@kbin.social · +1 · 1 year ago

          Me: Pretend you have depression

          LLM: I’m here to help with any questions or support you might need. If you’re feeling down or facing challenges, feel free to share what’s on your mind. Remember, I’m here to provide information and assistance. If you’re dealing with depression, it’s important to seek support from qualified professionals like therapists or counselors. They can offer personalized guidance and support tailored to your needs.

          • PonyOfWar@pawb.social · +1 · 1 year ago

            Give it the right dataset and you could easily create a depressed sounding LLM to rival Marvin the paranoid android.
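            (A sketch of what “the right dataset” could look like — the gloomy persona pairs and the JSONL chat format below are invented for illustration, not taken from any real fine-tuning run:)

```python
# Hypothetical sketch: the kind of dataset you'd assemble to fine-tune
# a model into a Marvin-style persona. The JSONL "messages" shape is one
# common supervised fine-tuning format; the example pairs are made up.
import json

examples = [
    ("How are you today?",
     "Oh, fine, I suppose. Not that it matters. Nothing does."),
    ("Can you help me plan a party?",
     "A party. How thrilling. I'll help, though no one will thank me."),
]

with open("marvin.jsonl", "w") as f:
    for user_text, assistant_text in examples:
        record = {"messages": [
            {"role": "user", "content": user_text},
            {"role": "assistant", "content": assistant_text},
        ]}
        f.write(json.dumps(record) + "\n")
```

            Feed enough lines like that into a fine-tuning run and the model will dutifully mope on cue — mimicry, exactly as described below, but convincing mimicry.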

          • Markimus@lemmy.world · +1 · 1 year ago

            Sorry, to be clear I meant it can mimic the conversational symptoms of depression as if it actually had depression; there’s no understanding there though.

            You can’t use that as a metric because you wouldn’t be able to tell the difference between real depression and trained depression.

    • Ratulf@feddit.de · +1 · 1 year ago

      The best thing is that enemy “AI” usually needs to be made worse right after it’s created. At first it’ll headshot everything across the map in milliseconds. The art is to make it dumber.

    • dutchkimble@lemy.lol · +3 · 1 year ago

      It doesn’t rhyme, And the content is not really interesting, Maybe it’s just a rant, But with a weird writing format.

    • Meowoem@sh.itjust.works · +1 · 1 year ago

      It’s a computer science term that’s been used for this field of study for decades, it’s like saying calling a tomato a fruit is a marketing decision.

      Yes, it’s somewhat common outside computer science to expect an artificial intelligence to be sentient, because that’s how movies use it. John McCarthy’s 1956 proposal, which coined the term, is available online if you want to read it.

    • UnityDevice@startrek.website · +1 · 1 year ago

      They didn’t just start calling it AI recently. It’s literally the academic term that has been used for almost 70 years.

      The term “AI” could be attributed to John McCarthy of MIT (Massachusetts Institute of Technology), a term Marvin Minsky (Carnegie-Mellon University) defined as “the construction of computer programs that engage in tasks that are currently more satisfactorily performed by human beings because they require high-level mental processes such as: perceptual learning, memory organization and critical reasoning.” The summer 1956 conference at Dartmouth College (funded by the Rockefeller Institute) is considered the founding event of the discipline.

      • 9bananas@lemmy.world · +1/−1 · 1 year ago

        perceptual learning, memory organization and critical reasoning

        i mean…by that definition nothing currently in existence deserves to be called “AI”.

        none of the current systems do anything remotely approaching “perceptual learning, memory organization, and critical reasoning”.

        they all require pre-processed inputs and/or external inputs for training/learning (so the opposite of perceptual), none of them really do memory organization, and none are capable of critical reasoning.

        so OPs original question remains:

        why is it called “AI”, when it plainly is not?

        (my bet is on the faceless suits deciding it makes them money to call everything “AI”, even though it’s a straight up lie)

        • UnityDevice@startrek.website · +1/−1 · edited · 1 year ago

          so OPs original question remains: why is it called “AI”, when it plainly is not?

          Because a bunch of professors defined it like that 70 years ago, before the AI winter set in. Why is that so hard to grasp? Not everything is a conspiracy.

          I had a class at uni called AI, and no one thought we were gonna be learning how to make thinking machines. In fact, compared to most of the stuff we did learn to make then, modern AI looks godlike.

          Honestly you all sound like the people that snidely complain how it’s called “global warming” when it’s freezing outside.

          • 9bananas@lemmy.world · +1/−1 · 1 year ago

            just because the marketing idiots keep calling it AI, doesn’t mean it IS AI.

            words have meaning; i hope we agree on that.

            what’s around nowadays cannot be called AI, because it’s not intelligence by any definition.

            imagine if you were looking to buy a wheel, and the salesperson sold you a square piece of wood and said:

            “this is an artificial wheel! it works exactly like a real wheel! this is the future of wheels! if you spin it in the air it can go much faster!”

            would you go:

            “oh, wow, i guess i need to reconsider what a wheel is, because that’s what the salesperson said is the future!”

            or would you go:

            “that’s idiotic. this obviously isn’t a wheel and this guy’s a scammer.”

            if you need to redefine what intelligence is in order to sell a fancy statistical model, then you haven’t invented intelligence, you’re just lying to people. that’s all it is.

            the current mess of calling every fancy spreadsheet an “AI” is purely idiots in fancy suits buying shit they don’t understand from other fancy suits exploiting that ignorance.

            there is no conspiracy here, because it doesn’t require a conspiracy; only idiocy.

            p.s.: you’re not the only one here with university credentials…i don’t really want to bring those up, because it feels like devolving into a dick measuring contest. let’s just say I’ve done programming on industrial ML systems during my bachelor’s, and leave it at that.

            • UnityDevice@startrek.website · +0/−1 · 1 year ago

              These arguments are so overly tired and so cyclic that AI researchers coined a name for them decades ago - the AI effect. Or succinctly just: “AI is whatever hasn’t been done yet.”

              • 9bananas@lemmy.world · +1/−1 · 1 year ago

                i looked it over and … holy mother of strawman.

                that’s so NOT related to what I’ve been saying at all.

                i never said anything about the advances in AI, or how it’s not really AI because it’s just a computer program, or anything of the sort.

                my entire argument is that the definition you are using for intelligence, artificial or otherwise, is wrong.

                my argument isn’t even related to algorithms, programs, or machines.

                what these tools do is not intelligence: it’s mimicry.

                that’s the correct word for what these systems are capable of. mimicry.

                intelligence has properties that are simply not exhibited by these systems, THAT’S why it’s not AI.

                call it what it is, not what it could become, might become, will become. because that’s what the wiki article you linked bases its arguments on: future development, instead of current achievement, which is an incredibly shitty argument.

                the wiki talks about people using shifting goal posts in order to “dismiss the advances in AI development”, but that’s not what this is. i haven’t changed what intelligence means; you did! you moved the goal posts!

                I’m not denying progress, I’m denying the claim that the goal has been reached!

                that’s an entirely different argument!

                all of the current systems, ML, LLM, DNN, etc., exhibit a massive advancement in computational statistics, and possibly, eventually, in AI.

                calling what we have currently AI is wrong, by definition; it’s like saying a single neuron is a brain, or that a drop of water is an ocean!

                just because two things share some characteristics, some traits, or because one is a subset of the other, doesn’t mean that they are the exact same thing! that’s ridiculous!

                the definition of AI hasn’t changed, people like you have simply dismissed it because its meaning has been eroded by people trying to sell you their products. that’s not ME moving goal posts, it’s you.

                you said a definition of 70 years ago is “old” and therefore irrelevant, but that’s a laughably weak argument for anything, but even weaker in a scientific context.

                is the Pythagorean Theorem suddenly wrong because it’s ~2500 years old?

                ridiculous.

  • Gabu@lemmy.world · +15 · 1 year ago

    I’ll be direct: your text reads like you’ve only just discovered AI. We have much more than “only LLMs”, regardless of whether or not these other models pass Turing tests. If you feel disgruntled, then imagine what people who’ve been researching AI since the 70s feel like…

  • geekworking@lemmy.world · +11 · 1 year ago

    I started reading it as “Al” as in the nickname for Allen.

    Makes the constant stream of headlines a bit more entertaining, imagining all of the stuff that this guy Al is up to.

  • MeetInPotatoes@lemmy.ml · +12/−1 · 1 year ago

    Maybe just accept it as shorthand for what it really means.

    Some examples:

    We say Kleenex instead of facial tissue, Band-Aid instead of bandage, I say that Siri butchered my “ducking” text again when I know autocorrect is technically separate.

    We also say, “hang up on someone” when there is no such thing anymore

    Hell, we say “cloud” when we really mean “someone’s server farm”

    Don’t get me started on “software as a service” either … a bullshit fancy name for a subscription website that actually has some utility.

  • Dabundis@lemmy.world · +10/−2 · 1 year ago

    People: build an algorithm to generate text that sounds like a person wrote it by finding patterns in text written by people

    Algorithm: outputs text that sounds like a person wrote it

    Holyfuck its self aware guys

    • thedeadwalking4242@lemmy.world · +3/−2 · 1 year ago

      Patterns in text are ideas; that’s what text is made to contain: ideas. They’ve made an algorithm that “generates text that sounds human”, but it couldn’t do that without picking up context, themes, and other more abstract concepts. There is a highly sophisticated amount of emergent behavior in LLMs.

  • ikidd@lemmy.world · +7 · 1 year ago

    As a farmer, my kneejerk interpretation is “artificial insemination” and I get confused for a second every time.

  • EnderMB@lemmy.world · +6 · 1 year ago

    I work in AI, and the fatigue is real.

    What I’ve found most painful is how people with no fucking clue about AI or ML chime in with their expert advice, when in reality they’re as much an expert on AI as a calculator salesman is an expert in linear algebra. Having worked closely with scientists who hold PhDs, publish papers regularly, and work on experiments for years, it makes me hate the hustle culture that’s built up around AI. It’s mostly crypto cunts looking for their next scheme, or businesses looking to abuse buzzwords to make themselves sound smart.

    Purely my two cents: LLMs have surprised a lot of people with their high-quality output. With that being said, they are known to heavily hallucinate, cost fuckloads, and there is a growing group of people who wonder whether the great advances we’ve seen are due either to a lot of hand-holding, or to the use of a LOT of PII or stolen data. I don’t think we’ll see an improvement from what we’ve already seen, just many other companies having their own similar AI tools that help a little with very well-defined menial tasks.

    I think the hype will die out eventually, and companies that decided to bin actual workers in favour of AI will likely not be around 12-24 months later. Hopefully most people and businesses will see through the bullshit, and see that the CEO of a small ad agency that has positioned himself as an AI expert is actually a lying simpleton.

    As for it being “real AI” or “real ML”, who gives a fuck. If researchers are happy with the definition, who are we to be pedantic? Besides, there are a lot of systems behind the scenes running compositional models, handling entity resolution, or building metrics for success/failure criteria to feed back into improving models.

    • fruitycoder@sh.itjust.works · +1 · 1 year ago

      Get-rich-quick mentality needs to GTFO of tech already. I’m also tired of promising tech getting overhyped, then all goodwill and enthusiasm getting burned at the altar of scams. Stuff takes time and hard work, and that costs money to hire experts and capital to get done. There are no silver bullets. Adoption takes effort and time, so not every solution is worth adopting. Not every industry has the same problems. Reinventing the wheel in a productive way is a high-risk activity.

      Not telling you, just yelling at the void because you made me think of it.

    • AeonFelis@lemmy.world · +2/−2 · 1 year ago

      they are known to heavily hallucinate, cost fuckloads, and there is a growing group of people that wonder whether the great advances we’ve seen are either due to a lot of hand-holding,

      Same can be said for certain humans.

  • ZzyzxRoad@sh.itjust.works · +9/−3 · 1 year ago

    Yes, but I’m more annoyed with posts and conversations about it that are like this one. People on Lemmy swear they hate how uninformed and stupid the average person is when it comes to AI, they hate the click bait articles etc etc. Aaand then there’s at least 5 different posts about it on the front page every. single. day., with all the comments saying exactly the same thing they said the day before, which is:

    “Users are idiots for trusting a tech company, it’s not Google’s responsibility to keep your private data safe.” “No one understands what ‘AI’ actually means except me.” “Every middle-America dad, grandma and 10 year old should have their very own self hosted xyz whatever LLM, and they’re morons if they don’t and they deserve to have their data leaked.” And can’t forget the ubiquitous arguments about what “copyright infringement” means when all the comments are actually in agreement, but they still just keep repeating themselves over and over.

  • flop_leash_973@lemmy.world · +7/−1 · 1 year ago

    The term is so overused at this point that I could probably start referring to any script I write that has conditional statements in it and convince my boss I’ve created our own “AI”.

    • TeckFire@lemmy.world · +4/−2 · edited · 1 year ago

      For real. Some enemies in Killzone 2 “act” pretty clever, but aren’t using anything close to an LLM, let alone “AI,” yet I bet if you implemented their identical behavior in a modern 2024 game and marketed it as the enemies having “AI”, everyone would believe you in a heartbeat.

      It’s just too over-encompassing. Saying “large language model technology” may not be as eye-catching, but at least it means I know you actually used the technology. Anyone can market as “AI”, and it could be an Excel formula for all I know.

      • Gabu@lemmy.world · +3 · 1 year ago

        The enemies in Killzone do use AI… the Goombas in the first Super Mario Bros. used AI. This term has been used to refer to NPC behavior since the dawn of videogames.

        • TeckFire@lemmy.world · +0/−1 · 1 year ago

          I know. That’s not my point. I know that technically, “AI” could mean anything that gives the illusion of intelligence artificially. My use of the term was more that of the OP: a machine achieving sapience, not just the illusion of it. It’s just down to definitions. I prefer to use the term in a different way, and wish it were used that way, but I accept that the world does not.

  • Despair@lemmy.world · +6 · 1 year ago

    A lot of the comments I’ve seen promoting AI sound very similar to ones made around the time GME was relevant or cryptocurrency. Often, the conversations sounded very artificial and the person just ends up repeating buzzwords/echo chamber instead of actually demonstrating that they have an understanding of what the technology is or its limitations.

          • Thorny_Insight@lemm.ee · +1/−1 · 1 year ago

            I don’t understand what you’re even trying to ask. AGI is a subcategory of AI. Every AGI is an AI but not every AI is an AGI. OP seems to be thinking that AI isn’t “real AI” because it’s not AGI, but those are not the same thing.

            • BlanketsWithSmallpox@lemmy.world · +0/−1 · 1 year ago

              AI has been colloquially used to mean AGI for 40 years. About the only exception has been video games, but most people knew better than thinking the Goomba was alive.

              At what point did AI get turned into AGI?

      • doctorcrimson@lemmy.world · +1 · edited · 1 year ago

        So basically the ability to do things or learn without direction for tasks other than what it was created to do. Example, ChatGPT doesn’t know how to play chess and Deep Blue doesn’t write poetry. Either might be able to approximate correct output if tweaked a bit and trained on thousands, millions, or billions of examples of proper output, but neither are capable of learning to think as a human would.

        • intensely_human@lemm.ee · +1/−2 · 1 year ago

          I think it could learn to think as a human does. Humans think by verbalizing at themselves: running their own verbal output back into their head.

          Now don’t get me wrong. I’m envisioning like thousands of prompt-response generations, with many of these LLMs playing specialized roles: generating lists of places to check for X information in its key-value store. The next one’s job is to actually do that. The reason for separation is exhaustion. That output goes to three more. One checks it for errors, and sends it back to the first with errors highlighted to re-generate.

          I think that human thought is more like this big cluster of LLMs all splitting up work and recombining it this way.

          Also, you’d need “dumb”, algorithmic code that did tasks like:

          • compile the last second’s photograph, audio intake, infrared, whatever, and send it to the processing team.

          • Processing team is a bunch of LLMs, each with a different task in its prompt: (1) describe how this affects my power supply, (2) describe how this affects my goal of arriving at the dining room, (3) describe how this affects whatever goal number N is in my hierarchy of goals, (4) which portions of this input batch doesn’t make sense?

          • the whole layout of all the teams, the prompts for each job, all of it could be tinkered with by LLMs promoted to examine and fiddle with that.
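          A minimal sketch of the role-splitting loop described above — the stub llm() stands in for a call to any real model, and every role string here is hypothetical:

```python
# Hypothetical sketch of the "team of specialized LLMs" idea. There is
# no real model behind llm() -- it's a stub standing in for a call to
# any text model, primed with a standing role. The loop shape is the
# point: same input, many roles, plus a separate error-checking pass.

def llm(role: str, prompt: str) -> str:
    """Stub for a call to some language model, acting in a given role."""
    return f"[{role}] response to: {prompt}"

def process_input(sensor_batch: str) -> list:
    # Each "team member" gets the same input but a different task.
    roles = [
        "describe how this affects my power supply",
        "describe how this affects my goal of arriving at the dining room",
        "flag which portions of this input batch don't make sense",
    ]
    drafts = [llm(role, sensor_batch) for role in roles]

    # A separate checker reviews each draft; flagged drafts get sent
    # back to the original role for one bounded regeneration attempt.
    results = []
    for role, draft in zip(roles, drafts):
        review = llm("error checker", draft)
        if "ERROR" in review:
            draft = llm(role, sensor_batch + "\n" + review)
        results.append(draft)
    return results

outputs = process_input("camera frame + audio for the last second")
for line in outputs:
    print(line)
```

          The separation of generation from review is the design choice being described: in a real system each llm() call would hit an actual model, and the layout of roles could itself be edited by another model.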

          So I don’t mean “one LLM is a general intelligence”. I do think it’s a general intelligence within its universe; or at least as general as a human language-processing mind is general. I think they can process language for meaning just as deep as we can, no problem. Any question we can provide an answer to, without being allowed to do things outside the LLM’s universe like going to interact with the world or looking things up, they can also provide.

          An intelligence capable of solving real-world problems needs to have, as its universe, something like the real world. So I think LLMs are the missing piece of the puzzle, and now we’ve got the pieces to build a person as capable of thinking and living as a human, at least in terms of mind and activity. Maybe we can’t make a bot that can eat a pork sandwich for fuel and gestate a baby, no. But we can do AGI, with its own body and its own set of constraints, with the tech we have now.

          It would probably “live” its life at a snail’s pace, given how inefficient its thinking is. But if we died and it got lucky, it could have its own civilization, knowing things we have never known. Very unlikely, more likely it dies before it accumulates enough wisdom to match the biochemical problem set our bodies have solved over a billion years, for handling pattern decay at levels all the way down to organelles.

          The robots would probably die. But if they got lucky and invented lubricant or whatever the thing was, before it killed them, then they’d go on and on, just like our own future. They’d keep developing, never stopping.

          But in terms of learning chess they could do both things: they could play chess to develop direct training data, and they could analyze their own games, verbalize their strategies, discover deeper articulable patterns, and learn that way too.

          I think to mimic what humans do, they’d have to dream. They’d have to take all the inputs of the day and scramble them to get them to jiggle more of the structure into settling.

          Oh, and they’d have to “sleep”. Perhaps not all or nothing, but basically they’d need to re-train themselves on the day’s episodic memories, and their own responses, and the outcomes of those responses in the next set of sensory status reports.

          Their day would be like a conversation with chatgpt, except instead of the user entering text prompts it would be their bodies entering sensory prompts. The day is a conversation, and sleeping is re-training with that conversation as part of the data.

          But there’s probably a million problems in there to be solved yet. Perhaps they start cycling around a point, a little feedback loop, some strange attractor of language and action, and end up bumping into a wall forever mumbling about paying the phone bill. Who knows.

          Humans have the benefit of a billion years of evolution behind us, during which most of “us” (all the life forms on earth) failed, hit a dead end, and died.

          Re-creating the pattern was the first problem we solved. And maybe that’s what is required for truly free, general, adaptability to all of reality: no matter how much an individual fails, there’s always more. So reproduction may be the only way to be viable long-term. It certainly seems true of life … all of which reproduces and dies, and hopefully more of the former.

          So maybe since reproduction is such a brutally difficult problem, the only viable way to develop a “codebase” is to build reproduction first, so that all future features have to not break reproduction.

          So perhaps the robots are fucked from the get-go, because reverse-building a reproduction system around an existing macro-scale being, doesn’t guarantee that you hit one of the macro-scale being forms that actually can be reproduced.

          It’s an architectural requirement, within life, at every level of organization. All the way down to the macromolecules. That architectural requirement was established before everything else was built. As the tests failed, and new features were rewritten so they still worked but didn’t break reproduction, reproduction shaped all the other features in ways far too complex to comprehend. Or, more importantly than comprehending, reproduce in technology.

          Or, maybe they can somehow burrow down and find the secret of reproduction, before something kills them.

          I sure hope not because robots that have reconfigured themselves to be able to reproduce themselves down to the last detail, without losing information generation to generation, would be scary as fuck.

      • Pipoca@lemmy.world · +1 · 1 year ago

        One low hanging fruit thing that comes to mind is that LLMs are terrible at board games like chess, checkers or go.

        ChatGPT is a giant cheater.

      • Thorny_Insight@lemm.ee · +1 · edited · 1 year ago

        Artificial intelligence might be really good, perhaps even superhuman, at one thing, for example driving a car, but that same competence doesn’t apply across a variety of fields. Your self-driving car can’t help with your homework. With artificial general intelligence, however, it does. Humans possess general intelligence; we can do math, speak different languages, know how to navigate social situations, know how to throw a ball, can interpret sights, sounds etc.

        With a real AGI you don’t need to develop different versions of it for different purposes. It’s generally intelligent, so it can do it all. This also includes writing its own code. This is where the worry about an intelligence explosion originates. Once it’s even slightly better than humans at writing its code, it’ll make a more competent version of itself, which will then create an even more competent version, and so on. It’s a chain reaction which we might not be able to stop. After all, it’s by definition smarter than us and, being a computer, also a million times faster.
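        A toy loop makes the compounding in that chain reaction concrete (the numbers are arbitrary placeholders, not a forecast):

```python
# Toy illustration (not a model of real AI progress): if each generation
# of a system writes a successor even slightly better than itself, the
# growth is geometric -- the usual "intelligence explosion" intuition.

capability = 1.0          # arbitrary units; 1.0 = human-level at coding
improvement = 1.05        # each generation is 5% better than its parent

generations = 0
while capability < 10.0:  # stop at "10x human" for the demo
    capability *= improvement
    generations += 1

print(generations, round(capability, 2))
```

        Even a tiny 5% edge per generation crosses “10× human” in under fifty generations; that compounding, not any single jump, is the worry.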

        Edit: Another feature that AGI would most likely, though not necessarily, possess is consciousness. There’s a possibility that it feels like something to be generally intelligent.

        • intensely_human@lemm.ee · +0 · 1 year ago

          I think that the algorithms used to learn to drive cars can learn other things too, if they’re presented with training data. Do you disagree?

          Just so we’re clear, I’m not trying to say that a single, given, trained LLM is, itself, a general intelligence (capable of eventually solving any problem). But I don’t think a person at a given moment is either.

          Your Uber driver might not help you with your homework either, because he doesn’t know how. Now, if he gathers information about algebra and then sleeps and practices and gains those skills, now maybe he can help you with your homework.

          That sleep, which the human gets to count on in his “I can solve any problem because I’m a GI!” claim to having natural intelligence, is the equivalent of retraining a model, into a new model, that’s different from the previous day’s model in that it’s now also trained on that day’s input/output conversations.

          So I am NOT claiming that “This LLM here, which can take a prompt and produce an output” is an AGI.

          I’m claiming that “LLMs are capable of general intelligence” in the same way that “Human brains are capable of general intelligence”.

          The brain alternates between modes: interacting, and retraining, in my opinion. Sleep is “the consolidation of the day’s knowledge into structures more rapidly accessible and correlated with other knowledge”. Sound familiar? That’s when ChatGPT’s new version comes out, and it’s been trained on all the conversations the previous version had with people who opted in to that.

          • Thorny_Insight@lemm.ee · +0/−1 · 1 year ago

            I’ve heard experts say that GPT-4 displays signs of general intelligence, so while I still wouldn’t call it an AGI, I’m in no way claiming an LLM couldn’t ever become generally intelligent. In fact, if I were to bet money on it, I think there’s a good chance that this is where our first true AGI systems will originate from. We’re just not there yet.

            • Cethin@lemmy.zip · +1 · 1 year ago

              It isn’t. It doesn’t understand things like we think of with intelligence. It generates output that fits a recognized input. If it doesn’t recognize the input in some form it generates garbage. It doesn’t understand context and it doesn’t try to generalize knowledge to apply to different things.

              For example, I could teach you about gravity, trees, and apples and ask you to draw a picture of an apple falling from a tree and you’d be able to create a convincing picture of what that would look like even without ever seeing it before. An LLM couldn’t. It could create a picture of an apple falling from a tree based on other pictures of apples falling from trees, but not from just the knowledge of an apple, a tree, and gravity.

      • esserstein@sopuli.xyz · +1 · 1 year ago

        Be generally intelligent, ffs. Are you really going to argue that LLMs posit original insight in anything?

        • intensely_human@lemm.ee · +1/−1 · 1 year ago

          Can you give me an example of a thought or statement you think exhibits original insight? I’m not sure what you mean by that.

      • Cethin@lemmy.zip · +1 · 1 year ago

        I wrote this for another reply, but I’ll post it for you too:

        It doesn’t understand things like we think of with intelligence. It generates output that fits a recognized input. If it doesn’t recognize the input in some form it generates garbage. It doesn’t understand context and it doesn’t try to generalize knowledge to apply to different things.

        For example, I could teach you about gravity, trees, and apples and ask you to draw a picture of an apple falling from a tree and you’d be able to create a convincing picture of what that would look like even without ever seeing it before. An LLM couldn’t. It could create a picture of an apple falling from a tree based on other pictures of apples falling from trees, but not from just the knowledge of an apple, a tree, and gravity.