• RagingSnarkasm@lemmy.world · 10 months ago

    Building a metaverse that people want to actually engage with was too hard, so he’s decided to scale back his ambitions and tackle something less difficult: AGI.

    • ffhein@lemmy.world · 10 months ago

      He just wants some virtual friends to hang out with in the metaverse, since humans weren’t interested.

  • Buffalox@lemmy.world · edited · 10 months ago

    Is Zuckerberg an idiot, or does he have an actual plan with this?
    It seems to me this is completely useless, like the Metaverse.
    If the LLM is so stupid that it can’t figure out that the sides of an equals sign can be swapped in something as simple as 2+2=4 <=> 4=2+2, he will never achieve general intelligence by just throwing more compute power at it.
    As powerful as LLMs are, they’re still astoundingly stupid when they hit their limitations.
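
    For the record, the property being pointed at is just the symmetry of equality: swapping the two sides of “=” is always valid. A minimal Lean sketch of that exact instance, purely as an illustration of how trivial the rule is (not a test of any particular model):

    ```lean
    -- Equality is symmetric: from a = b we get b = a, in both directions.
    example : (2 + 2 = 4) ↔ (4 = 2 + 2) :=
      ⟨Eq.symm, Eq.symm⟩
    ```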

      • Sylver@lemmy.world · 10 months ago

        The difference is that we can go beyond that limitation. Even a self-coding AI will either solve a problem or compound its own inefficiencies before asking an operator to help out.

        • Neil@lemmy.ml · 10 months ago

          Your post sounds almost as dense as:

          “Everything that can be invented has been invented.” - Duell, 1899.

          • Sylver@lemmy.world · 10 months ago

            The difference here is that Zuck is not planning on inventing or revolutionizing anything. He’s just throwing more computing power at an already inefficient method of modeling AI.

        • aname@lemmy.one · 10 months ago

          “…or compound its own inefficiencies before asking an operator to help out.”

          Some people do. Some people refuse to ask for help.

    • deafboy@lemmy.world · 10 months ago

      I don’t know much, but from what I know, we still haven’t reached the point of diminishing returns, so more power = more better.

      • kadu@lemmy.world · 10 months ago

        There is a lot of theoretical work on this problem, but I’m in the camp that isn’t convinced large language models are the path towards general intelligence.

        Throw 10x the computing power at it and it might learn that a maths equation is reversible, because it will probably have seen enough examples of that. But it won’t learn what an equation represents, and therefore won’t extrapolate to new situations that can be solved by equations.

        • Kogasa@programming.dev · 10 months ago

          You can already ask ChatGPT to model a real-life scenario with a simple math equation, so there is at least a rough model of how basic math can be used to solve problems.
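
          For instance, here is a minimal sketch of that kind of prompt, assuming the official openai Python package (v1+) and an API key in the environment; the model name and wording are illustrative, not anything from the thread:

          ```python
          # Sketch: ask a chat model to turn a word problem into a simple equation.
          # Assumes OPENAI_API_KEY is set; "gpt-4o-mini" is an illustrative model choice.
          from openai import OpenAI

          client = OpenAI()

          prompt = (
              "Tickets cost $12 each and I spent $84 in total. "
              "Write an equation for the number of tickets, then solve it."
          )

          response = client.chat.completions.create(
              model="gpt-4o-mini",
              messages=[{"role": "user", "content": prompt}],
          )

          # Typically comes back along the lines of: 12x = 84, so x = 7.
          print(response.choices[0].message.content)
          ```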

      • fidodo@lemmy.world · 10 months ago

        Not necessarily, since you also need better techniques. A competitor could easily surpass you with less compute by being smarter about how the AI is trained.

    • fidodo@lemmy.world · edited · 10 months ago

      Trying to achieve AGI by throwing more compute at LLMs is like trying to reach the moon by building a more powerful hot air balloon.

      • Buffalox@lemmy.world · 10 months ago

        Assuming that “not compute” should be “more compute”, I totally agree. That’s a very apt analogy.

        • fidodo@lemmy.world · edited · 10 months ago

          Yes, thanks, swipe typing picked up “not” instead of “more”. Maybe someone can throw some more compute at the swipe typing algorithm to better pick up on the context of the sentence when picking words.

  • Boozilla@lemmy.world · 10 months ago

    Zuckerberg: Why are my pupils vertical slits? Why am I always cold? Why do people find me so repellent?

    AI: Sir, I can answer all three with one response, but you won’t like it.

  • iAvicenna@lemmy.world · 10 months ago

    I wonder if he really thinks that AGI is just AI with more parameters and GPUs thrown into the mix.

  • John Colagioia@lemmy.sdf.org · 10 months ago

    Sure, we could point to thousands of years of really smart people trying and utterly failing to build mathematical models of innovation and thought, but it also does make a certain amount of sense that, if you pile up enough transistors and wish really hard, your investment will Frosty the Snowman itself into being your friend, right…?