Lugh@futurology.today to Futurology@futurology.today · 7 months ago
Evidence is growing that LLMs will never be the route to AGI. They are consuming exponentially increasing energy, to deliver only linear improvements in performance.
arxiv.org
CanadaPlus@lemmy.sdf.org · 7 months ago
So does having more parts make something a mystery, like the second paragraph, or not a mystery like the first?

I was a skeptic back in the day too, but they’ve already far exceeded what an algorithm I could write from memory seems like it should be able to do.
conciselyverbose@sh.itjust.works · 7 months ago
A combination of unique, varied parts is a complex algorithm.

A bunch of the same part repeated is a complex model.

Model complexity is not in any way similar to algorithmic complexity. They’re only described using the same word because language is abstract.
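To make that distinction concrete, here is a minimal sketch in Python (hypothetical toy code, not any actual LLM implementation). The "algorithm" is a few short, identical functions; the "model" is that same block stacked over and over, with all of the complexity living in the learned weights rather than in the code.

```python
# Hypothetical toy sketch: algorithmic complexity vs. model complexity.
# The code below is deliberately simple; it is not a real transformer.

import numpy as np

def block(x, weights):
    """One generic block: the same simple computation at every layer."""
    w_attn, w_mlp = weights
    x = x + np.tanh(x @ w_attn)   # stand-in for a self-attention step
    x = x + np.tanh(x @ w_mlp)    # stand-in for a feed-forward step
    return x

def tiny_model(x, all_weights):
    """The whole 'model' is just the same block applied repeatedly."""
    for layer_weights in all_weights:
        x = block(x, layer_weights)
    return x

# Algorithmic complexity: ~15 lines of code, identical at every depth.
# Model complexity: n_layers * 2 * d * d learned parameters.
d, n_layers = 64, 12
rng = np.random.default_rng(0)
weights = [(rng.standard_normal((d, d)) * 0.1,
            rng.standard_normal((d, d)) * 0.1) for _ in range(n_layers)]

out = tiny_model(rng.standard_normal((1, d)), weights)
print(out.shape)  # (1, 64): the complexity is in the weights, not the code
```

Scaling this toy up means more layers and bigger weight matrices, not a more intricate program, which is the sense in which model complexity differs from algorithmic complexity.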