There is a lot of theoretical work on this problem, but I’m in the camp that isn’t convinced large language models are the path towards general intelligence.
Throw 10x the computing power at it and it might learn that a maths equation is reversible, because it will probably have seen enough examples of that. But it won’t learn what an equation represents, and therefore won’t extrapolate to new situations that could be solved by equations.
You can already ask ChatGPT to model a real-life scenario with a simple math equation. There is at least a rough model of how basic math can be used to solve problems.
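As a toy illustration of what that kind of modeling looks like (the scenario and numbers here are made up, not taken from any ChatGPT output):

```python
# Hypothetical scenario turned into a simple equation: a food truck has
# $500/day in fixed costs and makes $8 profit per meal, so the break-even
# point is the solution of 8x - 500 = 0.
fixed_costs = 500
profit_per_meal = 8

# Rearranging the equation (the "reversibility" the thread mentions):
# 8x - 500 = 0  =>  x = 500 / 8
break_even_meals = fixed_costs / profit_per_meal
print(break_even_meals)  # 62.5
```

The question upthread is whether a model solves this by genuinely representing the scenario as an equation, or by pattern-matching similar worked examples from training data.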
I don’t know much, but from what I know, we still haven’t reached a point of diminishing returns, so more power = more better.
Not necessarily, since you also need better techniques. A competitor could easily surpass you with less compute by being smarter about how the AI is trained.