#LLM and statistical methods of #AI still cannot do symbolic computation
Richard Speed at The Register is reporting that someone named Robert Caruso twice pitted a modern #LRM (a retrieval-augmented, #LLM based #AI), first ChatGPT and then Microsoft Copilot, against the Atari 2600, a gaming console first released in 1977, in a game of chess. The Atari 2600 won both times.
Now, it is possible that an LRM could be optimized to play chess by including a large number of chess games in its training data; this would probably improve its ability to win at chess.
But this should be another blow against the argument that simply increasing the computing power of these AI models is going to result in a “general intelligence” or #AGI . The way these AI systems “learn” is by imitating humans; they have not been designed to synthesize new ideas that stray far from the ideas expressed in the training data on which they were built.
I maintain that a better AI will be a smaller LLM cleverly combined with symbolic computation techniques, such as proof assistants like Lean, or relational logic programming languages like miniKanren. I also maintain that trying to create a general intelligence, one that can think for humans, is a really bad idea that will only teach people to become too dependent on technology. Using AI to think is more similar to an addiction than a superpower.
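One common shape for that kind of hybrid is a propose-and-verify loop: the statistical model generates candidates, and a symbolic component checks them exactly, so nothing unverified ever reaches the user. Here is a minimal sketch of that pattern in Python; the function names are hypothetical stand-ins, not any real system's API, and the "LLM" is faked with a hard-coded list of guesses.

```python
# Sketch of the "LLM proposes, symbolic checker verifies" pattern.
# propose_factorisation stands in for an unreliable statistical model;
# verified is the symbolic side, doing cheap, exact checking.

def propose_factorisation(n):
    """Stand-in for an LLM: returns candidate factor pairs, some wrong."""
    # Hard-coded guesses for n == 24; a real model would generate these.
    return [(3, 7), (2, 12), (4, 6)]

def verified(n, candidates):
    """Keep only the proposals that actually check out symbolically."""
    return [(a, b) for a, b in candidates if a * b == n]

print(verified(24, propose_factorisation(24)))  # → [(2, 12), (4, 6)]
```

The point is that the checker, not the model, is the source of truth: the model can hallucinate freely, but only proposals that survive exact verification are kept. A proof assistant like Lean plays the same role at a much larger scale, mechanically rejecting any "proof" the model invents that does not actually hold.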