Technically, it involves three people. The interrogator is supposed to pick which unseen participant is human. (As originally proposed, the task was picking which participant is a woman; chatbots being extremely not invented yet.) If people can't tell the difference, there is no difference.
LLMs definitely are not there, and I doubt they ever will be. They're the wrong shape of network. I object when people say there's nothing like cognition going on inside, but 'what's the next word' is the wrong function to estimate if you want a machine smart enough to notice when it's wrong.
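For what it's worth, the 'what's the next word' objective can be sketched with a toy bigram model (my own illustration, nothing from an actual LLM; real models use neural nets over token contexts, not word counts, but the target function is the same shape):

```python
from collections import Counter, defaultdict

def train_bigrams(text):
    # Count, for each word, which word follows it and how often.
    words = text.split()
    model = defaultdict(Counter)
    for a, b in zip(words, words[1:]):
        model[a][b] += 1
    return model

def next_word(model, word):
    # Greedy decoding: return the most frequent continuation seen in training.
    return model[word].most_common(1)[0][0] if model[word] else None

model = train_bigrams("the cat sat on the mat the cat ran")
print(next_word(model, "the"))  # "cat" follows "the" more often than "mat" does
```

The point being: nothing in that objective rewards the model for knowing whether the continuation is true, only for knowing whether it's likely.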