An LLM can only fill in the next word. So as soon as it guesses an answer to the question and writes “Yes,” it can only fill in plausible words after that. It makes no distinction between its output and your input. So you could prompt “Is Danny DeVito twelve feet tall? Yes,” and it’d just fumble onward. That’s all it does. That’s how it works. That’s why using spicy autocomplete as an oracle doesn’t fucking work.
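Here’s a minimal sketch of that failure mode. It assumes the Hugging Face transformers library and GPT-2 as a stand-in model (my choices; the post names neither): seed the prompt with “Yes,” and the model just keeps going, because prompt tokens and generated tokens are conditioned on identically.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

# The trailing "Yes," is ours, not the model's, but it can't tell the
# difference: it just samples plausible words that follow the context.
prompt = "Is Danny DeVito twelve feet tall? Yes,"
inputs = tokenizer(prompt, return_tensors="pt")
output = model.generate(
    **inputs,
    max_new_tokens=40,
    do_sample=True,
    pad_token_id=tokenizer.eos_token_id,
)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```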
Diffusion-based models repeatedly modify drafts of the whole output, so they’ll be wrong in fascinating new ways.
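For contrast, a purely toy sketch (my own illustration, not anything from the post) of the shapes of the two decoding loops: an autoregressive model appends one token at a time and never revisits earlier ones, while a diffusion-style model keeps revising a full-length draft.

```python
import random

# Toy stand-ins: random choices replace the learned model. Only the
# structure of the two loops is the point.
VOCAB = ["yes", "no", "danny", "is", "twelve", "feet", "tall", "."]

def autoregressive(n):
    out = []
    for _ in range(n):
        out.append(random.choice(VOCAB))  # append-only: earlier tokens are frozen
    return out

def diffusion_style(n, steps=4):
    draft = ["[MASK]"] * n  # start from an all-noise draft
    for _ in range(steps):
        for i in range(n):  # each pass may rewrite any position in the whole draft
            if draft[i] == "[MASK]" or random.random() < 0.3:
                draft[i] = random.choice(VOCAB)
    return draft

print(" ".join(autoregressive(8)))
print(" ".join(diffusion_style(8)))
```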
I tried this with a real version of Gemini by editing its initial response to “Yes” and asking it to continue. Interesting response, for sure.
Ha! I hadn’t considered that plausible continuations include “sike.”