My impressions are completely different from yours, but that’s likely due to two things:
- It’s really easy to read LLM output as confident assertions (i.e. “to vomit certainty”), something that I outright despise.
- I used Gemini a fair bit more than ChatGPT, and Gemini is trained with a belittling tone.
Even then, I know which sort of people you’re talking about, and… yeah, I hate a lot of those things too. In fact, one of your bullet points (“it understands and responds…”) is what prompted me to leave Twitter and then Reddit.
I’ve read this text. It’s a good piece, but unrelated to what OP is talking about.
The text boils down to “people who believe that LLMs are smart do so for the same reasons that people believe mentalists can read minds.” OP is not saying anything remotely close to that; instead, they’re saying that, in their experience, LLMs lead to pleasing and insightful conversations.