• Randomgal@lemmy.ca · 6 months ago

    You can just add “Focus on information, avoid conversationalisms.” And suddenly you get very similar, dry answers.
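
    If you’re hitting the model through an API instead of the chat UI, the same line works as a standing system message. A minimal sketch, assuming the openai Python client; the model name is a placeholder:

    ```python
    # Sketch: put the anti-conversational instruction in the system message
    # so it applies to every turn. Assumes an OpenAI-style chat endpoint;
    # the model name below is a placeholder.
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model
        messages=[
            {"role": "system", "content": "Focus on information, avoid conversationalisms."},
            {"role": "user", "content": "Explain how DNS resolution works."},
        ],
    )
    print(response.choices[0].message.content)
    ```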

    • then_three_more@lemmy.world · 6 months ago

      I told Microsoft Copilot to “stop the sycophantic attitude,” which worked very well. But thinking about it, I’m still personifying it with that phrasing.

      • Randomgal@lemmy.ca · 6 months ago

        I wouldn’t say that’s personifying. You are using the tool the way it was designed to be used. Natural language just happens to be the means of interaction.

        It’s just like putting commands in a command line, but with extra steps. 🤣

      • howrar@lemmy.ca · 6 months ago

        The training data anthropomorphizes the LLMs, so you’ll get the best results by doing the same.

    • Jerkface (any/all)@lemmy.ca (OP) · 5 months ago

      Sometimes. But the model has hidden alignment prompts that contradict that request, so it keeps reverting to the intended ingratiating manner, simulating things like moral agency and a point of view.