• El Barto@lemmy.world · 3 days ago

    I didn’t downvote you, by the way.

    But I’m curious: are you still talking about LLM discussions that include profanity?

    Or are you talking about something different: the fact that LLMs can spew bullshit, and that religious congregants should be informed about this?

    • pavnilschanda@lemmy.worldM · 2 days ago

      I’m talking about the latter. Religious people often use LLMs as well (https://apnews.com/article/germany-church-protestants-chatgpt-ai-sermon-651f21c24cfb47e3122e987a7263d348). Their knowledge is likely limited to ChatGPT, so they’re likely to be vulnerable to these things. I think one of the things that worries me the most is that these people may take LLM bullshit at face value, or even worse, take it as a “divine command”.

      • El Barto@lemmy.world · edited · 2 days ago

        I don’t follow how you went from being concerned about using profanity in research papers because of audiences such as religious communities, to being concerned about LLMs spewing inaccurate things.

        Has your original question always been about the latter?

        I love the term too, but I wonder how it’ll be used in situations where profanity is discouraged.

        • pavnilschanda@lemmy.worldM · 2 days ago

          Yes, I was curious about whether experts, when conveying the concept of LLM bullshit to certain audiences such as children’s settings (which has been solved now) or religious clergy, would use the term “bullshit” or not. I apologize if I miscommunicated that intention in my initial comment, and I’m always looking for ways to communicate better.