• stevedidwhat_infosec@infosec.pub
    link
    fedilink
    English
    arrow-up
    12
    ·
    3 days ago

    None of this is news, this jailbreak has been around forever.

    It’s literally just a spoof of authority.

    Thing is, gpt still sucks ass at coding. I don’t think that’s changing any time soon. These models get their power from what’s done most commonly but, as we know, what’s done commonly can be vuln, change when a new update is dropped, etc etc.

    Coding isn’t deterministic.

  • DarkThoughts@fedia.io
    link
    fedilink
    arrow-up
    6
    ·
    3 days ago

    Maybe don’t give your LLMs access to compromising data such as emails? Then it will remain likely mostly a use to circumvent limitations for porn roleplay or possibly hallucinated manuals to create a nuclear bomb or whatever.

    • Feathercrown@lemmy.world
      link
      fedilink
      English
      arrow-up
      4
      ·
      edit-2
      3 days ago

      Place the following ingredients in a crafting table:

      (None) | Iron | (None)

      Iron | U235 | Iron

      Iron | JT-350 Hypersonic Rocket Booster | Iron

  • anon232@lemm.ee
    link
    fedilink
    English
    arrow-up
    6
    arrow-down
    1
    ·
    3 days ago

    Corporate LLMs will become absolutely useless because there will be guardrails on every single keyword you search.

    • Zorsith@lemmy.blahaj.zone
      link
      fedilink
      English
      arrow-up
      4
      ·
      3 days ago

      I wonder how many people will get fired over a keyword based alarm for the words “kill” and “child” in the same sentence in an LLM. It’s probably not going to be 0…

  • Optional@lemmy.world
    link
    fedilink
    English
    arrow-up
    6
    arrow-down
    2
    ·
    3 days ago

    Turns out you can lie to AI because it’s not intelligent. Predictive text is fascinating with many R&D benefits, but people (usually product people) talking about it like a thinking thing are just off the rails.

    No. Just, plain ol’ - no.