• rtxn@lemmy.world · 9 months ago

      It’s like right-clicking an ugly monkey NFT, but even more stupid.

      • theUnlikely@sopuli.xyz · 9 months ago

        Nah, those NFTs are way stupider. Making actually good-looking AI art without any oddities can take several hours once you really get into the intricacies, and it often still needs something like Photoshop for finishing. I’m referring to Stable Diffusion; others like DALL-E and Midjourney are basically just the prompt.

        • Omega_Haxors@lemmy.ml · 9 months ago

          The amount of work required to make Stable Diffusion look good is why I now allow it under my AI policy, provided the poster supplies the reference material and the prompt, uses a publicly available model, and credits it. Fail one of those and you’re getting a removal and a warning.

      • ZILtoid1991@kbin.social · 9 months ago

        There are some people who specialize in reverse-engineering prompts. Sometimes it’s funny, as they often disprove posts claiming “this is what the AI gave me for ‘average [political viewpoint haver]’”, only for it to turn out the prompt never contained words like “liberal” or “conservative”, just words describing the image.
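
        If you want to play with this yourself, open-source tools like the clip-interrogator Python package do roughly this kind of guessing. A minimal sketch, assuming that package’s documented usage (the model name and file name here are placeholders, not anything from this thread):

        ```python
        # Guess a plausible prompt for a finished image.
        # Sketch assuming the clip-interrogator package (pip install clip-interrogator).
        from PIL import Image
        from clip_interrogator import Config, Interrogator

        ci = Interrogator(Config(clip_model_name="ViT-L-14/openai"))

        image = Image.open("suspicious_post.png").convert("RGB")
        # Returns a best-effort caption plus style/artist tokens ranked by CLIP
        # similarity. Note this is a guess at a prompt that *could* produce the
        # image, not a recovery of the prompt that actually did.
        print(ci.interrogate(image))
        ```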

        • Omega_Haxors@lemmy.ml · 9 months ago

          Reversing prompts is kind of a pseudoscience. It’s like using a C# decompiler on a JAR file: yes, it produces working code from the binary, but it’s nowhere near what the original author wrote. The reversals are also rife with false positives and false negatives, and that’s before you get to the weird idiosyncrasies, like nonsense tokens naming random artists whose style the tool thinks accounts for 0.001% of the image, because it can’t find a better token to use instead. These tools can’t tell you what the AI was “thinking”; they just make an educated guess, which is more often than not completely wrong. Anyone claiming to be able to reverse-engineer these black boxes flawlessly is outright lying to you. Nobody knows how they work.
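
          The “0.001% of the style” behavior is easy to reproduce: interrogators just rank candidate tokens by cosine similarity between CLIP embeddings, and the top-ranked artist gets reported even when its score is barely above noise. A rough sketch assuming OpenAI’s clip package; the model choice, file name, and artist list are made up for illustration:

          ```python
          # Rank candidate artist tokens against an image by CLIP cosine similarity.
          # Sketch assuming OpenAI's clip package
          # (pip install git+https://github.com/openai/CLIP.git).
          import clip
          import torch
          from PIL import Image

          device = "cuda" if torch.cuda.is_available() else "cpu"
          model, preprocess = clip.load("ViT-B/32", device=device)

          artists = ["by greg rutkowski", "by alphonse mucha", "by some obscure deviantart user"]
          image = preprocess(Image.open("suspicious_post.png")).unsqueeze(0).to(device)
          text = clip.tokenize(artists).to(device)

          with torch.no_grad():
              img_emb = model.encode_image(image)
              txt_emb = model.encode_text(text)

          img_emb /= img_emb.norm(dim=-1, keepdim=True)
          txt_emb /= txt_emb.norm(dim=-1, keepdim=True)
          sims = (img_emb @ txt_emb.T).squeeze(0)

          # The "winner" gets reported as part of the reversed prompt even if all
          # similarities are tiny and nearly tied -- hence the nonsense artist tokens.
          for name, score in zip(artists, sims.tolist()):
              print(f"{score:.4f}  {name}")
          ```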

          EDIT: Prompt reversal also assumes the author stuck with one model, rather than switching to another halfway through (or several times), pulling in reference photos through img2img, or adding custom drawing through inpainting at any point in the process. I can’t even begin to describe how impossible it would be to untangle that mess of chaos when all you have is the end result.
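
          For a sense of how quickly the chain of evidence disappears, here’s what an ordinary multi-stage workflow looks like with Hugging Face’s diffusers library. This is a sketch, not anyone’s actual pipeline; the model IDs, prompts, and file names are placeholders:

          ```python
          # A typical multi-stage workflow: txt2img, hand edits, then img2img on a
          # *different* model. Only the final image survives; none of this is
          # recoverable from it. Sketch using Hugging Face diffusers
          # (pip install diffusers transformers).
          import torch
          from diffusers import StableDiffusionPipeline, StableDiffusionImg2ImgPipeline

          device = "cuda"

          # Stage 1: initial generation on one model.
          txt2img = StableDiffusionPipeline.from_pretrained(
              "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
          ).to(device)
          draft = txt2img(prompt="a castle on a cliff at sunset").images[0]
          draft.save("draft.png")

          # Stage 2: (off-screen) the artist repaints chunks of draft.png in Photoshop.

          # Stage 3: img2img pass on a *different* checkpoint; low strength keeps the
          # hand edits, so the "prompt" now explains only a fraction of the pixels.
          img2img = StableDiffusionImg2ImgPipeline.from_pretrained(
              "stabilityai/stable-diffusion-2-1", torch_dtype=torch.float16
          ).to(device)
          final = img2img(
              prompt="oil painting, dramatic lighting",
              image=draft,  # in reality, the Photoshopped version
              strength=0.4,
          ).images[0]
          final.save("final.png")
          ```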

      • Flipper@feddit.de · 9 months ago

        Just be happy the NFT craze was over before these AI models were released. There would have been even more trash and low-effort NFTs.

    • AdrianTheFrog@lemmy.world · 9 months ago

      Most Stable Diffusion UIs embed the generation information as metadata in the image by default. Unfortunately, when you upload it to places like Reddit, they recompress the image and strip the metadata.
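
      You can check for that metadata yourself with Pillow. The “parameters” key below is AUTOMATIC1111’s convention for its PNG text chunk; other front-ends may use different keys, so treat this as a sketch:

      ```python
      # Read Stable Diffusion generation info from a PNG's text chunks.
      # The "parameters" key is AUTOMATIC1111's convention; other UIs vary.
      from PIL import Image

      img = Image.open("generated.png")
      info = img.info.get("parameters")
      if info:
          print(info)  # prompt, negative prompt, seed, sampler, model hash, ...
      else:
          print("No generation metadata (stripped by recompression, or never embedded).")
      ```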