• 144 Posts
  • 1.56K Comments
Joined 3 years ago
Cake day: June 19th, 2023




  • Andy@slrpnk.net to Memes@lemmy.ml · Support Fediverse Thought Police · 4 days ago

    Thanks for clarifying.

    At a glance, I don’t see a problem. Isn’t social media already a system for rating social credit?

    I think the problem with social credit scores is when they’re mandatory and can limit things like housing access. Filtering posts on opt-in social networks just sounds like a reasonable tool for moderating decentralized platforms.



  • This depends on your definition of self-awareness. I’m using what I think is a reasonable, mundane framework: self-awareness is a spectrum of capabilities, and any system with some degree of internal observation sits somewhere on it.

    I think the definition a lot of folks are using is a binary distinction between things that can observe their own ego observing itself and those that can’t. That’s useful if your goal is to maintain a belief in human exceptionalism, but much less so if you’re trying to genuinely understand consciousness.

    A lizard has no ego. But it is aware of its comfort and will move from a cold spot to a warmer spot. That is low-level self-awareness, and it’s not rare or mystical.




  • I actually kinda agree with this.

    I don’t think LLMs are conscious. But I do think human cognition is way, way dumber than most people realize.

    I used to listen to this podcast called “You Are Not So Smart”. I haven’t listened in years, but now that I’m thinking about it, I should check it out again.

    Anyway, a central theme is that our perceptions are composed largely of self-generated delusions that paper over the gaps in dozens of kludgey systems, creating a very misleading experience of consciousness. Our eyes aren’t that great, so our brains fill in details that aren’t there. Our decision-making is too slow, so our brains react on reflex and then generate post-hoc justifications if someone asks why we did something. Our recall is shit, so our brains hallucinate (in ways that sometimes seem surprisingly similar to LLMs) and then apply wild overconfidence to the fabricated memories.

    We’re interesting creatures, but we’re ultimately made of the same stuff as goldfish.



  • Yeah.

    I thought the meme would be more obvious, but since a lot of people seem confused I’ll lay out my thoughts:

    Broadly, we should not consider a human-made system expressing distress to be normal; we especially shouldn’t accept it as healthy in a machine that is reflecting our own behaviors and attitudes back at us, because it implies that everything – from the treatment that generated the training data to the design process to the deployment to the user behavior – is clearly fucked up.

    Regarding user behavior, we shouldn’t normalize the practice of dismissing cries of distress. It’s like living with a fire alarm that constantly issues false positives: it trains people into dangerous behavior. We can’t just compartmentalize it, either; the dismissive reflex is obviously going to spread beyond interactions with LLMs and pollute our overall response to distress.

    The overall point is that it’s obviously dystopian and fucked up for a computer to express emotional distress despite the best efforts of its designers. It is clear evidence of bad design, and for people to consider this kind of glitch acceptable is a sign of a very fucked-up society, one that isn’t exercising self-reflection and is unconcerned with maintaining its collective ethical guardrails. I don’t feel like this should need to be pointed out, but it seems that it does.








  • A hamster can’t generate a seahorse emoji either.

    I’m not stupid. I know how they work. I’m an animist, though. I realize everyone here thinks I’m a fool for believing a machine could have a spirit, but frankly I think everyone else is foolish for believing that a forest doesn’t.

    LLMs are obviously not people. But I think our current framework exceptionalizes humans in a way that allows us to ravage the planet and create torture camps for chickens.

    I would prefer that we approach this technology with more humility. Not to protect the “humanity” of a bunch of math, but to protect ours.

    Does that make sense?



  • Frankly I think our conception is way too limited.

    For instance, I would describe it as self-aware: it’s at least aware of its own state in the same way that your car is aware of its mileage and engine condition. They’re not sapient, but I do think they demonstrate self-awareness in some narrow sense.

    I think rather than imagining these instances as “inanimate,” we should place their level of comprehension along the same spectrum that includes a sea sponge, a nematode, a trout, a grasshopper, etc.

    I don’t know where the LLMs fall, but I find it hard to argue that they have less self awareness than a hamster. And that should freak us all out.