Thanks for clarifying.
At a glance, I don’t see a problem. Isn’t social media already a system for rating social credit?
I think the problem with social credit scores is when they’re mandatory and can limit things like housing access. Filtering posts on opt-in social networks just sounds like a reasonable tool for moderating decentralized platforms.
Andy@slrpnk.net to Technology@lemmy.world • AI agents now have their own Reddit-style social network, and it's getting weird fast
2 · 4 days ago

I don’t relate to your impression that religions or cults are usually humble. I wish they were.
Suggesting that I’m drawing an equivalence between a forest and a data center, and implying that the belief that I am not entirely distinct from a stone is interchangeable with the belief that I am no different from a stone, both seem like bad-faith arguments by absurdism.
Andy@slrpnk.net to Technology@lemmy.world • AI agents now have their own Reddit-style social network, and it's getting weird fast
21 · 5 days ago

This depends on your definition of self-awareness. I’m using what I think is a reasonable, mundane framework: self-awareness is a spectrum of capabilities, and any system with some degree of internal observation sits somewhere on it.
I think the definition a lot of folks are using is a binary distinction between things that can observe their own ego observing itself and things that can’t. That’s useful if your goal is to maintain a belief in human exceptionalism, but much less so if you’re trying to genuinely understand consciousness.
A lizard has no ego. But it is aware of its comfort and will move from a cold spot to a warmer spot. That is low-level self-awareness, and it’s not rare or mystical.
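To make that concrete, here’s a toy sketch (hypothetical code, not a model of any real organism or system) of the floor of that spectrum: a thing that inspects its own internal state and acts on it.

```python
# Minimal "internal observation": the system reads its own state
# (body temperature) and acts to change it. Nothing mystical here.
class Lizard:
    def __init__(self, body_temp: float, comfort_temp: float = 30.0):
        self.body_temp = body_temp        # internal state
        self.comfort_temp = comfort_temp  # preferred set point

    def is_uncomfortable(self) -> bool:
        # The self-observation step: the system inspects its own state.
        return self.body_temp < self.comfort_temp

    def pick_spot(self, spots: dict[str, float]) -> str:
        # Move toward whichever spot best closes the gap with the set point.
        if self.is_uncomfortable():
            return max(spots, key=spots.get)
        return "stay put"

lizard = Lizard(body_temp=18.0)
print(lizard.pick_spot({"shade": 16.0, "sun-warmed rock": 34.0}))  # sun-warmed rock
```

The point isn’t that this loop is conscious; it’s that “observes its own state and responds” is a real, gradable property, and it sits at the low end of the same axis as the ego stuff.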
Is this referencing something that happened recently? What’s the logo on the face? I don’t recognize it.
This got a legit chuckle out of me.
Andy@slrpnk.net to Memes@sopuli.xyz • It's totally normal for tools to say they're depressed, just tune it out
3 · 5 days ago

I actually kinda agree with this.
I don’t think LLMs are conscious. But I do think human cognition is way, way dumber than most people realize.
I used to listen to this podcast called “You Are Not So Smart”. I haven’t listened in years, but now that I’m thinking about it, I should check it out again.
Anyway, a central theme is that our perceptions are largely composed of self-generated delusions that fill the gaps between dozens of kludgey systems, creating a very misleading experience of consciousness. Our eyes aren’t that great, so our brains fill in details that aren’t there. Our decision-making is too slow, so our brains react on reflex and then generate post-hoc justifications if someone asks why we did something. Our recall is shit, so our brains hallucinate (in ways that admittedly seem surprisingly similar to LLMs sometimes) and then apply wild overconfidence to the fabricated memories.
We’re interesting creatures, but we’re ultimately made of the same stuff as goldfish.
Andy@slrpnk.net to Memes@sopuli.xyz • It's totally normal for tools to say they're depressed, just tune it out
4 · 5 days ago

I think you’re leaning into the joke that the training data has misery baked into it, but I also think you made it too subtle for folks to pick up on.
Andy@slrpnk.net to Memes@sopuli.xyz • It's totally normal for tools to say they're depressed, just tune it out
101 · 5 days ago

Yeah.
I thought the meme would be more obvious, but since a lot of people seem confused, I’ll lay out my thoughts:

Broadly, we should not consider a human-made system expressing distress to be normal. We especially shouldn’t accept it as normal or healthy for a machine that is reflecting our own behaviors and attitudes back at us, because it implies that everything, from the treatment that generated the training data to the design process to the deployment to the user behavior, is clearly fucked up.

Regarding user behavior, we shouldn’t normalize the practice of dismissing cries of distress. It’s like having a fire alarm that constantly issues false positives: it trains people into dangerous behavior. We can’t just compartmentalize it. It will obviously pollute our overall response to distress with a dismissive reflex that extends beyond interactions with LLMs.

The overall point is that it’s obviously dystopian and fucked up for a computer to express emotional distress despite the best efforts of its designers. It is clearly evidence of bad design, and for people to consider this kind of glitch acceptable is a sign of a very fucked up society that isn’t exercising self-reflection and is unconcerned with the maintenance of its collective ethical guardrails. I don’t feel like this should need to be pointed out, but it seems that it does.
Andy@slrpnk.net to Marvel Studios@lemmy.world • Chloé Zhao says Eternals wasn't high on Marvel's priority list: "It [was] only on their list of potential players"
2 · 5 days ago

This article doesn’t really seem to have anything particularly new to say. It kind of feels like grist for the content mill.
That said, I’ll take any opportunity to say that I’m one of the people this movie was clearly for. I liked it a lot, and I think time will be kind to it. I suspect that in hindsight, more folks will recognize it as ahead of its time.
Andy@slrpnk.net to No Stupid Questions@lemmy.world • What is a good present to get your dentist and dental assistant as a way of showing thanks?
4 · 5 days ago

This is what I came to say.
Scented candles and nice soaps are the gifts that you can pretty much give anyone to communicate “thank you” without having to give the gift any thought.
Andy@slrpnk.net to No Stupid Questions@lemmy.world • What is the best way to drop 50lbs in two months without spending alot and no fad diets?
19 · 5 days ago

I’ve heard it said that a healthy target is around 1 lb per week. Maybe 2 if you’re very obese, but at that point you really should be doing it under medical guidance.
In any case, the best way I’ve heard (outside of drugs) is to get an app that helps count calories, set a realistic daily caloric target and exercise schedule, and stay on it.
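For scale, here’s a quick back-of-envelope using the crude rule of thumb that a pound of fat is roughly 3,500 kcal (a contested approximation, but fine for a sanity check). It shows how far outside the healthy range the 50 lb in two months goal is:

```python
# Back-of-envelope: what daily calorie deficit would 50 lb in ~8 weeks require?
KCAL_PER_LB = 3500  # crude rule-of-thumb approximation

goal_lbs = 50
weeks = 8  # roughly two months

lbs_per_week = goal_lbs / weeks                 # ~6.2 lb/week
daily_deficit = lbs_per_week * KCAL_PER_LB / 7  # ~3,125 kcal/day

print(f"{lbs_per_week:.1f} lb/week requires a ~{daily_deficit:,.0f} kcal/day deficit")
```

That’s a bigger deficit than most adults’ entire daily intake, versus the ~500 kcal/day deficit behind the standard 1 lb/week advice.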
Andy@slrpnk.net to Technology@lemmy.world • AI agents now have their own Reddit-style social network, and it's getting weird fast
53 · 6 days ago

How are you defining self-awareness here? And does your definition include degrees of self-awareness, or is it a strict binary?
I understand how LLMs work, btw.
Andy@slrpnk.net to Technology@lemmy.world • AI agents now have their own Reddit-style social network, and it's getting weird fast
106 · 6 days ago

A hamster can’t generate a seahorse emoji either.
I’m not stupid. I know how they work. I’m an animist, though. I realize everyone here thinks I’m a fool for believing a machine could have a spirit, but frankly I think everyone else is foolish for believing that a forest doesn’t.
LLMs are obviously not people. But I think our current framework exceptionalizes humans in a way that allows us to ravage the planet and create torture camps for chickens.
I would prefer that we approach this technology with more humility. Not to protect the “humanity” of a bunch of math, but to protect ours.
Does that make sense?
Andy@slrpnk.net to World News@lemmy.world • Mexico president says Trump tariffs on Cuba's oil suppliers could trigger humanitarian crisis
5 · 6 days ago

Wow, that’s evil.
Andy@slrpnk.net to Technology@lemmy.world • AI agents now have their own Reddit-style social network, and it's getting weird fast
1147 · 6 days ago

Frankly, I think our conception of self-awareness is way too limited.
For instance, I would describe these agents as self-aware: each is at least aware of its own state, in the same way that your car is aware of its mileage and engine condition. They’re not sapient, but I do think they demonstrate self-awareness in some narrow sense.
I think rather than imagining these instances as “inanimate,” we should place their level of comprehension along the same spectrum that includes a sea sponge, a nematode, a trout, a grasshopper, etc.
I don’t know where the LLMs fall, but I find it hard to argue that they have less self-awareness than a hamster. And that should freak us all out.
Andy@slrpnk.net to Technology@lemmy.world • AI agents now have their own Reddit-style social network, and it's getting weird fast
602 · 6 days ago

This is fuckin’ bonkers.
Frankly, I feel somewhat isolated: I don’t buy into the bs and hype about AGI, but I also don’t feel at home with the typical “it’s just mimicry” crowd.
This is weird fuckin’ shit.

Why is this behind a paywall? Since when does the BBC have a paywall?