“Freedom is the right to tell people what they do not want to hear.” - George Orwell

  • 2 Posts
  • 62 Comments
Joined 10 days ago
Cake day: July 17th, 2025


  • Well, I didn’t literally mean there’d be just a single place. Obviously, once you set that precedent, other places like it would start popping up too. But it’s not obvious to me that this is a bad thing - or at least that it’s worse than the alternatives. I don’t think there’s anything inherently wrong with people wanting to live among like-minded people and, in effect, build echo chambers for themselves.

    I do think the philosophy behind it is immoral from my perspective, but that’s not really the point. What matters is the concrete effect this ideology has in the real world. And if, in our current cities, we have racists committing racist violence against minorities, then is it really so much worse to just let them all move off into their own little enclave where they can live out their perfect lives without any black people around, if that’s what they want? At least then the rest of us wouldn’t have to deal with them on a daily basis.


  • Way to move the goalposts.

    If you take that question seriously for a second - AlphaFold doesn’t spew chemicals or drain lakes. It’s a piece of software that runs on GPUs in a data center. The environmental cost is just the electricity it uses during training and prediction.

    Now compare that to the way protein structures were solved before: years of wet lab work with X‑ray crystallography or cryo‑EM, running giant instruments, burning through reagents, and literally consuming tons of chemicals and water in the process. AlphaFold collapses that into a few megawatt‑hours of compute and spits out a 3D structure in hours instead of years.

    So if the concern is environmental footprint, the AI way is dramatically cleaner than the old human‑only way.
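    To put “a few megawatt-hours” in perspective, here’s a back-of-envelope sketch. Every number in it is an illustrative assumption (GPU count, per-GPU power draw, training time), not a published AlphaFold figure:

    ```python
    # Rough energy estimate for a GPU training run.
    # All numbers are illustrative assumptions, not measured values.
    num_gpus = 128        # hypothetical cluster size
    gpu_power_kw = 0.4    # ~400 W per GPU under load (assumed)
    hours = 24 * 7        # one week of training (assumed)

    energy_mwh = num_gpus * gpu_power_kw * hours / 1000
    print(f"≈ {energy_mwh:.1f} MWh")  # ≈ 8.6 MWh under these assumptions
    ```

    Even if the real figures are several times larger, the point stands: a one-time compute cost of that order is a different category of footprint from years of reagent- and water-hungry lab work per structure.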




  • > Artificial intelligence isn’t designed to maximize human fulfillment. It’s built to minimize human suffering.

    > What it cannot do is answer the fundamental questions that have always defined human existence: Who am I? Why am I here? What should I do with my finite time on Earth?

    > Expecting machines to resolve existential questions is like expecting a calculator to write poetry. We’re demanding the wrong function from the right tool.

    Pretty weird statements. There’s no such thing as just “AI” - the author should be more specific. LLMs aren’t designed to maximize human fulfillment or minimize suffering; they’re designed to generate natural-sounding language. If the author means AGI, then that’s not designed for any one thing - it’s designed for everything.

    Comparing AGI to a calculator makes no sense. A calculator is built for a single, narrow task. AGI, by definition, can adapt to any task. If a question has an answer, an AGI has a far better chance of figuring it out than a human - and I’d argue that’s true even if the AGI itself isn’t conscious.





  • Lemmy is the official name, which translates into Finnish as Sopuli. Lemmy itself is a bit like email: there’s no single “official” email service, just providers like gmail, outlook, and hotmail. You pick whichever one you want, and it then communicates freely with all the other providers. As you can see, I’m not a member of Sopuli myself, but I can still comment here.
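    That cross-instance commenting works because federated servers can look each other’s users up over plain HTTP. A minimal sketch of that discovery step, using WebFinger (the standard lookup ActivityPub platforms like Lemmy rely on) - the handle below is just an example, not a real account:

    ```python
    import requests  # third-party HTTP library

    def resolve_handle(handle: str) -> str:
        """Resolve a fediverse handle like 'someuser@sopuli.xyz' to its
        actor URL via the standard WebFinger endpoint."""
        user, domain = handle.lstrip("@").split("@")
        resp = requests.get(
            f"https://{domain}/.well-known/webfinger",
            params={"resource": f"acct:{user}@{domain}"},
            timeout=10,
        )
        resp.raise_for_status()
        # The 'self' link points at the user's ActivityPub actor document.
        for link in resp.json().get("links", []):
            if link.get("rel") == "self":
                return link["href"]
        raise ValueError(f"no actor URL found for {handle}")

    # Hypothetical usage:
    # print(resolve_handle("someuser@sopuli.xyz"))
    ```

    Once a server has that actor URL, it can fetch the profile and deliver posts and comments to the user’s inbox - which is why an account on one instance can comment on another.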








  • Perspectivist@feddit.uk to Atheist Memes@lemmy.world · Toxic empathy...?

    Not sure what the article is getting at, but there’s a thing called “weaponized empathy” - or “concern trolling” - which is a bad-faith argumentation tactic where you pretend to be worried about someone, when in reality you’re just using that as a cover for judgment or hostility.

    It can also be used more broadly. Think of how often “think of the children” gets trotted out as a justification to invade people’s privacy, when the supposed concern for kids’ wellbeing is really just an excuse.