• 3 Posts
  • 593 Comments
Joined 1 year ago
Cake day: June 13th, 2023

  • Because the original flavor is very clearly a list of rules made to enable theocratic control and create a religious apartheid.

    I am pretty sure you are over-interpreting here. Theocratic control? Kinda, that’s the whole point of all the “holy books” - I mean, whaddya expect? But apartheid means separating people based on (perceived) ethnicity - and the ten commandments do not even attempt to separate people by religion. They are presented as rules for everyone, no distinction made.

    I can’t believe that you managed to present such a stupid take that I, a lifelong atheist who thinks all religion is stupid, have to defend the commandments… facepalm








  • That is indeed exactly my point. LLMs are just a language-tailored expression of deep learning, which can be incredibly useful, but should never be confused with any kind of intelligence (i.e. the ability to draw logical conclusions).

    I appreciate that you see my point and admit that it makes some sense :)

    Example where I think pattern recognition by deep learning can be extremely useful:

    • recheck medical imaging data of patients who have already been screened by a doctor, and flag some of it for a second look by another doctor. This could improve the chances of e.g. early cancer detection, without a real risk of false alarms reaching patients, because a real doctor will examine the flagged results in detail before a patient is even alerted to a potential diagnosis
    • pre-filter large amounts of data for potential matches -> e.g. searching for exoplanets by characteristic patterns in the data (Planet Hunters crowdsources this to humans)
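    Both examples above share the same shape, which can be sketched in a few lines (a purely illustrative sketch; the function name, item ids, and threshold are my own assumptions, not from any real system): a model assigns scores, and anything above a threshold is merely *flagged* for human review, never acted on automatically.

```python
def prefilter(scores: dict[str, float], threshold: float = 0.8) -> list[str]:
    """Return the ids of items whose model score warrants a second human look.

    Nothing is diagnosed or decided here: high scorers only enter a human
    review queue, low scorers are left untouched (illustrative sketch).
    """
    return sorted(item for item, score in scores.items() if score >= threshold)

# Hypothetical scan ids and scores, purely for illustration:
flagged = prefilter({"scan_001": 0.91, "scan_002": 0.30, "scan_003": 0.85})
```

    The safety property lives entirely in what you do with `flagged`: as long as the list only feeds a human review queue, a false positive costs a doctor a few minutes instead of alarming a patient.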

    But I am afraid that people who do not see why a very simple algorithm already counts as AI, yet consider LLMs to be AI, mentally reserve the label “AI” for whatever seems “AGI”-like / “human-like”. They mistake the pattern output of LLMs for a conscious being, and that is incredibly dangerous in terms of trusting the answers given by LLMs.

    Why do I think they subconsciously imply (self-)awareness / consciousness? Because refusing to count a control mechanism as simple as a room thermostat as (very limited) AI means viewing it as “too simple” to be AI - and a person with that view is making a qualitative distinction between control laws and “AI”, where a quantitative distinction between “simple AI” and “advanced AI” would be appropriate.
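    To make the thermostat point concrete, here is a minimal sketch of such a control law (my own illustration; the function and values are assumptions, not from any real device). It senses its environment and acts to reach a goal state - structurally an “agent”, differing from more advanced AI in degree of complexity, not in kind:

```python
def thermostat(current_temp: float, target: float, hysteresis: float = 0.5) -> str:
    """Bang-bang thermostat rule: sense temperature, pick a heater action.

    Arguably the simplest possible "AI": a fixed control law mapping a
    sensed state to an action, with a dead band to avoid rapid toggling.
    """
    if current_temp < target - hysteresis:
        return "heat_on"
    if current_temp > target + hysteresis:
        return "heat_off"
    return "hold"  # within the dead band: keep the current heater state
```

    Whether you call this “AI” or not, the step from here to an LLM is a long chain of quantitative increases in model complexity - nowhere along it does a switch labeled “understanding” get flipped.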

    And such a qualitative distinction, which elevates a complex word-guessing machine to “intelligence”, can only be made by people who actually believe there is understanding behind those word predictions.

    That’s my take on this.