I am US-based, so any and all comments, criticisms, jokes, opinions, etc. will be colored (SEE NO U) with that as my lived perspective.

That said, I really and truly hope you've had at least an OK day today. If not, maybe tomorrow will be better, eh?

  • 0 Posts
  • 13 Comments
Joined 1 year ago
Cake day: July 25th, 2023

  • This story is from April 2021… so yeah, nothing new here.

    Gaetz showed nude pictures of women on House floor, per report

    WEST PALM BEACH, Fla. (CBS12) — Florida Republican Representative Matt Gaetz allegedly bragged about his sexual escapades with women and showed his colleagues in Congress nude pictures of the women he slept with, according to a new report from CNN.

    Gaetz is under investigation after allegations surfaced he had a relationship with a 17-year-old girl in 2019, per a report in the New York Times. The probe includes a Justice Department inquiry into potential violations of federal sex trafficking laws. Authorities are looking into whether the congressman paid the teen to travel with him across state lines.

    According to sources who spoke with CNN, Gaetz allegedly showed the nude images to leaders on the House floor. A source told CNN one image showed a naked woman with a hula hoop.

    Gaetz denied the accusations. “I have a suspicion that someone is trying to recategorize my generosity to ex-girlfriends as something more untoward,” Gaetz said in a statement to the New York Times.

    Gaetz also claims he's the target of a $25 million extortion plot involving an attorney in Pensacola who used to work as a federal prosecutor. The Pensacola law firm, Beggs & Lane, called the claims "false and defamatory."



  • Now, if said AI is generating foraging books more accurate than humans, that’s fine by me. Until that’s the case, we should be marking AI-generated books in some clear way.

    The problem is, the LLM AIs we have today literally cannot do this because they are not thinking machines. These AIs are beefed-up autocompletes without any actual knowledge of the underlying information being conveyed. The sentences are grammatically correct and read (mostly) like we would expect human-written words to read; however, the actual factual content is non-existent. The appearance of correctness just comes from the fact that the model was trained on information that was (probably mostly) correct in the first place.

    I mean, we should still be calling these things algorithms and not “AI” as “AI” carries a lot of subtext in people’s minds. Most people understand “algorithms” to mean math, and that dehumanizes it. If you call something AI, all of a sudden people have sci-fi ideas of truly independent thinking machines. ChatGPT is not that, at all.
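    The "beefed-up autocomplete" point above can be made concrete with a toy example. This is a deliberately simplified sketch (a bigram word model, not a neural network; all names and the training text are invented for illustration), but it shows the core mechanic the commenter is describing: the model only knows which words tend to follow which other words in its training data, so it can emit a fluent-looking sentence like "this mushroom is edible" or "this mushroom is poisonous" with equal indifference, because it has no concept of which one is true.

    ```python
    import random
    from collections import defaultdict

    # Toy "autocomplete": a bigram model that picks the next word based
    # only on how often it followed the previous word in the training
    # text. It has no knowledge of what any of the words mean.
    training_text = (
        "this mushroom is edible and this mushroom is poisonous "
        "and this berry is edible"
    )

    def build_bigrams(text):
        """Map each word to the list of words that followed it."""
        words = text.split()
        follows = defaultdict(list)
        for prev, nxt in zip(words, words[1:]):
            follows[prev].append(nxt)
        return follows

    def autocomplete(follows, start, length=6):
        """Generate text by repeatedly sampling a plausible next word."""
        out = [start]
        for _ in range(length):
            options = follows.get(out[-1])
            if not options:
                break
            out.append(random.choice(options))
        return " ".join(out)

    follows = build_bigrams(training_text)
    print(autocomplete(follows, "this"))
    ```

    Every sentence this produces is locally plausible (each word really did follow the previous one somewhere in the training data), yet whether the output asserts something true or deadly wrong is pure chance. Real LLMs are vastly more sophisticated next-token predictors, but the objective is the same, which is the commenter's point about foraging books.
    
    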


  • “Easily avoidable” if you know to look for them or if they’re labelled appropriately. This was just an example of a danger that autocomplete AI is creating today. Unscrupulous people will continue to shit out AI-generated nonsense and try to sell it on marketplaces that do zero vetting of the products in their store (one of the many reasons I no longer shop at Amazon).

    Many people, especially beginners, are not going to take the time to fully investigate their sources of knowledge, and to be honest they probably shouldn’t have to. If you get a book about mushrooms from the library, you can probably assume it’s giving valid information as the library has people to vet books. People will see Amazon as being responsible for keeping them safe, for better or worse.

    I agree that there is generally a bunch of nonsense about ChatGPT and LLM AIs that isn’t really valid, and we’re seeing something of an AI bubble, a self-feeding hype cycle. In the end it will shake out, but before that happens, some outright dangerous and harmful things are occurring today.