- cross-posted to:
- technology@lemmy.world
Earlier this year, WIRED asked AI detection startup Pangram Labs to analyze Medium. It took a sampling of 274,466 recent posts over a six week period and estimated that over 47 percent were likely AI-generated. “This is a couple orders of magnitude more than what I see on the rest of the internet,” says Pangram CEO Max Spero. (The company’s analysis of one day of global news sites this summer found 7 percent as likely AI-generated.)
Human-generated slop has been flooding Medium since forever
How well does the “AI detection startup’s” product work? This is a big unsolved problem but I’d be hecka skeptical.
It doesn’t, and never will
That’s because of bots like you. (I kid to make a point.)
That’s exactly what a bot would say, to stay undetected.
That is why I liked the comparison with articles from 2018. Then you have comparable texts in the same format and can more easily figure out differences in your analysis.
If true, a jump from 3% to 40% is significant, to say the least. @Black616Angel the numbers in the article are 7% for the pre-2018 corpus and 47% for the post-2018 corpus. That is from less than 1 in 10 to almost 1 in 2, or a coin toss…
In 2018, 3.4 percent were estimated as likely AI-generated.
For 2024, with a sampling of 473 articles published this year, it suspected that just over 40 percent were likely AI-generated.
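The scale of that jump can be sanity-checked with quick arithmetic (a minimal sketch using the Originality AI estimates quoted above; the figures are the ones from the thread, not a new analysis):

```python
# Originality AI estimates quoted in the thread
pre_2018_share = 0.034  # 3.4% of sampled 2018 articles flagged as likely AI-generated
share_2024 = 0.40       # just over 40% of the 473-article 2024 sample

# Relative increase between the two samples
relative_increase = share_2024 / pre_2018_share
print(f"Roughly a {relative_increase:.0f}x increase")  # about 12x
```

That is roughly a twelvefold rise between the two samples, which is why the comparison against a fixed 2018 baseline is more informative than a single-year snapshot.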
My numbers were from the Originality AI part.
@Black616Angel yes, I’ve realized that and corrected my post while you responded 😉
Maybe the blurb was AI-generated? 🤦🏻‍♂️
It was an SEO hellhole from the start, so this isn’t surprising.
Do Forbes next!
Is there a single good article on Forbes? It’s always fucking clickbait without actual content.
After all these years, I’m still a little confused about what Forbes is. It used to be a legitimate, even respected magazine. Now it’s a blog site full of self-important randos who escaped from their cages on LinkedIn.
There’s some sort of approval process, but it seems like its primary purpose is to inflate egos.
It’s not just that it’s AI-generated … it’s also AI-influenced.
I know so many professional office workers who used to write some of the most boring, sometimes stupid emails because they didn’t know how to write, couldn’t get their message across, or constantly miscommunicated because they worded things wrong … now all of a sudden they’ve become professional writers, and all their emails look like auto-generated messages.
I’m guessing that many writers also take the AI shortcut. They get a bunch of content generated by an AI, then just rewrite it for themselves. Some content I see is lazily edited, and some is heavily edited. But I get the feeling that just about everyone is using it, because it’s an easy way to get a bunch of work done without having to think too much.
At work? Yeah, I’m gonna use AI to write that email. I didn’t think or do anything more than the minimum required before, and I’m not starting now. AI just makes it so that the same garbage I would have sent before now smells nice.
If you like writing as an art, why would you have the machine do that for you? If you like thinking, you can do the thinking and let the machine do the writing.
All of these are different uses.
Just do the minimum.
No one wants to read a 10 paragraph AI generated treatise.
The implication that rewriting GPT output makes one a professional writer … not sure we’re on the same page there. If you know how to use it for those results, great!
Omg the amount of times I’ve clicked on a Medium article in the last month and immediately knew it was AI is so frustrating!!! They aren’t even helpful articles because you can tell there is no real understanding.
I knew it would be the first platform to go. The same goes for Substack; that’s next.
Perhaps, but I don’t read anything on Substack unless I’m subscribed. Reputation is the entire point on Substack; without it, the content will get no traffic.
Shitty tech opinions were flooding Medium before, so it’s not much of a difference.
The best part about this is that new models will be trained on the garbage from old models, and eventually LLMs will just collapse into garbage factories. We’ll need filter mechanisms, just like in a Neal Stephenson book.
People learn and write program code with the help of AI. Let this sink in for a moment.
the first person who develops a browser that effectively filters out AI results is going to do very well