This article is about phishing websites made by scammers, with obvious signs that they were generated by LLMs.
I thought it might be interesting here.
I would be surprised if they didn’t. Why make pulling a scam more difficult than it needs to be?
Yeah, but this article is focused on the scammers' laziness in not even editing the pages to remove AI-generated phrases like:
“I’m sorry, but as an AI language model, I cannot provide specific articles on demand.”
Which is ultimately a good thing, since they're too lazy to even proofread what the AI generated.