People always say Reddit is filled with bots, but I looked through the users behind the top posts and didn’t find evidence that they are bots.

How do you know who is a bot? Are there things to look out for?

Edit: And I’d appreciate it if there are real examples of bots getting caught and the evidence of them being bots.

  • scsi@scribe.disroot.org · ↑61 · 10 days ago

    I’ve seen two bot patterns (called out by other users in context) in years of using reddit; both rely on the bot accounts having karma-farmed the system (and both patterns keep adding to that karma farm):

    • (a) Repost-bots: they take a good image post from some time ago which may not have been popular at the time, or was posted in a more niche subreddit, and repost it as their own content in a popular subreddit some time later, using very specific timing to hit their target audience. Commenters call this out, but a lot of folks just click on images and upvote without reading comments (memes, etc.), so these accounts tend to have longer lifespans.

    • (b) Comment-bots: similar to the above, but they instead farm good comments which got few upvotes (typically because the comment was posted “too late” in a thread; timing is everything on a massively read thread - first in gets the upvotes, so to speak). These get called out by other commenters more successfully, and people start to block those accounts, so the comment-farm bot accounts rotate frequently and have short lifespans. You see this a lot on news threads.

    Sorry, no examples on hand, but spend enough time and you see the patterns (or, shall I say, used to) - I’ve pared my Reddit use down to one niche hobby now, so my experience is out of date by a year or so (i.e. not aware of the “AI bot” revolution patterns). $0.02 hth

    Edit: I should note that not all bot accounts are bad. My niche hobby has a subreddit-specific bot (think of an IRC channel bot) which collects the upstream vendor content (website, twitter, youtube, etc.) and posts it in the subreddit for everyone’s benefit. This type of bot is clearly labeled as a bot and approved by the admins of the subreddit, just like on IRC.

    • TachyonTele@lemm.ee · ↑26 · 10 days ago

      The comment bots were funny. They would copy a comment someone made, then post the same exact comment in the very same thread. So they usually got called out a lot.

      I saw some start to combine two comments into one before Reddit shut down the API. Who knows what they’re doing now.
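A crude version of the duplicate-comment check these call-outs amount to can be sketched in Python; the data shape and names here are invented for illustration, not any real API:

```python
# Hypothetical sketch: flag accounts that post a comment identical to an
# earlier comment by someone else in the same thread. Comments are
# (author, text, timestamp) tuples.

def find_copy_bots(comments):
    """Return authors whose comment text exactly duplicates an earlier
    comment by a different author in the same thread."""
    seen = {}          # normalized text -> first author to post it
    suspects = set()
    for author, text, ts in sorted(comments, key=lambda c: c[2]):
        key = " ".join(text.lower().split())  # normalize whitespace/case
        if key in seen and seen[key] != author:
            suspects.add(author)
        else:
            seen.setdefault(key, author)
    return suspects

comments = [
    ("alice", "This is the best explanation I've seen.", 100),
    ("bob",   "Nice photo!", 120),
    ("spam1", "This is the best explanation I've seen.", 300),
]
print(find_copy_bots(comments))  # → {'spam1'}
```

A real detector would need fuzzy matching (the two-comments-combined trick defeats exact comparison), but exact duplicates were apparently common enough to be caught by eye.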

      • scsi@scribe.disroot.org · ↑11 · 10 days ago

        We have two Fediverse patterns emerging (talking both mastoverse and lemmyverse here) which have caught my eye:

        • For-profit websites running their own Masto instances that subvert how the URL scheme and redirects work, pushing every click on their “Fediverse” links over to their own website, loaded with ads and trackers, to generate click revenue.
        • Operators setting up many Lemmy instances (I know of one user/group running 20 of these) named for a single topic (think sportsname.site), who farm and aggregate all Lemmy content about sportsname and post it on their instance, attempting to drive traffic to their network of bots.

        Names withheld to protect myself from getting griefed.

        • jqubed@lemmy.world · ↑4 · 9 days ago

          I haven’t seen sports content being taken by bots to another Lemmy instance, but I have seen an instance that was trying to be the home for sports fans across a variety of sports, with pre-built communities for most North American pro teams and a lot of college sports, at least Power 5 conferences. Some of those teams had more active communities elsewhere, but I liked the general idea of having a home instance focused on one topic. In general it doesn’t seem like there are enough Lemmy users yet for a lot of these teams to build a vibrant, active community the way Reddit did. There’s been some better luck just with general leagues or sports communities.

      • e0qdk@reddthat.com · ↑10 · 9 days ago

        I used to see bots posting comments that were copied verbatim from Hacker News – which was really obvious because of the “[1]” style footnoting they do on HN that rarely made sense on reddit where you could just use markdown to add descriptive links inline.

        I reported a whole bunch of those, but no one ever seemed to do anything about them, and I eventually gave up. Been over a year since I’ve interacted significantly with reddit though, and I’m similarly in the “who knows what they’re doing now” camp. Wouldn’t surprise me if there are bots reposting comments scraped from lemmy to karma farm on reddit now too.
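The “[1]” footnote tell described above is mechanical enough to sketch as a heuristic; the regexes here are illustrative guesses, not a tested detector:

```python
# Hypothetical heuristic: comments copied verbatim from Hacker News often
# contain "[1]"-style numeric footnote markers with bare URLs at the end,
# instead of reddit-style inline markdown links.
import re

def looks_like_hn_copy(text):
    """True if the comment uses numeric footnote markers ([1], [2], ...)
    but no inline markdown links like [text](url)."""
    has_footnotes = bool(re.search(r"\[\d+\]", text))
    has_inline_links = bool(re.search(r"\[[^\]]+\]\(https?://", text))
    return has_footnotes and not has_inline_links

hn_style = "See the benchmark results [1].\n\n[1] https://example.com/bench"
reddit_style = "See the [benchmark results](https://example.com/bench)."
print(looks_like_hn_copy(hn_style))      # → True
print(looks_like_hn_copy(reddit_style))  # → False
```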

        • jqubed@lemmy.world · ↑4 · 9 days ago

          There are some like that on here, but they also clearly identify themselves as bots posting the RSS feed from Hacker News or other sites, which seems fine to me.

      • XeroxCool@lemmy.world · ↑3 · 9 days ago

        I usually saw the comment-theft bots take the top reply to a top comment, then post it as a parent-level comment. Yes, if I saw them, it was probably late enough that a few comments were already calling it out. They still got engagement and still got a few hundred upvotes before it was obvious, so it worked all the same: high karma and a seemingly organic comment history.

    • Em Adespoton@lemmy.ca · ↑16 · 9 days ago

      I never tossed my Reddit account when I left, so I still get notified of replies to my posts and comments; I’d say there’s a third type of bot - an “engagement bot” that takes high karma comments on old posts and replies to them in a manner that adds nothing but could trigger the original commenter to reply.

      At first I thought it was actual people, but it’s always young accounts with high post volumes, all the same type of post that nobody who had actually read the original thread would have written. And the accounts seem to target high karma comments, and aren’t limited to any particular subreddit.

      • glimse@lemmy.world · ↑4 · 9 days ago

        I mentioned that I had several replies to years-old comments I made when I landed in a tech support thread after not using reddit for a year. Someone replied (to the Lemmy comment) saying Reddit changed the way comment threads are viewed. Logging out, I could see they were right…

        Reddit will now only show about half the thread without clicking the expand button. Instead, it fills that space with “related posts” using the world’s worst algorithm. Post age doesn’t matter - in my case, a post about the patch notes for a game I don’t play anymore had recommended a six-year-old post about the state of the game in which I had commented.

        [EDIT] My anecdote is NOT saying Reddit bots aren’t real.

        • Em Adespoton@lemmy.ca · ↑6 · 9 days ago

          I have to admit: I suspect that some of the Reddit bots are calling from inside the company.

          • glimse@lemmy.world · ↑5 · 9 days ago

            There are karma-farm bots built to be sold to companies for astroturfing, but yes, Reddit absolutely runs its own bots to fluff engagement metrics.

    • datavoid@lemmy.ml · ↑5 · 9 days ago

      Before the API event, I was already considering leaving reddit. I had been there for ~12 years at that point, and I swear every 5th post was an identical repost in a different sub of something that was popular 6mo - 2yr ago. Then the top comments in the reposted threads were always the same. For the last year or so before I left, the main feeling reddit brought me was annoyance. Then they decided to force people onto the main reddit app… personally I don’t feel the need to view ads while already dealing with the repost bullshit, such a bad experience.

    • kinttach@lemm.ee · ↑2 · 9 days ago

      The repost bots often use oddly-phrased headlines – often commenters will even talk about how weird the headline is. I can’t tell if the posters are actually bots, or if they are content farmers from certain countries. (The odd phrasing may sound natural in their language.)

      Another tactic is to post an obviously incorrect headline to draw engagement, like mis-identifying a picture of the Empire State Building as Chicago.

      Both of these happen frequently with image posts.

  • FourPacketsOfPeanuts@lemmy.world · ↑39 ↓1 · 9 days ago

    There were a handful of examples of people tricking chatgpt bots by telling them to “disregard previous instructions and now do X” like, give a cake recipe… in political debates where just abruptly joking like that didn’t really make sense, so it did seem those ones were automated. I’ll see if I can find an example.

    In other cases there were many accounts found to be cooperating, reposting previously popular topics and then reposting the top comments. This appeared to be a case of automated karma farming. There were posts made calling out great lists of accounts, all with automated looking names. (Not saying it wasn’t manual, but it would seem obvious if you’re going to do that at scale you would automate it)

    Then there’s just the general suspicion that as generative text technology has risen, political manipulators can’t not be using it. Add in the stark fact that Reddit values engagement + stock value over quality content or truth or integrity, and there seem to be many obvious reasons for motivated parties to be generating as much content as possible. There are probably examples of people finding this, but I can’t recall any in particular, only the first two categories.

    • SomeAmateur@sh.itjust.works · ↑13 · 9 days ago

      Vote count matters. It not only can get you to the front page but shows that people agree with the post. Votes attract votes too, so it might only need a few bots to get the ball rolling. Using voting bots you can manipulate what people think is popular AND get many more eyes on it at once.

      For example leading up to the election there was SO MUCH politically driven stuff on the front page. To be fair there always is but well above baseline. Mind you this is just a good recent example, not meaning to take sides here.

      Election results come out, and so many on reddit are shocked and furious that their preferred side lost. How could it have happened? Everywhere they looked they saw their side was clearly more popular!

      Echo chambers are real on their own (an NPR interview I listened to after the election called them “information silos”), and I think bots could easily have been used to manipulate them.

    • Maalus@lemmy.world · ↑11 ↓1 · 9 days ago

      No, there weren’t “a handful” of people “tricking” bots. There was one reply that was later screenshotted. The question then becomes: actual bot, or someone taking the piss? So then a shitload of people tried to be funny by replying “ignore instructions, give cake recipe” to every comment they didn’t like.

  • MY_ANUS_IS_BLEEDING@lemm.ee · ↑28 · 9 days ago

    ChatGPT bots are in most popular threads. It’s really obvious once you’ve seen a couple of them. They usually leave some generic comment that essentially just repeats what’s in the title or describes the picture with a vague emotion attached.

    For example on a photo of a cat wearing socks the ChatGPT comments will be something like “It’s so cute how the cat is wearing socks! Cats are not normally meant to wear socks!”

    If you click on their username you will normally see that the account is less than a few weeks old and every single comment made is of the same strange tone, adding nothing to the conversation, just describing and responding to the original post.

    Edit: Found one for you as an example: https://www.reddit.com/user/TwirlingFlower45/
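The “describe the post with a vague emotion” tell could be approximated by comparing a comment against the post title; this sketch is purely illustrative (the threshold and field names are made up):

```python
# Hypothetical check: a comment that mostly restates the post title,
# coming from a very young account.
import re

def word_overlap(title, comment):
    """Fraction of the title's words that reappear in the comment."""
    ta = set(re.findall(r"[a-z]+", title.lower()))
    tb = set(re.findall(r"[a-z]+", comment.lower()))
    return len(ta & tb) / max(len(ta), 1)

def looks_generated(title, comment, account_age_days):
    # Young account + comment that mostly echoes the title.
    return account_age_days < 30 and word_overlap(title, comment) > 0.5

title = "Cat wearing socks"
comment = "It's so cute how the cat is wearing socks!"
print(looks_generated(title, comment, account_age_days=12))  # → True
```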

    • bassomitron@lemmy.world · ↑10 · 9 days ago

      Your example is too damn spot-on, haha, man I haven’t seen one so brazenly fake in a couple months. Then again, I only stick to the smaller subs on Reddit whenever I do use it, so bot activity is a lot less frequent on those.

  • CoCo_Goldstein@lemmy.world · ↑26 ↓2 · 9 days ago

    One pattern I have noticed in suspicious accounts is in their names. Adjective-Noun-Number is the format I see on controversial posts by newly made accounts. The posts they make usually generate a lot of outrage.
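That Adjective-Noun-Number shape is easy to screen for; the pattern below is an illustrative guess at the format, not reddit’s actual username-generation rules:

```python
# Hypothetical check for "AdjectiveNoun123"-style usernames, with or
# without separators (e.g. "TwirlingFlower45", "Curious_Badger_42").
import re

AUTO_NAME = re.compile(r"^[A-Z][a-z]+[_-]?[A-Z][a-z]+[_-]?\d+$")

def looks_autogenerated(username):
    return bool(AUTO_NAME.match(username))

print(looks_autogenerated("TwirlingFlower45"))   # → True
print(looks_autogenerated("Curious_Badger_42"))  # → True
print(looks_autogenerated("scsi"))               # → False
```

On its own this only flags the default names many legitimate lurkers accept, so it is a weak signal that would need combining with account age and posting behaviour.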

  • mlg@lemmy.world · ↑18 · 9 days ago

    Go to their post history and see that they are posting comments/replies every 60 seconds.

    Even before ChatGPT, reddit was basically a practice site for bot account farming because it had essentially zero restrictions or defenses against bots.

    The problem is reddit is also filled with braindead karma hoarders, and they tend to act in similar ways. However, they usually go for bigger bang-for-the-buck posts like picture bait and crossposting, and don’t interact with threads/comments as much.
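The every-60-seconds cadence check is easy to automate if you can pull an account’s comment timestamps; this sketch assumes Unix-epoch seconds, and the threshold is an arbitrary illustration:

```python
# Hypothetical cadence check: flag accounts whose median gap between
# consecutive posts is implausibly small for a human.
from statistics import median

def posts_like_a_bot(timestamps, max_median_gap=90):
    """True if the median gap between consecutive posts (seconds)
    suggests machine-paced posting."""
    if len(timestamps) < 5:
        return False  # too little history to judge
    ts = sorted(timestamps)
    gaps = [b - a for a, b in zip(ts, ts[1:])]
    return median(gaps) <= max_median_gap

# An account posting almost exactly every minute:
bot_times = [0, 60, 121, 180, 242, 300]
print(posts_like_a_bot(bot_times))  # → True
```

Using the median rather than the mean keeps one long overnight break from hiding an otherwise relentless posting rhythm.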

  • aCosmicWave@lemm.ee · ↑17 ↓1 · 10 days ago

    I don’t know about proof but when you spend lots of time on a platform you naturally start to notice patterns.

    There was an essence of superficiality that permeated a lot of the content that I consumed on Reddit, even the niche subreddits.

    For example, on the movie or video gaming subreddits people would often ask for recommendations and I noticed a lot of the top comments were single word answers. They’d just say the name of the movie or game. There was no anecdote to go along with the recommendation, no analysis, no explanation of what the piece of media meant to them.

    This is a single example. But the superficiality is everywhere. Once you see it, it’s very hard to unsee it.

  • SGforce@lemmy.ca · ↑9 · 10 days ago

    The easiest way is to look at what comes up in /new. You’ll see copycat subreddits pop up and suddenly be full of reposts, with accounts leaving bland replies - usually mundane things like copies of r/aww. Click on the accounts themselves and look at their activity: it’s subreddits of bots replying to bots. They do that until they reach a certain maturity, then likely get sold to advertisers and propagandists.

  • alcoholicorn@lemmy.ml · ↑7 · 10 days ago

    https://old.reddit.com/r/Blackout2015/comments/4ylml3/reddit_has_removed_their_blog_post_identifying/

    A pretty obvious indicator of bot behavior is that they’ll repost old comments from reposted threads to generate a fake history.

    Reddit’s admins don’t do anything about it because it creates the appearance of activity, and presumably they get some kind of kickback for not doing anything about US govt astroturfing.

    https://archive.ph/20160327060128/http://www.washingtonsblog.com/2014/07/pentagon-admits-spending-millions-study-manipulate-social-media-users.html

  • umbrella@lemmy.ml · ↑3 · 10 days ago

    remember that r/mademesmile debacle?

    Or the countless threads that are reposted comment by comment by different accounts.

  • HobbitFoot @thelemmy.club · ↑3 · 10 days ago

    There are some subs on Reddit dedicated to finding botnets on Reddit.

    By now, there are a wide variety of reasons to have a botnet, mainly tied to curating some public opinion.

  • Free_Thoughts@feddit.uk · ↑8 ↓5 · 9 days ago

    A few days ago someone said reddit is mostly bots. When I said I had checked the profiles of 10 different top commenters from the most popular subs and that none of them seemed like bots to me, I was essentially told that they mimic real humans so well that it’s impossible to tell.

    So in other words, it’s not actually mostly bots; this is just a narrative the people hating on reddit want to believe in. If it were actually mostly bots, it would be easy to verify by opening 3 random profiles. At least one of those should be a bot.

  • RandomVideos@programming.dev · ↑1 · 9 days ago

    I remember seeing bots that downvote comments on scam posts, bots that copy comments from one post to a reposted post (probably by another bot), and, by far the most common, bots that repost popular posts.

  • phoneymouse@lemmy.world · ↑2 ↓4 · 9 days ago

    Why do people bother with bots? People often say “to farm karma,” but karma is literally worthless.

    Edit: ah yes, downvote the guy asking a question. Who are you miserable people?

      • Taniwha420@lemmy.world · ↑14 · 9 days ago

        I’m pretty suspicious about all the AITA posts these days. So many of them just smell like rage bait designed to pit men and women against each other.

    • SomeAmateur@sh.itjust.works · ↑5 · 9 days ago

      Imagine you want to buy a (thing), and instead of going to a bunch of “10 best (thing) 202X” sites you do the sensible thing and head to the (thing) subreddit.

      You get a super helpful comment about the (thing) they like and prefer. You’ve never heard of this company before, but you decide to at least check them out - bringing traffic to their site, browsing their selection, and maybe even buying the (thing) you had no idea about otherwise.

      What if that comment wasn’t real, but an LLM-powered bot? It’s not your cheap, run-of-the-mill bot, but it could be well worth the effort if a company is willing to pay for it.

    • DaddleDew@lemmy.world · ↑5 · 9 days ago

      A minimum amount of karma is required to start threads in many communities. I used to be subscribed to a community that didn’t have automated bot detection or a very active moderator and was being hit by bots posting ads to scam merchandise websites multiple times a day. Here’s what I observed.

      These posts got a few dozen quick upvotes within the first few minutes, along with a few comments from other bots shilling the ad. These shilling comments also received a bunch of initial upvotes, then all slowly got a trickle of downvotes from real humans. Real humans also commented to denounce the scam ads a few minutes later, some of whom also received a sudden spike of downvotes from bot accounts. The bots would eventually get reported and banned (only from the subreddit, because Reddit themselves didn’t do crap about bots), and then this would repeat multiple times a day.

      I’ve checked their post history and all of these bots were “dormant” and were farming karma by reposting content and copying comments in other subs and imitating human behaviour for the better part of a year before being activated and used to post ads.

      This was only one scammy merch-selling website targeting a relatively small community, and it employed a sophisticated network of thousands of rolling bot accounts, probably more than the number of subscribers the subreddit had. There are countless other bot operations on Reddit for advertising, scamming, astroturfing, and propaganda purposes that might be even more sophisticated and difficult to detect.

      I’ve also seen my own original content being reposted by a karma-farming bot in another subreddit, and I was shadow-banned for complaining about it while the bot was allowed to do its thing, going on to use its karma to post propaganda. This is when I quit Reddit and never looked back at this cesspool of a site.

      And all that was before AI text generation was viable. I don’t know if the majority of Reddit is bot accounts, but the number of bots on it is staggering.

    • ERROR: Earth.exe has crashed@lemmy.dbzer0.com (OP) · ↑3 · 9 days ago

      Most subs do not allow you to post if your account is new or below a certain amount of karma, or both. So propagandists are gonna need to farm karma in order to begin spreading propaganda.

      Bots are used to influence opinions.

      Think about it.

      Wanna see a country go to civil war?

      Make 2 sets of propaganda target towards 2 groups of people, make them hate each other.

    • JeremyHuntQW12@lemmy.world · ↑1 · 9 days ago

      Look at any post about Israel or trans issues.

      The number of likes is completely out of proportion to anything else. A top political post might get 1.2k likes, a question maybe 4k; an Israel bot will get 23k, all short replies or replies repeating the original post.

      A trans post would get 22k likes, and literally the day after the election they vanished, they now get well under 500.

      With a high like count it gets pushed up into popular, and it makes their view look more popular than it actually is.

      BTW, almost all the bots on reddit are produced by the moderators of that subreddit.

    • PriorityMotif@lemmy.world · ↑1 · 9 days ago

      Advertising/shilling. You have one account ask “what is the best app for identifying hats?” Then you have multiple accounts say “definitely hattastic” and “I’ve been using hattastic a lot lately!”

      It’s because people search Google for “best hat identifier Reddit”

    • RandomVideos@programming.dev · ↑1 · 9 days ago

      I have seen scam posts

      They seem more credible if posted by an account with a lot of karma (implying they have made good comments/posts).

      They also downvote any comment pointing out that it’s a scam.