Millions of articles from The New York Times were used to train chatbots that now compete with it, the lawsuit said.

  • spaduf@slrpnk.net · 11 months ago

    The existing industry that’s popped up around LLMs has conveniently ignored that what these models are doing may have been illegal the whole time, and a lot of the experts knew it. This is why it’s so important for folks to realize that the industry is not just thin wrappers around ChatGPT (and that interesting applications of this technology are largely being crowded out by the lowest-hanging fruit). If this is ruled as not fair use then the whole industry will basically disappear overnight and we’ll have to rebuild it from scratch either with a new business model that pays authors or as open source/crowd sourced models (probably both). All that said, we’d almost certainly be better off. OpenAI may have kicked off the most recent “gold rush,” but their methods have been terrible for both the industry at large and for further development of the tech.

    • KᑌᔕᕼIᗩ@lemmy.ml · 11 months ago

      They always should have had the right business model, where they paid for this access for AI training. They knew it was wrong, but in their rush to make a name for themselves they decided it was better to take without asking and ask for forgiveness later. Regardless of what happens now, people have already made names for themselves swindling the likes of Microsoft and will have long, well-paying careers out of it.

      • gedaliyah@lemmy.world [M] · 11 months ago

        It seems like it was almost necessary to go through this phase for the sake of developing the tech. Doesn’t a lot of CS research use web-crawling algorithms to gather data without verifying that the information is licensed for such use? What about the fediverse? It remains unclear what the copyright and licensing rules will be should they come into question. There is no EULA to access the fediverse, just a set of open protocols.

      • Blue_Morpho@lemmy.world · 11 months ago

        I seem to remember the NYT suing Google years ago for effectively the same thing. Google copies all NYT articles into its index, then sells ads to people searching for that copyrighted information.

    • helenslunch@feddit.nl · 11 months ago

      If this is ruled as not fair use then the whole industry will basically disappear overnight

      Nah, most of these companies have more than enough of their own users to mine data from.

      • spaduf@slrpnk.net · 11 months ago

        This is a fair point with regard to a handful of companies (Microsoft, Google, Meta), but there will still be an immediate loss in quality as they go back to basics on their data pipelines. Given how long they’ve spent playing catch-up in this space, I suspect progress will be pretty slow from there.

      • StereoTrespasser@lemmy.world · 11 months ago

        If this is ruled as not fair use then the whole industry will basically disappear overnight

        This is cute naiveté. This case will drag on for years and eventually be settled behind closed doors.

    • EnderMB@lemmy.world · 11 months ago

      These models can still be trained on data that they’re allowed to use, but I think what we’re seeing is that the better LLM services are probably trained on shocking amounts of private data, whereas the less performant ones probably don’t use stolen data.

      • spaduf@slrpnk.net · 11 months ago

        Textbooks are a big one that I suspect we’ll see a set of suits over, particularly because they seem to be some of the most valuable training data.

    • Blue_Morpho@lemmy.world · 11 months ago

      It certainly seems illegal, but if it is, then all search engines are too, because they do the same thing: search engines copy everything to their internal servers, index it, then sell access to that copyrighted data (via ads and other indirect revenue generators).

      • spaduf@slrpnk.net · 11 months ago

        Definitely not. Search engines point you to the original and aren’t by any means selling access; that is, the resources are accessible without using a search engine. LLMs are different because they fold the inputs into the final product in a way that makes accessing the original material impossible. What’s more, LLMs can fully reproduce copyrighted works and will try to pass them off as their own work.

        • Blue_Morpho@lemmy.world · 11 months ago

          Search engines point you to the original

          That seems to be the only missing part. OpenAI should provide a list of the links used to generate its response.

          That is, the resources are accessible without using a search engine.

          I don’t understand what you mean. The resources are accessible whether you have a dumb or a smart parser for your search.

          What’s more, LLMs can fully reproduce copyrighted works

          Google has entire copyrighted works copied on its servers. That’s how you can query a phrase and get a reply back. They are selling links to the copyrighted work. If Google’s search UI had a bug like OpenAI’s, you could get that copyrighted data from Google’s servers. Google has a “preview page” feature that gives you a page of copyrighted material without clicking the link. Then there was the Google Books lawsuit, which Google won, where several pages of copyrighted books are shown.

          • spaduf@slrpnk.net · 11 months ago

            Your first point is probably where we’re headed, but it still requires a change to how these models are built. Absolutely nothing wrong with a RAG-focused implementation, but those methods are not well developed enough for there to be turn-key solutions (rough sketch at the end of this comment). The issue is still that the underlying model is fairly dependent on works it does not own to achieve the performance standards that have become more or less a requirement for these sorts of products.

            With regard to your second point, it’s worth considering how paywalls will factor in. The Times intend to argue these models can be used to bypass their paywall. Something Google does not do.

            Your third point is wrong in very much the same way. These models do not have a built-in reference system under the hood and so cannot point you to the original source. Existing implementations specifically do not attempt to do this (there are, of course, systems that use LLMs to summarize a query over a dataset, and that’s fine). That is, the models themselves do not explicitly store any information about the original work.

            The fundamental distinction between the two is that Google does a basic amount of due diligence to keep their usage within the bounds of what they feel they can argue is fair use. OpenAI has so far largely chosen to ignore that problem.
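
            To make the RAG point concrete, here’s a toy sketch (invented corpus and URLs, naive keyword scoring, no real model call) of why a retrieval layer can cite sources while the base model alone cannot: the documents are looked up before generation, so attribution comes from the index, not from the weights.

            ```python
            # Toy RAG sketch. The corpus, URLs, and "generation" step are invented;
            # a real system would embed documents and call an LLM with the context.
            def retrieve(query, corpus, k=2):
                """Rank documents by naive keyword overlap with the query."""
                words = set(query.lower().split())
                ranked = sorted(corpus.items(),
                                key=lambda kv: len(words & set(kv[1].lower().split())),
                                reverse=True)
                return ranked[:k]

            def answer_with_sources(query, corpus):
                hits = retrieve(query, corpus)
                context = "\n".join(f"[{url}] {text}" for url, text in hits)
                # The sources are known *before* generation, so they can be cited.
                return f"Context:\n{context}\n(model answer would go here)"

            corpus = {
                "https://example.com/a": "Millions of articles were used to train the chatbot.",
                "https://example.com/b": "Search engines index pages and link to the originals.",
            }
            print(answer_with_sources("what was used to train the chatbot?", corpus))
            ```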

            • Blue_Morpho@lemmy.world · 11 months ago

              The Times intend to argue these models can be used to bypass their paywall. Something Google does not do.

              The Google preview feature bypasses paywalls. Google Books bypasses paywalls. Google was sued and won.

              • spaduf@slrpnk.net · 11 months ago

                Most likely the Times could win a case on the first point. Worth noting, Google also respects robots.txt, so if the Times wanted they could revoke access, and I imagine that would be considered something of an implicit agreement to its usage. OpenAI famously does not respect robots.txt.
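
                Honoring robots.txt is a cheap, one-call check. A minimal sketch of what a compliant crawler does before fetching a page, using only Python’s standard library (the bot name and article URL are made up):

                ```python
                # Minimal robots.txt compliance check; "ExampleBot" and the URL are illustrative.
                from urllib.robotparser import RobotFileParser

                parser = RobotFileParser()
                parser.set_url("https://www.nytimes.com/robots.txt")
                parser.read()  # fetch and parse the site's crawl rules

                url = "https://www.nytimes.com/2023/01/01/example-article.html"
                if parser.can_fetch("ExampleBot", url):
                    print("fetch allowed")   # a compliant crawler proceeds
                else:
                    print("disallowed")      # ...and otherwise skips the page entirely
                ```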

                Google Books previews are allowed primarily on the basis that you can thumb through a book at a physical store without buying it.

                • Blue_Morpho@lemmy.world · 11 months ago

                  Google Books previews are allowed primarily on the basis that you can thumb through a book at a physical store without buying it.

                  If that’s the standard, then any NYT article that has ever been printed is up for grabs, because you can read a few pages of a newspaper without paying.

    • yamanii@lemmy.world · 11 months ago

      If these companies can profit off stealing work and charging for access to it, why can’t I just pirate it myself without making anyone richer?

    • General_Effort@lemmy.world · 11 months ago

      Why would the NYT pay the authors again?

      I don’t see why we would be better off if the NYT and other newspapers get a windfall profit. I don’t see the reasoning here at all.

      ETA: 6 downvotes so far. Would anyone mind explaining what the problem is? I’m not lying when I say that I don’t see it.

      • assassin_aragorn@lemmy.world · 11 months ago

        The NYT as a company is much closer to its authors than the AI companies are to theirs. When it exercises copyright, the owner of those copyrights is the NYT, but the authors are the … well, authors. You’re right that a victory means newspapers get a lot more money.

        … But would that be a bad thing? If newspapers become more profitable again, maybe we can see a resurgence of local papers and more reliable news. Instead of MSNBC and Fox and CNN, various papers could be our main media sources.

        In any case, there are times when business interests align with employee interests, and this is one of them. The NYT is effectively saying with this lawsuit that OpenAI et al. have been stealing from them, and by proxy from the authors. A victory in this court case would strengthen author rights and ownership. A loss would mean big corporations can take anything made by the public, use it for their AI, and then charge money for it. The training materials have a quantifiable value: the difference between what a trained model and an untrained model sell for.

        • General_Effort@lemmy.world · 11 months ago

          Ok, I see. It’s “trickle-down economics”. Sorry if you don’t like the term. Feel free to suggest a better one.

          The simple fact is, it won’t work.

          There is no reason for the NYT, or any other newspaper, to share the profit. I’m not saying that none of the owners will, but most won’t. Even the generous ones won’t bother tracking down former employees or their heirs. In fairness, that’s not economics. It’s just an observation about how people behave. I do note, though, that you are not actually claiming that the authors will get paid.

          It won’t make newspapers more profitable, either. The owners of old newspapers will be able to extract a rent for their archives. But where would the extra cash flow for a new newspaper come from? You could say that they have a new potential buyer. But the US population is growing and every new person in the US is a new potential buyer. Every new business is a potential new advertising client. Having a new potential buyer is just not going to make the difference. Although, I do note that you are not actually claiming that this would make newspapers more profitable.

          At least I can say that in the last paragraph, you are wrong:

          A victory in this court case would strengthen author rights and ownership.

          No. It will not strengthen authors. Strengthening ownership strengthens owners. Strengthening ownership of buildings, strengthens landlords and does not strengthen construction workers. They have already been paid in full.

          I don’t know if the poster I originally replied to agrees with you, but I can definitely say that I simply do not share your ideology. I hold the view that intellectual property is a privilege granted by society, for the benefit of society. Call that socialism if you like; it’s in the US Constitution. OTOH, you clearly believe that IP entitles someone to a benefit from society, regardless of any harm to society. I don’t know if you believe in these suggested trickle-down benefits. I find it disturbing that you never actually went so far as to make a definitive claim.

          • assassin_aragorn@lemmy.world · 11 months ago

            If IP is freely given to AI developers, and the AI they create is freely given to the public, I have no issue at all. In fact, I’d love for that to happen. What I don’t want is what you describe at the end: companies using public and freely given IP to create a closed product sold for profit. And that’s exactly what I see these AIs as. As long as they have a premium model, they’re benefiting from society and profiting privately.

            I think we actually agree on a fundamental level what the ideal situation and dynamic should be. I just think the NYT winning the court case brings us closer to that ideal. And I think we both detest the idea of OpenAI getting rich off of other people.

            • General_Effort@lemmy.world · 11 months ago

              I really doubt we agree on any level. You’re pitching a neo-feudal hellhole. I don’t know if you believe that you will be one of the lucky few, or if you believe that the magic of the market will fix things. If it’s the former, then play the lottery instead. If it’s the latter, then you are just wrong. If you believe that you are just arguing for good ole American capitalism, then you are deluding yourself. This is the kind of nonsense that was abolished at the birth of the US, or any other developed nation. It won’t work. Never has, never will.

              • assassin_aragorn@lemmy.world · 11 months ago

                We agree that information should be freely given and provided to create products that are freely given and provided. We agree that it’s bad for information to be freely given to create products which are sold for private gain.

                I’m not sure you’re understanding what I’m saying. Do you disagree with any of the above? When it comes to this specific court case, either the NYT will win or OpenAI will win, and I’m saying the NYT winning is the better of the two outcomes. I’m not saying it’s the ideal we aspire to by any means.

                • General_Effort@lemmy.world · 11 months ago

                  I don’t think I agree with any of that. I’m not sure if I understand any of that.

                  I hold the view that intellectual property is a privilege granted by society, for the benefit of society.

                  We agree that information should be freely given…

                  I guess information means facts and data that cannot be intellectual property? I don’t necessarily agree that this should be freely given, depending on what “should” means. Unearthing facts takes effort and money. The logic behind some kinds of IP, like patents, is that they are supposed to allow, e.g., inventors to monetize their efforts. Society benefits by having more inventions/information. Put another way, it gives people collectively a way to pay inventors without working through the government.

                  In some cases, it would cause disproportionate harm to society to enforce a monopoly on certain information. Say, some newspaper sleuths uncover a corruption scandal. As soon as they publish, all the other news media will pick it up and report on it. I don’t think it’s a good thing for society that this is so hard to monetize. But I don’t have a solution.

                  … and provided to create products that are freely given and provided.

                  I’ve already mentioned that I agree with patents, despite all the abuses of the system. Patents provide a more direct incentive than government funding to think of ways of improving things. They also allow people to vote with their wallets on whether the effort is worth it. Electing representatives who decide on taxes and budgets, and who watch over government officials giving grants, is extremely indirect. The patent system cannot replace government funding, but I believe it is a beneficial complement.

                  We agree that it’s bad for information to be freely given to create products which are sold for private gain.

                  So, obviously I don’t agree with this. In fact, I don’t even understand why it would be bad. Why is it bad?

                  When it comes to this specific court case, either the NYT will win or OpenAI will win, and I’m saying the NYT winning is the better of the two outcomes.

                  How am I supposed to make sense of that in light of your first paragraph? Apparently, the second sentence (“…sold for private gain”) is the absolute, overriding concern. I don’t understand why. I especially don’t understand why it is so important to you that you would do away with free information if you can’t have that.

                  Obviously, this implies opposition to any kind of “public domain” information (expired patents or copyrights, scientific facts and laws, and so on…), until we have some kind of communist economic system. I don’t know if you have thought it through to that point.

      • spaduf@slrpnk.net · 11 months ago

        Pretty sure you were downvoted because it looks like you’ve misunderstood. The NYT do, in fact, pay their authors.

        • General_Effort@lemmy.world · 11 months ago

          Yes, which is why I have trouble understanding what you wrote earlier: “If this is ruled as not fair use then the whole industry will basically disappear overnight and we’ll have to rebuild it from scratch either with a new business model that pays authors[…]”.

          Why would the NYT pay the authors again when their archive is used for AI training?

          • spaduf@slrpnk.net · 11 months ago

            I don’t think NYT contributors should expect a payday out of this, but the precedent set may mean that they could expect some royalties for future work that they own outright. The precedent is really the important part here, and this will definitely not be the only suit.

            • General_Effort@lemmy.world · 11 months ago

              Ok. Then it’s not about authors, but about copyright owners. Bit misleading to talk about authors, then.

              FWIW it wouldn’t work. The NYT and other newspapers have their whole archives to sell. A few months of a daily newspaper is more than even someone like Stephen King has published in his entire life. It’s not even worth negotiating over such a tiny amount of writing. At best, you could do like with stock photography: they upload their texts to some website and accept whatever terms are offered. It might be a good business for some middlemen.

              A prolific amateur might find it a welcome bit of extra cash. But the story doesn’t stop there.

              The extra costs must be passed on to the user. You transfer wealth from the public to a few large-scale owners, aka rich people. And since these AIs are text generators, you can expect that actual authors will bear the brunt of that.

              Do you think trickle down has ever worked?

              • spaduf@slrpnk.net · 11 months ago

                Why do you keep trying to make this about trickle down? That’s not even sort of relevant to what’s going on here.

                My preferred solution actually has these models trained on crowd-sourced open datasets and run primarily locally.

                • General_Effort@lemmy.world · 11 months ago

                  Are you seriously trying to gaslight me? Like I can’t still read your original post…

                  Sure, you didn’t say “trickle down”. Call it whatever you like. It doesn’t change the facts.

  • gedaliyah@lemmy.world [M] · 11 months ago

    Digital computer-aided plagiarism is new ground for copyright. Google has successfully defended Google Books by arguing that it searches an archive of legally purchased and licensed books for specific information without reproducing the entire work. It’s the equivalent of visiting a physical library or bookstore and flipping through a book without actually purchasing it.

    AI is something else entirely. It’s more like a program that incorporates ALL of the text (training data) and alters it according to an algorithm. This has been a problem with news-crawling websites for a long time: they would download copyrighted text, edit multiple sources together, or use an algorithm to replace common words (a toy example below), etc., then post it on their own ad-filled (and often virus-ridden) sites. It seems like AI is just a more sophisticated version of that. In any case, I’m not a lawyer, so who knows what the argument will be on one side or the other.
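
    As a toy illustration of that word-swap trick (the synonym table here is invented; the real “spinners” were only slightly more sophisticated):

    ```python
    # Toy "article spinner": replace common words with canned synonyms,
    # the trick content farms used to disguise copied text.
    SYNONYMS = {"said": "stated", "big": "large", "show": "reveal", "new": "novel"}

    def spin(text):
        """Swap each word for its canned synonym, if one exists."""
        return " ".join(SYNONYMS.get(word, word) for word in text.split())

    print(spin("the big report said the new findings show a trend"))
    # -> "the large report stated the novel findings reveal a trend"
    ```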

    • echindod@programming.dev · 11 months ago

      I’m glad you bring up Google Books in this. Those lawsuits in the early teens about this issue are really important. But two things bother me: Google really won the case, but then basically abandoned the project. It’s still there, but a shell of what it used to be. I wonder if the case may be that, even though they won, they really lost. Or it could be Google just abandoning another project because they never cared about it.

      I think AI for searching books would be an amazing use case, and really, it isn’t that much different from what Google Books already is: an index of all of the published words. In fact, I can imagine AI helping you figure out whether a book actually has the info you need. That’s not what GPT is, but one could make one that could do it.

      I am torn. I am sort of a GPT naysayer, but on the other hand, is it really all that philosophically different from what humans do? I don’t think it is materially different, but it is a little.

    • whoisearth@lemmy.ca · 11 months ago

      I have to say it’s fun to watch. I’m bringing this up with my boss when he’s back, because the Fortune 500 companies are big on both products right now, and it makes sense from a technology perspective and as a business edge over their competitors.

      For me, it’s more a philosophical and moral question, and I’m curious how seriously our “AI Steering Committee” is taking the actions of these companies into account. Microsoft is one thing, as they’re so embedded, but OpenAI? How long is a company that wants to be perceived as “good” going to keep using ChatGPT?

      I don’t have answers. Genuinely curious.

        • whoisearth@lemmy.ca · 11 months ago

          If we continue to run into issues with AI and copyright laws, maybe copyright laws are the issue. Maybe our broken system is holding us back.

          • GarlicToast@programming.dev · 11 months ago

            I’m sure that Wine developers would be thrilled to be allowed to use leaked Windows code. I have a funny feeling that Microsoft may object.

            • whoisearth@lemmy.ca · 11 months ago

              Those with the power want to keep the power. The pattern is consistent be it Microsoft, Paramount or John Grisham.

              Now the question is: how do we abolish antiquated copyright laws while also ensuring people are adequately compensated for what they create?

              Off the top, Microsoft and Paramount don’t create. They’re not people. They shouldn’t be in the conversation and they have no rights (yes, I know that’s not the legal reality, but it’s what I believe). John Grisham has a leg to stand on.

              I don’t know what the solution is. I merely know the current solutions we have in place don’t work, but we continue to use them because those in power benefit from them.

    • Blue_Morpho@lemmy.world · 11 months ago

      On the one hand it should be a copyright violation, but if it is, then Google Search and all other search engines are too.

      The only reason you can search for an article and get a hit is that Google already read the page and copied it all to its internal servers, where everything is indexed. So when you search, Google can look up the keywords and provide you a link. (A toy sketch of that indexing step is below.)

      If there were a bug in Google’s search engine like OpenAI’s, you could craft a query that would leak Google’s indexed data.

      So all search engines are the same kind of copyright violators as OpenAI: they take data from everyone and profit from it (even if the profit is indirect, like paying salaries).
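
      A toy sketch of that copy-then-index step (the pages and URLs are made up); the point is that the engine keeps a full copy and serves links, not the text itself:

      ```python
      # Toy inverted index: store a copy of each page, map words to the
      # pages containing them, answer queries with links.
      from collections import defaultdict

      pages = {
          "https://news.example/story-1": "openai trained models on articles",
          "https://news.example/story-2": "search engines index copyrighted articles",
      }

      index = defaultdict(set)
      for url, text in pages.items():
          for word in text.split():
              index[word].add(url)  # word -> every page containing it

      def search(query):
          """Return pages containing every word of the query."""
          hits = [index[w] for w in query.lower().split()]
          return set.intersection(*hits) if hits else set()

      print(search("copyrighted articles"))  # -> {'https://news.example/story-2'}
      ```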

      • GarlicToast@programming.dev · 11 months ago

        Google is directing me to the NYT, which makes revenue for both parties. OpenAI does not direct me to the NYT; they try to replace them. That is a parasitic relationship. If you hacked Google to pull the article from their cache, you would go to jail.

        • Blue_Morpho@lemmy.world · 11 months ago

          If you hacked Google to pull the article from their cache, you would go to jail.

          Google has a “preview” button which shows the article without clicking the link.

          Is crafting a query to show an article “hacking”? Does that make the OpenAI researcher who got ChatGPT to show an article a hacker?

      • yamanii@lemmy.world · 11 months ago

        Google search doesn’t summarize the article in a way that leaves me no reason to ever visit the site, though.

    • kromem@lemmy.world · 11 months ago

      The thing is these are two separate arguments.

      One is whether or not training is infringement.

      The other is whether or not there need to be stricter filters on output to avoid copyright infringement.

      The second one is easy both to argue for and to implement; it just means funneling money toward a fine-tuned RAG model that detects infringement before spitting out content. I’d expect we’ll be seeing that in the near future. It’s similar to the arguments that YouTube was doomed at its acquisition because of rampant copyright infringement: they just created a tagging system, and now people complain about over-zealous DMCA enforcement. Generative AI will end up in the same place, with the same results, for cloud-based models.
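
      A naive sketch of what such an output filter could look like (the n-gram size and corpus are invented; a production system would use fingerprinting or a fine-tuned retrieval model rather than raw n-gram overlap):

      ```python
      # Naive copyright filter: block output that shares any verbatim
      # 8-word run with a corpus of protected text.
      def ngrams(text, n=8):
          words = text.lower().split()
          return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

      def looks_infringing(candidate, protected, n=8):
          cand = ngrams(candidate, n)
          return any(cand & ngrams(doc, n) for doc in protected)

      protected_corpus = ["full text of licensed or registered works would go here"]
      draft = "model output to be screened before it reaches the user"
      if looks_infringing(draft, protected_corpus):
          draft = "[withheld: overlaps a protected work]"
      print(draft)
      ```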

      The first is much more murky, and I’m extremely skeptical that the suits regarding it will be successful, given the degree of transformation and the relative scope of the material in any given suit compared to the total training set. As well, the purpose of the laws in the first place was to encourage creation, and setting back arguably the most powerful creative tool in history (particularly when it likely means being eclipsed by other nation states with different attitudes towards IP) doesn’t seem all that encouraging.

      If I were putting money on it: we’ll see multiple rulings that training is not infringement, which will settle the issue, but we will see “copyright detection as a service” models pretty much everywhere for a short period, until suddenly the use of generative AI by creatives is so widespread that its output being uncopyrightable shifts business models from media as a product to media as a service.

      • assassin_aragorn@lemmy.world · 11 months ago

        There is clearly value in a trained AI that an untrained model lacks; otherwise you could sell them as a product or service for the same price. That training has value, and the price difference between a trained and an untrained model is that value.

        Because training has a value, the training material has value as well. You can’t commercially extract value from someone’s product to make your own product and sell it, unless you buy their product wholesale or through a license.

        And if they argue that paying would be financially prohibitive to training, they admit that the training has financial value. It’d be cheap if the training material wasn’t valuable.

        I see two likely paths here for the future, presuming the court rules in favor of the NYT. The first is that AI companies work out a deal with publishers and media companies to use their work while not breaking the bank. The second is that AI companies don’t change the training process, but they change their financial model – if the AI is free to the public, they aren’t making money off of anyone’s work. They’d have to charge for ads or something.

        • kromem@lemmy.world · 11 months ago

          Spaceballs extracts almost all of its value from Star Wars without paying for it.

          You absolutely can extract value from things when the way in which you do it is fair use.

          Which is typically considered to be use that is transformative enough so as to not simply be derivative, or in the public interest.

          And I think you’d have a very difficult time showing LLMs’ general use to be derivative of any specific part of the training data.

          We’ll see soon, as these court cases resolve.

          And if the cases find in favor of the plaintiffs, “not charging” isn’t going to work out. You can’t copy material and not charge for it and get away with it. If there’s prior law that training is infringement, it’s unlikely the decision will be worded so narrowly that similar cases against companies that don’t charge will be found not to be infringement.

          Keep in mind one of the pending cases is against Meta, whose model is completely free to access and use.

    • kibiz0r@lemmy.world · 11 months ago

      In a perfect world, yes, I think AIs can and should be trained on real world content, but if those AIs still don’t understand the nuances of attribution, paraphrasing, and plagiarism, then that’s still a problem that needs to be addressed.

      What a joke. Oh okay, if the LLM’s output can annotate where the snippets came from, then it’s totally cool.

      The fuck are we doing? We’re really sleepwalking into a future where a few companies are able to slurp up the entire history of human creative thought, crunch some statistics about it with the help of severely underpaid Kenyans, and put a paywall around it, and that’s totally legal.

      Every time I see an “AI” (these are not fucking AI, and yet we’re fucking doomed already) apologist, I always think of Peter Gibbons explaining the “fractions of a penny” scheme. https://www.youtube.com/watch?v=yZjCQ3T5yXo

      “It becomes ours”

      Are we really this dumb? Maybe we deserve the dystopia we’re building.

      • Blue_Morpho@lemmy.world · 11 months ago

        We’re really sleepwalking into a future where a few companies are able to slurp up the entire history of human creative thought, crunch some statistics about it with the help of severely underpaid Kenyans, and put a paywall around it, and that’s totally legal.

        That future already happened ten years ago when NYT lost its lawsuits against Google.

      • brbposting@sh.itjust.works · 11 months ago

        I get it. It can seem alarming, and I won’t argue here about training on copyrighted works.

        a few companies are able to slurp up the entire history of human creative thought, crunch some statistics about it with the help of severely underpaid Kenyans, and put a paywall around it, and that’s totally legal.

        If a few companies can slurp up our entire public domain history and profitably paywall useful products of it, have there still been moral failings?

  • MilitantAtheist@lemmy.world · 11 months ago

    Everyone’s going nuts over AI being trained on copyrighted works. No one cares about how Spotify launched with warez-released MP3s.