Barack Obama: “For elevator music, AI is going to work fine. Music like Bob Dylan or Stevie Wonder, that’s different”

Barack Obama has weighed in on AI’s impact on music creation in a new interview, saying, “For elevator music, AI is going to work fine”.

  • remus989@sh.itjust.works · 1 year ago

    Do people actually care what Obama has to say about AI? I’m just having a hard time seeing where his skillset overlaps with this topic.

    • EnderMB@lemmy.world · 1 year ago

      Probably as much as I care about most other people’s thoughts on AI. As someone that works in AI, 99% of the people making noise about it know fuck all about it, and are probably just as qualified as Barack Obama to have an opinion on it.

      • krazzyk@lemmy.world · 1 year ago

        What do you do exactly in AI? I’m a software engineer interested in getting involved.

        • EnderMB@lemmy.world · 1 year ago

          I work for Amazon as a software engineer, and primarily work on a mixture of LLMs and compositional models. I work mostly with scientists and legal entities to ensure that we are able to reduce our footprint of invalid data (i.e. anything that includes deleted customer data, anything that is blocked online, things that are blocked in specific countries, etc.). It’s basically data prep for training and evaluation, alongside in-model validation for specific patterns that indicate a model contains data it shouldn’t have (and then releasing a model that doesn’t have that data within a tight ETA).

          It can be interesting at times, but the genuinely interesting work seems to happen on the science side of things. They do some cool stuff, but have their own battles to fight.

          • krazzyk@lemmy.world · 1 year ago

            That sounds cool. I’ve had roles that were heavy on data cleansing, although never on something so interesting. What languages/frameworks are used for transforming the data? I understand if you can’t go into too much detail.

            I did wonder how much software engineers contribute in the field. So it’s the scientists doing the really interesting stuff when it comes to AI? Not surprising, I guess 😂

            I’m a full stack engineer, I was thinking of getting into contracting, now I’m not so sure, I don’t know enough about AI’s potential coding capabilities to know whether I should be concerned about job security in the short, or long term.

            Getting involved in AI in some capacity seems like a smart move though…

            • EnderMB@lemmy.world · 1 year ago

              We do a lot of orchestration of closed environments, so that we can access critical data without worry of leaks. We use Spark and Scala for most of our applications, with step functions and custom EC2 instances to host our environments. This way, we build verticals that can scale with the amount of data we process.

              If I’m perfectly honest, I don’t know how smart a move it is, considering our org just went through layoffs. We’re popular right now, but who knows how long for.

              It can be interesting at times, but to be honest if I were really interested in it, I would go back and get my PhD so I could actually contribute. Sometimes, it feels like SWE’s are support roles, and science managers only really care that we are unblocking scientists from their work. They rarely give a shit if we release anything cool.
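The invalid-data cleanup described above can be sketched in a few lines. This is an illustrative toy, not Amazon’s actual pipeline (which, per the comment, runs on Spark/Scala with step functions and EC2); every record field and name here is hypothetical:

```python
# Toy sketch: drop training records belonging to deleted/blocked
# customers before the data reaches model training.
# All names and fields are made up for illustration.

def filter_invalid(records, blocked_ids):
    """Keep only records whose customer_id is not on the blocklist."""
    return [r for r in records if r["customer_id"] not in blocked_ids]

records = [
    {"customer_id": "a1", "text": "keep me"},
    {"customer_id": "b2", "text": "deleted customer, must go"},
]
blocked_ids = {"b2"}

clean = filter_invalid(records, blocked_ids)
print(len(clean))  # 1 record survives the filter
```

At real scale this would be a distributed anti-join against the blocklist rather than a Python list comprehension, but the logic is the same.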

        • unexpectedteapot@lemmy.ml · 1 year ago

          There is a tad bit of difference between caring about an opinion and tolerating one. Obama’s opinions on AI are unqualified pop culture nonsense. They wouldn’t be relevant in an actual discussion that would cite relevant technical, economical and philosophical aspects of AI as points.

          • lugal@lemmy.world · 1 year ago

            Sure, care about it or don’t, I don’t care. It was the “being qualified to have an opinion” bit I didn’t like. I don’t have to be qualified to have an opinion, and I can write an opinion piece; sure enough, fewer people will read it than Obama’s. I might not be qualified to teach on the subject, but everyone is qualified to form their own opinion.

            But maybe that’s just overly pedantic on my side. You are qualified to have a different opinion.

      • Queen HawlSera@lemm.ee · 1 year ago

        I know this was once said about the automobile, but I am confident in the knowledge that AI is just a passing fad

        • ricecake@sh.itjust.works · 1 year ago

          Why? It’s a tool like any other, and we’re unlikely to stop using it.

          Right now there’s a lot of hype because some tech that made a marked impact of consumers was developed, and that’s likely to ease off a bit, but the actual AI and machine learning technology has been a thing for years before that hype, and will continue after the hype.

          Much like voice-driven digital assistants, it’s unlikely to redefine how we interact with technology, but every other way I set a short timer has been obsoleted at this point, and I’m betting that autocomplete having insight into what you’re writing will just be the norm going forward.

            • Trantarius@programming.dev · 1 year ago

              The Chinese room argument doesn’t have anything to do with usefulness. Its about whether or not a computer that passes the turing test is conscious. Besides, the argument is a ridiculous one to begin with. It assumes that if a subcomponent of a system (ie the human) lacks “understanding”, then the system itself (the human + the room + the program) lacks understanding.

              • ricecake@sh.itjust.works · 1 year ago

                Anything else aside, I wouldn’t be so critical of the thought experiment. It’s from 1980 and was intended as an argument against the thought that symbolic manipulation is all that’s required for a computer to have understanding of language.
                It being a thought experiment that examines where understanding originates in a system that’s been given serious reply and discussion for 43 years makes me feel like it’s not ridiculous.

                https://plato.stanford.edu/entries/chinese-room/#LargPhilIssu

            • SCB@lemmy.world · 1 year ago

              You not having a job where you work at a level to see how useful AI is just means you don’t have a terribly important job.

              • aesthelete@lemmy.world · 1 year ago

                What a brain-drained asshole take to have. But I’ve seen your name before in my replies, and it makes sense that you’d have it.

                AI is useful for filling out quarterly goal statements at my job, and boy are those terribly important… 😆

            • ricecake@sh.itjust.works · 1 year ago

              What?

              At best you’re arguing that because it’s not conscious it’s not useful, which… No.
              My car isn’t conscious and it’s perfectly useful.

              A system that can analyze patterns and either identify instances of the pattern or extrapolate on the pattern is extremely useful. It’s the “hard but boring” part of a lot of human endeavors.

              We’re gonna see it wane as a key marketing point at some point, but it’s been in use for years and it’s gonna keep being in use for a while.

              • aesthelete@lemmy.world · edited · 1 year ago

                A system that can analyze patterns and either identify instances of the pattern or extrapolate on the pattern is extremely useful. It’s the “hard but boring” part of a lot of human endeavors.

                I agree with most of what you’re saying here, but just wanted to add that another really hard part of a lot of human endeavors is actual prediction, which none of these things (despite their names) actually do.

                These technologies are fine for figuring out that you often buy avocados when you buy tortillas, but they were utter shit at predicting anything about, for instance, pandemic supply chains…and I think that’s at least partially because they expect (given the input data and the techniques that drive them) the future to be very similar to the past. Which holds ok, until it very much doesn’t anymore.
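The tortillas/avocados kind of association can be illustrated with a toy co-occurrence count. This is a sketch of the general idea only, not any production recommender, and it makes the point above concrete: it can only surface patterns already present in past baskets.

```python
# Count how often pairs of items appear together across shopping
# baskets. Frequent pairs are the "you often buy avocados with
# tortillas" signal; nothing here can anticipate an unprecedented
# event like a pandemic supply shock.
from collections import Counter
from itertools import combinations

baskets = [
    {"tortillas", "avocados", "limes"},
    {"tortillas", "avocados"},
    {"bread", "butter"},
]

pairs = Counter()
for basket in baskets:
    for pair in combinations(sorted(basket), 2):
        pairs[pair] += 1

print(pairs[("avocados", "tortillas")])  # 2: seen together in two baskets
```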

                • jasondj@ttrpg.network · edited · 1 year ago

                  I’m sorry, they aren’t good at predicting?

                  My man, do you have any idea how modern meteorology works?

                  A ton of data gets dumped into a ton of different systems. That data gets analyzed against a bunch of different models to produce forecasts. The median of all those models is essentially what makes it into the forecast on the news.

                • ricecake@sh.itjust.works · 1 year ago

                  Well, I would disagree that they don’t predict things. That’s entirely what LLMs and such are.

                  Making predictions about global supply chains isn’t the “hard but boring” type of problem I was talking about.
                  Circling a defect, putting log messages under the right label, or things like that is what it’s suited for.

                  Nothing is good at predicting global supply chain issues. It’s unreasonable to expect AI to be good at it when human intelligence is also shit at it.

      • Rooskie91@discuss.online · edited · 1 year ago

        Absolutely not. We need to learn the difference between intelligence and expertise. Is Obama an intelligent person? Of course. Is he allowed to have and voice an opinion? Sure, it’s a free country. Does that mean that his opinion is informed by expertise and should dictate peoples actions and therefore the direction of an industry? No.

        This is the same logic that allows right wing ideologues to become legitimate sources of information. A causal interest in a topic is NOT the same as being an industry expert, and the opinions of industry experts should be weighted far heavier in our minds than people who “sound like they know what they’re talking about”.

        • aesthelete@lemmy.world · edited · 1 year ago

          This is the same logic that allows right wing ideologues to become legitimate sources of information. A causal interest in a topic is NOT the same as being an industry expert, and the opinions of industry experts should be weighted far heavier in our minds than people who “sound like they know what they’re talking about”.

          And your logic is the same followed by government agencies when they effectively agree to regulatory capture because all of the industry experts work at this company, so why not just let the company write the rulebook? 🤔

          I personally don’t believe we need “industry experts” in every new, emerging type of tech to be the sole voices considered about them because that’s how we largely arrived at the great enshitterment we’re already experiencing.

          Edit: It’s really quite a baffling take (given a moment’s thought) that the big problem and/or a large problem facing America is that we aren’t cozy enough with “industry experts”. Industry practically write the policy in this country, and the only places where we have any kind of great debate (e.g. net neutrality, encryption) is where there are conflicting industry concerns.

    • Mango@lemmy.world · 1 year ago

      I’m just a dude who does general labor, and I have lots of insights about AI just because I’m interested and smart. People tend to come to me just to hear what I have to say.

      Now look at Obama. He’s all of that and much more in the eyes of a society that’s put Obama in the spotlight. He can talk about totally boring stuff and people will still respect his opinion.

      • Inmate@lemmy.world · 1 year ago

        If he’s an unqualified bystander, then what the fuck are you?

        I’m always surprised that the people with all the answers only share them with thirty other assholes on the Internet.

        I’m confident a 14 year old can write their own AI, maybe even a smart 10 year old.

        Here’s instructions for kids:

        https://youtu.be/XJ7HLz9VYz0?si=1QN3fqT03HSMufib

        You think Obama can’t wrap his head around a little algebra?

        Why, when speaking intelligently and thoughtfully on the subject, is he so wrong in his assessment, when you, in one lazy sentence, are so right?

        I’m really worried about would-be wise people just throwing in the towel cause they don’t know how much better they could be with a little discipline, and settle for being clever here and there.

        • lloram239@feddit.de · edited · 1 year ago

          Why, when speaking intelligently and thoughtfully in the subject, is he so wrong in his assessment, when you, in one lazy sentence, are so right?

          Obama is employing good old human exceptionalism and moving the goalposts, a tried and true method of argumentation that has continued to fail for the last 50+ years when it comes to AI. “AI is good at X, but not Y” becomes “AI is good at X and Y, but not Z” the next year. Focus on a tiny niche that AI hasn’t covered yet, while ignoring the pace at which AI is advancing. It wasn’t too long ago that people were proclaiming that computers could never be creative. Nowadays that has switched to “but it can’t beat the human masters”. Well, guess what? That held true for neither chess nor Go, and it won’t work out for Bob Dylan music either. Be prepared for a future where AI is better at everything. It will come, and much sooner than people expect.

          It’s also worth keeping in mind the quantity of AI-generated content. I still hear tons of artists talk as if AI were competing with them on a level playing field, but in the time they finish one image, AI finishes thousands or even millions. This is not just about AI replacing the human, but about completely shifting how we deal with information in general. Something like ChatGPT isn’t interesting because it can write better websites than a human, but because it completely bypasses the need to visit websites in the first place. You ask the AI and the AI delivers the answers. There is no intermediate step where knowledge needs to get dumped into a static website or a book.

        • Fisch@lemmy.ml · 1 year ago

          As far as I know, Obama has nothing to do with IT and doesn’t have a big interest in it. A lot of people on here are probably more qualified than he is when it comes to these topics simply because they spent a lot of their free time learning about it.

    • Sagifurius@lemm.ee · 1 year ago

      Because he’s a world leader, and AI programs are now answering search engine queries with what you want to hear, not actual answers. Ain’t no way he’s unaware of that.

    • Inmate@lemmy.world · 1 year ago

      Because you can teach a teen to do it in two weeks. He was a constitutional law professor, as well as the first elected African-American president in the United States. I learned LLMs in a couple months and I never used a comp until 2021. Why are you gatekeeping?

      • Daxtron2@lemmy.ml · 1 year ago

        Using the end product and having any idea how it works are two VERY different things.

        • Inmate@lemmy.world · 1 year ago

          I agree, my argument is that both aren’t challenging for even the average person if they really want/need to understand how these models produce refined noise informed by human patterns.

          There are electricians everywhere you know.

          This isn’t a random person thoughtlessly yelling one-sentence nonsense pablum on the Internet like you.

          You think this person can’t understand something as straightforward as programming, coming from law?

          https://en.wikipedia.org/wiki/Barack_Obama

          Please link your Wikipedia below 🫠

          • Daxtron2@lemmy.ml · 1 year ago

            It’s a bit more complicated than you’re making it out to be lmfao, there’s a reason it’s only really been viable for the past few years.

            • skulkingaround@sh.itjust.works · 1 year ago

              The principles are really easy, though. At their core, neural nets are just a bunch of big matrix multiplication operations. Training is still fundamentally gradient descent, which, while it is a fairly new concept in the grand scheme of things, isn’t super hard to understand.

              The progress in recent years is primarily due to better hardware and optimizations at the low levels that don’t directly have anything to do with machine learning.

              We’ve also gotten a lot better at combining those fundamentals in creative ways to do stuff like GANs.
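A minimal sketch of those two fundamentals: a “network” reduced to a single weight (a 1×1 matrix multiply), trained by gradient descent on squared error over the data y = 2x. Illustrative only; real nets stack many such weights into large matrices and nonlinearities.

```python
# One-weight "neural net" trained by gradient descent.
# forward pass: pred = w * x   (matrix multiply in one dimension)
# loss:         (pred - y)^2
# update:       w -= lr * d(loss)/dw

def train(steps=200, lr=0.1):
    w = 0.0  # single weight, initialized at zero
    data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]  # y = 2x
    for _ in range(steps):
        for x, y in data:
            pred = w * x
            grad = 2 * (pred - y) * x  # derivative of squared error w.r.t. w
            w -= lr * grad             # gradient descent step
    return w

print(round(train(), 3))  # converges to 2.0, the true slope
```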

    • lledrtx@lemmy.world · 1 year ago

      AI researcher (PhD) here and for what it’s worth, Obama got it extremely right. I saw this and went “holy shit, he gets it”

      • gmtom@lemmy.world · 1 year ago

        Yeah, I don’t believe you at all. I got my master’s in AI 8 years ago and have been working in the field ever since, and no one with any knowledge would agree with you at all. In fact, I showed a couple of my colleagues the headline of this article and they both just laughed.

      • Azhad@lemmy.world · 1 year ago

        If you don’t think AI will get there and surpass everything humans have done in the past, you should change career.

        • lledrtx@lemmy.world · 1 year ago

          I’m saying this because I do this for a living. It has become obvious to everyone in research (for example - https://arxiv.org/abs/2311.00059) that "AI"s don’t understand what they are outputting. The secret sauce with all these large models is the data scale. That is, we have not had real algorithmic breakthroughs - it’s just model scale and data scale. So we can make models that mimic human language and music etc but to go beyond, we need multiple fundamentally different breakthroughs. There is a ton of research attention now so it might happen, but it’s not guaranteed - the improvements we’ve seen in the past few years will plateau as data plateaus (we are already there according to some, i.e we’ve used all the data on the Internet). Also, this - https://arxiv.org/abs/2305.17493v2

          • Azhad@lemmy.world · 1 year ago

            You do it for a living and you can’t even understand what a general AI is. Alas, I long ago understood that mostly everyone is profoundly incompetent at their own job.

  • eronth@lemmy.world · 1 year ago

    While reassuring for many to hear, that’s only going to be true for so long. Eventually it’s going to be real fucking good at making “real” music. We need to be preparing for those advancements rather than acting like they’ll never come.

    • IDontHavePantsOn@lemm.ee · edited · 1 year ago

      I feel very reassured to hear that from the AI expert / musical virtuoso himself, 62 year old, former United States President Barack Obama.

    • 31337@sh.itjust.works · 1 year ago

      To make “real” music, AI will probably need a lot of help. Image generators and chat bots seem to have their own, very boring style. I’ve seen videos of artists using AI tools in their workflow, but it’s still a very involved process. I think it will just be another tool for musicians and sound engineers.

      • eronth@lemmy.world · 1 year ago

        In the immediate term, yeah, I 100% agree. However, I don’t think we should bank on that being true forever.

  • kandoh@reddthat.com · 1 year ago

    One of my jobs involved updating blogs for small businesses. I had a Shutterstock subscription for the images that go along with these blog posts. For this task, I think AI-generated images work a lot better than stock photography.

    • boogetyboo@aussie.zone · 1 year ago

      There’s some recruitment company advertising jobs on LinkedIn. All the pictures are clearly AI generated and they’re terrifying. Uncanny Valley freaks grinning at you from your screen.

    • b3nsn0w@pricefield.org · edited · 1 year ago

      you can already api into chatgpt and dall-e 3 as one cohesive service, and make a system in an afternoon’s work that reads the article, decides on a thumbnail, and automatically generates one. the whole thing costs like 8 cents per article.
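As a sketch of the “read the article, decide on a thumbnail” step only: the hypothetical helper below composes an image prompt locally; the actual generation would then be one call to the image API with this string. The function name, parameters, and wording are all made up for illustration.

```python
# Hedged sketch: build a thumbnail prompt from article metadata.
# Sending the resulting string to an image-generation endpoint is
# the (omitted) second step of the pipeline described above.

def thumbnail_prompt(title, summary, style="clean flat illustration"):
    """Compose an image-generation prompt from an article's title/summary."""
    return (
        f"{style} for a blog post titled '{title}'. "
        f"Theme: {summary}. No text in the image."
    )

print(thumbnail_prompt("Why We Switched to Rust", "memory safety at scale"))
```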

      • Fisch@lemmy.ml · edited · 1 year ago

        You could also self host Stable Diffusion to save some money

        Edit: Or is it free to use with ChatGPT Premium (or whatever it’s called)? Then that would actually be cheaper

        • b3nsn0w@pricefield.org · 1 year ago

          no, the premium stuff doesn’t give you api access. which is total bs, but yeah, it’s only for that grey interface. (i’m also quite salty that the playground has no easy to access image inputs but that’s beside the point)

          you’re completely right about self-hosting sd, it’s just a matter of prompting. sd workflows tend to get a little more experimental but i guess you could still make chatgpt write a few prompts that are close to correct and just manually rerun if an image failed

          • Fisch@lemmy.ml · 1 year ago

            Maybe ChatGPT knows how to write SD prompts at this point. You could try just telling it to generate a prompt specifically for SD.

    • S_204@lemmy.world · 1 year ago

      My office is using AI to do this year’s xmas card. It’s pretty cool.

  • the_q@lemmy.world · 1 year ago

    Young people think all this AI stuff is great and older folks are suspicious. I think older folks are right this time.

    • interceder270@lemmy.world · 1 year ago

      Eh. Hard to take people’s opinion on art seriously considering what’s popular.

      This is an interesting crossroads where greedy creators have to fight against greedy owners.

    • Blueberrydreamer@lemmynsfw.com · 1 year ago

      My experience has been the opposite. The boomers think it’s gonna do everything for them, and the young people I know think it’s gonna destroy the world.

    • desconectado@lemm.ee · edited · 1 year ago

      Like any tool, it’s as great as its user. I think younger generations are probably more eager to explore and expand, but it’s OK to be suspicious when it’s used incorrectly.

      AI is great for some specific applications, but I had a discussion last week with someone asking ChatGPT for immigration advice… Ehh, no thanks, I’d rather talk to an actual expert.

    • egeres@lemmy.world · 1 year ago

      I agree with the conclusions of the boomers, but for very different reasons. I think long-term AI will produce vastly more harm than good. Just this week we got a headline about Google, a serious, grown company that already makes billions, being up to some fuckery against Firefox; Facebook has been fined a million times for not respecting privacy; and Amazon workers have to pee in bottles. To my sadness, all movement against the integration of AI in weapons built basically to “kill people” will be very noble but won’t do jack shit. Do we think China/Russia are going to give a single fuck about this? Even the US will start selling AI drones when it becomes normalized. And that’s just AI in war. There’s another trillion things where AI will fuck things up: artists will be devalued, misinformation will reach a new all-time high, captchas are long dead, making the internet a more polluted place, surveillance will be more toxic, the list goes on.

  • Loqzer@lemmings.world · 1 year ago

    Definitely need more people to tell me about AI and what it will be capable of. Make a daily show so that every shitty celebrity can tell us about AI; there might still be plenty of word combinations that haven’t been used!

    • Otter@lemmy.ca · edited · 1 year ago

      I think the statement was more about the impact, which will depend on each person’s subjective experience

      Personally I agree. Even if AI could produce identical work, the impact would be lessened. Art is more meaningful when you know it took time and was an expression/interpretation by another human (rather than a pattern prediction algorithm Frankenstein-ing existing work together). Combine that with the volume of AI content that’s produced, and the impact of any particular song/art piece is even more limited.

      • Even_Adder@lemmy.dbzer0.com · 1 year ago

        People are social, if enough people feel the same way about one thing it’ll succeed. It doesn’t matter where it came from or how it was made, like how people can still admire and appreciate nature. Or maybe the impact will be that it reduces all impacts. Every group and subgroup might be able to have their own thing.

    • Knusper@feddit.de · 1 year ago

      I think it will eventually become obsolete, because we keep changing what ‘AI’ means, but current AI largely just regurgitates patterns; it doesn’t yet have a way of ‘listening’ to a song and actually judging whether it’s good or bad.

      So it may expertly regurgitate the pattern that makes up a good song, but humans spend a lot of time listening to perfect every little aspect before something becomes an excellent song, and I feel like that will be lost on the pattern-regurgitating machine if it’s forced to deviate from what a human composed.

      • TopRamenBinLaden@sh.itjust.works · edited · 1 year ago

        I have seen a couple of successful artists in different genres admit to using AI to help them write some of their most popular songs, and describe its use in the songwriting process. You hit the nail on the head with AI not being able to tell if something is good or bad. It takes a human ear for that.

        AI is good at coming up with random melodies, chord progressions, and motifs, but it is not nearly as good at composing and producing as humans are, yet. AI is just going to be another instrument for musicians to use, in its current form.

        • Knusper@feddit.de
          link
          fedilink
          English
          arrow-up
          1
          ·
          1 year ago

          Yeah, I do imagine it won’t be just AIs either. And then it will obviously be possible to take it to an excellent song, given enough human hours invested.

          I do wonder how useful it will actually be for that, though. Oftentimes it really fucks you up to try to go from good to excellent, and it can be freeing to start fresh instead. In particular, ‘excellent’ does require creative ideas, which are easier for humans to generate with a fresh start.
          But AI may allow us to start over fresh more readily, if it can just give us a full song when needed. Maybe it will even be possible to give it some of those creative snippets and ask it to flesh it all out. We’ll have to see…

    • gregorum@lemm.ee
      link
      fedilink
      English
      arrow-up
      7
      arrow-down
      6
      ·
      edit-2
      1 year ago

      I don’t know. I think Obama kind of nailed it. AI can create boring and mediocre elaborations just fine. But for the truly special and original? It could never.

      For the new and special, humans will always be required. End of line.

      • kromem@lemmy.world
        link
        fedilink
        English
        arrow-up
        12
        arrow-down
        5
        ·
        1 year ago

        At this point I want a calendar of the dates people say “AI could never” - like “AI could never explain why a joke it’s never seen before is funny” (said around March 2019) - and the dates it actually happens (in that case, April 2022).

        (That “explaining the joke” bit is actually what prompted Hinton to quit and switch to worrying about AGI sooner than expected.)

        I’d be wary of betting against neural networks, especially if you only have a casual understanding of them.

        • rambaroo@lemmy.world
          link
          fedilink
          English
          arrow-up
          6
          arrow-down
          4
          ·
          1 year ago

          I mean, the limitations of LLMs are very well documented; they aren’t going to advance a whole lot more without huge leaps in computing technology. There are limits on how much context they can store, for example, so you aren’t going to have AIs writing long epic stories without human intervention. And they’re fundamentally incapable of originality.

          General AI is another thing altogether that we’re still very far away from.

          • kromem@lemmy.world
            link
            fedilink
            English
            arrow-up
            4
            arrow-down
            5
            ·
            1 year ago

            Nearly everything you wrote is incorrect.

            As an example, rolling context windows paired with RAG would easily allow for building an implementation of LLMs capable of writing long stories.

            And I’m not sure where you got the idea that they were fundamentally incapable of originality. This part in particular tells me you really don’t know how the tech is working.

            • rambaroo@lemmy.world
              link
              fedilink
              English
              arrow-up
              4
              ·
              edit-2
              1 year ago

              A rolling context window isn’t a real solution and will not produce works that even come close to matching the quality of human writers. That’s like having a writer who can only remember the last 100 pages they wrote.

              The tech is trained on human created data. Are you suggesting LLMs are capable of creativity and imagination? Lmao - and you try to act like I’m the one who’s full of shit.

              • kromem@lemmy.world
                link
                fedilink
                English
                arrow-up
                1
                arrow-down
                2
                ·
                1 year ago

                That’s like having a writer who can only remember the last 100 pages they wrote.

                That’s why you pair it with RAG.

                The tech is trained on human created data. Are you suggesting LLMs are capable of creativity and imagination?

                They are trained by iterating through network configurations until there’s diminishing returns on how accurately they can complete that human created data.

                But they don’t just memorize the data. They develop the capabilities to extend it.

                So yes, they absolutely are capable of generating original content that’s not in the training set. As has been demonstrated over and over. From explaining jokes not found in the training data, solving riddles not found in it, or combining different concepts to result in a new synthesis not found in the original data.

                What do you think it’s doing? Copy/pasting or something?
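
                As a rough illustration of what “rolling context paired with RAG” could look like in practice (a toy sketch, not any real product - the `StoryMemory` class, the bag-of-words similarity, and all the chapter text are invented for the example), the idea is just to keep recent passages verbatim while retrieving the most relevant older ones back into context:

                ```python
                from collections import Counter
                import math

                def cosine(a: Counter, b: Counter) -> float:
                    """Cosine similarity between two bag-of-words vectors."""
                    dot = sum(a[w] * b[w] for w in a)
                    na = math.sqrt(sum(v * v for v in a.values()))
                    nb = math.sqrt(sum(v * v for v in b.values()))
                    return dot / (na * nb) if na and nb else 0.0

                class StoryMemory:
                    """Rolling context window plus a retrieval store of older passages."""

                    def __init__(self, window_size: int = 3, top_k: int = 2):
                        self.window_size = window_size  # recent passages kept verbatim
                        self.top_k = top_k              # older passages retrieved by relevance
                        self.passages: list[str] = []

                    def add(self, passage: str) -> None:
                        self.passages.append(passage)

                    def build_context(self, query: str) -> str:
                        recent = self.passages[-self.window_size:]
                        older = self.passages[:-self.window_size]
                        qv = Counter(query.lower().split())
                        ranked = sorted(
                            older,
                            key=lambda p: cosine(qv, Counter(p.lower().split())),
                            reverse=True,
                        )
                        return "\n".join(ranked[:self.top_k] + recent)

                mem = StoryMemory(window_size=2, top_k=1)
                for p in ["Ch1: The knight meets a dragon named Ember.",
                          "Ch2: A storm destroys the village mill.",
                          "Ch3: The knight travels north.",
                          "Ch4: Winter arrives in the mountains."]:
                    mem.add(p)

                # Writing about the dragon again pulls Ch1 back into context,
                # even though it fell out of the rolling window long ago.
                ctx = mem.build_context("the dragon Ember returns")
                ```

                A real system would use learned embeddings instead of word overlap and an LLM to summarize old chapters, but the mechanics - a short verbatim window plus relevance-ranked retrieval over everything older - are the same.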

  • spudwart@spudwart.com
    link
    fedilink
    English
    arrow-up
    11
    arrow-down
    1
    ·
    1 year ago

    Okay, I love the elevator music idea as a gag in media.

    But I’ve never been in an elevator that has ever played music, and I can completely understand why. Elevator music sounds obnoxious.

  • Pretzilla@lemmy.world
    link
    fedilink
    English
    arrow-up
    11
    arrow-down
    1
    ·
    1 year ago

    That actually might make elevator and phone hold music survivable - continual compositions that never repeat

    • fosforus@sopuli.xyz
      link
      fedilink
      English
      arrow-up
      10
      arrow-down
      1
      ·
      1 year ago

      24h/7d elevator music that never repeats itself. I think you’ve described Hell.

      • Pretzilla@lemmy.world
        link
        fedilink
        English
        arrow-up
        1
        ·
        edit-2
        1 year ago

        It doesn’t need to be shitty though.

        And it really couldn’t be worse than the current state of shit-on-a-loop hold muzak that drills into your brain.

      • Pretzilla@lemmy.world
        link
        fedilink
        English
        arrow-up
        1
        arrow-down
        1
        ·
        1 year ago

        That’s the current state of fuckery and makes self immolation a tempting option

    • Sagifurius@lemm.ee
      link
      fedilink
      English
      arrow-up
      2
      arrow-down
      1
      ·
      edit-2
      1 year ago

      Could you even imagine the programming necessary to create a thing like Bobby Zimmerman: a brilliant songwriter who can’t even sing, but who made a go of it anyway and is regarded as one of the best singers of the last half of the previous century regardless?

  • raptir@lemdro.id
    link
    fedilink
    English
    arrow-up
    7
    ·
    1 year ago

    Where are they getting the training data from for AI music models? I guess it’s the same issue as art and language models, but wouldn’t they need to only use royalty free music?

  • egeres@lemmy.world
    link
    fedilink
    English
    arrow-up
    6
    ·
    1 year ago

    It’s reassuring that this opinion is based on many years of experience reading scientific papers, implementing these models and following the trends closely!

  • prototyperspective@lemmy.world
    link
    fedilink
    English
    arrow-up
    6
    ·
    1 year ago

    It’s more or less only (that is, mainly) useful for building components that you then use in your man-made tracks. It’s a tool, just like AI image generators are tools, although there the replacement use case is substantial. AI-generated voice also needs to be considered in this context, I think.

    • banneryear1868@lemmy.world
      link
      fedilink
      English
      arrow-up
      2
      ·
      edit-2
      1 year ago

      Yeah, generative music has been a thing for a long time; Brian Eno is probably the household name recognizable for generative compositions, but most sequencers have had randomization elements built in for a long time now. I use one where you feed it a scale of notes and can define the chance a certain note will play, plus chances around the qualities of the note like duration, velocity, etc. Even my entry-level MicroFreak has a randomization option which you can use to get musical ideas from. There are some cool Eurorack modules like Mutable Instruments Grids which function like this for drum sequencing, where you have this axis to explore and can control via an LFO if you want.
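
      That kind of chance-based sequencing is simple enough to sketch in a few lines (a toy illustration only - the scale, probability, and parameter ranges here are made up, not taken from any particular sequencer):

      ```python
      import random

      C_MINOR = [60, 62, 63, 65, 67, 68, 70]  # MIDI note numbers for one octave

      def generate_pattern(scale, steps=16, note_chance=0.6, seed=None):
          """Build a step pattern where each step fires with a given probability,
          drawing pitch from the scale and randomizing velocity and duration."""
          rng = random.Random(seed)
          pattern = []
          for _ in range(steps):
              if rng.random() < note_chance:  # does this step fire at all?
                  pattern.append({
                      "note": rng.choice(scale),            # pitch from the scale
                      "velocity": rng.randint(60, 127),     # how hard it's hit
                      "duration": rng.choice([0.25, 0.5, 1.0]),  # in beats
                  })
              else:
                  pattern.append(None)  # rest
          return pattern

      pattern = generate_pattern(C_MINOR, steps=8, seed=42)
      ```

      Swap the `rng.random()` threshold for a value driven by an LFO and you get the same kind of explorable probability axis the Grids-style modules expose.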

      I realize generative and AI are technically different. I think AI is much better at “can you create a synth preset to make x sound” or “write a specific genre of melody/chord progression/etc.” It’s a lot better at factoring in the broader context.