• carpelbridgesyndrome@sh.itjust.works · 11 months ago

    Voice assistants are money-losing products. If they can do something like processing the wake words on the device before choosing to send audio to a server, they will. These companies are far too stingy to continuously stream audio to their servers.
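    The gating idea described above can be sketched in a few lines. This is a toy illustration, not any vendor's actual pipeline: a simple energy threshold stands in for a real on-device wake-word model, and all names (`gate_audio`, `detect`) are made up for the example.

```python
import math

def rms(frame):
    """Root-mean-square energy of one audio frame (a list of samples)."""
    return math.sqrt(sum(s * s for s in frame) / len(frame))

def gate_audio(frames, threshold, detect):
    """Run detection locally; only frames after a trigger are 'uploaded'.

    `detect` stands in for a real on-device wake-word model; here it is
    just an energy threshold. Returns the frames that would leave the
    device, everything before the trigger stays local and is discarded.
    """
    uploaded = []
    triggered = False
    for frame in frames:
        if not triggered and detect(frame, threshold):
            triggered = True  # wake word heard: start streaming
        if triggered:
            uploaded.append(frame)
    return uploaded

quiet = [[0.01] * 160] * 5   # background-noise frames (RMS 0.01)
loud = [[0.5] * 160] * 2     # pretend "wake word" frames (RMS 0.5)
frames = quiet + loud + quiet

sent = gate_audio(frames, threshold=0.1,
                  detect=lambda f, t: rms(f) > t)
print(len(sent))  # 7: only frames from the trigger onward are sent
```

    The economics follow from the structure: the cheap local check runs constantly, while the expensive server round-trip only happens for the small fraction of audio after a trigger.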

    • linearchaos@lemmy.world · 11 months ago

      Back in the day, when everything had to be processed server-side, sure.

      Now we have purpose-built hardware helping work this shit out. The devices are basically capable of handling natural-language resolution locally; they no longer need to farm the data out. I still don’t think they’re doing this, since we would see it in the open-source operating systems, but if they wanted to, any late-model cell phone would be absolutely fine parsing out your interests from your conversations. Hell, I’m sure the contents of this dictation I’m making now are being reduced and added to my social graph at Google.

    • howrar@lemmy.ca · 11 months ago

      I think this should be fairly easy to test yourself. Just disconnect from the WAN, say the wake word, and see if the device responds.

    • books@lemmy.world · 11 months ago

      Someone can correct me if I’m wrong, but Home Assistant is currently struggling with this: it processes everything on your local box because it can’t do wake words on the device.

      • ReadingCat@programming.dev · 11 months ago

        I think they’re choosing to do it that way. Raspberry Pis easily have the capability to do wake word recognition on device (I think they are also working on that). ESPs, on the other hand, can only stream audio to the server and not much more. Since ESPs are far cheaper than installing a Raspberry Pi in each room, they are focusing on doing wake word detection on the server, not on the device.

    • byroon@lemmy.world · 11 months ago

      Yeah, what possible use could this company, whose business model relies on surveillance, have for surveilling you?

    • Pohl@lemmy.world · 11 months ago

      Exactly. If it is practical and money can be made doing it, then continuous, ambient sound parsing will be the norm. Currently it seems like it’s not a valuable business. When it is valuable to them, they will add a checkbox somewhere in your account to disable it, and most people will not be bothered enough to look for it.

    • douglasg14b@lemmy.world · 11 months ago

      Are they, though?

      My experience is much, MUCH different. The amount of compute waste is through the roof, and we shrug at +$50k/mo provisioning. You don’t even need approvals for that, and you can leave it idle and you MIGHT get a ping from cloudgov after a few months.