cross-posted from: https://sh.itjust.works/post/55351

This is a cool project for the ESP32-S3-Box that can add really good voice support to Home Assistant or openHAB. Once installed on supported hardware, you can host the Inference Server yourself, use their cloud-based version, or perform local actions on the device.
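As a rough illustration of the self-hosted option, here is a minimal Python sketch of sending a recorded WAV clip to a Willow Inference Server instance for speech-to-text. The endpoint path (`/api/stt`), the `model` parameter, and the plain-text response are assumptions for illustration, not the project's documented API — check the Willow Inference Server docs for the real interface.

```python
# Hypothetical sketch: POST a WAV clip to a self-hosted inference
# server for speech-to-text. Endpoint path, query parameter, and
# response format are ASSUMPTIONS, not Willow's documented API.
import urllib.parse
import urllib.request


def build_stt_url(base_url: str, model: str = "whisper-tiny") -> str:
    """Compose the (assumed) STT endpoint URL with a model parameter."""
    query = urllib.parse.urlencode({"model": model})
    return f"{base_url.rstrip('/')}/api/stt?{query}"


def transcribe(base_url: str, wav_bytes: bytes) -> str:
    """POST raw WAV audio and return the transcript as text."""
    req = urllib.request.Request(
        build_stt_url(base_url),
        data=wav_bytes,
        headers={"Content-Type": "audio/wav"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return resp.read().decode("utf-8")
```

Usage would look like `transcribe("http://wis.local:19000", open("clip.wav", "rb").read())`, with the hostname and port again being placeholders for wherever you run the server.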

  • Communist@beehaw.org · 1 year ago

    Oh wow, I didn’t even hear it was discontinued, interesting.

    Does it use a large language model? It says: “Willow users can now self-host the Willow Inference Server for lightning-fast language inference tasks with Willow and other applications (even WebRTC) including STT, TTS, LLM, and more!”

    But I’m not sure whether that refers to using a large language model or whether LLM means something else here.

    • cyberscribe@sh.itjust.works (OP) · 1 year ago

      Yes. I have not tested it out yet, but the author of this project suggests Llama derivatives like Vicuna. I am excited to see how this project evolves alongside Home Assistant’s voice goals. The author of Rhasspy is working for Nabu Casa, so I’m sure that will grow too!