Your home server might have the required bandwidth but not the requisite infra to support server load (hundreds of parallel connections/downloads).
Bandwidth is only one aspect of the problem.
That solves the media-distribution storage issue, but not the CI/CD pipeline infra issue.
Exactly the same rationale as mine.
Users are only shown Big Tech “3rd-party” options. Mozilla made this choice intentionally.
Well, how many users really have a locally hosted LLM?
To be honest, I’ve never tried publicly available instances of any privacy front-ends (SearxNG, Nitter, Redlib, etc.). I always self-host and route all such traffic via VPN.
My initial issue with SearxNG was with the default selection of search engines. The default inclusion of the Qwant engine caused irrelevant, non-English results to be returned. Currently my selection is limited to Google, Bing and Brave, as DDG takes around 2 sec to return results (based on the VPN server location I’m using).
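For anyone wanting the same trim-down: engines can be toggled per-user in the Preferences UI, or instance-wide in settings.yml. A rough fragment, assuming the default engine names; double-check against your own instance:

```yaml
# settings.yml fragment: drop noisy/slow engines, keep Google/Bing/Brave
use_default_settings: true
engines:
  - name: qwant
    disabled: true   # was returning irrelevant / non-English results for me
  - name: duckduckgo
    disabled: true   # ~2 s latency from my VPN exit
```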
If you still remember the error messages, I might be able to help fix that.
Though it’s a bit off-topic, what exact issues did you face with SearxNG?
On Ubuntu, replacing the Firefox/Thunderbird snap versions with the actual deb versions.
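In case it saves someone the search, a rough sketch of one common route (the mozillateam PPA plus apt pinning); treat it as a starting point and adjust for your release:

```sh
# Rough sketch, assuming the mozillateam PPA: swap the snap for the deb build
sudo snap remove firefox
sudo add-apt-repository ppa:mozillateam/ppa
# Pin the PPA so Ubuntu's transitional snap package doesn't win the upgrade:
printf 'Package: *\nPin: release o=LP-PPA-mozillateam\nPin-Priority: 1001\n' | \
  sudo tee /etc/apt/preferences.d/mozilla-firefox
sudo apt update && sudo apt install firefox thunderbird
```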
The built-in AI stuff you referred to is nothing but an accelerator to integrate with 3rd-party or self-hosted LLMs. It’s quite similar to choosing a search engine in settings. The feature itself is lightweight and can be disabled in settings if not required.
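(If you’d rather kill it at the pref level, I believe the about:config switch is `browser.ml.chat.enabled` set to false, though the pref name may vary by Firefox version.)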
But Pocket can be disabled via about:config, right?
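(I think the pref is `extensions.pocket.enabled` set to false, but I’m not 100% sure that’s still current.)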
I thought that’s how all those soft forks handled that mess.
You may self-host SearxNG (via Docker) and avoid direct interaction with search engines, be it Google, Bing, Brave or DDG.
SearxNG will act as a privacy front-end for you.
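A minimal sketch if anyone wants to try it; it roughly follows the official container instructions, but the port and config path here are my own choices:

```sh
# Rough sketch: SearxNG in Docker, config persisted under ./searxng
docker run -d --name searxng \
  -p 8080:8080 \
  -v "$(pwd)/searxng:/etc/searxng" \
  -e BASE_URL=http://localhost:8080/ \
  --restart unless-stopped \
  searxng/searxng
```

Then point your browser (or your default search engine setting) at http://localhost:8080 and pick your engines under Preferences.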
This is just an add-on BTW. It’s completely up to you to decide if you need this.
My (Docker-based) configuration (a rough run command follows the links):
Software stack: Linux > Docker Container > Nvidia Runtime > Open WebUI > Ollama > Llama 3.1
Hardware: i5-13600K, Nvidia RTX 3070 Ti (8 GB), 32 GB RAM
Docker: https://docs.docker.com/engine/install/
Nvidia Runtime for docker: https://docs.nvidia.com/datacenter/cloud-native/container-toolkit/latest/install-guide.html
Open WebUI: https://docs.openwebui.com/
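For anyone reproducing this, the run command is roughly the bundled-Ollama image from the Open WebUI docs; treat it as a sketch and verify the flags against the links above:

```sh
# Rough sketch: Open WebUI + Ollama in one container, GPU via the Nvidia runtime
docker run -d --name open-webui \
  --gpus all \
  -p 3000:8080 \
  -v ollama:/root/.ollama \
  -v open-webui:/app/backend/data \
  --restart always \
  ghcr.io/open-webui/open-webui:ollama
# Pull the model inside the container (name as listed in the Ollama library):
docker exec -it open-webui ollama pull llama3.1
```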
No, but the “AI” option available on the Mozilla Lab tab in settings allows you to integrate with a self-hosted LLM.
I’ve had this setup running for a while now.
BTW, the Lab option works better privacy-wise (than the add-on) if you have an LLM running locally, IMO.
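If I remember right, the chatbot can be pointed at a local endpoint via the `browser.ml.chat.provider` pref in about:config (e.g. your Open WebUI address); double-check the pref name on your Firefox version.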
Even if they choose to do so in the future, there’s usually an about:config entry to disable it.
It’s an add-on, not something baked into the browser. It’s not on your property in the first place, unless you choose to install it 🙂
In such a scenario, you need to host your choice of LLM locally.
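If Docker feels like overkill, a bare-metal sketch with Ollama (the install script is the one from their site; the model name assumes the Ollama library):

```sh
# Rough sketch: minimal local LLM without Docker (Linux)
curl -fsSL https://ollama.com/install.sh | sh
ollama run llama3.1   # downloads the model on first run, then drops into a chat prompt
```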
I came here to say my tiny Raspberry Pi 4 consumes ~10 watts, but then, after noticing some people’s home server setups and the associated power consumption, I feel like a child in a crowd of adults 😀
This is exactly what came to my mind while reading through the article.