I'm using Ollama on my server with the WebUI. The server has no GPU, so replies aren't quick, but they're not too slow either.
I'm thinking about removing the VM since I just don't use it. Are there any good uses or integrations with other apps that might convince me to keep it?
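For what it's worth, the usual argument for keeping the VM is that anything able to hit Ollama's REST API can integrate with it. A minimal sketch, assuming the default port 11434 and that a model like llama3 has already been pulled:

```python
import json
import urllib.request

# Minimal Ollama API call -- assumes the server is reachable at
# localhost:11434 and that the "llama3" model has already been pulled.
def ask_ollama(prompt, model="llama3", host="http://localhost:11434"):
    payload = json.dumps({
        "model": model,
        "prompt": prompt,
        "stream": False,  # return one JSON object instead of a token stream
    }).encode()
    req = urllib.request.Request(
        f"{host}/api/generate",
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

print(ask_ollama("Summarize this log line: kernel: Out of memory"))
```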
Sure, or you could send an email to the leading international institution on the matter to get a very accurate answer!
Is it the most reasonable course of action? No. Is it more reasonable than wasting a gazillion watts so you can maybe get some better keywords to then paste into a search engine? Yes.
Once the model is trained, the electricity that it uses is trivial. LLMs can run on a local GPU. So you’re completely wrong.
No I’m not. Other questions?
Those were statements. Statements of fact.
Once the models are already trained, it takes almost no power to use them.
Yes, TRAINING the models uses an immense amount of power, but running the trained models locally consumes almost nothing. I can run the Llama 7B model on a 15 W Raspberry Pi, for example. Just leaving my PC on uses 400 W. This is all local: nothing enters or leaves the Pi. No communication with an external server, nothing done on anybody else's server or any AWS instances, etc.
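Rough arithmetic on those figures (the wattages are the ones quoted above; the 60-second reply time is my assumption, since CPU-only inference on a Pi is slow):

```python
# Back-of-the-envelope energy per query. The wattages are the ones
# quoted above; the 60-second reply time is an assumption.
PI_WATTS = 15
PC_IDLE_WATTS = 400
SECONDS_PER_REPLY = 60  # assumed; CPU-only inference is slow

wh_per_reply = PI_WATTS * SECONDS_PER_REPLY / 3600
print(f"Pi inference: {wh_per_reply:.2f} Wh per reply")   # 0.25 Wh
print(f"PC idling 1h: {PC_IDLE_WATTS} Wh")                # 400 Wh
print(f"Replies per hour of PC idle: {PC_IDLE_WATTS / wh_per_reply:.0f}")
```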
Setting aside that running an LLM is still more expensive than a search-engine query, any reasoning about running an LLM must include the training cost and, most of all, the incentive you give, as a consumer, for further training.
It's like arguing that cooking a steak has negligible environmental impact: the real cost is the whole industry that provides you the steak in the first place.
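To make the amortization point concrete, here is the shape of the calculation; every figure in it is a hypothetical placeholder, not a measured value:

```python
# Amortizing training energy over inference. ALL figures below are
# hypothetical placeholders chosen to show the shape of the argument.
TRAINING_MWH = 500        # assumed total training energy
LIFETIME_QUERIES = 1e9    # assumed queries served over the model's lifetime
INFERENCE_WH = 0.25       # assumed per-query inference energy, from above

amortized_wh = TRAINING_MWH * 1e6 / LIFETIME_QUERIES
total_wh = amortized_wh + INFERENCE_WH
print(f"Amortized training: {amortized_wh:.2f} Wh/query")  # 0.50 Wh
print(f"Total per query:    {total_wh:.2f} Wh")            # 0.75 Wh
# Whether training dominates depends entirely on the query volume assumed.
```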