The best way to tune the algorithm on Youtube is to aggressively prune your watch/search history.
Even just one “stereotypical” video can cause your recommendations to go to shit.
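If you want to audit what the algorithm is actually seeing before you prune, Google Takeout can export your watch history as JSON. A minimal sketch for scanning it (the field names assume Takeout’s current layout, and the keyword list is just a placeholder):

```python
import json

# Rough sketch: scan a Google Takeout "watch-history.json" export for videos
# whose titles contain keywords you suspect are skewing your recommendations.
# Field names ("title", "titleUrl", "time") assume Takeout's current JSON
# layout; KEYWORDS is a hypothetical placeholder, use your own terms.
KEYWORDS = {"reaction", "drama", "compilation"}

with open("watch-history.json", encoding="utf-8") as f:
    history = json.load(f)

for entry in history:
    title = entry.get("title", "").lower()
    if any(kw in title for kw in KEYWORDS):
        print(entry.get("time", "?"), entry.get("titleUrl", ""), entry.get("title", ""))
```

The actual deletion still has to happen through YouTube’s own history page, as far as I know; there’s no public API for it.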
Is there anything the rest of us can do to cultivate such a mindset?
For cardio it’s basically “go slow”: the main source of discomfort is the exertion itself, so easing off the pace removes most of it.
An easy long run with good music is quite meditative and enjoyable.
When your legs hurt and you’re wheezing your lungs out, not so much.
“Running, not so much because my calves tend to seize up and it gets a little painful”
This is often a form and/or a shoe type issue.
Doing it yourself is fine as an educational exercise for newbies, but skilled Linux users generally have better things to do than do the setup by hand for the nth time. On the other hand, the “vanilla”/bleeding-edge approach of Arch makes it one of the best available bases for derivative distros, so basing your distro on it is a no-brainer for many.
“Manjaro is not stable because it ensures no breaking updates are pushed to users” is such a weird statement to make.
It’s never been customary to adhere to KISS in Linux. This whole explanation reads like it came out of a game of Chinese whispers.
“does not comply with the principle of K.I.S.S. One application should solve one task and can be replaced”
That’s not KISS, that’s the Unix philosophy. And even that part is wrong: in traditional UNIXes, applications were certainly not replaceable.
Manjaro ended my distro-hopping itch 10+ years ago. I occasionally test distros in a VM, but nothing has made me want to switch so far.
How does this compare to Org-mode? Notable pros/cons?
The features themselves are very useful for basically any user. Whether they are worth the non-standardness and issues that come with it is another question.
Oh boy, you’re going to be in for a disappointment
Twitter probably opened the floodgates when they managed to shaft users and cut API access without outright killing themselves. Now everyone else is emboldened to ask “why can’t we do that too?”.
Most of the data used in training GPT-4 was gathered through open initiatives like Wikipedia and Common Crawl. Both are freely accessible to anyone. As for building datasets and models, there are many non-profits like LAION and EleutherAI involved that release their models for free for others to iterate on.
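To make that concrete, here’s a minimal sketch of picking up one of EleutherAI’s open releases (assumes the Hugging Face transformers package is installed; pythia-160m is just one of their small public checkpoints):

```python
# Minimal sketch of loading one of EleutherAI's openly released models.
# Assumes `pip install transformers torch`; pythia-160m is just one small
# public checkpoint, chosen here because it runs fine on a laptop CPU.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "EleutherAI/pythia-160m"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

inputs = tokenizer("Open datasets like Common Crawl", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

The same few lines work for their larger checkpoints too; the only thing that changes is the hardware you need to run them.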
While actually running the larger models at a reasonable scale will always require expensive computational resources, you really only need to do the expensive base-model training once. So the cost is not nearly as high as one might first think.
Any head start OpenAI may have gotten is quickly diminishing, and it’s not like they actually have any super secret sauce behind the scenes. The situation is nowhere near as bleak as you make it sound.
Fighting against the use of publicly accessible data is ultimately as self-sabotaging a form of Luddism as fighting against encryption.
Yet primitivist delusions seem to get a full pass around here.