Currently, AI models are trained on GPUs. In the future, though, generative AI will probably require its own specialized ASICs to achieve the best performance. The same shift happened with Bitcoin mining a few years ago, and it's also why big tech companies are designing their own CPUs now.
Since there are only a few companies on the planet capable of producing these chips in bulk, the government could easily place restrictions on the purchase of AI hardware. This would control who has access to the best AI.
Only the government and a few permitted parties would have access to the best AI. Everyone else would use worse AI that, while still good enough for most people, could be detected by the government. The government could use its superior models to easily detect whether a post is AI-generated, for example, and provide that insight as a service to citizens.
Effectively, the government becomes the sole purveyor of truth, as opposed to that power being in the hands of whoever can afford the biggest computer.
So a government and anyone who can pay a government’s fee. This isn’t really fixing the problem, just putting an extra barrier in the way of any smaller org that wants to get involved.
Never mind the issue that there isn’t a government that can be trusted. Do you think the world is going to be improved by making perception-manipulating tech the private weapon of whatever bunch of psychopaths happens to rule at the time?
Would you rather let anyone with the money buy a nuke, or only let governments have them? At least this way there are fewer psychopaths to worry about.
Yeah, totally the same thing. Utterly comparable, you clearly fully understand what it is capable of and the risks it poses.
I also respect your knowledge of nuclear weapons and the reasons why every billionaire doesn’t have a home-defence warhead.
I’d say LLMs are pretty comparable to an operating system (i.e., something anyone can buy, use, and develop without outside interference) and not comparable at all to nuclear weapons.