I thought it was going to be a cringe-inducing fluff piece, but the guy showed a decent grasp of the benefits and pitfalls of expanded AI use, as well as why the EU’s approach is probably not the way we want to go. Pleasantly surprised.
Is that part in a different article? I’d like to read more on that, but this article didn’t seem to go into it that much
It was mentioned in the video interview. I don’t know of a single article with a good summary.
What’s wrong with how the EU is doing it? Seems pretty level headed to me.
Unfortunately, the press releases are PR fluff. The EU’s publicity people don’t work any differently from those of any major corporation.
I know parts of the AI Act and may be able to answer questions about particular aspects.
Off the top of my head, three general problems:
First, it is simply a mistake to regulate software based on how it is made rather than what it is used for. E.g., they ended up regulating chatbots in the same act as mass surveillance. I don’t think that helped either, though it’s hard to say for sure.
Second, they ended up doing a lot of bad micromanaging. The training data for “high-risk” AI must fulfill certain conditions. This is certainly going to increase costs, but it’s unclear whether it will lead to any improvement. The sane thing would have been to define the desired performance and leave the method open. It’s a typical problem: people without technical knowledge demand that things be done a certain way, because they figure it will get them what they want, instead of just saying what they want.
Finally, there’s the interference of existing industry. The copyright lobby got some stuff in there that may or may not enable them to extract some free money. It will certainly harm European citizens by making development much harder than it needs to be.