This is an interesting choice; a big part of AI-mania has been corporations tripping over themselves to prove how “all-in” they are on AI. In firing a bunch of AI staff, Meta risks looking like they’re not committed enough to it. Are things starting to change? Is this all-in posture no longer important? (We can only hope.)



The thing is that the frameworks for running things on competitors’ GPUs are actually fine (ROCm and oneAPI), and the GPUs themselves are price-competitive or better. It’s just that CUDA/NVIDIA is the standard, and no one wants to learn a new language just to build something most people won’t be able to run. Very few people want to put in the effort to make something work across platforms.
There are some nice frameworks for general-purpose GPU computing, but they all seem to have limitations of one kind or another.
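To give a sense of how thin the porting gap actually is, here is a minimal CUDA vector-add sketch (a hypothetical example, not anything from the comment above). AMD’s ROCm ships HIP, whose runtime API mirrors CUDA’s almost symbol for symbol (hipMalloc for cudaMalloc, hipDeviceSynchronize for cudaDeviceSynchronize, and so on), and the hipify tools perform that renaming mechanically, which supports the point that the barrier is the install base and toolchain, not the programming model itself.

```cuda
#include <cstdio>
#include <cuda_runtime.h>

// Element-wise add: the canonical shape of a GPU kernel.
// Under HIP this kernel compiles unchanged; only the runtime
// calls below get renamed (cuda* -> hip*).
__global__ void vec_add(const float* a, const float* b, float* c, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) c[i] = a[i] + b[i];
}

int main() {
    const int n = 1 << 20;
    const size_t bytes = n * sizeof(float);
    float *a, *b, *c;

    // Unified (managed) memory keeps the sketch short; explicit
    // cudaMalloc + cudaMemcpy is the more common production pattern.
    cudaMallocManaged(&a, bytes);
    cudaMallocManaged(&b, bytes);
    cudaMallocManaged(&c, bytes);
    for (int i = 0; i < n; ++i) { a[i] = 1.0f; b[i] = 2.0f; }

    // One thread per element, 256 threads per block.
    vec_add<<<(n + 255) / 256, 256>>>(a, b, c, n);
    cudaDeviceSynchronize();

    printf("c[0] = %f\n", c[0]);

    cudaFree(a); cudaFree(b); cudaFree(c);
    return 0;
}
```

Running `hipify-perl` over a file like this produces a HIP source that builds with ROCm on AMD hardware, which is roughly what “the frameworks are actually fine” means in practice; the friction is everything around the kernel, not the kernel itself.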