• semi [he/him]@lemmy.ml · 23 hours ago

    For inference (running previously trained models, which mainly needs lots of RAM), the desktop could be useful, but I would be surprised if training anything bigger than toy examples made sense on this hardware, since compute performance is likely to be the bottleneck.
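    Back-of-the-envelope, parameter memory alone shows why inference is mostly a RAM problem. A rough sketch, assuming fp16 weights and ignoring KV cache and activation overhead:

    ```python
    # Weights-only memory estimate for fp16 inference.
    # KV cache and activations add more on top of this.
    def weight_memory_gib(n_params_billion: float, bytes_per_param: int = 2) -> float:
        return n_params_billion * 1e9 * bytes_per_param / 1024**3

    for size in (7, 13, 70):
        print(f"{size}B params @ fp16 ~ {weight_memory_gib(size):.0f} GiB")
    # 7B ~ 13 GiB, 13B ~ 24 GiB, 70B ~ 130 GiB: a big unified-memory
    # desktop can hold models that won't fit on one consumer GPU.
    ```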

    Does anyone here have recent practical experience with ROCm and how it compares with the far-more-dominant CUDA? I would imagine compatibility is much better now that most models use PyTorch, which is supported, but what is the performance like compared to a dedicated Nvidia GPU?
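    For anyone who wants to try it themselves, a minimal smoke test (a sketch; ROCm builds of PyTorch expose HIP devices through the `torch.cuda` API, so CUDA-targeted code mostly runs unchanged):

    ```python
    import torch

    # ROCm builds report a HIP version string here; CUDA builds report None.
    print(torch.version.hip)
    # True if a supported AMD GPU is visible to this build.
    print(torch.cuda.is_available())
    if torch.cuda.is_available():
        # "cuda" maps to the AMD GPU on ROCm builds.
        x = torch.randn(1024, 1024, device="cuda")
        print((x @ x).sum().item())
    ```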

    • geneva_convenience@lemmy.ml · 11 hours ago

      ROCm is complete garbage. AMD holds an event every year announcing that “PyTorch works now!”, and it never does.

      ZLUDA is supposedly a good alternative to ROCm, but I have not tried it.

      • semi [he/him]@lemmy.ml · 8 hours ago

        Thanks for the comment. I had seen similar claims, but I wasn’t seeing anyone use AMD GPUs for AI unless they were somehow incentivized by AMD, which made me suspicious.

        In principle, more competition in the AI hardware market would be amazing, and Nvidia GPUs do feel overpriced, but I personally don’t want to deal with the struggles of early adoption.