Perhaps RDNA3 GPUs get comparable results, but RDNA2 GPUs are behind.
I bought an RX 6800 XT for AI work because of its 16 GB of VRAM, and while the VRAM lets me do things my 6 GB RTX 2060 couldn't, on the performance side it's actually a downgrade in many respects.
But the main issue is software support. To get acceptable performance you need ROCm, which is Linux-only. There was a Windows release of ROCm a few weeks ago, but I'm not sure how usable it is, and none of the libraries have picked up on it yet.
Even on Linux, most frameworks still assume CUDA, and it takes effort to get them to use ROCm. For some tools all it takes is uninstalling PyTorch or TensorFlow and installing the ROCm-enabled build of those libraries. Sometimes that's enough; sometimes it isn't. Sometimes the project depends on an auxiliary library like bitsandbytes that has no official ROCm port, so you have to use unofficial forks (which you have to compile manually, and whose Makefiles quickly go stale). Which, once again, may or may not work.
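For PyTorch, for instance, the swap usually looks something like this (a sketch; the `rocm5.6` tag in the index URL is an example and needs to match the ROCm version you actually have installed):

```shell
# Remove the CUDA build that most projects install by default
pip uninstall -y torch torchvision

# Install the ROCm build from PyTorch's own wheel index
# (rocm5.6 is an example version tag; pick the one matching your ROCm install)
pip install torch torchvision --index-url https://download.pytorch.org/whl/rocm5.6
```

Confusingly, the ROCm build still exposes itself as the `cuda` device inside PyTorch, which is why some CUDA-only projects work unmodified once the right wheel is in place.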
I have things set up for Stable Diffusion and text generation (oobabooga), and they mostly work, but sometimes they still don't. For example, I can train Stable Diffusion embeddings and DreamBooth checkpoints, but for some reason it crashes when I try to train a LoRA, and I don't have enough expertise to debug it myself.
For things like video encoding, most tools also assume CUDA will be present, so you're stuck with CPU encoding, which takes forever. If you're lucky, a tool may have a DirectML backend, which sort of works on Windows with AMD, but its performance is usually far behind a ROCm implementation.
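One partial escape hatch, assuming your ffmpeg build was compiled with AMD's AMF encoders (many distro builds are not; on Linux the more common route is VAAPI, e.g. `h264_vaapi`), is to call the hardware encoder directly instead of relying on a tool's CUDA path:

```shell
# Check whether this ffmpeg build includes the AMF encoders
ffmpeg -hide_banner -encoders | grep amf

# Hardware H.264 encode via AMF; hevc_amf and av1_amf also exist
# on hardware generations that support them
ffmpeg -i input.mp4 -c:v h264_amf -b:v 8M output.mp4
```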
However, AMD is still as terrible at H.264 as ever, and their AV1 encoder also has a hardware defect they've patched by forcing it to round up to multiples of 16 lines, so it can't encode 1080p video and instead outputs 1082p, which won't be ingested properly when streaming.
The AV1 quality also isn't as good as Intel's or Nvidia's, even at resolutions that aren't glitched. AMD seemingly went big on HEVC (supposedly because of Stadia?), but everything else is a mess.
And most places won't touch HEVC because of the licensing costs; Microsoft makes it a Windows Store plugin you have to buy separately, etc. It's somewhat odd that Google picked HEVC for Stadia, but I guess those customers are directly paying you, versus YouTube being a minus on their balance sheet (at least until recently, possibly).