No, the amount of CUDA code written for PyTorch could easily be rewritten for AMD with a few million, or tens of millions, in investment. The problem is that it is damn near impossible to get good performance on AMD. For complicated CUDA programs like flash attention (a few hundred lines of code), no amount of developers could write those few hundred lines for AMD and get the same performance.
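To make that concrete, here is a minimal illustrative sketch (not from the original post) of the kind of warp-level primitive flash-attention-style kernels lean on: a per-row max reduction for the online softmax. It hard-codes NVIDIA's 32-lane warp; HIP will compile similar code for AMD, but the 64-lane wavefronts, different shared-memory banking, and different occupancy tradeoffs mean the tuning assumptions no longer hold, which is exactly why a line-for-line port rarely matches CUDA performance.

```cuda
#include <cuda_runtime.h>
#include <cfloat>

// Butterfly max reduction across the 32 lanes of an NVIDIA warp.
// The 0xffffffff mask and the offsets 16..1 bake in warpSize == 32;
// on AMD's wave64 these assumptions silently stop matching the hardware.
__inline__ __device__ float warpReduceMax(float val) {
    for (int offset = 16; offset > 0; offset /= 2)
        val = fmaxf(val, __shfl_down_sync(0xffffffff, val, offset));
    return val;
}

// One warp per row: each lane strides over the row, then the lanes reduce.
// This is the row-max step of an online softmax, a core piece of flash attention.
__global__ void rowMax(const float* __restrict__ scores,
                       float* __restrict__ out, int cols) {
    int row = blockIdx.x;
    float m = -FLT_MAX;
    for (int c = threadIdx.x; c < cols; c += 32)
        m = fmaxf(m, scores[row * cols + c]);
    m = warpReduceMax(m);
    if (threadIdx.x == 0) out[row] = m;
}
```

The point is not that this snippet is hard to translate; it is that every constant in it encodes an NVIDIA-specific performance model, and a full kernel is hundreds of such decisions stacked together.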
Even worse: GPGPU is not only about LLMs, or even ML in general. It's also used for computer vision, signal processing, and point cloud processing; e.g. OpenCV has a CUDA backend, and Open3D and PCL do the same. Even Apple is arguably worse off than AMD regarding its ecosystem of libraries and open-source high-performance algorithms: when I tried to port an ICP pipeline to Apple Metal there was nothing there; most libraries and research code target only CUDA.