> When training a 65B-parameter model, our code processes around 380 tokens/sec/GPU on 2048 A100 GPUs with 80GB of RAM.[1]
Note that you should probably budget for two to three times that compute, because things go wrong and it usually takes multiple starts to get a good training run.
It pains me to see AMD just sitting on their hands through this incredible development of AI and possibly AGI. If they still can't get their act together, they should spin off the discrete GPU division into something purely compute-focused. I believe there is now enough momentum in the AI/ML space to fully develop innovative ideas on the hardware front.
Would 40 A100s be enough, though? I'm interested in what this would cost.
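A rough back-of-the-envelope sketch using the throughput figure quoted above (380 tokens/sec/GPU). The 1.4T-token dataset size is the figure the LLaMA paper reports for the 65B run; the dollar rate per GPU-hour is an assumed cloud price for illustration, not anything from the source, and the scaling is assumed to be perfectly linear:

```python
# Back-of-envelope estimate for a 65B LLaMA-style training run.
TOKENS = 1.4e12            # dataset size for the 65B model (from the LLaMA paper)
TOK_PER_SEC_PER_GPU = 380  # throughput quoted above

gpu_hours = TOKENS / TOK_PER_SEC_PER_GPU / 3600
print(f"total: {gpu_hours:,.0f} GPU-hours")  # ~1.02M GPU-hours

# Wall-clock time scales inversely with GPU count (assuming perfect scaling,
# which real runs won't quite hit):
for n_gpus in (2048, 40):
    days = gpu_hours / n_gpus / 24
    print(f"{n_gpus} GPUs -> ~{days:,.0f} days")

# Cost at a hypothetical cloud rate (real A100 rates vary widely by provider):
USD_PER_GPU_HOUR = 1.50
print(f"~${gpu_hours * USD_PER_GPU_HOUR:,.0f} at ${USD_PER_GPU_HOUR}/GPU-hr")
```

On these assumptions, 2048 GPUs finish in about three weeks, while 40 GPUs would take on the order of three years for a single pass, before budgeting the 2-3x overhead for failed runs mentioned above. So 40 A100s is far too few for a 65B pretraining run, though the total GPU-hour cost is roughly the same either way.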