
This also means local fine-tuning is possible. Expect an explosion of new things, as we saw with Stable Diffusion, limited to some extent by the ~0.7 orders of magnitude more VRAM required.
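As a quick back-of-the-envelope check of what "~0.7 orders of magnitude" works out to (assuming the usual base-10 reading; the 0.7 figure is the parent comment's estimate, not something computed here):

```python
# An "order of magnitude" is a factor of 10, so 0.7 orders of
# magnitude is 10**0.7 -- roughly a 5x increase in VRAM.
factor = 10 ** 0.7
print(f"~0.7 orders of magnitude = {factor:.1f}x more VRAM")
# e.g. a model that fit in 8 GB would need roughly 8 * factor GB
```

So a workload that fit on a consumer GPU for Stable Diffusion-style fine-tuning would need around five times the memory here.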


Does it? I would have expected compression losses to make training really hard.


The compression is optional.



