Hacker News

While OPT-175B is great to have publicly available, it needs a lot more training to achieve good results. Meta trained OPT on roughly 180B tokens, compared to the 300B that GPT-3 saw. And the Chinchilla scaling laws (roughly 20 training tokens per parameter) suggest that almost 4T tokens would be required to get the most bang for the compute buck.
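The token figures above follow from the rough ~20-tokens-per-parameter rule of thumb that falls out of the Chinchilla paper (Hoffmann et al. 2022). A back-of-envelope sketch, assuming that ratio:

```python
# Rough Chinchilla compute-optimal estimate: ~20 training tokens per parameter.
# The exact ratio varies with compute budget; 20 is the common rule of thumb.
TOKENS_PER_PARAM = 20

def chinchilla_optimal_tokens(n_params: float) -> float:
    """Approximate compute-optimal training-token count for a model size."""
    return TOKENS_PER_PARAM * n_params

# (model name, parameter count, tokens actually trained on)
for name, params, seen in [
    ("OPT-175B", 175e9, 180e9),
    ("GPT-3",    175e9, 300e9),
]:
    optimal = chinchilla_optimal_tokens(params)
    print(f"{name}: trained on {seen / 1e9:.0f}B tokens, "
          f"Chinchilla-optimal ~{optimal / 1e12:.1f}T "
          f"({seen / optimal:.0%} of optimal)")
```

By this estimate a 175B-parameter model wants ~3.5T tokens, so OPT's 180B tokens is only about 5% of the compute-optimal amount.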

And on top of that, there are questions about the quality of open-source data (The Pile) versus OpenAI's proprietary dataset, which they seem to have spent a lot of effort cleaning. So: open-source models are probably data-constrained, in both quantity and quality.



OPT-175B isn't publicly available, sadly. It's available to research institutions, which is much better than "Open"AI, but it doesn't help us hobbyists/indie researchers much.


I wonder when we'll start putting these models on the pirate bay or similar. Seems like an excellent use for the tech. Has no one tried to upload OPT-175B anywhere like that yet?


It could go on the clear net since trained weights aren't subject to copyright.


It’s fun to think about a few billion weights being the difference between useless and gold.


Looking at my bank account I can relate :)



