Hacker News
benlivengood on Feb 20, 2023 | on: Running large language models like ChatGPT on a si...
This also means local fine-tuning is possible. Expect to see an explosion of new things like we did with Stable Diffusion, limited to some extent by the ~0.7 order of magnitude more VRAM required.
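That "~0.7 order of magnitude" figure can be sanity-checked with back-of-the-envelope memory accounting. The sketch below is my own, not from the thread: it assumes fp16 inference (weights only) and mixed-precision Adam training (fp16 weights and gradients, plus fp32 master weights and two fp32 moment buffers), and ignores activation memory, so the training side is a lower bound.

```python
# Back-of-the-envelope VRAM estimate: training vs. inference.
# Assumptions (mine, not from the comment): fp16 inference weights;
# mixed-precision training with Adam = fp16 weights + fp16 grads
# + fp32 master weights + two fp32 Adam moment buffers.
# Activation memory is ignored.
import math

def inference_bytes(params: float, weight_bytes: int = 2) -> float:
    """VRAM for the weights alone (fp16)."""
    return params * weight_bytes

def training_bytes(params: float, weight_bytes: int = 2) -> float:
    """fp16 weights + fp16 grads + fp32 master copy + fp32 Adam m and v."""
    return params * (weight_bytes + weight_bytes + 4 + 4 + 4)

params = 7e9  # illustrative 7B-parameter model
ratio = training_bytes(params) / inference_bytes(params)
print(f"training needs ~{ratio:.0f}x the VRAM of inference "
      f"(~{math.log10(ratio):.2f} orders of magnitude)")
```

Under these particular assumptions the multiplier comes out to 8x (about 0.9 orders of magnitude); lighter optimizer state or lower-precision buffers pull it down toward 4-5x, which is presumably the ballpark behind the ~0.7 figure.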
bioemerl on Feb 20, 2023
Does it? I would have expected compression losses to make training really hard.
Miraste on Feb 20, 2023
The compression is optional.