> Interesting, though apparently the OPT175B model is 350GB:
Only in FP16. In the paper they use int4 quantization to reduce it to a quarter of that. In addition to the model weights, there's also a KV cache that takes up considerable amounts of memory, and they use int4 on that as well.
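A quick sanity check of that arithmetic (parameter count is approximate, and this ignores the KV cache and quantization scale factors):

```python
# Back-of-the-envelope memory for OPT-175B weights.
params = 175e9
fp16_gb = params * 2 / 1e9   # 2 bytes per weight      -> 350 GB
int4_gb = params / 2 / 1e9   # 4 bits (0.5 bytes) each -> 87.5 GB
print(f"weights: FP16 {fp16_gb:.0f} GB vs int4 {int4_gb:.1f} GB")
```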
> I wonder what FlexGen is doing.. a naive guess is a mix of SSD and system memory.
That's correct, but other approaches have done this as well. What's "new" here seems to be the optimized data access pattern in combination with some other interesting techniques (prefetching, int4 quantization, CPU offload).
I want to emphasize how fascinating I find it that quantizing from 16-bit floats down to 4-bit integers results in negligible performance loss. That's huge. Is the original FP16 representation really that redundant?
That this much coarser quantization works seems to suggest the "bottleneck" is in some other aspect of the system, and until that is addressed, higher-precision weights may not improve performance.
Or maybe it's the relative values, the ratios between weights, that matter, and as long as the intended ratios can still be expressed, the exact precision of the individual weights is less important?
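A tiny illustration of that ratio idea, using made-up weight values and the symmetric absmax scaling commonly used in int4 schemes (the largest magnitude is mapped to the int4 maximum, 7):

```python
import numpy as np

w = np.array([0.02, 0.04, 0.08, 0.16])  # hypothetical weights, ratios 1:2:4:8
scale = np.abs(w).max() / 7             # map the largest weight to code 7
q = np.round(w / scale)                 # integer codes: [1, 2, 4, 7]
w_hat = q * scale                       # dequantized weights
print(q, w_hat)
```

The codes 1:2:4:7 roughly preserve the original 1:2:4:8 ratios even though each weight now carries only 4 bits.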
Found an interesting paper on this below. There's doubtless heavy research underway in this area.
In my understanding, at a very high level and omitting many crucial details, the key is that when the workload is mainly largish matrix multiplications (as in transformers), well-behaved quantization errors (mean-zero and roughly uncorrelated) cancel out inside the dot products.
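To make the cancellation concrete, here's a toy sketch (my own, not FlexGen's actual scheme) using group-wise absmax int4 quantization. The per-weight errors are sizable, but the relative error of the matmul output stays roughly flat as the matrix grows, because the mean-zero errors cancel rather than accumulate:

```python
import numpy as np

rng = np.random.default_rng(0)

def quantize_int4(w, group=64):
    # Symmetric 4-bit quantization with a per-group scale:
    # the largest magnitude in each group maps to the int4 max, 7.
    g = w.reshape(-1, group)
    scale = np.abs(g).max(axis=1, keepdims=True) / 7.0
    q = np.clip(np.round(g / scale), -7, 7)
    return (q * scale).reshape(w.shape)

for d in (256, 1024, 4096):
    W = rng.standard_normal((d, d))
    x = rng.standard_normal(d)
    W4 = quantize_int4(W)
    # Relative error of the layer output; it does not grow with d.
    rel = np.linalg.norm(W4 @ x - W @ x) / np.linalg.norm(W @ x)
    print(d, f"{rel:.3f}")
```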
People do/did experiment with 1- or 2-bit compression of gradients/updates in the context of distributed training, but there it has generally been deemed useful to keep track of the compression error locally so it can be corrected in later updates.
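That "keep track of compression errors locally" idea is usually called error feedback; a minimal sketch of the 1-bit sign-compression variant (function and variable names are my own):

```python
import numpy as np

def compress_with_error_feedback(grad, residual):
    # Add back what previous compressions lost, then transmit only the
    # sign of each coordinate scaled by the mean magnitude (1 bit per
    # coordinate plus one float).
    corrected = grad + residual
    scale = np.abs(corrected).mean()
    compressed = scale * np.sign(corrected)
    residual = corrected - compressed  # remember the error for next step
    return compressed, residual

# The residual carries the quantization error forward, so it is
# eventually applied to the model rather than silently dropped.
residual = np.zeros(4)
grad = np.array([0.1, -2.0, 0.3, 0.5])
update, residual = compress_with_error_feedback(grad, residual)
```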