Really appreciate your work on the repository.
We are currently experimenting with LoRA adapters and investigating possible quality degradation when attaching a quantized adapter to a quantized model. We are currently stuck because `ggml_add_inplace` does not support adding two quantized tensors, or a quantized tensor and an fp16 tensor.

Alternatively, if there is a quick, hacky way to bypass this limitation, that would also be good enough for now. We were able to reduce the cached adapter size from roughly 8 GB to about 4 GB (fp16) and about 1.2 GB (int4). If we could quickly validate or invalidate that there is minimal quality degradation when adding `int4 + fp32` vs. `int4 + fp16` vs. `int4 + int4`, I think that would make adapter switching more feasible at runtime.
Furthermore, I think it would also be great to add support for `ggml_sub_inplace`, which would enable detaching LoRA adapters. Currently we do this in a hacky way: we multiply the adapter weights by -1 and add them back to detach the adapter.
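For reference, the detach hack amounts to negating the adapter delta and pushing it back through the same add path. The sketch below (reusing the placeholder helpers from the sketch above, not real ggml calls) shows why a native `ggml_sub_inplace` would be nicer: it would avoid the extra negate-and-requantize round trip.

```c
#include <stdlib.h>

// Placeholders shared with the sketch above; not real ggml functions.
void dequantize_row_q4(const void * quantized, float * out, int n);
void quantize_row_q4(const float * in, void * quantized_out, int n);
void add_q4_row_inplace(void * w_q4, const void * delta_q4, int n);

// Detach one row of an int4 LoRA delta: W <- W + (-1 * delta).
// This is the "multiply the adapter weights with -1" hack; a native
// ggml_sub_inplace would avoid the extra negate/requantize round trip.
void detach_q4_row_inplace(void * w_q4, const void * delta_q4, int n) {
    float * neg_f32 = malloc(n * sizeof(float));
    void  * neg_q4  = malloc(n); // over-allocated; real row size is format-dependent

    dequantize_row_q4(delta_q4, neg_f32, n);
    for (int i = 0; i < n; ++i) {
        neg_f32[i] = -neg_f32[i];
    }

    quantize_row_q4(neg_f32, neg_q4, n);
    add_q4_row_inplace(w_q4, neg_q4, n);

    free(neg_f32);
    free(neg_q4);
}
```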
Appreciate your feedback regarding this!