
Possible support for addition of two quantised tensors #97

Open
PotatoSpudowski opened this issue Apr 20, 2023 · 1 comment

Comments

@PotatoSpudowski

Really appreciate your work on the repository.

We are currently experimenting with LoRA adapters and investigating possible quality degradation when attaching a quantized adapter to a quantized model. We are stuck because ggml_add_inplace does not support adding two quantized tensors, or fp16 tensors.
Alternatively, if there is a quick, hacky way to bypass this limitation, that would be good enough for now. We were able to reduce the cached adapter size from roughly 8 GB to about 4 GB (fp16) and 1.2 GB (int4). If we can quickly validate whether there is minimal quality degradation when adding int4 + fp32 vs. int4 + fp16 vs. int4 + int4, I think that would help make adapter switching at runtime more feasible.
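
For context, here is a minimal sketch of the attach step we have in mind (assuming the usual LoRA update W' = W + B·A; `attach_lora` and the tensor names are our own illustrative helpers, and exact ggml signatures may differ between versions):

```c
#include "ggml.h"

// Sketch only: attach a LoRA delta to a base weight tensor in place.
// Assumes `ctx` is an initialized ggml context and that `w`, `lora_a`,
// and `lora_b` were created/loaded elsewhere.
struct ggml_tensor * attach_lora(struct ggml_context * ctx,
                                 struct ggml_tensor * w,        // base weight (e.g. Q4_0)
                                 struct ggml_tensor * lora_a,   // LoRA A matrix
                                 struct ggml_tensor * lora_b) { // LoRA B matrix
    // delta = low-rank product of the two LoRA matrices
    // (argument order per ggml's mul_mat convention)
    struct ggml_tensor * delta = ggml_mul_mat(ctx, lora_a, lora_b);

    // w += delta -- this is the call that currently fails when both
    // operands are quantized (or fp16); it works when w is fp32.
    return ggml_add_inplace(ctx, w, delta);
}
```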

Furthermore, I think it would also be great to add support for ggml_sub_inplace, which would enable detaching LoRA adapters.
Currently we do this in a hacky way by multiplying the adapter weights by -1 and adding them back, as sketched below.
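
For reference, the detach workaround looks roughly like this (again just a sketch, continuing the illustrative naming from above; a real ggml_sub_inplace would let us drop the negation step):

```c
// Sketch only: detach a previously attached LoRA delta by adding its
// negation, since ggml has no sub_inplace. `delta` is assumed to be the
// same low-rank product that was added during attach.
struct ggml_tensor * detach_lora(struct ggml_context * ctx,
                                 struct ggml_tensor * w,
                                 struct ggml_tensor * delta) {
    // negate the delta (i.e. multiply the adapter weights by -1)
    struct ggml_tensor * neg_delta = ggml_neg(ctx, delta);

    // w += (-delta)  ==  w -= delta
    return ggml_add_inplace(ctx, w, neg_delta);
}
```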

Appreciate your feedback regarding this!

@PotatoSpudowski
Author

@amitsingh19975 implemented a hack. Is this correct? If so, we can perhaps raise a PR against this library.

We are a little sceptical about int4, though.
