
gcc-8: _mm_loadu_si64 missing #99

Closed
jploski opened this issue Apr 21, 2023 · 6 comments
@jploski
Contributor

jploski commented Apr 21, 2023

The library does not compile using gcc<9 (e.g. on Debian 10) because of missing _mm_loadu_si64 intrinsic (https://gcc.gnu.org/bugzilla/show_bug.cgi?id=78782):

[  6%] Built target ggml_utils
[  6%] Building C object src/CMakeFiles/ggml.dir/ggml.c.o
/tmp/ggml/src/ggml.c: In function ‘bytes_from_nibbles_16’:
/tmp/ggml/src/ggml.c:476:19: warning: implicit declaration of function ‘_mm_loadu_si64’; did you mean ‘_mm_loadl_epi64’? [-Wimplicit-function-declaration]
     __m128i tmp = _mm_loadu_si64( ( const __m128i* )rsi );
                   ^~~~~~~~~~~~~~
                   _mm_loadl_epi64
/tmp/ggml/src/ggml.c:476:19: error: incompatible types when initializing type ‘__m128i’ {aka ‘__vector(2) long long int’} using type ‘int’
make[2]: *** [src/CMakeFiles/ggml.dir/build.make:63: src/CMakeFiles/ggml.dir/ggml.c.o] Error 1
make[1]: *** [CMakeFiles/Makefile2:91: src/CMakeFiles/ggml.dir/all] Error 2
make: *** [Makefile:141: all] Error 2

I'm not sure whether this counts as a bug, since Debian 10 is pretty old, but if there is an easy fix and no other reason to require gcc-9, it may be worth staying backward-compatible. Otherwise, the build could fail more gracefully (and/or the docs could mention the gcc-9 requirement).

@jploski
Contributor Author

jploski commented Apr 21, 2023

Note: this now also affects llama.cpp (as of commit d40fded), which previously compiled with gcc-8.3.0.

@prusnak
Sponsor

prusnak commented Apr 23, 2023

llama.cpp issue: ggerganov/llama.cpp#1120

@jploski
Contributor Author

jploski commented Apr 23, 2023

I noticed that simply replacing the invocation of _mm_loadu_si64 with _mm_loadl_epi64 in ggml.c:439 allows the code to compile with gcc-8 again, and the quantized models seem to run just as well as before. This appears to be the only reference in the entire code base; perhaps you could add some #ifdef magic to make those of us with old compilers happy?

@ggerganov
Owner

PR welcome - not sure what #ifdef to use

@ggerganov
Owner

The fix was suggested by @sw in the llama.cpp repo: ggerganov/llama.cpp#1154

@jploski
Contributor Author

jploski commented Apr 24, 2023

Great. If no #ifdef is necessary, so much the better.


3 participants