Lambda Function Unable To Find Header Files #50
Huh, that looks correct offhand. I can build fine here. Can you try re-running the build and paste the full output? That might give some better sense of what's going wrong.
Interesting, it's not picking up any of the headers. If you run that compile command by hand, what is the output?
Running that command by hand doesn't throw any compilation errors.

The only difference I can see, from what you've described in your blog post, is that I'm not passing the '-DLLVM_USE_LINKER=lld' argument to cmake, because I don't have lld. But then again, LLVM still builds fine locally without that argument.
That command should output a list of dependencies for the file being compiled.
Yep, it is just returning with no error and no output.
If I run that command in a Buster docker image, I see the expected dependency output, which is what llamacc needs. What if you run the underlying gcc command directly?
Hmm, that's strange, running that command doesn't generate any output either.
How did you get your buster VM? Is it a stock image somewhere I could easily reproduce so I can investigate? This is surprising behavior to me and smells obviously buggy, although, as you point out, the fact it can build LLVM locally suggests it's mostly fine.
The VM is built using a preseeded iso that we have here in the office, and unfortunately I can't point you to a reproducible image. Is there anything in particular that I can check out in my VM and send you outputs for?
What is your C compiler? I've never seen this issue before and am a bit stumped offhand about the cause.
g++ (Debian 6.3.0-18+deb9u1) 6.3.0 20170516
Hmm, that's not the normal buster gcc, which should be 8.x (https://packages.debian.org/buster/gcc). That looks to be an oldstable version (https://packages.debian.org/stretch/gcc-6). I'm still surprised it doesn't appear to support -M properly, but are you able to upgrade gcc and try again?
Oh gosh, I just grabbed a docker image with that version of gcc.
Thanks for the fix, @nelhage! I am now able to build using llama without any difficulties :)
Context:
I'm quite new to AWS and am Go-illiterate as well (for now). My current aim is to reproduce the experiment from this blog post so that I can see how llama works, and then hand it off to an intern to do more research into how we can integrate llama into our current meson-and-ninja based build system to build our firmware applications. However, in my efforts to build LLVM, I hit the following type of error:
```
[98/2572] Building CXX object lib/Support/CMakeFiles/LLVMSupport.dir/AMDGPUMetadata.cpp.o
FAILED: lib/Support/CMakeFiles/LLVMSupport.dir/AMDGPUMetadata.cpp.o
/home/njpau/go/bin/llamac++ -DGTEST_HAS_RTTI=0 -D_GNU_SOURCE -D__STDC_CONSTANT_MACROS -D__STDC_FORMAT_MACROS -D__STDC_LIMIT_MACROS -Ilib/Support -I/home/njpau/git/llvm-project/llvm/lib/Support -I/usr/include/libxml2 -Iinclude -I/home/njpau/git/llvm-project/llvm/include -fPIC -fvisibility-inlines-hidden -Werror=date-time -std=c++11 -Wall -Wextra -Wno-unused-parameter -Wwrite-strings -Wcast-qual -Wno-missing-field-initializers -pedantic -Wno-long-long -Wno-maybe-uninitialized -Wdelete-non-virtual-dtor -Wno-comment -fdiagnostics-color -ffunction-sections -fdata-sections -O0 -fno-exceptions -fno-rtti -MD -MT lib/Support/CMakeFiles/LLVMSupport.dir/AMDGPUMetadata.cpp.o -MF lib/Support/CMakeFiles/LLVMSupport.dir/AMDGPUMetadata.cpp.o.d -o lib/Support/CMakeFiles/LLVMSupport.dir/AMDGPUMetadata.cpp.o -c /home/njpau/git/llvm-project/llvm/lib/Support/AMDGPUMetadata.cpp
_root/home/njpau/git/llvm-project/llvm/lib/Support/AMDGPUMetadata.cpp:16:28: fatal error: llvm/ADT/Twine.h: No such file or directory
 #include "llvm/ADT/Twine.h"
                            ^
compilation terminated.
Running llamacc: invoke: exit 1
```
Everything I have done today is from scratch, including setting up my free tier AWS account. These are the steps I've gone through:

1. Ran `llama bootstrap`. After completion, I went to my AWS console and verified that the CloudFormation stack had been successfully created and that I had an empty S3 bucket, an empty ECR repo, and an IAM role for basic Lambda execution.
2. Ran `/home/njpau/go/src/github.com/nelhage/llama/scripts/build-gcc-image --local-headers` to build my Debian image and Lambda function. Verified that I see the 'gcc' image and function in my AWS console under ECR and Lambda.
3. `cd llvm-project && mkdir -p build && cd build`
4. To test building locally first:

   ```
   cmake -GNinja \
     -DCMAKE_BUILD_TYPE=Release \
     -DLLVM_ENABLE_PROJECTS=clang \
     -DLLVM_TARGETS_TO_BUILD=X86 \
     -DLLVM_PARALLEL_LINK_JOBS=8 \
     -DLLVM_BUILD_TOOLS=OFF \
     -DLLVM_BUILD_UTILS=OFF \
     -DCMAKE_CXX_FLAGS_RELEASE="-O0" \
     -DCLANG_ENABLE_STATIC_ANALYZER=OFF \
     -DCLANG_ENABLE_ARCMT=OFF \
     ../llvm/
   ```

5. `ninja build`. Build successfully completed in roughly 20 mins.
6. Reconfigured to build through llama:

   ```
   LLAMACC_LOCAL=1 CC=llamacc CXX=llamac++ \
   cmake -GNinja \
     -DCMAKE_BUILD_TYPE=Release \
     -DLLVM_ENABLE_PROJECTS=clang \
     -DLLVM_TARGETS_TO_BUILD=X86 \
     -DLLVM_PARALLEL_LINK_JOBS=8 \
     -DLLVM_BUILD_TOOLS=OFF \
     -DLLVM_BUILD_UTILS=OFF \
     -DCMAKE_CXX_FLAGS_RELEASE="-O0" \
     -DCLANG_ENABLE_STATIC_ANALYZER=OFF \
     -DCLANG_ENABLE_ARCMT=OFF \
     ../llvm/
   ```

7. `ninja build -j100`. And that's when the build bombed out with 99 errors similar to the one above (i.e., reporting 'No such file or directory' for required headers).

What am I doing wrong here? Have I configured llama incorrectly? I have tried to follow the README + this blog post.