
Instructions on how to build a wasm ggml. #419

Closed
fire opened this issue Jul 26, 2023 · 5 comments

fire commented Jul 26, 2023

It seems straightforward that ggml could be compiled to WebAssembly with WebGPU or WebGL2 support.

Has anyone done this and written up how to do it?

Alternatively, could someone share some ideas on how to approach it?


fire commented Jul 27, 2023

The information was in the whisper.cpp repo.

@fire fire closed this as completed Jul 27, 2023
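For reference, whisper.cpp's WebAssembly examples are built with Emscripten's CMake wrapper. A minimal sketch, assuming the emsdk environment is already installed and activated (exact output paths may differ by version):

```shell
# Clone whisper.cpp and configure an Emscripten build in a separate directory.
git clone https://github.com/ggerganov/whisper.cpp
cd whisper.cpp
mkdir build-em && cd build-em

# emcmake wraps cmake with the Emscripten toolchain file,
# so the compilers become emcc/em++ automatically.
emcmake cmake ..

# Build everything, including the WASM example targets.
make -j
```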
westurner commented Sep 1, 2023

# Build the main example as a single-file, modularized ES6 WASM worker
make \
  CC=emcc \
  CXX=em++ \
  LLAMA_NO_ACCELERATE=1 \
  CFLAGS="\
    -DNDEBUG \
    -s MEMORY64" \
  CXXFLAGS="\
    -DNDEBUG \
    -s MEMORY64" \
  LDFLAGS="\
    -s MEMORY64 \
    -s FORCE_FILESYSTEM=1 \
    -s EXPORT_ES6=1 \
    -s MODULARIZE=1 \
    -s TOTAL_MEMORY=2GB \
    -s STACK_SIZE=524288 \
    -s ALLOW_MEMORY_GROWTH \
    -s EXPORTED_FUNCTIONS=_main \
    -s EXPORTED_RUNTIME_METHODS=callMain \
    -s BUILD_AS_WORKER=1 \
    -s SINGLE_FILE=1 \
    -s NO_EXIT_RUNTIME=1" \
  main.js
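With `MODULARIZE=1` and `EXPORT_ES6=1`, the resulting `main.js` default-exports a factory function rather than starting immediately. A hedged sketch of loading it (the model file name and CLI arguments are assumptions for illustration, not part of the build above):

```javascript
// Hypothetical loader for the main.js produced by the make invocation above.
// MODULARIZE=1 + EXPORT_ES6=1 mean the file default-exports a factory that
// resolves to the initialized Emscripten Module object.
import createModule from './main.js';

const Module = await createModule({
  // FORCE_FILESYSTEM=1 enables the in-memory FS; model data would need to
  // be staged here before main() runs, e.g. via FS_createDataFile.
  preRun: () => {
    // Module.FS_createDataFile('/', 'model.bin', modelBytes, true, false);
  },
});

// EXPORTED_RUNTIME_METHODS=callMain exposes the compiled entry point;
// NO_EXIT_RUNTIME=1 keeps the runtime alive after main() returns.
Module.callMain(['-m', 'model.bin', '-p', 'Hello from WASM']);
```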

limcheekin commented

Thanks for sharing. Since the issue has been closed, did you find a solution for building llama.cpp to WebAssembly?
