Commit e876c84
Minor improvements (#147)
ggerganov committed Apr 16, 2024
1 parent 181dd9e commit e876c84
Showing 3 changed files with 7 additions and 3 deletions.
.gitignore (1 change: 1 addition, 0 deletions)
@@ -19,3 +19,4 @@

 bark_weights/
 build/
+models/
README.md (8 changes: 6 additions, 2 deletions)
@@ -113,10 +113,14 @@
 wget https://huggingface.co/suno/bark/raw/main/vocab.txt
 mv ./vocab.txt ./models/

 # convert the model to ggml format
-python3 convert.py --dir-model ./models --out-dir ./ggml_weights/ --vocab-path ./models
+python3 convert.py --dir-model ./models --out-dir ./ggml_weights/ --vocab-path ./models --use-f16

+# convert the codec to ggml format
+python3 encodec.cpp/convert.py --dir-model ./models/ --out-dir ./ggml_weights/ --use-f16
+mv ggml_weights/ggml-model.bin ggml_weights/encodec_weights.bin
+
 # run the inference
-./build/examples/main/main -m ./ggml_weights/ -p "this is an audio"
+./build/examples/main/main -m ./ggml_weights/ -em ./ggml_weights/encodec_weights.bin -p "this is an audio"
 ```
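
After this change, inference takes two inputs: the Bark weights directory via -m and the separately converted Encodec weights via -em. As a quick sanity check (not part of the commit; the paths and file names are assumed from the steps above), you can confirm both outputs exist before running inference:

```bash
# Illustrative only, not from the commit: verify the converted weights exist.
# Paths and file names are assumed from the conversion steps above.
ls -lh ./ggml_weights/                      # Bark weights written by convert.py
test -f ./ggml_weights/encodec_weights.bin \
  && echo "encodec weights in place" \
  || echo "missing encodec_weights.bin (did the mv step run?)"
```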

### (Optional) Quantize weights
bark.cpp (1 change: 0 additions, 1 deletion)
@@ -737,7 +737,6 @@ static bool bark_model_load(std::ifstream& fin, gpt_model& model, int n_gpu_laye
 #ifdef GGML_USE_METAL
     if (n_gpu_layers > 0) {
         fprintf(stderr, "%s: using Metal backend\n", __func__);
-        ggml_metal_log_set_callback(ggml_log_callback_default, nullptr);
         model.backend = ggml_backend_metal_init();
         if (!model.backend) {
             fprintf(stderr, "%s: ggml_backend_metal_init() failed\n", __func__);
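
For context on the deletion above: the Metal init path no longer registers a log callback before creating the backend. Below is a minimal sketch of that pattern, not bark.cpp's exact code; the init_backend wrapper and the CPU fallback are assumptions for illustration.

```cpp
// Minimal sketch of the post-commit init pattern; not bark.cpp's exact code.
// init_backend() and the CPU fallback are illustrative assumptions.
#include <cstdio>
#include "ggml-backend.h"
#ifdef GGML_USE_METAL
#include "ggml-metal.h"
#endif

static ggml_backend_t init_backend(int n_gpu_layers) {
    (void) n_gpu_layers; // unused when Metal is disabled
    ggml_backend_t backend = nullptr;
#ifdef GGML_USE_METAL
    if (n_gpu_layers > 0) {
        fprintf(stderr, "%s: using Metal backend\n", __func__);
        // No ggml_metal_log_set_callback() call is needed before init anymore.
        backend = ggml_backend_metal_init();
        if (!backend) {
            fprintf(stderr, "%s: ggml_backend_metal_init() failed\n", __func__);
        }
    }
#endif
    if (!backend) {
        backend = ggml_backend_cpu_init(); // fall back to the CPU backend
    }
    return backend;
}
```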
