
Working example on Google Colab? #95

Closed
hswlab opened this issue Aug 28, 2023 · 10 comments



hswlab commented Aug 28, 2023

Can anyone share a working example on Google Colab that actually generates an audio file? In my attempts, execution inexplicably stops after these lines.

bark_forward_coarse_encoder: .................................................. [progress dots truncated]

bark_forward_coarse_encoder: mem per token = 8.51 MB
bark_forward_coarse_encoder: sample time = 8.16 ms
bark_forward_coarse_encoder: predict time = 95368.38 ms / 294.35 ms per token
bark_forward_coarse_encoder: total time = 95518.55 ms
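A quick editorial sanity check (not part of the original report): the log's timings are internally consistent, since dividing the total predict time by the per-token time recovers the number of coarse tokens generated.

```python
# Figures taken from the bark_forward_coarse_encoder log above
total_predict_ms = 95368.38   # total predict time
per_token_ms = 294.35         # time per token

n_tokens = total_predict_ms / per_token_ms
print(f"coarse tokens generated: {n_tokens:.0f}")  # about 324
```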

Here is the link to my attempt on Google Colab:
https://colab.research.google.com/drive/1JVtJ6CDwxtKfFmEd8J4FGY2lzdL0d0jT?usp=sharing

@AGogikar

I tested this as well; execution breaks with the same output:
bark_forward_coarse_encoder: mem per token = 8.51 MB
bark_forward_coarse_encoder: sample time = 11.67 ms
bark_forward_coarse_encoder: predict time = 197814.42 ms / 610.54 ms per token
bark_forward_coarse_encoder: total time = 198112.20 ms

@PABannier
Owner

Hi @hswlab!
Thanks for trying it on Google Colab.
I tested the prompt "this is an audio" on multiple machines and it works for me. Maybe the CPU provided by Google Colab is simply not powerful enough to run bark.cpp. A few suggestions:

  • Try a different seed by passing the --seed flag
  • Quantize the weights


qnixsynapse commented Aug 29, 2023

Hi @PABannier! Sorry for asking here, since I don't know how else to contact you, but why does bark_forward_fine_encoder try to allocate 30 GB of memory? None of the weight files is anywhere near that size. The original bark model (non-GGML) also runs fine.

fine_gpt_eval: failed to allocate 32031611289 bytes
bark_forward_fine_encoder: ggml_aligned_malloc: insufficient memory (attempted to allocate 30547.73 MB)
GGML_ASSERT: ggml.c:4408: ctx->mem_buffer != NULL
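For what it's worth, the two figures in that error agree with each other; converting the failed allocation size to mebibytes reproduces ggml's reported megabyte count (an editorial check, assuming ggml's "MB" means 1024 × 1024 bytes):

```python
# Failed allocation size from the fine_gpt_eval error above
attempted_bytes = 32031611289

# ggml reports sizes assuming 1 MB = 1024 * 1024 bytes (mebibytes)
attempted_mb = attempted_bytes / (1024 * 1024)
print(f"attempted allocation: {attempted_mb:.1f} MB")  # ~30547.7 MB, i.e. ~30 GB
```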

@PABannier
Owner

Hi @akarshanbiswas! Thanks for reporting this problem. Are you able to track down which operation causes this surge in memory? Also, which prompt did you give the model?

@qnixsynapse

I followed the same instructions the OP has in his Colab notebook. Additionally, I tested with quantized weights using the scripts available in the repo.

I also found that the weight file with "codec" in its name is not being quantized, and the program crashes in the process.

I haven't checked the core dumps I got yet; I'll do that in a few hours (currently AFK). 🙂

@PABannier
Owner

Yes! The codec weights are intentionally not quantized: quantizing them does not provide any significant speed-up (the forward pass is already fast), but it degrades the audio quality.
I'm currently investigating other problems, so your input on this memory problem is very much welcome ;)

@qnixsynapse

I moved this discussion to a separate issue.


hswlab commented Aug 29, 2023

@PABannier thank you, I quantized the weights and could successfully generate an output.wav.
Here's the whole script for Google Colab, in case anyone else wants to try it out.

# install cmake
!apt update
!apt install -y cmake

# Clone bark c++ from github
%cd /content
!git clone https://github.com/PABannier/bark.cpp.git

# Build
%cd /content/bark.cpp
!mkdir build
%cd ./build
!cmake ..
!cmake --build . --config Release

# install Python dependencies
%cd /content/bark.cpp
!python3 -m pip install -r requirements.txt

# obtain the original bark and encodec weights and place them in ./models
!python3 download_weights.py --download-dir ./models

# convert the model to ggml format
!python3 convert.py \
        --dir-model ./models \
        --codec-path ./models \
        --vocab-path ./ggml_weights/ \
        --out-dir ./ggml_weights/

# Quantize weights
!mkdir ggml_weights_q4
!cp /content/bark.cpp/ggml_weights/ggml_vocab.bin /content/bark.cpp/ggml_weights_q4
!cp /content/bark.cpp/ggml_weights/ggml_weights_codec.bin /content/bark.cpp/ggml_weights_q4
!cp /content/bark.cpp/ggml_weights/vocab.txt /content/bark.cpp/ggml_weights_q4

!/content/bark.cpp/build/bin/quantize /content/bark.cpp/ggml_weights/ggml_weights_text.bin /content/bark.cpp/ggml_weights_q4/ggml_weights_text.bin q4_0
!/content/bark.cpp/build/bin/quantize /content/bark.cpp/ggml_weights/ggml_weights_coarse.bin /content/bark.cpp/ggml_weights_q4/ggml_weights_coarse.bin q4_0
!/content/bark.cpp/build/bin/quantize /content/bark.cpp/ggml_weights/ggml_weights_fine.bin /content/bark.cpp/ggml_weights_q4/ggml_weights_fine.bin q4_0

# run the inference
!/content/bark.cpp/build/bin/main -m ./ggml_weights_q4 -p "Hi this is a test with quantized weights"

# ZIP Files
!zip -r bark.zip /content/bark.cpp/build/bin /content/bark.cpp/ggml_weights /content/bark.cpp/ggml_weights_q4

# Move files to Google Drive for download
from google.colab import drive 
drive.mount('/content/drive') 
!cp /content/bark.cpp/bark.zip '/content/drive/My Drive/'

@khimaros

It would be great to have #95 (comment) linked from, or incorporated into, the README.

@PABannier
Owner

@khimaros I added it to the README. Thank you @hswlab !
