Trying to build a model of PY007/TinyLlama-1.1B-step-50K-105b #3018

Closed
hksk opened this issue Sep 4, 2023 · 8 comments · Fixed by #3364


hksk commented Sep 4, 2023

Prerequisites

Please answer the following questions for yourself before submitting an issue.

  • I am running the latest code. Development is very rapid so there are no tagged versions as of now.
  • I carefully followed the README.md.
  • I searched using keywords relevant to my issue to make sure that I am creating a new issue that is not already open (or closed).
  • I reviewed the Discussions, and have a new bug or useful enhancement to share.

Expected Behavior

A q4 model built from the HF repo.

Current Behavior

Segmentation fault (core dumped)

Environment and Context

Please provide detailed information about your computer setup. This is important in case the issue is not reproducible except for under certain specific conditions.

  • Physical (or virtual) hardware you are using, e.g. for Linux:

(env) server@server:~/llama.cpp$ lscpu
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 46 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 16
On-line CPU(s) list: 0-15
Vendor ID: GenuineIntel
Model name: Intel(R) Xeon(R) W-2145 CPU @ 3.70GHz
CPU family: 6
Model: 85
Thread(s) per core: 2
Core(s) per socket: 8
Socket(s): 1
Stepping: 4
CPU max MHz: 4500.0000
CPU min MHz: 1200.0000
BogoMIPS: 7399.70
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf pni pclmulqdq dtes64 monitor ds_cpl smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb cat_l3 cdp_l3 invpcid_single pti intel_ppin ssbd mba ibrs ibpb stibp fsgsbase tsc_adjust bmi1 hle avx2 smep bmi2 erms invpcid rtm cqm mpx rdt_a avx512f avx512dq rdseed adx smap clflushopt clwb intel_pt avx512cd avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local dtherm ida arat pln pts hwp hwp_act_window hwp_epp hwp_pkg_req md_clear flush_l1d arch_capabilities
Caches (sum of all):
L1d: 256 KiB (8 instances)
L1i: 256 KiB (8 instances)
L2: 8 MiB (8 instances)
L3: 11 MiB (1 instance)
NUMA:
NUMA node(s): 1
NUMA node0 CPU(s): 0-15
Vulnerabilities:
Gather data sampling: Mitigation; Microcode
Itlb multihit: KVM: Mitigation: VMX unsupported
L1tf: Mitigation; PTE Inversion
Mds: Mitigation; Clear CPU buffers; SMT vulnerable
Meltdown: Mitigation; PTI
Mmio stale data: Mitigation; Clear CPU buffers; SMT vulnerable
Retbleed: Mitigation; IBRS
Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Spectre v2: Mitigation; IBRS, IBPB conditional, STIBP conditional, RSB filling, PBRSB-eIBRS Not affected
Srbds: Not affected
Tsx async abort: Mitigation; Clear CPU buffers; SMT vulnerable

  • Operating System, e.g. for Linux:

(env) server@server:~/llama.cpp$ uname -a
Linux server 5.15.0-82-generic #91-Ubuntu SMP Mon Aug 14 14:14:14 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux

  • SDK version, e.g. for Linux:
(env) server@server:~/llama.cpp$ python3 --version
Python 3.10.12
(env) server@server:~/llama.cpp$ make --version
GNU Make 4.3
Built for x86_64-pc-linux-gnu
Copyright (C) 1988-2020 Free Software Foundation, Inc.
License GPLv3+: GNU GPL version 3 or later <http://gnu.org/licenses/gpl.html>
This is free software: you are free to change and redistribute it.
There is NO WARRANTY, to the extent permitted by law.
(env) server@server:~/llama.cpp$ g++ --version
g++ (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Copyright (C) 2021 Free Software Foundation, Inc.
This is free software; see the source for copying conditions.  There is NO
warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.

Failure Information (for bugs)

(env) server@server:~/llama.cpp$ ./quantize models/PY007__TinyLlama-1.1B-step-50K-105b/ggml-model-f16.gguf models/PY007__TinyLlama-1.1B-step-50K-105b/ggml-model-q4_0.gguf q4_0
ggml_init_cublas: found 1 CUDA devices:
Device 0: Quadro P5000, compute capability 6.1
main: build = 1053 (01f2224)
main: quantizing 'models/PY007__TinyLlama-1.1B-step-50K-105b/ggml-model-f16.gguf' to 'models/PY007__TinyLlama-1.1B-step-50K-105b/ggml-model-q4_0.gguf' as Q4_0
Segmentation fault (core dumped)

Steps to Reproduce

Please provide detailed steps for reproducing the issue. We are not sitting in front of your screen, so the more detail the better.

  1. Clone the HF repo https://huggingface.co/PY007/TinyLlama-1.1B-step-50K-105b
  2. $ python3 convert.py models/PY007__TinyLlama-1.1B-step-50K-105b/ --outtype f16
  3. $ ./quantize models/PY007__TinyLlama-1.1B-step-50K-105b/ggml-model-f16.gguf models/PY007__TinyLlama-1.1B-step-50K-105b/ggml-model-q4_0.gguf q4_0

Failure Logs

Just the Segmentation fault (core dumped).

@Green-Sky (Collaborator) commented:

Your llama.cpp main is over 2 weeks old; if convert.py is newer, it will produce GGUF v2 files, which simply crash older code.

I could run it successfully, but I used f32 for the intermediary (since casting down to f16 is technically lossy).
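
For reference, a minimal Python sketch to check which GGUF version a converted file carries, assuming the standard GGUF header layout of a 4-byte 'GGUF' magic followed by a little-endian uint32 version:

import struct
import sys

# Read the GGUF header: 4-byte magic ('GGUF') followed by a little-endian uint32 version.
with open(sys.argv[1], "rb") as f:
    magic = f.read(4)
    (version,) = struct.unpack("<I", f.read(4))

print("magic:", magic, "gguf version:", version)

If this prints version 2 while your ./quantize or ./main binary predates GGUF v2 support, that mismatch would explain the segfault.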


hksk commented Sep 5, 2023

Hi @Green-Sky, thanks for the information. I'm just trying again after recompiling ./main.

(env) server@server:~/llama.cpp$ ./quantize models/PY007__TinyLlama-1.1B-step-50K-105b/ggml-model-f32.gguf models/PY007__TinyLlama-1.1B-step-50K-105b/ggml-model-q4_0.gguf q4_0

The conversion doesn't give me an error, but when I try to run the f32 or q4 model, I still get the segmentation fault:

(env) server@server:~/llama.cpp$ ./main -m models/PY007__TinyLlama-1.1B-step-50K-105b/ggml-model-f32.gguf
Segmentation fault (core dumped)

(env) server@server:~/llama.cpp$ ./main -m models/PY007__TinyLlama-1.1B-step-50K-105b/ggml-model-q4_0.gguf
Segmentation fault (core dumped)

I saw your post on HF earlier; do you have any tips on how I can run it?


hksk commented Sep 5, 2023

I tried cloning again, but this time I built without the GPU option, and the model runs.

f32:
Building a website can be done in 10 simple steps:
Step 1:
The page one of the most important, an article on a page. That's all. You will the world of 20thousa year.
5387516589405886. 8600,03.00600
It'9 1114160008100093181172472928729780850161431512150810851031332104011791028504068585321123192055513466672426004545624358724513063326293581552585361560451182403715216131356438290101919A1993953056003780622458030100683031915814571519541717265957651049946080608201537504982045050101611522690446248458615073020325091017041202622067923330
llama_print_timings: load time = 317.35 ms
llama_print_timings: sample time = 291.57 ms / 400 runs ( 0.73 ms per token, 1371.88 tokens per second)
llama_print_timings: prompt eval time = 290.46 ms / 19 tokens ( 15.29 ms per token, 65.41 tokens per second)
llama_print_timings: eval time = 44434.68 ms / 399 runs ( 111.37 ms per token, 8.98 tokens per second)
llama_print_timings: total time = 45154.49 ms


q4:
Building a website can be done in 10 simple steps:
Step 1: Step 25 6
8a 9 1124017 14611731462
3.72.5.1.82.4.5.
8752548586.51.
4819345-2852-13450.5.58.552160.67.68.96.2.77430.48.5.58.8.8.111.7.1134.02100. 1.410.122351.5
.61913.66.25518.1.81.1.2.7.16.2.4.7.2.68.1607.120
2.60.05.60311.20.2.91120.55.81.3-842.5
56.97.8.7.68485
9.6.5.48.17.23.1.70.1320.2.1.
3.1083.605.239095
S1.011.9.0.
487.5.1.2.52.3-
362. 40.
F.2.72913.7,911160642.64714.1.2.1.1144.1.4.14.5.562.
llama_print_timings: load time = 58.80 ms
llama_print_timings: sample time = 285.19 ms / 400 runs ( 0.71 ms per token, 1402.57 tokens per second)
llama_print_timings: prompt eval time = 168.96 ms / 19 tokens ( 8.89 ms per token, 112.45 tokens per second)
llama_print_timings: eval time = 7559.79 ms / 399 runs ( 18.95 ms per token, 52.78 tokens per second)
llama_print_timings: total time = 8147.60 ms
Log end

--
so I guess this kind of model is more for other uses, like speculative decoding (sorry, I don't understand it yet)


hksk commented Sep 5, 2023

Okay, now I'm more confused: when I compile main with cuBLAS support, the exported model writes those numbers, but when I build without GPU support (just make -j) I get legible words. Is that normal?

Output of ./main without cuBLAS support:

llm_load_tensors: ggml ctx size = 0.06 MB
llm_load_tensors: mem required = 4196.42 MB (+ 11.00 MB per state)
...........................................................................................
llama_new_context_with_model: kv self size = 11.00 MB
llama_new_context_with_model: compute buffer total size = 67.97 MB

system_info: n_threads = 5 / 16 | AVX = 1 | AVX2 = 1 | AVX512 = 1 | AVX512_VBMI = 0 | AVX512_VNNI = 0 | FMA = 1 | NEON = 0 | ARM_FMA = 0 | F16C = 1 | FP16_VA = 0 | WASM_SIMD = 0 | BLAS = 0 | SSE3 = 1 | SSSE3 = 1 | VSX = 0 |
sampling: repeat_last_n = 64, repeat_penalty = 1.100000, presence_penalty = 0.000000, frequency_penalty = 0.000000, top_k = 40, tfs_z = 1.000000, top_p = 0.950000, typical_p = 1.000000, temp = 0.800000, mirostat = 0, mirostat_lr = 0.100000, mirostat_ent = 5.000000
generate: n_ctx = 512, n_batch = 512, n_predict = -1, n_keep = 0

The meaning of life to the human to experience the human.
the first and they are the human. To be able to know how is and what it what the a little that you can take care of of being the person to have been done in your life has been made to get into my was done is and it.
This is a better.
what
The is he is, has been.
s
to put this so well for all this is all. The life to know. what was in its
you have the I the a is done has it's it is what of these as how it the
as well.
has to beer this is at the most.
and you are is this is so that can that they will not.
a what was what is a I was have been, but now.
it is just happened and how we can also in the other person is an has a it was the not me.
this way to it is this is.
was is that.
this is to be. So why did not. The other of I am a this, you had is that. This is, and they are the. the a the so it's been at all you will it, that has been it is it can in the is.
the
I have what you know but of course of course was not. the be done with the most the to the in it is and well, but the was how many other you is
this is a long and in it's this is, that is the and I was never been a has the is what it's the other's an in fact, are the first to have is to know.
in and can be is. The more then hey and it the of, you the in to it has nothing else what I want. I the this is a of for so many

@Green-Sky (Collaborator) commented:

AVX2 (CPU only):

main: build = 1178 (2ba85c8)
main: seed  = 1693876256
llama_model_loader: loaded meta data with 19 key-value pairs and 201 tensors from ../models/TinyLlama-1.1B-step-50K-105b/ggml-model-q4_0.gguf (version GGUF V2 (latest))

...

llama_model_loader: - type  f32:   45 tensors
llama_model_loader: - type q4_0:  155 tensors
llama_model_loader: - type q6_K:    1 tensors
llm_load_print_meta: format         = GGUF V2 (latest)
llm_load_print_meta: arch           = llama
llm_load_print_meta: vocab type     = SPM
llm_load_print_meta: n_vocab        = 32000
llm_load_print_meta: n_merges       = 0
llm_load_print_meta: n_ctx_train    = 2048
llm_load_print_meta: n_ctx          = 512
llm_load_print_meta: n_embd         = 2048
llm_load_print_meta: n_head         = 32
llm_load_print_meta: n_head_kv      = 4
llm_load_print_meta: n_layer        = 22
llm_load_print_meta: n_rot          = 64
llm_load_print_meta: n_gqa          = 8
llm_load_print_meta: f_norm_eps     = 1,0e-05
llm_load_print_meta: f_norm_rms_eps = 1,0e-05
llm_load_print_meta: n_ff           = 5632
llm_load_print_meta: freq_base      = 10000,0
llm_load_print_meta: freq_scale     = 1
llm_load_print_meta: model type     = ?B
llm_load_print_meta: model ftype    = mostly Q4_0
llm_load_print_meta: model size     = 1,10 B
llm_load_print_meta: general.name   = models
llm_load_print_meta: BOS token = 1 '<s>'
llm_load_print_meta: EOS token = 2 '</s>'
llm_load_print_meta: UNK token = 0 '<unk>'
llm_load_print_meta: LF token  = 13 '<0x0A>'
llm_load_tensors: ggml ctx size =    0,06 MB
llm_load_tensors: mem required  =  606,59 MB (+   11,00 MB per state)
......................................................................................
llama_new_context_with_model: kv self size  =   11,00 MB
llama_new_context_with_model: compute buffer total size =   67,97 MB

system_info: n_threads = 12 / 24 | AVX = 1 | AVX2 = 1 | AVX512 = 0 | AVX512_VBMI = 0 | AVX512_VNNI = 0 | FMA = 1 | NEON = 0 | ARM_FMA = 0 | F16C = 1 | FP16_VA = 0 | WASM_SIMD = 0 | BLAS = 0 | SSE3 = 1 | SSSE3 = 1 | VSX = 0 |
sampling: repeat_last_n = 64, repeat_penalty = 1,100000, presence_penalty = 0,000000, frequency_penalty = 0,000000, top_k = 40, tfs_z = 1,000000, top_p = 0,950000, typical_p = 1,000000, temp = 0,800000, mirostat = 0, mirostat_lr = 0,100000, mirostat_ent = 5,000000
generate: n_ctx = 512, n_batch = 512, n_predict = -1, n_keep = 0


 The meaning of life for free foods and we take in the natural farmers who will be able to eat. It is one- it can get with the local food in the time.
The days. They have to buy.
Buy of them.
The.
el in the United States.
Sale of 20,s The most common of all of the term you can be a that.
in the
wanted in. Of the.
being, and a. In a.
he.
fine. This. The. in. On the, in the. In the. In. A-r. This. The. O. So. the nd. to of.
.e. If you all this.e the.
b.
thusinon. A.s e.s of. That is. the that, and. and is.
The one, this the in. I. that. The. The. .
. I.
r. I. The. This. This Is.
r. You. In the of it. the of.i. 6. the, I should.
l and.t ea. I is as. a. the In the tos.
n.
 the one of in I. I . In.e.
This.is to. To. The.
r. It.
in, the.
I the 1s, the. I. 2 the. the.e.l. The.
J, you.a.d. that, of the. a, hey. Is.
, the. I it is.e.is in.the. the I I.
I.
The. The, and the, a,i. I'e and. If it. I. It. The.
1s.
s in, isi. 2)r I. I. In i-n. I.is. I.
It. Is.
A.I.i, ini.l.in.iI. . I. I. This.i,i. The, which. of.e I. I. It. I I. I,a.i.
In.i. A. iIe. and the 1 i. The.a.e.i. In.
in thisi. I

f32:

Log start
main: build = 1178 (2ba85c8)
main: seed  = 1693876360
llama_model_loader: loaded meta data with 17 key-value pairs and 201 tensors from ../models/TinyLlama-1.1B-step-50K-105b/ggml-model-f32.gguf (version GGUF V2 (latest))

...

llama_model_loader: - kv   0:                       general.architecture str
llama_model_loader: - kv   1:                               general.name str
llama_model_loader: - kv   2:                       llama.context_length u32
llama_model_loader: - kv   3:                     llama.embedding_length u32
llama_model_loader: - kv   4:                          llama.block_count u32
llama_model_loader: - kv   5:                  llama.feed_forward_length u32
llama_model_loader: - kv   6:                 llama.rope.dimension_count u32
llama_model_loader: - kv   7:                 llama.attention.head_count u32
llama_model_loader: - kv   8:              llama.attention.head_count_kv u32
llama_model_loader: - kv   9:     llama.attention.layer_norm_rms_epsilon f32
llama_model_loader: - kv  10:                       tokenizer.ggml.model str
llama_model_loader: - kv  11:                      tokenizer.ggml.tokens arr
llama_model_loader: - kv  12:                      tokenizer.ggml.scores arr
llama_model_loader: - kv  13:                  tokenizer.ggml.token_type arr
llama_model_loader: - kv  14:                tokenizer.ggml.bos_token_id u32
llama_model_loader: - kv  15:                tokenizer.ggml.eos_token_id u32
llama_model_loader: - kv  16:            tokenizer.ggml.unknown_token_id u32
llama_model_loader: - type  f32:  201 tensors
llm_load_print_meta: format         = GGUF V2 (latest)
llm_load_print_meta: arch           = llama
llm_load_print_meta: vocab type     = SPM
llm_load_print_meta: n_vocab        = 32000
llm_load_print_meta: n_merges       = 0
llm_load_print_meta: n_ctx_train    = 2048
llm_load_print_meta: n_ctx          = 512
llm_load_print_meta: n_embd         = 2048
llm_load_print_meta: n_head         = 32
llm_load_print_meta: n_head_kv      = 4
llm_load_print_meta: n_layer        = 22
llm_load_print_meta: n_rot          = 64
llm_load_print_meta: n_gqa          = 8
llm_load_print_meta: f_norm_eps     = 1,0e-05
llm_load_print_meta: f_norm_rms_eps = 1,0e-05
llm_load_print_meta: n_ff           = 5632
llm_load_print_meta: freq_base      = 10000,0
llm_load_print_meta: freq_scale     = 1
llm_load_print_meta: model type     = ?B
llm_load_print_meta: model ftype    = all F32 (guessed)
llm_load_print_meta: model size     = 1,10 B
llm_load_print_meta: general.name   = models
llm_load_print_meta: BOS token = 1 '<s>'
llm_load_print_meta: EOS token = 2 '</s>'
llm_load_print_meta: UNK token = 0 '<unk>'
llm_load_print_meta: LF token  = 13 '<0x0A>'
llm_load_tensors: ggml ctx size =    0,06 MB
llm_load_tensors: mem required  = 4196,42 MB (+   11,00 MB per state)
...........................................................................................
llama_new_context_with_model: kv self size  =   11,00 MB
llama_new_context_with_model: compute buffer total size =   67,97 MB

system_info: n_threads = 12 / 24 | AVX = 1 | AVX2 = 1 | AVX512 = 0 | AVX512_VBMI = 0 | AVX512_VNNI = 0 | FMA = 1 | NEON = 0 | ARM_FMA = 0 | F16C = 1 | FP16_VA = 0 | WASM_SIMD = 0 | BLAS = 0 | SSE3 = 1 | SSSE3 = 1 | VSX = 0 |
sampling: repeat_last_n = 64, repeat_penalty = 1,100000, presence_penalty = 0,000000, frequency_penalty = 0,000000, top_k = 40, tfs_z = 1,000000, top_p = 0,950000, typical_p = 1,000000, temp = 0,800000, mirostat = 0, mirostat_lr = 0,100000, mirostat_ent = 5,000000
generate: n_ctx = 512, n_batch = 512, n_predict = -1, n_keep = 0


 The meaning of life. The whole society is a religion. What is the world. I was built. And he has an agreement with a 21980 the government in Germany and is what in Russia's a.
their.
Their.
what is a person's, they, or the the the in the last week and their 1still got to be. 10, but the. It is, a person,the is this man's. The one.
w a.
is to 2.
them who the people is what a person to do i. It is the.

Yeah, quality is not there yet :)

I quickly uploaded my q4_0 so you can check; please compare the file hashes etc.
https://huggingface.co/Green-Sky/TinyLlama-1.1B-step-50K-105b-GGUF/blob/main/ggml-model-q4_0.gguf
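
For the hash comparison, a minimal Python sketch (any checksum tool such as sha256sum works just as well):

import hashlib
import sys

# Compute the sha256 of a (possibly multi-GB) GGUF file in 1 MiB chunks.
h = hashlib.sha256()
with open(sys.argv[1], "rb") as f:
    for chunk in iter(lambda: f.read(1 << 20), b""):
        h.update(chunk)
print(h.hexdigest(), sys.argv[1])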

@Green-Sky (Collaborator) commented:

Update: there is an error somewhere in llama.cpp / convert.py, most likely in the GQA permuting.
jzhang38/TinyLlama#24
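
For context, a rough numpy sketch of the kind of Q/K permutation convert.py applies so the weights match llama.cpp's GPT-J-style RoPE layout; this is an illustrative paraphrase, not the actual convert.py source, and permute_qk is a made-up name:

import numpy as np

def permute_qk(weights: np.ndarray, n_head: int) -> np.ndarray:
    # Hypothetical sketch: regroup each head's rows so the two rotary halves
    # line up with the dimension pairing llama.cpp's default RoPE expects.
    rows = weights.shape[0]
    return (weights.reshape(n_head, 2, rows // n_head // 2, *weights.shape[1:])
                   .swapaxes(1, 2)
                   .reshape(weights.shape))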

@Green-Sky (Collaborator) commented:

Update: @magician-blue found a workaround:

Only do two things: remove the permute part at lines 983 and 987 of convert.py.

Change the RoPE part at lines 2568 and 2572 of llama.cpp (from mode 0 to mode 2).

(GPT-NeoX style RoPE???)

The reason it generates terrible output is that llama's default RoPE rotates pairs of even and odd dimensions (GPT-J style), whereas TinyLlama-1.1 rotates the 1st half against the 2nd half (GPT-NeoX style).

I still don't understand why removing the permute makes the model work.

I remember that converting Meta's llama models needed permuting. Maybe converting HF models doesn't need permuting at all; I'll check this later.

https://huggingface.co/kirp/TinyLlama-1.1B-Chat-v0.2-gguf/

jzhang38/TinyLlama#24 (comment)
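
To make the two rotation conventions concrete, a small numpy sketch of how each style pairs dimensions before rotating (illustrative only, not the llama.cpp implementation):

import numpy as np

def rope_gptj(x: np.ndarray, theta: np.ndarray) -> np.ndarray:
    # GPT-J style: rotate adjacent pairs (x[2i], x[2i+1]) by theta[i].
    out = np.empty_like(x)
    out[0::2] = x[0::2] * np.cos(theta) - x[1::2] * np.sin(theta)
    out[1::2] = x[0::2] * np.sin(theta) + x[1::2] * np.cos(theta)
    return out

def rope_neox(x: np.ndarray, theta: np.ndarray) -> np.ndarray:
    # GPT-NeoX style: rotate pairs (x[i], x[i + d/2]) by theta[i].
    d = x.shape[-1]
    h1, h2 = x[: d // 2], x[d // 2 :]
    return np.concatenate([h1 * np.cos(theta) - h2 * np.sin(theta),
                           h1 * np.sin(theta) + h2 * np.cos(theta)])

Feeding weights laid out for one convention through the other pairing scrambles the rotary dimensions, which matches the garbled output seen above.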

@Green-Sky (Collaborator) commented:

Should be fixed by #3364.
