Squashed commit of the following:
commit 8432e9d
Author: YellowRoseCx <[email protected]>
Date:   Sun Jul 9 16:55:30 2023 -0500

    Update Makefile

commit b58c189
Author: YellowRoseCx <[email protected]>
Date:   Sun Jul 9 16:20:00 2023 -0500

    Add multi-gpu CuBLAS support to new GUI

commit 0c1c71b
Author: YellowRoseCx <[email protected]>
Date:   Sat Jul 8 07:56:57 2023 -0500

    Update Makefile

commit f864f60
Author: Johannes Gäßler <[email protected]>
Date:   Sat Jul 8 00:25:15 2023 +0200

    CUDA: add __restrict__ to mul mat vec kernels (ggerganov#2140)

commit 4539bc2
Author: YellowRoseCx <[email protected]>
Date:   Sat Jul 8 01:36:14 2023 -0500

    update makefile for changes

commit 912e31e
Merge: 74e2703 ddaa4f2
Author: YellowRoseCx <[email protected]>
Date:   Fri Jul 7 23:15:37 2023 -0500

    Merge remote-tracking branch 'upstream/concedo'

commit ddaa4f2
Author: Concedo <[email protected]>
Date:   Fri Jul 7 22:14:14 2023 +0800

    fix cuda garbage results and gpu selection issues

commit 95eca51
Author: Concedo <[email protected]>
Date:   Fri Jul 7 18:39:47 2023 +0800

    add gpu choice for GUI for cuda

commit a689a66
Author: Concedo <[email protected]>
Date:   Fri Jul 7 17:52:34 2023 +0800

    make it work with pyinstaller

commit 9ee9a77
Author: Concedo <[email protected]>
Date:   Fri Jul 7 16:25:37 2023 +0800

    warn outdated GUI (+1 squashed commits)

    Squashed commits:

    [15aec3d] spelling error

commit 32102c2
Merge: 8424a35 481f793
Author: Concedo <[email protected]>
Date:   Fri Jul 7 14:15:39 2023 +0800

    Merge branch 'master' into concedo_experimental

    # Conflicts:
    #	README.md

commit 481f793
Author: Howard Su <[email protected]>
Date:   Fri Jul 7 11:34:18 2023 +0800

    Fix opencl by wrap #if-else-endif with \n (ggerganov#2086)

commit dfd9fce
Author: Georgi Gerganov <[email protected]>
Date:   Thu Jul 6 19:41:31 2023 +0300

    ggml : fix restrict usage

commit 36680f6
Author: Judd <[email protected]>
Date:   Fri Jul 7 00:23:49 2023 +0800

    convert : update for baichuan (ggerganov#2081)

    1. guess n_layers;
    2. relax warnings on context size;
    3. add a note that its derivations are also supported.

    Co-authored-by: Judd <[email protected]>

commit a17a268
Author: tslmy <[email protected]>
Date:   Thu Jul 6 09:17:50 2023 -0700

    alpaca.sh : update model file name (ggerganov#2074)

    The original file name, `ggml-alpaca-7b-q4.bin`, implied the first-generation GGML. After the breaking changes (mentioned in ggerganov#382), `llama.cpp` requires GGML V3 now. Those model files are named `*ggmlv3*.bin`. We should change the example to an actually working model file, so that this thing is more likely to run out-of-the-box for more people, and less people would waste time downloading the old Alpaca model.

commit 8424a35
Author: Concedo <[email protected]>
Date:   Thu Jul 6 23:24:21 2023 +0800

    added the ability to ban any substring tokens

commit 27a0907
Author: Concedo <[email protected]>
Date:   Thu Jul 6 22:33:46 2023 +0800

    backport MM256_SET_M128I to ggml_v2, updated lite, added support for selecting the GPU for cublas

commit 220aa70
Merge: 4d1700b 31cfbb1
Author: Concedo <[email protected]>
Date:   Thu Jul 6 15:40:40 2023 +0800

    Merge branch 'master' into concedo_experimental

    # Conflicts:
    #	.github/workflows/build.yml
    #	CMakeLists.txt
    #	Makefile
    #	README.md
    #	pocs/vdot/q8dot.cpp
    #	pocs/vdot/vdot.cpp
    #	scripts/sync-ggml.sh
    #	tests/test-grad0.c
    #	tests/test-quantize-fns.cpp
    #	tests/test-quantize-perf.cpp

commit 4d1700b
Author: Concedo <[email protected]>
Date:   Thu Jul 6 15:17:47 2023 +0800

    adjust some ui sizing

commit 1c80002
Author: Vali-98 <[email protected]>
Date:   Thu Jul 6 15:00:57 2023 +0800

    New UI using customtkinter (LostRuins#284)

    * Initial conversion to customtkinter.

    * Initial conversion to customtkinter.

    * Additions to UI, still non-functional

    * UI now functional, untested

    * UI now functional, untested

    * Added saving configs

    * Saving and loading now functional

    * Fixed sliders not loading

    * Cleaned up duplicate arrays

    * Cleaned up duplicate arrays

    * Fixed loading bugs

    * wip fixing all the broken parameters. PLEASE test before you commit

    * further cleaning

    * bugfix completed for gui. now evaluating save and load

    * cleanup prepare to merge

    ---------

    Co-authored-by: Concedo <[email protected]>

commit 31cfbb1
Author: Tobias Lütke <[email protected]>
Date:   Wed Jul 5 16:51:13 2023 -0400

    Expose generation timings from server & update completions.js (ggerganov#2116)

    * use javascript generators as much cleaner API

    Also add ways to access completion as promise and EventSource

    * export llama_timings as struct and expose them in server

    * update readme, update baked includes

    * llama : uniform variable names + struct init

    ---------

    Co-authored-by: Georgi Gerganov <[email protected]>

commit 74e2703
Merge: cf65429 f9108ba
Author: YellowRoseCx <[email protected]>
Date:   Wed Jul 5 15:16:49 2023 -0500

    Merge branch 'LostRuins:concedo' into main

commit 983b555
Author: Jesse Jojo Johnson <[email protected]>
Date:   Wed Jul 5 18:03:19 2023 +0000

    Update Server Instructions (ggerganov#2113)

    * Update server instructions for web front end
    * Update server README
    * Remove duplicate OAI instructions
    * Fix duplicate text

    ---------

    Co-authored-by: Jesse Johnson <[email protected]>

commit ec326d3
Author: Georgi Gerganov <[email protected]>
Date:   Wed Jul 5 20:44:11 2023 +0300

    ggml : fix bug introduced in ggerganov#1237

commit 1b6efea
Author: Georgi Gerganov <[email protected]>
Date:   Wed Jul 5 20:20:05 2023 +0300

    tests : fix test-grad0

commit 1b107b8
Author: Stephan Walter <[email protected]>
Date:   Wed Jul 5 16:13:06 2023 +0000

    ggml : generalize `quantize_fns` for simpler FP16 handling (ggerganov#1237)

    * Generalize quantize_fns for simpler FP16 handling

    * Remove call to ggml_cuda_mul_mat_get_wsize

    * ci : disable FMA for mac os actions

    ---------

    Co-authored-by: Georgi Gerganov <[email protected]>

commit 8567c76
Author: Jesse Jojo Johnson <[email protected]>
Date:   Wed Jul 5 15:13:35 2023 +0000

    Update server instructions for web front end (ggerganov#2103)

    Co-authored-by: Jesse Johnson <[email protected]>

commit 924dd22
Author: Johannes Gäßler <[email protected]>
Date:   Wed Jul 5 14:19:42 2023 +0200

    Quantized dot products for CUDA mul mat vec (ggerganov#2067)

commit 051c70d
Author: Howard Su <[email protected]>
Date:   Wed Jul 5 18:31:23 2023 +0800

    llama: Don't double count the sampling time (ggerganov#2107)

commit ea79e54
Author: Concedo <[email protected]>
Date:   Wed Jul 5 17:29:35 2023 +0800

    fixed refusing to quantize some models

commit 9e4475f
Author: Johannes Gäßler <[email protected]>
Date:   Wed Jul 5 08:58:05 2023 +0200

    Fixed OpenCL offloading prints (ggerganov#2082)

commit 7f0e9a7
Author: Nigel Bosch <[email protected]>
Date:   Tue Jul 4 18:33:33 2023 -0500

    embd-input: Fix input embedding example unsigned int seed (ggerganov#2105)

commit b472f3f
Author: Georgi Gerganov <[email protected]>
Date:   Tue Jul 4 22:25:22 2023 +0300

    readme : add link web chat PR

commit ed9a54e
Author: Georgi Gerganov <[email protected]>
Date:   Tue Jul 4 21:54:11 2023 +0300

    ggml : sync latest (new ops, macros, refactoring) (ggerganov#2106)

    - add ggml_argmax()
    - add ggml_tanh()
    - add ggml_elu()
    - refactor ggml_conv_1d() and variants
    - refactor ggml_conv_2d() and variants
    - add helper macros to reduce code duplication in ggml.c

commit f257fd2
Author: jwj7140 <[email protected]>
Date:   Wed Jul 5 03:06:12 2023 +0900

    Add an API example using server.cpp similar to OAI. (ggerganov#2009)

    * add api_like_OAI.py
    * add evaluated token count to server
    * add /v1/ endpoints binding

commit 7ee76e4
Author: Tobias Lütke <[email protected]>
Date:   Tue Jul 4 10:05:27 2023 -0400

    Simple webchat for server (ggerganov#1998)

    * expose simple web interface on root domain

    * embed index and add --path for choosing static dir

    * allow server to multithread

    because web browsers send a lot of garbage requests we want the server
    to multithread when serving 404s for favicons etc. To avoid blowing up
    llama we just take a mutex when it's invoked.

    * let's try this with the xxd tool instead and see if msvc is happier with that

    * enable server in Makefiles

    * add /completion.js file to make it easy to use the server from js

    * slightly nicer css

    * rework state management into session, expose historyTemplate to settings

    ---------

    Co-authored-by: Georgi Gerganov <[email protected]>

commit acc111c
Author: Henri Vasserman <[email protected]>
Date:   Tue Jul 4 15:38:04 2023 +0300

    Allow old Make to build server. (ggerganov#2098)

    Also make server build by default.

    Tested with Make 3.82

commit 23c7c6f
Author: ZhouYuChen <[email protected]>
Date:   Tue Jul 4 20:15:16 2023 +0800

    Update Makefile: clean simple (ggerganov#2097)

commit 69add28
Merge: 00e35d0 698efad
Author: Concedo <[email protected]>
Date:   Tue Jul 4 18:51:42 2023 +0800

    Merge branch 'master' into concedo_experimental

    # Conflicts:
    #	.github/workflows/build.yml

commit 00e35d0
Merge: fff705d f9108ba
Author: Concedo <[email protected]>
Date:   Tue Jul 4 18:46:40 2023 +0800

    Merge branch 'concedo' into concedo_experimental

commit f9108ba
Author: Michael Moon <[email protected]>
Date:   Tue Jul 4 18:46:08 2023 +0800

    Make koboldcpp.py executable on Linux (LostRuins#293)

commit fff705d
Merge: 784628a c6c0afd
Author: Concedo <[email protected]>
Date:   Tue Jul 4 18:42:02 2023 +0800

    Merge remote-tracking branch 'ycros/improve-sampler-api-access' into concedo_experimental

commit c6c0afd
Author: Concedo <[email protected]>
Date:   Tue Jul 4 18:35:03 2023 +0800

    refactor to avoid code duplication

commit 784628a
Merge: ca9a116 309534d
Author: Concedo <[email protected]>
Date:   Tue Jul 4 16:38:32 2023 +0800

    Merge remote-tracking branch 'ycros/improve-sampler-api-access' into concedo_experimental

commit 698efad
Author: Erik Scholz <[email protected]>
Date:   Tue Jul 4 01:50:12 2023 +0200

    CI: make the brew update temporarily optional. (ggerganov#2092)

    until they decide to fix the brew installation in the macOS runners.
    See the open issues, e.g. actions/runner-images#7710

commit 14a2cc7
Author: Govlzkoy <[email protected]>
Date:   Tue Jul 4 07:50:00 2023 +0800

    [ggml] fix index for ne03 value in ggml_cl_mul_f32 (ggerganov#2088)

commit cf65429
Author: YellowRoseCx <[email protected]>
Date:   Mon Jul 3 16:56:40 2023 -0500

    print cuda or opencl based on what's used

commit 72c16d2
Author: YellowRoseCx <[email protected]>
Date:   Mon Jul 3 16:45:39 2023 -0500

    Revert "fix my mistake that broke other arches"

    This reverts commit 777aed5.

commit 1cf14cc
Author: Henri Vasserman <[email protected]>
Date:   Tue Jul 4 00:05:23 2023 +0300

    fix server crashes (ggerganov#2076)

commit 777aed5
Author: YellowRoseCx <[email protected]>
Date:   Mon Jul 3 15:53:32 2023 -0500

    fix my mistake that broke other arches

commit cc45a7f
Author: Howard Su <[email protected]>
Date:   Tue Jul 4 02:43:55 2023 +0800

    Fix crash of test-tokenizer-0 under Debug build (ggerganov#2064)

    * Fix crash of test-tokenizer-0 under Debug build

    * Change per comment

commit ca9a116
Author: Concedo <[email protected]>
Date:   Tue Jul 4 00:35:02 2023 +0800

    possibly slower, but cannot use larger batches without modifying ggml library.

commit bfeb347
Author: Concedo <[email protected]>
Date:   Mon Jul 3 21:36:42 2023 +0800

    fix typos

commit 55dbb91
Author: Howard Su <[email protected]>
Date:   Mon Jul 3 19:58:58 2023 +0800

    [llama] No need to check file version when loading vocab score (ggerganov#2079)

commit d7d2e6a
Author: WangHaoranRobin <[email protected]>
Date:   Mon Jul 3 05:38:44 2023 +0800

    server: add option to output probabilities for completion (ggerganov#1962)

    * server: add option to output probabilities for completion
    * server: fix issue when handling probability output for incomplete tokens for multibyte character generation
    * server: fix llama_sample_top_k order
    * examples/common.h: put all bool variables in gpt_params together

commit 27780a9
Author: YellowRoseCx <[email protected]>
Date:   Sun Jul 2 16:03:27 2023 -0500

    rocm fixes

commit f52c7d4
Author: YellowRoseCx <[email protected]>
Date:   Sun Jul 2 16:02:58 2023 -0500

    Revert "rocm fixes"

    This reverts commit 2fe9927.

commit 2fe9927
Author: YellowRoseCx <[email protected]>
Date:   Sun Jul 2 15:58:21 2023 -0500

    rocm fixes

commit efe7560
Author: YellowRoseCx <[email protected]>
Date:   Sun Jul 2 15:55:43 2023 -0500

    Revert "move HIPBLAS definitions into ggml-cuda.h"

    This reverts commit bf49a93.

commit 4fc0181
Author: YellowRoseCx <[email protected]>
Date:   Sun Jul 2 15:55:36 2023 -0500

    Revert "move hipblas definitions to header files"

    This reverts commit 2741ffb.

commit 89eb576
Merge: 2741ffb 3d2907d
Author: YellowRoseCx <[email protected]>
Date:   Sun Jul 2 14:44:13 2023 -0500

    Merge branch 'LostRuins:concedo' into main

commit 309534d
Author: Ycros <[email protected]>
Date:   Sun Jul 2 18:15:34 2023 +0000

    implement sampler order, expose sampler order and mirostat in api

commit 3d2907d
Author: Concedo <[email protected]>
Date:   Sun Jul 2 18:28:09 2023 +0800

    make gptneox and gptj work with extended context too

commit d6b47e6
Merge: e17c849 46088f7
Author: Concedo <[email protected]>
Date:   Sun Jul 2 17:26:39 2023 +0800

    Merge branch 'master' into concedo_experimental

commit e17c849
Author: Concedo <[email protected]>
Date:   Sun Jul 2 17:25:08 2023 +0800

    switched to NTK aware scaling

commit e19483c
Author: Concedo <[email protected]>
Date:   Sun Jul 2 14:55:08 2023 +0800

    increase scratch for above 4096

commit 46088f7
Author: Georgi Gerganov <[email protected]>
Date:   Sun Jul 2 09:46:46 2023 +0300

    ggml : fix build with OpenBLAS (close ggerganov#2066)

commit b85ea58
Merge: ef3b8dc 0bc2cdf
Author: Concedo <[email protected]>
Date:   Sun Jul 2 14:45:25 2023 +0800

    Merge branch 'master' into concedo_experimental

    # Conflicts:
    #	README.md

commit 2741ffb
Author: YellowRoseCx <[email protected]>
Date:   Sat Jul 1 17:07:42 2023 -0500

    move hipblas definitions to header files

commit bf49a93
Author: YellowRoseCx <[email protected]>
Date:   Sat Jul 1 16:38:50 2023 -0500

    move HIPBLAS definitions into ggml-cuda.h

commit 540f4e0
Merge: 2c3b46f eda663f
Author: YellowRoseCx <[email protected]>
Date:   Sat Jul 1 14:58:32 2023 -0500

    Merge remote-tracking branch 'upstream/concedo'

commit 0bc2cdf
Author: Johannes Gäßler <[email protected]>
Date:   Sat Jul 1 21:49:44 2023 +0200

    Better CUDA synchronization logic (ggerganov#2057)

commit befb3a3
Author: Johannes Gäßler <[email protected]>
Date:   Sat Jul 1 21:47:26 2023 +0200

    Test-based VRAM scratch size + context adjustment (ggerganov#2056)

commit b213227
Author: Daniel Drake <[email protected]>
Date:   Sat Jul 1 20:31:44 2023 +0200

    cmake : don't force -mcpu=native on aarch64 (ggerganov#2063)

    It's currently not possible to cross-compile llama.cpp for aarch64
    because CMakeLists.txt forces -mcpu=native for that target.

    -mcpu=native doesn't make sense if your build host is not the
    target architecture, and clang rejects it for that reason, aborting the
    build. This can be easily reproduced using the current Android NDK to build
    for aarch64 on an x86_64 host.

    If there is not a specific CPU-tuning target for aarch64 then -mcpu
    should be omitted completely. I think that makes sense, there is not
    enough variance in the aarch64 instruction set to warrant a fixed -mcpu
    optimization at this point. And if someone is building natively and wishes
    to enable any possible optimizations for the host device, then there is
    already the LLAMA_NATIVE option available.

    Fixes LostRuins#495.
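
For context, the kind of cross-compile configure this change unblocks might look like the sketch below (the NDK path, ABI, and API level are placeholder assumptions, not taken from the commit):

```sh
# Hypothetical aarch64 cross-compile via the Android NDK toolchain file; with the
# forced -mcpu=native removed, clang no longer rejects this configure step.
cmake -B build-aarch64 \
  -DCMAKE_TOOLCHAIN_FILE=$ANDROID_NDK/build/cmake/android.toolchain.cmake \
  -DANDROID_ABI=arm64-v8a \
  -DANDROID_PLATFORM=android-28
cmake --build build-aarch64 --config Release
```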

commit 2f8cd97
Author: Aaron Miller <[email protected]>
Date:   Sat Jul 1 11:14:59 2023 -0700

    metal : release buffers when freeing metal context (ggerganov#2062)

commit 471aab6
Author: Judd <[email protected]>
Date:   Sun Jul 2 01:00:25 2023 +0800

    convert : add support of baichuan-7b (ggerganov#2055)

    Co-authored-by: Judd <[email protected]>

commit 463f2f4
Author: Georgi Gerganov <[email protected]>
Date:   Sat Jul 1 19:05:09 2023 +0300

    llama : fix return value of llama_load_session_file_internal (ggerganov#2022)

commit cb44dbc
Author: Rand Xie <[email protected]>
Date:   Sun Jul 2 00:02:58 2023 +0800

    llama : catch llama_load_session_file_internal exceptions (ggerganov#2022)

    * convert checks in llama_load_session_file to throw and handle them

    * make llama_load_session_file_internal static

    * address feedbacks to avoid using exceptions

commit 79f634a
Author: Georgi Gerganov <[email protected]>
Date:   Sat Jul 1 18:46:00 2023 +0300

    embd-input : fix returning ptr to temporary

commit 04606a1
Author: Georgi Gerganov <[email protected]>
Date:   Sat Jul 1 18:45:44 2023 +0300

    train : fix compile warning

commit b1ca8f3
Author: Qingyou Meng <[email protected]>
Date:   Sat Jul 1 23:42:43 2023 +0800

    ggml : disable GGML_TASK_INIT and GGML_TASK_FINALIZE by default (ggerganov#1995)

    Will not be scheduled unless explicitly enabled.

commit 2c3b46f
Author: YellowRoseCx <[email protected]>
Date:   Thu Jun 29 18:43:43 2023 -0500

    changes to fix build

commit c9e1103
Author: YellowRoseCx <[email protected]>
Date:   Thu Jun 29 18:20:07 2023 -0500

    Update ggml_v2-cuda-legacy.cu for ROCM

commit b858fc5
Author: YellowRoseCx <[email protected]>
Date:   Thu Jun 29 17:49:39 2023 -0500

    changes to work with upstream

commit 69a0c25
Merge: 096f0b0 1347d3a
Author: YellowRoseCx <[email protected]>
Date:   Thu Jun 29 16:59:06 2023 -0500

    Merge remote-tracking branch 'upstream/concedo'

commit 096f0b0
Author: YellowRoseCx <[email protected]>
Date:   Wed Jun 28 15:27:02 2023 -0500

    revert unnecessary hipblas conditionals

commit d81e81a
Author: YellowRoseCx <[email protected]>
Date:   Wed Jun 28 14:48:23 2023 -0500

    Update Makefile hipblas nvcc correction

commit 2579ecf
Merge: abed427 d2034ce
Author: YellowRoseCx <[email protected]>
Date:   Sun Jun 25 17:50:04 2023 -0500

    Merge branch 'LostRuins:concedo' into main

commit abed427
Author: YellowRoseCx <[email protected]>
Date:   Sat Jun 24 19:16:30 2023 -0500

    reorganize If statements to include proper headers

commit 06c3bf0
Merge: ea6d320 8342fe8
Author: YellowRoseCx <[email protected]>
Date:   Sat Jun 24 16:57:20 2023 -0500

    Merge branch 'LostRuins:concedo' into main

commit ea6d320
Author: YellowRoseCx <[email protected]>
Date:   Fri Jun 23 01:53:28 2023 -0500

    Update README.md

commit 4d56ad8
Author: YellowRoseCx <[email protected]>
Date:   Thu Jun 22 16:19:43 2023 -0500

    Update README.md

commit 21f9308
Author: YellowRoseCx <[email protected]>
Date:   Thu Jun 22 15:42:05 2023 -0500

    kquants_iter for hipblas and add gfx803

commit b6ff890
Merge: eb094f0 e6ddb15
Author: YellowRoseCx <[email protected]>
Date:   Thu Jun 22 12:42:09 2023 -0500

    Merge branch 'LostRuins:concedo' into main

commit eb094f0
Author: YellowRoseCx <[email protected]>
Date:   Wed Jun 21 23:59:18 2023 -0500

    lowvram parameter description

commit 3a5dfeb
Merge: 665cc11 b1f00fa
Author: YellowRoseCx <[email protected]>
Date:   Wed Jun 21 16:53:03 2023 -0500

    Merge branch 'LostRuins:concedo' into koboldcpp-rocm

commit 665cc11
Author: YellowRoseCx <[email protected]>
Date:   Wed Jun 21 01:13:19 2023 -0500

    add lowvram parameter

commit 222cbbb
Author: YellowRoseCx <[email protected]>
Date:   Tue Jun 20 19:03:28 2023 -0500

    add additional hipblas conditions for cublas

commit e1f9581
Author: YellowRoseCx <[email protected]>
Date:   Tue Jun 20 16:51:59 2023 -0500

    Add hip def for cuda v2

commit 3bff5c0
Merge: a7e74b3 266d47a
Author: YellowRoseCx <[email protected]>
Date:   Tue Jun 20 13:38:06 2023 -0500

    Merge branch 'LostRuins:concedo' into koboldcpp-rocm

commit a7e74b3
Author: YellowRoseCx <[email protected]>
Date:   Mon Jun 19 22:04:18 2023 -0500

    Update README.md

commit 5e99b3c
Author: YellowRoseCx <[email protected]>
Date:   Mon Jun 19 22:03:42 2023 -0500

    Update Makefile

commit 9190b17
Author: YellowRoseCx <[email protected]>
Date:   Mon Jun 19 21:47:10 2023 -0500

    Update README.md

commit 2780ea2
Author: YellowRoseCx <[email protected]>
Date:   Sun Jun 18 15:48:00 2023 -0500

    Update Makefile

commit 04a3e64
Author: YellowRoseCx <[email protected]>
Date:   Sun Jun 18 14:33:39 2023 -0500

    remove extra line

commit cccbca9
Author: YellowRoseCx <[email protected]>
Date:   Sun Jun 18 14:31:17 2023 -0500

    attempt adding ROCM hipblas

commit a44a1d4
Author: YellowRoseCx <[email protected]>
Date:   Sun Jun 18 14:31:01 2023 -0500

    attempt adding ROCM hipblas

commit b088184
Author: YellowRoseCx <[email protected]>
Date:   Sun Jun 18 14:30:54 2023 -0500

    attempt adding ROCM hipblas
YellowRoseCx committed Jul 10, 2023
1 parent 631b115 commit 8a8218c
Showing 47 changed files with 7,043 additions and 2,233 deletions.
20 changes: 14 additions & 6 deletions CMakeLists.txt
@@ -41,9 +41,10 @@ if (NOT MSVC)
endif()

# 3rd party libs
option(LLAMA_CUBLAS "llama: use cuBLAS" ON)
option(LLAMA_CUBLAS "llama: use cuBLAS" OFF)
set(LLAMA_CUDA_DMMV_X "32" CACHE STRING "llama: x stride for dmmv CUDA kernels")
set(LLAMA_CUDA_DMMV_Y "1" CACHE STRING "llama: y block size for dmmv CUDA kernels")
set(LLAMA_CUDA_MMV_Y "1" CACHE STRING "llama: y block size for mmv CUDA kernels")
option(LLAMA_CUDA_DMMV_F16 "llama: use 16 bit floats for dmmv CUDA kernels" OFF)
set(LLAMA_CUDA_KQUANTS_ITER "2" CACHE STRING "llama: iters./thread per block for Q2_K/Q6_K")
option(LLAMA_HIPBLAS "llama: use hipBLAS" OFF)
@@ -77,8 +78,11 @@ if (LLAMA_CUBLAS)
set(GGML_V2_LEGACY_CUDA_SOURCES otherarch/ggml_v2-cuda-legacy.cu otherarch/ggml_v2-cuda-legacy.h)

add_compile_definitions(GGML_USE_CUBLAS)
add_compile_definitions(GGML_CUDA_FORCE_DMMV) #non dmmv broken for me

add_compile_definitions(GGML_CUDA_DMMV_X=${LLAMA_CUDA_DMMV_X})
add_compile_definitions(GGML_CUDA_DMMV_Y=${LLAMA_CUDA_DMMV_Y})
add_compile_definitions(GGML_CUDA_MMV_Y=${LLAMA_CUDA_MMV_Y})
if (LLAMA_CUDA_DMMV_F16)
add_compile_definitions(GGML_CUDA_DMMV_F16)
endif()
@@ -90,6 +94,15 @@ if (LLAMA_CUBLAS)
set(LLAMA_EXTRA_LIBS ${LLAMA_EXTRA_LIBS} CUDA::cudart CUDA::cublas CUDA::cublasLt)
endif()

if (NOT DEFINED CMAKE_CUDA_ARCHITECTURES)
if (LLAMA_CUDA_DMMV_F16)
set(CMAKE_CUDA_ARCHITECTURES "61") # needed for f16 CUDA intrinsics
else()
set(CMAKE_CUDA_ARCHITECTURES "52;61") # lowest CUDA 12 standard + lowest for integer intrinsics
endif()
endif()
message(STATUS "Using CUDA architectures: ${CMAKE_CUDA_ARCHITECTURES}")

else()
message(WARNING "cuBLAS not found")
endif()
@@ -200,11 +213,6 @@ if (${CMAKE_SYSTEM_PROCESSOR} MATCHES "arm" OR ${CMAKE_SYSTEM_PROCESSOR} MATCHES
if (MSVC)
# TODO: arm msvc?
else()
if (${CMAKE_SYSTEM_PROCESSOR} MATCHES "aarch64")
# Apple M1, M2, etc.
# Raspberry Pi 3, 4, Zero 2 (64-bit)
add_compile_options(-mcpu=native)
endif()
if (${CMAKE_SYSTEM_PROCESSOR} MATCHES "armv6")
# Raspberry Pi 1, Zero
add_compile_options(-mfpu=neon-fp-armv8 -mfp16-format=ieee -mno-unaligned-access)
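
As a hedged sketch of how these CMake options fit together (the values are illustrative only): LLAMA_CUBLAS now defaults to OFF, so a CUDA build has to opt in explicitly, and CMAKE_CUDA_ARCHITECTURES can still be set by hand to override the 52;61 default chosen above.

```sh
# Illustrative configure/build; every flag comes from the options above,
# but the specific values are examples, not recommendations.
cmake -B build -DLLAMA_CUBLAS=ON -DLLAMA_CUDA_DMMV_F16=ON -DCMAKE_CUDA_ARCHITECTURES=61
cmake --build build --config Release
```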
41 changes: 30 additions & 11 deletions Makefile
@@ -144,16 +144,18 @@ ifdef LLAMA_CUBLAS
CUBLASLD_FLAGS = -lcublas -lculibos -lcudart -lcublasLt -lpthread -ldl -lrt -L/usr/local/cuda/lib64 -L/opt/cuda/lib64 -L$(CUDA_PATH)/targets/x86_64-linux/lib
CUBLAS_OBJS = ggml-cuda.o ggml_v2-cuda.o ggml_v2-cuda-legacy.o
NVCC = nvcc
NVCCFLAGS = --forward-unknown-to-host-compiler -arch=native
NVCCFLAGS = --forward-unknown-to-host-compiler -arch=native -DGGML_CUDA_FORCE_DMMV
ifdef LLAMA_CUDA_DMMV_X
NVCCFLAGS += -DGGML_CUDA_DMMV_X=$(LLAMA_CUDA_DMMV_X)
else
NVCCFLAGS += -DGGML_CUDA_DMMV_X=32
endif # LLAMA_CUDA_DMMV_X
ifdef LLAMA_CUDA_DMMV_Y
NVCCFLAGS += -DGGML_CUDA_MMV_Y=$(LLAMA_CUDA_MMV_Y)
NVCCFLAGS += -DGGML_CUDA_DMMV_Y=$(LLAMA_CUDA_DMMV_Y)
else
NVCCFLAGS += -DGGML_CUDA_DMMV_Y=1
NVCCFLAGS += -DGGML_CUDA_MMV_Y=1
endif # LLAMA_CUDA_DMMV_Y
ifdef LLAMA_CUDA_DMMV_F16
NVCCFLAGS += -DGGML_CUDA_DMMV_F16
@@ -175,23 +177,40 @@ ifdef LLAMA_HIPBLAS
ROCM_PATH ?= /opt/rocm
CC := $(ROCM_PATH)/llvm/bin/clang
CXX := $(ROCM_PATH)/llvm/bin/clang++
GPU_TARGETS = gfx803 gfx900 gfx906 gfx908 gfx90a gfx1030
GPU_TARGETS = gfx803 gfx900 gfx906 gfx908 gfx90a gfx1030 gfx1100
LLAMA_CUDA_DMMV_X ?= 64
LLAMA_CUDA_DMMV_Y ?= 2
LLAMA_CUDA_MMV_Y ?= 2
LLAMA_CUDA_FORCE_DMMV = true
CFLAGS += -DGGML_USE_HIPBLAS -DGGML_USE_CUBLAS $(shell $(ROCM_PATH)/bin/hipconfig -C)
CXXFLAGS += -DGGML_USE_HIPBLAS -DGGML_USE_CUBLAS $(shell $(ROCM_PATH)/bin/hipconfig -C)
LDFLAGS += -L/opt/rocm/lib -Wl,-rpath=$(ROCM_PATH)/lib -lhipblas -lamdhip64
OBJS += ggml-cuda.o ggml_v2-cuda.o ggml_v2-cuda-legacy.o

ifdef LLAMA_CUDA_DMMV_X
CXXFLAGS += -DGGML_CUDA_DMMV_X=$(LLAMA_CUDA_DMMV_X)
else
CXXFLAGS += -DGGML_CUDA_DMMV_X=32
endif
ifeq ($(LLAMA_CUDA_FORCE_DMMV), true)
CXXFLAGS += -DGGML_CUDA_FORCE_DMMV
endif
ifdef LLAMA_CUDA_MMV_Y
CXXFLAGS += -DGGML_CUDA_MMV_Y=$(LLAMA_CUDA_MMV_Y)
else ifdef LLAMA_CUDA_DMMV_Y
CXXFLAGS += -DGGML_CUDA_MMV_Y=$(LLAMA_CUDA_DMMV_Y) # for backwards compatibility
else
CXXFLAGS += -DGGML_CUDA_MMV_Y=1
endif

ifdef LLAMA_CUDA_KQUANTS_ITER
CXXFLAGS += -DK_QUANTS_PER_ITERATION=$(LLAMA_CUDA_KQUANTS_ITER)
else
CXXFLAGS += -DK_QUANTS_PER_ITERATION=2
endif

ggml-cuda.o: CXXFLAGS += $(addprefix --offload-arch=,$(GPU_TARGETS)) \
-DGGML_CUDA_DMMV_X=$(LLAMA_CUDA_DMMV_X) \
-DGGML_CUDA_DMMV_Y=$(LLAMA_CUDA_DMMV_Y)
ggml-cuda.o: CXXFLAGS += $(addprefix --offload-arch=,$(GPU_TARGETS))


# DGGML_CUDA_DMMV_F16 does not currently work with AMD.
ggml-cuda.o: ggml-cuda.cu ggml-cuda.h
$(CXX) $(CXXFLAGS) -x hip -c -o $@ $<
@@ -259,11 +278,11 @@ else
OPENBLAS_NOAVX2_BUILD = $(CXX) $(CXXFLAGS) $^ $(ARCH_ADD) -lopenblas -shared -o $@.so $(LDFLAGS)
endif
ifdef LLAMA_CLBLAST
ifeq ($(UNAME_S),Darwin)
CLBLAST_BUILD = $(CXX) $(CXXFLAGS) $^ -lclblast -framework OpenCL $(ARCH_ADD) -lopenblas -shared -o $@.so $(LDFLAGS)
else
CLBLAST_BUILD = $(CXX) $(CXXFLAGS) $^ -lclblast -lOpenCL $(ARCH_ADD) -lopenblas -shared -o $@.so $(LDFLAGS)
endif
ifeq ($(UNAME_S),Darwin)
CLBLAST_BUILD = $(CXX) $(CXXFLAGS) $^ -lclblast -framework OpenCL $(ARCH_ADD) -lopenblas -shared -o $@.so $(LDFLAGS)
else
CLBLAST_BUILD = $(CXX) $(CXXFLAGS) $^ -lclblast -lOpenCL $(ARCH_ADD) -lopenblas -shared -o $@.so $(LDFLAGS)
endif
endif

ifdef LLAMA_CUBLAS
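
A minimal sketch of a ROCm build driven by the Makefile variables above (assuming a standard /opt/rocm install; the tuning values shown are just the defaults made explicit):

```sh
# Illustrative hipBLAS build; LLAMA_CUDA_MMV_Y and LLAMA_CUDA_KQUANTS_ITER are optional
# and fall back to the defaults set in the Makefile above.
make clean
make LLAMA_HIPBLAS=1 LLAMA_CUDA_MMV_Y=2 LLAMA_CUDA_KQUANTS_ITER=2 -j
```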
19 changes: 18 additions & 1 deletion README.md
@@ -1,5 +1,22 @@
# koboldcpp
# koboldcpp-ROCM

To install, run:
```make LLAMA_HIPBLAS=1```
To use ROCm, set the number of GPU layers to offload with `--gpulayers` when starting koboldcpp.
Original [llama.cpp ROCm port](https://github.com/ggerganov/llama.cpp/pull/1087) by SlyEcho, ported to koboldcpp by YellowRoseCx.
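
As an illustrative launch (the model file name and layer count below are placeholders, not part of this commit), offloading with ROCm might look like:

```sh
# Hypothetical example; substitute your own GGML model file and layer count.
python koboldcpp.py models/robin-7b.ggmlv3.q6_K.bin --gpulayers 32
```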

Comparison with OpenCL using 6800xt
| Model | Offloading Method | Time Taken - Processing 593 tokens| Time Taken - Generating 200 tokens| Total Time | Perf. Diff.
|-----------------|----------------------------|--------------------|--------------------|------------|---|
| Robin 7b q6_K |CLBLAST 6-t, All Layers on GPU | 6.8s (11ms/T) | 12.0s (60ms/T) | 18.7s (10.7T/s) | 1x
| Robin 7b q6_K |ROCM 1-t, All Layers on GPU | 1.4s (2ms/T) | 5.5s (28ms/T) | 6.9s (29.1T/s)| **2.71x**
| Robin 13b q5_K_M |CLBLAST 6-t, All Layers on GPU | 10.9s (18ms/T) | 16.7s (83ms/T) | 27.6s (7.3T/s) | 1x
| Robin 13b q5_K_M |ROCM 1-t, All Layers on GPU | 2.4s (4ms/T) | 7.8s (39ms/T) | 10.2s (19.6T/s)| **2.63x**
| Robin 33b q4_K_S |CLBLAST 6-t, 46/63 Layers on GPU | 23.2s (39ms/T) | 48.6s (243ms/T) | 71.9s (2.8T/s) | 1x
| Robin 33b q4_K_S |CLBLAST 6-t, 50/63 Layers on GPU | 25.5s (43ms/T) | 44.6s (223ms/T) | 70.0s (2.9T/s) | 1x
| Robin 33b q4_K_S |ROCM 6-t, 46/63 Layers on GPU | 14.6s (25ms/T) | 44.1s (221ms/T) | 58.7s (3.4T/s)| **1.19x**

--------
A self contained distributable from Concedo that exposes llama.cpp function bindings, allowing it to be used via a simulated Kobold API endpoint.

What does it mean? You get llama.cpp with a fancy UI, persistent stories, editing tools, save formats, memory, world info, author's note, characters, scenarios and everything Kobold and Kobold Lite have to offer. In a tiny package around 20 MB in size, excluding model weights.
47 changes: 42 additions & 5 deletions convert.py
@@ -136,7 +136,7 @@ def find_n_mult(n_ff: int, n_embd: int) -> int:
calc_ff = (((8*n_embd) // 3 + n_mult - 1) // n_mult)*n_mult
if calc_ff == n_ff:
return n_mult
return 1
raise Exception(f"failed to find n_mult for (n_ff={n_ff}, n_embd={n_embd}).")

@dataclass
class Params:
@@ -154,9 +154,15 @@ def guessed(model: 'LazyModel') -> 'Params':
# try transformer naming first
if "model.layers.0.self_attn.q_proj.weight" in model:
n_layer=next(i for i in itertools.count() if f"model.layers.{i}.self_attn.q_proj.weight" not in model)
elif "model.layers.0.self_attn.W_pack.weight" in model: # next: try baichuan naming
n_layer=next(i for i in itertools.count() if f"model.layers.{i}.self_attn.W_pack.weight" not in model)
else:
n_layer=next(i for i in itertools.count() if f"layers.{i}.attention.wq.weight" not in model)

if n_layer < 1:
raise Exception("failed to guess 'n_layer'. This model is unknown or unsupported.\n"
"Suggestion: provide 'config.json' of the model in the same directory containing model files.")

n_head=n_embd // 128 # guessed

return Params(
@@ -321,6 +327,10 @@ def astype(self, data_type: DataType) -> 'Tensor': ...
@abstractmethod
def permute(self, n_head: int) -> 'Tensor': ...
@abstractmethod
def permute_part(self, n_part: int, n_head: int) -> 'UnquantizedTensor': ...
@abstractmethod
def part(self, n_part: int) -> 'UnquantizedTensor': ...
@abstractmethod
def to_ggml(self) -> 'GGMLCompatibleTensor': ...


@@ -345,6 +355,14 @@ def astype(self, data_type: DataType) -> Tensor:
def to_ggml(self) -> 'UnquantizedTensor':
return self

def permute_part(self, n_part: int, n_head: int) -> 'UnquantizedTensor':
r = self.ndarray.shape[0] // 3
return UnquantizedTensor(permute(self.ndarray[r * n_part : r * n_part + r, ...], n_head))

def part(self, n_part: int) -> 'UnquantizedTensor':
r = self.ndarray.shape[0] // 3
return UnquantizedTensor(self.ndarray[r * n_part : r * n_part + r, ...])

def permute(self, n_head: int) -> 'UnquantizedTensor':
return UnquantizedTensor(permute(self.ndarray, n_head))

@@ -642,6 +660,19 @@ def load() -> Tensor:
return lazy_tensor.load().permute(n_head)
return LazyTensor(load, lazy_tensor.shape, lazy_tensor.data_type, f'permute({n_head}) ' + lazy_tensor.description)

def permute_part_lazy(lazy_tensor: LazyTensor, n_part: int, n_head: int) -> LazyTensor:
def load() -> Tensor:
return lazy_tensor.load().permute_part(n_part, n_head)
s = lazy_tensor.shape.copy()
s[0] = s[0] // 3
return LazyTensor(load, s, lazy_tensor.data_type, f'permute({n_head}) ' + lazy_tensor.description)

def part_lazy(lazy_tensor: LazyTensor, n_part: int) -> LazyTensor:
def load() -> Tensor:
return lazy_tensor.load().part(n_part)
s = lazy_tensor.shape.copy()
s[0] = s[0] // 3
return LazyTensor(load, s, lazy_tensor.data_type, 'part ' + lazy_tensor.description)

def convert_transformers_to_orig(model: LazyModel, params: Params) -> LazyModel:
out: LazyModel = {}
@@ -650,11 +681,17 @@ def convert_transformers_to_orig(model: LazyModel, params: Params) -> LazyModel:
out["output.weight"] = model["lm_head.weight"]

for i in itertools.count():
if f"model.layers.{i}.self_attn.q_proj.weight" not in model:
if f"model.layers.{i}.self_attn.q_proj.weight" in model:
out[f"layers.{i}.attention.wq.weight"] = permute_lazy(model[f"model.layers.{i}.self_attn.q_proj.weight"], params.n_head)
out[f"layers.{i}.attention.wk.weight"] = permute_lazy(model[f"model.layers.{i}.self_attn.k_proj.weight"], params.n_head)
out[f"layers.{i}.attention.wv.weight"] = model[f"model.layers.{i}.self_attn.v_proj.weight"]
elif f"model.layers.{i}.self_attn.W_pack.weight" in model:
out[f"layers.{i}.attention.wq.weight"] = permute_part_lazy(model[f"model.layers.{i}.self_attn.W_pack.weight"], 0, params.n_head)
out[f"layers.{i}.attention.wk.weight"] = permute_part_lazy(model[f"model.layers.{i}.self_attn.W_pack.weight"], 1, params.n_head)
out[f"layers.{i}.attention.wv.weight"] = part_lazy(model[f"model.layers.{i}.self_attn.W_pack.weight"], 2)
else:
break
out[f"layers.{i}.attention.wq.weight"] = permute_lazy(model[f"model.layers.{i}.self_attn.q_proj.weight"], params.n_head)
out[f"layers.{i}.attention.wk.weight"] = permute_lazy(model[f"model.layers.{i}.self_attn.k_proj.weight"], params.n_head)
out[f"layers.{i}.attention.wv.weight"] = model[f"model.layers.{i}.self_attn.v_proj.weight"]

out[f"layers.{i}.attention.wo.weight"] = model[f"model.layers.{i}.self_attn.o_proj.weight"]

out[f"layers.{i}.feed_forward.w1.weight"] = model[f"model.layers.{i}.mlp.gate_proj.weight"]
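
For reference, a hedged sketch of running the converter on a Baichuan-style checkpoint (the directory name is a placeholder); the W_pack handling added above is what splits the packed QKV weight into the separate wq/wk/wv tensors during this conversion:

```sh
# Hypothetical path; convert.py reads the checkpoint directory and writes a GGML-format model file.
python convert.py models/baichuan-7b/
```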
2 changes: 1 addition & 1 deletion examples/alpaca.sh
@@ -7,7 +7,7 @@
cd `dirname $0`
cd ..

./main -m ./models/ggml-alpaca-7b-q4.bin \
./main -m ./models/alpaca.13b.ggmlv3.q8_0.bin \
--color \
-f ./prompts/alpaca.txt \
--ctx_size 2048 \
3 changes: 2 additions & 1 deletion examples/common.h
@@ -31,7 +31,7 @@ struct gpt_params {
int32_t n_gpu_layers = 0; // number of layers to store in VRAM
int32_t main_gpu = 0; // the GPU that is used for scratch and small tensors
float tensor_split[LLAMA_MAX_DEVICES] = {0}; // how split tensors should be distributed across GPUs
bool low_vram = 0; // if true, reduce VRAM usage at the cost of performance
int32_t n_probs = 0; // if greater than 0, output the probabilities of top n_probs tokens.

// sampling parameters
std::unordered_map<llama_token, float> logit_bias; // logit bias for specific tokens
@@ -59,6 +59,7 @@ struct gpt_params {
std::string lora_adapter = ""; // lora adapter path
std::string lora_base = ""; // base model path for the lora adapter

bool low_vram = false; // if true, reduce VRAM usage at the cost of performance
bool memory_f16 = true; // use f16 instead of f32 for memory kv
bool random_prompt = false; // do not randomize prompt if none provided
bool use_color = false; // use color to distinguish generations and inputs
11 changes: 7 additions & 4 deletions examples/embd-input/embd-input-lib.cpp
@@ -29,7 +29,7 @@ struct MyModel* create_mymodel(int argc, char ** argv) {

fprintf(stderr, "%s: build = %d (%s)\n", __func__, BUILD_NUMBER, BUILD_COMMIT);

if (params.seed < 0) {
if (params.seed == LLAMA_DEFAULT_SEED) {
params.seed = time(NULL);
}
fprintf(stderr, "%s: seed = %d\n", __func__, params.seed);
@@ -210,9 +210,12 @@ llama_token sampling_id(struct MyModel* mymodel) {
const char * sampling(struct MyModel * mymodel) {
llama_context * ctx = mymodel->ctx;
int id = sampling_id(mymodel);
std::string ret;
if (id == llama_token_eos()) ret = "</s>";
else ret = llama_token_to_str(ctx, id);
static std::string ret;
if (id == llama_token_eos()) {
ret = "</s>";
} else {
ret = llama_token_to_str(ctx, id);
}
eval_id(mymodel, id);
return ret.c_str();
}
4 changes: 1 addition & 3 deletions examples/embd-input/embd-input.h
@@ -5,7 +5,6 @@
#include "llama.h"
#include "build-info.h"


extern "C" {

typedef struct MyModel {
@@ -14,14 +13,13 @@ typedef struct MyModel {
int n_past = 0;
} MyModel;


struct MyModel* create_mymodel(int argc, char ** argv);

bool eval_float(void* model, float* input, int N);
bool eval_tokens(void* model, std::vector<llama_token> tokens);
bool eval_id(struct MyModel* mymodel, int id);
bool eval_string(struct MyModel* mymodel, const char* str);
const char* sampling(struct MyModel* mymodel);
const char * sampling(struct MyModel* mymodel);
llama_token sampling_id(struct MyModel* mymodel);
void free_mymodel(struct MyModel* mymodel);

2 changes: 1 addition & 1 deletion examples/embedding/embedding.cpp
@@ -18,7 +18,7 @@ int main(int argc, char ** argv) {
params.embedding = true;

if (params.n_ctx > 2048) {
fprintf(stderr, "%s: warning: model does not support context sizes greater than 2048 tokens (%d specified);"
fprintf(stderr, "%s: warning: model might not support context sizes greater than 2048 tokens (%d specified);"
"expect poor results\n", __func__, params.n_ctx);
}

2 changes: 1 addition & 1 deletion examples/main/main.cpp
@@ -85,7 +85,7 @@ int main(int argc, char ** argv) {
}

if (params.n_ctx > 2048) {
fprintf(stderr, "%s: warning: model does not support context sizes greater than 2048 tokens (%d specified);"
fprintf(stderr, "%s: warning: model might not support context sizes greater than 2048 tokens (%d specified);"
"expect poor results\n", __func__, params.n_ctx);
} else if (params.n_ctx < 8) {
fprintf(stderr, "%s: warning: minimum context size is 8, using minimum size.\n", __func__);
2 changes: 1 addition & 1 deletion examples/perplexity/perplexity.cpp
@@ -130,7 +130,7 @@ int main(int argc, char ** argv) {
params.n_batch = std::min(params.n_batch, params.n_ctx);

if (params.n_ctx > 2048) {
fprintf(stderr, "%s: warning: model does not support context sizes greater than 2048 tokens (%d specified);"
fprintf(stderr, "%s: warning: model might not support context sizes greater than 2048 tokens (%d specified);"
"expect poor results\n", __func__, params.n_ctx);
}

14 changes: 7 additions & 7 deletions examples/quantize-stats/quantize-stats.cpp
@@ -147,7 +147,7 @@ void test_roundtrip_on_chunk(
const ggml_tensor * layer,
int64_t offset,
int64_t chunk_size,
const quantize_fns_t & qfns,
const ggml_type_traits_t & qfns,
bool use_reference,
float * input_scratch,
char * quantized_scratch,
@@ -163,11 +163,11 @@ void test_roundtrip_on_chunk(
}

if (use_reference) {
qfns.quantize_row_q_reference(input_scratch, quantized_scratch, chunk_size);
qfns.from_float_reference(input_scratch, quantized_scratch, chunk_size);
} else {
qfns.quantize_row_q(input_scratch, quantized_scratch, chunk_size);
qfns.from_float(input_scratch, quantized_scratch, chunk_size);
}
qfns.dequantize_row_q(quantized_scratch, output_scratch, chunk_size);
qfns.to_float(quantized_scratch, output_scratch, chunk_size);

update_error_stats(chunk_size, input_scratch, output_scratch, stats);
}
@@ -177,7 +177,7 @@ void test_roundtrip_on_chunk(
void test_roundtrip_on_layer(
std::string & name,
bool print_layer_stats,
const quantize_fns_t & qfns,
const ggml_type_traits_t & qfns,
bool use_reference,
const ggml_tensor * layer,
std::vector<float> & input_scratch,
@@ -388,8 +388,8 @@ int main(int argc, char ** argv) {
if (!params.include_types.empty() && std::find(params.include_types.begin(), params.include_types.end(), i) == params.include_types.end()) {
continue;
}
quantize_fns_t qfns = ggml_internal_get_quantize_fn(i);
if (qfns.quantize_row_q && qfns.dequantize_row_q) {
ggml_type_traits_t qfns = ggml_internal_get_type_traits(type);
if (qfns.from_float && qfns.to_float) {
if (params.verbose) {
printf("testing %s ...\n", ggml_type_name(type));
}