
Tags: edgenai/llama_cpp-rs

llama_cpp-v0.1.3

### New Features

 - more `async` function variants
 - add `LlamaSession.model`
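
The `LlamaSession.model` accessor can be pictured with a small self-contained sketch. The type definitions, the `name` field, and the `Arc`-based sharing below are illustrative assumptions, not the crate's actual internals:

```rust
use std::sync::Arc;

// Hypothetical stand-ins for the crate's types: a session keeps a shared
// handle to the model it was created from, and `model()` hands out a cheap
// reference-counted clone rather than copying model weights.
struct LlamaModel {
    name: String,
}

struct LlamaSession {
    model: Arc<LlamaModel>,
}

impl LlamaSession {
    // The new accessor: returns the model backing this session.
    fn model(&self) -> Arc<LlamaModel> {
        Arc::clone(&self.model)
    }
}

fn main() {
    let model = Arc::new(LlamaModel { name: "llama-7b".into() });
    let session = LlamaSession { model };
    let handle = session.model();
    println!("{}", handle.name);
}
```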

### Other

 - typo

### Commit Statistics

 - 5 commits contributed to the release.
 - 3 commits were understood as [conventional](https://www.conventionalcommits.org).
 - 0 issues like '(#ID)' were seen in commit messages.

### Commit Details

 * **Uncategorized**
    - Typo (0a0d5f3)
    - Release llama_cpp v0.1.2 (4d0b130)
    - More `async` function variants (1019402)
    - Add `LlamaSession.model` (c190df6)
    - Release llama_cpp_sys v0.2.1, llama_cpp v0.1.1 (a9e5813)

llama_cpp-v0.1.2

### New Features

 - more `async` function variants
 - add `LlamaSession.model`

### Commit Statistics

 - 2 commits contributed to the release.
 - 2 commits were understood as [conventional](https://www.conventionalcommits.org).
 - 0 issues like '(#ID)' were seen in commit messages.

### Commit Details

 * **Uncategorized**
    - More `async` function variants (dcfccdf)
    - Add `LlamaSession.model` (56285a1)

llama_cpp-v0.1.1

### Chore

 - Remove debug binary from Cargo.toml

### New Features

 - add `LlamaModel::load_from_file_async`
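
The shape of an async loader can be sketched without the crate. The real `load_from_file_async` is an `async fn` that integrates with an executor; this std-only sketch uses a background thread to stand in for that, and the stubbed loader and `model.gguf` path are assumptions:

```rust
use std::thread;

// Hypothetical sketch of the pattern behind `load_from_file_async`:
// the blocking file load is moved off the caller's thread.
struct LlamaModel {
    path: String,
}

impl LlamaModel {
    fn load_from_file(path: &str) -> LlamaModel {
        // The real crate parses a model file here; stubbed out for the sketch.
        LlamaModel { path: path.to_string() }
    }

    // Offload the blocking load; the caller can join (or, in the real API,
    // `.await`) when the model is needed.
    fn load_from_file_async(path: String) -> thread::JoinHandle<LlamaModel> {
        thread::spawn(move || LlamaModel::load_from_file(&path))
    }
}

fn main() {
    let handle = LlamaModel::load_from_file_async("model.gguf".into());
    let model = handle.join().unwrap();
    println!("loaded {}", model.path);
}
```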

### Bug Fixes

 - require `llama_context` is accessed from behind a mutex
   This solves a race condition when several `get_completions` threads are spawned at the same time.
 - `start_completing` should not be invoked on a per-iteration basis
   There's still some UB that can be triggered due to llama.cpp's threading model, which needs patching up.
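
The mutex fix follows the standard `Arc<Mutex<...>>` pattern. The context struct and counter below are stand-ins (the real `llama_context` is a raw FFI type), but the locking discipline is the same: every completion thread must acquire the lock before touching the context, which serializes access and removes the race:

```rust
use std::sync::{Arc, Mutex};
use std::thread;

// Hypothetical stand-in for the raw llama_context owned by a session.
struct LlamaContext {
    tokens_generated: usize,
}

fn get_completions(ctx: &Arc<Mutex<LlamaContext>>) {
    // Lock before mutating; concurrent callers now serialize here
    // instead of racing on the underlying context.
    let mut guard = ctx.lock().unwrap();
    guard.tokens_generated += 1;
}

fn main() {
    let ctx = Arc::new(Mutex::new(LlamaContext { tokens_generated: 0 }));
    let handles: Vec<_> = (0..8)
        .map(|_| {
            let ctx = Arc::clone(&ctx);
            thread::spawn(move || get_completions(&ctx))
        })
        .collect();
    for h in handles {
        h.join().unwrap();
    }
    assert_eq!(ctx.lock().unwrap().tokens_generated, 8);
}
```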

### Commit Statistics

 - 5 commits contributed to the release.
 - 13 days passed between releases.
 - 4 commits were understood as [conventional](https://www.conventionalcommits.org).
 - 0 issues like '(#ID)' were seen in commit messages.

### Commit Details

 * **Uncategorized**
    - Add `LlamaModel::load_from_file_async` (3bada65)
    - Remove debug binary from Cargo.toml (3eddbab)
    - Require `llama_context` is accessed from behind a mutex (b676baa)
    - `start_completing` should not be invoked on a per-iteration basis (4eb0bc9)
    - Update to llama.cpp 0a7c980 (94d7385)

llama_cpp_sys-v0.2.2

### Bug Fixes

 - do not rerun build on changed header files
   This restores the previous behavior, which was lost in the latest upgrade to `bindgen`; the new version enables rerunning the build whenever a header file changes.
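
One way to keep rebuild triggers explicit is for the build script to emit its own directive, so Cargo watches only the named file rather than every discovered header. This is a hedged sketch, and `wrapper.h` is an assumed filename, not necessarily the one the crate uses:

```rust
// Hypothetical build.rs sketch: emit an explicit `rerun-if-changed`
// directive so Cargo rebuilds only when the named file changes.
fn rerun_directive(path: &str) -> String {
    format!("cargo:rerun-if-changed={}", path)
}

fn main() {
    // Cargo reads directives like this from build-script stdout.
    println!("{}", rerun_directive("wrapper.h"));
}
```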

### Commit Statistics

 - 2 commits contributed to the release.
 - 1 commit was understood as [conventional](https://www.conventionalcommits.org).
 - 0 issues like '(#ID)' were seen in commit messages.

### Commit Details

 * **Uncategorized**
    - Do not rerun build on changed header files (674f395)
    - Release llama_cpp_sys v0.2.1, llama_cpp v0.1.1 (a9e5813)

llama_cpp_sys-v0.2.1

### Chore

 - Update to `bindgen` 0.69.1

### Bug Fixes

 - `start_completing` should not be invoked on a per-iteration basis
   There's still some UB that can be triggered due to llama.cpp's threading model, which needs patching up.

### Commit Statistics

 - 2 commits contributed to the release.
 - 13 days passed between releases.
 - 2 commits were understood as [conventional](https://www.conventionalcommits.org).
 - 0 issues like '(#ID)' were seen in commit messages.

### Commit Details

 * **Uncategorized**
    - Update to `bindgen` 0.69.1 (ccb794d)
    - `start_completing` should not be invoked on a per-iteration basis (4eb0bc9)

llama_cpp-v0.1.0

### Chore

 - remove `include` from llama_cpp
 - Release
 - latest fixes from upstream
 - add CHANGELOG.md

### Bug Fixes

 - use SPDX license identifiers

### Other

 - configure for `cargo-release`

### Commit Statistics

 - 8 commits contributed to the release over the course of 5 calendar days.
 - 6 commits were understood as [conventional](https://www.conventionalcommits.org).
 - 1 unique issue was worked on: #3

### Commit Details

 * **#3**
    - Release (116fe8c)
 * **Uncategorized**
    - Add CHANGELOG.md (aa5eed4)
    - Remove `include` from llama_cpp (702a6ff)
    - Use SPDX license identifiers (2cb06ae)
    - Release llama_cpp_sys v0.2.0 (d1868ac)
    - Latest fixes from upstream (96548c8)
    - Configure for `cargo-release` (a5fb194)
    - Initial commit (6f672ff)

llama_cpp_sys-v0.2.0

### Chore

 - Release
 - latest fixes from upstream

### Bug Fixes

 - set clang to use c++ stl
 - use SPDX license identifiers

### Other

 - use `link-cplusplus`, enable build+test on all branches
   * ci: disable static linking of llama.o
   * ci: build+test on all branches/prs
   * ci: use `link-cplusplus`
 - configure for `cargo-release`
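
For context, `link-cplusplus` is typically adopted as a plain dependency: its build script emits the correct C++ standard library link flag (libstdc++ or libc++) for the target toolchain, replacing hand-written linker arguments. The exact version below is an assumption, not taken from the crate's manifest:

```toml
# Hypothetical Cargo.toml fragment for a -sys crate that links C++ code.
[dependencies]
link-cplusplus = "1"
```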

### Commit Statistics

 - 10 commits contributed to the release over the course of 5 calendar days.
 - 6 commits were understood as [conventional](https://www.conventionalcommits.org).
 - 3 unique issues were worked on: #1, #2, #3

### Commit Details

 * **#1**
    - Use `link-cplusplus`, enable build+test on all branches (2d14d8d)
 * **#2**
    - Prepare for publishing to crates.io (f35e282)
 * **#3**
    - Release (116fe8c)
 * **Uncategorized**
    - Use SPDX license identifiers (2cb06ae)
    - Release llama_cpp_sys v0.2.0 (85f21a1)
    - Add CHANGELOG.md (0e836f5)
    - Set clang to use c++ stl (b9cde4a)
    - Latest fixes from upstream (96548c8)
    - Configure for `cargo-release` (a5fb194)
    - Initial commit (6f672ff)