
sync : llama.cpp #742

Merged
merged 9 commits on Feb 21, 2024
Commits on Feb 21, 2024

  1. cuda : ignore peer access already enabled errors (llama/5597)

    * cuda : ignore peer access already enabled errors
    
    * fix hip
slaren authored and ggerganov committed Feb 21, 2024 · bda616d
  2. Allow for Vulkan build with Accelerate.

    Closes #5304
dokterbob authored and ggerganov committed Feb 21, 2024 · 9d2b322
  3. c462eb0
  4. a1d40bf
  5. 5ec3b91
  6. Refactor validation and enumeration platform checks into functions to clean up ggml_vk_instance_init()

    0cc4m authored and ggerganov committed Feb 21, 2024 · f3cb240
  7. Update ggml_sycl_op_mul_mat_vec_q (llama/5502)

    * Update ggml_sycl_op_mul_mat_vec_q
    
    * Apply suggestions from code review
    
    Co-authored-by: Abhilash Majumder <[email protected]>
    
    * revert suggestion on macro
    
    * fix bug
    
    * Add quant type GGML_TYPE_IQ1_S to unsupported
    
    * fix format
    
    ---------
    
    Co-authored-by: Abhilash Majumder <[email protected]>
    2 people authored and ggerganov committed Feb 21, 2024 · e221158
  8. context add name (llama/5624)

    * [SYCL] context: add name
    
    * name should start with SYCL*
    airMeng authored and ggerganov committed Feb 21, 2024 · 6e6e573
  9. sync : llama.cpp (#0)

    ggml-ci
    ggerganov committed Feb 21, 2024 · f956464