Release/0.4.14 to main (#2948)
* Fix: Phi-3 doesn't display (#2928)

* fix: params correction

* add phi

* version bump

* Feat/oai endpoints mapper using JSON (#2929)

* feat: test mapper

* chore: temporarily add other runner

* chore: temp remove ubuntu-18-04-openai-api-collection-test

* chore: test json file

* chore: test json file

* Correct path endpoints_mapping.json

* feat: running via endpoints

* feat: running multiple endpoint

* feat: use endpoint value from workflow dispatch

* feat: add mapper between endpoint and python test file

* feat: config run all

* feat: config run all

---------

Co-authored-by: Van-QA <[email protected]>
Co-authored-by: Hien To <[email protected]>

* fix: fix crash when model name not ready (#2931)

Signed-off-by: James <[email protected]>
Co-authored-by: James <[email protected]>

* Replace deprecated steps github action (#2935)

Co-authored-by: Hien To <[email protected]>

* Chore: phi3 long-context update (#2936)

* init

* init

* fix: correct version

* version bump

* correct url

* remove small

* correct size

* Bump cortex to 0.4.8 (#2938)

* Update README.md (#2927)

Updated:

Title: Turn your computer into an AI computer
Nvidia -> NVIDIA
M1/M2 -> M1/M2/M3/M4

* docs: Update README.md (#2939)

* Bump cortex to 0.4.9 (#2940)

* Code sign retry 3 times (#2943)

* Replace deprecated steps github action

* Windows codesign retry 3 times

---------

Co-authored-by: Hien To <[email protected]>

* Chore: aya update (#2941)

* init

* init

* fix: correct format

* version bump

* add: aya 8b, aya 35b, phi3

* fix: stop token

* fix: stop token

* fix: keep title and last message unchanged when cleaning or deleting a message (#2937)

* Update package.json to use [email protected] (#2949)

---------

Signed-off-by: James <[email protected]>
Co-authored-by: Hoang Ha <[email protected]>
Co-authored-by: Van-QA <[email protected]>
Co-authored-by: Hien To <[email protected]>
Co-authored-by: NamH <[email protected]>
Co-authored-by: James <[email protected]>
Co-authored-by: hiento09 <[email protected]>
Co-authored-by: eckartal <[email protected]>
Co-authored-by: Mohammed Aldakhil <[email protected]>
Co-authored-by: Faisal Amir <[email protected]>
10 people committed May 27, 2024
1 parent ac33358 commit fb6b833
Showing 18 changed files with 369 additions and 58 deletions.
7 changes: 3 additions & 4 deletions .github/workflows/jan-electron-build.yml
@@ -25,12 +25,11 @@ jobs:
          GITHUB_REF: ${{ github.ref }}
      - name: Create Draft Release
        id: create_release
        uses: actions/create-release@v1
        env:
          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
        uses: softprops/action-gh-release@v2
        with:
          tag_name: ${{ github.ref_name }}
          release_name: "${{ env.VERSION }}"
          token: ${{ secrets.GITHUB_TOKEN }}
          name: "${{ env.VERSION }}"
          draft: true
          prerelease: false

21 changes: 16 additions & 5 deletions .github/workflows/jan-openai-api-test.yml
@@ -1,6 +1,13 @@
name: Test - OpenAI API Pytest collection
on:
  workflow_dispatch:
    inputs:
      endpoints:
        description: 'comma-separated list (see available at endpoints_mapping.json e.g. GET /users,POST /transform)'
        required: false
        default: all
        type: string

  push:
    branches:
      - main
@@ -38,11 +45,11 @@ jobs:
          rm -rf ~/jan
          make clean
      - name: install dependencies
      - name: Install dependencies
        run: |
          npm install -g @stoplight/prism-cli
      - name: create python virtual environment and run test
      - name: Create python virtual environment and run test
        run: |
          python3 -m venv /tmp/jan
          source /tmp/jan/bin/activate
@@ -65,10 +72,14 @@ jobs:
          # Append to conftest.py
          cat ../docs/tests/conftest.py >> tests/conftest.py
          cat ../docs/tests/endpoints_mapping.json >> tests/endpoints_mapping.json
          # start mock server and run test then stop mock server
          prism mock ../docs/openapi/jan.yaml > prism.log & prism_pid=$! && pytest --reportportal --html=report.html && kill $prism_pid
          prism mock ../docs/openapi/jan.yaml > prism.log & prism_pid=$! &&
          pytest --endpoint "$ENDPOINTS" --reportportal --html=report.html && kill $prism_pid
          deactivate
        env:
          ENDPOINTS: ${{ github.event.inputs.endpoints }}

      - name: Upload Artifact
        uses: actions/upload-artifact@v2
@@ -79,7 +90,7 @@ jobs:
            openai-python/assets
            openai-python/prism.log
      - name: clean up
      - name: Clean up
        if: always()
        run: |
          rm -rf /tmp/jan
11 changes: 6 additions & 5 deletions README.md
@@ -1,4 +1,4 @@
# Jan - Bring AI to your Desktop
# Jan - Turn your computer into an AI computer

![Jan banner](https://github.com/janhq/jan/assets/89722390/35daac7d-b895-487c-a6ac-6663daaad78e)

@@ -19,13 +19,14 @@
- <a href="https://discord.gg/AsJ8krTT3N">Discord</a>
</p>

> ⚠️ **Jan is currently in Development**: Expect breaking changes and bugs!
>[!Warning]
>**Jan is currently in Development**: Expect breaking changes and bugs!
Jan is an open-source ChatGPT alternative that runs 100% offline on your computer.

**Jan runs on any hardware.** From PCs to multi-GPU clusters, Jan supports universal architectures:

- [x] Nvidia GPUs (fast)
- [x] NVIDIA GPUs (fast)
- [x] Apple M-series (fast)
- [x] Apple Intel
- [x] Linux Debian
@@ -57,7 +58,7 @@ Jan is an open-source ChatGPT alternative that runs 100% offline on your compute
<td style="text-align:center">
<a href='https://app.jan.ai/download/latest/mac-arm64'>
<img src='https://github.com/janhq/docs/blob/main/static/img/mac.png' style="height:15px; width: 15px" />
<b>M1/M2</b>
<b>M1/M2/M3/M4</b>
</a>
</td>
<td style="text-align:center">
@@ -90,7 +91,7 @@ Jan is an open-source ChatGPT alternative that runs 100% offline on your compute
<td style="text-align:center">
<a href='https://app.jan.ai/download/nightly/mac-arm64'>
<img src='https://github.com/janhq/docs/blob/main/static/img/mac.png' style="height:15px; width: 15px" />
<b>M1/M2</b>
<b>M1/M2/M3/M4</b>
</a>
</td>
<td style="text-align:center">
42 changes: 38 additions & 4 deletions docs/tests/conftest.py
@@ -1,6 +1,40 @@
import json

import pytest


def pytest_addoption(parser):
    parser.addoption(
        "--endpoint", action="store", default="all", help="my option: endpoints"
    )


def pytest_configure(config):
    config.addinivalue_line(
        "markers", "endpoint(endpoint): this mark selects the test based on endpoint"
    )


def pytest_runtest_setup(item):
    getoption = item.config.getoption("--endpoint").split(",")
    if getoption not in (["all"], [""]):
        endpoint_names = [mark.args[0] for mark in item.iter_markers(name="endpoint")]
        if not endpoint_names or not set(getoption).intersection(set(endpoint_names)):
            pytest.skip("Test skipped because endpoint is {!r}".format(endpoint_names))


def pytest_collection_modifyitems(items):
    # load the JSON file
    with open("tests/endpoints_mapping.json", "r") as json_file:
        endpoints_file_mapping = json.load(json_file)

    # create a dictionary to map filenames to endpoints
    filename_to_endpoint = {}
    for endpoint, files in endpoints_file_mapping.items():
        for filename in files:
            filename_to_endpoint[filename] = endpoint

    # add the markers based on the JSON file
    for item in items:
        # add the name of the file (without extension) as a marker
        filename = item.nodeid.split("::")[0].split("/")[-1].replace(".py", "")
        marker = pytest.mark.file(filename)
        item.add_marker(marker)
        # map the name of the file to its endpoint, else fall back to the filename
        filename = item.fspath.basename
        marker = filename_to_endpoint.get(filename, filename)
        item.add_marker(pytest.mark.endpoint(marker, filename=filename))
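The selection logic in this conftest can be sketched in isolation. The following is a minimal, hypothetical reproduction (not part of the PR) of how test filenames map to endpoints and how the `--endpoint` value decides which tests run; the small mapping literal stands in for `endpoints_mapping.json`:

```python
# Hypothetical stand-in for a slice of endpoints_mapping.json
ENDPOINTS_MAPPING = {
    "/chat/completions": ["test_completions.py"],
    "/models": ["test_models.py"],
}


def filename_to_endpoint(mapping):
    """Invert {endpoint: [filenames]} into {filename: endpoint}."""
    inverted = {}
    for endpoint, files in mapping.items():
        for filename in files:
            inverted[filename] = endpoint
    return inverted


def should_run(filename, endpoint_option, mapping):
    """Mirror pytest_runtest_setup: run everything for 'all' or empty input,
    otherwise run only files whose endpoint is in the selection.
    Unmapped files fall back to their own filename as the marker."""
    selected = endpoint_option.split(",")
    if selected in (["all"], [""]):
        return True
    endpoint = filename_to_endpoint(mapping).get(filename, filename)
    return endpoint in selected
```

With this sketch, `should_run("test_models.py", "/models", ENDPOINTS_MAPPING)` selects the models tests, while `"/chat/completions"` skips them — the same behavior the workflow's `endpoints` input triggers through pytest markers.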
75 changes: 75 additions & 0 deletions docs/tests/endpoints_mapping.json
@@ -0,0 +1,75 @@
{
  "/embeddings": [
    "test_embedding.py"
  ],
  "/audio/translations": [
    "test_translations.py"
  ],
  "/audio/transcriptions": [
    "test_transcriptions.py"
  ],
  "/moderations": [
    "test_moderations.py"
  ],
  "/images/generations": [
    "test_images.py"
  ],
  "/batches": [
    "test_batches.py"
  ],
  "/vector_stores": [
    "test_vector_stores.py"
  ],
  "/fine_tuning/jobs": [
    "test_jobs.py",
    "test_checkpoints.py"
  ],
  "/assistants": [
    "test_assistants.py"
  ],
  "/threads/{thread_id}/runs": [
    "test_runs.py"
  ],
  "/threads/{thread_id}/runs/{run_id}/steps": [
    "test_steps.py"
  ],
  "/vector_stores/{vector_store_id}/file_batches": [
    "test_file_batches.py"
  ],
  "/messages": [
    "test_messages.py"
  ],
  "/vector_stores/{vector_store_id}/files": [
    "test_files.py"
  ],
  "/chat/completions": [
    "test_completions.py"
  ],
  "/threads": [
    "test_threads.py"
  ],
  "/audio/speech": [
    "test_speech.py"
  ],
  "/models": [
    "test_models.py"
  ],
  "native_client_sdk_only": [
    "test_streaming.py"
  ],
  "utils": [
    "test_response.py",
    "test_client.py",
    "test_extract_files.py",
    "test_typing.py",
    "test_legacy_response.py",
    "test_module_client.py",
    "test_old_api.py",
    "test_proxy.py",
    "test_qs.py",
    "test_required_args.py",
    "test_transform.py",
    "test_azure.py",
    "test_deepcopy.py"
  ]
}
63 changes: 42 additions & 21 deletions electron/sign.js
@@ -1,5 +1,28 @@
const { exec } = require('child_process')

function execCommandWithRetry(command, retries = 3) {
  return new Promise((resolve, reject) => {
    const execute = (attempt) => {
      exec(command, (error, stdout, stderr) => {
        if (error) {
          console.error(`Error: ${error}`)
          if (attempt < retries) {
            console.log(`Retrying... Attempt ${attempt + 1}`)
            execute(attempt + 1)
          } else {
            return reject(error)
          }
        } else {
          console.log(`stdout: ${stdout}`)
          console.error(`stderr: ${stderr}`)
          resolve()
        }
      })
    }
    execute(0)
  })
}

function sign({
  path,
  name,
@@ -13,16 +36,9 @@
}) {
  return new Promise((resolve, reject) => {
    const command = `azuresigntool.exe sign -kvu "${certUrl}" -kvi "${clientId}" -kvt "${tenantId}" -kvs "${clientSecret}" -kvc "${certName}" -tr "${timestampServer}" -v "${path}"`

    exec(command, (error, stdout, stderr) => {
      if (error) {
        console.error(`Error: ${error}`)
        return reject(error)
      }
      console.log(`stdout: ${stdout}`)
      console.error(`stderr: ${stderr}`)
      resolve()
    })
    execCommandWithRetry(command)
      .then(resolve)
      .catch(reject)
  })
}

@@ -34,15 +50,20 @@ exports.default = async function (options) {
  const certName = process.env.AZURE_CERT_NAME
  const timestampServer = 'http://timestamp.globalsign.com/tsa/r6advanced1'

  await sign({
    path: options.path,
    name: 'jan-win-x64',
    certUrl,
    clientId,
    tenantId,
    clientSecret,
    certName,
    timestampServer,
    version: options.version,
  })
  try {
    await sign({
      path: options.path,
      name: 'jan-win-x64',
      certUrl,
      clientId,
      tenantId,
      clientSecret,
      certName,
      timestampServer,
      version: options.version,
    })
  } catch (error) {
    console.error('Failed to sign after 3 attempts:', error)
    process.exit(1)
  }
}
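The retry pattern in `execCommandWithRetry` (one initial run plus up to three re-runs before the error propagates) can be sketched outside JavaScript as well. The following is a hypothetical Python analogue, not code from the PR; names like `run_with_retry` are illustrative:

```python
import subprocess


def run_with_retry(command, retries=3):
    """Hypothetical analogue of execCommandWithRetry in sign.js:
    one initial attempt plus up to `retries` re-runs before giving up."""
    last_error = None
    for attempt in range(retries + 1):
        try:
            # check=True raises CalledProcessError on a non-zero exit code
            return subprocess.run(command, shell=True, check=True,
                                  capture_output=True, text=True)
        except subprocess.CalledProcessError as err:
            last_error = err
            if attempt < retries:
                print(f"Retrying... Attempt {attempt + 1}")
    raise last_error
```

The design choice mirrored here is that transient failures (a flaky timestamp server, for instance) get a few chances, while a persistent failure still surfaces as an error the caller can act on — as `exports.default` does with `process.exit(1)`.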
2 changes: 1 addition & 1 deletion extensions/inference-nitro-extension/bin/version.txt
@@ -1 +1 @@
0.4.7
0.4.9
2 changes: 1 addition & 1 deletion extensions/inference-nitro-extension/package.json
@@ -1,7 +1,7 @@
{
"name": "@janhq/inference-cortex-extension",
"productName": "Cortex Inference Engine",
"version": "1.0.7",
"version": "1.0.10",
"description": "This extension embeds cortex.cpp, a lightweight inference engine written in C++. See https://nitro.jan.ai.\nAdditional dependencies could be installed to run without Cuda Toolkit installation.",
"main": "dist/index.js",
"node": "dist/node/index.cjs.js",
@@ -0,0 +1,35 @@
{
  "sources": [
    {
      "filename": "aya-23-35B-Q4_K_M.gguf",
      "url": "https://huggingface.co/bartowski/aya-23-35B-GGUF/resolve/main/aya-23-35B-Q4_K_M.gguf"
    }
  ],
  "id": "aya-23-35b",
  "object": "model",
  "name": "Aya 23 35B Q4",
  "version": "1.0",
  "description": "Aya 23 can talk fluently in up to 23 languages.",
  "format": "gguf",
  "settings": {
    "ctx_len": 8192,
    "prompt_template": "<|START_OF_TURN_TOKEN|><|SYSTEM_TOKEN|>{system_prompt}<|END_OF_TURN_TOKEN|><|START_OF_TURN_TOKEN|><|USER_TOKEN|>{prompt}<|END_OF_TURN_TOKEN|><|START_OF_TURN_TOKEN|><|CHATBOT_TOKEN|>",
    "llama_model_path": "aya-23-35B-Q4_K_M.gguf",
    "ngl": 40
  },
  "parameters": {
    "temperature": 0.7,
    "top_p": 0.95,
    "stream": true,
    "max_tokens": 8192,
    "frequency_penalty": 0,
    "presence_penalty": 0,
    "stop": ["<|END_OF_TURN_TOKEN|>"]
  },
  "metadata": {
    "author": "CohereForAI",
    "tags": ["34B", "Finetuned"],
    "size": 21556982144
  },
  "engine": "nitro"
}
@@ -0,0 +1,35 @@
{
  "sources": [
    {
      "filename": "aya-23-8B-Q4_K_M.gguf",
      "url": "https://huggingface.co/bartowski/aya-23-8B-GGUF/resolve/main/aya-23-8B-Q4_K_M.gguf"
    }
  ],
  "id": "aya-23-8b",
  "object": "model",
  "name": "Aya 23 8B Q4",
  "version": "1.0",
  "description": "Aya 23 can talk fluently in up to 23 languages.",
  "format": "gguf",
  "settings": {
    "ctx_len": 8192,
    "prompt_template": "<|START_OF_TURN_TOKEN|><|SYSTEM_TOKEN|>{system_prompt}<|END_OF_TURN_TOKEN|><|START_OF_TURN_TOKEN|><|USER_TOKEN|>{prompt}<|END_OF_TURN_TOKEN|><|START_OF_TURN_TOKEN|><|CHATBOT_TOKEN|>",
    "llama_model_path": "aya-23-8B-Q4_K_M.gguf",
    "ngl": 32
  },
  "parameters": {
    "temperature": 0.7,
    "top_p": 0.95,
    "stream": true,
    "max_tokens": 8192,
    "frequency_penalty": 0,
    "presence_penalty": 0,
    "stop": ["<|END_OF_TURN_TOKEN|>"]
  },
  "metadata": {
    "author": "CohereForAI",
    "tags": ["7B", "Finetuned", "Featured"],
    "size": 5056982144
  },
  "engine": "nitro"
}
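Model manifests like the two aya files above share a common shape. A small, hypothetical sanity check (illustrative only — the required-key set is an assumption drawn from these examples, not a Jan API) could be sketched as:

```python
import json

# Assumed minimal key set, taken from the aya model.json examples above
REQUIRED_KEYS = {"sources", "id", "settings", "parameters", "engine"}


def validate_model_manifest(text):
    """Parse a model.json payload and verify the assumed required keys exist."""
    manifest = json.loads(text)
    missing = REQUIRED_KEYS - manifest.keys()
    if missing:
        raise ValueError(f"missing keys: {sorted(missing)}")
    return manifest
```

Running this over a manifest catches a truncated or hand-edited file before the engine tries to load it.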