feat: run ollama & llama3
luoluoter committed Apr 19, 2024
1 parent 49636a7 commit f1e5788
Showing 5 changed files with 21 additions and 2 deletions.
2 changes: 2 additions & 0 deletions docs/examples.md
@@ -7,6 +7,8 @@ All examples that can be run:
| Example | Type | Model Size | Image Size | Command | Device |
| ------------------------------------------------ | ------------------------ | ---------- | ---------- | -------------------------------------------- | -------- |
| text-generation-webui | Text (LLM) | 3.9GB | 14.8GB | `reComputer run text-generation-webui` | |
| llama3 | Text (LLM) | 4.9GB | 10.5GB | `reComputer run llama3` | |
| [ollama](https://github.com/ollama/ollama) | Inference Server | * | 10.5GB | `reComputer run ollama` | |
| LLaMA | Text (LLM) | 1.5GB | 10.5GB | `reComputer run Sheared-LLaMA-2.7B-ShareGPT` | |
| llava-v1.5 | Text + Vision (VLM) | 13GB | 14.4GB | `reComputer run llava-v1.5-7b` | |
| llava-v1.6 | Text + Vision (VLM) | 13GB | 20.3GB | `reComputer run llava-v1.6-vicuna-7b` | |
2 changes: 1 addition & 1 deletion pyproject.toml
@@ -4,7 +4,7 @@ build-backend = "setuptools.build_meta"

[project]
name = "jetson-examples"
version = "0.0.5"
version = "0.0.6"
authors = [{ name = "luozhixin", email = "[email protected]" }]
description = "Running Gen AI models and applications on NVIDIA Jetson devices with one-line command"
readme = "README.md"
2 changes: 1 addition & 1 deletion reComputer/__init__.py
@@ -1 +1 @@
__version__ = "0.0.5"
__version__ = "0.0.6"
10 changes: 10 additions & 0 deletions reComputer/scripts/llama3/run.sh
@@ -0,0 +1,10 @@
#!/bin/bash

# Stop any old Ollama server container that may still be running
docker rm -f ollama
# Start a new Ollama server container in the background
./run.sh -d --name ollama $(./autotag ollama)
# Run an interactive llama3 client against the server
./run.sh $(./autotag ollama) /bin/ollama run llama3
# Clean up the server container once the client exits
docker rm -f ollama
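The client step above drops into an interactive chat. As a hedged sketch (assuming the same `./run.sh` and `./autotag` helpers from jetson-containers are available in the working directory), a one-off prompt could also be passed straight to `ollama run`; the prompt text is only an illustration:

# Hypothetical non-interactive variant: pass a single prompt as an argument
./run.sh $(./autotag ollama) /bin/ollama run llama3 "Why is the sky blue?"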
7 changes: 7 additions & 0 deletions reComputer/scripts/ollama/run.sh
@@ -0,0 +1,7 @@
#!/bin/bash

# Stop any old Ollama server container that may still be running
docker rm -f ollama
# Run the Ollama server (the front end users interact with)
./run.sh $(./autotag ollama)
# Users can only reach the server over HTTP at http://ip:11434
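As a hedged sketch of what that access looks like (replace ip with the Jetson's address; the model name and prompt are placeholders and assume llama3 has already been pulled), a client on the same network could call the standard Ollama REST endpoint:

# Hypothetical client request against the Ollama HTTP API on port 11434
curl http://ip:11434/api/generate -d '{"model": "llama3", "prompt": "Why is the sky blue?", "stream": false}'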
