
v0.9.17

@PawelPeczek-Roboflow PawelPeczek-Roboflow released this 15 Mar 13:50
· 835 commits to main since this release
a5dc38a

🚀 Added

YOLOWorld - new versions and Roboflow hosted inference 🤯

The inference package now supports 5 new versions of the YOLOWorld model: x, v2-s, v2-m, v2-l, and v2-x. Versions with the v2 prefix have better performance than the previously published ones.

To use YOLOWorld in inference, use the following model_id: yolo_world/<version>, substituting <version> with one of [s, m, l, x, v2-s, v2-m, v2-l, v2-x].
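The model-ID rule above can be sketched as a small helper (the function name and version list check are illustrative, not part of the inference API):

```python
# Hypothetical helper, NOT part of the inference package: it only
# illustrates how YOLOWorld model IDs are composed from a version tag.
SUPPORTED_VERSIONS = ["s", "m", "l", "x", "v2-s", "v2-m", "v2-l", "v2-x"]

def yolo_world_model_id(version: str) -> str:
    # Reject anything outside the versions listed in the release notes.
    if version not in SUPPORTED_VERSIONS:
        raise ValueError(f"unsupported YOLOWorld version: {version}")
    return f"yolo_world/{version}"

print(yolo_world_model_id("v2-l"))  # yolo_world/v2-l
```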

You can use the models in different contexts:

Roboflow hosted inference - the easiest way to get your predictions 💥

💡 Please make sure you have inference-sdk installed

If you do not have the whole inference package installed, you will need to install at least inference-sdk:

pip install inference-sdk
💡 You need a Roboflow account to use our hosted platform
import cv2
from inference_sdk import InferenceHTTPClient

client = InferenceHTTPClient(api_url="https://infer.roboflow.com", api_key="<YOUR_ROBOFLOW_API_KEY>")
image = cv2.imread("<path_to_your_image>")
results = client.infer_from_yolo_world(
    image,
    ["person", "backpack", "dog", "eye", "nose", "ear", "tongue"],
    model_version="s",  # <-- you do not need to provide `yolo_world/` prefix here
)
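Once you have the response, a minimal post-processing sketch might look like the following. This assumes the standard Roboflow object-detection response shape (a dict with a "predictions" list of center-based boxes); verify against your actual response before relying on it:

```python
# Assumed response shape: the standard Roboflow detection format, where each
# prediction carries a center point (x, y), a box size (width, height),
# a class name, and a confidence score. The sample below is made up.
sample_results = {
    "predictions": [
        {"x": 320.0, "y": 240.0, "width": 80.0, "height": 120.0,
         "class": "dog", "confidence": 0.91},
    ]
}

def to_xyxy(prediction: dict) -> tuple:
    # Convert a center-based box to (x_min, y_min, x_max, y_max) corners.
    half_w = prediction["width"] / 2
    half_h = prediction["height"] / 2
    return (prediction["x"] - half_w, prediction["y"] - half_h,
            prediction["x"] + half_w, prediction["y"] + half_h)

for p in sample_results["predictions"]:
    print(p["class"], to_xyxy(p))  # dog (280.0, 180.0, 360.0, 300.0)
```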

Self-hosted inference server

💡 Please remember to clean up old versions of the docker image

If you have ever used the inference server before, please run:

docker rmi roboflow/roboflow-inference-server-cpu:latest

# or, if you have GPU on the machine
docker rmi roboflow/roboflow-inference-server-gpu:latest

in order to make sure the newest version of the image is pulled.

💡 Please make sure you run the server and have the SDK installed

If you do not have the whole inference package installed, you will need to install at least inference-cli and inference-sdk:

pip install inference-sdk inference-cli

Make sure you start a local instance of the inference server before running the code:

inference server start
import cv2
from inference_sdk import InferenceHTTPClient

client = InferenceHTTPClient(api_url="http://127.0.0.1:9001")
image = cv2.imread("<path_to_your_image>")
results = client.infer_from_yolo_world(
    image,
    ["person", "backpack", "dog", "eye", "nose", "ear", "tongue"],
    model_version="s",  # <-- you do not need to provide `yolo_world/` prefix here
)

In inference Python package

💡 Please remember to install inference with the yolo-world extras
pip install "inference[yolo-world]"
import cv2
from inference.models import YOLOWorld

image = cv2.imread("<path_to_your_image>")
model = YOLOWorld(model_id="yolo_world/s")
results = model.infer(
    image, 
    ["person", "backpack", "dog", "eye", "nose", "ear", "tongue"]
)
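The results can then be thresholded before visualization. A hedged sketch, assuming each prediction exposes a confidence score as in the standard Roboflow response format (the helper name and sample data are hypothetical):

```python
# Hypothetical post-filtering step; assumes predictions behave like dicts
# with a "confidence" key, as in the standard Roboflow response format.
def filter_by_confidence(predictions, threshold=0.5):
    # Keep only detections at or above the confidence threshold.
    return [p for p in predictions if p["confidence"] >= threshold]

sample = [
    {"class": "person", "confidence": 0.88},
    {"class": "backpack", "confidence": 0.31},
]
print(filter_by_confidence(sample))  # keeps only the "person" detection
```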

🌱 Changed

New Contributors

Full Changelog: v0.9.16...v0.9.17