Merge branch 'roboflow:main' into main
nathan-marraccini authored Jun 19, 2024
2 parents 69e2fc2 + 6bcc8fe commit ee3773b
Showing 11 changed files with 82 additions and 20 deletions.
23 changes: 23 additions & 0 deletions docs/fine-tuned/yolov10.md
@@ -0,0 +1,23 @@
[YOLOv10](https://github.com/THU-MIG/yolov10), released on May 23, 2024, is a real-time object detection model developed by researchers at Tsinghua University. YOLOv10 follows in the long-running series of YOLO models, whose authors span a wide range of research groups and organizations.

## Supported Model Types

You can deploy the following YOLOv10 model types with Inference:

- Object Detection

## Supported Inputs

Click a link below to see instructions on how to run a YOLOv10 model on different inputs:

- [Image](/quickstart/run_model_on_image/)
- [Video, Webcam, or RTSP Stream](/quickstart/run_model_on_rtsp_webcam/)
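
For a concrete starting point, here is a minimal sketch that runs a fine-tuned YOLOv10 model on a single image with the `inference` Python package. The model ID and API key are placeholders for your own Roboflow project; for video, webcam, or RTSP sources, `InferencePipeline` from the same package covers the streaming case.

```python
# Minimal sketch: run a fine-tuned YOLOv10 model on one image with Inference.
# "your-project/1" and "YOUR_API_KEY" are placeholders for your own
# Roboflow model ID and API key.
from inference import get_model

model = get_model(model_id="your-project/1", api_key="YOUR_API_KEY")

# infer() accepts a file path, URL, numpy array, or PIL image.
results = model.infer("image.jpg")
print(results)
```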

## License

See our [Licensing Guide](/quickstart/licensing/) for more information about how your use of YOLOv10 is licensed when using Inference to deploy your model.

## See Also

- [How to Train a YOLOv10 Model](https://blog.roboflow.com/yolov10-how-to-train/)
- [Deploy a YOLOv10 Model with Roboflow](https://blog.roboflow.com/deploy-yolov10-model/)
2 changes: 1 addition & 1 deletion docs/fine-tuned/yolov9.md
@@ -15,7 +15,7 @@ Click a link below to see instructions on how to run a YOLOv9 model on different

### Available Pretrained Models

-You may use keypoint detection models available on the [Universe](https://universe.roboflow.com/search?q=model:yolov9).
+You may use YOLOv9 object detection models available on the [Universe](https://universe.roboflow.com/search?q=model:yolov9).
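
As a rough illustration of using one of these models, the sketch below queries a YOLOv9 model through the `inference_sdk` HTTP client; the model ID is a placeholder for a model you find on Universe.

```python
# Sketch: run a YOLOv9 model from Universe via the hosted Roboflow API.
# The model ID below is a placeholder; substitute one from Universe.
from inference_sdk import InferenceHTTPClient

client = InferenceHTTPClient(
    api_url="https://detect.roboflow.com",
    api_key="YOUR_API_KEY",
)

result = client.infer("image.jpg", model_id="some-universe-project/2")
print(result)
```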

## Configure Your Deployment

2 changes: 1 addition & 1 deletion docs/quickstart/compatability_matrix.md
@@ -2,7 +2,7 @@

The table below shows on what devices you can deploy models supported by Inference.

-See our [Docker Getting Started](/docs/quickstart/docker) guide for more information on how to deploy Inference on your device.
+See our [Docker Getting Started](/quickstart/docker) guide for more information on how to deploy Inference on your device.

Table key:

2 changes: 1 addition & 1 deletion docs/quickstart/devices.md
@@ -15,7 +15,7 @@ You can set up a server to use computer vision models with Inference on the foll

The table below shows on what devices you can deploy models supported by Inference.

-See our [Docker Getting Started](/docs/quickstart/docker) guide for more information on how to deploy Inference on your device.
+See our [Docker Getting Started](/quickstart/docker) guide for more information on how to deploy Inference on your device.

Table key:

24 changes: 12 additions & 12 deletions docs/workflows/about.md
@@ -1,24 +1,24 @@
# Inference Workflows

-!!! note
+## What is a Workflow?

-    Workflows is an alpha product undergoing active development. Stay tuned for updates as we continue to
-    refine and enhance this feature.
+Workflows allow you to define multi-step processes that run one or more models to return results based on model outputs and custom logic.

-Inference Workflows allow you to define multi-step processes that run one or more models and returns a result based on the output of the models.

-With Inference workflows, you can:
+With Workflows, you can:

- Detect, classify, and segment objects in images.
-- Apply filters (i.e. process detections in a specific region, filter detections by confidence).
+- Apply logic filters such as establishing detection consensus or filtering detections by confidence.
- Use Large Multimodal Models (LMMs) to make determinations at any stage in a workflow.

-You can build simple workflows in the Roboflow web interface that you can then deploy to your own device or the cloud using Inference.
+<div class="button-holder">
+<a href="https://inference.roboflow.com/workflows/blocks/" class="button half-button">Explore all Workflows blocks</a>
+<a href="https://app.roboflow.com/workflows" class="button half-button">Begin building with Workflows</a>
+</div>

-You can build more advanced workflows for use on your own devices by writing a workflow configuration directly in JSON.
+![A license plate detection workflow implemented in Workflows](https://media.roboflow.com/inference/workflow-example.png)

-In this section of documentation, we describe what you need to know to create workflows.
+You can build and configure Workflows in the Roboflow web interface, then deploy them using the Roboflow Hosted API, self-host them locally or in the cloud with Inference, or run them offline on your hardware devices. You can also build more advanced Workflows by writing a Workflow configuration directly in the JSON editor.

-Here is an example structure for a workflow you can build with Inference Workflows:
+In this section of documentation, we walk through what you need to know to create and run workflows. Let’s get started!

-![](https://github.com/roboflow/inference/blob/main/inference/enterprise/workflows/assets/example_pipeline.jpg?raw=true)
+[Create and run a workflow.](/workflows/create_and_run/)
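
To give a feel for what a hand-written Workflow configuration can look like, here is a rough sketch expressed as a Python dictionary. The block type names, field names, and model ID are illustrative assumptions rather than the exact schema; the JSON editor in the app shows the authoritative format.

```python
# Rough sketch of a minimal Workflow definition: one object detection step
# whose predictions are exposed as an output. Block and field names here
# are illustrative and may not match the current Workflows schema.
import json

workflow_definition = {
    "version": "1.0",
    "inputs": [
        {"type": "InferenceImage", "name": "image"},
    ],
    "steps": [
        {
            "type": "ObjectDetectionModel",
            "name": "detector",
            "image": "$inputs.image",
            "model_id": "your-project/1",  # placeholder model ID
        },
    ],
    "outputs": [
        {
            "type": "JsonField",
            "name": "predictions",
            "selector": "$steps.detector.predictions",
        },
    ],
}

print(json.dumps(workflow_definition, indent=2))
```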
41 changes: 40 additions & 1 deletion docs/workflows/create_and_run.md
@@ -1,5 +1,44 @@
-Workflows allow you to define multi-step processes that run one or more models and return a result based on the output of the models.
# How to Create and Run a Workflow

## Example (Web App)

In this example, we are going to build a Workflow from scratch that detects license plates, crops the license plate, and then runs OCR on the license plate.

### Step 1: Create a Workflow

Navigate to the Workflows tab at the top of your workspace and select the Create Workflows button. We are going to start with a Single Model Workflow.

![Workflow start](https://media.roboflow.com/inference/workflow-example-start.png)

### Step 2: Add Crop

Next, we are going to add a block to our Workflow that crops the objects that our first model detects.

![Add crop](https://media.roboflow.com/inference/add-crop.gif)

### Step 3: Add OCR

We are then going to add an OCR model for text recognition to our Workflow. We will need to adjust the parameters in order to set the cropped object from our previous block as the input for this block.

![Add OCR](https://media.roboflow.com/inference/add-ocr.gif)

### Step 4: Add outputs to our response

Finally, we are going to add outputs to our response, including the cropped image alongside the outputs of both our detection model and our OCR model.

![Add output](https://media.roboflow.com/inference/add-output.gif)

### Run the Workflow

Selecting the Run Workflow button generates the code snippets you need to deploy your Workflow via the Roboflow Hosted API, run it locally on images via the Inference Server, or run it locally on video streams via the Inference Pipeline.

![Workflow code snippet](https://media.roboflow.com/inference/workflow-code-snippet.png)

You now have a workflow you can run on your own hardware!
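
For the video-stream case, the generated snippet might look roughly like the sketch below. It assumes an `InferencePipeline.init_with_workflow` initializer plus placeholder workspace and workflow identifiers; method and parameter names can differ between versions, so treat the snippet generated in the app as the source of truth.

```python
# Rough sketch: run a saved Workflow on a video stream with the Inference
# Pipeline. The initializer and its parameters are assumptions based on the
# inference package; the app-generated snippet is authoritative.
from inference import InferencePipeline

def on_prediction(result, video_frame):
    # Each result holds the outputs defined in the Workflow, e.g. the
    # detections, crops, and OCR text produced for this frame.
    print(result)

pipeline = InferencePipeline.init_with_workflow(
    api_key="YOUR_API_KEY",
    workspace_name="your-workspace",   # placeholder
    workflow_id="your-workflow-id",    # placeholder
    video_reference=0,                 # webcam index, file path, or RTSP URL
    on_prediction=on_prediction,
)
pipeline.start()
pipeline.join()
```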

## Example (Code, Advanced)

Workflows allow you to define multi-step processes that run one or more models and return a result based on the output of the models.

You can create and deploy workflows in the cloud or in Inference.

1 change: 0 additions & 1 deletion docs/yolov8.md

This file was deleted.

2 changes: 1 addition & 1 deletion inference/core/version.py
@@ -1,4 +1,4 @@
__version__ = "0.12.0"
__version__ = "0.12.1"


if __name__ == "__main__":
1 change: 1 addition & 0 deletions mkdocs.yml
@@ -38,6 +38,7 @@ nav:
- "Models: Universe": quickstart/load_from_universe.md
- "Models: Local Weights": models/from_local_weights.md
- Supported Fine-Tuned Models:
- YOLOv10: fine-tuned/yolov10.md
- YOLOv9: fine-tuned/yolov9.md
- YOLOv8: fine-tuned/yolov8.md
- YOLOv7: fine-tuned/yolov7.md
2 changes: 1 addition & 1 deletion requirements/requirements.sdk.http.txt
@@ -4,7 +4,7 @@ opencv-python>=4.8.0.0
pillow>=9.0.0
requests>=2.27.0
supervision<1.0.0
-numpy>=1.20.0
+numpy<=1.26.4
aiohttp>=3.9.0
backoff>=2.2.0
aioresponses>=0.7.6
2 changes: 1 addition & 1 deletion requirements/requirements.test.integration.txt
@@ -3,4 +3,4 @@ requests
pytest
pillow
requests_toolbelt
-numpy
+numpy<=1.26.4
