Ikomia-dev/IkomiaApi

State-of-the-art Computer Vision with a few lines of code

Report a Bug · Request a Feature · Ask a Question


About The Project

Ikomia API is an open-source tool for easily building and deploying your Computer Vision solutions. You can mix your preferred frameworks, such as OpenCV, Detectron2, OpenMMLab or YOLO, with the best state-of-the-art algorithms from individual repositories.

No effort required: just choose what you want, and Ikomia downloads it, installs the requirements and runs everything in a few lines of code.

Getting Started

Installation

pip install ikomia

Usage

Ikomia API already includes more than 180 pre-integrated algorithms (mainly from OpenCV), but the most interesting ones are in Ikomia HUB. That's why you need to connect to Ikomia HUB when you want to download and install these algorithms.

You can use the demo credentials below, or get your own for free and join our community here :)

Ikomia authentication is based on two environment variables, IKOMIA_USER (your login) and IKOMIA_PWD (your password). You can set these variables from the command line or use this code snippet:

import os
import ikomia

# Demo credentials (replace with your own)
os.environ['IKOMIA_USER'] = "demo"
os.environ['IKOMIA_PWD'] = "jH4q72DApbRPa4k"

ikomia.authenticate()
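
To avoid hard-coding credentials in a script, here is a minimal sketch, assuming IKOMIA_USER and IKOMIA_PWD may already be set in your environment and prompting with Python's standard getpass module otherwise:

import os
import getpass
import ikomia

# Read credentials from the environment, prompting only for missing values
if 'IKOMIA_USER' not in os.environ:
    os.environ['IKOMIA_USER'] = input("Ikomia login: ")
if 'IKOMIA_PWD' not in os.environ:
    os.environ['IKOMIA_PWD'] = getpass.getpass("Ikomia password: ")

ikomia.authenticate()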

Whatever algorithm you use, the code pattern is always the same, which makes it easy to test multiple algorithms effortlessly.

from ikomia.dataprocess import workflow

# Init your workflow
wf = workflow.create("YOLO inference")

# Add YOLO and connect it to your input data
yolo_id, yolo = wf.add_task("infer_yolo_v7")
wf.connect_tasks(wf.getRootID(), yolo_id)
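
The same pattern applies to any other algorithm: only the task name changes. Here is a minimal sketch as a separate workflow, so it does not interfere with the YOLO one above; the task name used here is a hypothetical example of a HUB identifier, not necessarily an exact one.

from ikomia.dataprocess import workflow

# Same pattern, different algorithm (task name below is a hypothetical example)
wf_det = workflow.create("Detection inference")
det_id, det = wf_det.add_task("infer_detectron2_detection")
wf_det.connect_tasks(wf_det.getRootID(), det_id)

The rest of this guide continues with the YOLO workflow created above.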

Then run and display your results.

import cv2

# Run directly on your image
wf.run_on(path="path/to/your/image.png")

# YOLO output image with bounding boxes
img_bbox = wf.get_image_with_graphics(yolo_id)
img_bbox = cv2.cvtColor(img_bbox, cv2.COLOR_RGB2BGR)

cv2.imshow("Ikomia Demo", img_bbox)
cv2.waitKey(0)
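
If you are working on a headless machine with no display, here is a minimal alternative sketch, reusing the same yolo_id, that writes the annotated image to disk instead (the output path is just a placeholder):

import cv2

# Save the annotated image instead of opening a window (useful on headless servers)
img_bbox = wf.get_image_with_graphics(yolo_id)
img_bbox = cv2.cvtColor(img_bbox, cv2.COLOR_RGB2BGR)
cv2.imwrite("path/to/your/output.png", img_bbox)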

You can also change each algorithm's parameters.

yolo_params = {
    "custom_train": True,
    "custom_model": "path/to/your/model",
    "thr_conf": 0.25
}
wf.set_parameters(task_id=yolo_id, params=yolo_params)

If you don't know what the parameters are (which is often the case), just print your task!

print(yolo)

###################################
#	infer_yolo_v7
###################################

***********************************
*	 PARAMETERS
***********************************

cuda:True
thr_conf:0.25
iou_conf:0.5
pretrain_model:yolov7
custom_model:
img_size:640
custom_train:False

***********************************
*	 INPUTS
***********************************

-----------------------------------
Name: porsche-4795517_960_720
-----------------------------------
Description: 2D or 3D images.
Can be single frame from video or camera stream.
Save folder: 
Auto-save: 0
Data type: image
Save format: .png
Dimension count: 2
File name: 
-----------------------------------
Name: CGraphicsInput
-----------------------------------
Description: Graphics items organized in layer.
Represent shapes and types of objects in image.
Graphics can be created interactively by user.
Save folder: 
Auto-save: 0
Data type: graphics
Save format: .json
Dimension count: 0

***********************************
*	 OUTPUTS
***********************************

-----------------------------------
Name: CImageIO
-----------------------------------
Description: 2D or 3D images.
Can be single frame from video or camera stream.
Save folder: 
Auto-save: 0
Data type: image
Save format: .png
Dimension count: 2
File name: 
-----------------------------------
Name: CObjectDetectionIO
-----------------------------------
Description: Object detection data: label, confidence, box and color.
Save folder: 
Auto-save: 0
Data type: Object detection
Save format: .json
Dimension count: 0

Then you can easily save your workflow as a JSON file for reuse.

wf.save("path/to/your/workflow.json")
wf.load("path/to/your/workflow.json")
wf.run_on(path="path/to/your/image.png")
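
Building on the load call above, here is a minimal sketch that applies the reloaded workflow to every image in a folder (the folder path and extension are placeholders):

from pathlib import Path

# Run the reloaded workflow on every PNG image in a folder
for image_path in Path("path/to/your/images").glob("*.png"):
    wf.run_on(path=str(image_path))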

And finally, you can also export your results as JSON files.

# Get all object detection outputs (most of the time, there is only one)
output_list = wf.get_object_detection_output(task_name="infer_yolo_v7")
output_list[0].toJson()
{
  "detections": [
    {
      "box": {
        "height": 195,
        "width": 463,
        "x": 183,
        "y": 261
      },
      "color": {
        "a": 0,
        "b": 192,
        "g": 106,
        "r": 91
      },
      "confidence": 0.90673828125,
      "id": 0,
      "label": "ball"
    }
  ]
}
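
To persist these results, here is a minimal sketch using Python's standard json module, assuming toJson() returns the JSON string shown above (the output file name is just an example):

import json

# Parse the JSON string and write it to a file
detections = json.loads(output_list[0].toJson())

with open("detections.json", "w") as f:
    json.dump(detections, f, indent=2)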

Examples

You can find some notebooks here.

We provide some Google Colab tutorials:

- How to make a simple workflow (Open In Colab)
- How to run Neural Style Transfer (Open In Colab)
- How to train and run YOLO v7 on your datasets (Open In Colab)
- How to use Detectron2 Object Detection (Open In Colab)

Documentation

Python API documentation can be found here. You will find the source code of Ikomia HUB algorithms in our Ikomia HUB GitHub repository.

Contributing

This part is coming soon... :)

License

Distributed under the Apache-2.0 License. See LICENSE.md for more information.

They like us, we love them 😍

Stargazers repo roster for @Ikomia-dev/IkomiaApi

Star History

Star History Chart

Citing Ikomia

If you use Ikomia in your research, please use the following BibTeX entry.

@misc{DeBa2019Ikomia,
  author =       {Guillaume Demarcq and Ludovic Barusseau},
  title =        {Ikomia},
  howpublished = {\url{https://github.com/Ikomia-dev/IkomiaAPI}},
  year =         {2019}
}

Support

Contributions, issues, and feature requests are welcome! Give a ⭐ if you like this project!

Contact

Ikomia - @IkomiaOfficial - [email protected]

Project Link: https://github.com/Ikomia-dev/IkomiaAPI