awesome-openai-vision-api-experiments


👋 hello

A set of examples showing how to use the OpenAI vision API to run inference on images, video files and webcam streams.

💻 Install

```bash
# create and activate virtual environment
python3 -m venv venv
source venv/bin/activate

# install dependencies
pip install -r requirements.txt
```

🔑 Keys

Experimenting with the OpenAI API requires an API key. You can create one from your OpenAI account on the OpenAI platform.
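
For example, assuming the key is exported as the OPENAI_API_KEY environment variable (the name the openai Python client looks for by default), a minimal setup sketch might look like this:

```python
import os

from openai import OpenAI

# Minimal sketch: the client reads OPENAI_API_KEY from the environment
# by default, so exporting the variable before running an experiment is enough.
api_key = os.environ.get("OPENAI_API_KEY")
if not api_key:
    raise RuntimeError("Set the OPENAI_API_KEY environment variable first.")

client = OpenAI(api_key=api_key)
```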

🧪 Experiments

| Experiment | Description | Code | HF Space |
| --- | --- | --- | --- |
| WebcamGPT | chat with video stream | GitHub | HuggingFace |
| Grounding DINO + GPT-4V | label images with Grounding DINO and GPT-4V | GitHub | |
| GPT-4V Classification | classify images with GPT-4V | GitHub | |
| GPT-4V vs. CLIP | compare GPT-4V and CLIP image classification with Autodistill | GitHub | |
| Hot Dog or not Hot Dog | simple image classification | GitHub | HuggingFace |
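
Most of the experiments above share the same basic call pattern: encode an image, attach it to a chat message, and ask a vision-capable model a question about it. Below is a minimal, illustrative sketch of that pattern, assuming the openai Python package (v1+) from requirements.txt; the image path, prompt, and model name are placeholders, and the exact code for each experiment lives in its linked repository.

```python
import base64
import os

from openai import OpenAI

client = OpenAI(api_key=os.environ["OPENAI_API_KEY"])

# Placeholder image path; any local JPEG or PNG works.
IMAGE_PATH = "example.jpg"


def encode_image(path: str) -> str:
    """Return the image encoded as base64 for use in a data URL."""
    with open(path, "rb") as f:
        return base64.b64encode(f.read()).decode("utf-8")


image_b64 = encode_image(IMAGE_PATH)

response = client.chat.completions.create(
    model="gpt-4-vision-preview",  # vision-capable model; swap in the current equivalent
    messages=[
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "Classify this image as 'hot dog' or 'not hot dog'."},
                {
                    "type": "image_url",
                    "image_url": {"url": f"data:image/jpeg;base64,{image_b64}"},
                },
            ],
        }
    ],
    max_tokens=100,
)

print(response.choices[0].message.content)
```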

🦸 Contribution

I would love your help in making this repository even better! Whether you want to fix a typo, add a new experiment, or suggest an improvement, feel free to open an issue or pull request.
