GUI to explore large image collections with text queries (Python, updated Jul 2, 2023)
Search for images using text and images.
Search relevant images using text/image query.
Deep learning pet breed recognition app
Recommendation system that searches similar items
Group images by provided labels using OpenAI/CLIP
Search images by text input with CLIP
Generation of faces, numbers, and images, plus Stable-Diffusion inpainting via segmentation with the SAM and CLIP models
ChatSense - Llama 2 + Code Llama + CLIP based Chatbot
Visual and Vision-Language Representation Pre-Training with Contrastive Learning
Text2ImageDescription retrieves relevant images from the Pascal VOC 2012 dataset using OpenAI CLIP, based on text queries, and generates descriptions using a quantized Mistral-7B model.
Computation-free personalization at test time for sEMG gesture classification. Fast (GPU/CPU) Ninapro API.
CLIFS (CLIP-based Frame Selection) is a Python function that takes a video file and a text prompt as input and uses the CLIP (Contrastive Language-Image Pre-training) model to find the frame in the video most similar to the given text prompt.
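The core step of that kind of frame selection — score each frame's image embedding against the text embedding and take the argmax — can be sketched as below. This is a minimal illustration, not CLIFS itself: the toy 2-D vectors stand in for real CLIP encoder outputs, which would come from running the CLIP image and text encoders over decoded video frames and the prompt.

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def best_frame(frame_embeddings, text_embedding):
    """Return the index of the frame embedding most similar to the text embedding."""
    scores = [cosine_similarity(f, text_embedding) for f in frame_embeddings]
    return max(range(len(scores)), key=scores.__getitem__)

# Toy embeddings standing in for CLIP image/text encoder outputs.
frames = [[1.0, 0.0], [0.7, 0.7], [0.0, 1.0]]
text = [0.6, 0.8]
print(best_frame(frames, text))  # prints 1: the middle frame points closest to the text vector
```

In a real pipeline the frame embeddings would typically be batched through the image encoder and L2-normalized once, so selection reduces to a single matrix-vector product.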
A list of projects that use OpenAI's CLIP model.
OpenAI's CLIP neural network
Generative models for architecture prose and schematics