GPT-4 Vision-based footage analyst #8675
-
This is a cool project that can definitely help with false positives. Here are a couple of thoughts I have:
What are your thoughts?
-
Added a custom prompt feature so GPT can understand each camera's environment. For example:
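To illustrate the idea, here is a minimal sketch of how per-camera prompts could be merged into a base analysis prompt. The names (`CAMERA_PROMPTS`, `build_prompt`) and the example contexts are my own assumptions for illustration, not AmbleGPT's actual configuration format:

```python
# Hypothetical sketch of per-camera prompt context. Names and example
# strings are illustrative, not AmbleGPT's real config schema.

BASE_PROMPT = (
    "You are reviewing frames from a security camera. "
    "Summarize what happens in one or two sentences."
)

# Extra context describing each camera's environment (assumed layout).
CAMERA_PROMPTS = {
    "front_door": "This camera faces the front porch; couriers often leave packages here.",
    "driveway": "This camera overlooks the driveway; the owner's car is a gray sedan.",
}

def build_prompt(camera: str) -> str:
    """Append camera-specific context, if configured, to the base prompt."""
    extra = CAMERA_PROMPTS.get(camera)
    return f"{BASE_PROMPT} {extra}" if extra else BASE_PROMPT
```

A camera with no entry simply falls back to the generic prompt, so adding context per camera stays optional.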
-
Hi 👋! I have been a Frigate user for a while and have always been impressed by Frigate's versatility. After switching from Nest to Frigate, I tried to replicate the package delivery notifications of the older camera system. This month, with the release of the GPT-4 Vision API, I was able to take my experimentation to the next level and achieve a higher degree of contextual understanding.
Here is a prototype called AmbleGPT https://github.com/mhaowork/amblegpt that I put together quickly over the past couple of weeks. Feel free to give it a shot!
Feedback and suggestions are welcome!
Summary
AmbleGPT is activated by a Frigate event via MQTT and analyzes the event clip using the OpenAI GPT-4 Vision API. It returns an easy-to-understand, context-rich summary. AmbleGPT then publishes this summary text in an MQTT message. The message can be received by the Home Assistant Frigate Notification Automation and sent to a user via iOS/Android notifications.
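The flow above can be sketched roughly as follows, with the MQTT and OpenAI network calls stubbed out. The function names, MQTT topic, and payload shape here are my assumptions for illustration, not AmbleGPT's actual identifiers; the GPT-4 Vision request format follows the OpenAI chat API's `image_url` content parts:

```python
# Rough sketch of the event pipeline: sample frames from a Frigate event
# clip, build a GPT-4 Vision request, then publish the summary over MQTT.
# Topic and field names are illustrative assumptions.
import base64
import json

def build_vision_request(frames: list[bytes], prompt: str) -> dict:
    """Build a GPT-4 Vision chat request: one text part plus one
    base64-encoded image part per sampled JPEG frame."""
    content = [{"type": "text", "text": prompt}]
    for frame in frames:
        b64 = base64.b64encode(frame).decode("ascii")
        content.append({
            "type": "image_url",
            "image_url": {"url": f"data:image/jpeg;base64,{b64}"},
        })
    return {
        "model": "gpt-4-vision-preview",
        "messages": [{"role": "user", "content": content}],
    }

def build_summary_message(event_id: str, summary: str) -> tuple[str, str]:
    """Package the returned summary as an MQTT (topic, payload) pair that a
    Home Assistant notification automation could consume."""
    payload = json.dumps({"event_id": event_id, "summary": summary})
    return ("amblegpt/summary", payload)
```

In a real deployment these builders would sit between an MQTT subscriber on Frigate's events topic and the OpenAI client call, with the resulting summary published for the Home Assistant Frigate Notification Automation to forward as an iOS/Android notification.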
Demo
(Demo videos and additional example clips omitted; see the repo.)
More details are in https://github.com/mhaowork/amblegpt