👋 Hello!

I'm working on a GPT-4o multimodal chatbot. Here's my Next.js API route:

In this route, I format the user message so that it can include an image URL. The route works fine, but the `useChat` hook somehow strips the image and sends only a string message. Here's my `useChat` hook:

The messages array is of type `Message[]` instead of `CoreMessage[]` (which supports image-type messages), which is what I use in the route. I've also tried logging the content of the messages array to be sure, but I still get a string message.

How does one work with images using the `useChat` hook? Could you share a sample, please? If the `useChat` hook is not ready for multimodal models, is there a recent solution you can recommend?

PS: I haven't seen an official example for a multimodal chatbot 🤔 (except the one at https://sdk.vercel.ai, which is not open source ;/).