A simple method to reduce hallucinations in LLMs during long conversations.
But why do LLMs hallucinate in long conversations?
The primary reason is that they are not trained for long conversations; instead, they are trained on simple Q&A-style datasets that lack the depth of the back-and-forth humans typically have.
Solution:
Use a dataset of lengthy, multi-turn conversations, the way humans actually talk with LLMs, as sketched below.
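To make this concrete, here is a minimal sketch of what a single multi-turn training example could look like when flattened into one fine-tuning string with a chat template. The checkpoint name and the toy conversation are illustrative assumptions, not a specific recipe.

```python
# Minimal sketch (not a full training pipeline): turn one long,
# multi-turn conversation into a single fine-tuning example.
# Assumptions: Hugging Face `transformers` is installed, and the chosen
# checkpoint ("Qwen/Qwen2.5-0.5B-Instruct", purely illustrative) ships
# a chat template.

from transformers import AutoTokenizer

# One long conversation instead of an isolated Q&A pair: later turns
# refer back to earlier ones, which is exactly what simple Q&A data misses.
conversation = [
    {"role": "user", "content": "Plan a 3-day trip to Kyoto."},
    {"role": "assistant", "content": "Day 1: Fushimi Inari and Gion. Day 2: Arashiyama. Day 3: Nijo Castle."},
    {"role": "user", "content": "Swap day 2 for something kid-friendly."},
    {"role": "assistant", "content": "Sure, day 2 could be the Kyoto Railway Museum instead of Arashiyama."},
    {"role": "user", "content": "And remind me, what was on day 1?"},
    {"role": "assistant", "content": "Day 1 was Fushimi Inari and Gion."},
]

tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen2.5-0.5B-Instruct")

# Flatten the whole dialogue into one training string, so the model is
# trained to stay consistent with earlier turns rather than answering
# each question in isolation.
text = tokenizer.apply_chat_template(conversation, tokenize=False)
print(text)
```

Each such flattened conversation becomes one row in the fine-tuning set, so the loss is computed over full dialogues rather than single question-answer pairs.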
It's puzzling that this kind of dataset isn't used more often, given that it offers a straightforward fix for such a significant problem.
Thanks for Reading!🤗