
Response engineering #8

Closed
appatalks opened this issue Jan 29, 2023 · 2 comments

Comments

appatalks (Owner) commented Jan 29, 2023

At the moment, prompts are engineered so that the initial prompt is always sent, along with the last response returned and the next input. This works pretty well for simple follow-ups:
-- Who was Zeus?
-- Did he have kids?
-- Did he have wives?
It can handle maybe five or so questions before the AI seems to get lost.

Pondering: should I attempt to send the entire conversation from the start?
Possible issue: the token budget left for new inputs/responses will shrink quickly, since the token limit counts both input and output.
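A rough sketch of both options, assuming illustrative names (`buildPrompt`, `estimateTokens`, `trimHistory` are not from this repo): the current send-initial-plus-last scheme, and a full-history variant that trims the oldest exchanges once a crude token estimate exceeds a budget.

```javascript
// Current scheme: always send the initial prompt, the last response,
// and the new user input. (Names here are illustrative.)
function buildPrompt(initialPrompt, lastResponse, userInput) {
  return `${initialPrompt}\n${lastResponse}\nUser: ${userInput}`;
}

// Very rough token estimate: ~4 characters per token is a common
// rule of thumb for English text (not an exact tokenizer).
function estimateTokens(text) {
  return Math.ceil(text.length / 4);
}

// Full-history alternative: keep the whole conversation (oldest first)
// but drop the oldest messages while the estimate exceeds the budget,
// leaving room for the model's output tokens.
function trimHistory(history, budget) {
  const trimmed = history.slice();
  let total = trimmed.reduce((sum, msg) => sum + estimateTokens(msg), 0);
  while (trimmed.length > 1 && total > budget) {
    total -= estimateTokens(trimmed.shift());
  }
  return trimmed;
}
```

For example, `trimHistory(messages, 3000)` on a 4096-token model would leave roughly 1000 tokens of headroom for the response.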

appatalks (Owner, Author) commented:

One thought is to take masterOutput from localStorage and process it in a way that pulls out key phrases or words to include in the input prompt. That way we can compress the stream down to only what is needed.

Maybe whatever the JavaScript equivalent of `grep -v noise_words` is. Basically, drop anything that isn't a noun is what I'm thinking.
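That filter can be sketched in plain JavaScript (the stopword list and function name here are illustrative, not from the repo — a real noise list would be much larger, and true noun detection would need a part-of-speech tagger):

```javascript
// Rough JavaScript equivalent of `grep -v noise_words`:
// drop common stopwords so only content-bearing words survive.
const NOISE_WORDS = new Set([
  "the", "a", "an", "and", "or", "but", "is", "was", "were",
  "did", "do", "he", "she", "it", "to", "of", "in", "on", "have",
]);

function compressContext(text) {
  return text
    .toLowerCase()
    .split(/\W+/)                          // split on non-word characters
    .filter((w) => w && !NOISE_WORDS.has(w))
    .join(" ");
}
```

For example, `compressContext("Did he have kids?")` reduces the question to just `"kids"`, which could then be appended to the prompt instead of the full exchange.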

appatalks (Owner, Author) commented:

I'll probably wait for additional API releases as this matures to figure out the best path forward.
