
playground token counting is misleading as it doesn't use a given model's tokenizer #134

Closed
thiswillbeyourgithub opened this issue Jun 22, 2024 · 1 comment

Comments

@thiswillbeyourgithub

Hi,

I think the playground token counting is misleading because many models don't publish their tokenizer, so we can't know the token IDs and sometimes not even the token count.

For example, Anthropic lets you query the number of tokens in a string but not their IDs, and \t\t counts as 1 token for Claude but 2 for OpenAI's gpt-3.5 and gpt-4 models. For some Python code that can make a huge difference!
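To illustrate, here is a minimal sketch of the comparison, assuming the `tiktoken` package is installed; the model names are only examples:

```python
# Minimal sketch: count tokens for the same string with OpenAI tokenizers.
# Assumes `tiktoken` is installed; model names below are illustrative.
import tiktoken

text = "\t\t"
for model in ("gpt-3.5-turbo", "gpt-4"):
    enc = tiktoken.encoding_for_model(model)
    ids = enc.encode(text)
    print(f"{model}: {len(ids)} token(s), ids={ids}")

# For Claude models, the Anthropic API can report a token count for a string,
# but it does not expose token IDs, so a playground counter based on an
# OpenAI tokenizer may not match what Claude actually sees.
```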

I think it would be best to warn the user, just before printing the token info, whenever the model name in use does not correspond to the tokenizer being used, instead of showing a token count without any disclaimer.

@AndraxDev
Owner

This function is experimental and may be removed in the future. It is not accepting new ideas or bug reports.

@AndraxDev closed this as not planned on Jun 22, 2024