Popular repositories

- OAI-evals (Python; public fork of openai/evals)
  OpenAI Evals is a framework for evaluating LLMs and LLM systems, and an open-source registry of benchmarks.
- lm-evaluation-harness (Python; public fork of EleutherAI/lm-evaluation-harness)
  A framework for few-shot evaluation of language models.
Repositories

Showing 2 of 2 repositories (lm-evaluation-harness and OAI-evals, both listed above).
People

This organization has no public members. You must be a member to see who is part of this organization.