Sweep MVP #191
Merged · 7 commits · Apr 16, 2023
Changes from 1 commit
Update README.md
lauritowal committed Apr 16, 2023
commit e3d7af2bdf9c62f308a2058fc0583821bf7b4e3f
README.md: 2 changes (1 addition, 1 deletion)
@@ -35,7 +35,7 @@ elk eval naughty-northcutt microsoft/deberta-v2-xxlarge-mnli imdb
The following runs `elicit` on the Cartesian product of the listed models (gpt2-tiny gpt2-medium gpt2-large gpt2-xl) and datasets (imdb amazon_polarity), storing the results in a special folder, `ELK_DIR/sweeps/<memorable_name>`. Moreover, `--add_pooled` adds an additional dataset that pools all of the listed datasets together.

```diff
-elk sweep --models gpt2 gpt2-medium gpt2-large --datasets imdb amazon_polarity --add_pooled
+elk sweep --models gpt2-{tiny,medium,large,xl} --datasets imdb amazon_polarity --add_pooled
```
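
In the new command, `gpt2-{tiny,medium,large,xl}` is ordinary shell brace expansion: the shell expands it to the four model names before `elk` sees them, so the sweep covers eight (model, dataset) pairs. A minimal sketch of what that product looks like, assuming `elicit` accepts a model and a dataset as positional arguments (a hypothetical expansion for illustration; `elk sweep` performs this internally and also handles the pooled dataset from `--add_pooled`):

```bash
# Sketch only: enumerate the Cartesian product of models and datasets,
# running one elicit job per pair. This is not how elk sweep is
# implemented; it just illustrates which combinations the sweep covers.
for model in gpt2-tiny gpt2-medium gpt2-large gpt2-xl; do
  for dataset in imdb amazon_polarity; do
    elk elicit "$model" "$dataset"
  done
done
```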

## Caching