
Add --no_balance flag to not balance datasets #287

Open · wants to merge 4 commits into base: main
Conversation

AlexTMallen (Collaborator):

No description provided.

@lauritowal (Collaborator) left a comment:

Ran with `elk elicit gpt2 imdb --no_balance True --disable_cache --max_examples 100 100 --num_gpus 1 --max_inlp_iter 4` and it seems to work.

Added some comments, though.

@@ -65,6 +65,9 @@ class Extract(Serializable):
     binarize: bool = False
     """Whether to binarize the dataset labels for multi-class datasets."""
 
+    no_balance: bool = False
lauritowal (Collaborator): Why not just make it `balance: bool = True`?

lauritowal (Collaborator): That would also avoid having `balance=not cfg.no_balance`.

AlexTMallen (Collaborator Author): Because it would be unclear how to use the flag to disable balancing from the CLI. `--balance False` or something is weirder than `--no_balance`.

lauritowal (Collaborator): `--balance False` does not seem weirder than `--no_balance True` to me. But okay, it's fine with me.

AlexTMallen (Collaborator Author): Yeah, I think I agree with you now.
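
For reference, a minimal sketch of the two flag designs being compared. The `ExtractConfig` name and the combined class are illustrative stand-ins, not the PR's actual `Extract` class, which would carry only one of the two fields:

```python
from dataclasses import dataclass


@dataclass
class ExtractConfig:
    # Design taken in this PR: a negated flag defaulting to False,
    # so balancing stays on unless the user passes --no_balance.
    no_balance: bool = False

    # Reviewer's alternative: a positive flag defaulting to True,
    # disabled via --balance False; call sites would then pass
    # balance=cfg.balance instead of balance=not cfg.no_balance.
    balance: bool = True


cfg = ExtractConfig()
assert cfg.balance and not cfg.no_balance  # both defaults keep balancing on
```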

@@ -212,12 +212,11 @@ def inlp(
     p = y.float().mean()
     H = -p * torch.log(p) - (1 - p) * torch.log(1 - p)
 
-    if max_iter is not None:
-        d = min(d, max_iter)
+    max_iter = max_iter or d
lauritowal (Collaborator): That's just some refactoring which has nothing to do with the balancing, I guess?

AlexTMallen (Collaborator Author): Right, I also added a max_iter flag, and this was a necessary refactoring.
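
To make the refactor concrete, a small sketch; the helper name is hypothetical (the repo inlines this logic in `inlp`):

```python
def resolve_iterations(d: int, max_iter: int | None = None) -> int:
    # Mirrors the refactor above: `max_iter or d` falls back to the
    # feature dimension d when no explicit cap is given, replacing the
    # old in-place clamp `d = min(d, max_iter)`.
    return max_iter or d


assert resolve_iterations(100) == 100            # no cap: run up to d iterations
assert resolve_iterations(100, max_iter=4) == 4  # capped, as with --max_inlp_iter 4
```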

@@ -6,7 +6,7 @@
 
 
 def train_supervised(
-    data: dict[str, tuple], device: str, mode: str
+    data: dict[str, tuple], device: str, mode: str, max_inlp_iter: int | None = None
lauritowal (Collaborator): That's a new feature not related to the balancing either, right?
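
For context, a sketch of the widened signature in use; everything beyond the signature itself is an assumption, not the PR's code:

```python
def train_supervised(
    data: dict[str, tuple],
    device: str,
    mode: str,
    max_inlp_iter: int | None = None,
) -> None:
    # Hypothetical body: the real function trains a supervised probe;
    # this stub only shows the new keyword being threaded through to
    # the INLP step when that mode is selected.
    if mode == "inlp":
        print(f"would run INLP with max_iter={max_inlp_iter}")


# Existing callers are unaffected since the new parameter defaults to None;
# the CLI's --max_inlp_iter value would be forwarded here.
train_supervised({}, device="cpu", mode="inlp", max_inlp_iter=4)
```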
