Reorg logical flow in train #37
Conversation
…ry Binary Classifier
Awesome 🐮, thanks for your hard work!

self.input_dicts = dict()
# init
Readability is not good, I think. I strongly suggest enumerating every task.
Hi, @woailaosang. The readability work is focused on the train logical flow here; I made fewer changes to other modules. Yes, enumerating every task is a good idea, but I think we first need to optimize the code to reduce the repeated code and logic. Otherwise you have to modify every place whenever you want to make a change.
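To illustrate the trade-off being discussed, here is a minimal sketch of "enumerate every task" without duplicating the shared logic: the tasks are listed explicitly in an `Enum`, while per-task differences live in a single lookup table, so a change to the common path only has to be made once. The task names and metric strings are illustrative, not the project's actual API.

```python
from enum import Enum

class TaskType(Enum):
    """Hypothetical enumeration of supported tasks (names are illustrative)."""
    CLASSIFICATION = "classification"
    REGRESSION = "regression"
    SEQUENCE_TAGGING = "sequence_tagging"

# Per-task differences are data, not copy-pasted branches;
# the shared handling stays in one place.
TASK_METRICS = {
    TaskType.CLASSIFICATION: "accuracy",
    TaskType.REGRESSION: "mse",
    TaskType.SEQUENCE_TAGGING: "f1",
}

def default_metric(task: TaskType) -> str:
    # Single dispatch point: adding a task means one enum member
    # and one table entry, not another if/elif chain.
    return TASK_METRICS[task]
```

This keeps every task visible at a glance (the reviewer's concern) while avoiding the repeated code the author is worried about.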
@@ -2,5 +2,7 @@
*~
*.pyc
*.cache*
*.vs*
What is the '.vs' directory for?
It's just VS Code configuration, not related to this project 😄
train.py
Outdated
vocab_info, initialize = None, False
if not conf.pretrained_model_path:
    vocab_info, initialize = get_vocab_info(problem, emb_matrix), True
print(initialize)
This 'print' line needs to be deleted, I think.
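Applying the suggestion would leave the conditional initialization and drop only the debugging `print`. A minimal sketch of the resulting logic, with `conf`, `problem`, `emb_matrix`, and `get_vocab_info` passed in as stand-ins for the real objects in train.py:

```python
def resolve_vocab(conf, problem, emb_matrix, get_vocab_info):
    # Build vocab info only when no pretrained model is supplied;
    # otherwise keep the defaults (no stray print statement).
    vocab_info, initialize = None, False
    if not conf.pretrained_model_path:
        vocab_info, initialize = get_vocab_info(problem, emb_matrix), True
    return vocab_info, initialize
```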
train.py
Outdated
# first time training, load problem from cache, and then backup the cache to model_save_dir/.necessary_cache/
if conf.use_cache and os.path.isfile(conf.problem_path):
def load(self, conf, problem, emb_matrix):
    # load dictionary when (not finetune) and (cache invalid)
load dictionary when (not finetune) and (cache valid)
thanks, done
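For reference, the corrected condition ("load dictionary when not finetune and cache valid") can be sketched as follows. The `conf` attribute names `pretrained_model_path`, `use_cache`, and `problem_path` mirror the snippets above; treat the exact shape as an assumption about the real train.py.

```python
import os

def should_load_from_cache(conf) -> bool:
    # "finetune" here means a pretrained model path was supplied.
    finetune = bool(conf.pretrained_model_path)
    # The cache is valid only when caching is enabled AND the file exists.
    cache_valid = conf.use_cache and os.path.isfile(conf.problem_path)
    return (not finetune) and cache_valid
```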
There are conflicts on these files: Conflicting files
done
I have reorganized the logical flow in train.py for better readability, scalability, and robustness. It is also the first step toward adding an encoding cache mechanism. I have tested the regression, classification, and Chinese tasks locally, but this still needs more testing from folks 😄 because there are a lot of changes in this PR.
The current logical flow is:

1. Cache
    1. conf
    2. problem
    3. embedding
    4. encoding
2. Encoding (when the cache is not usable)
    1. build dictionary
    2. encoding
3. Backup the cache
    1. create dir
    2. conf
    3. problem
    4. embedding
    5. encoding
4. Model
    1. model
    2. loss
    3. optimizer
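The flow above can be sketched as a toy skeleton: try the cache first, otherwise build and encode, back up the fresh artifacts, then assemble model/loss/optimizer. All names here (`DummyCache`, `train_flow`, the stage strings) are illustrative stand-ins, not the actual train.py API.

```python
class DummyCache:
    """Toy stand-in for the encoding cache (illustrative only)."""
    def __init__(self):
        self.store = None
    def valid(self, conf):
        return self.store is not None
    def load(self):
        return self.store
    def save(self, artifacts):
        self.store = artifacts  # real code would create model_save_dir/.necessary_cache/

def train_flow(conf, cache):
    steps = []                               # record the order of the stages
    if cache.valid(conf):                    # 1. cache: conf/problem/embedding/encoding
        problem, emb, enc = cache.load()
        steps.append("load_cache")
    else:
        problem = "problem"                  # 2.1 build dictionary
        emb = "embedding"
        enc = "encoding"                     # 2.2 encoding
        cache.save((problem, emb, enc))      # 3. backup: create dir + save artifacts
        steps.append("build_and_backup")
    steps += ["model", "loss", "optimizer"]  # 4. model, loss, optimizer
    return steps
```

On the second run the cache hit skips the expensive dictionary/encoding work, which is the point of the new cache mechanism.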