
[BUG_FIX] RobertaTokenizerFast object has no attributes max_len #84

Closed
wants to merge 1 commit into from

Conversation

davebulaval

Quick fix for the transformers update v4.0.0.

Regarding the tokenizer classes:

The tokenizer attribute max_len becomes model_max_length.
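To illustrate the rename, here is a minimal sketch of a version-agnostic lookup; the two stand-in tokenizer classes are hypothetical placeholders, not real transformers classes:

```python
# Illustration of the rename in transformers v4.0.0: the pre-4.0.0
# attribute `max_len` became `model_max_length`. A helper can try the
# new name first and fall back to the legacy one. The two stand-in
# classes below are hypothetical, not real tokenizers.

class OldStyleTokenizer:
    """Mimics a pre-4.0.0 tokenizer exposing `max_len`."""
    max_len = 512

class NewStyleTokenizer:
    """Mimics a v4.0.0+ tokenizer exposing `model_max_length`."""
    model_max_length = 512

def get_max_length(tokenizer):
    # Prefer the new attribute name; fall back to the legacy one.
    if hasattr(tokenizer, "model_max_length"):
        return tokenizer.model_max_length
    return tokenizer.max_len

print(get_max_length(OldStyleTokenizer()))  # 512
print(get_max_length(NewStyleTokenizer()))  # 512
```

A fallback like this is one way to support both attribute names at once, which becomes relevant in the compatibility discussion below.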

@davebulaval
Author

davebulaval commented Dec 1, 2020

Also, I've noticed some dead imports, no styling rules (Pylint and YAPF), some dead code, and a build that cannot pass on Travis. If the maintainers are interested, I can push a PR including fixes for these.

Also, the example is outdated with respect to the versions of bert_score and transformers.

Also, I would recommend not trying to make the tests pass on Travis for now, since the failures there are confusing.

@felixgwu
Collaborator

felixgwu commented Dec 3, 2020

Hi @davebulaval, thank you for pointing this out.
However, your changes would break backward compatibility with older transformers versions. Could you please modify your PR to make sure the code also works with versions 2.0.0 and 3.0.0?
If you don't have time, we can fix it ourselves.

@kalyankumarp

Hi guys,

Thanks for your awesome work on the bert_score library. It's definitely a better evaluation metric than ROUGE. I have a text summarization project submission on Tuesday. Until a few days ago, I was using ROUGE scores for hyperparameter tuning. I believe that I will get better summaries when tuning with BERTScore.
I would really appreciate it if you could fix this compatibility issue soon.
I would really appreciate if you guys fix this compatibility issue soon.

@davebulaval
Author

> Hi guys,
>
> Thanks for your awesome work on the bert_score library. It's definitely a better evaluation metric than ROUGE. I have a text summarization project submission on Tuesday. Until a few days ago, I was using ROUGE scores for hyperparameter tuning. I believe that I will get better summaries when tuning with BERTScore.
> I would really appreciate it if you could fix this compatibility issue soon.

For a quick fix, just run `pip install -U git+https://github.com/davebulaval/bert_score.git@master`

@kalyankumarp

> Hi guys,
> Thanks for your awesome work on the bert_score library. It's definitely a better evaluation metric than ROUGE. I have a text summarization project submission on Tuesday. Until a few days ago, I was using ROUGE scores for hyperparameter tuning. I believe that I will get better summaries when tuning with BERTScore.
> I would really appreciate it if you could fix this compatibility issue soon.
>
> For a quick fix, just run `pip install -U git+https://github.com/davebulaval/bert_score.git@master`

Thanks. Will try that.

@felixgwu
Collaborator

felixgwu commented Dec 5, 2020

Hi @kalyankumarp,

We currently recommend using a transformers version before 4.0.0, because neither this PR nor #86 has passed our unit tests. Until we figure out where the bugs are, it's safer to use an older version; otherwise, the numbers you report in your paper won't be comparable to those in other papers.

Best, Felix

@kalyankumarp

Hi Felix,

I thought of using transformers version 3.0.0, but I am using the BART model from the SimpleTransformers library for text summarization, and that library needs transformers version 4.0.0.

Thanks,
Kalyan.

@felixgwu
Collaborator

felixgwu commented Dec 5, 2020

Hi @kalyankumarp,
We have fixed the problem, and the code in the master branch now supports transformers version 4.0.0 and has passed all our unit tests. Please feel free to pull the newest commit.

@felixgwu
Collaborator

felixgwu commented Dec 5, 2020

Hi @davebulaval,
Thanks again for being the first to point out this problem and to submit a PR for it. Since you didn't respond to our request for modifications, and supporting version 4.0.0 is urgent, we decided to merge #86 instead, as it fixes the problem while remaining compatible with older versions. We still really appreciate your help.

@felixgwu felixgwu closed this Dec 5, 2020
@kalyankumarp

Thanks, Felix. Will use the master code.

@kalyankumarp

kalyankumarp commented Dec 6, 2020

> Hi @kalyankumarp,
> We have fixed the problem, and the code in the master branch now supports transformers version 4.0.0 and has passed all our unit tests. Please feel free to pull the newest commit.

Hi,
Do I have to wait for version 0.3.7 of bert_score to be able to use transformers 4.0.0? I tried bert_score 0.3.6 with transformers 4.0.0 and am still getting the same error. It still enters the block `if LooseVersion(transformers.__version__) >= LooseVersion("3.0.0"):`
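As a sketch of why the released 0.3.6 still hits that branch: a check that only distinguishes pre- and post-3.0.0 routes transformers 4.0.0 into the same branch as 3.x, where the attributes removed in v4 are still assumed. A plain tuple comparison stands in for `LooseVersion` here; the helper names are illustrative, not from bert_score:

```python
# Sketch: a version guard that only knows about the 3.0.0 boundary
# cannot tell 4.0.0 apart from 3.x, so transformers 4.0.0 still enters
# the `>= 3.0.0` branch. A tuple comparison stands in for LooseVersion.

def version_tuple(v):
    """Parse a dotted version string into a comparable tuple."""
    return tuple(int(part) for part in v.split("."))

def branch_taken(transformers_version):
    # Mirrors the two-way check in the error message above.
    if version_tuple(transformers_version) >= version_tuple("3.0.0"):
        return ">=3.0.0 branch"   # taken by 3.x AND 4.x alike
    return "<3.0.0 branch"

print(branch_taken("3.0.0"))  # >=3.0.0 branch
print(branch_taken("4.0.0"))  # >=3.0.0 branch (hence the same error)
```

So until a release adds a separate 4.0.0 check, the 0.3.6 package cannot distinguish the two cases.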
