
Update BERT tokenizer integration to reflect API changes in Transformers library (#42)

* Be more specific about the version of the Transformers library

* Update BERT tokenizer integration to reflect latest APIs of the Transformers library
frreiss authored Jul 2, 2020
1 parent 6819c52 commit 8abb7b6
Showing 2 changed files with 2 additions and 2 deletions.
2 changes: 1 addition & 1 deletion config/dev_reqs.txt
@@ -4,7 +4,7 @@
 # NOTE: These requirements are IN ADDITION TO the packages in
 # config/dev_env.yml and requirements.txt
 pyyaml
-transformers
+transformers>=3.0.0
 spacy
 fastparquet
 sphinx
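The requirements change above pins a minimum version (`>=3.0.0`) rather than an exact release. As a minimal illustration of what such a floor means (the helper name and the naive dot-separated comparison are assumptions for this sketch, not part of the commit; real specifier resolution is pip's job):

```python
def meets_floor(installed: str, floor: str = "3.0.0") -> bool:
    """Naively check a plain X.Y.Z version string against a minimum pin.

    Sketch only: handles simple dot-separated numeric versions, not the
    full PEP 440 specifier grammar that pip actually implements.
    """
    def as_tuple(version: str) -> tuple:
        return tuple(int(part) for part in version.split("."))

    return as_tuple(installed) >= as_tuple(floor)
```

For example, `meets_floor("2.11.0")` is false while `meets_floor("3.0.0")` and `meets_floor("4.5.1")` are true, matching the intent of the new pin.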
2 changes: 1 addition & 1 deletion text_extensions_for_pandas/io/tokenization.py
@@ -49,7 +49,7 @@ def make_bert_tokens(target_text: str, tokenizer) -> pd.DataFrame:
     * "special_tokens_mask": `True` if the token is a zero-length special token
       such as "start of document"
     """
-    from transformers.tokenization_utils import PreTrainedTokenizerFast
+    from transformers import PreTrainedTokenizerFast

     if not isinstance(tokenizer, PreTrainedTokenizerFast):
         raise TypeError(
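The import move above reflects Transformers 3.x re-exporting `PreTrainedTokenizerFast` from the top-level package instead of the `transformers.tokenization_utils` module. A version-tolerant sketch (an assumption for illustration, not what the commit itself does; the `None` fallback is only there so the snippet degrades gracefully when Transformers is absent) could try both locations:

```python
# Sketch: try the Transformers >= 3.0 import location first, then the
# pre-3.0 module path, then fall back to None when the library is not
# installed at all (that last fallback is an assumption of this sketch).
try:
    from transformers import PreTrainedTokenizerFast  # Transformers >= 3.0
except ImportError:
    try:
        from transformers.tokenization_utils import PreTrainedTokenizerFast  # < 3.0
    except ImportError:
        PreTrainedTokenizerFast = None  # transformers not installed here
```

Pinning `transformers>=3.0.0` in the requirements (the other half of this commit) makes the fallback unnecessary in practice, which is why the committed code uses the new top-level import unconditionally.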

