
Added Chinese and Korean examples to TextTokenizerTest #442

Merged 1 commit into master from km/token-lens on Dec 3, 2019

Conversation

Jauntbox (Contributor)

Related issues
n/a

Describe the proposed solution
n/a

Describe alternatives you've considered
n/a

Additional context
This is a small change to better allow testing of alternatives to the CJK tokenizer (which we've already replaced for Japanese). The CJK tokenizer tokenizes with character bigrams rather than trying to extract words, so most tokens from a text sample will have length 2 (not all, since other languages can be mixed in). Some of the simpler ID-detection calculations look at the distribution of token lengths, so they may incorrectly classify text from languages handled by the CJK tokenizer as IDs.
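To illustrate why bigram tokenization confuses length-based ID heuristics, here is a minimal Python sketch (not the project's Scala code; `cjk_bigrams` is a hypothetical helper that mimics the overlapping-pair behavior of a Lucene-style CJK bigram tokenizer):

```python
def cjk_bigrams(text):
    # Emit overlapping character pairs, as a CJK-style bigram tokenizer does,
    # instead of segmenting the text into actual words.
    return [text[i:i + 2] for i in range(len(text) - 1)]

tokens = cjk_bigrams("自然言語処理")  # "natural language processing" in Japanese
print(tokens)                    # ['自然', '然言', '言語', '語処', '処理']
print({len(t) for t in tokens})  # {2} -- every token has length 2
```

Because the token-length distribution collapses to a single value, a heuristic that treats uniform token lengths as a signal of machine-generated IDs can misfire on text routed through the CJK tokenizer.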

@tovbinm (Collaborator) left a comment

LGTM

codecov bot commented Nov 26, 2019

Codecov Report

Merging #442 into master will not change coverage.
The diff coverage is n/a.

Impacted file tree graph

@@           Coverage Diff           @@
##           master     #442   +/-   ##
=======================================
  Coverage   86.93%   86.93%           
=======================================
  Files         337      337           
  Lines       11096    11096           
  Branches      362      362           
=======================================
  Hits         9646     9646           
  Misses       1450     1450

Δ = absolute <relative> (impact), ø = not affected, ? = missing data
Powered by Codecov. Last update e45073d...9f67367.

@gerashegalov (Contributor) left a comment

LGTM

@leahmcguire (Collaborator) left a comment

LGTM


tovbinm commented Dec 3, 2019

LOCO test is failing @sanmitra @Jauntbox


sanmitra commented Dec 3, 2019

@tovbinm The LOCO test, com.salesforce.op.stages.impl.insights.RecordInsightsLOCOTest, is succeeding. Where exactly are you seeing the LOCO test failure?


tovbinm commented Dec 3, 2019

It’s a flaky one. See previous runs.

@tovbinm tovbinm merged commit 9778481 into master Dec 3, 2019
@tovbinm tovbinm deleted the km/token-lens branch December 3, 2019 20:15
@nicodv nicodv mentioned this pull request Jun 11, 2020