Fix caching calls to _vector_for_key_cached
and _out_of_vocab_vector_cached
#47
`_query_is_cached` will always return `False` because `key` should be in a tuple. `lru_cache` is able to unify `args`, `kwargs`, and default args in a call with the `get_default_args` magic in order to generate a consistent cache key. What this means is that:

a. all the default args will be part of `kwargs`;
b. any `args` with a default value will also be converted to `kwargs`;
c. for a parameter that has no default value, providing it as `args` in one call and as `kwargs` in another produces two different cache keys.

Therefore `_out_of_vocab_vector_cached._cache.get(((key,), frozenset([('normalized', normalized)])))` will always miss, since the actual cache key is `((key,), frozenset([('normalized', normalized), ('force', force)]))`.

It's also wasteful to call `_cache.get` and throw away the result, so I changed `_query_is_cached` to `_query_cached`.
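To make the miss concrete, here is a minimal sketch of the key-building behavior described in points a and b. This is not the library's actual decorator (the `cached` helper and `vector_for` function below are illustrative, and this sketch happens to also normalize no-default parameters passed by keyword, unlike point c); it only shows why a probe key that omits a defaulted parameter like `force` can never match the stored key:

```python
import inspect

def cached(fn):
    """Illustrative cache decorator: folds every defaulted parameter into
    the kwargs part of the key, mirroring points a/b above."""
    sig = inspect.signature(fn)
    # Parameters with no default stay in the positional part of the key.
    no_default = {n for n, p in sig.parameters.items()
                  if p.default is inspect.Parameter.empty}
    cache = {}

    def wrapper(*args, **kwargs):
        bound = sig.bind(*args, **kwargs)
        bound.apply_defaults()  # defaults become explicit (point a)
        pos = tuple(v for n, v in bound.arguments.items() if n in no_default)
        kw = frozenset((n, v) for n, v in bound.arguments.items()
                       if n not in no_default)  # defaulted params -> kwargs (point b)
        key = (pos, kw)
        if key not in cache:
            cache[key] = fn(*bound.args, **bound.kwargs)
        return cache[key]

    wrapper._cache = cache
    return wrapper

@cached
def vector_for(key, normalized=True, force=False):
    return (key, normalized, force)

vector_for("dog")
# Probe that omits 'force' -- always a miss, because the stored key
# includes ('force', False):
miss = vector_for._cache.get((("dog",), frozenset([("normalized", True)])))
# Probe with the full defaulted-kwargs set -- a hit:
hit = vector_for._cache.get(
    (("dog",), frozenset([("normalized", True), ("force", False)])))
```

Here `miss` is `None` while `hit` returns the cached value, which is exactly the mismatch the renamed `_query_cached` avoids by returning the looked-up value instead of discarding it.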