Refactor cache logic for easy replacement #35831

Merged · 4 commits · Jun 9, 2020
Changes from 1 commit
Apply suggestions from code review
Co-authored-by: Jameson Nash <[email protected]>
Keno and vtjnash committed Jun 5, 2020
commit 914d7f76e829211d1d2ae9cc2f1c23db54a1a0c5
base/compiler/typeinfer.jl (1 addition & 1 deletion)

@@ -137,7 +137,7 @@ already_inferred_quick_test(interp::AbstractInterpreter, mi::MethodInstance) =
     false
 
 # inference completed on `me`
-# update the MethodInstance and notify the edges
+# update the MethodInstance
 function cache_result!(interp::AbstractInterpreter, result::InferenceResult, min_valid::UInt, max_valid::UInt)
     # check if the existing linfo metadata is also sufficient to describe the current inference result
     # to decide if it is worth caching this
base/compiler/types.jl (1 addition & 9 deletions)

@@ -181,15 +181,7 @@ code_cache(ni::NativeInterpreter) = WorldView(GLOBAL_CI_CACHE, ni.world)
 """
     lock_mi_inference(ni::NativeInterpreter, mi::MethodInstance)
 
-Locks `mi`'s inference flag to prevent infinite recursion inside inference.
-While running inference, we must occasionally compile additional code, which
-may in turn request additional code to be inferred. This can happen during
-bootstrap (for part of inference itself), but also during regular execution
-(e.g. to expand generated functions). The inference flag lets the runtime know
-that it should not attempt to re-infer the requested functions as they are
-being worked on higher in the stack. Note that inference itself does not look
-at this flag, instead checking its own working caches; it is only used for
-locking the C runtime.
+Hint that `mi` is in inference, to help accelerate bootstrapping. This limits the amount of wasted work we might do while inference is initially inferring itself, by letting us detect that inference is already in progress and avoid running a second copy of it. This creates a data race, but the entry point into this code from C (`jl_type_infer`) already detects and restricts recursion, so the race is hopefully mostly benign (we should encounter this flag only during the first phase of bootstrapping).
 """
 lock_mi_inference(ni::NativeInterpreter, mi::MethodInstance) = (mi.inInference = true; nothing)
 
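Aside (not part of the diff): the point of this refactor, per the PR title, is that the cache interface becomes something a custom `AbstractInterpreter` can replace. Below is a minimal sketch of such a replacement, built on the names visible in this diff (`code_cache`, `WorldView`, `lock_mi_inference`); `CustomCache`, `CustomInterp`, and `unlock_mi_inference` as the counterpart to the lock are assumptions, and a real cache would also need to implement the lookup methods that `WorldView` forwards to.

    # Sketch: an interpreter that swaps GLOBAL_CI_CACHE for its own store.
    # Assumes the AbstractInterpreter interface from this PR; a complete
    # interpreter must also define InferenceParams, OptimizationParams, etc.
    using Core.Compiler: AbstractInterpreter, WorldView

    struct CustomCache
        dict::IdDict{Core.MethodInstance,Core.CodeInstance}
    end
    CustomCache() = CustomCache(IdDict{Core.MethodInstance,Core.CodeInstance}())

    struct CustomInterp <: AbstractInterpreter
        cache::CustomCache
        world::UInt
    end

    # All cache reads and writes go through the custom store instead of
    # the global one.
    Core.Compiler.code_cache(interp::CustomInterp) = WorldView(interp.cache, interp.world)

    # Only the C runtime consults `mi.inInference`, so an interpreter that
    # owns its cache can treat the lock (and its assumed unlock counterpart)
    # as a no-op.
    Core.Compiler.lock_mi_inference(::CustomInterp, ::Core.MethodInstance) = nothing
    Core.Compiler.unlock_mi_inference(::CustomInterp, ::Core.MethodInstance) = nothing

Since the native `jl_ci_cache_lookup` path below refuses to run inference automatically when a custom lookup is installed, an interpreter like this is responsible for populating its own cache.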
src/aotcompile.cpp (2 additions & 1 deletion)

@@ -265,7 +265,8 @@ static void jl_ci_cache_lookup(const jl_cgparams_t &cgparams, jl_method_instance
     if (*src_out == NULL || !jl_is_code_info(*src_out)) {
         if (cgparams.lookup != jl_rettype_inferred) {
             jl_error("Refusing to automatically run type inference with custom cache lookup.");
-        } else {
+        }
+        else {
             *src_out = jl_type_infer(mi, world, 0);
             codeinst = jl_get_method_inferred(mi, (*src_out)->rettype, (*src_out)->min_world, (*src_out)->max_world);
             if ((*src_out)->inferred && !codeinst->inferred)