Turn on inference for OpaqueClosure #39681
Conversation
This turns on inference for `PartialOpaque` callees (but no optimization/inlining yet and also no dynamic dispatch to the optimized implementations). Because of the current design and some fixes getting pulled into previous PRs, I believe this is all that remains to be done on the inference front. In particular, we specialize the OpaqueClosure methods on the tuple formed by the tuple type of the environment (at inference time) and the argument tuples. This is a bit of an odd method specialization, but it seems like inference is just fine with it in general. In the fullness of time, we may want to store the specializations differently to give more freedom to partial optimizations, but that would require being able to re-enter inference later, which is currently not possible.
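A minimal sketch of what this enables, assuming a Julia build with `Base.Experimental.@opaque` (the opaque-closure surface syntax; the specialization-tuple comment is an illustration of the description above, not the compiler's literal data structure):

```julia
using Base.Experimental: @opaque

env = 10                      # captured environment
oc = @opaque x -> x + env     # an opaque closure capturing `env`

# Per the description, inference specializes the closure body on the
# tuple formed by the environment's tuple type and the argument types,
# conceptually something like Tuple{Tuple{Int}, Int} here, so the
# return type of the call below can be inferred as Int.
oc(5)                         # 15
```

This is only a usage-level illustration; the PR itself changes `Core.Compiler`, not the surface API.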
```julia
pushfirst!(argtypes, closure.env)
sig = argtypes_to_type(argtypes)
rt, edgecycle, edge = abstract_call_method(interp, closure.source::Method, sig, Core.svec(), false, sv)
info = OpaqueClosureCallInfo(edge)
```
JET now warns me that `edge` could be `nothing`, and thus this call may result in an error. I guess we may want to bail out with `return CallMeta(rt, false)` in that case?
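A standalone mock-up of the guard the comment suggests. `CallMeta` and `OpaqueClosureCallInfo` here are stand-ins for the real `Core.Compiler` types (which are not accessible as written here), so this only illustrates the control flow being proposed:

```julia
# Hypothetical stand-ins for the Core.Compiler types in the diff above.
struct OpaqueClosureCallInfo
    edge
end
struct CallMeta
    rt
    info
end

function finish_opaque_call(rt, edge)
    # Suggested bail-out: if inference produced no edge, return a
    # CallMeta with `false` info instead of wrapping `nothing`.
    edge === nothing && return CallMeta(rt, false)
    return CallMeta(rt, OpaqueClosureCallInfo(edge))
end

finish_opaque_call(Int, nothing)   # takes the bail-out path, info === false
```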
Yeah, I'm revising this code in a follow-up PR.