bool inherited from number #125577
Conversation
🔗 Helpful Links
🧪 See artifacts and rendered test results at hud.pytorch.org/pr/125577
Note: Links to docs will display an error until the docs builds have been completed.
✅ You can merge normally! (1 unrelated failure) As of commit 5512c87 with merge base ead97ee.
FLAKY - The following job failed but was likely due to flakiness present on trunk.
This comment was automatically generated by Dr. CI and updates every 15 minutes.
@EikanWang - would you please review this PR?
LGTM.
Please fix UT failures.
Force-pushed 04166eb to a774791 (compare)
Force-pushed a774791 to 2af922a (compare)
Force-pushed 2af922a to d618138 (compare)
Force-pushed d618138 to 1a35692 (compare)
@ezyang Could you help review this PR?
You are still failing a bunch of tests.
Force-pushed 1a35692 to 1ec8ce2 (compare)
Hi all,
@pytorchbot merge -r
@pytorchbot started a rebase job onto refs/remotes/origin/viable/strict. Check the current status here.
Successfully rebased.
Force-pushed 1ec8ce2 to ddc0847 (compare)
Merge failed. Reason: Approvers from one of the following sets are needed:
I am not convinced I want to deal with the fallout from this in TorchScript. @davidberard98 for a second opinion.
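For background on the semantics this PR mirrors: in plain Python, bool is already a subclass of int, so booleans participate in arithmetic and overload resolution as numbers. A minimal illustration in standard Python (this demonstrates the Python-side behavior only, not the TorchScript change itself):

```python
# In Python, bool is a subclass of int: True behaves as 1, False as 0.
assert issubclass(bool, int)
assert isinstance(True, int)

# Booleans participate in arithmetic as numbers.
assert True + True == 2
assert sum([True, False, True]) == 2

# This is the kind of implicit widening (bool -> int -> float) that a
# scripted type system must decide whether to allow.
assert True * 2.5 == 2.5
```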
@davidberard98 has imported this pull request. If you are a Meta employee, you can view this diff on Phabricator.
I'm seeing internal failures - not sure if I'll be able to get to this for a few days, but I'll need to take a closer look to understand what's failing. I'll report back once I have a better understanding. Actually, the failures are from torchvision - specifically from
Hi @davidberard98, does it still fail on your test?
@Ma-Jian1 yes - do you think you can check using torchvision to better understand the failures?
Hi @davidberard98, I cloned torchvision into my local pytorch folder and investigated further. Running test/test_transforms_tensor.py::test_convert_image_dtype[out_dtype0-in_dtype0-cpu] produces this graph:
graph(%image.1 : Tensor,
%dtype.1 : int):
%382 : int = prim::Constant[value=4294967294]()
%377 : float = prim::Constant[value=9.2233720368547758e+18]()
%376 : float = prim::Constant[value=2147483648.]()
%333 : Function = prim::Constant[name="_max_value"]()
%332 : str = prim::Constant[value="builtins.RuntimeError"]() # /home/jianma/repo/pytorch/vision/torchvision/transforms/_functional_tensor.py:79:18
%329 : str = prim::Constant[value="The cast from {} to {} cannot be performed safely."]() # /home/jianma/repo/pytorch/vision/torchvision/transforms/_functional_tensor.py:78:18
%321 : int = prim::Constant[value=7]() # /home/jianma/repo/pytorch/vision/torchvision/transforms/_functional_tensor.py:76:27
%319 : bool = prim::Constant[value=1]() # /home/jianma/repo/pytorch/vision/torchvision/transforms/_functional_tensor.py:75:12
%312 : int = prim::Constant[value=4]() # /home/jianma/repo/pytorch/vision/torchvision/transforms/_functional_tensor.py:75:68
%311 : int = prim::Constant[value=3]() # /home/jianma/repo/pytorch/vision/torchvision/transforms/_functional_tensor.py:75:55
%307 : int = prim::Constant[value=6]() # /home/jianma/repo/pytorch/vision/torchvision/transforms/_functional_tensor.py:75:27
%206 : bool = prim::Constant[value=0]()
%205 : NoneType = prim::Constant()
%12 : int = prim::Constant[value=0]() # /home/jianma/repo/pytorch/vision/torchvision/transforms/_functional_tensor.py:71:24
%64 : float = prim::Constant[value=0.001]() # /home/jianma/repo/pytorch/vision/torchvision/transforms/_functional_tensor.py:86:14
%72 : float = prim::Constant[value=1.]() # /home/jianma/repo/pytorch/vision/torchvision/transforms/_functional_tensor.py:88:37
%3 : int = prim::dtype(%image.1)
%5 : bool = aten::eq(%3, %dtype.1) # /home/jianma/repo/pytorch/vision/torchvision/transforms/_functional_tensor.py:65:7
%305 : Tensor = prim::If(%5) # /home/jianma/repo/pytorch/vision/torchvision/transforms/_functional_tensor.py:65:4
block0():
-> (%image.1)
block1():
%198 : bool = aten::is_floating_point(%image.1) # /home/jianma/repo/pytorch/vision/torchvision/transforms/_functional_tensor.py:68:7
%201 : Tensor = prim::If(%198) # /home/jianma/repo/pytorch/vision/torchvision/transforms/_functional_tensor.py:68:4
block0():
%207 : Tensor = aten::tensor(%12, %dtype.1, %205, %206) # /home/jianma/repo/pytorch/vision/torchvision/transforms/_functional_tensor.py:71:11
%208 : bool = aten::is_floating_point(%207) # /home/jianma/repo/pytorch/vision/torchvision/transforms/_functional_tensor.py:71:11
%345 : Tensor = prim::If(%208) # /home/jianma/repo/pytorch/vision/torchvision/transforms/_functional_tensor.py:71:8
block0():
%215 : Tensor = aten::to(%image.1, %dtype.1, %206, %206, %205) # /home/jianma/repo/pytorch/vision/torchvision/transforms/_functional_tensor.py:72:19
-> (%215)
block1():
%306 : int = prim::dtype(%image.1)
%308 : bool = aten::eq(%306, %307) # /home/jianma/repo/pytorch/vision/torchvision/transforms/_functional_tensor.py:75:12
%310 : bool = prim::If(%308) # /home/jianma/repo/pytorch/vision/torchvision/transforms/_functional_tensor.py:75:12
block0():
%314 : int[] = prim::ListConstruct(%311, %312)
%315 : bool = aten::__contains__(%314, %dtype.1) # /home/jianma/repo/pytorch/vision/torchvision/transforms/_functional_tensor.py:75:45
-> (%315)
block1():
-> (%206)
%318 : bool = prim::If(%310) # /home/jianma/repo/pytorch/vision/torchvision/transforms/_functional_tensor.py:75:12
block0():
-> (%319)
block1():
%320 : int = prim::dtype(%image.1)
%322 : bool = aten::eq(%320, %321) # /home/jianma/repo/pytorch/vision/torchvision/transforms/_functional_tensor.py:76:12
%324 : bool = prim::If(%322) # /home/jianma/repo/pytorch/vision/torchvision/transforms/_functional_tensor.py:76:12
block0():
%326 : bool = aten::eq(%dtype.1, %312) # /home/jianma/repo/pytorch/vision/torchvision/transforms/_functional_tensor.py:76:45
-> (%326)
block1():
-> (%206)
-> (%324)
= prim::If(%318) # /home/jianma/repo/pytorch/vision/torchvision/transforms/_functional_tensor.py:75:8
block0():
%330 : int = prim::dtype(%image.1)
%msg.7 : str = aten::format(%329, %330, %dtype.1) # /home/jianma/repo/pytorch/vision/torchvision/transforms/_functional_tensor.py:78:18
= prim::RaiseException(%msg.7, %332) # /home/jianma/repo/pytorch/vision/torchvision/transforms/_functional_tensor.py:79:12
-> ()
block1():
-> ()
%334 : int = prim::CallFunction(%333, %dtype.1) # /home/jianma/repo/pytorch/vision/torchvision/transforms/_functional_tensor.py:87:24
%max_val.7 : float = aten::Float(%334) # /home/jianma/repo/pytorch/vision/torchvision/transforms/_functional_tensor.py:87:18
%337 : float = aten::add(%max_val.7, %72) # /home/jianma/repo/pytorch/vision/torchvision/transforms/_functional_tensor.py:88:27
%338 : float = aten::sub(%337, %64) # /home/jianma/repo/pytorch/vision/torchvision/transforms/_functional_tensor.py:88:27
%result.7 : Tensor = aten::mul(%image.1, %338) # /home/jianma/repo/pytorch/vision/torchvision/transforms/_functional_tensor.py:88:17
%343 : Tensor = aten::to(%result.7, %dtype.1, %206, %206, %205) # /home/jianma/repo/pytorch/vision/torchvision/transforms/_functional_tensor.py:89:15
-> (%343)
-> (%345)
block1():
= prim::Print(%376) # /home/jianma/repo/pytorch/vision/torchvision/transforms/_functional_tensor.py:93:8
%263 : Tensor = aten::tensor(%12, %dtype.1, %205, %206) # /home/jianma/repo/pytorch/vision/torchvision/transforms/_functional_tensor.py:97:11
%264 : bool = aten::is_floating_point(%263) # /home/jianma/repo/pytorch/vision/torchvision/transforms/_functional_tensor.py:97:11
%375 : Tensor = prim::If(%264) # /home/jianma/repo/pytorch/vision/torchvision/transforms/_functional_tensor.py:97:8
block0():
%image.65 : Tensor = aten::to(%image.1, %dtype.1, %206, %206, %205) # /home/jianma/repo/pytorch/vision/torchvision/transforms/_functional_tensor.py:98:20
%273 : Tensor = aten::div(%image.65, %376) # /home/jianma/repo/pytorch/vision/torchvision/transforms/_functional_tensor.py:99:19
-> (%273)
block1():
= prim::Print(%377) # /home/jianma/repo/pytorch/vision/torchvision/transforms/_functional_tensor.py:103:8
= prim::Print(%377, %376, %382) # /home/jianma/repo/pytorch/vision/torchvision/transforms/_functional_tensor.py:116:12
%image.91 : Tensor = aten::to(%image.1, %dtype.1, %206, %206, %205) # /home/jianma/repo/pytorch/vision/torchvision/transforms/_functional_tensor.py:117:20
%373 : Tensor = aten::mul(%image.91, %382) # /home/jianma/repo/pytorch/vision/torchvision/transforms/_functional_tensor.py:118:19
-> (%373)
-> (%375)
-> (%201)
      return (%305)
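For readers following the graph: the float-to-integer branch above corresponds to torchvision's convert_image_dtype logic, which scales a [0, 1] float image by (max_val + 1 - eps) before casting. A rough sketch of that branch (simplified and paraphrased from _functional_tensor.py; the function name and structure here are illustrative, not the exact torchvision code):

```python
import torch

def float_to_int_sketch(image: torch.Tensor, dtype: torch.dtype) -> torch.Tensor:
    # max_val corresponds to the prim::CallFunction(_max_value) call in the graph.
    max_val = float(torch.iinfo(dtype).max)
    eps = 1e-3  # the 0.001 constant (%64) in the graph
    # %337/%338/%result.7: scale by (max_val + 1 - eps), then cast.
    result = image * (max_val + 1.0 - eps)
    return result.to(dtype)

img = torch.tensor([0.0, 0.5, 1.0])
out = float_to_int_sketch(img, torch.uint8)  # tensor([  0, 127, 255], dtype=torch.uint8)
```

The eps term keeps 1.0 from overflowing past max_val after the cast, while still mapping it to the top of the integer range.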
I had to run "pip uninstall torch" several times to clean the virtual env.
Force-pushed ddc0847 to 5512c87 (compare)
Hi @davidberard98, the failing case in torchvision is related to casting int to float, which has an accuracy issue.
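To make the accuracy issue concrete: a 64-bit float has only 53 mantissa bits, so large integer dtype bounds cannot be represented exactly once cast to float — the 9.2233720368547758e+18 constant in the graph above is already the rounded value 2**63, not the int64 maximum 2**63 - 1. A quick standalone check in plain Python (illustrating the general float-precision behavior, not the specific torchvision test):

```python
# 2**63 - 1 (the int64 max) is not representable in a 64-bit float:
# it rounds up to exactly 2**63.
assert float(2**63 - 1) == 2**63
assert float(2**63 - 1) != 2**63 - 1

# Smaller bounds round-trip fine, since they fit in 53 mantissa bits.
assert float(2**31 - 1) == 2**31 - 1
```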
Hi @davidberard98,
Fixes #125003
cc @EikanWang @jgong5 @wenzhe-nrv @sanchitintel @gujinghui @PenghuiCheng @XiaobingSuper @jianyuh @mingfeima @ashokei @jingxu10 @min-jean-cho @yanbing-j @Guobing-Chen @Xia-Weiwen @snadampal