
frontends.torch.Tensor: add matmul instance method #6935

Merged: 6 commits into ivy-llc:master on Nov 25, 2022
Conversation

hmahmood24 (Contributor):

Close #6557
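For context, a minimal sketch of the operation this PR adds: torch.Tensor.matmul performs batched matrix multiplication, and NumPy's @ operator follows the same batch-broadcasting rules, so it serves as a simple stand-in here.

```python
import numpy as np

# Batched matmul: a stack of three 2x4 matrices times a stack of three 4x3 matrices.
# torch.Tensor.matmul and NumPy's @ operator share these batching semantics.
a = np.ones((3, 2, 4))  # three 2x4 matrices
b = np.ones((3, 4, 3))  # three 4x3 matrices
out = a @ b             # shape (3, 2, 3); each entry is a dot product of four ones
print(out.shape)
```

The shapes here are illustrative, not taken from the PR's test cases.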

ivy-leaves added the "PyTorch Frontend" label on Nov 11, 2022
hmahmood24 (Contributor, Author):

Hi,
Regarding the failing test: I added @reproduce_failure('6.55.0', b'AXicY2BgYGRgBCIGOGAaKswGCRCL8SIqczC4jHIm1EMXcCtQACvA7eMGZXwKALhvB3k=') to my test and bumped the hypothesis version to 6.55.0 as mentioned in the decorator, but I was unable to reproduce the error; instead I got:

hypothesis.errors.DidNotReproduce: Expected the test to raise an error, but it completed successfully

Just thought you should know.

)
def test_torch_instance_matmul(
dtype_tensor1_tensor2,
as_variable,
Contributor:

where are as_variable and native_array defined?

Contributor:

The test_torch_instance_matmul error stack trace reads:

_ test_torch_instance_matmul[cpu-ivy.functional.backends.tensorflow-False-False-tensorflow] _

get_command_line_flags = {'as_variable': None, 'container': None, 'instance_method': None, 'native_array': None, ...}
device = 'cpu'
f = <module 'ivy.functional.backends.tensorflow' from '/ivy/ivy/functional/backends/tensorflow/__init__.py'>
fw = 'tensorflow', fixt_frontend_str = 'torch', args = (), kwargs = {}

    @given(data=st.data())
>   @settings(max_examples=1)

ivy_tests/test_ivy/helpers/testing_helpers.py:184: 
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
ivy_tests/test_ivy/helpers/testing_helpers.py:226: in new_fn
    return test_fn(*args, **kwargs)
ivy_tests/test_ivy/test_frontends/test_torch/test_tensor.py:2582: in test_torch_instance_matmul
    @given(
ivy_tests/test_ivy/test_frontends/test_torch/test_tensor.py:2591: in test_torch_instance_matmul
    helpers.test_frontend_method(
ivy_tests/test_ivy/helpers/function_testing.py:1429: in test_frontend_method
    value_test(
ivy_tests/test_ivy/helpers/assertions.py:122: in value_test
    assert_all_close(
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 

ret_np = array([[[12.   , 12.   , 12.   ],
        [12.   , 12.   , 12.   ]],

       [[12.   , 12.   , 12.   ],
        [12.   , 12.   , 12.   ]],

       [[24.125, 12.   , 12.   ],
        [38.   , 18.125, 18.125]]])
ret_from_gt_np = array([[[12.  , 12.  , 12.  ],
        [12.  , 12.  , 12.  ]],

       [[12.  , 12.  , 12.  ],
        [12.  , 12.  , 12.  ]],

       [[24.25, 12.  , 12.  ],
        [38.5 , 18.25, 18.25]]])
rtol = 0.01, atol = 1e-06, ground_truth_backend = 'torch'
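The mismatch above can be checked directly with NumPy (values copied from the trace; note that np.isclose uses |a - b| <= atol + rtol * |b|, which approximates Ivy's assert_all_close but may not match it exactly):

```python
import numpy as np

# Values copied from the failing assertion above.
ret_np = np.array([[[12., 12., 12.], [12., 12., 12.]],
                   [[12., 12., 12.], [12., 12., 12.]],
                   [[24.125, 12., 12.], [38., 18.125, 18.125]]])
ret_from_gt_np = np.array([[[12., 12., 12.], [12., 12., 12.]],
                           [[12., 12., 12.], [12., 12., 12.]],
                           [[24.25, 12., 12.], [38.5, 18.25, 18.25]]])

# Elementwise comparison with the same tolerances as the test.
close = np.isclose(ret_np, ret_from_gt_np, rtol=0.01, atol=1e-06)
# Show which entries exceed the tolerance (the 38.0 vs 38.5 pair does).
print(np.argwhere(~close))
```

With these values only one entry fails rtol=0.01, which suggests an accumulated-precision difference between backends rather than a wrong result.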

hmahmood24 (Contributor, Author):

The @handle_cmd_line_args decorator takes care of these strategies at runtime, so there's no need to define them explicitly in the test. I have never had to define them in my previous contributions to the codebase. Additionally, you can read point no. 3 here. One of the maintainers of this repo actually advised me against defining strategies for as_variable and native_array and to let @handle_cmd_line_args take care of that for us. I don't think anyone else has defined them explicitly either.

Contributor:

Hi @hmahmood24, indeed you're correct; my apologies. I haven't been working on the frontends, so I'm a little behind on some of the strategy updates.

As for the assertion error, perhaps try adjusting rtol and see if that makes a difference, although I'm concerned that the tolerance is already high. This failure is surprising, especially given that the ground-truth reference is also torch.

simonetgordon (Contributor):

Hi there, thanks for the contribution, and thanks for letting me know about this. Please see my comments regarding the test you've written.


# matmul
@handle_cmd_line_args
@given(
dtype_tensor1_tensor2=_get_dtype_and_multiplicative_matrices().example(),
simonetgordon (Contributor):

.example() should not be called in the data generation. Perhaps you kept this in whilst testing?

hmahmood24 (Contributor, Author):

Hi @simonetgordon, yeah, my bad. I forgot to remove .example() from the test before committing the code. I have removed it, dropped rtol, and fixed the tests; as of the latest commit they run fine on my machine against all backends. Looking forward to your feedback.

hmahmood24 (Contributor, Author):

Hold up.

Someone refactored the entire testing setup a few hours ago and made major changes to the frontend tests. I pulled the latest changes to resolve conflicts with my commits, but when I try to run my tests, hypothesis fails with all sorts of strange errors. It can't even collect the tests; running them is another story altogether. I think I will wait until these changes are stabilized and the errors on the main branch are fixed before running tests specific to my function.

@simonetgordon, can you please have someone look into the ivy_tests/test_ivy/test_frontends/test_torch/test_tensor.py file to fix the errors caused by the recent refactoring? Thanks!

Contributor:

Hi @hmahmood24, yes I am aware of the recent frontend decorator changes, apologies for the inconvenience! I will let you know when they've been fixed.

Since you said the tests worked fine for you locally, hopefully we only need to wait for the decorator issues to be resolved to get this merged. If it takes too long, I will merge this PR within a reasonable amount of time.



# matmul
@handle_cmd_line_args
Contributor:

We now have four new decorators replacing handle_cmd_line_args:

  1. handle_function: used for the Ivy functional API
  2. handle_method: used for Ivy methods
  3. handle_frontend_function: used for frontend functions
  4. handle_frontend_method: used for frontend methods (method tests are temporarily skipped)

The last decorator is the one you'll need to replace handle_cmd_line_args with. Unfortunately, it is the only one currently not functioning correctly, but it will be fixed soon!

Contributor:

@hmahmood24 perhaps you didn't see this update?

hmahmood24 (Contributor, Author):

@simonetgordon,

Apologies, I wasn't able to reply earlier. I have seen the updates and refactored my tests accordingly, but to no avail. The test_tensor.py file still has so many bugs that the tests won't even collect. At this point I'm not sure what my best course of action is, apart from waiting it out.

Have you asked anyone on the team to look into these bugs yet?

Contributor:

@hmahmood24 Hi there, yes, someone is working on these, and some fixes should already have been pushed.

Also, I don't see the new decorator here in the PR version. Perhaps you forgot to push?

hmahmood24 (Contributor, Author):

@simonetgordon Hey, yes, I hadn't pushed my changes since the tests weren't collecting locally. I have now made the changes and pushed them to stay consistent with all the other tests.

hmahmood24 (Contributor, Author):

Hi @simonetgordon,

Any updates? It's been almost two weeks since this PR was opened. Is there anything needed from my end?




@simonetgordon simonetgordon merged commit dd47af9 into ivy-llc:master Nov 25, 2022
@hmahmood24 hmahmood24 deleted the hmahmood-6557-matmul branch February 9, 2023 19:51
Labels: PyTorch Frontend
Merging this pull request may close issue: matmul
3 participants