frontends.torch.Tensor: add matmul instance method #6935

Merged: 6 commits, Nov 25, 2022
ivy/functional/frontends/torch/tensor.py (3 additions, 0 deletions)

@@ -288,6 +288,9 @@ def pow_(self, other):
         self.data = self.pow(other)
         return self.data

+    def matmul(self, tensor2):
+        return torch_frontend.matmul(self.data, tensor2)
+
     # Special Methods #
     # -------------------#
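The pattern the diff adds is pure delegation: the instance method forwards self.data and the second operand to the functional frontend. A minimal standalone sketch of that pattern, using plain NumPy stand-ins rather than Ivy's actual classes:

```python
import numpy as np

def frontend_matmul(input, other):
    # hypothetical stand-in for torch_frontend.matmul
    return np.matmul(input, other)

class Tensor:
    """Toy analogue of the frontend Tensor wrapper."""

    def __init__(self, data):
        self.data = data

    def matmul(self, tensor2):
        # mirrors the added instance method: delegate to the functional API
        return frontend_matmul(self.data, tensor2)

t = Tensor(np.ones((2, 3)))
out = t.matmul(np.ones((3, 2)))
print(out.shape)  # (2, 2)
```

Keeping the instance method a one-liner means all the behavior (broadcasting, dtype handling) lives in one place: the functional matmul.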
ivy_tests/test_ivy/test_frontends/test_torch/test_tensor.py (45 additions, 0 deletions)

@@ -6,6 +6,11 @@

 # local
 import ivy_tests.test_ivy.helpers as helpers
 from ivy_tests.test_ivy.helpers import handle_cmd_line_args
+from ivy_tests.test_ivy.test_frontends.test_torch.test_blas_and_lapack_ops import (
+    _get_dtype_and_3dbatch_matrices,
+    _get_dtype_input_and_matrices,
+)
+from ivy_tests.test_ivy.helpers import handle_frontend_method
@@ -2558,3 +2563,43 @@ def test_torch_instance_pow_(dtype_and_x, as_variable, native_array):
         class_name="tensor",
         method_name="pow_",
     )


@st.composite
def _get_dtype_and_multiplicative_matrices(draw):
    return draw(
        st.one_of(
            _get_dtype_input_and_matrices(),
            _get_dtype_and_3dbatch_matrices(),
        )
    )
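Drawing from both strategies makes sense because a torch-style matmul handles plain 2-D operands and batched 3-D operands differently, so the combined strategy exercises both code paths. NumPy's matmul follows the same batching rule and illustrates the two shapes (an illustrative aside, not part of the PR):

```python
import numpy as np

# ordinary 2-D matrix product
a2, b2 = np.ones((2, 3)), np.ones((3, 4))
print(np.matmul(a2, b2).shape)  # (2, 4)

# 3-D operands are treated as a stack of matrices: one product per batch entry
a3, b3 = np.ones((5, 2, 3)), np.ones((5, 3, 4))
print(np.matmul(a3, b3).shape)  # (5, 2, 4)
```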


# matmul
@handle_cmd_line_args
Contributor:
We now have 4 new decorators instead of handle_cmd_line_args:

  1. handle_function: used for Ivy functional API tests
  2. handle_method: used for Ivy method tests
  3. handle_frontend_function: used for frontend function tests
  4. handle_frontend_method: used for frontend method tests (method tests are temporarily skipped)

The last decorator is the one you'll need to replace handle_cmd_line_args with. Unfortunately, it is the only one currently not functioning correctly, but it will be rectified soon!

Contributor:

@hmahmood24 perhaps you didn't see this update?

Contributor (Author):

@simonetgordon,

Apologies, I wasn't able to reply earlier. I have seen the updates and have refactored my tests in line with them, but to no avail: test_tensor.py still has so many bugs that they prevent the tests from being collected. At this point, I'm not sure what the best course of action is for me apart from just waiting this out.

Have you asked anyone from the team to look into these bugs yet?

Contributor:

@hmahmood24 Hi there, yes, someone is tackling these, and some fixes should already have been pushed.

Also, I don't see the new decorator here in the PR version. Perhaps you forgot to push?

Contributor (Author):

@simonetgordon Hey, yes, I hadn't pushed my changes since the tests weren't being collected locally. I have now made the changes and pushed them to stay consistent with all the other tests.

Contributor (Author):

Hi @simonetgordon,

Any updates? It's been almost 2 weeks since this PR was opened. Is there anything needed from my end here?

@given(
    dtype_tensor1_tensor2=_get_dtype_and_multiplicative_matrices(),
)
def test_torch_instance_matmul(
    dtype_tensor1_tensor2,
    as_variable,
Contributor:

where are as_variable and native_array defined?

Contributor:

The test_torch_instance_matmul error stack trace reads:

_ test_torch_instance_matmul[cpu-ivy.functional.backends.tensorflow-False-False-tensorflow] _

get_command_line_flags = {'as_variable': None, 'container': None, 'instance_method': None, 'native_array': None, ...}
device = 'cpu'
f = <module 'ivy.functional.backends.tensorflow' from '/ivy/ivy/functional/backends/tensorflow/__init__.py'>
fw = 'tensorflow', fixt_frontend_str = 'torch', args = (), kwargs = {}

    @given(data=st.data())
>   @settings(max_examples=1)

ivy_tests/test_ivy/helpers/testing_helpers.py:184: 
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
ivy_tests/test_ivy/helpers/testing_helpers.py:226: in new_fn
    return test_fn(*args, **kwargs)
ivy_tests/test_ivy/test_frontends/test_torch/test_tensor.py:2582: in test_torch_instance_matmul
    @given(
ivy_tests/test_ivy/test_frontends/test_torch/test_tensor.py:2591: in test_torch_instance_matmul
    helpers.test_frontend_method(
ivy_tests/test_ivy/helpers/function_testing.py:1429: in test_frontend_method
    value_test(
ivy_tests/test_ivy/helpers/assertions.py:122: in value_test
    assert_all_close(
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 

ret_np = array([[[12.   , 12.   , 12.   ],
        [12.   , 12.   , 12.   ]],

       [[12.   , 12.   , 12.   ],
        [12.   , 12.   , 12.   ]],

       [[24.125, 12.   , 12.   ],
        [38.   , 18.125, 18.125]]])
ret_from_gt_np = array([[[12.  , 12.  , 12.  ],
        [12.  , 12.  , 12.  ]],

       [[12.  , 12.  , 12.  ],
        [12.  , 12.  , 12.  ]],

       [[24.25, 12.  , 12.  ],
        [38.5 , 18.25, 18.25]]])
rtol = 0.01, atol = 1e-06, ground_truth_backend = 'torch'

Contributor (Author):

The @handle_cmd_line_args decorator takes care of these strategies at runtime, so there's no need to define them explicitly in the test. I have never had to define them previously when contributing to the codebase. Additionally, you can read point no. 3 here. One of the maintainers of this repo actually advised me against defining strategies for as_variable and native_array and to let @handle_cmd_line_args take care of that for us. I don't think anyone else has defined them explicitly either.

Contributor:

Hi @hmahmood24, indeed you're correct, my apologies, I haven't been working on the frontends so I'm a little behind with some of the strategies updates.

As for the assertion error, perhaps try adjusting the rtol and see if that makes a difference, although I'm concerned that the tolerance is already high. This failure is surprising, especially given that the ground-truth reference is torch as well.
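Plugging the mismatched elements from the stack trace into the same closeness check shows why rtol=0.01 fails: the pair (38.0, 38.5) violates |a - b| <= atol + rtol * |b|, since 0.5 > 0.01 * 38.5 = 0.385. A quick NumPy check with the values taken from the trace above:

```python
import numpy as np

ret = np.array([24.125, 38.0, 18.125])  # tensorflow backend result
gt = np.array([24.25, 38.5, 18.25])     # torch ground-truth result

print(np.allclose(ret, gt, rtol=0.01, atol=1e-06))  # False: 0.5 > 0.385
print(np.allclose(ret, gt, rtol=0.02, atol=1e-06))  # True: 0.5 <= 0.77
```

A relative gap of roughly 1.3% on such small matrices may be worth investigating beyond tolerance tuning, for example checking whether a low-precision dtype was drawn for one backend but not the other.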

    native_array,
):
    dtype, tensor1, tensor2 = dtype_tensor1_tensor2
    helpers.test_frontend_method(
        input_dtypes_init=dtype,
        as_variable_flags_init=as_variable,
        num_positional_args_init=1,
        native_array_flags_init=native_array,
        all_as_kwargs_np_init={
            "data": tensor1,
        },
        input_dtypes_method=dtype,
        as_variable_flags_method=as_variable,
        num_positional_args_method=1,
        native_array_flags_method=native_array,
        all_as_kwargs_np_method={"tensor2": tensor2},
        frontend="torch",
        class_name="tensor",
        method_name="matmul",
    )