
[Fix] fix a bug in input transformation in BaseHead #1843

Merged: 6 commits into open-mmlab:dev-1.x on Dec 1, 2022

Conversation

@xinxinxinxu (Contributor) commented on Nov 29, 2022:

A small modification so that BaseHead handles different types of backbone output, following issue #1838.

Motivation

When I used MMPose 1.0.0rc to train a new backbone, the head raised a size-mismatch error that did not occur in earlier versions. The main issue is that BaseHead treats the backbone output as a tuple by default, while the backbone I used returns a plain tensor. The location of the reported error is ambiguous, so other users may spend time tracking down the problem, even though fixing it only requires a condition statement.

Modification

Add a condition statement to the private method of the BaseHead class (`._transform_inputs`):
[The PR description includes before/after screenshots of the original and modified `BaseHead._transform_inputs`.]
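
A minimal, self-contained sketch of the added condition, assuming a simplified standalone `transform_inputs` function with an illustrative `input_index` parameter; the actual change lives in `BaseHead._transform_inputs` (see the snippet quoted in the review thread below):

```python
from typing import Sequence, Union

from torch import Tensor


def transform_inputs(feats: Union[Tensor, Sequence[Tensor]],
                     input_index: int = -1) -> Tensor:
    """Transform multi-scale backbone features into the head input."""
    # The added condition: some backbones return a bare Tensor rather than
    # a tuple/list of multi-scale feature maps. In that case, skip the
    # tuple-based transform and pass the tensor through unchanged instead
    # of raising a size-mismatch error.
    if not isinstance(feats, Sequence):
        return feats
    # Existing behaviour for sequence inputs (simplified): select one
    # feature map; the real method also supports other transforms such as
    # resize-and-concat.
    return feats[input_index]
```

With this check in place, a backbone that returns a single `(N, C, H, W)` tensor reaches the head unchanged, while tuple or list outputs keep the existing transform behaviour.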

BC-breaking (Optional)

No

Use cases (Optional)

Checklist

Before PR:

  • I have read and followed the workflow indicated in the CONTRIBUTING.md to create this PR.
  • Pre-commit or linting tools indicated in CONTRIBUTING.md are used to fix the potential lint issues.
  • Bug fixes are covered by unit tests, the case that causes the bug should be added in the unit tests.
  • New functionalities are covered by complete unit tests. If not, please add more unit tests to ensure correctness.
  • The documentation has been modified accordingly, including docstring or example tutorials.

After PR:

  • CLA has been signed and all committers have signed the CLA in this PR.

@CLAassistant commented on Nov 29, 2022:

CLA assistant check
All committers have signed the CLA.

@Ben-Louis (Collaborator) commented:

Hi @xinxinxinxu, thanks for your contribution to MMPose! Could you please sign the CLA? It is a necessary step for merging this PR.

@xinxinxinxu (Contributor, Author) commented:

> Hi @xinxinxinxu, thanks for your contribution to MMPose! Could you please sign the CLA? It is a necessary step for merging this PR.

Okay, it's signed.

@codecov bot commented on Nov 30, 2022:

Codecov Report

Base: 79.58% // Head: 78.68% // Decreases project coverage by -0.89% ⚠️

Coverage data is based on head (149830d) compared to base (b76c69d).
Patch coverage: 50.00% of modified lines in pull request are covered.

❗ Current head 149830d differs from pull request most recent head 115a440. Consider uploading reports for the commit 115a440 to get more accurate results

Additional details and impacted files
@@             Coverage Diff             @@
##           dev-1.x    #1843      +/-   ##
===========================================
- Coverage    79.58%   78.68%   -0.90%     
===========================================
  Files          206      205       -1     
  Lines        12036    11861     -175     
  Branches      2035     1995      -40     
===========================================
- Hits          9579     9333     -246     
- Misses        2017     2118     +101     
+ Partials       440      410      -30     
Flag Coverage Δ
unittests 78.68% <50.00%> (-0.90%) ⬇️

Flags with carried forward coverage won't be shown.

Impacted Files Coverage Δ
mmpose/models/heads/base_head.py 75.43% <50.00%> (-7.62%) ⬇️
mmpose/models/necks/gap_neck.py 36.36% <0.00%> (-22.73%) ⬇️
mmpose/models/losses/heatmap_loss.py 32.00% <0.00%> (-21.72%) ⬇️
mmpose/models/pose_estimators/base.py 67.44% <0.00%> (-12.56%) ⬇️
mmpose/models/heads/heatmap_heads/simcc_head.py 72.97% <0.00%> (-8.11%) ⬇️
mmpose/models/pose_estimators/topdown.py 61.62% <0.00%> (-6.80%) ⬇️
mmpose/codecs/associative_embedding.py 88.34% <0.00%> (-4.16%) ⬇️
mmpose/apis/inference.py 72.72% <0.00%> (-2.79%) ⬇️
mmpose/apis/webcam/utils/misc.py 75.00% <0.00%> (-0.98%) ⬇️
... and 19 more


☔ View full report at Codecov.

@Ben-Louis requested a review from @ly015 on Nov 30, 2022.
"""Transform multi scale features into the network input."""
if not isinstance(feats, Sequence):
return feats
@ly015 (Member) commented on Dec 1, 2022:

I think it's better to warn the user that inputting a bare tensor doesn't meet our convention and the input_transform argument will be ignored.

A collaborator replied:

OK, the warning has been added
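
A hedged sketch of how such a warning might look, assuming the same simplified standalone function as in the Modification section above; the exact message and mechanism used in the merged commit may differ:

```python
import warnings
from typing import Sequence, Union

from torch import Tensor


def transform_inputs(feats: Union[Tensor, Sequence[Tensor]]) -> Tensor:
    """Transform multi-scale backbone features into the head input."""
    if not isinstance(feats, Sequence):
        # Per the review suggestion: a bare tensor does not follow the
        # multi-scale convention, so any configured input transform is
        # ignored and the user is warned about it.
        warnings.warn('The head received a single Tensor instead of a '
                      'sequence of multi-scale features; the input '
                      'transform setting will be ignored.')
        return feats
    # Simplified sequence handling: select the last feature map.
    return feats[-1]
```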

@ly015 merged commit eaca6f2 into open-mmlab:dev-1.x on Dec 1, 2022.
ly015 pushed a commit to ly015/mmpose that referenced this pull request Feb 21, 2023
@ly015 changed the title from "Update base_head.py" to "[Fix] fix a bug in input transformation in BaseHead" on Mar 14, 2023.
shuheilocale pushed a commit to shuheilocale/mmpose that referenced this pull request May 6, 2023