How to show pose label in result image? #16

Closed
blackCmd opened this issue Jul 19, 2020 · 5 comments
@blackCmd

blackCmd commented Jul 19, 2020

How can I show a pose label like this?

[image]

And if it is possible, can I train on poses rather than objects?
For example, MMDetection will show three "person" labels in the above image.
But I want to show three different labels instead, like "sitting", "hitting", and "standing".

blackCmd changed the title from "Can I show pose label?" to "How to show pose label in result image?" on Jul 19, 2020
@innerlee
Contributor

> And if it is possible, can I train on poses rather than objects?

Could you elaborate a little bit? Is "train" a typo?

@jin-s13
Collaborator

jin-s13 commented Jul 19, 2020

Hi. The COCO dataset does not provide action labels (such as standing, hitting, and sitting). You may need to train an action classifier that takes the human poses as input and outputs the action type.

You can also choose to use an action recognition dataset directly. As far as I know, some datasets already provide both human poses and action labels; see https://humaninevents.org/ for example.
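
For illustration, here is a minimal sketch of such a classifier in PyTorch. It is not code from MMPose; the 17-keypoint COCO input format and the three action classes are assumptions for the example.

```python
import torch
import torch.nn as nn

NUM_KEYPOINTS = 17                            # COCO keypoint format
ACTIONS = ["sitting", "hitting", "standing"]  # hypothetical action classes

class PoseActionClassifier(nn.Module):
    """Tiny MLP that maps one person's keypoints to an action label."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(NUM_KEYPOINTS * 2, 128),  # flattened (x, y) coordinates
            nn.ReLU(),
            nn.Linear(128, len(ACTIONS)),
        )

    def forward(self, keypoints):
        # keypoints: (batch, 17, 2) -> (batch, num_actions) logits
        return self.net(keypoints.flatten(1))

# Usage: feed the keypoints predicted by MMPose for one person.
model = PoseActionClassifier()
logits = model(torch.randn(1, NUM_KEYPOINTS, 2))  # dummy pose
print(ACTIONS[logits.argmax(dim=1).item()])
```

In practice you would train this on (pose, action) pairs from an action recognition dataset, as suggested above.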

@blackCmd
Author

blackCmd commented Jul 19, 2020

> Hi. The COCO dataset does not provide action labels (such as standing, hitting, and sitting). You may need to train an action classifier that takes the human poses as input and outputs the action type.
>
> You can also choose to use an action recognition dataset directly. As far as I know, some datasets already provide both human poses and action labels; see https://humaninevents.org/ for example.

Oh, I see.
Then I think MMPose could infer a human's action using the predicted keypoints rather than the bbox. Is that right?
If you know where I should edit the code to show an action label in the result image, please help me.

@innerlee
Contributor

AD:
for RGB-based action recognition: https://github.com/open-mmlab/mmaction2
for skeleton-based action recognition: https://github.com/open-mmlab/mmskeleton
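
For the visualization part of the question, here is a minimal sketch of drawing an action label above a person's bounding box with OpenCV. The image, bbox, and action values are hypothetical placeholders, independent of either repo.

```python
import cv2
import numpy as np

# Stand-in for the image already visualized by MMPose; in practice,
# load your pose-estimation result image instead.
img = np.zeros((480, 640, 3), dtype=np.uint8)

bbox = (100, 80, 220, 400)  # hypothetical (x1, y1, x2, y2) for one person
action = "sitting"          # hypothetical output of an action recognizer

x1, y1, _, _ = bbox
# Draw the action label just above the top-left corner of the box.
cv2.putText(img, action, (x1, max(y1 - 10, 20)),
            cv2.FONT_HERSHEY_SIMPLEX, 0.8, (0, 255, 0), 2)
cv2.imwrite("result_with_action.jpg", img)
```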

@blackCmd
Author

> AD:
> for RGB-based action recognition: https://github.com/open-mmlab/mmaction2
> for skeleton-based action recognition: https://github.com/open-mmlab/mmskeleton

Unbelievable!!
That is exactly what I want to do. Thank you :)

jin-s13 closed this as completed on Jul 20, 2020