
Issue about the channel attention. #22

Open
night3759 opened this issue Apr 25, 2023 · 1 comment

Comments

@night3759

Thank you for your great work. While reading the paper, I was confused by the "Attentional feature transformation" part: I don't understand why attention over the channel dimension can stand in for feature covariance, and I couldn't find the code corresponding to equation (6). I look forward to your reply. Thank you.

@zhoudaquan
Contributor

Hi, thanks for your interest in the work, and sorry for the late reply. Eqn (6) is not used because it was replaced by Eqn (7) in the final implementation. However, it is easy to reproduce by modifying the original SA module: swap the dimensions in the self-attention calculation from (N×C) × (C×N) to (C×N) × (N×C), so the attention map becomes C×C instead of N×N, and then matmul it with the value tensor reshaped to C×N.
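For reference, a minimal sketch of that channel-attention variant (my own illustration, not code from this repo; the class/attribute names, single-head setup, and the `C**-0.5` scaling are assumptions):

```python
import torch
import torch.nn as nn

class ChannelAttention(nn.Module):
    """Self-attention computed over the channel dimension instead of tokens."""
    def __init__(self, dim):
        super().__init__()
        self.qkv = nn.Linear(dim, dim * 3)   # joint Q, K, V projection
        self.proj = nn.Linear(dim, dim)
        self.scale = dim ** -0.5             # assumption: scale by channel dim

    def forward(self, x):
        # x: (B, N, C) with N tokens and C channels
        B, N, C = x.shape
        q, k, v = self.qkv(x).chunk(3, dim=-1)            # each (B, N, C)
        # spatial SA would compute (N, C) @ (C, N) -> (N, N);
        # here we swap to (C, N) @ (N, C) -> (C, C), a channel-covariance-like map
        attn = (q.transpose(-2, -1) @ k) * self.scale     # (B, C, C)
        attn = attn.softmax(dim=-1)
        # apply the C×C map to the (C, N)-shaped value, then transpose back
        out = (attn @ v.transpose(-2, -1)).transpose(-2, -1)  # (B, N, C)
        return self.proj(out)
```

The only change relative to standard self-attention is which pair of dimensions gets contracted, so the attention matrix is C×C rather than N×N.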

I hope this clarifies your questions. If you have any other questions, do drop me a message.
