
Make MultiHeadAttention work with all attention operators #382

Closed
WenjieDu opened this issue May 5, 2024 · 0 comments
Labels: enhancement (New feature or request), new feature (Proposing to add a new feature)

Comments

WenjieDu (Owner) commented May 5, 2024

1. Feature description

Since #333, the self-attention operator in our Transformer models has been replaceable. However, pypots contains other attention operators as well, e.g. ProbAttention from Informer, and not all of them currently work with MultiHeadAttention. So we need a refactor; a sketch of the intended design is below.

2. Motivation

This will increase code reusability and make the components in pypots composable, like Lego bricks.

3. Your contribution

I will open a PR to implement this.
