Fusion Training Question #690

Answered by calpt
paen27 asked this question in Q&A

Hey @paen27,

there are multiple options for how you can use AdapterFusion for transfer learning:

  • similar to what you described, you could train both a task adapter and the fusion layer on the same downstream task data (see the sketch after this list). While this often results in the adapter for that dataset being activated predominantly in fusion, it is not necessarily the case: low-resource tasks in particular benefit from the knowledge of adapters trained on different, higher-resource datasets. You can see examples of this in Fig. 4 of the AdapterFusion paper, where the adapters for QQP and MNLI are activated across various downstream tasks.
  • alternatively, it's perfectly fine to transfer knowledge to a new task for which no …
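
To make the first option concrete, here is a minimal sketch of the two-stage recipe using the adapters library API (in older adapter-transformers versions the same classes are imported from transformers and transformers.adapters.composition instead). The Hub adapter identifiers, the task name "cb", and the label count are placeholders for illustration, not taken from the original answer:

```python
from adapters import AutoAdapterModel
from adapters.composition import Fuse

model = AutoAdapterModel.from_pretrained("bert-base-uncased")

# Load pre-trained adapters from the Hub (identifiers are illustrative)
model.load_adapter("nli/multinli@ukp", load_as="multinli", with_head=False)
model.load_adapter("sts/qqp@ukp", load_as="qqp", with_head=False)

# Stage 1: add a new adapter + head for the target task and train it
# on the downstream data (train_adapter freezes all other weights)
model.add_adapter("cb")
model.add_classification_head("cb", num_labels=3)
model.train_adapter("cb")
# ... run a normal training loop / AdapterTrainer here ...

# Stage 2: fuse all adapters and train only the fusion layer,
# again on the same downstream task data
fusion = Fuse("multinli", "qqp", "cb")
model.add_adapter_fusion(fusion)
model.set_active_adapters(fusion)
model.train_adapter_fusion(fusion)
# ... second training run; only the fusion weights are updated ...
```

In this setup the fusion layer can attend to the MNLI and QQP adapters as well as the newly trained task adapter, which is where the cross-task transfer described above comes from.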
