How to train SAEs on my own model? #192

Closed
likangk opened this issue Jun 20, 2024 · 1 comment · Fixed by #210
Comments


likangk commented Jun 20, 2024

Hi,
Thank you for your excellent work on the SAE package. I am interested in training Sparse Autoencoders (SAEs) on my own model (an LLM for genomics), which is not included in TransformerLens. Could you please provide some guidance on how to get started with this? Any advice or suggestions would be greatly appreciated.

Thank you for your time and assistance.

Best regards,

@jbloomAus (Owner) commented

This currently isn't supported but could be supported soon. I think the key requirements are:

  • Turning a non-TransformerLens model into a HookedRootModule (a TransformerLens class); see the sketch below.
  • Enabling SAE training to work on a local model not imported from TransformerLens.
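
For the first requirement, here is a minimal sketch of what wrapping a custom model might look like, assuming the genomics LM is an ordinary PyTorch nn.Module. The GenomicsBlock and HookedGenomicsLM classes, the dimensions, and the hook names below are purely illustrative, not part of SAELens or the actual model:

```python
import torch
import torch.nn as nn
from transformer_lens.hook_points import HookedRootModule, HookPoint


class GenomicsBlock(nn.Module):
    """One block of a hypothetical genomics LM, with a HookPoint on its
    residual-stream output so activations can be read there."""

    def __init__(self, d_model: int):
        super().__init__()
        self.attn = nn.MultiheadAttention(d_model, num_heads=8, batch_first=True)
        self.mlp = nn.Sequential(
            nn.Linear(d_model, 4 * d_model), nn.GELU(), nn.Linear(4 * d_model, d_model)
        )
        self.hook_resid_post = HookPoint()  # identity module; hooks attach here

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        attn_out, _ = self.attn(x, x, x)
        x = x + attn_out
        x = x + self.mlp(x)
        return self.hook_resid_post(x)


class HookedGenomicsLM(HookedRootModule):
    """Wraps the toy model as a HookedRootModule so run_with_cache /
    run_with_hooks work the same way they do for TransformerLens models."""

    def __init__(self, vocab_size: int = 8, d_model: int = 128, n_layers: int = 4):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, d_model)
        self.blocks = nn.ModuleList(GenomicsBlock(d_model) for _ in range(n_layers))
        self.unembed = nn.Linear(d_model, vocab_size)
        # Registers every HookPoint under its module path,
        # e.g. "blocks.2.hook_resid_post".
        self.setup()

    def forward(self, tokens: torch.Tensor) -> torch.Tensor:
        x = self.embed(tokens)
        for block in self.blocks:
            x = block(x)
        return self.unembed(x)


model = HookedGenomicsLM()
tokens = torch.randint(0, 8, (2, 16))           # batch of 2 sequences, length 16
logits, cache = model.run_with_cache(tokens)
print(cache["blocks.2.hook_resid_post"].shape)  # torch.Size([2, 16, 128])
```

Once activations are exposed through named HookPoints like "blocks.2.hook_resid_post", the second requirement is mostly about letting the SAE training configuration accept such a local model object instead of a TransformerLens model name.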
