
Draft PR Adding mistral 0.1 #1131

Merged: 69 commits merged into EleutherAI:main on Feb 23, 2024

Conversation

@AIproj (Contributor) commented Jan 25, 2024

Here's the PR for the October addition of Mistral 7B v0.1 support in GPT-NeoX, referred to in issue #1050.

Among other things, this PR also adds support for sliding window attention in GPT-NeoX, both through FlashAttention2 and through Megatron.
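
For context, FlashAttention2 exposes sliding window attention through the window_size argument of flash_attn_func (added in flash-attn 2.3). Below is a minimal sketch, not the PR's actual code, of requesting a 4096-token causal window (the window size used by Mistral 7B v0.1), assuming bf16 CUDA tensors:

# Minimal sketch of sliding window attention via FlashAttention2.
# Requires flash-attn >= 2.3 and a CUDA device; fp16/bf16 inputs only.
import torch
from flash_attn import flash_attn_func

batch, seqlen, n_heads, head_dim = 1, 8192, 32, 128
window = 4096  # Mistral 7B v0.1 uses a 4096-token sliding window

q = torch.randn(batch, seqlen, n_heads, head_dim, dtype=torch.bfloat16, device="cuda")
k = torch.randn_like(q)
v = torch.randn_like(q)

# window_size=(left, right): each query attends to at most `left` preceding keys
# plus itself; with causal=True the right context is 0.
out = flash_attn_func(q, k, v, causal=True, window_size=(window - 1, 0))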

An example script is included showing how to convert a HuggingFace (HF) Mistral 7B v0.1 model into the corresponding GPT-NeoX checkpoints.

The items left to do since then are to:

  • Add support for PipelineEngine in the HF -> GPT-NeoX conversion script (currently, pp>0 is not supported).
  • Test training through HF to check that, on enwik8, the loss also starts around ~3 and quickly drops to 2.xx.
  • Run lm-eval on the GPT-NeoX and HF versions to check whether their performance matches.

Note: #1124 recently added support for conversion back to HF to enable such testing, and it is also concerned with supporting pp>0 in the conversion scripts.

@haileyschoelkopf haileyschoelkopf marked this pull request as ready for review February 22, 2024 20:57
@haileyschoelkopf (Contributor)

This is ready for review!

Something might be up with the self-hosted runner for tests? It seems not to have the proper packages installed, including pytest.

@Quentin-Anthony Quentin-Anthony merged commit f36aed7 into EleutherAI:main Feb 23, 2024
2 of 5 checks passed
@malteos commented Feb 29, 2024

Does this require flash attention >= 2.3? Sliding window attention is only available from that version (see https://github.com/Dao-AILab/flash-attention?tab=readme-ov-file#23-local-ie-sliding-window-attention).

With the current Docker image (flash-attn==2.2.1) I get the following error:

...
  File "/netscratch/mostendorff/experiments/gpt-neox/megatron/model/transformer.py", line 622, in flash_attention
    output = self.flash_qkv_fn(
TypeError: flash_attn_func() got an unexpected keyword argument 'window_size'

@haileyschoelkopf (Contributor) commented Feb 29, 2024

Ah yes, you're correct. I will add a check for this in #1162.
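
For reference, a minimal sketch of such a guard (the exact check added in #1162 may differ):

# Hypothetical version guard, shown for illustration only: refuse to use
# sliding window attention when the installed flash-attn predates the
# window_size argument, which was introduced in flash-attn 2.3.
from importlib.metadata import version
from packaging.version import Version

if Version(version("flash-attn")) < Version("2.3.0"):
    raise ValueError(
        "Sliding window attention via FlashAttention requires flash-attn >= 2.3; "
        "upgrade flash-attn or disable the sliding window."
    )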
