tiandiao123/libflash_attn

Standalone Flash Attention v2 kernel without libtorch dependency
The Flash Attention v2 kernel has been extracted from the original repository into this one to make it easier to integrate into third-party projects. In particular, the dependency on libtorch has been removed.

As a consequence, dropout is not supported (since the original code relies on randomness provided by libtorch). Also, only the forward pass is supported for now.
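
For reference, the forward pass computes standard scaled dot-product attention, O = softmax(Q K^T / sqrt(d)) V. The plain C++ sketch below is a minimal single-head CPU reference of that computation; it illustrates the math only and is not this repo's API (function and variable names are ours, not the library's).

#include <algorithm>
#include <cmath>
#include <limits>
#include <vector>

// CPU reference for the attention forward pass, single head:
// O = softmax(Q K^T * scale) V, with q, k, v, o stored row-major
// as [seqlen x head_dim]. Illustration only; not this repo's API.
void attention_forward_ref(const std::vector<float>& q,
                           const std::vector<float>& k,
                           const std::vector<float>& v,
                           std::vector<float>& o,
                           int seqlen, int head_dim) {
    const float scale = 1.0f / std::sqrt(static_cast<float>(head_dim));
    std::vector<float> scores(seqlen);
    for (int i = 0; i < seqlen; ++i) {
        // scores[j] = scale * dot(q_i, k_j); track the row max for stability.
        float max_s = -std::numeric_limits<float>::infinity();
        for (int j = 0; j < seqlen; ++j) {
            float s = 0.0f;
            for (int d = 0; d < head_dim; ++d)
                s += q[i * head_dim + d] * k[j * head_dim + d];
            scores[j] = s * scale;
            max_s = std::max(max_s, scores[j]);
        }
        // Softmax over the row (subtract the max before exponentiating).
        float denom = 0.0f;
        for (int j = 0; j < seqlen; ++j) {
            scores[j] = std::exp(scores[j] - max_s);
            denom += scores[j];
        }
        // o_i = sum_j softmax(scores)_j * v_j.
        for (int d = 0; d < head_dim; ++d) {
            float acc = 0.0f;
            for (int j = 0; j < seqlen; ++j)
                acc += scores[j] * v[j * head_dim + d];
            o[i * head_dim + d] = acc / denom;
        }
    }
}

The actual kernel computes the same result without ever materializing the full seqlen x seqlen score matrix, which is what makes Flash Attention memory-efficient.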

Build with

mkdir build && cd build
cmake ..
make

It seems there are compilation issues if g++-9 is used as the host compiler. We confirmed that g++-11 works without issues.
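
If CMake picks up g++-9 by default, the host compiler can be overridden explicitly via the standard CMake variables (assuming g++-11 is installed and on the PATH):

cmake -DCMAKE_CXX_COMPILER=g++-11 -DCMAKE_CUDA_HOST_COMPILER=g++-11 ..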
