Pull requests: Dao-AILab/flash-attention

Fix compilation with clang on ARM64
#1285 opened Oct 18, 2024 by sclarkson
[AMD] Triton Backend for ROCm
#1203 opened Sep 4, 2024 by micmelesse
flash_attn_varlen support tree attention
#1188 opened Aug 30, 2024 by efsotr
add softmax_d for mha_bwd
#1161 opened Aug 19, 2024 by MayDomine
Add how to import FA3 to documentation.
#1112 opened Jul 31, 2024 by AdamLouly
Windows actions
#1036 opened Jul 9, 2024 by bdashore3
change condition to num_heads >= num_heads_k
#1030 opened Jul 5, 2024 by xenshinu
Fix +/-inf in LSE returned by forward
#978 opened Jun 3, 2024 by sgrigory
add pyproject.toml with build dependencies
#958 opened May 17, 2024 by dhellmann
Relative position encoding
#956 opened May 14, 2024 by b-albar (1 of 4 tasks)
ALiBi for the non-flash code path
#858 opened Feb 29, 2024 by Markus28
Add support for small page sizes
#824 opened Feb 13, 2024 by skrider
Add C++ build support for use with LibTorch
#819 opened Feb 9, 2024 by shaltielshmid
meta tensor stuff
#769 opened Jan 15, 2024 by tsengalb99
Jetson (aarch64) support
#724 opened Dec 14, 2023 by jasl