Load any ESM2 model into a FastEsm model to dramatically speed up training and …
| Backend | Key | Notes |
| :--- | :--- | :--- |
| PyTorch SDPA | `"sdpa"` | Default. Exact numerics, stable on all hardware. |
| Flash Attention | `"kernels_flash"` | Fastest. Requires `pip install kernels` (pre-built, no hours-long compilation). Outputs are not bitwise identical to SDPA due to online softmax reordering; differences are often small but not guaranteed to be inconsequential, so use `"sdpa"` if exact numerics matter. |
| Flex Attention | `"flex"` | Skips padding tokens via block mask, so it is faster on variable-length batches. Near-exact numerics. First use compiles a Triton kernel (30–120 s). |
| Auto | `"auto"` | Picks the best available: `kernels_flash` → `flex` → `sdpa`. |
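The `"auto"` fallback order from the table can be sketched as a simple availability check. This is an illustrative sketch only — the function and helper names below are hypothetical and not part of the library's API; the real resolution logic may differ.

```python
import importlib.util

# Preference order from the table: kernels_flash -> flex -> sdpa.
BACKEND_PREFERENCE = ["kernels_flash", "flex", "sdpa"]


def _backend_available(key: str) -> bool:
    """Illustrative availability checks (hypothetical helper)."""
    if key == "kernels_flash":
        # The flash backend needs the `kernels` package installed.
        return importlib.util.find_spec("kernels") is not None
    if key == "flex":
        # Flex attention needs a PyTorch build that ships it.
        return importlib.util.find_spec("torch") is not None
    # "sdpa" is the always-available default.
    return True


def resolve_backend(requested: str = "auto") -> str:
    """Return the backend key to use for a requested setting."""
    if requested != "auto":
        return requested
    for key in BACKEND_PREFERENCE:
        if _backend_available(key):
            return key
    return "sdpa"
```

With neither `kernels` nor a flex-capable PyTorch installed, `resolve_backend("auto")` falls through to `"sdpa"`, matching the table's fallback chain.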