# 'torch' must be installed separately first, using the command
# from the README.md to match your specific CUDA version.
torchmetrics
triton==3.2.0
numpy
pandas
matplotlib
flash-linear-attention @ git+https://github.com/fla-org/flash-linear-attention@main
scikit-learn
gluonts
notebook
datasets
ujson
pyyaml
wandb
build
pre-commit
ruff
mypy
commitizen
black
cupy-cuda12x
statsmodels
pyo  # Requires portaudio