Separated LTX2.3 checkpoints for an alternative way to load the models in Comfy
The fp8 quantizations were done with basic static weight scales and are set not to run with fp8 matmuls. The models marked input_scaled additionally have activation scaling and are set to run with fp8 matmuls on supported hardware.
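As a rough illustration of the difference between the two schemes, here is a minimal PyTorch sketch; the function names and the emulated matmul are illustrative, not the code used to produce these checkpoints:

```python
# Minimal sketch of static weight-only fp8 vs. input_scaled fp8.
# Assumes PyTorch >= 2.1 for the float8_e4m3fn dtype.
import torch

FP8_MAX = torch.finfo(torch.float8_e4m3fn).max  # 448.0 for e4m3fn

def quantize_static(w: torch.Tensor):
    # Basic static weight scale: map the tensor's max |value| onto the fp8 range.
    scale = w.abs().amax().float() / FP8_MAX
    w_fp8 = (w.float() / scale).to(torch.float8_e4m3fn)
    return w_fp8, scale

def linear_weight_only(x, w_fp8, w_scale):
    # Plain fp8 checkpoints: fp8 is only a storage format, so the weight is
    # dequantized and the matmul runs in the activation dtype.
    return x @ (w_fp8.to(x.dtype) * w_scale.to(x.dtype)).T

def linear_input_scaled(x, w_fp8, w_scale, x_scale):
    # input_scaled checkpoints: the activation is quantized with its own
    # scale too, so on supported hardware x_fp8 @ w_fp8.T can run as a
    # native fp8 matmul (e.g. via torch._scaled_mm). Emulated here in the
    # activation dtype so the sketch runs on any device.
    x_fp8 = (x.float() / x_scale).to(torch.float8_e4m3fn)
    y = x_fp8.to(x.dtype) @ w_fp8.to(x.dtype).T
    return y * (x_scale * w_scale).to(x.dtype)

# Example: quantize one linear layer's weight and run both paths.
w = torch.randn(64, 128)
x = torch.randn(4, 128)
w_fp8, w_scale = quantize_static(w)
x_scale = x.abs().amax().float() / FP8_MAX
print(linear_weight_only(x, w_fp8, w_scale).shape)          # torch.Size([4, 64])
print(linear_input_scaled(x, w_fp8, w_scale, x_scale).shape)
```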
Tiny VAE by madebyollin
Currently, the files can be used like any other separated checkpoint in Comfy: place each file in the matching ComfyUI models subfolder (diffusion_models, text_encoders, vae) and load it with the corresponding loader node.
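For scripting outside of Comfy, the separated files can also be opened directly with safetensors to inspect tensor dtypes and scales; the file name below is a placeholder, not an exact name from this repo:

```python
# Sketch: inspect a separated checkpoint's tensors with safetensors.
from safetensors import safe_open

path = "ltx2_diffusion_model_fp8_input_scaled.safetensors"  # placeholder name
with safe_open(path, framework="pt", device="cpu") as f:
    for name in f.keys():
        t = f.get_tensor(name)
        print(f"{name}: {t.dtype}, shape={tuple(t.shape)}")
```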