Quantizing unaligned and unbiased LLMs for fast, efficient batched inference! This org hosts models that we do not maintain ourselves but are eager to make available for everyone's benefit!