AI & ML interests

The AI community building the future.

Recent Activity

merve  published a Space about 14 hours ago
huggingface/2025-wrapped
merve  updated a Space about 15 hours ago
huggingface/2025-wrapped
nielsr  updated a Space about 16 hours ago
huggingface/ai-deadlines

Articles

angt 
posted an update 4 days ago
installama.sh at the TigerBeetle 1000x World Tour!

Last week I had the chance to give a short talk during the TigerBeetle 1000x World Tour (organized by @jedisct1 👏), a fantastic event celebrating high-performance engineering and the people who love pushing systems to their limits!

In the talk, I focused on the CPU and Linux side of things, with a simple goal in mind: making the installation of llama.cpp instant, automatic, and optimal, no matter your OS or hardware setup.

For the curious, here are the links worth checking out:
Event page: https://tigerbeetle.com/event/1000x
GitHub repo: https://github.com/angt/installama.sh
Talk: https://youtu.be/pg5NOeJZf0o?si=9Dkcfi2TqjnT_30e

More improvements are coming soon. Stay tuned!
angt 
posted an update 10 days ago
I'm excited to share that https://installama.sh is up and running! 🚀

On Linux / macOS / FreeBSD it is easier than ever:
curl https://installama.sh | sh
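For anyone unsure what the pipe form does: `curl` streams the script text straight into `sh`'s standard input, so it runs without ever being saved to disk. A minimal offline sketch of that mechanism, using an inline `printf` as a stand-in for the real download:

```shell
# Piping text into `sh` executes it from stdin, exactly as
# `curl https://installama.sh | sh` does, minus the network step.
printf 'echo hello from a streamed script\n' | sh
# prints: hello from a streamed script
```

If you prefer to review a script before running it, you can always download it first (e.g. with `curl -o installama.sh`), read it, and then run it with `sh installama.sh`.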


And Windows just joined the party 🥳
irm https://installama.sh | iex

Stay tuned for new backends on Windows!
angt 
posted an update 15 days ago
🚀 installama.sh update: Vulkan & FreeBSD support added!

The fastest way to install and run llama.cpp has just been updated!

We are expanding hardware and OS support to make local AI even more accessible. This includes:

🌋 Vulkan support for Linux on x86_64 and aarch64.
😈 FreeBSD support (CPU backend) on x86_64 and aarch64 too.
✨ Lots of small optimizations and improvements under the hood.

Give it a try right now:
curl angt.github.io/installama.sh | MODEL=unsloth/Qwen3-4B-GGUF:Q4_0 sh
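A note on the `MODEL=...` prefix: because it appears after the pipe, the variable is set on the `sh` process that consumes the streamed script, so the script can read it as `$MODEL`. A tiny offline sketch of that mechanism, with a placeholder script standing in for installama.sh itself:

```shell
# The env var is attached to `sh`, not to `curl`, so the piped
# script sees it. Placeholder one-line script, not installama.sh.
echo 'echo "model is $MODEL"' | MODEL=unsloth/Qwen3-4B-GGUF:Q4_0 sh
# prints: model is unsloth/Qwen3-4B-GGUF:Q4_0
```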
angt 
posted an update 24 days ago
One command line is all you need...

...to launch a local llama.cpp server on any Linux box or any Metal-powered Mac 🚀

curl angt.github.io/installama.sh | MODEL=unsloth/gpt-oss-20b-GGUF sh


Learn more: https://github.com/angt/installama.sh
cgeorgiaw 
posted an update 25 days ago
badaoui 
posted an update 27 days ago
Building high-performance, reproducible kernels for AMD ROCm just got a lot easier.

I've put together a guide on building, testing, and sharing ROCm-compatible kernels with the Hugging Face kernel-builder and kernels libraries, so you can focus on optimizing performance rather than on setup.

Learn how to:

- Use Nix for reproducible builds
- Integrate kernels as native PyTorch operators
- Share your kernels on the Hub for anyone to use with kernels.get_kernel()

We use the 🏆 award-winning RadeonFlow GEMM kernel as a practical example.

📜 Check out the full guide here: https://huggingface.co/blog/build-rocm-kernels
lunarflu 
posted an update about 1 month ago
💸🤑You don’t need 100 GPUs to train something amazing!

Our Smol Training Playbook teaches you a better path to world-class LLMs, for free!

Check out the #1 trending space on 🤗:
HuggingFaceTB/smol-training-playbook