You've noticed that I did something "WEIRD" and made it look like all my old content had been "SCRAPED".
I'm largely retiring from GEN AI.
Calypso Crunchies is an old account I used to use for diffusers conversions for someone.
IF YOU WOULD LIKE ACCESS to ANYTHING -- I lost access because I forgot to jank Calypso into the old E&D repo, but I can get Angel or someone to add me or my other account back.
I didn't want HF to lose three years of my insane progress, but I need to retire from generative image AI fast; my mental health has been diving for too long.
I'll continue in the developing/vibe coding/educational sphere, but I just can't continue on the other end of it. Much love, and thank you all.
I created a benchmark to evaluate the quality of Russian language output in LLMs. Details:
- A set of 100 (default)/250/500 questions covering general chat and creative writing domains.
- LLM-as-a-Judge, but with clear criteria for marking answers.
- Focuses on errors typical of LLMs in Russian, such as mixed grammatical genders, characters from other alphabets, and made-up words.
- Everything is under an open license!
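One of the listed criteria, characters from other alphabets, can even be checked mechanically before the judge model is involved. A minimal sketch of such a check (the function name and regex are my own illustration, not the benchmark's actual code):

```python
import re

# Hypothetical pre-filter mirroring one judge criterion: Latin letters
# embedded inside otherwise-Cyrillic words (e.g. "Привeт" with a Latin "e").
MIXED_WORD = re.compile(r"\b\w*(?:[а-яё][a-z]|[a-z][а-яё])\w*\b", re.IGNORECASE)

def mixed_alphabet_words(answer: str) -> list[str]:
    """Return words that mix Cyrillic and Latin characters."""
    return [m.group(0) for m in MIXED_WORD.finditer(answer)]

flagged = mixed_alphabet_words("Привeт, как дела?")  # the "e" here is Latin
clean = mixed_alphabet_words("Привет и Gemini")      # no mixed words
```

Pure-Latin tokens like model names pass untouched; only words that interleave the two alphabets are flagged, which is exactly the kind of error a human reader barely notices but an LLM produces often.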
Analysis of results:
- The best models are still closed-source ones, such as Sonnet 4.5, Gemini, and GPT-4o. However, some open models come very close.
- GPT-5 is terrible; I thought it would be better.
- Among open models, Gemma-3-27b-it and Vistral-24B are unrivaled.
- Ruadapt significantly reduces errors compared to Qwen.
- Qwen3 and GPT-oss are very bad, even worse than I expected.
- Qwen3-Next is better than Qwen3. It seems they added Russian to the training dataset.
- DeepSeek V3 has few errors, but V3.2-Exp is almost twice as bad.
The xLLMs project is a growing suite of multilingual and multimodal dialogue datasets designed to train and evaluate advanced conversational LLMs. Each dataset focuses on a specific capability — from long-context reasoning and factual grounding to STEM explanations, math Q&A, and polite multilingual interaction.
💬 Highlight: xLLMs – Dialogue Pubs
A large-scale multilingual dataset built from document-guided synthetic dialogues (Wikipedia, WikiHow, and technical sources). It's ideal for training models on long-context reasoning, multi-turn coherence, and tool-augmented dialogue across 9 languages.
👉 lamhieu/xllms_dialogue_pubs
🧠 Designed for:
- Long-context and reasoning models
- Multilingual assistants
- Tool-calling and structured response learning
All datasets are open for research and development use — free, transparent, and carefully curated to improve dialogue model quality.
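To give a feel for how a multi-turn dialogue record might be consumed for training, here is a small sketch. The field names (`turns`, `role`, `text`) are illustrative assumptions, not the actual xLLMs schema; check the dataset card for the real layout.

```python
# Hypothetical record layout: flatten a multi-turn dialogue into
# OpenAI-style chat messages for supervised fine-tuning.
def to_chat_messages(record: dict) -> list[dict]:
    """Convert one dialogue record into a list of chat messages."""
    return [{"role": t["role"], "content": t["text"]} for t in record["turns"]]

example = {
    "turns": [
        {"role": "user", "text": "What is a glacier?"},
        {"role": "assistant", "text": "A glacier is a slowly moving mass of ice."},
    ]
}
messages = to_chat_messages(example)
```

The same flattening step works for any number of turns, which is what makes document-guided multi-turn data convenient for long-context training pipelines.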
✨ We are happy to share with you our new universal LLM models based on Qwen3 1.7B and 4B — powerful, multilingual, and ready to tackle a wide range of tasks!
🛠️ We have conducted additional training and carefully merged them to achieve even better results and maximize the potential of the models.
🆓 And most importantly — the models are completely open and free under the Apache-2.0 license!
Supercharge Apple’s Shortcuts using Cloudflare Workers and Gemini within minutes (and for free, up to 1,500 requests per day) ☁️✨
Hello everyone! Last week, while experimenting for fun, I created an API that lets you easily access AI models (in this case, Google's) from the Shortcuts app, so you can analyze data from your apps and make the most of the generative capabilities of advanced models.
It costs me nothing, and I think it might be good to share it so that others can build on it.
In README.md, you'll find everything you need to get started and put your own microservice into production, which you can then call from the app's HTTP request actions.
All you'll need is a free Cloudflare account and an API key obtained from Google's AI Studio.
Feel free to take a look and get back to me if you encounter any problems during deployment.
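For reference, this is roughly the shape of the HTTP request a Shortcut would send to the deployed Worker. The endpoint path and payload fields below are illustrative assumptions; the project's README defines the real contract.

```python
import json
from urllib import request

# Hypothetical Worker URL; substitute your own deployment.
WORKER_URL = "https://your-worker.your-subdomain.workers.dev/generate"

def build_request(prompt: str) -> request.Request:
    """Build the POST request a Shortcut's HTTP action would send."""
    body = json.dumps({"prompt": prompt}).encode("utf-8")
    return request.Request(
        WORKER_URL,
        data=body,
        headers={"Content-Type": "application/json"},
        method="POST",
    )

req = build_request("Summarize today's workout log.")
# request.urlopen(req) would return the model's response once the Worker is live.
```

In Shortcuts itself, the equivalent is a "Get Contents of URL" action set to POST with a JSON body; the snippet above is just the same call expressed in code for testing outside the app.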
Although more and more code editors are adopting the AGENTS.md file standard, some still use their own file names and formats, which makes it difficult to maintain separate configuration files when several people work on the same project with different agents.
Bodyboard addresses this by generating canonical instructions for code helpers from a single AGENTS.md file, thereby streamlining the production of adapter outputs for Gemini CLI, Copilot, Cline, Claude, Rules, Windsurf, and OpenAI Codex integrations.
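The core idea can be illustrated in a few lines (this is not Bodyboard's actual code, just a sketch of the single-source fan-out; the target file names are common conventions, not a complete or authoritative list):

```python
from pathlib import Path
import tempfile

# Illustrative targets: file names some agents conventionally look for.
TARGETS = ["CLAUDE.md", "GEMINI.md", ".github/copilot-instructions.md"]

def fan_out(root: Path) -> list[Path]:
    """Copy the canonical AGENTS.md to each agent-specific location."""
    canonical = (root / "AGENTS.md").read_text(encoding="utf-8")
    written = []
    for name in TARGETS:
        target = root / name
        target.parent.mkdir(parents=True, exist_ok=True)
        target.write_text(canonical, encoding="utf-8")
        written.append(target)
    return written

# Demo in a throwaway directory.
with tempfile.TemporaryDirectory() as tmp:
    root = Path(tmp)
    (root / "AGENTS.md").write_text("Use 4-space indents.", encoding="utf-8")
    paths = fan_out(root)
    contents = [p.read_text(encoding="utf-8") for p in paths]
```

A real tool would also adapt formatting per agent rather than copy verbatim, but the one-file-of-truth workflow is the point: edit AGENTS.md once, regenerate everything else.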