- GLM-4.5: Agentic, Reasoning, and Coding (ARC) Foundation Models
  Paper • 2508.06471 • Published • 195
- NVIDIA Nemotron Nano 2: An Accurate and Efficient Hybrid Mamba-Transformer Reasoning Model
  Paper • 2508.14444 • Published • 39
- Gemini 2.5: Pushing the Frontier with Advanced Reasoning, Multimodality, Long Context, and Next Generation Agentic Capabilities
  Paper • 2507.06261 • Published • 64
- MiniMax-M1: Scaling Test-Time Compute Efficiently with Lightning Attention
  Paper • 2506.13585 • Published • 273
Collections including paper arxiv:2505.00949
- nvidia/Llama-3_3-Nemotron-Super-49B-v1_5
  Text Generation • 50B • Updated • 29.1k • 220
- nvidia/Llama-3_3-Nemotron-Super-49B-v1_5-FP8
  Text Generation • 50B • Updated • 1.1k • 23
- nvidia/Llama-3_1-Nemotron-Ultra-253B-v1
  Text Generation • 253B • Updated • 178k • 342
- nvidia/Llama-3_3-Nemotron-Super-49B-v1
  Text Generation • 50B • Updated • 7.98k • 320
- Human-like Episodic Memory for Infinite Context LLMs
  Paper • 2407.09450 • Published • 62
- MUSCLE: A Model Update Strategy for Compatible LLM Evolution
  Paper • 2407.09435 • Published • 23
- Refuse Whenever You Feel Unsafe: Improving Safety in LLMs via Decoupled Refusal Training
  Paper • 2407.09121 • Published • 6
- ChatQA 2: Bridging the Gap to Proprietary LLMs in Long Context and RAG Capabilities
  Paper • 2407.14482 • Published • 26
- RL + Transformer = A General-Purpose Problem Solver
  Paper • 2501.14176 • Published • 28
- Towards General-Purpose Model-Free Reinforcement Learning
  Paper • 2501.16142 • Published • 30
- SFT Memorizes, RL Generalizes: A Comparative Study of Foundation Model Post-training
  Paper • 2501.17161 • Published • 123
- MaxInfoRL: Boosting exploration in reinforcement learning through information gain maximization
  Paper • 2412.12098 • Published • 4