bird-of-paradise committed
Commit a473570 · verified · 1 Parent(s): 67cb979

initial commit

Files changed (4)
  1. README.md +133 -3
  2. distributed_muon.py +555 -0
  3. distributed_muon_cpu.ipynb +719 -0
  4. distributed_muon_cpu.py +552 -0
README.md CHANGED
@@ -1,3 +1,133 @@
1
- ---
2
- license: mit
3
- ---
1
+ ---
2
+ license: mit
3
+ language:
4
+ - en
5
+ tags:
6
+ - optimization
7
+ - muon
8
+ - deep-learning
9
+ - pytorch
10
+ - distributed-computing
11
+ - tutorial
12
+ - cpu-friendly
13
+ ---
14
+
15
+ # 🧩 The "Muon is Scalable" Blueprint: A Distributed Muon Tutorial (CPU-Friendly)
16
+
17
+ A practical, **annotated tutorial** for the Muon optimizer — extended into a fully distributed (DP × TP) version that actually runs on **plain CPU/Gloo**, so broke-but-curious builders can still get their hands dirty.
18
+
19
+ This is the expert-level systems engineering companion to my original [**Understanding the Muon Optimizer**](https://huggingface.co/datasets/bird-of-paradise/muon-tutorial) tutorial.
20
+
21
+ > 💡 _“Because sometimes the best way to learn the distributed nightmare is to get your hands dirty and your eyes crossed.”_ 🤪
22
+
23
+ ---
24
+
25
+ ## 🌕 Why This Exists
26
+
27
+ Most public Muon examples (like Moonshot’s PoC) are designed for multi-GPU NCCL clusters, which makes them impossible for most of us to run or debug. On top of that, most documentation for distributed systems is written by experts, for experts, making it a "nightmare" to learn. My goal is to change that.
28
+
29
+ This repository is **not** a "simplified" version that "flattens the depth" of the work.
30
+
31
+ Instead, it's a **didactic refactor**. I've taken the complex, real-world PoC and optimized it for *readability* and *learning*, so you can see the "blueprint" behind the "chaos":
32
+ - Fully **annotated** to **demonstrate** data parallel (ZeRO-1) + tensor parallel (TP) orchestration end-to-end.
33
+ - Understand every “distributed acrobatic” step (`DP gather` → `TP gather` → `Newton–Schulz` → `TP shard` → `DP shard`); a minimal sketch of this round trip follows the list.
34
+ - The code is optimized to highlight **symmetrical logic** and **consistent naming**, showing the "opposite arrow" data flow of the "virtual map" (`dist_meta`).
35
+ - It's built to run **on a single CPU machine or Colab notebook**.
36
+
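+ To make the flow in the bullets above concrete, here is a minimal, schematic sketch of that round trip. It is an illustration only: it ignores the bucketing/padding bookkeeping, assumes `zeropower_via_newtonschulz5` from `distributed_muon.py` is in scope, and uses a `dist_meta` object that mirrors the `MuonDistMeta` fields used there.
+
+ ```python
+ import torch
+ import torch.distributed as dist
+
+ def muon_update_roundtrip(local_shard, dp_group, tp_group, dist_meta, ns_steps=5):
+     # 1. DP gather (ZeRO-1): reassemble this rank's full TP slice from the DP shards.
+     dp_world = dist.get_world_size(dp_group)
+     flat = torch.empty(local_shard.numel() * dp_world, dtype=local_shard.dtype)
+     dist.all_gather_into_tensor(flat, local_shard, group=dp_group)
+     tp_slice = flat.view(dist_meta.shape)
+
+     # 2. TP gather: reassemble the full 2D matrix from the TP slices.
+     tp_world = dist.get_world_size(tp_group)
+     shards = [torch.empty_like(tp_slice) for _ in range(tp_world)]
+     dist.all_gather(shards, tp_slice, group=tp_group)
+     full_matrix = torch.cat(shards, dim=dist_meta.tp_split_dim)
+
+     # 3. Run the math on the full matrix (Newton-Schulz orthogonalization).
+     update = zeropower_via_newtonschulz5(full_matrix, steps=ns_steps)
+
+     # 4. TP shard: keep only this rank's TP chunk of the update.
+     update = update.chunk(tp_world, dim=dist_meta.tp_split_dim)[dist.get_rank(tp_group)]
+
+     # 5. DP shard (ZeRO-1): keep only the flat slice this rank owns.
+     start, end = dist_meta.local_range
+     offset = dist_meta.global_range[0]
+     return update.reshape(-1)[start - offset:end - offset]
+ ```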
37
+ ---
38
+
39
+ ## 🧠 The “Turtle Speed” Breakthrough: The Full Story
40
+
41
+ This code is complex. It's a "distributed nightmare" 🫠.
42
+
43
+ Instead of a traditional, long-form `README`, the best documentation is the "making of" story. I've chronicled my entire journey of reverse-engineering and debugging this code in my "Turtle Speed Breakthrough" series on Medium.
44
+
45
+ * **[Part 1: The “Turtle Speed” Breakthrough: Decoding Distributed Optimizers](https://medium.com/@jenwei0312/the-turtle-speed-breakthrough-decoding-distributed-optimizers-from-fsdp-to-muons-secret-sauce-64fc76f20cd7)**
46
+ * **[Part 2: My Map of the Distributed Nightmare (The Blueprint)](https://medium.com/@jenwei0312/the-turtle-speed-breakthrough-part-2-the-blueprint-for-distributed-chaos-37fe343e7aa9)**
47
+ * **[Part 3: The Final Bugs and "Aha!" Moments](https://medium.com/@jenwei0312/the-turtle-speed-breakthrough-part-3-my-map-of-the-distributed-nightmare-b10ff4affd56)**
48
+
49
+ This tutorial is the final, runnable code that resulted from that deep dive.
50
+
51
+ ---
52
+
53
+ ## 🚀 Quick Start
54
+
55
+ Run the CPU-safe, fully-annotated notebook right from your browser:
56
+
57
+ [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](<distributed_muon_cpu.ipynb>)
58
+
59
+ Or, you can clone this repo and run the Python script locally to simulate an 8-process cluster on your CPU:
60
+
61
+ ```bash
62
+ git clone https://huggingface.co/datasets/bird-of-paradise/muon_distributed
63
+ cd muon_distributed
64
+
65
+ # This will spawn 8 processes and run the full test
66
+ python distributed_muon_cpu.py
67
+ ```
68
+
69
+ (Note: the original, un-annotated, buggy Moonshot PoC that this work is based on can be found in this [commit](https://github.com/NVIDIA/Megatron-LM/pull/1428/commits/f432fbe45c169aeb5a0805ff6f41e13f989c6730#diff-8fe91f4096ff232fc6f97b17e60e619eda92b6dffc80b4573a23e06aa56d2559).)
70
+
71
+ -----
72
+
73
+ ## 🗂️ What's Inside (File Guide)
74
+
75
+ * `distributed_muon_cpu.ipynb`: **(Start Here)** The Colab-friendly notebook that walks through the environment fixes and runs the code.
76
+ * `distributed_muon_cpu.py`: The final, **tested, fixed, and heavily-annotated** Python script. This is the "golden" code that runs on a CPU-only environment using the `"gloo"` backend.
77
+ * `distributed_muon.py`: My **annotated and logically debugged** version of the *GPU* code. This is for users who have a multi-GPU `"nccl"` environment. (Note: Since I don't have a multi-GPU cluster, this version is untested... unless someone wants to sponsor me with some GPUs! 😉)
78
+
79
+ -----
80
+
81
+ ## 🎓 What You'll Learn (The "Nightmare" Blueprint)
82
+
83
+ By exploring this code, you'll see the *real* implementation of the concepts I discuss in my articles:
84
+
85
+ * **The 2D Grid:** How to set up orthogonal `dist_group` (DP) and `tp_group` (TP) handles (sketched right after this list).
86
+ * **The "Aisles" & "Pallets":** How `param_groups` (`buffer_idx`) and communication `buckets` (`bucket_idx`) are used to organize parameters.
87
+ * **The "Virtual Buffer":** How a "master plan" (`dist_meta` and `global_buffer_size`) is used to manage memory for sharding (ZeRO-1).
88
+ * **The Acrobatic Data Flow:** The full `(DP gather -> TP gather) -> (Run Math) -> (TP shard -> DP shard)` journey.
89
+ * **The Nuance:** You'll see *why* we bucket the slow DP `all_gather` but *don't* need to bucket the fast, on-node TP `all_gather`.
90
+
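+ For reference, here is roughly how the test harness in this repo builds those two orthogonal groups on an 8-process Gloo "cluster" (a sketch; the authoritative version is `test_muon_dist` in `distributed_muon_cpu.py`):
+
+ ```python
+ import torch.distributed as dist
+
+ def build_2d_groups(dp_size, tp_size):
+     """Build orthogonal DP ("dist") and TP process groups for a dp_size x tp_size grid."""
+     world_size = dist.get_world_size()
+     rank = dist.get_rank()
+     assert dp_size * tp_size == world_size
+
+     dist_group = tp_group = None
+     # DP groups: ranks strided by tp_size, e.g. {0,2,4,6} and {1,3,5,7} when tp_size=2.
+     for i in range(tp_size):
+         ranks = range(i, world_size, tp_size)
+         group = dist.new_group(ranks)  # every rank must call new_group for every group
+         if rank in ranks:
+             dist_group = group
+     # TP groups: contiguous blocks of tp_size ranks, e.g. {0,1}, {2,3}, ...
+     for i in range(dp_size):
+         ranks = range(i * tp_size, (i + 1) * tp_size)
+         group = dist.new_group(ranks)
+         if rank in ranks:
+             tp_group = group
+     return dist_group, tp_group
+ ```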
91
+ -----
92
+
93
+ ## 🐞 Summary of All Fixes
94
+
95
+ This repo isn't just a copy-paste. It's the result of a week-long debugging "nightmare." Here are all the bugs we had to find and fix to make it run (a condensed sketch of the environment-level fixes follows the table):
96
+
97
+ | Issue | Problem | Solution |
98
+ | :--- | :--- | :--- |
99
+ | **Logic Bug \#1** | Missing `params = group["params"]` | Added the line in the Muon update loop. |
100
+ | **Logic Bug \#2** | `ns_input` was 1D after unpacking, crashing `zeropower`.| Changed `.view(-1)` to `.view(dist_meta.shape)` to restore the 2D shape. |
101
+ | **Systems Bug** | Hardcoded `bfloat16` | Changed to `float32` (`first_param.dtype`) to work with the `"gloo"` (CPU) backend. |
102
+ | **Env Bug \#1** | Hardcoded `"nccl"` backend. | Changed `dist.init_process_group` to use `"gloo"`. |
103
+ | **Env Bug \#2** | Hardcoded `'cuda'` device. | Changed `gen_param_and_grads` to use `'cpu'`. |
104
+ | **Env Bug \#3** | `mp.spawn()` crashes in Jupyter/Colab. | The `.ipynb` runs the code as a `!python` subprocess, bypassing the notebook kernel. |
105
+
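+ Condensed, the environment-level fixes in the table amount to something like the following (a sketch of the CPU entry point; the port number and comments are illustrative, and the real version lives in `distributed_muon_cpu.py`):
+
+ ```python
+ import os
+ import torch.distributed as dist
+ import torch.multiprocessing as mp
+
+ def run_process(rank, world_size):
+     os.environ["MASTER_ADDR"] = "localhost"
+     os.environ["MASTER_PORT"] = "12355"   # any free local port
+     dist.init_process_group("gloo", rank=rank, world_size=world_size)  # was "nccl"
+     # ... build params/grads with device='cpu' (was 'cuda') and take the dtype from
+     # first_param.dtype instead of hardcoding bfloat16, then run the tests ...
+     dist.destroy_process_group()
+
+ if __name__ == "__main__":
+     # mp.spawn() does not play well inside Jupyter/Colab kernels, which is why the
+     # notebook writes the script to disk and launches it with `!python ...` instead.
+     mp.spawn(run_process, args=(8,), nprocs=8, join=True)
+ ```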
106
+ -----
107
+
108
+ ## 📖 Citation
109
+
110
+ If you use this tutorial in your work, please cite the original Muon paper and this tutorial.
111
+
112
+ ```bibtex
113
+ @misc{wei2025muondistributed,
114
+ author = {Wei, Jen},
115
+ title = {A CPU-Friendly Tutorial for Distributed Muon (DPxTP)},
116
+ year = {2025},
117
+ howpublished = {\url{https://huggingface.co/datasets/<your-hf-handle>/muon-distributed}}
118
+ }
119
+
120
+ @misc{jordan2024muon,
121
+ author = {Jordan, Keller and others},
122
+ title = {Muon: An optimizer for hidden layers in neural networks},
123
+ year = {2024},
124
+ url = {https://kellerjordan.github.io/posts/muon/}
125
+ }
126
+
127
+ @misc{liu2025muonscalable,
128
+ author = {Liu, Jingyuan and others},
129
+ title = {Muon is Scalable for LLM Training},
130
+ year = {2025},
131
+ url = {https://arxiv.org/abs/2502.16982}
132
+ }
133
+ ```
distributed_muon.py ADDED
@@ -0,0 +1,555 @@
1
+ # megatron/core/optimizer/muon.py
2
+ from typing import Tuple, Dict
3
+ import torch
4
+ import math
5
+ import torch.distributed as dist
6
+
7
+
8
+ # copy from https://github.com/KellerJordan/Muon/tree/master
9
+ # @torch.compile
10
+ def zeropower_via_newtonschulz5(G, steps):
11
+ """
12
+ Newton-Schulz iteration to compute the zeroth power / orthogonalization of G. We opt to use a
13
+ quintic iteration whose coefficients are selected to maximize the slope at zero. For the purpose
14
+ of minimizing steps, it turns out to be empirically effective to keep increasing the slope at
15
+ zero even beyond the point where the iteration no longer converges all the way to one everywhere
16
+ on the interval. This iteration therefore does not produce UV^T but rather something like US'V^T
17
+ where S' is diagonal with S_{ii}' ~ Uniform(0.5, 1.5), which turns out not to hurt model
18
+ performance at all relative to UV^T, where USV^T = G is the SVD.
19
+ """
20
+ assert len(G.shape) == 2
21
+ a, b, c = (3.4445, -4.7750, 2.0315)
22
+ X = G
23
+ if G.size(0) > G.size(1):
24
+ X = X.T
25
+
26
+ # Ensure spectral norm is at most 1
27
+ X = X / (X.norm() + 1e-7)
28
+ # Perform the NS iterations
29
+ for _ in range(steps):
30
+ A = X @ X.T
31
+ B = b * A + c * A @ A # adapted from suggestion by @jxbz, @leloykun, and @YouJiacheng
32
+ X = a * X + B @ X
33
+
34
+ if G.size(0) > G.size(1):
35
+ X = X.T
36
+ return X
37
+
38
+ def normalize_range(range: Tuple[int, int], start):
39
+ return (range[0] - start, range[1] - start)
40
+
41
+ class MuonDistMeta:
42
+
43
+ # which buffer and bucket param belongs to
44
+ buffer_idx: int = 0
45
+ bucket_idx: int = 0
46
+ # param shape after tp
47
+ shape: torch.Size = None
48
+ # param location in global buffer
49
+ global_range: Tuple[int, int] = None
50
+ tp_split_dim: int = -1
51
+ # param location in global buffer (current dp slice)
52
+ local_range: Tuple[int, int] = None
53
+
54
+ def __init__(self, buffer_idx: int, bucket_idx: int, shape: torch.Size, global_range: Tuple[int, int], tp_split_dim: int):
55
+ self.buffer_idx = buffer_idx
56
+ self.bucket_idx = bucket_idx
57
+ self.shape = shape
58
+ self.global_range = global_range
59
+ self.tp_split_dim = tp_split_dim
60
+
61
+ def set_local_buffer_range(self, local_buffer_range: Tuple[int, int]):
62
+ start = max(self.global_range[0], local_buffer_range[0])
63
+ end = min(self.global_range[1], local_buffer_range[1])
64
+ self.local_range = (start, end) if start < end else (local_buffer_range[0], local_buffer_range[0])
65
+
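+ # Illustrative example (added annotation, not part of the original PoC): with a
+ # 1000-element bucket split across world_size = 4, rank 1 owns
+ # local_buffer_range = (250, 500). A param whose global_range is (200, 400)
+ # then gets local_range = (250, 400): only the overlap with this rank's slice
+ # is stored and updated locally (ZeRO-1 style).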
66
+ # adjust LR based on: https://github.com/MoonshotAI/Moonlight
67
+ def adjust_lr_wd_for_muon(lr, matched_adamw_rms, param_shape):
68
+ A, B = param_shape[:2]
69
+ adjusted_ratio = math.sqrt(max(A, B)) * matched_adamw_rms
70
+ adjusted_lr = lr * adjusted_ratio
71
+ return adjusted_lr
72
+
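+ # Worked example (annotation only): a (1024, 4096) weight with lr = 0.02 and
+ # matched_adamw_rms = 0.2 gets adjusted_lr = 0.02 * sqrt(4096) * 0.2 = 0.256.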
73
+ # copy from https://github.com/KellerJordan/Muon/tree/master and support distributed solution
74
+ class Muon(torch.optim.Optimizer):
75
+ """
76
+ Muon - MomentUm Orthogonalized by Newton-schulz
77
+ Muon internally runs standard SGD-momentum, and then performs an orthogonalization post-
78
+ processing step, in which each 2D parameter's update is replaced with the nearest orthogonal
79
+ matrix. To efficiently orthogonalize each update, we use a Newton-Schulz iteration, which has
80
+ the advantage that it can be stably run in bfloat16 on the GPU.
81
+ Some warnings:
82
+ - We believe this optimizer is unlikely to work well for training with small batch size.
83
+ - We believe it may not work well for finetuning pretrained models, but we haven't tested this.
84
+ Arguments:
85
+ param_groups: The parameters to be optimized.
86
+ lr: The learning rate. The updates will have spectral norm of `lr`. (0.02 is a good default)
87
+ momentum: The momentum used by the internal SGD. (0.95 is a good default)
88
+ matched_adamw_rms: The AdamW Update RMS that Muon is designed to match. (0.2~0.4 recommended)
89
+ nesterov: Whether to use Nesterov-style momentum in the internal SGD. (recommended)
90
+ ns_steps: The number of Newton-Schulz iterations to run. (5 is probably always enough)
91
+ Parameters that are {0, 1}-D or are detected as being the embed or lm_head will be optimized by AdamW instead.
92
+ adamw_betas: The betas for the internal AdamW.
93
+ adamw_eps: The epsilon for the internal AdamW.
94
+ adamw_wd: The weight decay for the internal AdamW.
95
+ """
96
+ def __init__(self, param_groups, lr=2e-2, weight_decay=0.1,
97
+ matched_adamw_rms=0.2, momentum=0.95, nesterov=True, ns_steps=5,
98
+ adamw_betas=(0.95, 0.95), adamw_eps=1e-8):
99
+
100
+ defaults = dict(lr=lr, weight_decay=weight_decay,
101
+ matched_adamw_rms=matched_adamw_rms,
102
+ momentum=momentum, nesterov=nesterov, ns_steps=ns_steps,
103
+ adamw_betas=adamw_betas, adamw_eps=adamw_eps,)
104
+
105
+ super().__init__(param_groups, defaults)
106
+ self.distributed_mode = False
107
+
108
+
109
+ def enable_distributed_mode(self, global_buffer_sizes, dist_group, tp_group,
110
+ dist_metas: Dict[torch.nn.Parameter, MuonDistMeta]):
111
+ """
112
+ enable distributed mode
113
+ Args:
114
+ global_buffer_sizes: per-buffer lists of (bucket_size, bucket_offset) pairs
115
+ dist group: optimizer sharding group
116
+ tp group: param tp group
117
+ dist metas: dist metas for all param
118
+ """
119
+
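+ # Example call shape (taken from the test at the bottom of this file):
+ #   muon.enable_distributed_mode([[(global_buffer_size, 0)]], dist_group, tp_group, dist_metas)
+ # i.e. `global_buffer_sizes` is a list of buffers, each a list of (bucket_size, bucket_offset) pairs.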
120
+ self.global_buffer_sizes = global_buffer_sizes
121
+ self.dist_group = dist_group
122
+ self.tp_group = tp_group
123
+ self.dist_metas = dist_metas
124
+
125
+ world_size = dist.get_world_size(dist_group)
126
+ rank = dist.get_rank(dist_group)
127
+
128
+ # calc local buffer range
129
+ self.local_buffer_sizes = []
130
+ self.local_buffer_ranges = []
131
+ # The outer loop is for different parameter groups (e.g., weights vs. biases)
132
+ for global_bucket_sizes in global_buffer_sizes: # <-- renamed to `global_bucket_sizes` for clarity
133
+ local_bucket_sizes = []
134
+ local_bucket_ranges = []
135
+
136
+ # The inner loop is for the different buckets within a single group
137
+ for (global_bucket_size, bucket_offset) in global_bucket_sizes:
138
+ # calculate the local range for THIS specific bucket
139
+ assert global_bucket_size % world_size == 0
140
+ local_bucket_size = global_bucket_size // world_size
141
+ # Renaming here makes the logic so much clearer
142
+ local_bucket_start = local_bucket_size * rank + bucket_offset
143
+ local_buffer_range = (local_bucket_start, local_bucket_start + local_bucket_size)
144
+ local_bucket_sizes.append(local_bucket_size)
145
+ local_bucket_ranges.append(local_buffer_range)
146
+
147
+ self.local_buffer_sizes.append(local_bucket_sizes)
148
+ self.local_buffer_ranges.append(local_bucket_ranges)
149
+
150
+ # calc local range for params
151
+ for dist_meta in dist_metas.values():
152
+ local_buffer_range = self.local_buffer_ranges[dist_meta.buffer_idx][dist_meta.bucket_idx]
153
+ dist_meta.set_local_buffer_range(local_buffer_range)
154
+
155
+ self.distributed_mode = True
156
+
157
+ def step(self):
158
+
159
+ dtype = torch.bfloat16
160
+ device = torch.cuda.current_device()
161
+
162
+ ns_inputs = {}
163
+
164
+ # update muon momentum first
165
+ # `self.param_groups` is already sharded
166
+ for group in self.param_groups:
167
+
168
+ if not group.get("use_muon", False):
169
+ continue
170
+
171
+ momentum = group['momentum']
172
+ params = group["params"]
173
+
174
+ for p in params:
175
+
176
+ g = p.grad
177
+ assert g is not None
178
+ # 1-dim grad for distributed mode
179
+ assert self.distributed_mode or g.dim() == 2
180
+
181
+ # prepare muon buffer in state
182
+ state = self.state[p]
183
+ if not "muon_buffer" in state:
184
+ state["muon_buffer"] = torch.zeros_like(g)
185
+ buf = state["muon_buffer"]
186
+ buf.mul_(momentum).add_(g)
187
+
188
+ # save to ns input
189
+ g = g.add(buf, alpha=momentum) if group['nesterov'] else buf
190
+ ns_inputs[p] = g.bfloat16()
191
+
192
+ # rewrite ns_inputs if distributed
193
+ """
194
+ the “acrobatic” journey of the ns_inputs data (four communication hops around the math):
195
+
196
+ 1. **DP `all_gather`**: (ZeRO) Gather all the sharded pieces from your data-parallel "column" to re-create your **full TP slice**.
197
+ 2. **TP `all_gather`**: Gather all the TP slices from your tensor-parallel "row" to re-create the **full, 100% complete matrix**.
198
+ 3. *(...Run the math on the full matrix...)*
199
+ 4. **TP `shard`**: Shard the full `update` matrix back down to your **local TP slice**.
200
+ 5. **DP `shard`**: (ZeRO) Shard that TP slice *again* back down to the **local DP/ZeRO slice** that you're responsible for.
201
+
202
+ """
203
+ if self.distributed_mode:
204
+
205
+ # initialize buffers
206
+ # changed the variable names to `local_bucket_size` and `global_bucket_size` for clarity
207
+ ns_input_local_buffers = [
208
+ [ torch.empty((local_bucket_size), device=device, dtype=dtype)
209
+ for local_bucket_size in local_bucket_sizes ]
210
+ for local_bucket_sizes in self.local_buffer_sizes
211
+ ]
212
+ ns_input_global_buffers = [
213
+ [ torch.empty((global_bucket_size), device=device, dtype=dtype)
214
+ for (global_bucket_size, bucket_offset) in global_bucket_sizes ]
215
+ for global_bucket_sizes in self.global_buffer_sizes
216
+ ]
217
+
218
+ # fill ns input data to local buffer
219
+ # looping through all params in local rank, ok.
220
+ for param, ns_input in ns_inputs.items():
221
+ dist_meta = self.dist_metas[param]
222
+ # create a reference into `ns_input_local_buffers`
223
+ # the update is in local rank, so we only need one `for` loop
224
+ ns_input_local_buffer = ns_input_local_buffers[dist_meta.buffer_idx][dist_meta.bucket_idx]
225
+ local_buffer_range = self.local_buffer_ranges[dist_meta.buffer_idx][dist_meta.bucket_idx]
226
+ local_range = normalize_range(dist_meta.local_range, local_buffer_range[0]) # local_range relative to this rank's local buffer
227
+ # copy data into this `ns_input_local_buffer` memory
228
+ # because dist.all_gather requires a single, physically contiguous block of memory to work efficiently.
229
+ ns_input_local_buffer[local_range[0]:local_range[1]].copy_(ns_input.view(-1))
230
+
231
+ # all gather buffers: one bucket at a time. -- the "shipping" phase
232
+ for ns_input_global_buffer, ns_input_local_buffer in zip(ns_input_global_buffers, ns_input_local_buffers):
233
+ for ns_input_global_bucket, ns_input_local_bucket in zip(ns_input_global_buffer, ns_input_local_buffer):
234
+ dist.all_gather_into_tensor(ns_input_global_bucket, ns_input_local_bucket, group=self.dist_group)
235
+
236
+ # overwrite ns input with the `all_gather`-ed `ns_inputs` -- the "unpacking" phase
237
+ # this is the "opposite" of filling ns input data to local buffer
238
+ for p in ns_inputs.keys():
239
+ dist_meta = self.dist_metas[p]
240
+ ns_input_global_buffer = ns_input_global_buffers[dist_meta.buffer_idx][dist_meta.bucket_idx]
241
+ offset = self.global_buffer_sizes[dist_meta.buffer_idx][dist_meta.bucket_idx][1]
242
+ global_range = normalize_range(dist_meta.global_range, offset)
243
+
244
+ #ns_inputs[p] = ns_input_global_buffer[global_range[0]:global_range[1]].view(-1)
245
+ ## bug fix 👆🏻-- overwrite ns input with the `all_gather`-ed `ns_inputs` -- the "unpacking" phase
246
+ #ns_inputs[p] = ns_input_global_buffer[global_range[0]:global_range[1]].view(-1)
247
+ # Unpack the 1D slice of data
248
+ unpacked_data = ns_input_global_buffer[global_range[0]:global_range[1]]
249
+
250
+ # THIS IS THE FIX: Reshape it to its correct 2D shape, not view(-1)
251
+ ns_inputs[p] = unpacked_data.view(dist_meta.shape)
252
+
253
+ # set tp info
254
+ tp_world_size = dist.get_world_size(self.tp_group)
255
+ tp_rank = dist.get_rank(self.tp_group)
256
+
257
+ # update muon momentum first
258
+ for group in self.param_groups:
259
+
260
+ if not group.get('use_muon', False):
261
+ continue
262
+
263
+ lr = group["lr"]
264
+ ns_steps = group["ns_steps"]
265
+ weight_decay = group["weight_decay"]
266
+ matched_adamw_rms = group["matched_adamw_rms"]
267
+ params = group["params"] # <-- add this
268
+
269
+ for p in params:
270
+
271
+ ns_input = ns_inputs[p]
272
+ tp_split_dim = -1
273
+
274
+ if self.distributed_mode:
275
+ dist_meta = self.dist_metas[p]
276
+ tp_split_dim = dist_meta.tp_split_dim
277
+
278
+ # gather tensor parallel ( if tp )
279
+ if tp_split_dim != -1:
280
+ ns_input_shards = [ torch.empty_like(ns_input) for _ in range(tp_world_size) ]
281
+ dist.all_gather(ns_input_shards, ns_input, self.tp_group)
282
+ ns_input = torch.cat(ns_input_shards, dim=tp_split_dim)
283
+
284
+ # calc update
285
+ update = zeropower_via_newtonschulz5(ns_input, steps=ns_steps)
286
+
287
+ # only local tp part
288
+ # this effectively "shards" the Newton-Schulz-processed update,
289
+ # keeping only your assigned piece and discarding the rest
290
+ if tp_split_dim != -1:
291
+ update = update.chunk(tp_world_size, dim=tp_split_dim)[tp_rank]
292
+
293
+ # only local dp buffer part
294
+ if self.distributed_mode:
295
+ # local range in global range
296
+ # unpacking the tp sharded update to dp sharded update
297
+ local_range = normalize_range(dist_meta.local_range, dist_meta.global_range[0])
298
+ update = update.reshape(-1)[local_range[0]:local_range[1]]
299
+
300
+ # apply weight decay
301
+ p.data.mul_(1 - lr*weight_decay)
302
+
303
+ # adjust lr and apply update
304
+ adjusted_lr = adjust_lr_wd_for_muon(lr, matched_adamw_rms, ns_input.shape)
305
+ p.data.add_(update, alpha=-adjusted_lr)
306
+
307
+ # use adam for other params
308
+ for group in self.param_groups:
309
+
310
+ if group.get('use_muon', False):
311
+ continue
312
+
313
+ # init step
314
+ if 'step' in group:
315
+ group['step'] += 1
316
+ else:
317
+ group['step'] = 1
318
+
319
+ step = group['step']
320
+ params = group["params"]
321
+ lr = group['lr']
322
+ weight_decay = group['weight_decay']
323
+ beta1, beta2 = group['adamw_betas']
324
+ eps = group['adamw_eps']
325
+
326
+ for p in params:
327
+
328
+ g = p.grad
329
+ assert g is not None
330
+ state = self.state[p]
331
+
332
+ if len(state) == 0:
333
+ state['adamw_exp_avg'] = torch.zeros_like(g)
334
+ state['adamw_exp_avg_sq'] = torch.zeros_like(g)
335
+
336
+ buf1 = state['adamw_exp_avg']
337
+ buf2 = state['adamw_exp_avg_sq']
338
+ buf1.lerp_(g, 1-beta1)
339
+ buf2.lerp_(g.square(), 1-beta2)
340
+
341
+ g = buf1 / (eps + buf2.sqrt())
342
+
343
+ bias_correction1 = 1 - beta1**step
344
+ bias_correction2 = 1 - beta2**step
345
+ scale = bias_correction1 / bias_correction2**0.5
346
+ p.data.mul_(1 - lr * weight_decay)
347
+ p.data.add_(g, alpha=-lr/scale)
348
+
349
+
350
+ ##--------------- tests/unit_tests/test_optimizer_muon.py -----------------
351
+ import os
352
+
353
+ import torch
354
+ import torch.distributed as dist
355
+
356
+ #from megatron.core.optimizer.muon import Muon, MuonDistMeta, normalize_range
357
+
358
+ def is_rank_0():
359
+ return torch.distributed.get_rank() == 0
360
+
361
+ def print_rank_0(*args):
362
+ if is_rank_0():
363
+ print(*args)
364
+
365
+ def cdiv(x: int, y: int):
366
+ return (x + y - 1) // y
367
+
368
+ def gen_param_and_grads():
369
+
370
+ # reset manual seed
371
+ torch.manual_seed(0)
372
+ torch.cuda.manual_seed(0)
373
+ device = 'cuda'
374
+ dtype = torch.float32
375
+
376
+ # gen params
377
+ params = [ torch.randn(shape, device=device, dtype=dtype) for shape in [
378
+ (100, 100), (124, 324), (456, 124), (676, 876), (128, 128), ] ]
379
+
380
+ # gen grads [ [ grad-list ] * step ]
381
+ grads = [ [ torch.randn_like(param) for param in params ] for _ in range(10) ]
382
+
383
+ return params, grads
384
+
385
+ def distribute_params(params, grads, tp_dims, dist_group, tp_group):
386
+ """ 将 param 进行 dist & tp shard, 仅保留自己的一部分 """
387
+
388
+ params = params.copy()
389
+ grads = [ step_grads.copy() for step_grads in grads ]
390
+
391
+ # tp dist
392
+ tp_size = dist.get_world_size(tp_group)
393
+ tp_rank = dist.get_rank(tp_group)
394
+ for i, param in enumerate(params):
395
+ tp_dim = tp_dims[i]
396
+ if tp_dim == -1:
397
+ continue
398
+ # Shard the parameter tensor along the `tp_dim` dimension.
399
+ assert param.shape[tp_dim] % tp_size == 0
400
+ local_range_start = param.shape[tp_dim] // tp_size * tp_rank
401
+ # range of the shard based on the rank of the current GPU in the given `tp_group`
402
+ local_range_end = param.shape[tp_dim] // tp_size * (tp_rank + 1)
403
+ # each GPU gets `[local_range_start:local_range_end, :] ` rows or `[:, local_range_start:local_range_end]` columns
404
+ params[i] = param[local_range_start:local_range_end, :] if tp_dim == 0 else \
405
+ param[:, local_range_start:local_range_end].contiguous()
406
+ # same logic applies to sharding the gradients for the current layer(param)
407
+ for step_grads in grads:
408
+ step_grads[i] = step_grads[i][local_range_start:local_range_end, :] if tp_dim == 0 else \
409
+ step_grads[i][:, local_range_start:local_range_end].contiguous()
410
+
411
+ # distributed
412
+ world_size = dist.get_world_size(dist_group)
413
+ rank = dist.get_rank(dist_group)
414
+
415
+ # global as the given DP group
416
+ # "global" here means "global to the TP group's worth of parameters."
417
+ global_buffer_size = sum(param.numel() for param in params)
418
+ local_buffer_size = cdiv(global_buffer_size, world_size)
419
+ # deciding the shard range for this rank
420
+ local_buffer_range = (local_buffer_size * rank, local_buffer_size * (rank + 1))
421
+ # padded global_buffer_size
422
+ global_buffer_size = local_buffer_size * world_size # fix global buffer size
423
+
424
+ numel_acc = 0
425
+ dist_params = []
426
+ dist_grads = [[] for _ in grads]
427
+ dist_metas = {}
428
+ for i, param in enumerate(params):
429
+
430
+ # gen meta
431
+ # align global buffer index(range) with local buffer index(range)
432
+ # see handwritten diagram for more details
433
+ numel = param.numel()
434
+ dist_meta = MuonDistMeta(0, 0, param.shape, (numel_acc, numel_acc + numel), tp_dims[i])
435
+ dist_meta.set_local_buffer_range(local_buffer_range)
436
+ numel_acc += numel
437
+
438
+ # skip if no element in this shard
439
+ if dist_meta.local_range[0] == dist_meta.local_range[1]:
440
+ continue
441
+
442
+ # gen param
443
+
444
+ # Convert the ABSOLUTE slice range (from the global virtual buffer)
445
+ # into a RELATIVE slice range (local to just this one parameter).
446
+ local_range = normalize_range(dist_meta.local_range, dist_meta.global_range[0])
447
+
448
+ # 1. Flatten the 2D parameter tensor into a 1D vector.
449
+ # 2. Use the relative range to slice out the piece this GPU is responsible for storing.
450
+ dist_param = param.view(-1)[local_range[0]:local_range[1]]
451
+ dist_params.append(dist_param)
452
+ dist_metas[dist_param] = dist_meta
453
+
454
+ # gen grad
455
+ # same logic as the `gen param` section
456
+ for step, step_grads in enumerate(grads):
457
+ dist_grad = step_grads[i].view(-1)[local_range[0]:local_range[1]]
458
+ dist_grads[step].append(dist_grad)
459
+
460
+ return dist_params, dist_grads, global_buffer_size, dist_metas
461
+
462
+
463
+ def test_muon_dist(dp_size, tp_size):
464
+
465
+ world_size = dist.get_world_size()
466
+ rank = dist.get_rank()
467
+ assert dp_size * tp_size == world_size
468
+
469
+ # init dist group
470
+ for i in range(tp_size):
471
+ # build the dist (DP) group: ranks strided by `tp_size`
472
+ ranks = range(i, world_size, tp_size)
473
+ group = dist.new_group(ranks)
474
+ # each rank finds its groups
475
+ if rank in ranks:
476
+ # keep the handle for the group this rank belongs to
477
+ dist_group = group
478
+ # init tp group
479
+ for i in range(dp_size):
480
+ ranks = range(i * tp_size, (i + 1) * tp_size)
481
+ group = dist.new_group(ranks)
482
+ if rank in ranks:
483
+ tp_group = group
484
+
485
+ print_rank_0("process group initialized")
486
+
487
+ params_ref, grads_ref = gen_param_and_grads()
488
+ params_test, grads_test = gen_param_and_grads()
489
+ tp_dims = [0, 1, -1, 1, 0]
490
+
491
+ # global_buffer_size is the padded buffer size of the dp group where the current rank belongs to
492
+ params_test, grads_test, global_buffer_size, dist_metas \
493
+ = distribute_params(params_test, grads_test, tp_dims, dist_group, tp_group)
494
+
495
+ muon_args = {
496
+ "use_muon": True,
497
+ "lr": 0.1,
498
+ "momentum": 0.9,
499
+ "nesterov": True,
500
+ "ns_steps": 5,
501
+ "weight_decay": 0.1,
502
+ }
503
+
504
+ # gen params
505
+ ref_param_groups = [{
506
+ "params": params_ref,
507
+ **muon_args
508
+ }]
509
+ test_param_groups = [{
510
+ "params": params_test,
511
+ **muon_args
512
+ }]
513
+
514
+ ref_muon = Muon(ref_param_groups)
515
+ test_muon = Muon(test_param_groups)
516
+ test_muon.enable_distributed_mode([[(global_buffer_size, 0)]], dist_group, tp_group, dist_metas)
517
+
518
+ for step in range(10):
519
+
520
+ # add grad
521
+ for i, grad in enumerate(grads_ref[step]):
522
+ params_ref[i].grad = grad.clone()
523
+ for i, grad in enumerate(grads_test[step]):
524
+ params_test[i].grad = grad.clone()
525
+ # step
526
+ ref_muon.step()
527
+ test_muon.step()
528
+ # distribute ref params
529
+ dist_ref_params, _, _, _ = distribute_params(params_ref, [], tp_dims, dist_group, tp_group)
530
+ # verify
531
+ for i, params_x2 in enumerate(zip(dist_ref_params, params_test)):
532
+ assert (params_x2[0] == params_x2[1]).all(), f"rank {rank} param {i} verify failed"
533
+ print_rank_0(f" - step {step} verify passed")
534
+
535
+ print_rank_0(f"dist dp = {dp_size} tp = {tp_size} test passed")
536
+
537
+ def run_process(rank, world_size):
538
+
539
+ # init dist
540
+ torch.cuda.set_device(rank)
541
+ dist.init_process_group("nccl", rank=rank, world_size=world_size)
542
+
543
+ test_muon_dist(dp_size=4, tp_size=2)
544
+ test_muon_dist(dp_size=2, tp_size=4)
545
+
546
+ dist.destroy_process_group()
547
+
548
+ if __name__ == "__main__":
549
+
550
+ world_size = 8
551
+ os.environ['MASTER_ADDR'] = 'localhost'
552
+ os.environ['MASTER_PORT'] = '12345'
553
+ os.environ['CUDA_DEVICE_MAX_CONNECTIONS'] = '1'
554
+
555
+ torch.multiprocessing.spawn(run_process, args=(world_size,), nprocs=world_size, join=True)
distributed_muon_cpu.ipynb ADDED
@@ -0,0 +1,719 @@
1
+ {
2
+ "cells": [
3
+ {
4
+ "cell_type": "markdown",
5
+ "metadata": {
6
+ "id": "s0j9J9zu4v00"
7
+ },
8
+ "source": [
9
+ "**Muon is scalable for LLM Trainings -- Optimized and tested code**"
10
+ ]
11
+ },
12
+ {
13
+ "cell_type": "code",
14
+ "execution_count": null,
15
+ "metadata": {
16
+ "colab": {
17
+ "base_uri": "https://localhost:8080/"
18
+ },
19
+ "id": "kXO67qguR1cD",
20
+ "outputId": "74e0a52d-36dd-4313-e9d5-76763a06d591"
21
+ },
22
+ "outputs": [
23
+ {
24
+ "name": "stdout",
25
+ "output_type": "stream",
26
+ "text": [
27
+ "✅ Test code written to /content/test_muon_dist.py\n",
28
+ "\n",
29
+ "Now run it with:\n",
30
+ "!python /content/test_muon_dist.py\n"
31
+ ]
32
+ }
33
+ ],
34
+ "source": [
35
+ "\"\"\"\n",
36
+ "COLAB WORKAROUND: Write code to file and run as subprocess\n",
37
+ "This avoids the multiprocessing limitations in Jupyter notebooks\n",
38
+ "\"\"\"\n",
39
+ "test_code = '''\n",
40
+ "\n",
41
+ "# Step 1: Write the distributed test code to a file\n",
42
+ "\n",
43
+ "import os\n",
44
+ "import sys\n",
45
+ "import torch\n",
46
+ "import torch.distributed as dist\n",
47
+ "import torch.multiprocessing as mp\n",
48
+ "import math\n",
49
+ "from typing import Tuple, Dict\n",
50
+ "\n",
51
+ "\n",
52
+ "# copy from https://github.com/KellerJordan/Muon/tree/master\n",
53
+ "# @torch.compile\n",
54
+ "def zeropower_via_newtonschulz5(G, steps):\n",
55
+ " \"\"\"\n",
56
+ " Newton-Schulz iteration to compute the zeroth power / orthogonalization of G. We opt to use a\n",
57
+ " quintic iteration whose coefficients are selected to maximize the slope at zero. For the purpose\n",
58
+ " of minimizing steps, it turns out to be empirically effective to keep increasing the slope at\n",
59
+ " zero even beyond the point where the iteration no longer converges all the way to one everywhere\n",
60
+ " on the interval. This iteration therefore does not produce UV^T but rather something like US'V^T\n",
61
+ " where S' is diagonal with S_{ii}' ~ Uniform(0.5, 1.5), which turns out not to hurt model\n",
62
+ " performance at all relative to UV^T, where USV^T = G is the SVD.\n",
63
+ " \"\"\"\n",
64
+ " assert len(G.shape) == 2\n",
65
+ " a, b, c = (3.4445, -4.7750, 2.0315)\n",
66
+ " X = G\n",
67
+ " if G.size(0) > G.size(1):\n",
68
+ " X = X.T\n",
69
+ "\n",
70
+ " # Ensure spectral norm is at most 1\n",
71
+ " X = X / (X.norm() + 1e-7)\n",
72
+ " # Perform the NS iterations\n",
73
+ " for _ in range(steps):\n",
74
+ " A = X @ X.T\n",
75
+ " B = b * A + c * A @ A # adapted from suggestion by @jxbz, @leloykun, and @YouJiacheng\n",
76
+ " X = a * X + B @ X\n",
77
+ "\n",
78
+ " if G.size(0) > G.size(1):\n",
79
+ " X = X.T\n",
80
+ " return X\n",
81
+ "\n",
82
+ "def normalize_range(range: Tuple[int, int], start):\n",
83
+ " return (range[0] - start, range[1] - start)\n",
84
+ "\n",
85
+ "class MuonDistMeta:\n",
86
+ "\n",
87
+ " # which buffer and bucket param belongs to\n",
88
+ " buffer_idx: int = 0\n",
89
+ " bucket_idx: int = 0\n",
90
+ " # param shape after tp\n",
91
+ " shape: torch.Size = None\n",
92
+ " # param location in global buffer\n",
93
+ " global_range: Tuple[int, int] = None\n",
94
+ " tp_split_dim: int = -1\n",
95
+ " # param location in global buffer (current dp slice)\n",
96
+ " local_range: Tuple[int, int] = None\n",
97
+ "\n",
98
+ " def __init__(self, buffer_idx: int, bucket_idx: int, shape: torch.Size, global_range: Tuple[int, int], tp_split_dim: int):\n",
99
+ " self.buffer_idx = buffer_idx\n",
100
+ " self.bucket_idx = bucket_idx\n",
101
+ " self.shape = shape\n",
102
+ " self.global_range = global_range\n",
103
+ " self.tp_split_dim = tp_split_dim\n",
104
+ "\n",
105
+ " def set_local_buffer_range(self, local_buffer_range: Tuple[int, int]):\n",
106
+ " start = max(self.global_range[0], local_buffer_range[0])\n",
107
+ " end = min(self.global_range[1], local_buffer_range[1])\n",
108
+ " self.local_range = (start, end) if start < end else (local_buffer_range[0], local_buffer_range[0])\n",
109
+ "\n",
110
+ "# adjust LR based on: https://github.com/MoonshotAI/Moonlight\n",
111
+ "def adjust_lr_wd_for_muon(lr, matched_adamw_rms, param_shape):\n",
112
+ " A, B = param_shape[:2]\n",
113
+ " adjusted_ratio = math.sqrt(max(A, B)) * matched_adamw_rms\n",
114
+ " adjusted_lr = lr * adjusted_ratio\n",
115
+ " return adjusted_lr\n",
116
+ "\n",
117
+ "# copy from https://github.com/KellerJordan/Muon/tree/master and support distributed solution\n",
118
+ "class Muon(torch.optim.Optimizer):\n",
119
+ " \"\"\"\n",
120
+ " Muon - MomentUm Orthogonalized by Newton-schulz\n",
121
+ " Muon internally runs standard SGD-momentum, and then performs an orthogonalization post-\n",
122
+ " processing step, in which each 2D parameter's update is replaced with the nearest orthogonal\n",
123
+ " matrix. To efficiently orthogonalize each update, we use a Newton-Schulz iteration, which has\n",
124
+ " the advantage that it can be stably run in bfloat16 on the GPU.\n",
125
+ " Some warnings:\n",
126
+ " - We believe this optimizer is unlikely to work well for training with small batch size.\n",
127
+ " - We believe it may not work well for finetuning pretrained models, but we haven't tested this.\n",
128
+ " Arguments:\n",
129
+ " param_groups: The parameters to be optimized.\n",
130
+ " lr: The learning rate. The updates will have spectral norm of `lr`. (0.02 is a good default)\n",
131
+ " momentum: The momentum used by the internal SGD. (0.95 is a good default)\n",
132
+ " matched_adamw_rms: The AdamW Update RMS that Muon is designed to match. (0.2~0.4 recommended)\n",
133
+ " nesterov: Whether to use Nesterov-style momentum in the internal SGD. (recommended)\n",
134
+ " ns_steps: The number of Newton-Schulz iterations to run. (5 is probably always enough)\n",
135
+ " {0, 1}-D or are detected as being the embed or lm_head will be optimized by AdamW as well.\n",
136
+ " adamw_betas: The betas for the internal AdamW.\n",
137
+ " adamw_eps: The epsilon for the internal AdamW.\n",
138
+ " adamw_wd: The weight decay for the internal AdamW.\n",
139
+ " \"\"\"\n",
140
+ " def __init__(self, param_groups, lr=2e-2, weight_decay=0.1,\n",
141
+ " matched_adamw_rms=0.2, momentum=0.95, nesterov=True, ns_steps=5,\n",
142
+ " adamw_betas=(0.95, 0.95), adamw_eps=1e-8):\n",
143
+ "\n",
144
+ " defaults = dict(lr=lr, weight_decay=weight_decay,\n",
145
+ " matched_adamw_rms=matched_adamw_rms,\n",
146
+ " momentum=momentum, nesterov=nesterov, ns_steps=ns_steps,\n",
147
+ " adamw_betas=adamw_betas, adamw_eps=adamw_eps,)\n",
148
+ "\n",
149
+ " super().__init__(param_groups, defaults)\n",
150
+ " self.distributed_mode = False\n",
151
+ "\n",
152
+ "\n",
153
+ " def enable_distributed_mode(self, global_buffer_sizes, dist_group, tp_group,\n",
154
+ " dist_metas: Dict[torch.nn.Parameter, MuonDistMeta]):\n",
155
+ " \"\"\"\n",
156
+ " enable distributed mode\n",
157
+ " Args:\n",
158
+ " global_buffer_size: global buffer size\n",
159
+ " dist group: optimizer sharding group\n",
160
+ " tp group: param tp group\n",
161
+ " dist metas: dist metas for all param\n",
162
+ " \"\"\"\n",
163
+ "\n",
164
+ " self.global_buffer_sizes = global_buffer_sizes\n",
165
+ " self.dist_group = dist_group\n",
166
+ " self.tp_group = tp_group\n",
167
+ " self.dist_metas = dist_metas\n",
168
+ "\n",
169
+ " world_size = dist.get_world_size(dist_group)\n",
170
+ " rank = dist.get_rank(dist_group)\n",
171
+ "\n",
172
+ " # calc local buffer range\n",
173
+ " self.local_buffer_sizes = []\n",
174
+ " self.local_buffer_ranges = []\n",
175
+ " # The outer loop is for different parameter groups (e.g., weights vs. biases)\n",
176
+ " for global_bucket_sizes in global_buffer_sizes: # <--- rename `global_bucket_sizes`\n",
177
+ " local_bucket_sizes = []\n",
178
+ " local_bucket_ranges = []\n",
179
+ "\n",
180
+ " # The inner loop is for the different buckets within a single group\n",
181
+ " for (global_bucket_size, bucket_offset) in global_bucket_sizes:\n",
182
+ " # calculate the local range for THIS specific bucket\n",
183
+ " assert global_bucket_size % world_size == 0\n",
184
+ " local_bucket_size = global_bucket_size // world_size\n",
185
+ " # Renaming here makes the logic so much clearer\n",
186
+ " local_bucket_start = local_bucket_size * rank + bucket_offset\n",
187
+ " local_buffer_range = (local_bucket_start, local_bucket_start + local_bucket_size)\n",
188
+ " local_bucket_sizes.append(local_bucket_size)\n",
189
+ " local_bucket_ranges.append(local_buffer_range)\n",
190
+ "\n",
191
+ " self.local_buffer_sizes.append(local_bucket_sizes)\n",
192
+ " self.local_buffer_ranges.append(local_bucket_ranges)\n",
193
+ "\n",
194
+ " # calc local range for params\n",
195
+ " for dist_meta in dist_metas.values():\n",
196
+ " local_buffer_range = self.local_buffer_ranges[dist_meta.buffer_idx][dist_meta.bucket_idx]\n",
197
+ " dist_meta.set_local_buffer_range(local_buffer_range)\n",
198
+ "\n",
199
+ " self.distributed_mode = True\n",
200
+ "\n",
201
+ " def step(self):\n",
202
+ " first_param = self.param_groups[0]['params'][0]\n",
203
+ " device = first_param.device\n",
204
+ " dtype = torch.bfloat16\n",
205
+ "\n",
206
+ " ns_inputs = {}\n",
207
+ "\n",
208
+ " # update muon momentum first\n",
209
+ " # `self.param_groups` is already sharded\n",
210
+ " for group in self.param_groups:\n",
211
+ "\n",
212
+ " if not group.get(\"use_muon\", False):\n",
213
+ " continue\n",
214
+ "\n",
215
+ " momentum = group['momentum']\n",
216
+ " params = group[\"params\"]\n",
217
+ "\n",
218
+ " for p in params:\n",
219
+ "\n",
220
+ " g = p.grad\n",
221
+ " assert g is not None\n",
222
+ " # 1-dim grad for distributed mode\n",
223
+ " assert self.distributed_mode or g.dim() == 2\n",
224
+ "\n",
225
+ " # prepare muon buffer in state\n",
226
+ " state = self.state[p]\n",
227
+ " if not \"muon_buffer\" in state:\n",
228
+ " state[\"muon_buffer\"] = torch.zeros_like(g)\n",
229
+ " buf = state[\"muon_buffer\"]\n",
230
+ " buf.mul_(momentum).add_(g)\n",
231
+ "\n",
232
+ " # save to ns input\n",
233
+ " g = g.add(buf, alpha=momentum) if group['nesterov'] else buf\n",
234
+ " ns_inputs[p] = g.bfloat16()\n",
235
+ "\n",
236
+ " # rewrite ns_inputs if distributed\n",
237
+ " \"\"\"\n",
238
+ " the four-step \"acrobatic\" journey of the ns_inputs data:\n",
239
+ "\n",
240
+ " 1. **DP `all_gather`**: (ZeRO) Gather all the sharded pieces from your data-parallel \"column\" to re-create your **full TP slice**.\n",
241
+ " 2. **TP `all_gather`**: Gather all the TP slices from your tensor-parallel \"row\" to re-create the **full, 100% complete matrix**.\n",
242
+ " 3. *(...Run the math on the full matrix...)*\n",
243
+ " 4. **TP `shard`**: Shard the full `update` matrix back down to your **local TP slice**.\n",
244
+ " 5. **DP `shard`**: (ZeRO) Shard that TP slice *again* back down to the **local DP/ZeRO slice** that you're responsible for.\n",
245
+ "\n",
246
+ " \"\"\"\n",
247
+ " if self.distributed_mode:\n",
248
+ "\n",
249
+ " # initialize buffers\n",
250
+ " # hanged the variable nnames to `local_bucket_size` and `global_bucket_size` for clarity\n",
251
+ " ns_input_local_buffers = [\n",
252
+ " [ torch.empty((local_bucket_size), device=device, dtype=dtype)\n",
253
+ " for local_bucket_size in local_bucket_sizes ]\n",
254
+ " for local_bucket_sizes in self.local_buffer_sizes\n",
255
+ " ]\n",
256
+ " ns_input_global_buffers = [\n",
257
+ " [ torch.empty((global_bucket_size), device=device, dtype=dtype)\n",
258
+ " for (global_bucket_size, bucket_offset) in global_bucket_sizes ]\n",
259
+ " for global_bucket_sizes in self.global_buffer_sizes\n",
260
+ " ]\n",
261
+ "\n",
262
+ " # fill ns input data to local buffer\n",
263
+ " # looping through all params in local rank, ok.\n",
264
+ " for param, ns_input in ns_inputs.items():\n",
265
+ " dist_meta = self.dist_metas[param]\n",
266
+ " # ceate a reference to `ns_input_local_buffers`\n",
267
+ " # the update is in local rank, so we only need one `for` loop\n",
268
+ " ns_input_local_buffer = ns_input_local_buffers[dist_meta.buffer_idx][dist_meta.bucket_idx]\n",
269
+ " local_buffer_range = self.local_buffer_ranges[dist_meta.buffer_idx][dist_meta.bucket_idx]\n",
270
+ " local_range = normalize_range(dist_meta.local_range, local_buffer_range[0]) # local_range in global_range\n",
271
+ " # copy data into this `ns_input_local_buffer` memory\n",
272
+ " # because dist.all_gather requires a single, physically contiguous block of memory to work efficiently.\n",
273
+ " ns_input_local_buffer[local_range[0]:local_range[1]].copy_(ns_input.view(-1))\n",
274
+ "\n",
275
+ " # all gather buffers: one bucket at a time. -- the \"shipping\" phase\n",
276
+ " for ns_input_global_buffer, ns_input_local_buffer in zip(ns_input_global_buffers, ns_input_local_buffers):\n",
277
+ " for ns_input_global_bucket, ns_input_local_bucket in zip(ns_input_global_buffer, ns_input_local_buffer):\n",
278
+ " dist.all_gather_into_tensor(ns_input_global_bucket, ns_input_local_bucket, group=self.dist_group)\n",
279
+ "\n",
280
+ " # overwrite ns input with the `all_gather`-ed `ns_inputs` -- the \"unpacking\" phase\n",
281
+ " # this is the \"opposite\" of filling ns input data to local buffer\n",
282
+ " for p in ns_inputs.keys():\n",
283
+ " dist_meta = self.dist_metas[p]\n",
284
+ " ns_input_global_buffer = ns_input_global_buffers[dist_meta.buffer_idx][dist_meta.bucket_idx]\n",
285
+ " offset = self.global_buffer_sizes[dist_meta.buffer_idx][dist_meta.bucket_idx][1]\n",
286
+ " global_range = normalize_range(dist_meta.global_range, offset)\n",
287
+ "\n",
288
+ " #ns_inputs[p] = ns_input_global_buffer[global_range[0]:global_range[1]].view(-1)\n",
289
+ " ## bug fix 👆🏻-- overwrite ns input with the `all_gather`-ed `ns_inputs` -- the \"unpacking\" phase\n",
290
+ " #ns_inputs[p] = ns_input_global_buffer[global_range[0]:global_range[1]].view(-1)\n",
291
+ " # Unpack the 1D slice of data\n",
292
+ " unpacked_data = ns_input_global_buffer[global_range[0]:global_range[1]]\n",
293
+ "\n",
294
+ " # THIS IS THE FIX: Reshape it to its correct 2D shape, not view(-1)\n",
295
+ " ns_inputs[p] = unpacked_data.view(dist_meta.shape)\n",
296
+ "\n",
297
+ " # set tp info\n",
298
+ " tp_world_size = dist.get_world_size(self.tp_group)\n",
299
+ " tp_rank = dist.get_rank(self.tp_group)\n",
300
+ "\n",
301
+ " # update muon momentum first\n",
302
+ " for group in self.param_groups:\n",
303
+ "\n",
304
+ " if not group.get('use_muon', False):\n",
305
+ " continue\n",
306
+ "\n",
307
+ " lr = group[\"lr\"]\n",
308
+ " ns_steps = group[\"ns_steps\"]\n",
309
+ " weight_decay = group[\"weight_decay\"]\n",
310
+ " matched_adamw_rms = group[\"matched_adamw_rms\"]\n",
311
+ " params = group[\"params\"] # <-- add this\n",
312
+ "\n",
313
+ " for p in params:\n",
314
+ "\n",
315
+ " ns_input = ns_inputs[p]\n",
316
+ " tp_split_dim = -1\n",
317
+ "\n",
318
+ " if self.distributed_mode:\n",
319
+ " dist_meta = self.dist_metas[p]\n",
320
+ " tp_split_dim = dist_meta.tp_split_dim\n",
321
+ "\n",
322
+ " # gather tensor parallel ( if tp )\n",
323
+ " if tp_split_dim != -1:\n",
324
+ " ns_input_shards = [ torch.empty_like(ns_input) for _ in range(tp_world_size) ]\n",
325
+ " dist.all_gather(ns_input_shards, ns_input, self.tp_group)\n",
326
+ " ns_input = torch.cat(ns_input_shards, dim=tp_split_dim)\n",
327
+ "\n",
328
+ " # calc update\n",
329
+ " update = zeropower_via_newtonschulz5(ns_input, steps=ns_steps)\n",
330
+ "\n",
331
+ " # only local tp part\n",
332
+ " # this is effectivly \"shadding\" the newtonschulz-processed update,\n",
333
+ " # and keep only your assigned piece, discarding the rest\n",
334
+ " if tp_split_dim != -1:\n",
335
+ " update = update.chunk(tp_world_size, dim=tp_split_dim)[tp_rank]\n",
336
+ "\n",
337
+ " # only local dp buffer part\n",
338
+ " if self.distributed_mode:\n",
339
+ " # local range in global range\n",
340
+ " # unpacking the tp sharded update to dp sharded update\n",
341
+ " local_range = normalize_range(dist_meta.local_range, dist_meta.global_range[0])\n",
342
+ " update = update.reshape(-1)[local_range[0]:local_range[1]]\n",
343
+ "\n",
344
+ " # apply weight decay\n",
345
+ " p.data.mul_(1 - lr*weight_decay)\n",
346
+ "\n",
347
+ " # adjust lr and apply update\n",
348
+ " adjusted_lr = adjust_lr_wd_for_muon(lr, matched_adamw_rms, ns_input.shape)\n",
349
+ " p.data.add_(update, alpha=-adjusted_lr)\n",
350
+ "\n",
351
+ " # use adam for other params\n",
352
+ " for group in self.param_groups:\n",
353
+ "\n",
354
+ " if group.get('use_muon', False):\n",
355
+ " continue\n",
356
+ "\n",
357
+ " # init step\n",
358
+ " if 'step' in group:\n",
359
+ " group['step'] += 1\n",
360
+ " else:\n",
361
+ " group['step'] = 1\n",
362
+ "\n",
363
+ " step = group['step']\n",
364
+ " params = group[\"params\"]\n",
365
+ " lr = group['lr']\n",
366
+ " weight_decay = group['weight_decay']\n",
367
+ " beta1, beta2 = group['adamw_betas']\n",
368
+ " eps = group['adamw_eps']\n",
369
+ "\n",
370
+ " for p in params:\n",
371
+ "\n",
372
+ " g = p.grad\n",
373
+ " assert g is not None\n",
374
+ " state = self.state[p]\n",
375
+ "\n",
376
+ " if len(state) == 0:\n",
377
+ " state['adamw_exp_avg'] = torch.zeros_like(g)\n",
378
+ " state['adamw_exp_avg_sq'] = torch.zeros_like(g)\n",
379
+ "\n",
380
+ " buf1 = state['adamw_exp_avg']\n",
381
+ " buf2 = state['adamw_exp_avg_sq']\n",
382
+ " buf1.lerp_(g, 1-beta1)\n",
383
+ " buf2.lerp_(g.square(), 1-beta2)\n",
384
+ "\n",
385
+ " g = buf1 / (eps + buf2.sqrt())\n",
386
+ "\n",
387
+ " bias_correction1 = 1 - beta1**step\n",
388
+ " bias_correction2 = 1 - beta2**step\n",
389
+ " scale = bias_correction1 / bias_correction2**0.5\n",
390
+ " p.data.mul_(1 - lr * weight_decay)\n",
391
+ " p.data.add_(g, alpha=-lr/scale)\n",
392
+ "\n",
393
+ "\n",
394
+ "##--------------- tests/unit_tests/test_optimizer_muon.py -----------------\n",
395
+ "import os\n",
396
+ "\n",
397
+ "import torch\n",
398
+ "import torch.distributed as dist\n",
399
+ "\n",
400
+ "#from megatron.core.optimizer.muon import Muon, MuonDistMeta, normalize_range\n",
401
+ "\n",
402
+ "def is_rank_0():\n",
403
+ " return torch.distributed.get_rank() == 0\n",
404
+ "\n",
405
+ "def print_rank_0(*args):\n",
406
+ " if is_rank_0():\n",
407
+ " print(*args)\n",
408
+ "\n",
409
+ "def cdiv(x: int, y: int):\n",
410
+ " return (x + y - 1) // y\n",
411
+ "\n",
412
+ "def gen_param_and_grads():\n",
413
+ "\n",
414
+ " # reset manual seed\n",
415
+ " torch.manual_seed(0)\n",
416
+ " device = 'cpu'\n",
417
+ " dtype = torch.float32\n",
418
+ "\n",
419
+ " # gen params\n",
420
+ " params = [ torch.randn(shape, device=device, dtype=dtype) for shape in [\n",
421
+ " (100, 100), (124, 324), (456, 124), (676, 876), (128, 128), ] ]\n",
422
+ "\n",
423
+ " # gen grads [ [ grad-list ] * step ]\n",
424
+ " grads = [ [ torch.randn_like(param) for param in params ] for _ in range(10) ]\n",
425
+ "\n",
426
+ " return params, grads\n",
427
+ "\n",
428
+ "def distribute_params(params, grads, tp_dims, dist_group, tp_group):\n",
429
+ " \"\"\" 将 param 进行 dist & tp shard, 仅保留自己的一部分 \"\"\"\n",
430
+ "\n",
431
+ " params = params.copy()\n",
432
+ " grads = [ step_grads.copy() for step_grads in grads ]\n",
433
+ "\n",
434
+ " # tp dist\n",
435
+ " tp_size = dist.get_world_size(tp_group)\n",
436
+ " tp_rank = dist.get_rank(tp_group)\n",
437
+ " for i, param in enumerate(params):\n",
438
+ " tp_dim = tp_dims[i]\n",
439
+ " if tp_dim == -1:\n",
440
+ " continue\n",
441
+ " # Shard the parameter tensor along the `tp_dim` dimension.\n",
442
+ " assert param.shape[tp_dim] % tp_size == 0\n",
443
+ " local_range_start = param.shape[tp_dim] // tp_size * tp_rank\n",
444
+ " # range of the shard based on the rank of the current GOU in the given `tp_group``\n",
445
+ " local_range_end = param.shape[tp_dim] // tp_size * (tp_rank + 1)\n",
446
+ " # each GPU gets `[local_range_start:local_range_end, :] ` rows or `[:, local_range_start:local_range_end]` columns\n",
447
+ " params[i] = param[local_range_start:local_range_end, :] if tp_dim == 0 else \\\n",
448
+ " param[:, local_range_start:local_range_end].contiguous()\n",
449
+ " # same logic applies to sharding the gradients for the current layer(param)\n",
450
+ " for step_grads in grads:\n",
451
+ " step_grads[i] = step_grads[i][local_range_start:local_range_end, :] if tp_dim == 0 else \\\n",
452
+ " step_grads[i][:, local_range_start:local_range_end].contiguous()\n",
453
+ "\n",
454
+ " # distributed\n",
455
+ " world_size = dist.get_world_size(dist_group)\n",
456
+ " rank = dist.get_rank(dist_group)\n",
457
+ "\n",
458
+ " # global as the given DP group\n",
459
+ " # \"global\" here means \"global to the TP group's worth of parameters.\"\n",
460
+ " global_buffer_size = sum(param.numel() for param in params)\n",
461
+ " local_buffer_size = cdiv(global_buffer_size, world_size)\n",
462
+ " # deciding the shard range for this rank\n",
463
+ " local_buffer_range = (local_buffer_size * rank, local_buffer_size * (rank + 1))\n",
464
+ " # padded global_buffer_size\n",
465
+ " global_buffer_size = local_buffer_size * world_size # fix global buffer size\n",
466
+ "\n",
467
+ " numel_acc = 0\n",
468
+ " dist_params = []\n",
469
+ " dist_grads = [[] for _ in grads]\n",
470
+ " dist_metas = {}\n",
471
+ " for i, param in enumerate(params):\n",
472
+ "\n",
473
+ " # gen meta\n",
474
+ " # align global buffer index(range) with local buffer index(range)\n",
475
+ " # see handwritten diagram for more details\n",
476
+ " numel = param.numel()\n",
477
+ " dist_meta = MuonDistMeta(0, 0, param.shape, (numel_acc, numel_acc + numel), tp_dims[i])\n",
478
+ " dist_meta.set_local_buffer_range(local_buffer_range)\n",
479
+ " numel_acc += numel\n",
480
+ "\n",
481
+ " # skip if no element in this shard\n",
482
+ " if dist_meta.local_range[0] == dist_meta.local_range[1]:\n",
483
+ " continue\n",
484
+ "\n",
485
+ " # gen param\n",
486
+ "\n",
487
+ " # Convert the ABSOLUTE slice range (from the global virtual buffer)\n",
488
+ " # into a RELATIVE slice range (local to just this one parameter).\n",
489
+ " local_range = normalize_range(dist_meta.local_range, dist_meta.global_range[0])\n",
490
+ "\n",
491
+ " # 1. Flatten the 2D parameter tensor into a 1D vector.\n",
492
+ " # 2. Use the relative range to slice out the piece this GPU is responsible for storing.\n",
493
+ " dist_param = param.view(-1)[local_range[0]:local_range[1]]\n",
494
+ " dist_params.append(dist_param)\n",
495
+ " dist_metas[dist_param] = dist_meta\n",
496
+ "\n",
497
+ " # gen grad\n",
498
+ " # same logoc as the `gen param` scetion\n",
499
+ " for step, step_grads in enumerate(grads):\n",
500
+ " dist_grad = step_grads[i].view(-1)[local_range[0]:local_range[1]]\n",
501
+ " dist_grads[step].append(dist_grad)\n",
502
+ "\n",
503
+ " return dist_params, dist_grads, global_buffer_size, dist_metas\n",
504
+ "\n",
505
+ "\n",
506
+ "def test_muon_dist(dp_size, tp_size):\n",
507
+ "\n",
508
+ " world_size = dist.get_world_size()\n",
509
+ " rank = dist.get_rank()\n",
510
+ " assert dp_size * tp_size == world_size\n",
511
+ "\n",
512
+ " # init dist group\n",
513
+ " for i in range(tp_size):\n",
514
+ " # decide the tp group based on grod of size `tp_size`\n",
515
+ " ranks = range(i, world_size, tp_size)\n",
516
+ " group = dist.new_group(ranks)\n",
517
+ " # each rank finds its groups\n",
518
+ " if rank in ranks:\n",
519
+ " # groups are passed as instructions\n",
520
+ " dist_group = group\n",
521
+ " # init tp group\n",
522
+ " for i in range(dp_size):\n",
523
+ " ranks = range(i * tp_size, (i + 1) * tp_size)\n",
524
+ " group = dist.new_group(ranks)\n",
525
+ " if rank in ranks:\n",
526
+ " tp_group = group\n",
527
+ "\n",
528
+ " print_rank_0(\"process group initialized\")\n",
529
+ "\n",
530
+ " params_ref, grads_ref = gen_param_and_grads()\n",
531
+ " params_test, grads_test = gen_param_and_grads()\n",
532
+ " tp_dims = [0, 1, -1, 1, 0]\n",
533
+ "\n",
534
+ " # global_buffer_size is the padded buffer size of the dp group where the current rank belongs to\n",
535
+ " params_test, grads_test, global_buffer_size, dist_metas \\\n",
536
+ " = distribute_params(params_test, grads_test, tp_dims, dist_group, tp_group)\n",
537
+ "\n",
538
+ " muon_args = {\n",
539
+ " \"use_muon\": True,\n",
540
+ " \"lr\": 0.1,\n",
541
+ " \"momentum\": 0.9,\n",
542
+ " \"nesterov\": True,\n",
543
+ " \"ns_steps\": 5,\n",
544
+ " \"weight_decay\": 0.1,\n",
545
+ " }\n",
546
+ "\n",
547
+ " # gen params\n",
548
+ " ref_param_groups = [{\n",
549
+ " \"params\": params_ref,\n",
550
+ " **muon_args\n",
551
+ " }]\n",
552
+ " test_param_groups = [{\n",
553
+ " \"params\": params_test,\n",
554
+ " **muon_args\n",
555
+ " }]\n",
556
+ "\n",
557
+ " ref_muon = Muon(ref_param_groups)\n",
558
+ " test_muon = Muon(test_param_groups)\n",
559
+ " test_muon.enable_distributed_mode([[(global_buffer_size, 0)]], dist_group, tp_group, dist_metas)\n",
560
+ "\n",
561
+ " for step in range(10):\n",
562
+ "\n",
563
+ " # add grad\n",
564
+ " for i, grad in enumerate(grads_ref[step]):\n",
565
+ " params_ref[i].grad = grad.clone()\n",
566
+ " for i, grad in enumerate(grads_test[step]):\n",
567
+ " params_test[i].grad = grad.clone()\n",
568
+ " # step\n",
569
+ " ref_muon.step()\n",
570
+ " test_muon.step()\n",
571
+ " # distribute ref params\n",
572
+ " dist_ref_params, _, _, _ = distribute_params(params_ref, [], tp_dims, dist_group, tp_group)\n",
573
+ " # verify\n",
574
+ " for i, params_x2 in enumerate(zip(dist_ref_params, params_test)):\n",
575
+ " assert (params_x2[0] == params_x2[1]).all(), f\"rank {rank} param {i} verify failed\"\n",
576
+ " print_rank_0(f\" - step {step} verify passed\")\n",
577
+ "\n",
578
+ " print_rank_0(f\"dist dp = {dp_size} tp = {tp_size} test passed\")\n",
579
+ "\n",
580
+ "\n",
581
+ "\n",
582
+ "def run_process(rank, world_size):\n",
583
+ " os.environ['MASTER_ADDR'] = 'localhost'\n",
584
+ " os.environ['MASTER_PORT'] = '12355'\n",
585
+ " dist.init_process_group(\"gloo\", rank=rank, world_size=world_size)\n",
586
+ " test_muon_dist(dp_size=4, tp_size=2)\n",
587
+ " test_muon_dist(dp_size=2, tp_size=4)\n",
588
+ " dist.destroy_process_group()\n",
589
+ "\n",
590
+ "if __name__ == \"__main__\":\n",
591
+ " world_size = 8\n",
592
+ " os.environ['CUDA_DEVICE_MAX_CONNECTIONS'] = '1'\n",
593
+ " mp.spawn(run_process, args=(world_size,), nprocs=world_size, join=True)\n",
594
+ " print(\"\\\\n✅ All tests passed!\")\n",
595
+ "'''\n",
596
+ "\n",
597
+ "# Step 2: Write to file\n",
598
+ "with open('/content/test_muon_dist.py', 'w') as f:\n",
599
+ " f.write(test_code)\n",
600
+ "\n",
601
+ "print(\"✅ Test code written to /content/test_muon_dist.py\")\n",
602
+ "print(\"\\nNow run it with:\")\n",
603
+ "print(\"!python /content/test_muon_dist.py\")"
604
+ ]
605
+ },
606
+ {
607
+ "cell_type": "code",
608
+ "execution_count": null,
609
+ "metadata": {
610
+ "colab": {
611
+ "base_uri": "https://localhost:8080/"
612
+ },
613
+ "id": "18Xbd3ovSDxx",
614
+ "outputId": "4b82d48c-455f-4278-988f-32916a87d336"
615
+ },
616
+ "outputs": [
617
+ {
618
+ "name": "stdout",
619
+ "output_type": "stream",
620
+ "text": [
621
+ "[Gloo] Rank 6 is connected to 7 peer ranks. Expected number of connected peer ranks is : 7\n",
622
+ "[Gloo] Rank 5 is connected to 7 peer ranks. Expected number of connected peer ranks is : 7\n",
623
+ "[Gloo] Rank 1 is connected to 7 peer ranks. Expected number of connected peer ranks is : 7\n",
624
+ "[Gloo] Rank 3 is connected to 7 peer ranks. Expected number of connected peer ranks is : 7\n",
625
+ "[Gloo] Rank 4 is connected to 7 peer ranks. Expected number of connected peer ranks is : 7\n",
626
+ "[Gloo] Rank 2 is connected to 7 peer ranks. Expected number of connected peer ranks is : 7\n",
627
+ "[Gloo] Rank 7 is connected to 7 peer ranks. Expected number of connected peer ranks is : 7\n",
628
+ "[Gloo] Rank 0 is connected to 7 peer ranks. Expected number of connected peer ranks is : 7\n",
629
+ "[Gloo] Rank 0 is connected to 3 peer ranks. Expected number of connected peer ranks is : 3\n",
630
+ "[Gloo] Rank 1 is connected to 3 peer ranks. Expected number of connected peer ranks is : 3\n",
631
+ "[Gloo] Rank 2 is connected to 3 peer ranks. Expected number of connected peer ranks is : 3\n",
632
+ "[Gloo] Rank 3 is connected to 3 peer ranks. Expected number of connected peer ranks is : 3\n",
633
+ "[Gloo] Rank 0 is connected to 3 peer ranks. Expected number of connected peer ranks is : 3\n",
634
+ "[Gloo] Rank 1 is connected to 3 peer ranks. Expected number of connected peer ranks is : 3\n",
635
+ "[Gloo] Rank 3 is connected to 3 peer ranks. Expected number of connected peer ranks is : 3\n",
636
+ "[Gloo] Rank 1 is connected to 1 peer ranks. Expected number of connected peer ranks is : 1\n",
637
+ "[Gloo] Rank 2 is connected to 3 peer ranks. Expected number of connected peer ranks is : 3\n",
638
+ "[Gloo] Rank 0 is connected to 1 peer ranks. Expected number of connected peer ranks is : 1\n",
639
+ "[Gloo] Rank 1 is connected to 1 peer ranks. Expected number of connected peer ranks is : 1\n",
640
+ "[Gloo] Rank 0 is connected to 1 peer ranks. Expected number of connected peer ranks is : 1\n",
641
+ "[Gloo] Rank 0 is connected to 1 peer ranks. Expected number of connected peer ranks is : 1\n",
642
+ "process group initialized\n",
643
+ "[Gloo] Rank 1 is connected to 1 peer ranks. Expected number of connected peer ranks is : 1\n",
644
+ "[Gloo] Rank 1 is connected to 1 peer ranks. Expected number of connected peer ranks is : 1\n",
645
+ "[Gloo] Rank 0 is connected to 1 peer ranks. Expected number of connected peer ranks is : 1\n",
646
+ " - step 0 verify passed\n",
647
+ " - step 1 verify passed\n",
648
+ " - step 2 verify passed\n",
649
+ " - step 3 verify passed\n",
650
+ " - step 4 verify passed\n",
651
+ " - step 5 verify passed\n",
652
+ " - step 6 verify passed\n",
653
+ " - step 7 verify passed\n",
654
+ " - step 8 verify passed\n",
655
+ "[Gloo] Rank 0 is connected to 1 peer ranks. Expected number of connected peer ranks is : 1\n",
656
+ "[Gloo] Rank 1 is connected to 1 peer ranks. Expected number of connected peer ranks is : 1\n",
657
+ "[Gloo] Rank 1 is connected to 1 peer ranks. Expected number of connected peer ranks is : 1\n",
658
+ "[Gloo] Rank 0 is connected to 1 peer ranks. Expected number of connected peer ranks is : 1\n",
659
+ "[Gloo] Rank 1 is connected to 1 peer ranks. Expected number of connected peer ranks is : 1\n",
660
+ "[Gloo] Rank 0 is connected to 1 peer ranks. Expected number of connected peer ranks is : 1\n",
661
+ " - step 9 verify passed\n",
662
+ "dist dp = 4 tp = 2 test passed\n",
663
+ "[Gloo] Rank 1 is connected to 1 peer ranks. Expected number of connected peer ranks is : 1\n",
664
+ "[Gloo] Rank 0 is connected to 1 peer ranks. Expected number of connected peer ranks is : 1\n",
665
+ "[Gloo] Rank 0 is connected to 3 peer ranks. Expected number of connected peer ranks is : 3\n",
666
+ "[Gloo] Rank 2 is connected to 3 peer ranks. Expected number of connected peer ranks is : 3\n",
667
+ "[Gloo] Rank 1 is connected to 3 peer ranks. Expected number of connected peer ranks is : 3\n",
668
+ "[Gloo] Rank 3 is connected to 3 peer ranks. Expected number of connected peer ranks is : 3\n",
669
+ "[Gloo] Rank 0 is connected to 3 peer ranks. Expected number of connected peer ranks is : 3\n",
670
+ "process group initialized\n",
671
+ "[Gloo] Rank 3 is connected to 3 peer ranks. Expected number of connected peer ranks is : 3\n",
672
+ "[Gloo] Rank 2 is connected to 3 peer ranks. Expected number of connected peer ranks is : 3\n",
673
+ "[Gloo] Rank 1 is connected to 3 peer ranks. Expected number of connected peer ranks is : 3\n",
674
+ " - step 0 verify passed\n",
675
+ " - step 1 verify passed\n",
676
+ " - step 2 verify passed\n",
677
+ " - step 3 verify passed\n",
678
+ " - step 4 verify passed\n",
679
+ " - step 5 verify passed\n",
680
+ " - step 6 verify passed\n",
681
+ " - step 7 verify passed\n",
682
+ " - step 8 verify passed\n",
683
+ " - step 9 verify passed\n",
684
+ "dist dp = 2 tp = 4 test passed\n",
685
+ "\n",
686
+ "✅ All tests passed!\n"
687
+ ]
688
+ }
689
+ ],
690
+ "source": [
691
+ "!python /content/test_muon_dist.py"
692
+ ]
693
+ }
694
+ ],
695
+ "metadata": {
696
+ "colab": {
697
+ "provenance": []
698
+ },
699
+ "kernelspec": {
700
+ "display_name": "Python 3 (ipykernel)",
701
+ "language": "python",
702
+ "name": "python3"
703
+ },
704
+ "language_info": {
705
+ "codemirror_mode": {
706
+ "name": "ipython",
707
+ "version": 3
708
+ },
709
+ "file_extension": ".py",
710
+ "mimetype": "text/x-python",
711
+ "name": "python",
712
+ "nbconvert_exporter": "python",
713
+ "pygments_lexer": "ipython3",
714
+ "version": "3.11.7"
715
+ }
716
+ },
717
+ "nbformat": 4,
718
+ "nbformat_minor": 4
719
+ }
distributed_muon_cpu.py ADDED
@@ -0,0 +1,552 @@
1
+ import os
2
+ import sys
3
+ import torch
4
+ import torch.distributed as dist
5
+ import torch.multiprocessing as mp
6
+ import math
7
+ from typing import Tuple, Dict
8
+
9
+
10
+ # copy from https://github.com/KellerJordan/Muon/tree/master
11
+ # @torch.compile
12
+ def zeropower_via_newtonschulz5(G, steps):
13
+ """
14
+ Newton-Schulz iteration to compute the zeroth power / orthogonalization of G. We opt to use a
15
+ quintic iteration whose coefficients are selected to maximize the slope at zero. For the purpose
16
+ of minimizing steps, it turns out to be empirically effective to keep increasing the slope at
17
+ zero even beyond the point where the iteration no longer converges all the way to one everywhere
18
+ on the interval. This iteration therefore does not produce UV^T but rather something like US'V^T
19
+ where S' is diagonal with S_{ii}' ~ Uniform(0.5, 1.5), which turns out not to hurt model
20
+ performance at all relative to UV^T, where USV^T = G is the SVD.
21
+ """
22
+ assert len(G.shape) == 2
23
+ a, b, c = (3.4445, -4.7750, 2.0315)
24
+ X = G
25
+ if G.size(0) > G.size(1):
26
+ X = X.T
27
+
28
+ # Ensure spectral norm is at most 1
29
+ X = X / (X.norm() + 1e-7)
30
+ # Perform the NS iterations
31
+ for _ in range(steps):
32
+ A = X @ X.T
33
+ B = b * A + c * A @ A # adapted from suggestion by @jxbz, @leloykun, and @YouJiacheng
34
+ X = a * X + B @ X
35
+
36
+ if G.size(0) > G.size(1):
37
+ X = X.T
38
+ return X
39
+
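# Illustrative sanity check (not part of the original PoC; the helper name and sizes are
# arbitrary): after a few Newton-Schulz steps the result should be roughly orthogonal,
# i.e. X @ X.T close to the identity -- only roughly, because the quintic iteration leaves
# the singular values scattered around ~Uniform(0.5, 1.5).
def _ns_orthogonality_check(n: int = 64, steps: int = 5) -> float:
    G = torch.randn(n, n)
    X = zeropower_via_newtonschulz5(G, steps=steps)
    # average deviation of X @ X.T from the identity; small, but not exactly zero
    return ((X @ X.T - torch.eye(n)).norm() / n).item()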
40
+ def normalize_range(range: Tuple[int, int], start):
41
+ return (range[0] - start, range[1] - start)
42
+
43
+ class MuonDistMeta:
44
+
45
+ # which buffer and bucket param belongs to
46
+ buffer_idx: int = 0
47
+ bucket_idx: int = 0
48
+ # param shape after tp
49
+ shape: torch.Size = None
50
+ # param location in global buffer
51
+ global_range: Tuple[int, int] = None
52
+ tp_split_dim: int = -1
53
+ # param location in global buffer (current dp slice)
54
+ local_range: Tuple[int, int] = None
55
+
56
+ def __init__(self, buffer_idx: int, bucket_idx: int, shape: torch.Size, global_range: Tuple[int, int], tp_split_dim: int):
57
+ self.buffer_idx = buffer_idx
58
+ self.bucket_idx = bucket_idx
59
+ self.shape = shape
60
+ self.global_range = global_range
61
+ self.tp_split_dim = tp_split_dim
62
+
63
+ def set_local_buffer_range(self, local_buffer_range: Tuple[int, int]):
64
+ start = max(self.global_range[0], local_buffer_range[0])
65
+ end = min(self.global_range[1], local_buffer_range[1])
66
+ self.local_range = (start, end) if start < end else (local_buffer_range[0], local_buffer_range[0])
67
+
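# Worked example (hypothetical numbers): a param occupying global_range = (300, 700) on a rank
# whose local_buffer_range is (500, 1000) ends up with local_range = (500, 700); a rank owning
# (1000, 1500) intersects nothing and gets the empty range (1000, 1000), so it skips this param.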
68
+ # adjust LR based on: https://github.com/MoonshotAI/Moonlight
69
+ def adjust_lr_wd_for_muon(lr, matched_adamw_rms, param_shape):
70
+ A, B = param_shape[:2]
71
+ adjusted_ratio = math.sqrt(max(A, B)) * matched_adamw_rms
72
+ adjusted_lr = lr * adjusted_ratio
73
+ return adjusted_lr
74
+
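# Worked example (hypothetical numbers): a (1024, 4096) weight with lr = 0.02 and
# matched_adamw_rms = 0.2 gets adjusted_lr = 0.02 * sqrt(4096) * 0.2 = 0.256.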
75
+ # copy from https://github.com/KellerJordan/Muon/tree/master and support distributed solution
76
+ class Muon(torch.optim.Optimizer):
77
+ """
78
+ Muon - MomentUm Orthogonalized by Newton-schulz
79
+ Muon internally runs standard SGD-momentum, and then performs an orthogonalization post-
80
+ processing step, in which each 2D parameter's update is replaced with the nearest orthogonal
81
+ matrix. To efficiently orthogonalize each update, we use a Newton-Schulz iteration, which has
82
+ the advantage that it can be stably run in bfloat16 on the GPU.
83
+ Some warnings:
84
+ - We believe this optimizer is unlikely to work well for training with small batch size.
85
+ - We believe it may not work well for finetuning pretrained models, but we haven't tested this.
86
+ Arguments:
87
+ param_groups: The parameters to be optimized.
88
+ lr: The learning rate. The updates will have spectral norm of `lr`. (0.02 is a good default)
89
+ momentum: The momentum used by the internal SGD. (0.95 is a good default)
90
+ matched_adamw_rms: The AdamW Update RMS that Muon is designed to match. (0.2~0.4 recommended)
91
+ nesterov: Whether to use Nesterov-style momentum in the internal SGD. (recommended)
92
+ ns_steps: The number of Newton-Schulz iterations to run. (5 is probably always enough)
93
+ Parameters that are {0, 1}-D or are detected as being the embed or lm_head will be optimized by AdamW as well.
94
+ adamw_betas: The betas for the internal AdamW.
95
+ adamw_eps: The epsilon for the internal AdamW.
96
+ adamw_wd: The weight decay for the internal AdamW.
97
+ """
98
+ def __init__(self, param_groups, lr=2e-2, weight_decay=0.1,
99
+ matched_adamw_rms=0.2, momentum=0.95, nesterov=True, ns_steps=5,
100
+ adamw_betas=(0.95, 0.95), adamw_eps=1e-8):
101
+
102
+ defaults = dict(lr=lr, weight_decay=weight_decay,
103
+ matched_adamw_rms=matched_adamw_rms,
104
+ momentum=momentum, nesterov=nesterov, ns_steps=ns_steps,
105
+ adamw_betas=adamw_betas, adamw_eps=adamw_eps,)
106
+
107
+ super().__init__(param_groups, defaults)
108
+ self.distributed_mode = False
109
+
110
+
111
+ def enable_distributed_mode(self, global_buffer_sizes, dist_group, tp_group,
112
+ dist_metas: Dict[torch.nn.Parameter, MuonDistMeta]):
113
+ """
114
+ enable distributed mode
115
+ Args:
116
+ global_buffer_sizes: per-buffer lists of (padded global bucket size, offset) pairs
117
+ dist_group: optimizer (dp / ZeRO) sharding group
118
+ tp_group: tensor parallel group for the params
119
+ dist_metas: MuonDistMeta for every param
120
+ """
121
+
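# Shape of `global_buffer_sizes`, matching how the test below calls this method:
# a list of buffers, each a list of (padded global bucket size, offset) pairs,
# e.g. [[(123456, 0)]] for a single buffer holding one bucket that starts at offset 0.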
122
+ self.global_buffer_sizes = global_buffer_sizes
123
+ self.dist_group = dist_group
124
+ self.tp_group = tp_group
125
+ self.dist_metas = dist_metas
126
+
127
+ world_size = dist.get_world_size(dist_group)
128
+ rank = dist.get_rank(dist_group)
129
+
130
+ # calc local buffer range
131
+ self.local_buffer_sizes = []
132
+ self.local_buffer_ranges = []
133
+ # The outer loop is for different parameter groups (e.g., weights vs. biases)
134
+ for global_bucket_sizes in global_buffer_sizes: # <--- rename `global_bucket_sizes`
135
+ local_bucket_sizes = []
136
+ local_bucket_ranges = []
137
+
138
+ # The inner loop is for the different buckets within a single group
139
+ for (global_bucket_size, bucket_offset) in global_bucket_sizes:
140
+ # calculate the local range for THIS specific bucket
141
+ assert global_bucket_size % world_size == 0
142
+ local_bucket_size = global_bucket_size // world_size
143
+ # Renaming here makes the logic so much clearer
144
+ local_bucket_start = local_bucket_size * rank + bucket_offset
145
+ local_buffer_range = (local_bucket_start, local_bucket_start + local_bucket_size)
146
+ local_bucket_sizes.append(local_bucket_size)
147
+ local_bucket_ranges.append(local_buffer_range)
148
+
149
+ self.local_buffer_sizes.append(local_bucket_sizes)
150
+ self.local_buffer_ranges.append(local_bucket_ranges)
151
+
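# Worked example (hypothetical numbers): with 4 ranks in the dist group, a padded bucket of
# 1000 elements at offset 0 gives local_bucket_size = 250, and rank 2 gets
# local_buffer_range = (500, 750).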
152
+ # calc local range for params
153
+ for dist_meta in dist_metas.values():
154
+ local_buffer_range = self.local_buffer_ranges[dist_meta.buffer_idx][dist_meta.bucket_idx]
155
+ dist_meta.set_local_buffer_range(local_buffer_range)
156
+
157
+ self.distributed_mode = True
158
+
159
+ def step(self):
160
+ first_param = self.param_groups[0]['params'][0]
161
+ device = first_param.device
162
+ dtype = torch.bfloat16
163
+
164
+ ns_inputs = {}
165
+
166
+ # update muon momentum first
167
+ # `self.param_groups` is already sharded
168
+ for group in self.param_groups:
169
+
170
+ if not group.get("use_muon", False):
171
+ continue
172
+
173
+ momentum = group['momentum']
174
+ params = group["params"]
175
+
176
+ for p in params:
177
+
178
+ g = p.grad
179
+ assert g is not None
180
+ # 1-dim grad for distributed mode
181
+ assert self.distributed_mode or g.dim() == 2
182
+
183
+ # prepare muon buffer in state
184
+ state = self.state[p]
185
+ if not "muon_buffer" in state:
186
+ state["muon_buffer"] = torch.zeros_like(g)
187
+ buf = state["muon_buffer"]
188
+ buf.mul_(momentum).add_(g)
189
+
190
+ # save to ns input
191
+ g = g.add(buf, alpha=momentum) if group['nesterov'] else buf
192
+ ns_inputs[p] = g.bfloat16()
193
+
194
+ # rewrite ns_inputs if distributed
195
+ """
196
+ the four-step "acrobatic" journey of the ns_inputs data:
197
+
198
+ 1. **DP `all_gather`**: (ZeRO) Gather all the sharded pieces from your data-parallel "column" to re-create your **full TP slice**.
199
+ 2. **TP `all_gather`**: Gather all the TP slices from your tensor-parallel "row" to re-create the **full, 100% complete matrix**.
200
+ 3. *(...Run the math on the full matrix...)*
201
+ 4. **TP `shard`**: Shard the full `update` matrix back down to your **local TP slice**.
202
+ 5. **DP `shard`**: (ZeRO) Shard that TP slice *again* back down to the **local DP/ZeRO slice** that you're responsible for.
203
+
204
+ """
205
+ if self.distributed_mode:
206
+
207
+ # initialize buffers
208
+ # changed the variable names to `local_bucket_size` and `global_bucket_size` for clarity
209
+ ns_input_local_buffers = [
210
+ [ torch.empty((local_bucket_size), device=device, dtype=dtype)
211
+ for local_bucket_size in local_bucket_sizes ]
212
+ for local_bucket_sizes in self.local_buffer_sizes
213
+ ]
214
+ ns_input_global_buffers = [
215
+ [ torch.empty((global_bucket_size), device=device, dtype=dtype)
216
+ for (global_bucket_size, bucket_offset) in global_bucket_sizes ]
217
+ for global_bucket_sizes in self.global_buffer_sizes
218
+ ]
219
+
220
+ # fill ns input data to local buffer
221
+ # looping through all params in local rank, ok.
222
+ for param, ns_input in ns_inputs.items():
223
+ dist_meta = self.dist_metas[param]
224
+ # create a reference into `ns_input_local_buffers`
225
+ # the update is in local rank, so we only need one `for` loop
226
+ ns_input_local_buffer = ns_input_local_buffers[dist_meta.buffer_idx][dist_meta.bucket_idx]
227
+ local_buffer_range = self.local_buffer_ranges[dist_meta.buffer_idx][dist_meta.bucket_idx]
228
+ local_range = normalize_range(dist_meta.local_range, local_buffer_range[0]) # local_range in global_range
229
+ # copy data into this `ns_input_local_buffer` memory
230
+ # because dist.all_gather requires a single, physically contiguous block of memory to work efficiently.
231
+ ns_input_local_buffer[local_range[0]:local_range[1]].copy_(ns_input.view(-1))
232
+
233
+ # all gather buffers: one bucket at a time. -- the "shipping" phase
234
+ for ns_input_global_buffer, ns_input_local_buffer in zip(ns_input_global_buffers, ns_input_local_buffers):
235
+ for ns_input_global_bucket, ns_input_local_bucket in zip(ns_input_global_buffer, ns_input_local_buffer):
236
+ dist.all_gather_into_tensor(ns_input_global_bucket, ns_input_local_bucket, group=self.dist_group)
237
+
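# Note on the collective above: all_gather_into_tensor writes rank r's local bucket into the
# r-th contiguous chunk of the global bucket, which is exactly the layout that the
# global_range / local_range bookkeeping assumes.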
238
+ # overwrite ns input with the `all_gather`-ed `ns_inputs` -- the "unpacking" phase
239
+ # this is the "opposite" of filling ns input data to local buffer
240
+ for p in ns_inputs.keys():
241
+ dist_meta = self.dist_metas[p]
242
+ ns_input_global_buffer = ns_input_global_buffers[dist_meta.buffer_idx][dist_meta.bucket_idx]
243
+ offset = self.global_buffer_sizes[dist_meta.buffer_idx][dist_meta.bucket_idx][1]
244
+ global_range = normalize_range(dist_meta.global_range, offset)
245
+
246
+ #ns_inputs[p] = ns_input_global_buffer[global_range[0]:global_range[1]].view(-1)
247
+ ## bug fix 👆🏻 -- the commented line above flattened the gathered slice with .view(-1); the fix below restores the 2D shape
249
+ # Unpack the 1D slice of data
250
+ unpacked_data = ns_input_global_buffer[global_range[0]:global_range[1]]
251
+
252
+ # THIS IS THE FIX: Reshape it to its correct 2D shape, not view(-1)
253
+ ns_inputs[p] = unpacked_data.view(dist_meta.shape)
254
+
255
+ # set tp info
256
+ tp_world_size = dist.get_world_size(self.tp_group)
257
+ tp_rank = dist.get_rank(self.tp_group)
258
+
259
+ # update muon momentum first
260
+ for group in self.param_groups:
261
+
262
+ if not group.get('use_muon', False):
263
+ continue
264
+
265
+ lr = group["lr"]
266
+ ns_steps = group["ns_steps"]
267
+ weight_decay = group["weight_decay"]
268
+ matched_adamw_rms = group["matched_adamw_rms"]
269
+ params = group["params"] # <-- add this
270
+
271
+ for p in params:
272
+
273
+ ns_input = ns_inputs[p]
274
+ tp_split_dim = -1
275
+
276
+ if self.distributed_mode:
277
+ dist_meta = self.dist_metas[p]
278
+ tp_split_dim = dist_meta.tp_split_dim
279
+
280
+ # gather tensor parallel ( if tp )
281
+ if tp_split_dim != -1:
282
+ ns_input_shards = [ torch.empty_like(ns_input) for _ in range(tp_world_size) ]
283
+ dist.all_gather(ns_input_shards, ns_input, self.tp_group)
284
+ ns_input = torch.cat(ns_input_shards, dim=tp_split_dim)
285
+
286
+ # calc update
287
+ update = zeropower_via_newtonschulz5(ns_input, steps=ns_steps)
288
+
289
+ # only local tp part
290
+ # this is effectively "sharding" the Newton-Schulz-processed update,
291
+ # keeping only your assigned piece and discarding the rest
292
+ if tp_split_dim != -1:
293
+ update = update.chunk(tp_world_size, dim=tp_split_dim)[tp_rank]
294
+
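# e.g. with tp_size = 2 and a (100, 100) param split on dim 0 (as in the test below): each
# rank's ns_input slice is (50, 100); all_gather + cat rebuilds the full (100, 100) matrix for
# Newton-Schulz, and chunk(2, dim=0)[tp_rank] hands back this rank's (50, 100) share of the
# orthogonalized update.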
295
+ # only local dp buffer part
296
+ if self.distributed_mode:
297
+ # local range in global range
298
+ # unpacking the tp sharded update to dp sharded update
299
+ local_range = normalize_range(dist_meta.local_range, dist_meta.global_range[0])
300
+ update = update.reshape(-1)[local_range[0]:local_range[1]]
301
+
302
+ # apply weight decay
303
+ p.data.mul_(1 - lr*weight_decay)
304
+
305
+ # adjust lr and apply update
306
+ adjusted_lr = adjust_lr_wd_for_muon(lr, matched_adamw_rms, ns_input.shape)
307
+ p.data.add_(update, alpha=-adjusted_lr)
308
+
309
+ # use adam for other params
310
+ for group in self.param_groups:
311
+
312
+ if group.get('use_muon', False):
313
+ continue
314
+
315
+ # init step
316
+ if 'step' in group:
317
+ group['step'] += 1
318
+ else:
319
+ group['step'] = 1
320
+
321
+ step = group['step']
322
+ params = group["params"]
323
+ lr = group['lr']
324
+ weight_decay = group['weight_decay']
325
+ beta1, beta2 = group['adamw_betas']
326
+ eps = group['adamw_eps']
327
+
328
+ for p in params:
329
+
330
+ g = p.grad
331
+ assert g is not None
332
+ state = self.state[p]
333
+
334
+ if len(state) == 0:
335
+ state['adamw_exp_avg'] = torch.zeros_like(g)
336
+ state['adamw_exp_avg_sq'] = torch.zeros_like(g)
337
+
338
+ buf1 = state['adamw_exp_avg']
339
+ buf2 = state['adamw_exp_avg_sq']
340
+ buf1.lerp_(g, 1-beta1)
341
+ buf2.lerp_(g.square(), 1-beta2)
342
+
343
+ g = buf1 / (eps + buf2.sqrt())
344
+
345
+ bias_correction1 = 1 - beta1**step
346
+ bias_correction2 = 1 - beta2**step
347
+ scale = bias_correction1 / bias_correction2**0.5
348
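# Dividing by `scale` below reproduces the standard AdamW bias correction:
# lr / scale * m / (eps + sqrt(v)) equals lr * (m / bc1) / sqrt(v / bc2) up to where eps
# enters, i.e. the usual m-hat / sqrt(v-hat) update.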
+ p.data.mul_(1 - lr * weight_decay)
349
+ p.data.add_(g, alpha=-lr/scale)
350
+
351
+
352
+ ##--------------- tests/unit_tests/test_optimizer_muon.py -----------------
353
+ import os
354
+
355
+ import torch
356
+ import torch.distributed as dist
357
+
358
+ #from megatron.core.optimizer.muon import Muon, MuonDistMeta, normalize_range
359
+
360
+ def is_rank_0():
361
+ return torch.distributed.get_rank() == 0
362
+
363
+ def print_rank_0(*args):
364
+ if is_rank_0():
365
+ print(*args)
366
+
367
+ def cdiv(x: int, y: int):
368
+ return (x + y - 1) // y
369
+
370
+ def gen_param_and_grads():
371
+
372
+ # reset manual seed
373
+ torch.manual_seed(0)
374
+ device = 'cpu'
375
+ dtype = torch.float32
376
+
377
+ # gen params
378
+ params = [ torch.randn(shape, device=device, dtype=dtype) for shape in [
379
+ (100, 100), (124, 324), (456, 124), (676, 876), (128, 128), ] ]
380
+
381
+ # gen grads [ [ grad-list ] * step ]
382
+ grads = [ [ torch.randn_like(param) for param in params ] for _ in range(10) ]
383
+
384
+ return params, grads
385
+
386
+ def distribute_params(params, grads, tp_dims, dist_group, tp_group):
387
+ """ 将 param 进行 dist & tp shard, 仅保留自己的一部分 """
388
+
389
+ params = params.copy()
390
+ grads = [ step_grads.copy() for step_grads in grads ]
391
+
392
+ # tp dist
393
+ tp_size = dist.get_world_size(tp_group)
394
+ tp_rank = dist.get_rank(tp_group)
395
+ for i, param in enumerate(params):
396
+ tp_dim = tp_dims[i]
397
+ if tp_dim == -1:
398
+ continue
399
+ # Shard the parameter tensor along the `tp_dim` dimension.
400
+ assert param.shape[tp_dim] % tp_size == 0
401
+ local_range_start = param.shape[tp_dim] // tp_size * tp_rank
402
+ # range of the shard based on the rank of the current GPU in the given `tp_group`
403
+ local_range_end = param.shape[tp_dim] // tp_size * (tp_rank + 1)
404
+ # each GPU gets `[local_range_start:local_range_end, :] ` rows or `[:, local_range_start:local_range_end]` columns
405
+ params[i] = param[local_range_start:local_range_end, :] if tp_dim == 0 else \
406
+ param[:, local_range_start:local_range_end].contiguous()
407
+ # same logic applies to sharding the gradients for the current layer(param)
408
+ for step_grads in grads:
409
+ step_grads[i] = step_grads[i][local_range_start:local_range_end, :] if tp_dim == 0 else \
410
+ step_grads[i][:, local_range_start:local_range_end].contiguous()
411
+
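# e.g. (from the test below) a (124, 324) param with tp_dim = 1 and tp_size = 2:
# tp rank 0 keeps columns [0:162] and tp rank 1 keeps columns [162:324].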
412
+ # distributed
413
+ world_size = dist.get_world_size(dist_group)
414
+ rank = dist.get_rank(dist_group)
415
+
416
+ # global as the given DP group
417
+ # "global" here means "global to the TP group's worth of parameters."
418
+ global_buffer_size = sum(param.numel() for param in params)
419
+ local_buffer_size = cdiv(global_buffer_size, world_size)
420
+ # deciding the shard range for this rank
421
+ local_buffer_range = (local_buffer_size * rank, local_buffer_size * (rank + 1))
422
+ # padded global_buffer_size
423
+ global_buffer_size = local_buffer_size * world_size # fix global buffer size
424
+
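# Worked example (hypothetical numbers): 10_001 total elements over 4 dp ranks gives
# local_buffer_size = cdiv(10_001, 4) = 2_501, so rank 1 owns (2_501, 5_002) and the padded
# global_buffer_size becomes 2_501 * 4 = 10_004.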
425
+ numel_acc = 0
426
+ dist_params = []
427
+ dist_grads = [[] for _ in grads]
428
+ dist_metas = {}
429
+ for i, param in enumerate(params):
430
+
431
+ # gen meta
432
+ # align global buffer index(range) with local buffer index(range)
433
+ # see handwritten diagram for more details
434
+ numel = param.numel()
435
+ dist_meta = MuonDistMeta(0, 0, param.shape, (numel_acc, numel_acc + numel), tp_dims[i])
436
+ dist_meta.set_local_buffer_range(local_buffer_range)
437
+ numel_acc += numel
438
+
439
+ # skip if no element in this shard
440
+ if dist_meta.local_range[0] == dist_meta.local_range[1]:
441
+ continue
442
+
443
+ # gen param
444
+
445
+ # Convert the ABSOLUTE slice range (from the global virtual buffer)
446
+ # into a RELATIVE slice range (local to just this one parameter).
447
+ local_range = normalize_range(dist_meta.local_range, dist_meta.global_range[0])
448
+
449
+ # 1. Flatten the 2D parameter tensor into a 1D vector.
450
+ # 2. Use the relative range to slice out the piece this GPU is responsible for storing.
451
+ dist_param = param.view(-1)[local_range[0]:local_range[1]]
452
+ dist_params.append(dist_param)
453
+ dist_metas[dist_param] = dist_meta
454
+
455
+ # gen grad
456
+ # same logic as the `gen param` section
457
+ for step, step_grads in enumerate(grads):
458
+ dist_grad = step_grads[i].view(-1)[local_range[0]:local_range[1]]
459
+ dist_grads[step].append(dist_grad)
460
+
461
+ return dist_params, dist_grads, global_buffer_size, dist_metas
462
+
463
+
464
+ def test_muon_dist(dp_size, tp_size):
465
+
466
+ world_size = dist.get_world_size()
467
+ rank = dist.get_rank()
468
+ assert dp_size * tp_size == world_size
469
+
470
+ # init dist group
471
+ for i in range(tp_size):
472
+ # decide the dist (dp) group: ranks strided by `tp_size`, i.e. one rank from each tp group
473
+ ranks = range(i, world_size, tp_size)
474
+ group = dist.new_group(ranks)
475
+ # each rank finds its groups
476
+ if rank in ranks:
477
+ # groups are passed as instructions
478
+ dist_group = group
479
+ # init tp group
480
+ for i in range(dp_size):
481
+ ranks = range(i * tp_size, (i + 1) * tp_size)
482
+ group = dist.new_group(ranks)
483
+ if rank in ranks:
484
+ tp_group = group
485
+
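# e.g. with world_size = 8, dp_size = 4, tp_size = 2:
#   dist (dp / ZeRO) groups: {0, 2, 4, 6} and {1, 3, 5, 7}  (ranks strided by tp_size)
#   tp groups:               {0, 1}, {2, 3}, {4, 5}, {6, 7}  (consecutive ranks)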
486
+ print_rank_0("process group initialized")
487
+
488
+ params_ref, grads_ref = gen_param_and_grads()
489
+ params_test, grads_test = gen_param_and_grads()
490
+ tp_dims = [0, 1, -1, 1, 0]
491
+
492
+ # global_buffer_size is the padded buffer size of the dp group that the current rank belongs to
493
+ params_test, grads_test, global_buffer_size, dist_metas \
494
+ = distribute_params(params_test, grads_test, tp_dims, dist_group, tp_group)
495
+
496
+ muon_args = {
497
+ "use_muon": True,
498
+ "lr": 0.1,
499
+ "momentum": 0.9,
500
+ "nesterov": True,
501
+ "ns_steps": 5,
502
+ "weight_decay": 0.1,
503
+ }
504
+
505
+ # gen params
506
+ ref_param_groups = [{
507
+ "params": params_ref,
508
+ **muon_args
509
+ }]
510
+ test_param_groups = [{
511
+ "params": params_test,
512
+ **muon_args
513
+ }]
514
+
515
+ ref_muon = Muon(ref_param_groups)
516
+ test_muon = Muon(test_param_groups)
517
+ test_muon.enable_distributed_mode([[(global_buffer_size, 0)]], dist_group, tp_group, dist_metas)
518
+
519
+ for step in range(10):
520
+
521
+ # add grad
522
+ for i, grad in enumerate(grads_ref[step]):
523
+ params_ref[i].grad = grad.clone()
524
+ for i, grad in enumerate(grads_test[step]):
525
+ params_test[i].grad = grad.clone()
526
+ # step
527
+ ref_muon.step()
528
+ test_muon.step()
529
+ # distribute ref params
530
+ dist_ref_params, _, _, _ = distribute_params(params_ref, [], tp_dims, dist_group, tp_group)
531
+ # verify
532
+ for i, params_x2 in enumerate(zip(dist_ref_params, params_test)):
533
+ assert (params_x2[0] == params_x2[1]).all(), f"rank {rank} param {i} verify failed"
534
+ print_rank_0(f" - step {step} verify passed")
535
+
536
+ print_rank_0(f"dist dp = {dp_size} tp = {tp_size} test passed")
537
+
538
+
539
+
540
+ def run_process(rank, world_size):
541
+ os.environ['MASTER_ADDR'] = 'localhost'
542
+ os.environ['MASTER_PORT'] = '12355'
543
+ dist.init_process_group("gloo", rank=rank, world_size=world_size)
544
+ test_muon_dist(dp_size=4, tp_size=2)
545
+ test_muon_dist(dp_size=2, tp_size=4)
546
+ dist.destroy_process_group()
547
+
548
+ if __name__ == "__main__":
549
+ world_size = 8
550
+ os.environ['CUDA_DEVICE_MAX_CONNECTIONS'] = '1'
551
+ mp.spawn(run_process, args=(world_size,), nprocs=world_size, join=True)
552
+ print("\\n✅ All tests passed!")