Tag: training

  • Perplexed

    The normal loss when pre-training a language model is Cross-Entropy, which sounds more complicated than it is. At each step the model doesn’t just predict a single token, it predicts a probability distribution across all possible tokens. Cross-Entropy loss is -log(probability of the correct token) under that distribution.

    • If p(correct) = 0.99 → CE ≈ 0.01
    • If p(correct) = 0.5 (unsure between two tokens) → CE ≈ 0.693
    • If p(correct) = 1/100_000 (e.g. guessing uniformly) → CE ≈ 11.5

    If you average the CE over a whole bunch of tokens (say in your validation set) and take e^(ave CE), you get the perplexity, or PPL.
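
    Since it’s just an exponentiated average, the whole calculation fits in a few lines of Python (using the illustrative probabilities above):

```python
import math

def perplexity(token_probs):
    """e^(average cross-entropy): token_probs holds the probability
    the model assigned to each correct token."""
    avg_ce = sum(-math.log(p) for p in token_probs) / len(token_probs)
    return math.exp(avg_ce)

print(perplexity([0.99, 0.99, 0.99]))  # close to 1: confident and right
print(perplexity([0.5, 0.5, 0.5]))     # exactly 2: a coin flip per token
```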

    The number gives you an idea of how many choices the model was considering. Perplexity of 1 means the model was always 100% sure and 100% right (a feat only Elon can achieve). PPL 2 means the model was flipping a coin between two tokens most of the time. PPL 50 means the model was uncertain between 50 plausible next tokens. Because you’re already calculating the loss, PPL is very cheap to compute, so it gets used a lot.

    Prior to pre-training you’ll typically run a sweep of experiments of different architecture tweaks, and see which lower perplexity. During pre-training you’ll want to check whether the model is successfully learning, whether you should nuke a run rather than continuing: improvements in perplexity are a good guide to that. You can also score perplexity on fresh data using a well-trained model: data with a surprisingly high perplexity might be garbage, or a counting subreddit.

    Still, you can have too much of a good thing. A new paper from Veličković et al, “Perplexity cannot always tell right from wrong”, makes the argument that, much like with humans, it’s very easy to select for confidently wrong rather than uncertainly right.

    We prove that, for a wide class of decoder-only Transformer-based language models, should the model be highly confident and correct on a sufficiently long input sequence, this must imply existence of another input where the model’s prediction is wrong, yet the log-perplexity of that prediction approaches *zero*

    The basic idea is that when the model is confident, you can construct a different sequence that the model would be equally confident on but also… wrong.

    This particularly shows up when contexts get longer, because all tokens are not equal. To give a trivial example:

    In the word "strawberry," there are 8 Rs.

    This is correct for every single token, except ‘8’. A highly confident model may have a lower perplexity for that sequence, as a whole, than a more correct but less confident one.
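
    A toy calculation, with made-up probabilities, shows how sequence-level averaging hides one confidently wrong token:

```python
import math

def ppl(probs):
    # probs: probability the model assigned to each token it emitted
    return math.exp(sum(-math.log(p) for p in probs) / len(probs))

# Ten tokens at 99% confidence -- one of them ('8') happens to be wrong,
# but perplexity can't tell: it only sees the confidence.
confident_wrong = ppl([0.99] * 10)
# Right on every token, but hedging at 70% each time.
uncertain_right = ppl([0.70] * 10)

print(confident_wrong < uncertain_right)  # → True
```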

  • What is In-Distribution

    One of the persistent questions in model development is whether reasoning actually involves… reasoning. As in: are we seeing actual logical conclusions, or just better recall of knowledge and patterns from the training set? LLMs are trained on, roughly, the web, which makes answering that question tricky: almost everything shows up in some form. A model that appears to “reason” through a physics problem could just be pattern-matching an irritated Reddit reply it saw during training.

    On the Interplay of Pre-Training, Mid-Training, and RL on Reasoning Language Models takes a look at this question methodically.

    To this end, we build a fully controlled framework that isolates the contributions of each training stage. Our design is based on three principles: (i) fully controllable synthetic reasoning tasks with explicit atomic operations and DAG-defined dependency structure; (ii) observable, parseable reasoning processes enabling process-level evaluation and reducing reward or evaluation hacking; and (iii) systematic manipulation of pre-/mid-/post-training distributions to attribute causal effects to each stage.

    The authors break the problem of reasoning and training data down along two dimensions.

    1) Breadth-wise: can the model generalize from one type of problem to another (structurally similar) one in a different domain?
    2) Depth-wise: can the model reason correctly for longer, and hence solve harder problems?

    Rather than train on the internet, they build synthetic Math-puzzle reasoning tasks using a dependency-graph framework inspired by GSM-Infinite. By varying the depth of the reasoning chains required, and by generating structurally equivalent tasks across different domains, they try to tease apart those two aspects and investigate them separately.

    For the breadth side the model needs to generalize, to transfer learning across domains. The paper finds that the target domain has to be “in-distribution”: the model has to have seen some examples in the pre-training set. They test this by using pass@128: if you give the pre-trained model 128 attempts, does it get the answer right even once? If so, you can use reinforcement learning or SFT to help the model get reliably better.

    It’s a bit like having studied Spanish at some point and forgotten albóndigas, the word for meatballs. If, for dietary preference reasons, you came to use that word regularly it would likely lodge itself in your brain more easily and you’d go from a lowish chance of getting it right to a much higher one.

    The paper is saying you must have this baseline in there to amplify with RL. Daniel Han of Unsloth describes this by saying that with RL, “luck is all you need”. If the model never gets the answer right, there is nothing much to reinforce (and you are stuck with paella).

    Depth, on the other hand, does seem to be something we can make up in post-training. Even if a model has only been pre-trained on problems up to a certain complexity, post-training on harder problems consistently enables it to solve them. The model is able to compose more complex patterns based on the simpler ones in its training set1. To continue our tortured analogy, this is more like being reminded of several Spanish words and, over time, learning to stick them together into actual sentences.

    Practically this means your pre-training data is a bet on what the model will ever be able to reason about, and post-training refines how well and how hard it can think within those domains.

    That approach also gives a useful tool for identifying whether something is in-distribution. If you want to know whether a model can learn a new capability through post-training, check pass@128 first. If it never gets the answer right in 128 attempts, you probably have a pre-training gap, not an RL problem.

    1. The paper also spends a while justifying curriculum training, giving the model problems just on the edge of its capabilities before introducing harder ones. Recent work from the FAIR Paris folks and others shows you can somewhat automate this by generating problems from the same model you are training! ↩︎
  • Everything MoE

    There are two really good ways to learn the deep fundamentals of a field. One we could call the Carmack/Ilya method: get an expert to give you a list of the seminal papers, systematically work through them, and in the process develop a deep, grounded intuition. This seems to work. The second is: funny tweets.

    A case in point:

    Other than the fact you have to be in a very particular niche in order to understand all the acronyms in that tweet, the idea that everything is an MoE feels right? Pretty much every notable model release, and probably all the secret frontier models, are MoE.

    Like every other idea in deep learning this goes back to something Hinton did in the 90s, specifically the paper Adaptive Mixtures of Local Experts by Jacobs, Jordan, Nowlan and Hinton:

    If backpropagation is used to train a single, multilayer network to perform different subtasks on different occasions, there will generally be strong interference effects that lead to slow learning and poor generalization. If we know in advance that a set of training cases may be naturally divided into subsets that correspond to distinct subtasks, interference can be reduced by using a system composed of several different “expert” networks plus a gating network that decides which of the experts should be used for each training case. […] The idea behind such a system is that the gating network allocates a new case to one or a few experts, and, if the output is incorrect, the weight changes are localized to these experts (and the gating network).

    The idea is that if your data naturally clusters, then having separate networks avoids smearing understanding across the weights. A dataset with both German and English training data might produce a model that mixes up both languages. If we train two different experts and learn a gating network, we can get a clean “German-speaking” model, and a clean “English-speaking” model, in one.

    Also, like every other idea in deep learning, this was very clever, but painful to train. In particular, this was because the decision about which expert to choose was a bit of a cliff. If you choose the German expert when you needed the English expert then the German expert would get some loss, but the English expert would get none. This could lead to the awkward situation where the German expert performed better for both English and German: you ended up with a smaller, smeared model, and a dead expert.

    Noam Shazeer and co came to the rescue in 2017 with the excellently titled “Outrageously Large Neural Networks”. They introduced concepts that didn’t fundamentally change the approach, but did make it practical.

    The key trick was adding an auxiliary loss that penalized the model for leaning on one expert over the others. By adding some noise to the gating decision they kept it differentiable and ensured errors could flow back effectively. This gave the training process a much better chance of avoiding this kind of “winner-takes-all” collapse.
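
    A minimal sketch of those two tricks, with shapes and the exact loss form simplified from the paper (`w_gate` and `w_noise` are the learned routing matrices):

```python
import torch
import torch.nn.functional as F

def noisy_topk_gate(x, w_gate, w_noise, k=2):
    """Shazeer-style gating sketch: perturb the routing logits with
    learned, input-dependent noise, keep only the top-k experts, and
    renormalize their scores with a softmax."""
    noise_std = F.softplus(x @ w_noise)
    logits = x @ w_gate + torch.randn_like(noise_std) * noise_std
    topk, idx = logits.topk(k, dim=-1)
    return torch.zeros_like(logits).scatter(-1, idx, F.softmax(topk, dim=-1))

def load_balance_loss(gates):
    """Penalize uneven usage: squared coefficient of variation of the
    per-expert importance (sum of gate values over the batch)."""
    importance = gates.sum(dim=0)
    return importance.var() / (importance.mean() ** 2 + 1e-9)
```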

    Over time these methods were refined. In a contemporary MoE like DeepSeek v3, sigmoid-based routing removed the noise from the gating, and the auxiliary loss is dropped in favor of what they call bias updates: they just put their thumb on the scale during training if some experts aren’t getting enough samples, which seems to work great.
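
    The bias-update idea is simple enough to sketch. The `gamma` step size and the sign-based update here are a simplification of DeepSeek’s published recipe:

```python
import torch

def route_with_bias(scores, bias, k=2):
    """Pick top-k experts using biased scores. The bias only steers
    selection; the actual mixture weights come from the raw scores."""
    _, idx = (scores + bias).topk(k, dim=-1)
    return idx

def update_bias(bias, idx, num_experts, gamma=0.001):
    # Thumb on the scale: raise the bias of underloaded experts,
    # lower it for overloaded ones.
    load = torch.bincount(idx.flatten(), minlength=num_experts).float()
    target = load.mean()
    return bias + gamma * torch.sign(target - load)
```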

    All of that is about how we got MoEs to scale, but doesn’t really say… why? Intuitively, if you can train a model with X parameters, it seems like it would be better to have all of them doing something (a dense model), rather than some subset1?

    The main reason this has taken over the field is it is a way of decoupling capacity (how much can the network “know”) from compute (how much work does it do for each input).

    In a dense model when you add a new token to train you send it to all parts of the model: every bit of capacity touches it, each of which uses some compute to process. MoEs are a form of sparsity: a way of ignoring some of the parameters. They let you add capacity without adding compute2.

    There are other ways of achieving the same result, but the MoE approach is very hardware friendly. You’re still mostly doing dense matmuls, just split between experts. In parallelism terms, Expert Parallelism is efficient because you’re moving tokens between devices: it needs an all-to-all, but the data volumes are manageable.

    The tweet calls out NSA, engram and mHC, all recent papers from Deepseek. But underneath it calls out the design pattern: make a few alternative compute or memory paths, then use a learned gate to pick (or mix) a subset of them, per token. You get sparsity at the routing level, decoupling formerly coupled aspects, while each path can remain fairly dense and hardware-friendly.

    Engrams makes the argument that language models have to do two things: reasoning and looking stuff up. The reasoning works great with stacks of Transformers, but the looking-stuff-up part is approximated through computation rather than just… looking stuff up.

    This process essentially amounts to an expensive runtime reconstruction of a static lookup table, wasting valuable sequential depth on trivial operations that could otherwise be allocated to higher-level reasoning.

    Classically, Natural Language Processing used a lot of N-grams: representations of more than one token at a time, but language models pretty much dropped that in favor of a fixed vocabulary. Deepseek is bringing it back. These extra embeddings are retrieved for subsets3 of the tokens in the context window, the resulting vectors are summed4, then the model gates how much to incorporate the information based on the current state.

    It’s the same move of decoupling compute and capacity. Here they are adding a bunch of extra storage parameters but letting the model learn whether or not to use them. Because the retrieval is based on tokens the table doesn’t have to live in VRAM but can be loaded with the input5 .
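
    A toy version of the mechanism, where the table size, hashing scheme, and gate are illustrative guesses rather than the paper’s actual design:

```python
import torch

class NgramMemory(torch.nn.Module):
    """Engram-style lookup sketch: hash each bigram of token ids into
    a static embedding table, then let a learned gate decide how much
    of the retrieved vector to blend into the hidden state."""
    def __init__(self, table_size, d):
        super().__init__()
        self.table = torch.nn.Embedding(table_size, d)
        self.gate = torch.nn.Linear(d, 1)

    def forward(self, token_ids, h):        # token_ids: (T,), h: (T, d)
        prev = torch.roll(token_ids, 1)
        slot = (token_ids * 1000003 + prev) % self.table.num_embeddings
        mem = self.table(slot)              # (T, d) retrieved vectors
        g = torch.sigmoid(self.gate(h))     # (T, 1): how much to trust them
        return h + g * mem
```

Because the lookup depends only on token ids, the table can live off-GPU and be fetched with the input, which is the point of the design.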

    The second paper, Manifold-Constrained Hyper-Connections (mHC), is the most math-heavy of the recent releases, and it builds on one of the most cited papers in ML: ResNet.

    In the bad old days, the “Deep” in Deep Neural Nets didn’t really exist: you could theorize, but if you tried to train one you’d get into a place where the early layers received basically no useful loss signal. ResNets fixed this in the simplest way possible: as well as sending through the “output” of a layer, you sent through the input too. This gave an efficient highway for loss gradients to flow back and enabled training much, much deeper models.

    mHC builds on an observation that ResNets hard-code another compute/capacity tradeoff: the size of the residual channel. If you think of a layer of a transformer: it has an input of C tokens, and an output the same size. The residual connection works by summing the input tokens and the output tokens. That’s assigning as much information capacity to the residual channel as you do to the processing channel. E.g.

    • Layer 0 gets raw tokens, and outputs a sum of raw+contextualized tokens
    • Layer 1 gets layer 0 tokens and outputs a sum of layer0+contextualized tokens
    • Etc.
    • At the end you get a cake recipe

    But maybe that cake recipe would be better if Layer 2 had access not just to the layer0 tokens, but also to the raw tokens? We don’t really have a way to express that outside of adding extra skip connections. Hyper Connections widen the ResNet channel into multiple lanes, and mHC lets the model decide what to put in each: so you could have layer 1 putting layer0 context in one lane, and raw tokens in another lane6 . If MoE lets you take a bunch of parameters and selectively route tokens to a subset, then mHC lets you take a bunch of residual bandwidth and selectively mix the information flow from your module to a subset of it.
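
    A toy version of the widened residual stream (the real mHC constrains the mixing matrices, via Sinkhorn-Knopp, to keep them well behaved; this sketch doesn’t):

```python
import torch

class MultiLaneResidual(torch.nn.Module):
    """Hyper-connection sketch: n_lanes parallel residual streams, with
    learned parameters deciding how lanes mix and where the layer's
    output gets written."""
    def __init__(self, layer, n_lanes):
        super().__init__()
        self.layer = layer
        self.mix = torch.nn.Parameter(torch.eye(n_lanes))            # lane -> lane
        self.write = torch.nn.Parameter(torch.ones(n_lanes) / n_lanes)

    def forward(self, lanes):               # lanes: (n_lanes, B, d)
        x = lanes.mean(dim=0)               # read: collapse lanes for the layer
        out = self.layer(x)                 # the usual transformer block
        mixed = torch.einsum('ij,jbd->ibd', self.mix, lanes)
        return mixed + self.write[:, None, None] * out
```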

    Finally, Native Sparse Attention follows the classic Deepseek move of throwing a bunch of engineering wins together. Instead of assuming the amount of attention compute for each token is the same, they scale it dynamically based on the content itself. They mix the outputs of a pooled version of the context window to get a compressed representation, an MoE-style gated selection from the full context window7, and a classic sliding window attention.

    This is the pattern MoE exemplified:

    • look at what is constrained
    • add more of it, but make it conditional to avoid scaling other things at the same time

    It’s a thread that runs through an awful lot of the industry right now. Understanding that is useful when anticipating where things are going to go next.

    Or, you could have saved yourself a lot of time and just liked the tweet.

    1. MoEs do have some inference advantages: if you have a 100bn-parameter model where just 20bn are active for a given token, you simply have to do less work than a 100bn-parameter dense model. That’s a win for latency! But you still have to store all 100bn parameters, meaning you need quite a lot of memory kicking around. ↩︎
    2. More specifically, they make the ratio of added capacity to added compute very flexible: modern MoEs often have many experts and activate several at a time. ↩︎
    3. In this case Deepseek uses 2-gram and 3-grams ↩︎
    4. A weighted sum, strictly speaking ↩︎
    5. In practice they inject the ngram embeddings at a couple of different points later in the model, where empirically there seemed to be enough context for the model to make useful mixing decisions ↩︎
    6. The specific clever thing the Deepseek folks added was a constraint to stop this from exploding, using the wonderfully named Sinkhorn-Knopp algorithm (apparently) ↩︎
    7. Based on those pooled tokens. Effectively it’s taking the “summarized” context window, and using runtime gating to decide which bits of the context window to add in full. ↩︎
  • Attention, Compression & Predicting the next token

    Language modelling is one of the great ideas in ML: if you train a model to accurately predict the next word in a sequence of text1, you are forcing it to learn a deep structure for human language. Because language is how we map reality, hopefully then you can do many useful things. This turned out to be right!

    The challenge with actually, you know, doing this is that text is messy. It’s sequential, variable length, and has structure, but the structure is kind of weird: the phrase “the cat, a mellow long-haired persian, sat on the mat” very clearly associates “sat” with “cat”, but the actual words are quite far away2.

    Dealing with sequential, variable length data with a fixed network is a bit of an inherent mismatch. In training you often know the sizes you’re dealing with, but at inference time it’s variable. One elegant solution to that was the Recurrent Neural Net (RNN): start at the beginning, read one word at a time and keep a “hidden state” as a scratch pad to provide memory of what has come before.

    Training RNNs was painful, because now you have to backpropagate over multiple steps, and it was a minefield of vanishing and exploding gradients. The hidden state was also doing two different jobs: serving as long-term memory of the whole sequence and as the immediate signal for predicting the next word.

    Getting to Attention

    The architecture that really addressed this was the LSTM: instead of a single memory they split short and long-term memory and added activation functions to keep the gradient updates sane. They also made updating the memory a function of the input rather than of the weights alone, by adding learnable gates that let the model decide which parts of the input to remember, and what information from the memory to forget. This unlocked real sequence-to-sequence models, which proved immediately useful in areas like machine translation: one model reads a sequence and compresses it to a hidden state (the encoder), another generates new output based on it (the decoder).

    This solved the training stability bottleneck, and introduced a new one: compression. The entire sequence got compressed to a single hidden state, which limited how much complexity could be captured.

    Bahdanau et al. addressed that with the idea of attention in 2014. The hidden state gets updated in the encoder with each new word, so why not keep all the hidden states around? Then, have a small network score which hidden states are relevant to the current decoder state, and make a new contextualized input to the decoder that is a weighted sum of the encoder states. This was called “attention” as it allowed the model to put different amounts of focus on different parts of the input sequence.
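
    The mechanism fits in a few lines. A sketch, with `w_q`, `w_k` and `v` as the small scoring network’s parameters:

```python
import torch
import torch.nn.functional as F

def additive_attention(decoder_state, encoder_states, w_q, w_k, v):
    """Bahdanau-style sketch: a small network scores each encoder
    hidden state against the current decoder state, and the context
    is the softmax-weighted sum of the encoder states."""
    # encoder_states: (T, d), decoder_state: (d,)
    scores = torch.tanh(encoder_states @ w_k + decoder_state @ w_q) @ v  # (T,)
    weights = F.softmax(scores, dim=0)
    context = weights @ encoder_states                                   # (d,)
    return context, weights
```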

    The new bottleneck though was throughput: to generate hidden state n, you first needed hidden state n-1. That made it hard to parallelize, which made it hard to take advantage of emerging accelerators. Luong et al first showed that you could simplify the state scoring to make it more hardware friendly, then Attention Is All You Need in 2017 stripped away the recurrent part entirely. In the Transformer architecture they got rid of the RNN and hidden state, replacing it with another version of the attention mechanism: self-attention.

    Rather than a stack of hidden states that progressively encode the state of the sequence, each incoming word is transformed at once into a contextualized representation that carries information about it and its surroundings. This was really parallelizable; you don’t need to care about previous time steps to make decisions, so you can scale the computation on GPUs and other accelerators.

    In regular attention you can think of the current decoder3 state as a query, and the various encoder hidden states as keys: the scoring function would generate a value for each pair of key and query. In self-attention, all the tokens were projected through key and query networks, and the query for each token was compared to the key of all the others. The transformer also added a value projection: in the older attention the “key” from the hidden state was both “what makes a good match” and “what information the token provides”, but in the transformer the two were decoupled.
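
    A minimal single-head version of that self-attention, ignoring batching, masking and multiple heads:

```python
import torch
import torch.nn.functional as F

def self_attention(x, w_q, w_k, w_v):
    """Every token's query is scored against every token's key, and
    the (softmaxed) scores mix the decoupled value projections."""
    q, k, v = x @ w_q, x @ w_k, x @ w_v          # (T, d_head) each
    scores = q @ k.T / (k.shape[-1] ** 0.5)      # (T, T): all pairs
    return F.softmax(scores, dim=-1) @ v         # (T, d_head)
```

That `(T, T)` score matrix is where the quadratic cost in the next paragraph comes from.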

    The new bottleneck that emerged was performance, particularly during inference. Comparing everything to everything else is an O(n²) operation. During training you can ameliorate some of that through batching, but you’re directly exposed in inference. And, unlike an RNN, increasing the sequence length (aka context length) gives you a quadratic increase in time, not linear.

    There were various attempts at addressing this one too. In “Transformers are RNNs: Fast Autoregressive Transformers with Linear Attention” back in 2020, Katharopoulos et al showed that the quadratic aspect of self-attention comes from having to materialize a big matrix to calculate the softmax for scoring. If you replace the softmax with a map-type function you can chunk the computation and get linear time performance. This was mathematically elegant, but didn’t actually work very well, so more engineering-oriented approaches like KV caching and FlashAttention were the mainstay for tackling the bottleneck.
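
    The factorization trick itself is tiny. A sketch with the elu+1 feature map from the paper (single head, no causal masking or chunking):

```python
import torch

def linear_attention(q, k, v):
    """Katharopoulos-style sketch: replace softmax with a positive
    feature map so attention factorizes as phi(Q) (phi(K)^T V),
    never materializing the (T, T) score matrix."""
    phi = lambda x: torch.nn.functional.elu(x) + 1
    q, k = phi(q), phi(k)
    kv = k.T @ v                        # (d, d_v): O(T d^2), not O(T^2)
    z = q @ k.sum(0)                    # per-query normalizer
    return (q @ kv) / z[:, None]
```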

    So why talk about this now? Because of Moonshot AI, and their excellent Kimi models. Moonshot are perhaps the frontier-est of the Chinese tiger labs, and their recent releases include Kimi Linear: An Expressive, Efficient Attention Architecture.

    The architecture mixes regular self-attention layers with Kimi Delta Attention. And Kimi Delta Attention is just the latest in a thread of evolution which goes back (sorta!) to RNNs.

    State space models

    For a long time, folks modelled control systems using state-space models. These return both an output and a state, and have a linear update function. RNNs such as LSTMs weren’t strictly state-space models in part because of their use of non-linearities: when updating the memory LSTMs used a tanh activation, for example. If you hand-wave a bit and ignore that, you’re looking at a very similar process.

    But there is a gap between hand-waving and science, and luckily someone crossed it. The benefit of that activation function was that it squashed the state into a known range and avoided the vanishing gradient issue that plagued RNNs. The key realization was that you can drop the non-linearity entirely4 as long as the weight matrix that multiplies the hidden state is well behaved (specifically, has eigenvalues close to, but less than, one).

    Much of this is in the HiPPO and S4 papers, from Albert Gu, Chris Ré and Tri Dao. This was another neat idea, which included a clever bit of linear algebra with a technique called Diagonal+Low Rank to make the state updates relatively efficient, but didn’t perform as well as regular transformer models. Gu and Dao identified the challenge as those well-behaved weights that update the hidden state. Much like with RNNs prior to LSTMs, they were adding a fixed amount of information from the input to the state. In Mamba they reused the same kind of trick: adding a small network to gate the updates so the model can learn to remember more, or less, depending on the specific input5.

    Then, in the Mamba 2 paper from 2024, Gu and Dao brought everything together. They showed that the 2020 style linear attention, with a decay mask, was the same as a structured state space model like Mamba 1. That means they could apply the same chunking tricks in linear attention and get much better scaling and training, but with the ability to handle long sequences Mamba had.

    The slow recreation of LSTM features in more scalable forms continued with Gated DeltaNet. The Mamba approach ‘faded’ old memories via a decay, but it couldn’t explicitly subtract information like the LSTM forget gate. Gated DeltaNet also calculated the difference (the delta) between the expected and actual state, allowing it to effectively edit the memory rather than just overwriting it6.
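
    A sketch of one recurrent step of a (heavily simplified) gated delta rule, where `alpha` fades the memory and the delta term edits it:

```python
import torch

def gated_delta_step(S, k, v, alpha, beta):
    """One step of a simplified gated delta rule.

    S: (d_v, d_k) memory matrix; k, v: current key/value vectors;
    alpha: decay gate in (0, 1]; beta: write strength.
    The memory first fades, then is *edited*: subtract what it
    currently predicts for k and write in the correct v."""
    S = alpha * S                               # fade old memories
    pred = S @ k                                # what the memory recalls for k
    return S + beta * torch.outer(v - pred, k)  # correct the error
```

With `alpha=1, beta=1` and a unit-norm key, one step stores the association exactly, and a second write of the same pair is a no-op: the error term is already zero.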

    Kimi Linear sped this up, and improved the fading mechanism to be per-dimension rather than a single rate across the memory:

    “Crucially, KDA parameterizes its transition dynamics with a specialized variant of the Diagonal-Plus-Low-Rank (DPLR) matrices [30, 71], enabling a bespoke chunkwise-parallel algorithm that substantially reduces computation relative to general DPLR formulations while remaining consistent with the classical delta rule. Kimi Linear interleaves KDA with periodic full attention layers in a uniform 3:1 ratio.”

    They kill two birds with one linear-algebra stone. The DPLR trick from S4 lets you take a diagonal vector for the update rate and apply it across the matrix product of a low-rank approximation for the state transition. Moonshot realized that you could replace the approximation with the K and V matrices directly, which is much more efficient, and that you could have the diagonal come from a vector of the same dimension, so you get per-channel forgetting.

    Compression & Recall

    It seems likely we will see more sophisticated mixing of different types of attention in models as labs continue improving architectures. We started with recursive models as a natural expression of the problem, moved to transformers for scale, and have been slowly integrating the two expressions together. We are still just trying to predict the next word, but it turns out the best way to do it is to remember some things, forget most things, and accept that the map is not the territory.

    Reading through the papers on this journey really highlighted how the field moves between compression and breadth of recall. Sometimes researchers get a bad rap from their engineering brethren for being disconnected from reality, but this chain of evolutions is a pragmatic one.

    You want to get as much intelligence into the model as possible. That’s done by compressing the training data into efficient, useful and general representations, but finding those representations is hard! If you hit a limit in finding them, then one approach is to simply add more knowledge: add more parameters, consider more training data, and build more of the imperfect representations to give you more options to choose from.

    MoEs, synthetic data, and various other aspects of modern model training are playing with this same trade off: represent better or represent more. After his recent HotChips talk, Noam Shazeer was asked how we can find more efficient ways of encoding knowledge into parameters, closer to how the brain does it. He responded first by asking the questioner: “why are you limited on parameters?”

    1. The idea dates back to Jeff Elman, I think, who showed that training a network on this objective caused the network to learn grammar categories and other features of English. ↩︎
    2. This kind of thing is even hard for humans at sufficient lengths of text: there is a version of War & Peace in English that is largely the original (translated, natch), but normalizes all the character names as they were such a common point of confusion ↩︎
    3. In the original paper they kept the same encoder/decoder set up as with earlier models, as its eminently sensible for translation tasks. The GPT models and others demonstrated you could go decoder-only effectively. What we tend to call “prefill” these days is effectively a (causal) encoder step within the decoder model that contextualizes the input, then the “decoder” is the autoregressive generation process after. ↩︎
    4. There actually still is non-linearity, as you need it for neural networks in general but rather than doing it in the loop memory update, it happens in projection MLPs after the layer. Then in Mamba it moved into the gating, so it’s only dependent on input, not the h_{t-1} state! ↩︎
    5. And it was Orvieto and the DeepMind folks that showed that you can get the same results in an RNN without the non-linearities if you can set up the matrix right. ↩︎
    6. Part of this reason was recall, which Jamba addressed. Because the RNN approach is inherently compression based it was harder to just cut and paste sections of the context when they were relevant. Jamba mixed regular attention layers with Mamba layers, giving back the global context while still providing better scaling. The specific recall problem is really emphasized by the fact that one of the standard long context evals is the “needle in a haystack” task, where a relevant fact is hidden in a long doc and needs to be pulled out. ↩︎
  • Qwen-Image

    GPT-4o’s image generation was a remarkable event, beyond the brief Ghiblification of all social media. GPT-4o offered significantly more steerability than earlier image generation models, while offering image quality in the ballpark of the best diffusion models. Qwen-Image gives a similar level of fidelity and accuracy and is an open-weights model with a pretty decent technical report: QwenLM/Qwen-Image.

    While I was fairly familiar with diffusion models, I wasn’t really familiar with the backbone of this model, the multimodal diffusion transformer (MMDiT). Rather than just look at it, I vibed up a repo with Claude Code that went step by step through the architectures, training on good old MNIST: ianbarber/diffusion-edu.

    This ended up being a helpful way to go step by step through the evolution of diffusion models. 

    Loss/Target

    Modern image generation really kicked off with GANs. GANs were a clever idea that exploited the fact that we are better at building classifiers than generators by using one to bootstrap the other. A generator would generate an image against a reference, the discriminator would be given the generated image and the reference and have to predict which was the real one, and both networks were scored on how well they did on their tasks. This was effective, but challenging to train. The generator also had to start from somewhere, and what it effectively started from was noise: the generator would begin with fairly random output and the discriminator would learn to identify noise vs the real image.

    The clever idea Jonathan Ho and co had with DDPM was to focus on that noise: what if, instead of learning to generate images, we learned to remove noise from images? In the snippet below we:

    • Pick a timestep between 0 and 1000
    • Generate some noise
    • Add an amount of noise to the training image proportional to the timestep
    • Get the model to predict the noise, given the time step
    • Calculate the loss as the mean squared error between the known noise and the predicted noise
    # Sample a random timestep for each image in the batch
    t = torch.randint(0, 1000, (B,), device=device)
    
    # Add noise to the image, scaled by the schedule at t
    eps = torch.randn_like(x0)
    alpha_t = self.alpha_schedule(t).view(-1, 1, 1, 1)  # broadcast over C, H, W
    xt = torch.sqrt(alpha_t) * x0 + torch.sqrt(1 - alpha_t) * eps
    
    # Predict the noise we just added
    eps_pred = self.model(xt, t, cond)
    
    return F.mse_loss(eps_pred, eps)

    This pretty much worked! You needed quite a few timesteps (around 1000), but the model would learn to separate noise from data. Then, you can reverse the process to generate: start from pure noise, predict the noise at the current timestep, remove a little of it, add some fresh noise back, step the timestep down, and repeat.

    Song et al. followed this up with DDIM, identifying that one of the reasons you need so many steps is that you inject fresh noise at every step. If you commit to the noise up front when sampling, the process becomes much more deterministic, and you can generate in more like 50 steps than 1000:

    x = torch.randn(*x_shape)  # Start with pure noise
    
    for i in reversed(range(steps)):
      t = torch.full((B,), i / steps)
      if target == TargetMode.EPS:
        eps = model(x, t, cond)
        # Estimate the clean image, then step deterministically to the
        # previous noise level – no fresh noise is injected
        x0_pred = (x - torch.sqrt(1 - alpha(t)) * eps) / torch.sqrt(alpha(t))
        a_prev = alpha(t - 1 / steps) if i > 0 else torch.ones_like(t)
        x = torch.sqrt(a_prev) * x0_pred + torch.sqrt(1 - a_prev) * eps

    The next step, in 2021, was Classifier-Free Guidance from Ho and Salimans. The clever idea was to pass a conditioning variable through to the model: in our MNIST example it could be the digit label. However, during training we would sometimes zero it out. This means the model learns to generate both conditionally (for the specific digit) and unconditionally (just in whichever direction looks best).

    if cond is not None and self.cfg_dropout_prob > 0:
      # Randomly drop the conditioning for a fraction of the batch
      mask = torch.rand(B, 1, 1, device=cond.device) < self.cfg_dropout_prob
    
      cond = cond * ~mask  # Zero out conditioning where masked
    
    return F.mse_loss(self.model(xt, t, cond), target)

    This gets useful at generation time. When sampling, we can sample both conditionally and unconditionally, and diff out the unconditioned part: 

    # Run model twice: with and without conditioning
    cond_pred = model(x, t, cond)
    uncond_pred = model(x, t, None)
    
    # Amplify the difference
    return uncond_pred + cfg_scale * (cond_pred - uncond_pred)

    If you imagine the sampling process as denoising, this is saying there is a “best” direction given the condition, and a “best” direction overall. By subtracting off the overall direction and amplifying the difference, we get clearer steerability: effectively the model serves as its own classifier, applied iteratively.

    Also in 2021, Song et al. published Score-Based Generative Modeling through Stochastic Differential Equations. They framed the diffusion problem as a Stochastic Differential Equation (SDE): effectively a regular differential equation dx = f(x, t)dt with an additional noise term, giving dx = f(x, t)dt + g(t)dw1. The g(t) factor controls how much random noise is injected at each instant.

    The contribution of the paper is that they worked out how to reframe this without the dw noise term, i.e. they turned it into an “Ordinary” Differential Equation (ODE) with no random component. The model can then be viewed as a deterministic velocity field whose trajectories produce the same distribution as the noisy version.
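    A sketch of what that buys you: sampling becomes a plain Euler integration with no fresh noise injected. Everything here (the `score_model` stand-in for a trained score network, the toy `g(t)` schedule) is illustrative rather than taken from the paper:

```python
import torch

def pf_ode_sample(score_model, x_shape, steps=50, sigma_max=2.0):
    # Euler-integrate the probability-flow ODE: same drift as the reverse
    # SDE, but the dw noise term has been folded into the score, so every
    # step is deterministic.
    x = sigma_max * torch.randn(*x_shape)
    dt = 1.0 / steps
    for i in reversed(range(steps)):
        t = (i + 1) / steps
        g2 = (sigma_max * t) ** 2  # toy noise scale g(t)^2, purely illustrative
        x = x + 0.5 * g2 * score_model(x, t) * dt  # no fresh noise injected
    return x
```

    Run it twice from the same starting noise and you get the same image out, which is exactly what the dw term prevented.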

    Salimans & Ho were not done, and proposed another improvement to the loss with v-parameterization, introduced in their Progressive Distillation paper. One of the challenges with predicting the noise (eps above) is that when you are close to a finished image there isn’t much noise left, so the prediction isn’t very informative. Similarly, when you start from pure noise the model is predicting almost everything, which also doesn’t give much signal. Predicting the noise implicitly involves estimating both the clean sample and the noise. Some reordering lets you predict a single value, the velocity v, which combines the clean sample (x0) and the noise (eps), weighted by the schedule at the current timestep. By having the model predict that, we balance between predicting the image and predicting the noise, giving better behaviour at both extremes.

    v_target = alpha_t * eps - sigma_t * x0
    v_pred = self.model(xt, t, cond)
    
    return F.mse_loss(v_pred, v_target)

    Finally (on the loss) we get to flow matching, from folks at Meta FAIR (Flow Matching) and UT Austin (Rectified Flow). Rather than making the target a schedule-weighted blend of sample and noise, why not just predict the straight path from noise to data? Compare the v_target below to the one above:

    t = torch.rand(B, 1, 1, 1)
    z = torch.randn_like(x0)
    
    # Straight line: xt = (1-t)*x0 + t*z
    xt = (1 - t) * x0 + t * z
    
    # Learn the velocity field pointing from noise to data
    v_target = x0 - z  # The straight path direction
    v_pred = self.model(xt, t.squeeze(), cond)
    
    return F.mse_loss(v_pred, v_target)

    Flow matching models often converge faster during training and can generate good samples with fewer steps. They also tend to have more consistent quality across different sampling step counts.
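    Sampling is correspondingly simple: since the model predicts the straight-line direction from noise to data, generation is just Euler integration of that velocity field. A sketch, with `model` being the hypothetical network from the training snippet:

```python
import torch

def flow_sample(model, x_shape, steps=20, cond=None):
    # Integrate the learned velocity field from noise (t=1) to data (t=0)
    x = torch.randn(*x_shape)
    dt = 1.0 / steps
    for i in reversed(range(steps)):
        t = torch.full((x_shape[0],), (i + 1) / steps)
        v = model(x, t, cond)  # predicts x0 - z, the direction toward data
        x = x + v * dt         # straight-line Euler step along that path
    return x
```

    Because the target paths are straight, even very small step counts degrade gracefully, which is where the consistent quality across sampling budgets comes from.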

    Architecture

    All of that evolution was about the loss function and sampling; we haven’t really discussed the model architecture itself. The original diffusion models used an approach called U-Nets: a series of convolutions that compress the (latent) visual information into fewer dimensions, then expand it back up (giving a sort of U shape). But post-ChatGPT the Transformer was ascendant, so in 2023 Peebles and Xie proposed swapping out the U-Net for a stack of transformer blocks in the Diffusion Transformers (DiT) paper.

    class DiTTiny(nn.Module):
        def __init__(self, embed_dim=256, depth=6, num_patches=196):
            super().__init__()
            # Patchify the image (like ViT)
            self.patch_embed = PatchEmbed(patch_size=2)
    
            # Learned positional encoding and a small timestep embedding
            self.pos_embed = nn.Parameter(torch.zeros(1, num_patches, embed_dim))
            self.time_embed = nn.Linear(1, embed_dim)
    
            # Stack of transformer blocks
            self.blocks = nn.ModuleList([
                TransformerBlock(embed_dim) for _ in range(depth)
            ])
    
        def forward(self, x, t, cond=None):
            # Convert image to patches
            x = self.patch_embed(x)  # (B, num_patches, embed_dim)
    
            # Add positional encoding and the timestep embedding
            x = x + self.pos_embed
            x = x + self.time_embed(t.float().view(-1, 1, 1))
    
            # Transform through attention layers
            for block in self.blocks:
                x = block(x)
    
            # Reshape back to image
            return self.unpatchify(x)

    This looks like a regular transformer, but with patches (segments of the image) rather than text tokens, as in ViT-style understanding models. The transformer block will also look pretty familiar:

    class TransformerBlock(nn.Module):
      def __init__(self, dim, heads=8, mlp_ratio=4.0):
        super().__init__()
        self.ln1 = nn.LayerNorm(dim)
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.ln2 = nn.LayerNorm(dim)
        self.mlp = nn.Sequential(
          nn.Linear(dim, int(dim*mlp_ratio)), nn.GELU(), nn.Linear(int(dim*mlp_ratio), dim)
      )
    
      def forward(self, x):
        h = self.ln1(x)
        x = x + self.attn(h, h, h, need_weights=False)[0]
        x = x + self.mlp(self.ln2(x))
        
        return x

    They got good results and, more importantly, it was easier to scale up to more compute and larger inputs. For what it’s worth, I found DiTs a bit tricky to train on small datasets (like the MNIST example), but didn’t spend much time on it, since:

    MMDiTs emerged in 2024, and were used for Stable Diffusion 3 and Flux, largely setting the standard in terms of image quality. The idea is to process images and text in parallel with the ability to attend across each other, reminiscent of cross-encoder models.

    class MMDiTTiny(nn.Module):
        def __init__(self, img_dim=256, txt_dim=256, patch_dim=16, depth=6):
            super().__init__()
            # Separate encoders for each modality
            self.img_encoder = nn.Linear(patch_dim, img_dim)
            self.txt_encoder = nn.Linear(txt_dim, txt_dim)
    
            # Joint transformer blocks
            self.blocks = nn.ModuleList([
                CrossTransformerBlock(img_dim, txt_dim) for _ in range(depth)
            ])
    
        def forward(self, img, t, txt=None):
            # Process both modalities
            img_tokens = self.img_encoder(patchify(img))
            txt_tokens = self.txt_encoder(txt) if txt is not None else None
    
            # Attention between the modalities (timestep conditioning omitted)
            for block in self.blocks:
                img_tokens = block(img_tokens, txt_tokens)
    
            return unpatchify(img_tokens)

    MMDiT models demonstrate great prompt adherence and can handle complex requests. In the full architecture attention flows both ways, so text understanding improves alongside image generation; the simplified block below shows just the image-attends-to-text half.

    class CrossTransformerBlock(nn.Module):
        """Cross-attention: query = image tokens, key/value = text tokens."""
    
        def __init__(self, dim_img, dim_txt, heads=8, mlp_ratio=4.0):
            super().__init__()
            self.q_proj = nn.Linear(dim_img, dim_img)
            self.k_proj = nn.Linear(dim_txt, dim_img)
            self.v_proj = nn.Linear(dim_txt, dim_img)
    
            self.attn = nn.MultiheadAttention(dim_img, heads, batch_first=True)
    
            self.ln_q = nn.LayerNorm(dim_img)
            self.ln = nn.LayerNorm(dim_img)
            self.mlp = nn.Sequential(
                nn.Linear(dim_img, int(dim_img*mlp_ratio)), nn.GELU(), nn.Linear(int(dim_img*mlp_ratio), dim_img)
            )
    
        def forward(self, x_img, x_txt):
            q = self.q_proj(self.ln_q(x_img))
            k = self.k_proj(x_txt)
            v = self.v_proj(x_txt)
    
            x = x_img + self.attn(q, k, v, need_weights=False)[0]
            x = x + self.mlp(self.ln(x))
    
            return x

    Here, in the cross-attention block, the image provides the query and the text provides the key and value parts of the attention. The result is added back to the image input.

    Putting this all together, you can see the evolution of the common diffusion baselines across both scale and steerability:

    1. DDPM: Clean but slow. The baseline everything else improves on.
    2. SD1-style (UNet + Epsilon + CFG): The first practical system. Good quality, reasonable speed, follows prompts well with CFG.
    3. SD2-style (UNet + V-param + CFG): Slightly better contrast and stability, especially at high resolutions.
    4. SD3-style (MMDiT + Flow): The current state-of-the-art. Fastest training, best prompt adherence, most efficient sampling.

    Back to Qwen

    The Qwen-Image model is a good, practical example of scaling this up. It uses an existing multimodal model2 to encode text and image inputs, a pretrained VAE3 for translating between pixel and latent space, and an MMDiT as its backbone. Using strong (understanding) models for encoding really enhances the steerability of the results from the MMDiT.

    In the MMDiT sketch above we just concatenate image and text together. In real systems you first add the positional embeddings for the image tokens, then append the text tokens. This works, but makes it difficult to adapt to different image resolutions.

    Seedream introduced Scaling RoPE4, which instead centres the image positional encoding in the middle of the image, treats the text tokens as a 2D shape [1, L], and then applies 2D RoPE to both text and image tokens. This worked better, but had cases where positions were confusable between text and image latents, meaning the model couldn’t properly differentiate them. The Qwen team improve on this by applying positional encoding across both dimensions of the text tokens, and concatenating the text along the diagonal of the image:

    This design allows MSRoPE to leverage resolution scaling advantages on the image side while maintaining functional equivalence to 1D-RoPE on the text side, thereby obviating the need to determine the optimal positional encoding for text.
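    Here is my reading of that scheme as a position-id sketch (illustrative only, not the official implementation): image patches get a 2D grid of coordinates centred on the image, and text tokens get matching (i, i) coordinates along the diagonal, which makes 2D RoPE over them behave like 1D RoPE for the text:

```python
import torch

def msrope_positions(h, w, text_len):
    # Image patches: a 2D grid of positions centred on the image
    ys, xs = torch.meshgrid(torch.arange(h) - h // 2,
                            torch.arange(w) - w // 2, indexing="ij")
    img_pos = torch.stack([ys.flatten(), xs.flatten()], dim=-1)  # (h*w, 2)

    # Text tokens: (i, i) positions on the diagonal, starting just past
    # the image extent so image and text positions never collide
    start = max(h - h // 2, w - w // 2)
    diag = torch.arange(start, start + text_len)
    txt_pos = torch.stack([diag, diag], dim=-1)                  # (text_len, 2)
    return img_pos, txt_pos
```

    Because the text positions are identical on both axes, applying 2D RoPE to them degenerates to the familiar 1D case, while the image side keeps full 2D resolution scaling.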

    The resolution independence is important for the training recipe. The model is progressively trained with images starting at 256×256 and increasing in steps up to 1328x, in a variety of aspect ratios. They follow up with post-training consisting of SFT on curated, high-quality image-text pairs and DPO against preference pairs judged by human raters5. Finally, they do a GRPO stage with a “reward model”: though it isn’t clear whether that’s based on the aforementioned preference data or some other secret sauce.

    While we don’t know how GPT-image is trained, this recipe certainly gave some comparable results. I was surprised to learn that the combination of a strong text and image encoding model plus MMDiT6 gives this level of steerability and fidelity. As usual, it’s exciting to have open models and papers to bring these concepts together! 

    1.  It’s w because the noise is a Wiener process, also known as standard Brownian motion. I am heavily conditioned to think of this as the motion in a cup of tea thanks to HHGTTG
      ↩︎
    2. Qwen 2.5-VL ↩︎
    3. Interestingly, a video one from Wan-2.1 ↩︎
    4. Roughly the same idea was already around as column-wise position encoding, as I understand it.
      ↩︎
    5.  The same prompt with two different seeds, and — if present — a reference image
      ↩︎
    6. And a lot of very carefully curated and programmatically generated data, to be fair
      ↩︎
  • The TPU book, on GPUs

    How to Think About GPUs | How To Scale Your Model

    The Jax “How To Scale Your Model” book is one of my favorite references for folks trying to get their head round pretraining1. It breaks down the performance characteristics of model training (often using Llama 3 as an example) in an incredibly clear way. The only slight limitation is that it is primarily focused on scaling LLMs on TPUs: interesting, but probably not your main platform target (unless you work at Deepmind). They just released a new chapter covering GPUs, and it’s also a great summary2.

    There are plenty of mildly snarky comments about design choices to leaven the reading too:

    Takeaway: in theory, NVIDIA SHARP (available on most NVIDIA switches) should reduce the cost of an AllReduce on B bytes from about 2 * B / W to B / W. However, in practice we only see a roughly 30% improvement in bandwidth. Since pure AllReduces are fairly rare in LLMs, this is not especially useful.

    1. Though they include a chapter on inference too! ↩︎
    2. Though if you haven’t read the rest of the book it moves pretty fast – definitely best to read through the whole thing and treat this as the appendix it is intended to be! ↩︎
  • Overthinking Everything

    Yann was taking victory laps on Threads a few weeks back over a recent paper, one of several lately that have explored how autoregressive models fare as the amount of information they are dealing with gets longer. His general complaint is that each token they generate can either push them towards the right answer or further away from it, and that the models are inherently bad at recovering if they drift too far off the correct trajectory.

    This “more might be worse” idea shows up anywhere folks are leveraging large context windows, and one of those1 is in agentic tasks. This post summarizes some research trying to measure the fall-off in chances of succeeding as task length2 increases.

    Is there a Half-Life for the Success Rates of AI Agents? — Toby Ord

    It provides indirect evidence that what really is going on under the hood is that tasks are made up of many sequential subtasks and the chance of succeeding at the whole requires succeeding at every individual component. Moreover, this suggests that the current AI agents are not very good at recovering from earlier mistakes.

    The framing they use is a constant hazard rate: each subtask is another roll of the dice, and if you roll a failure you don’t have much chance of recovering. So more (or longer) is pretty much always worse.
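    The arithmetic behind that framing is easy to check: with an independent per-subtask failure probability, overall success decays exponentially with task length and has a well-defined half-life (`h` and `n` here are just illustrative names):

```python
import math

# With a constant hazard rate h per subtask, success on an n-subtask job is
# (1 - h)^n: every extra subtask is another independent roll of the dice.
def success_rate(h, n):
    return (1 - h) ** n

def half_life(h):
    # Task length at which the success probability drops to 50%
    return math.log(0.5) / math.log(1 - h)
```

    For example, a 5% per-subtask failure rate gives a half-life of roughly 13.5 subtasks, which is why modest-looking hazard rates still crush long-horizon success.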

    One interesting aspect is that they also investigate the human failure rate, which increases over time, but much more slowly:

    This could indicate a different scaling behaviour of success rate with time horizon for humans compared to AI agents, which would be well worth investigating and may suggest important underlying mechanisms (e.g. that the humans were better at correcting earlier failed subtasks). If human performance scales differently with task length than AI agent performance, that would be an important result, suggesting that there is a notable inefficiency in the current AI paradigm.

    They’re testing with multiple runs, so these aren’t just models hitting problems they can’t do: it’s models hitting problems they can’t do given the specific tokens they have already generated.

    Agentic use cases aren’t the only situation where a model is generating responses that add to its context window. There were a lot of early observations after the release of O1 last year that thinking for longer on easy problems does not add value. This recent paper goes further and suggests an inverse scaling law: more time thinking makes the model worse.

    [2507.14417] Inverse Scaling in Test-Time Compute

    More specifically, they devised some stress tests: things like counting problems in the presence of distracting information, performing a regression where there is easy-to-understand but spurious data, and so on. Performance drops as the trace length increases. Different models are more susceptible to some failure modes than others, but the drop is consistent:

    Our experiments reveal distinct failure modes across model families—Claude models are particularly vulnerable to distraction from irrelevant information, while OpenAI o-series models show greater resistance but overfit to familiar problem framings. Extended reasoning amplifies different weaknesses: models overthink simple problems, shift attention to spurious correlations, and lose focus during Deduction tasks with constraint tracking.

    In contrast, Chroma’s recent Technical Report investigates how models do on single prompts, but of increasingly long contexts.

    Context Rot: How Increasing Input Tokens Impacts LLM Performance | Chroma Research

    Unlike in the agentic case, here all of the context is passed in at once, so the model isn’t poisoning its own context window through bad choices. It is still dealing with a large amount of content where it needs to choose which parts to attend to. Traditionally the test of long context has been needle-in-a-haystack evaluations: a relevant fact is hidden at different points in a long prompt and the test evaluates whether the model can effectively pull it out.

    The Chroma folks make the test a lot more nuanced — adding distractors3 and irrelevant content in both the broader context and the question. They find that performance consistently degrades as context increases.

    More broadly, our findings point to the importance of context engineering: the careful construction and management of a model’s context window. Where and how information is presented in a model’s context strongly influences task performance

    All of these papers rhyme with LeCun’s gripe about autoregressive transformers, which is (roughly!) that they (also) have a constant hazard rate on generating the “right” token.

    This is a very active area of research though. Process-based rewards in RL training make updates on each step vs only at the end. Multi-token prediction reduces the effective generation length or number of chances of misprediction. Summarizing context effectively compresses existing tokens, also reducing error rate.

    Similarly, if you have good verifiers4 you can use beam or tree searches to explore multiple reasoning paths during generation, which can reduce the error rate at the cost of more compute.

    The closest (LLMish) techniques to LeCun’s vision are things like the recent Hierarchical Reasoning Model that has a layer of persisting hidden state, but it’s still pretty experimental!

    As agentic and reasoning traces get longer, I’m sure we’ll see more entries documenting failure modes, and proposals for techniques to scale around them.

    1. And the one being referenced in the post! ↩︎
    2. In time — they characterize tasks based on how long it takes humans to do them, which is a good control factor ↩︎
    3. As in additional content related to the question, but that doesn’t give the answer. ↩︎
    4. Similar to process-based rewards this is somewhat pushing the problem to the ability to judge how well you are doing during the generation ↩︎
  • The Tools Are Made Up

    The Tools Are Made Up

    It has been hard to keep up with the flurry of strong agentic open-source models coming out of Chinese labs recently, including Moonshot’s Kimi K2, Z.ai’s GLM 4.5, and Qwen3-Coder1.

    Each of them pairs a clever pre-training recipe with verifiable-rewards post-training. Notably, Kimi and GLM both use the Muon optimizer, which seems to be gaining ground among the OSS labs at least. GLM’s description of the recipe is as follows:

    Our base model undergoes several training stages. During pre-training, the model is first trained on 15T tokens of a general pre-training corpus, followed by 7T tokens of a code & reasoning corpus. After pre-training, we introduce additional stages to further enhance the model’s performance on key downstream domains. Unlike the earlier pre-training stage on large-scale universal documents, these stages leverage medium-sized domain-specific datasets, including instruction data.

    The additional stages, which they refer to as mid-training, extend the context window and help grow capabilities in specific domains. They then move to post-training, with SFT over reasoning and agentic traces followed by RL with Verified Rewards2.

    The Kimi-K2 technical report goes into more detail about how to actually train for tool use. Unlike the others, Kimi is not a reasoning model, so it doesn’t use much in the way of extended thinking. The fact that this wasn’t required to reach strong tool-use/agentic capability feels pretty notable to me: most of the recent3 agentic models have been built on a reasoning foundation.

    What I really found interesting from the Kimi report was the amount of synthetic data the team used. This starts in pretraining: to extend high-quality data sources they rewrite them with another LLM, producing the same facts with new phrasing, instead of looping over the same “good” data for multiple epochs.

    Their approach to tool training takes this idea even further:

    We construct a comprehensive tool repository through two complementary approaches. First, we directly fetch over 3,000 real MCP (Model Context Protocol) tools from GitHub repositories, leveraging existing high-quality tool specifications. Second, we systematically evolve 82 synthetic tools through a hierarchical domain generation process. We begin with key categories (e.g., financial trading, software applications, robot control), then evolve multiple specific application domains within each category. Specialized tools are then synthesized for each domain, with clear interfaces, descriptions, and operational semantics. This evolution process produces over 20,000 synthetic tools.

    They analyze a set of real tools, generate some novel (but derivative) ones, then domain-specialize them for a lot of use cases.

    Once they have this tool zoo, the actual training loop involves:

    1. Randomly sample a subset of tools and give it to a new agent with a fresh system prompt. Generate tool-appropriate tasks with explicit success rubrics.
    2. Run an LLM-driven user simulator to drive the agent, while running the tools in sandbox that keeps state.
    3. Filter trajectories using another LLM as judge to keep only successful ones for SFT

    They’re using models at every stage to generate data and evaluate options. When it comes to the actual RL training, they lean on verifiable rewards wherever possible. They, and the Qwen folks, talk about their simulator setup for code4: thousands of sandbox environments.

    For software engineering tasks, we collect a vast amount of pull requests and issues from GitHub to build software
    development environment that consists of user prompts/issues and executable unit tests. This environment was built on a robust sandbox infrastructure, powered by Kubernetes for scalability and security. It supports over 10,000 concurrent sandbox instances with stable performance, making it ideal for both competitive coding and software engineering tasks

    The combination of very sophisticated synthetic data and operationally intense sandboxes seem like table stakes for the current agentic game, and one which a lot of labs have figured out. Feels very promising for a growth in capabilities of these models over time, particularly as we work out how best to distill them down to smaller sizes for inference.

    1. Which seems a very solid model, but they haven’t released a lot of extra details about how they got there. One interesting component of the release though was that they forked Gemini CLI to make a qwen-code tool that works with any OpenAI compatible API, and I had some success locally plugging it into the smaller Qwen3 (non-coder) releases in case you were looking for some offline agentic capabilities! ↩︎
    2. Then GLM is distilled between the RL and base version of the model, which apparently helps generalize. This seems like a fun and relatively simple way of smoothing out the learning. ↩︎
    3. Though Claude 3.5 wasn’t, and that is really the trend-setter here I guess! ↩︎
    4. And other tasks that allow fully verifiable rewards. They use other models to score softer domains like creative writing. ↩︎

  • Muon optimizer

    Last year Keller Jordan at OpenAI beat some of the existing NanoGPT speedrun records thanks to some optimizer improvements. Towards the end of the year the work was formalized as the Muon optimizer, and it’s making waves in a bunch of areas now.

    Friendship ended with Adam, now Muon is my best friend.
    From Elie Bakouch’s great pretraining presentation

    Jeremy Berenstein has written up a great post on how Muon is derived:

    To handle individual layers, our idea is to normalize the weight updates in a clever way so that, given the structure of the inputs, the weight updates automatically induce a desirable effect on the outputs. As a community, we have invested so much effort into thinking about how to normalize the activations: think batch norm, layer norm, RMS norm, etc. Why not also consider how the weights and weight updates influence the activations?

    Keller also wrote a detailed blog post when introducing the optimizer, calling out some open questions (like does it scale to very large training).

    As the posts cover, the optimizer isn’t totally general – it was designed for linear layers (and flattened convs), so you need to pair it with Adam for most usage.

    You can install the library from Github: pip install git+https://github.com/KellerJordan/Muon

    from muon import Muon
    
    muon_params = [p for p in model.parameters() if p.ndim >= 2]
    muon_param_ids = {id(p) for p in muon_params}
    adamw_params = [p for p in model.parameters() if id(p) not in muon_param_ids]
    # Create the optimizer
    optimizers = [Muon(muon_params, lr=0.001),
    torch.optim.AdamW(adamw_params, lr=0.001)]

    And step both optimizers in the training loop:

    for opt in optimizers:
        opt.step()

    It’s great to have innovation in this area, particularly with this kind of fundamental reasoning around why it works!

  • Byte-Latent Transformers

    Who needs a tokenizer anyway!

    [2412.09871] Byte Latent Transformer: Patches Scale Better Than Tokens

    This paper, from back in December last year, presents an interesting approach to handling raw byte sequences in LLMs without relying on tokenization.

    Vocab sizes for tokenizers have gone up over the last couple of years with attendant gains in usefulness, but this remains a particularly hand-tuned number in the training process. BLT proposes a method that processes raw UTF-8 byte sequences directly, leveraging a dynamic patching mechanism to group bytes into variable-length patches based on entropy.

    Higher-entropy regions receive more attention and shorter patches, while lower-entropy regions can be processed more efficiently.
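    As a toy sketch of the mechanism (not BLT’s actual patcher): given next-byte distributions from a small byte LM, you start a new patch wherever the entropy spikes above a threshold:

```python
import torch

def patch_boundaries(probs, threshold=2.0):
    # probs: (seq_len, 256) next-byte distributions from a small byte LM.
    # High entropy = the model is unsure what comes next = start a new patch.
    entropy = -(probs * probs.clamp_min(1e-9).log()).sum(-1)
    boundaries = [0]
    for i in range(1, probs.shape[0]):
        if entropy[i] > threshold:
            boundaries.append(i)
    return boundaries
```

    Predictable stretches (the inside of a common word, say) get folded into long patches, while surprising bytes each open a fresh patch and so get more of the global model’s compute.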

    There are conceptually three levels of processing:

    • Local Encoder: A small transformer stack encodes raw byte sequences into higher level representations, which are then structured into patches.
    • Latent Global Transformer: A standard large transformer model operating on patch-level representations
    • Local Decoder: The encoded patches are decoded back into byte sequences, using a cross-attention mechanism to reconstruct text.

    In the paper they show they can achieve parity in pretraining with a traditionally tokenized Llama at a similar parameter count, while being more robust and offering some inference-time performance gains. The patching approach allows allocating compute where it is needed most.

    Retrofitting existing models

    One of the ideas I found most interesting is starting with a traditionally pretrained model. The paper discusses using the main transformer layers from Llama and training the byte latent approach successfully.

    I gave the approach a go with a simplified local encoder, entropy and patching approach, and took the transformer layers from Qwen 2.5 3B, a strong model that could still be trained locally (no corporate resources were harmed, etc).

    The basic approach was replacing the tokenizer, adding a small transformer plus patch pooling based on a local entropy measure to generate patches, then cross-attending in some of the Qwen layers. It’s training a new encoder while leveraging Qwen as the backbone of the global transformer, and adding new cross-attention params to make it the decoder too, with the embedding layers at each end chopped off – so a significant domain shift. For inference I leverage the same patch generation process to try and generate effective tokens.

    You can find my Torchtune recipe on GitHub, running through the Alpaca dataset. Thus far I’m still training so while loss is improving, I have no idea whether it will turn into something useful. The fact that there is something trainable is fun though, and I have hopes that this kind of technique will lead to some breakthroughs in tokenizer-free models in the future!

  • Streaming DiLoCo

    [2501.18512] Streaming DiLoCo with overlapping communication: Towards a Distributed Free Lunch

    Every paper in this series has been required reading in (very) large language model training. The basic theme is that model training requires gang-semantics, where a large cluster of accelerators need to do coordinated work together in order to make progress, which gets progressively more expensive to enable and harder to do reliably as the number of devices in the cluster increases.

    The prior papers explored ways of splitting up the training into an inner loop where the model trained fairly traditionally, and an outer optimization loop that aggregated the differences and updated based on them – the outer optimizer works on the deltas between parameter values at the sync point. The outer optimizer still runs on the same cluster as all the inner loops, but it means that only at the “outer” sync point do you need to do synchronization between all the devices. This loosens the coupling between devices and allows introducing failure domains.

    This paper addresses the challenge that when you do synchronize you still have to send data for all the parameters, which requires a lot of bandwidth and can block forward progress. Streaming DiLoCo divides the model layers into shards and syncs them at different times (in practice, every 5 inner optimizer steps), lowering the peak bandwidth required. Shards are taken in a strided fashion rather than sequentially, which mildly improves stability and performance.
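    My reading of the strided scheme, as a sketch: layer i goes to fragment i % p, so each fragment spans the whole depth of the network rather than a contiguous block of layers:

```python
# Assign num_layers transformer layers to num_fragments sync groups in a
# strided pattern; each fragment is synced on its own schedule offset.
def strided_fragments(num_layers, num_fragments):
    frags = [[] for _ in range(num_fragments)]
    for i in range(num_layers):
        frags[i % num_fragments].append(i)
    return frags
```

    Compared to contiguous chunks, every sync then touches layers spread across the network, which is the property the paper credits for the mild stability and performance gain.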

    To further reduce bandwidth, the communication between devices for the outer loop is done in 4-bit floating point! They still do the accumulation/optimization in 32-bit, but they didn’t see any performance loss from the lower precision for comms. All of these comms are overlapped with the inner-loop training, which helps minimize stalls.

  • Grouped GEMMs and MoE

    One of the challenges discussed in the DeepSeek-V3 paper is the availability of grouped GEMM kernels, which avoid the performance impact of many small kernel launches on GPUs. DeepSeek uses many small experts (256!) rather than a few larger ones, which exacerbates this problem.

    Mixture of Experts models introduce multiple experts in the feed-forward portion of each transformer layer. Rather than having a single shared set of experts, each layer has its own. Each batch of tokens first passes through the standard attention block, followed by a lightweight linear layer with a softmax function¹. This determines, for each token, which experts it should be sent to. Tokens designated for each expert are gathered and sent to the appropriate device via an all-to-all operation, as experts are typically distributed across different devices.
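    The routing step can be sketched in a few lines of numpy. This is an illustrative toy (the names and dimensions are mine, not from any particular model): a linear gate over the post-attention hidden states, a softmax over the experts, and a top-k pick per token.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    num_tokens, d_model, num_experts, top_k = 8, 16, 4, 2

    hidden = rng.standard_normal((num_tokens, d_model))   # post-attention states
    w_gate = rng.standard_normal((d_model, num_experts))  # the lightweight linear layer

    logits = hidden @ w_gate
    probs = np.exp(logits - logits.max(axis=-1, keepdims=True))
    probs /= probs.sum(axis=-1, keepdims=True)            # softmax over experts, per token

    # Each token is dispatched to its top-k experts.
    top_experts = np.argsort(-probs, axis=-1)[:, :top_k]
    for t in range(num_tokens):
        print(f"token {t} -> experts {top_experts[t].tolist()}")
    ```

    The all-to-all then physically moves each token to the device(s) hosting its chosen experts.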

    Once the tokens are on the device with the right expert(s) we need to execute the matrix multiplies for each expert on its set of tokens. The obvious solution is just to loop through and launch each GEMM, but because these are small (a small number of tokens, and smaller expert matrices) the kernel-launch overhead ends up dominating the runtime. A grouped GEMM allows you to do this process on-device, taking in a list of tokens and experts and executing all the GEMMs with a single kernel launch.

    This differs from batched GEMMs in that the input shapes can vary – different experts might receive different numbers of tokens.
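    A pure-numpy sketch of the semantics (not a kernel, and the shapes here are invented for illustration): each group pairs a variable-sized batch of tokens with one expert’s weight matrix. The loop below is the naive version – one GEMM, and on a GPU one kernel launch, per expert – which is exactly what a grouped GEMM collapses into a single launch.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    d_model, d_ff = 32, 64
    token_counts = [5, 0, 17, 3]  # experts can receive different numbers of tokens

    experts = [rng.standard_normal((d_model, d_ff)) for _ in token_counts]
    inputs = [rng.standard_normal((n, d_model)) for n in token_counts]

    # Naive many-launch version: one GEMM per (token batch, expert weight) pair.
    outputs = [x @ w for x, w in zip(inputs, experts)]
    for i, out in enumerate(outputs):
        print(f"expert {i}: {inputs[i].shape} @ {experts[i].shape} -> {out.shape}")
    ```

    Note that expert 1 receives zero tokens – a grouped GEMM has to handle these ragged, per-group shapes, which is what rules out a plain batched GEMM.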

    There are example implementations available, including a Triton tutorial that walks through a simple grouped GEMM kernel, as well as an example in CUTLASS.

    1. In switch MoEs at least, but there are similar gating networks elsewhere. ↩︎

  • Idle Speculation on GPU Capacity Management

    Training large models today is tightly coupled to specific hardware. This makes moving workloads across systems, or abstracting the hardware away, almost impossible without losing efficiency, which is why you don’t see much uptake of the kind of cloud-like abstractions common elsewhere. Two factors drive this:

    1. Gang semantics: Large models rely on precise scheduling for memory, networking, and compute to achieve high utilization. These tend to be model and hardware specific, and are hard to abstract.
    2. Compute efficiency: Compute ops like GEMMs should be less exposed to model quirks and are already more abstractable, but a lot of custom work is done on number-format support and shape optimizations for specific models.

    My entirely unfounded prediction is that this stuff is getting easier, and we will see more standardization over the next few years.

    • Slow Down in Number Formats: Research like “Scaling Laws for Precision” (https://arxiv.org/pdf/2411.04330) shows a tradeoff between precision and parameter count. There’s a lot of folk knowledge in getting formats like FP8 stable, and it’s not totally clear how much FP4/MXFP4 and their ilk will add: my guess would be less, and they will be used in more targeted (and perhaps predictable) ways. Either way, I expect things to get less choppy and more predictable on the compute side, eventually.
    • Parameter Stabilization: Model size growth may well plateau, either for fundamental reasons (e.g. say we have enough model capacity in the 2-4tn params range for all the data) or to become more aligned with practical cluster sizes for scale-out networking (e.g., 72 GPUs with Blackwell). Whether this is for a model or a set of experts in an MoE I don’t know – and it feels like there is room for some variants of MoE routing architectures if we find that pattern particularly successful.
    • Shift To Test-Time: As training stabilizes, focus will shift to test-time compute — the pain points there being handling longer sequences and optimizing KV caches, which feel like more general/repeatable problems. I see this in part as moving a chunk of the pool of “large job expertise” from pretraining focused to inference focused, which then opens the door to the benefit of the tools/standards to help scale on the pretrain side.

  • HuatuoGPT-o1, domain specific reasoning

    https://arxiv.org/abs/2412.18925

    A good paper on creating a domain-specific (medical, in this case) reasoning LLM. I had been somewhat vague on how creating a reasoning model actually worked, and this paper felt very clear (as to one recipe at least). As I read it:

    Start with a base model you are fine-tuning for reasoning (in their case Qwen 2.5) and a strong general model for verification and data generation (in this case GPT-4o).

    1. Gather a bunch of problems with ground-truth answers. In this case they had medical problems. Take about half of them for fine tuning data.
    2. Prompt the general model to think step-by-step for each problem to get some initial reasoning.
    3. Use the general model to evaluate whether each answer reached is correct vs ground truth.
    4. If the reasoning gets to the wrong answer (which is likely!), randomly pick a search strategy – e.g. backtracking (start from an earlier step), critique-and-correct the existing line of reasoning, explore a new path distinct from the one given, or verify the current reasoning. Again, this is done by prompting the general model.
    5. Each problem gets three tries to reach a correct answer before giving up. At the end of this process we have a series of reasoning traces that get to the correct answer. Pair these with the problems for use in the next steps, but first rewrite them into a chain of thought incorporating “hmms” and other smooth transitions between thoughts, via prompting the general model.
    6. Fine tune the base model on the problem traces.
    7. Do RL (PPO) on the fine-tuned model with the other half of the problems, rewarding the model when it is correct with reasoning (verified by the general model), giving a small reward for an incorrect answer that still includes reasoning, and giving 0 when no reasoning is provided (regardless of the answer). Also constrain with KL divergence from the fine-tuned model, as is standard in RLHF etc.
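    The reward shaping in step 7 is simple enough to sketch directly. The exact reward values below are illustrative (my assumption, not taken from the paper): full reward for a correct answer with reasoning, a small reward for a wrong answer that still shows reasoning, and zero for a missing trace regardless of correctness.

    ```python
    def reward(has_reasoning: bool, verifier_says_correct: bool) -> float:
        """Sketch of the PPO reward signal: reasoning is a hard requirement."""
        if not has_reasoning:
            return 0.0  # no trace: zero, even if the final answer happens to be right
        return 1.0 if verifier_says_correct else 0.1

    print(reward(True, True), reward(True, False), reward(False, True))
    ```

    The interesting design choice is that a correct answer without reasoning scores worse than a wrong answer with reasoning – the recipe is optimizing for the trace, not just the answer.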

    This seems like a pretty reproducible recipe and the results seem strong. They include the prompts they use in the appendix, helpfully, and some good ablations/practical notes.

    1. Effectiveness of Complex CoTs: We further examined the impact of different types of Chain-of-Thought (CoT) reasoning. The results show that direct learning of the response (ŷ) performs the worst, while simple CoT (y₀, e₀) offers only a little benefit. In contrast, Complex CoT (ŷ, ê) significantly improves performance by an average of 4.3 points. This demonstrates the importance of teaching models to refine their answers with reflection.

    One other interesting note on sourcing data was that they used GPT-4o to filter a number of multiple choice questions to generate the set of problems and ground truth. They used it to evaluate whether questions were complex enough to require reasoning, and whether they had a single clear and unambiguous answer. I am guessing it is a lot easier to get multiple choice question banks than other kinds, so this is a clever approach.

  • TorchFT: Fault tolerant training

    https://github.com/pytorch-labs/torchft

    The repo from Tristan Rice and Chirag Pandya’s poster at the PyTorch conference has been continually updated with refinements and improvements.

    torchft implements a lighthouse server that coordinates across the different replica groups, plus a per-replica-group manager and fault-tolerance library that can be used in a standard PyTorch training loop. This allows membership changes at training-step granularity, which can greatly improve efficiency by avoiding stop-the-world restarts on errors.

    There are lots of clever techniques in here. They center around the idea of having replica groups which serve as the failure domain, rather than the whole training job. In a loose sense, this means that when there is a failure you simply drop the replica group it’s in and carry on with the rest to the next batch, adding the replica group back in when it’s recovered.

    To make that possible, there’s custom comms that allow for error handling, health monitoring of individual processes, and fast checkpointing that allows recovered workers to be quickly added back to replica sets.
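    This is emphatically not the torchft API – just a toy simulation of the replica-group idea, with names and numbers I’ve made up: a failure knocks out only the group it occurs in, and that group rejoins at a later step boundary instead of restarting the whole job.

    ```python
    def run_steps(groups, failures, recovery_steps=2, total_steps=7):
        """failures maps step -> failing group id. Returns (step, active groups) pairs."""
        rejoin_at = {}  # group id -> first step at which it participates again
        history = []
        for step in range(1, total_steps + 1):
            if step in failures:
                rejoin_at[failures[step]] = step + recovery_steps
            active = [g for g in groups if rejoin_at.get(g, 0) <= step]
            history.append((step, active))
        return history

    # Group 1 fails at step 3 and rejoins at step 5; groups 0 and 2 never stop.
    for step, active in run_steps(groups=[0, 1, 2], failures={3: 1}):
        print(step, active)
    ```

    The real system layers fast checkpointing and health monitoring on top so that the recovered group rejoins with up-to-date weights, but the step-granularity membership change is the core idea.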