Match flows, not scores

Fundamentals of flow matching, path straightening and reflow.
Author

Ayan Das

Published

April 26, 2024

While a plethora of high-quality text-to-image diffusion models (Rombach et al. 2022; Ramesh et al. 2022; Podell et al. 2024) emerged in the last few years, the credit mostly goes to the tremendous engineering efforts put into them. The fundamental theory behind them, however, remained largely untouched. Well, until recently. The traditional “Markov-chain” (Ho, Jain, and Abbeel 2020) or “SDE” (Song et al. 2021) perspective is now being replaced by an arguably simpler and more flexible alternative. Under the new approach, we ditch the usual notion of a stochastic mapping as the model and adopt a deterministic mapping instead. It turns out that such models, although they existed before, never took off due to the absence of a scalable learning algorithm. In this article, we provide a relatively easy and visually guided tour of “Flow Matching”, followed by ideas like “path straightening” & “reflow”. It is worth mentioning that this very idea powered the most recent version of Stable Diffusion (i.e. SD3 by Esser et al. (2024)).

Some prerequisites

Note that this article assumes you are familiar with the core ideas of Diffusion Models. So if you haven’t already, please visit my previous blogs (Blog #1, Blog #2 & Blog #3) on Diffusion/Score models.

Every scalable generative model follows a “sampling first” philosophy, i.e. it is defined in terms of a (learnable) mapping (say $F_\theta$) that maps (known) pure noise $\epsilon$ to the desired data $x$

$$x = F_\theta(\epsilon), \quad \text{where } \epsilon \sim \mathcal{N}(0, I) \tag{1}$$

The learning objective is crafted in a way that $F_\theta$ induces a model distribution over $x$, i.e. $p_\theta(x)$, that matches a given $q_{\text{data}}(x)$ as closely as possible. That is, of course, using samples of $q_{\text{data}}(x)$ only. Different generative model families learn (the parameters of) this mapping in different ways.

Brief overview of Diffusion Models

The generative process

Although there are multiple formalisms to describe the underlying theory of Diffusion Models, the one that gained traction recently is the “Differential Equations” view, mostly due to Song et al. (2021). Under this formalism, Diffusion’s generative mapping (in Equation 1) can be realized by integrating a differential equation in time $t$. Song et al. (2021) also showed that there are two equivalent generative processes, one stochastic and another deterministic, that lead to the exact same noise-to-data mapping.

$$dx_t = \Big[\, f(t)\,x_t - g^2(t)\underbrace{\nabla_{x_t}\log p_t(x_t)}_{\approx\, s_\theta(x_t,\,t)} \,\Big]\,dt + g(t)\,d\bar{w} \tag{2}$$

$$dx_t = \Big[\, f(t)\,x_t - 0.5\, g^2(t)\underbrace{\nabla_{x_t}\log p_t(x_t)}_{\approx\, s_\theta(x_t,\,t)} \,\Big]\,dt \tag{3}$$

The hyper-parameters $f(t)$ and $g(t)$ are scalar functions of time.

where $p_t(x_t)$ is a noisy version of the data density, induced by a (known & fixed) forward process over time $t$

1 Known as the forward process marginal

$$dx_t = f(t)\,x_t\,dt + g(t)\,dw$$

$$x_t = \alpha(t)\,x_0 + \sigma(t)\,\epsilon \tag{4}$$

$$p_t(x_t|x_0) = \mathcal{N}\big(x_t;\ \alpha(t)\,x_0,\ \sigma(t)^2 I\big)$$
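As a concrete illustration, here is a minimal PyTorch-style sketch of drawing $x_t$ via Equation 4. The callables `alpha` and `sigma` are placeholders for whichever schedule one picks; the interface is hypothetical rather than taken from any particular codebase.

```python
import torch

def sample_xt(x0, t, alpha, sigma):
    """Draw x_t ~ p_t(x_t | x_0) = N(alpha(t) x_0, sigma(t)^2 I), as in Equation 4.

    x0    : (batch, ...) tensor of clean data samples
    t     : (batch,)     tensor of timesteps in [0, 1]
    alpha : callable mapping t -> alpha(t)  (signal schedule)
    sigma : callable mapping t -> sigma(t)  (noise schedule)
    """
    eps = torch.randn_like(x0)                    # epsilon ~ N(0, I)
    view = (-1,) + (1,) * (x0.dim() - 1)          # broadcast scalars over data dims
    a, s = alpha(t).view(view), sigma(t).view(view)
    return a * x0 + s * eps, eps
```

For instance, `sample_xt(x0, t, lambda t: 1 - t, lambda t: t)` uses the straight schedule that will appear later in the post.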

The relationship between $\{f(t), g(t)\}$ and $\{\alpha(t), \sigma(t)\}$ is as follows.

$$f(t) = \frac{d}{dt}\log\alpha(t), \quad\text{and}\quad g(t)^2 = 2\,\sigma(t)^2\,\frac{d}{dt}\log\frac{\sigma(t)}{\alpha(t)} \tag{5}$$
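A quick way to see where Equation 5 comes from (a standard fact about linear SDEs, sketched here for completeness): plugging the marginal $x_t = \alpha(t)\,x_0 + \sigma(t)\,\epsilon$ into the forward SDE above gives ordinary differential equations for the mean scale and the variance,

$$\frac{d\alpha(t)}{dt} = f(t)\,\alpha(t), \qquad \frac{d\sigma^2(t)}{dt} = 2\,f(t)\,\sigma^2(t) + g^2(t),$$

and solving these two relations for $f(t)$ and $g(t)^2$ recovers exactly the expressions in Equation 5.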

Despite being able to sample $x_t$ using Equation 4, the quantity $\nabla_{x_t}\log p_t(x_t)$ in Equation 2 or 3 is not analytically computable and is therefore learned with a neural function.

2 The ‘score’ of the marginal

The learning objective

A parametric neural function $s_\theta(x_t, t)$ is regressed using a different regression target than the ‘true’ one

$$\mathcal{L}(\theta) = \mathbb{E}_{t,\ x_0\sim q_{\text{data}}(x),\ x_t\sim p_t(x_t|x_0)}\Big[\big\|s_\theta(x_t,t) - \nabla_{x_t}\log p_t(x_t|x_0)\big\|^2\Big] \tag{6}$$
This objective is known as “Denoising Score Matching” (Vincent 2011).

$$\mathcal{L}(\theta) = \mathbb{E}_{t,\ x_0\sim q_{\text{data}}(x),\ x_t\sim p_t(x_t|x_0)}\Big[\big\|s_\theta(x_t,t) - \nabla_{x_t}\log p_t(x_t)\big\|^2\Big] \tag{7}$$
This is the true objective, which is not directly computable.

It was proved initially by Vincent (2011) and later re-established by Song et al. (2021) that using $\nabla_{x_t}\log p_t(x_t|x_0)$ (as in Equation 6) as an alternative regression target still leads to an unbiased estimate of the true $\nabla_{x_t}\log p_t(x_t)$. The figure below shows, graphically, the quantity (red arrows) our parametric score model regresses against at an arbitrary timestep $t$.

3 Please note that $\nabla_{x_t}\log p_t(x_t|x_0) = -\frac{x_t - \alpha_t x_0}{\sigma_t^2} = -\frac{\epsilon}{\sigma_t}$

The parametric model regresses against the vector that points towards the original data point.
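To make Equation 6 concrete, here is a hypothetical PyTorch-style sketch of the denoising score matching loss, reusing `sample_xt` from the sketch above; `score_model(x_t, t)` is an assumed network interface, not a reference implementation.

```python
import torch
import torch.nn.functional as F

def dsm_loss(score_model, x0, alpha, sigma):
    """Denoising score matching (Equation 6): regress s_theta(x_t, t) onto
    the conditional score grad_{x_t} log p_t(x_t | x_0) = -eps / sigma(t)."""
    t = torch.rand(x0.shape[0], device=x0.device)     # t ~ U(0, 1)
    xt, eps = sample_xt(x0, t, alpha, sigma)           # forward noising (Equation 4)
    s = sigma(t).view((-1,) + (1,) * (x0.dim() - 1))
    target = -eps / s                                   # conditional score target
    return F.mse_loss(score_model(xt, t), target)
```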

Matching flows, not scores

The idea of re-interpreting reverse diffusion as “flow” stems from a holistic observation: given Equation 3, an ODE dynamics is guaranteed to exist which induces a deterministic mapping from noise to any given data distribution. In the diffusion model framework, we were only learning a part of the dynamics, not the dynamics itself.

4 This term comes from Continuous Normalizing Flows (CNFs)

$$dx_t = \underbrace{\Big[\, f(t)\,x_t - 0.5\, g^2(t)\underbrace{s_\theta(x_t,t)}_{\text{Instead of this ...}}\,\Big]}_{\text{... why not learn this?}}\,dt \tag{8}$$

A deterministic model

Following the observation above, we can assume a generative model realized by a deterministic ODE simulation, whose parametric dynamics subsumes $f(t)$, $g(t)$ and the parametric score $s_\theta(x_t, t)$

$$dx_t = v_\theta(x_t, t)\,dt, \quad \text{where } x_1 := \epsilon \sim \mathcal{N}(0, I) \tag{9}$$

It turns out that models like Equation 9 have already been investigated in the generative modelling literature under the name of Continuous Normalizing Flows (Chen et al. 2018). However, these models never made it to the scalable realm due to their “simulation based” learning objective. The dynamics is often called the “velocity” or “velocity field” and is denoted by $v$.

5 One must integrate or simulate the ODE during training.

6 It is the time derivative of position.

7 .. or as it is now called, the ‘Flow Matching’ loss

Upon inspecting the pair of score matching objectives (Equations 6 and 7), it is not particularly hard to sense the existence of an equivalent ‘velocity matching’ loss for the new flow model in Equation 9.

An equivalent objective

To see the exact form of the flow matching loss, simply try recreating the ODE dynamics of Equation 8 within Equation 6 by appending some extra terms that cancel out

$$\mathcal{L}(\theta) = \mathbb{E}_{t,\ x_0\sim q_{\text{data}}(x),\ x_t\sim p_t(x_t|x_0)}\left[\left\|\frac{2}{g^2(t)}\left\{\left(f(t)\,x_t - \frac{1}{2}g^2(t)\,s_\theta(x_t,t)\right) - \left(f(t)\,x_t - \frac{1}{2}g^2(t)\,\nabla_{x_t}\log p_t(x_t|x_0)\right)\right\}\right\|^2\right]$$

The expression within the first set of parentheses is equivalent to what we now call the parametric velocity/flow, i.e. $v_\theta(x_t, t)$. The expression in the second set of parentheses is a proxy regression target (let’s call it $v(x_t, t)$), analogous to $\nabla_{x_t}\log p_t(x_t|x_0)$ in score matching (see Equation 6). With the help of the $\{f, g\} \leftrightarrow \{\alpha, \sigma\}$ conversion (see Equation 5), it is relatively easy to show that $v(x_t, t)$ can be written as the time-derivative of the forward sampling process

$$\begin{aligned}
v(x_t,t) &\triangleq f(t)\,x_t - \frac{1}{2}g^2(t)\,\nabla_{x_t}\log p_t(x_t|x_0) \\
&= f(t)\big(\alpha(t)x_0 + \sigma(t)\epsilon\big) - \frac{1}{2}g^2(t)\left(-\frac{\epsilon}{\sigma(t)}\right) \\
&= \frac{\dot{\alpha}(t)}{\alpha(t)}\big(\alpha(t)x_0 + \sigma(t)\epsilon\big) + \sigma(t)\left(\frac{\dot{\sigma}(t)}{\sigma(t)} - \frac{\dot{\alpha}(t)}{\alpha(t)}\right)\epsilon \\
&= \dot{\alpha}(t)\,x_0 + \frac{\dot{\alpha}(t)}{\alpha(t)}\sigma(t)\,\epsilon - \frac{\dot{\alpha}(t)}{\alpha(t)}\sigma(t)\,\epsilon + \dot{\sigma}(t)\,\epsilon
\end{aligned}$$

$$v(x_t,t) = \dot{\alpha}(t)\,x_0 + \dot{\sigma}(t)\,\epsilon = \dot{x}_t$$

To summarize, the following is the general form of the flow matching loss

$$\mathcal{L}_{FM}(\theta) = \mathbb{E}_{t,\ x_0\sim q_{\text{data}}(x),\ x_t\sim p_t(x_t|x_0)}\Big[\big\|v_\theta(x_t,t) - \underbrace{\big(\dot{\alpha}(t)\,x_0 + \dot{\sigma}(t)\,\epsilon\big)}_{\dot{x}_t}\big\|^2\Big] \tag{10}$$

In practice, as proposed by many (Lipman et al. 2022; Liu, Gong, and Liu 2022), we discard the weighting term, just like the Diffusion Model’s simple loss popularized by Ho, Jain, and Abbeel (2020). We may think of $\dot{x}_t$ as a stochastic velocity induced by the forward process. The model, when trained with Equation 10, tries to mimic this stochastic velocity, but without having access to $x_0$.
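A hypothetical PyTorch-style sketch of a training step with Equation 10 (weighting discarded, as above). The `velocity_model` and the schedule derivatives `d_alpha`, `d_sigma` are assumed interfaces for illustration.

```python
import torch
import torch.nn.functional as F

def flow_matching_loss(velocity_model, x0, alpha, sigma, d_alpha, d_sigma):
    """Flow matching (Equation 10): regress v_theta(x_t, t) onto the stochastic
    velocity x_dot_t = alpha'(t) x_0 + sigma'(t) eps."""
    t = torch.rand(x0.shape[0], device=x0.device)
    eps = torch.randn_like(x0)
    view = (-1,) + (1,) * (x0.dim() - 1)
    a, s = alpha(t).view(view), sigma(t).view(view)
    da, ds = d_alpha(t).view(view), d_sigma(t).view(view)
    xt = a * x0 + s * eps                  # forward sample (Equation 4)
    target = da * x0 + ds * eps            # stochastic velocity x_dot_t
    return F.mse_loss(velocity_model(xt, t), target)
```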

Be careful with time direction

While the learning objective regresses against $\dot{x}_t$, we need $-\dot{x}_t$ for sampling. The extra negative is induced automatically while simulating in reverse time ($dt$ being negative). It is therefore equivalent to think of the regression target as $-\dot{x}_t$.

The parametric model regresses against the instantaneous stochastic velocity at any point $x_t$ on the stochastic path.

The minimum & its interpretation

The loss in Equation 10 can be shown to be equivalent to

8 Their gradients are equal, but the losses are not.

$$\mathcal{L}_{FM}(\theta) = \mathbb{E}_{t,\ x_t\sim p_t(x_t)}\Big[\big\|v_\theta(x_t,t) - \mathbb{E}_{x_0\sim p(x_0|x_t)}[\dot{x}_t]\big\|^2\Big] \tag{11}$$

which implies that the loss reaches its minimum when the model perfectly learns

9 This is a typical MMSE estimator.

$$v^*(x_t,t) \triangleq \mathbb{E}_{x_0\sim p(x_0|x_t)}[\dot{x}_t]$$

Hence, $v_\theta$ can be regarded as a variational approximation of the posterior velocity $v^*$. Note that $v^*$ is non-causal, i.e. it has access to the true data distribution $q_{\text{data}}(x_0)$. The model $v_\theta$, on the other hand, must be causal. Hence, the learning process “causalizes” (Liu, Gong, and Liu 2022) the stochastic path.

The forward stochastic paths $x_t$ are also overlapping, while the model is a function. The expectation $\mathbb{E}_{x_0\sim p(x_0|x_t)}[\cdot]$ averages the stochastic velocity over all possible real data points $x_0$ compatible with $x_t$, leading to the smooth velocity field learned by the model.

10 A function cannot have multiple values at one given point

The optimally learned generative process (Equation 9) can therefore be expressed as

$$dx_t = v^*(x_t, t)\,dt, \quad \text{where } x_1 := \epsilon \sim \mathcal{N}(0, I)$$
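Assuming a trained `velocity_model`, here is a minimal Euler discretization of this ODE, integrating from $t = 1$ (noise) down to $t = 0$ (data); the step count and the plain Euler rule are illustrative choices, not a prescribed sampler.

```python
import torch

@torch.no_grad()
def euler_sample(velocity_model, shape, n_steps=100, device="cpu"):
    """Integrate dx_t = v_theta(x_t, t) dt from t = 1 (noise) down to t = 0 (data)."""
    x = torch.randn(shape, device=device)     # x_1 := eps ~ N(0, I)
    dt = -1.0 / n_steps                        # reverse time, so dt is negative
    for i in range(n_steps):
        t = torch.full((shape[0],), 1.0 + i * dt, device=device)
        x = x + velocity_model(x, t) * dt      # plain Euler step
    return x
```

For example, `euler_sample(velocity_model, (16, 3, 32, 32))` would produce 16 samples, assuming the model was trained on data of that shape; fancier ODE solvers can of course be substituted.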

Straightening & ReFlow

t-independent stochastic velocity

With the general theory understood, it is now easy to conceive the idea of straight flows. It simply refers to the following special case

$$\alpha(t) = 1 - t, \qquad \sigma(t) = t$$

which implies that the forward process and its time derivative are

$$x_t = (1 - t)\,x_0 + t\,x_1, \qquad \dot{x}_t = -x_0 + x_1$$

What is important is that the stochastic velocity is independent of time, and the $x_t$ trajectory itself is a straight line.

11 Not “constant”: it still depends on the data $x_0$ and the noise sample $\epsilon$, which are random variables.
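Under this schedule, the general sketch above collapses to a particularly simple training step; a minimal hypothetical version (same assumed `velocity_model` interface as before):

```python
import torch
import torch.nn.functional as F

def rectified_flow_loss(velocity_model, x0):
    """Flow matching with alpha(t) = 1 - t, sigma(t) = t. The regression target
    -x_0 + x_1 does not depend on t (though it still depends on the (x_0, x_1) pair)."""
    t = torch.rand(x0.shape[0], device=x0.device)
    x1 = torch.randn_like(x0)                          # x_1 := eps, pure noise
    tt = t.view((-1,) + (1,) * (x0.dim() - 1))
    xt = (1 - tt) * x0 + tt * x1                       # straight-line interpolation
    return F.mse_loss(velocity_model(xt, t), x1 - x0)  # target = x_dot_t
```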

This, however, does not mean that the learned model will produce straight paths; it only means we are supervising the model to follow paths that are as straight as possible. An analogous illustration was provided by Liu, Gong, and Liu (2022), which is a bit more descriptive.

The stochastic straight paths (left) used for training had crossovers. However, the learned model (right) resolves them. (Taken from Liu, Gong, and Liu (2022))

This learning problem effectively turns an independent stochastic ‘coupling’ $q_{\text{data}}(x_0)\,\mathcal{N}(x_1; 0, I)$ into a deterministic coupling $p(z_0, z_1)$ with some dependence. This process $(x_0, x_1) \mapsto (z_0, z_1)$ has been termed “Rectification” by Liu, Gong, and Liu (2022). Samples from this deterministic coupling can be drawn by drawing a noise sample $z_1 := \epsilon \sim \mathcal{N}(\epsilon; 0, I)$ and then simulating the flow in Equation 9 with the learned model

$$z_0 = z_1 + \int_{t=1}^{t=0} v_\theta(z_t, t)\,dt.$$

It can be proved that the samples $(z_0, z_1)$ are, on average, closer to each other than those of $(x_0, x_1)$.

$$\begin{aligned}
\mathbb{E}_{p(z_0,z_1)}\big[\|z_0 - z_1\|\big] &= \mathbb{E}_{p(z_0,z_1)}\left[\left\|\int_1^0 v^*(z_t,t)\,dt\right\|\right] \\
&\leq \mathbb{E}_{p(z_0,z_1)}\left[\int_0^1 \|v^*(z_t,t)\|\,dt\right] \\
&= \mathbb{E}_{p(x_0,x_1)}\left[\int_0^1 \|v^*(x_t,t)\|\,dt\right] \\
&= \mathbb{E}_{p(x_0,x_1)}\left[\int_0^1 \big\|\mathbb{E}[\dot{x}_t \mid x_t]\big\|\,dt\right] \\
&\leq \mathbb{E}_{p(x_0,x_1)}\left[\int_0^1 \mathbb{E}\big[\|\dot{x}_t\| \mid x_t\big]\,dt\right] \\
&= \int_0^1 \mathbb{E}_{p(x_0,x_1)}\Big[\mathbb{E}\big[\|x_0 - x_1\| \mid x_t\big]\Big]\,dt \\
&= \mathbb{E}_{p(x_0,x_1)}\big[\|x_0 - x_1\|\big]
\end{aligned}$$

The proof uses the following

  1. The fact that $\|\cdot\|$ is a convex cost function.
  2. Convex functions can be exchanged with $\mathbb{E}$ according to Jensen’s inequality.
  3. $z_t$ and $x_t$ have the same marginals and can be exchanged as random variables.
  4. It assumes our model learns the perfect $v^*$.
  5. The law of iterated expectation.

Please see section 3.2 of Liu, Gong, and Liu (2022) for more details on the proof.

Reflow

The process of rectification, however, does not guarantee (as you can see in the figure above) that the new coupling has straight paths between each pair. Liu, Gong, and Liu (2022) suggested the “Reflow” procedure, which is nothing but learning a new model using samples of $p(z_0, z_1)$.

$$\mathcal{L}_{FM}(\phi) = \mathbb{E}_{t,\ (z_0,\epsilon)\sim p(z_0,\epsilon)}\Big[\big\|v_\phi(z_t, t) - (-z_0 + \epsilon)\big\|^2\Big]$$

The ‘reflow’-ed coupling $(z_0^{(2)}, \epsilon)$ is guaranteed to have paths straighter than $(z_0^{(1)}, \epsilon)$, where the superscript denotes the round of rectification. One can, in fact, repeat this procedure as many times as desired. Another figure from Liu, Gong, and Liu (2022) perfectly demonstrates successive reflows

Successive reflows produce more and more straight paths.
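Here is a hypothetical sketch of one reflow round under the same assumed interfaces: simulate the trained flow to collect pairs from the deterministic coupling $p(z_0, z_1)$, then train a fresh model $v_\phi$ on straight lines between those pairs.

```python
import torch
import torch.nn.functional as F

@torch.no_grad()
def rectified_coupling(velocity_model, n, data_shape, n_steps=100, device="cpu"):
    """Draw pairs (z_0, z_1) from the deterministic coupling induced by a trained flow."""
    z1 = torch.randn(n, *data_shape, device=device)    # noise end of each pair
    z0, dt = z1.clone(), -1.0 / n_steps
    for i in range(n_steps):
        t = torch.full((n,), 1.0 + i * dt, device=device)
        z0 = z0 + velocity_model(z0, t) * dt            # simulate down to t = 0
    return z0, z1

def reflow_loss(new_model, z0, z1):
    """Reflow: the same straight-line objective, but on the (z_0, z_1) coupling."""
    t = torch.rand(z0.shape[0], device=z0.device)
    tt = t.view((-1,) + (1,) * (z0.dim() - 1))
    zt = (1 - tt) * z0 + tt * z1
    return F.mse_loss(new_model(zt, t), z1 - z0)        # target = -z_0 + epsilon
```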

In this article, we talked about Flow Matching, Rectification and Reflow, some of the emerging ideas in the Diffusion Model literature. Specifically, we looked into the theoretical definitions and justifications behind these ideas. Despite its appealing outlook, some researchers remain skeptical, seeing it as merely a special case of good old Diffusion Models. Whatever the case may be, it did deliver one of the best text-to-image models so far (Esser et al. 2024), perhaps with a bit of clever engineering, which is a topic for another day.

References

Chen, Tian Qi, Yulia Rubanova, Jesse Bettencourt, and David Duvenaud. 2018. “Neural Ordinary Differential Equations.” In NeurIPS.
Esser, Patrick, Sumith Kulal, Andreas Blattmann, Rahim Entezari, Jonas Müller, Harry Saini, Yam Levi, et al. 2024. “Scaling Rectified Flow Transformers for High-Resolution Image Synthesis.” arXiv Preprint arXiv:2403.03206.
Ho, Jonathan, Ajay Jain, and Pieter Abbeel. 2020. “Denoising Diffusion Probabilistic Models.” In NeurIPS.
Lipman, Yaron, Ricky TQ Chen, Heli Ben-Hamu, Maximilian Nickel, and Matthew Le. 2022. “Flow Matching for Generative Modeling.” In The Eleventh International Conference on Learning Representations.
Liu, Xingchao, Chengyue Gong, and Qiang Liu. 2022. “Flow Straight and Fast: Learning to Generate and Transfer Data with Rectified Flow.” arXiv Preprint arXiv:2209.03003.
Podell, Dustin, Zion English, Kyle Lacey, Andreas Blattmann, Tim Dockhorn, Jonas Müller, Joe Penna, and Robin Rombach. 2024. “SDXL: Improving Latent Diffusion Models for High-Resolution Image Synthesis.” In The Twelfth International Conference on Learning Representations.
Ramesh, Aditya, Prafulla Dhariwal, Alex Nichol, Casey Chu, and Mark Chen. 2022. “Hierarchical Text-Conditional Image Generation with Clip Latents.” arXiv Preprint arXiv:2204.06125.
Rombach, Robin, Andreas Blattmann, Dominik Lorenz, Patrick Esser, and Björn Ommer. 2022. “High-Resolution Image Synthesis with Latent Diffusion Models.” In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 10684–95.
Song, Yang, Diederik P. Kingma, Abhishek Kumar, Stefano Ermon, and Ben Poole. 2021. “Score-Based Generative Modeling Through Stochastic Differential Equations.” In ICLR.
Vincent, Pascal. 2011. “A Connection Between Score Matching and Denoising Autoencoders.” Neural Computation.

Citation

BibTeX citation:
@online{das2024,
  author = {Das, Ayan},
  title = {Match Flows, Not Scores},
  date = {2024-04-26},
  url = {https://ayandas.me/blogs/2024-04-26-flow-matching-strightning-sd3.html},
  langid = {en}
}
For attribution, please cite this work as:
Das, Ayan. 2024. “Match Flows, Not Scores.” April 26, 2024. https://ayandas.me/blogs/2024-04-26-flow-matching-strightning-sd3.html.