A couple of weeks back, a post on the PyTorch blog described the Stochastic Weight Averaging (SWA) algorithm and its implementation in pytorch/contrib. The algorithm itself seemed embarrassingly straightforward: average snapshots of the model taken across a certain learning rate schedule. The authors argued that “SGD tends to converge to the boundary of the low-loss region, making it susceptible to the shift between train and test error surfaces”. The hope is that by averaging multiple solutions we’ll end up in the center of a flat, wide region of the loss (which should hopefully lead to better generalization). I tried this on some semantic parsing tasks I was working on, and after a painful couple of hours of manual hyper-parameter selection I ended up with a model giving around 5-10% relative improvement on a hidden test set.
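The averaging idea can be sketched in a few lines of numpy. This is a toy illustration, not the torchcontrib implementation; `swa_average` and the fake snapshots are hypothetical stand-ins for real model checkpoints:

```python
import numpy as np

# Illustrative sketch of the averaging at the heart of SWA (not the
# torchcontrib implementation): collect weight snapshots along the
# learning-rate schedule, then average them for the test-time model.

def swa_average(snapshots):
    """Elementwise average of a list of weight vectors."""
    return np.mean(np.stack(snapshots), axis=0)

# Toy stand-in for training: snapshots scattered around an optimum at (0, 0).
rng = np.random.default_rng(0)
snapshots = [rng.normal(scale=0.5, size=2) for _ in range(50)]

w_swa = swa_average(snapshots)
# The averaged weights sit much closer to the optimum than a typical snapshot.
```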

I was surprised that such a simple modification to the learning algorithm could give such non-trivial gains, which led me to dig deeper into the why.

Personally, the most interesting part of the SWA paper was the following excerpt

SGD generally converges to a point near the boundary of the wide flat region of optimal points.

Izmailov, P., Podoprikhin, D., Garipov, T., Vetrov, D., & Wilson, A. G. (2018). Averaging weights leads to wider optima and better generalization. arXiv:1803.05407.

The logic behind the SWA algorithm relies on this statement being true, so it seemed like a good starting point for deeper exploration.

## Ornstein-Uhlenbeck Process

The next couple of sections will be slightly out of order. First we’ll introduce the Ornstein-Uhlenbeck (OU) process and describe some of the properties it exhibits. Next we’ll show that under certain assumptions SGD can be represented as such a process, and finally we’ll connect it all back to SWA (hopefully).

So what is the OU Process? The clearest intuitive explanation I’ve heard is that it’s a continuous random walk with a tendency to walk toward some centralized point.

Let’s define an n-dimensional real vector $x_t \in \mathbb{R}^n$ and two square real matrices of the same size, $A, B \in \mathbb{R}^{n \times n}$.

The OU process is then a stochastic differential equation of the following form

$$dx_t = -A x_t \, dt + B \, dW_t \tag{1}$$

where $W_t$ is the n-dimensional Wiener process.

Recall that the one-dimensional Wiener process is a continuous process starting from $W_0 = 0$ satisfying two additional constraints:

Gaussian Increments: $W_{t+u} - W_t \sim \mathcal{N}(0, u)$

Independence: the increment $W_{t+u} - W_t$ is independent of all past values $W_s$, $s \le t$

Then an n-dimensional Wiener process is a vector $W_t = (W_t^{(1)}, \ldots, W_t^{(n)})$ of $n$ independent one-dimensional Wiener processes.

Throughout the rest of the diagrams, the blue point represents the start of a process while the red represents the end of the process.

The left picture represents the drift portion of equation 1 (the first term after the equals sign): essentially what our “random” walk would look like if there were no noise or randomness. The right side is the exact opposite: a random walk with no pull toward a central point.

The OU process puts these two random walks together to produce a stochastic random walk with a tendency toward a central point (in this case (0, 0)).
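As a sketch, we can simulate this process with a simple Euler-Maruyama discretization; the choices of `A`, `B`, `dt`, the start point, and the step count below are all illustrative:

```python
import numpy as np

# Euler-Maruyama simulation of the 2-D OU process dx_t = -A x_t dt + B dW_t.
# A, B, dt, the start point, and the step count are illustrative choices.
rng = np.random.default_rng(1)
A = np.eye(2)                      # pull toward the origin
B = np.eye(2)                      # isotropic noise
dt, n_steps = 0.01, 20000

x = np.array([2.0, 2.0])           # "blue point": start of the process
path = np.empty((n_steps, 2))
for t in range(n_steps):
    drift = -A @ x * dt                            # deterministic pull
    noise = B @ rng.normal(size=2) * np.sqrt(dt)   # Wiener increment
    x = x + drift + noise
    path[t] = x

# With A = B = I the stationary covariance is I/2, so each coordinate's
# long-run sample variance should hover near 0.5.
sample_var = path[5000:].var(axis=0)
```

Plotting `path` reproduces the qualitative picture above: an initial march toward the origin followed by a noisy orbit around it.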

So what is the Gaussian-looking density that we’re seeing here? The cool thing about the OU process is that it has a tractable stationary distribution. In other words, we can assign a probability density to any point in our space. Skipping the derivation, the probability density function is

$$p(x) \propto \exp\left(-\frac{1}{2} x^\top \Sigma^{-1} x\right)$$

where $\Sigma$ is the stationary covariance, the solution to $A\Sigma + \Sigma A^\top = B B^\top$.

The probability density implies that an OU path will tend to concentrate around the center. Interestingly enough though, looking at the figure directly above, the random walk seems to orbit the center of the Gaussian, which (if we assume SGD and OU are coupled) is what the authors of SWA noted. But why is this happening?

Well, intuitively we have two vector forces acting on our point at any point in time: one pulling toward our center, and a second from the random Brownian motion. Let’s visualize the two vector fields to see what is happening near the orbit of our random walk.

So what’s happening here? As we get closer to the origin, the drift term $-A x_t$ tends to 0, leaving the Wiener process as the major force behind the movement of our particle. Due to its independence property, the Wiener increment has the same expected magnitude regardless of position on the field. In other words, the magnitude of our noise vector stays the same while the informative vector vanishes as the particle approaches its central point.
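A few lines of arithmetic make the imbalance concrete (the constants here are illustrative):

```python
import numpy as np

# Tiny numeric illustration of the force balance: per step of size dt, the
# drift -A x dt shrinks linearly as we approach the center, while the Wiener
# increment keeps a fixed O(sqrt(dt)) magnitude. Constants are illustrative.
A, dt = 1.0, 0.01
noise_scale = np.sqrt(dt)              # expected size of B dW per step
radii = np.array([1.0, 0.1, 0.01])     # distances from the center
drift_mags = A * radii * dt            # |A x| dt at each distance
snr = drift_mags / noise_scale         # per-step signal-to-noise: vanishes
```

At distance 1.0 the drift is a tenth of the noise; at 0.01 it is a thousandth, so near the center the motion is essentially pure noise.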

So as we get closer to the dense section of our density, the random noise kicks us out; then we drift back toward the center, and this repeats throughout the lifetime of the random walk.

This is very interesting behavior. You might be wondering why we’re spending time exploring this process so let’s connect SGD with the OU process before moving on with our analysis.

## Representing SGD as an Ornstein-Uhlenbeck Process

The first paper (that I’m aware of) that represented SGD as an OU process was *Stochastic Gradient Descent as Approximate Bayesian Inference* by Mandt et al. We’ll follow the assumptions and derivations of the paper, hopefully with more commentary.

Let’s say we have a loss function $L(\theta)$ that depends on the parameters $\theta$ of some function, averaged over our complete training data of size $N$:

$$L(\theta) = \frac{1}{N} \sum_{n=1}^{N} \ell_n(\theta)$$

We’re leaving out the actual function used because that’s not really of interest to us; we could definitely rewrite the per-example loss in terms of the underlying prediction function, but it’s just more verbose, and we’ll be working with the loss landscape exclusively. Now, generally we don’t take gradient steps using a gradient over the whole training data. Instead we form a minibatch that’s uniformly sampled from our dataset.

Let $\mathcal{S}$ be a set of $S$ random indices drawn uniformly from $1$ to $N$. We can then get an unbiased estimate of our gradient via

$$\hat{g}_S(\theta) = \frac{1}{S} \sum_{n \in \mathcal{S}} \nabla \ell_n(\theta)$$

Notice that because $\hat{g}_S(\theta)$ is an unbiased estimator of the full gradient $g(\theta) = \nabla L(\theta)$, we have in expectation

$$\mathbb{E}\left[\hat{g}_S(\theta)\right] = g(\theta)$$

We can then use this estimated gradient in our SGD update step with learning rate $\epsilon$:

$$\theta_{t+1} = \theta_t - \epsilon \, \hat{g}_S(\theta_t)$$

Let’s first try to figure out how we can derive the random-noise portion of the OU process. Notice that increments of the Wiener process are Gaussian in nature. Furthermore, the stochastic gradient is a sum over independent, uniformly sampled examples. We can therefore use the central limit theorem to approximate the gradient noise with a Gaussian:

$$\hat{g}_S(\theta) \approx g(\theta) + \frac{1}{\sqrt{S}} \Delta g(\theta), \qquad \Delta g(\theta) \sim \mathcal{N}(0, C(\theta))$$

where $C(\theta)$ is a function that provides us with the gradient covariance at $\theta$.
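We can check this picture numerically; the toy 1-D quadratic loss and made-up data below are purely illustrative:

```python
import numpy as np

# Empirical check, on made-up 1-D data, that the minibatch gradient is
# unbiased and that its noise variance shrinks like 1/S (the CLT picture).
rng = np.random.default_rng(2)
N = 10000
data = rng.normal(loc=3.0, scale=1.0, size=N)

def full_grad(theta):
    # Gradient of the full loss (1/N) sum_n (theta - x_n)^2 / 2
    return np.mean(theta - data)

def minibatch_grad(theta, S):
    idx = rng.integers(0, N, size=S)   # uniformly sampled minibatch
    return np.mean(theta - data[idx])

theta = 0.0
g = full_grad(theta)
est_s10 = [minibatch_grad(theta, 10) for _ in range(2000)]
est_s100 = [minibatch_grad(theta, 100) for _ in range(2000)]

bias = np.mean(est_s10) - g                      # ~0: unbiased
var_ratio = np.var(est_s10) / np.var(est_s100)   # ~10: variance scales as 1/S
```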

Now let’s make another assumption: the neighborhood we are considering is small enough that we can approximate the covariance by a single positive definite matrix $C$, which we factorize as $C = B B^\top$.

Well, if this is true, our gradient approximation becomes the following

$$\hat{g}_S(\theta) \approx g(\theta) + \frac{1}{\sqrt{S}} B \, \Delta W, \qquad \Delta W \sim \mathcal{N}(0, I)$$

And by rearranging the SGD update step and plugging in this constant-noise gradient approximation, we get

$$\Delta \theta_t = \theta_{t+1} - \theta_t = -\epsilon \, g(\theta_t) + \frac{\epsilon}{\sqrt{S}} B \, \Delta W$$

Now, if we can make the assumption that our finite-difference equation is well approximated by a continuous stochastic differential equation, we can rewrite it as

$$d\theta_t = -\epsilon \, g(\theta_t) \, dt + \frac{\epsilon}{\sqrt{S}} B \, dW_t$$

Okay, we’re almost there. The only thing left to do is to rewrite $g(\theta)$ as a linear transformation of $\theta$. Without loss of generality let’s say the optimum sits at $\theta^* = 0$. Let’s also assume that we can approximate the loss around the optimum with a quadratic. We then get

$$g(\theta) = H \theta$$

where $H$ is the Hessian at the optimum.

Our final equation for SGD,

$$d\theta_t = -\epsilon H \theta_t \, dt + \frac{\epsilon}{\sqrt{S}} B \, dW_t$$

is now exactly the OU equation we described in the previous section.
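As a sanity check under these assumptions, a noisy 1-D quadratic reproduces the OU stationary variance one would predict (for scalar curvature $h$, roughly $\epsilon \sigma^2 / 2 h S$); all constants below are illustrative:

```python
import numpy as np

# 1-D sanity check that SGD on a quadratic loss with Gaussian gradient noise
# behaves like an OU process: its stationary variance should be close to
# eps * sigma^2 / (2 h S). All constants here are illustrative.
rng = np.random.default_rng(3)
h, sigma, eps, S = 1.0, 1.0, 0.1, 1   # curvature, noise scale, lr, batch size

theta, thetas = 2.0, []
for _ in range(50000):
    grad = h * theta + sigma / np.sqrt(S) * rng.normal()  # noisy gradient
    theta -= eps * grad
    thetas.append(theta)

sample_var = np.var(thetas[5000:])            # discard the transient
predicted_var = eps * sigma**2 / (2 * h * S)  # OU stationary variance
```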

## What can we say about SGD?

It’s important that any intuitions we get by analysing the OU process stay within our assumptions, the biggest being the constant-covariance and quadratic-loss approximations. These approximations only make sense when we are within a small area of the loss surface, which tends to be the case only toward the end of training.

The peculiarities we saw with the OU process, specifically the instability at its center, also apply to SGD. SGD seems unable to enter wide, flat minima because the noise, which derives from the stochasticity of the gradient, overpowers any information carried by the gradient. The balance is only restored around the boundary of the flat minimum, which is exactly what the SWA paper empirically showed.

So how does SWA solve this issue? By averaging points around the boundary of the minimum we end up in the center of the flat, wide minimum (denoted with a black point on the OU process figure above). The averaging of parameters is done once training is completed, and no extra gradient updates are applied after the averaging. This makes perfect sense now, since a gradient update might carry enough noise to push the point back out of the flat, wide minimum.
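The claim is easy to illustrate on a toy noisy quadratic (everything below is an illustrative sketch, not the actual SWA schedule):

```python
import numpy as np

# Toy illustration of the SWA claim: averaging iterates that orbit a minimum
# gives a point far closer to the minimum than any single iterate tends to be.
# The quadratic setup and all constants are illustrative.
rng = np.random.default_rng(4)
h, eps, sigma = 1.0, 0.1, 1.0

theta, tail = 2.0, []
for t in range(20000):
    theta -= eps * (h * theta + sigma * rng.normal())
    if t >= 10000:                 # snapshots from the late "orbit" phase
        tail.append(theta)

theta_swa = np.mean(tail)                          # averaged solution
typical_dist = np.sqrt(np.mean(np.square(tail)))   # typical iterate distance
```

The averaged point sits an order of magnitude closer to the optimum at 0 than the typical orbiting iterate.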

Well, what else can we do? One trivial thing is to reduce the amount of noise. Looking at the source of the noise, the only variables we have direct control over are the learning rate $\epsilon$ and the batch size $S$. Decreasing $\epsilon$ won’t improve our signal-to-noise ratio because it also scales the non-noise portion of our SDE. That leaves $S$, which we can easily control by increasing the batch size.

It turns out there’s already a paper that does exactly that: toward the end of training, increase the batch size without decaying the learning rate: Don’t Decay the Learning Rate, Increase the Batch Size
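A quick sketch of that knob on the same kind of toy noisy quadratic (constants assumed throughout):

```python
import numpy as np

# Sketch of the batch-size knob: rerun noisy-quadratic SGD with a larger
# batch and watch the stationary variance drop roughly in proportion to S.
# All constants are illustrative.
def stationary_var(S, eps=0.1, h=1.0, sigma=1.0, n=50000, seed=5):
    rng = np.random.default_rng(seed)
    theta, samples = 1.0, []
    for t in range(n):
        theta -= eps * (h * theta + sigma / np.sqrt(S) * rng.normal())
        if t >= n // 5:            # keep only the late, mixed portion
            samples.append(theta)
    return np.var(samples)

var_s1 = stationary_var(S=1)
var_s16 = stationary_var(S=16)     # 16x the batch => ~16x less variance
```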

Overall, viewing SGD as an SDE is a useful tool for understanding its macro behavior. I’d love to see more work on formalizing the orbital behavior of SGD, and perhaps new tools for dealing with the noisy conditions of flat minima.

“by averaging points around the boundary of minima we’ll end up in the center of the flat and wide minima (we denote this with a black point on the OU process figure above). The averaging of parameters is done when training is completed and no extra gradient updates are applied after the averaging. This makes perfect sense now since a gradient update might have enough noise to push the point out of the flat and wide minima.”

This reminds me of a similar technique in GANs: the exponential moving average (EMA). Same general idea: each iteration, average the current model into a running snapshot with some weight, and use the averaged model at testing/sampling time. It yields *much* better results, at least in BigGAN. The intuition is similar: you have G/D orbiting around each other in cycles, etc. Oddly, EMA apparently is not useful at training time: if you replace the current model with the EMA, despite the EMA being so much better, I’m told it doesn’t help matters at all and may make things worse. BigGAN has already shown benefits to GANs from running with a constant very large minibatch size, so I wonder if the solution for EMA is the same as SWA, and one merely needs to increase the minibatch size every time the EMA is swapped in?

I also wonder what the maximum minibatch size for GANs would turn out to be following the logic of https://openai.com/blog/science-of-ai/ https://arxiv.org/pdf/1812.06162.pdf – could one just calculate the gradient noise scale over every say 10 updates, and increase minibatch size until the gradient noise scale ~ 10^0? Then EMA might not need to interact with minibatch scaling at all, as the gradient noise scale for better-trained models will be changing on its own.