To accelerate convergence of iterative methods, we often change variables.
The model-styling regression
\begin{equation}
\mathbf{0} \ \approx\ \epsilon\, \mathbf{A} \mathbf{m}
\end{equation}
is changed to
\begin{equation}
\mathbf{0} \ \approx\ \epsilon\, \mathbf{p}
\end{equation}
by the change of variables $\mathbf{m} = \mathbf{A}^{-1} \mathbf{p}$. Experience shows, however, that the preconditioning variable $\mathbf{p}$ is often more interesting to look at than the model $\mathbf{m}$.
Why should a new variable introduced for computational convenience
turn out to have more interpretive value?
There is a little theory explaining why. Begin from the preconditioned regression $\mathbf{0} \approx \epsilon\,\mathbf{p}$: the natural starting guess $\mathbf{p} = \mathbf{0}$ satisfies it exactly, so the iteration moves $\mathbf{p}$ away from zero only where fitting the data demands it.
|The preconditioning variable $\mathbf{p}$ is not simply a computational convenience. This model-space image tells us where our data contradicts our prior model. Admire it! Make a movie of it evolving with iteration.|
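The change of variables is easy to check on a toy problem. The sketch below is my own illustration, not code from this book: the operators (point sampling standing in for a modeling operator $\mathbf{L}$, a first difference standing in for $\mathbf{A}$), the sizes, and the value of $\epsilon$ are all invented for the demonstration. It solves the model-styling regression directly for $\mathbf{m}$, then again for $\mathbf{p}$ after the change of variables, and confirms that both routes give the same model while the preconditioned route also hands us the image $\mathbf{p}$.
\begin{verbatim}
import numpy as np

# Toy stand-ins (my assumptions, not operators from this book):
#   L : modeling operator, here sampling the model at a few points
#   A : model-styling (roughening) operator, here an invertible first difference
#   S : the preconditioner A^{-1}, formed explicitly here for clarity;
#       in practice it would be applied as an operator, never as a matrix
n = 50
rng = np.random.default_rng(0)
idx = np.sort(rng.choice(n, size=8, replace=False))
L = np.zeros((len(idx), n))
L[np.arange(len(idx)), idx] = 1.0
A = np.eye(n) - np.eye(n, k=-1)      # first difference (invertible roughener)
S = np.linalg.inv(A)                 # its inverse is causal integration
d = rng.normal(size=len(idx))        # toy data
eps = 0.1

# Original model-styling regression:   0 ~ L m - d,    0 ~ eps A m
m_direct, *_ = np.linalg.lstsq(np.vstack([L, eps * A]),
                               np.concatenate([d, np.zeros(n)]),
                               rcond=None)

# Preconditioned regression:           0 ~ L S p - d,  0 ~ eps p,   with m = S p
p, *_ = np.linalg.lstsq(np.vstack([L @ S, eps * np.eye(n)]),
                        np.concatenate([d, np.zeros(n)]),
                        rcond=None)
m_precon = S @ p

print(np.allclose(m_direct, m_precon))   # True: same model m either way,
                                         # but p is a new image worth displaying
\end{verbatim}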
If I were young and energetic like you, I would write a new basic tool for optimization. Instead of scanning only the space of the gradient and the previous step, it would scan also over the ``smart'' direction. Using both kinds of direction should offer the benefit of preconditioning the regularization at early iterations while offering more assured data fitting at late iterations. The improved cgstep module would need to solve a small matrix (three unknowns per step rather than the usual two). I would also be looking for ways to assure that all directions were scaled to have the prior model spectrum and the prior energy as a function of space.
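As one concrete reading of that proposal, here is a sketch of what the step itself might look like. Everything in it is my assumption rather than an existing module: the name tristep, taking the ``smart'' direction to be the gradient passed through the preconditioner, and the plain least-squares solve for the mixing coefficients. The point it illustrates is the one above: scanning the gradient, the previous step, and the smart direction means solving for three coefficients per iteration where cgstep solves for two.
\begin{verbatim}
import numpy as np

def tristep(x, r, dirs, dir_imgs):
    """One step of a hypothetical three-direction solver (a sketch of the
    idea in the text, not an existing cgstep module).

    x        : current model estimate
    r        : current residual  d - F x
    dirs     : up to three model-space search directions, e.g. the gradient,
               the previous step, and the "smart" (preconditioned) direction
    dir_imgs : the same directions pushed through the modeling operator F
    """
    G = np.column_stack(dir_imgs)          # data-space images of the directions
    # Coefficients that best shrink || r - G a ||^2; finding them amounts to
    # a 3x3 solve here, where the ordinary two-direction step solves a 2x2 one.
    a, *_ = np.linalg.lstsq(G, r, rcond=None)
    step = sum(ai * di for ai, di in zip(a, dirs))   # combined model update
    x = x + step
    r = r - G @ a                          # update the residual consistently
    return x, r, step                      # recycle step as the next "previous step"
\end{verbatim}
A fuller version would also rescale each direction before mixing them, so that all of them carry the prior model spectrum and the prior energy as a function of space.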