|IID random variables have uniform variance in both physical space and Fourier space.|
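This claim is easy to verify numerically. The following sketch (a toy demonstration, not from the text) averages the power spectrum of many IID realizations and shows it is flat — uniform variance in Fourier space, just as in physical space.

```python
import numpy as np

rng = np.random.default_rng(0)
n, ntrials = 256, 2000

# Average the power spectrum over many IID realizations.
power = np.zeros(n)
for _ in range(ntrials):
    x = rng.standard_normal(n)           # IID: variance 1 at every sample
    power += np.abs(np.fft.fft(x))**2
power /= ntrials                          # estimate of E|X(f)|^2

# For unit-variance IID input, E|X(f)|^2 = n at every frequency.
print(power.min() / n, power.max() / n)   # both near 1.0: a flat spectrum
```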
In a geophysical project it is important that the residual between observed data and theoretical data be not far from IID. We fit the data by minimizing the sum of squared residuals, so wherever a collection of residuals is small, their squares are smaller still, and those regression equations are effectively ignored. We would hardly ever want that. Consider reflection seismograms. They get weak at late time, so even with a bad fit the difference between real and theoretical seismograms is necessarily weak at late times. We do not want the data at late times to be ignored, so we boost the residual there: we choose $\mathbf{W}$ to be a diagonal matrix that boosts late times in the regression $\mathbf{0} \approx \mathbf{W}(\mathbf{F}\mathbf{m} - \mathbf{d})$, where $\mathbf{F}$ is the modeling operator, $\mathbf{m}$ the model, and $\mathbf{d}$ the data.
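A minimal sketch of the idea (the operator, data, and weights here are all made up for illustration): a diagonal weight that grows with time turns decaying late-time noise into roughly uniform noise, so an ordinary least-squares solver applied to the weighted rows no longer ignores late times.

```python
import numpy as np

rng = np.random.default_rng(1)
nt, nm = 200, 3
t = np.linspace(0.0, 1.0, nt)

F = np.vander(t, nm)                    # toy modeling operator (quadratic fit)
m_true = np.array([0.5, -1.0, 2.0])
# Noise decays with time, like a seismogram getting weak at late time.
d = F @ m_true + 0.1 * np.exp(-3 * t) * rng.standard_normal(nt)

w = np.exp(3 * t)                       # diagonal weight boosting late times
# Weighted regression 0 ~ W(Fm - d): scale rows of F and d, then solve.
m_hat, *_ = np.linalg.lstsq(w[:, None] * F, w * d, rcond=None)
print(m_hat)                            # close to m_true
```

With the weight chosen as the inverse of the noise decay, the weighted residual is roughly IID, which is exactly the condition the text asks for.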
An example with too much low (spatial) frequency in a residual might arise in a topographic study. It is not unusual for the topographic wavelength to exceed the survey size. Here we should choose $\mathbf{W}$ to be a filter that boosts the higher frequencies. Perhaps $\mathbf{W}$ should contain a derivative or a Laplacian. If you set up and solve a data-modeling problem and then find the residual is not IID, you should consider changing your $\mathbf{W}$.
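The derivative-as-weight idea can be sketched in a few lines (the residual below is synthetic): a first-difference filter suppresses a long-wavelength trend that the survey cannot constrain, leaving something much closer to IID.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 512
x = np.linspace(0.0, 1.0, n)

# Residual dominated by a wavelength longer than the survey, plus IID noise.
residual = 10.0 * np.sin(np.pi * x) + rng.standard_normal(n)

# W as a first-difference (derivative) filter applied to the residual.
filtered = np.diff(residual)

# The derivative kills the slow trend; what remains is nearly white noise.
print(residual.std(), filtered.std())
```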
Now let us include regularization and a preconditioning variable $\mathbf{p} = \mathbf{A}\mathbf{m}$. We have our data fitting goal and our model styling goal,
\begin{eqnarray}
\mathbf{0} &\approx& \mathbf{r}_d \;=\; \mathbf{W}(\mathbf{F}\mathbf{m} - \mathbf{d}), \\
\mathbf{0} &\approx& \mathbf{r}_m \;=\; \epsilon\,\mathbf{A}\,\mathbf{m} \;=\; \epsilon\,\mathbf{p},
\end{eqnarray}
the first with a residual $\mathbf{r}_d$ in data space, the second with a residual $\mathbf{r}_m$ in model space. We have had to choose a regularization operator $\mathbf{A}$ and a scaling factor $\epsilon$.
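The two goals named above — data fitting and model styling, linked by a preconditioning variable — can be stacked into one least-squares system and solved for the preconditioning variable. Everything in this sketch (the operators, the data, the choice of regularizer) is a toy stand-in, not the text's example:

```python
import numpy as np

rng = np.random.default_rng(3)
n_d, n_m = 60, 40

F = rng.standard_normal((n_d, n_m))     # toy modeling operator
d = rng.standard_normal(n_d)            # toy data
W = np.eye(n_d)                         # data weights (identity here)
# A as a causal first difference: a simple roughening regularizer.
A = np.eye(n_m) - np.diag(np.ones(n_m - 1), -1)
eps = 0.1

# Solve for p in the stacked system  [ W F A^-1 ] p ~ [ W d ]
#                                    [  eps I   ]     [  0  ]
Ainv = np.linalg.inv(A)                 # preconditioner A^{-1}
G = np.vstack([W @ F @ Ainv, eps * np.eye(n_m)])
rhs = np.concatenate([W @ d, np.zeros(n_m)])

p, *_ = np.linalg.lstsq(G, rhs, rcond=None)
m = Ainv @ p                            # recover the model from p
print(m.shape)
```

The design choice here is the usual one: solving for $\mathbf{p}$ instead of $\mathbf{m}$ leaves the styling term as a simple damping $\epsilon\mathbf{p}$, which is what makes preconditioning attractive for iterative solvers.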
|We should choose a weighting function (and/or operator) $\mathbf{W}$ so data residuals are IID. We should also choose our regularization operator $\mathbf{A}$ so the preconditioning variable comes out IID.|
Finally, the $\epsilon$. How should we choose this number? Let $E[\mathbf{x}]$ be read as the expected (average) value of $\mathbf{x}$. The concept that each component in the combined residual vector should have the same expected absolute value leads to the notion that the value of epsilon should be $\epsilon = E[\,|\mathbf{r}_d|\,]\,/\,E[\,|\mathbf{p}|\,]$. (I vaguely remember trying this once and discovering that epsilon could not be bootstrapped. It either diverged to infinity or converged to zero depending on its starting value. Perhaps the epsilon we should use is the starting value poised between divergence and collapse! I don't trust my memory on an important issue like this. Somebody else should try this again.)
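The bootstrap the author recalls is easy to try. This is a hedged toy experiment, not the original one: iterate $\epsilon \leftarrow E|\mathbf{r}_d|/E|\mathbf{p}|$ on a small random system (here with the regularization operator taken as the identity, so $\mathbf{p}=\mathbf{m}$) and watch whether the sequence settles, diverges, or collapses.

```python
import numpy as np

rng = np.random.default_rng(4)
n_d, n_m = 60, 40
F = rng.standard_normal((n_d, n_m))     # toy modeling operator
d = rng.standard_normal(n_d)            # toy data

eps = 1.0
history = []
for _ in range(6):
    # Damped least squares:  [ F ] p ~ [ d ]
    #                        [eI ]     [ 0 ]
    G = np.vstack([F, eps * np.eye(n_m)])
    rhs = np.concatenate([d, np.zeros(n_m)])
    p, *_ = np.linalg.lstsq(G, rhs, rcond=None)
    r_d = F @ p - d
    eps = np.mean(np.abs(r_d)) / np.mean(np.abs(p))   # the bootstrap update
    history.append(eps)

print(history)   # does it settle, blow up, or shrink to zero?
```

No particular outcome is asserted here; the text itself reports the iteration is unstable, and this sketch only provides a harness for repeating the experiment.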
There is another strange idea here, a consequence of the notion that the combined residual should be IID: the elements of $\mathbf{r}_d$ and $\mathbf{p}$ should not be correlated. ``But wait,'' you say, ``it makes no sense to correlate spaces of different dimension.'' That is where the formal statistical notion of ``ensemble'' arises. If there are many worlds, and if we may speak of an average over worlds, then we can have an array of averages, the array having dimension the number of components of $\mathbf{r}_d$ by the number of components of $\mathbf{p}$, each element an average over the many ``worlds.'' How can a practitioner absorb this notion? Perhaps in some cases the model and data spaces have a natural alignment such that a product of the two spaces can be locally averaged. Poststack migration suggests an example: a hyperbola in data space has its apex at a point in model space. This suggests we use a weight matrix