
## Backsolving, polynomial division and deconvolution

Ordinary differential equations often lead us to the backsolving operator. For example, the damped harmonic oscillator leads to a special case of equation (23). There is a huge literature on finite-difference solutions of ordinary differential equations that lead to equations of this type. Rather than derive such an equation from one of the many possible physical arrangements, we can begin from the filter transformation in (4), but put the matrix on the other side of the equation, so that our transformation can be called one of inversion, or backsubstitution. Let us also force the matrix to be square by truncating it. To link up with applications in later chapters, I specialize to 1's on the main diagonal and insert some bands of zeros.

$$
\left[\begin{array}{c} y_1 \\ y_2 \\ y_3 \\ y_4 \\ y_5 \\ y_6 \end{array}\right]
\;=\;
\left[\begin{array}{cccccc}
1   &     &     &     &     &   \\
a_1 & 1   &     &     &     &   \\
a_2 & a_1 & 1   &     &     &   \\
0   & a_2 & a_1 & 1   &     &   \\
0   & 0   & a_2 & a_1 & 1   &   \\
0   & 0   & 0   & a_2 & a_1 & 1
\end{array}\right]^{-1}
\left[\begin{array}{c} x_1 \\ x_2 \\ x_3 \\ x_4 \\ x_5 \\ x_6 \end{array}\right]
\tag{23}
$$

Algebraically, this operator goes under various names: "backsolving," "polynomial division," and "deconvolution." The leaky integration transformation (19) is a simple example of backsolving in which the filter reduces to a single coefficient, the leak parameter. To confirm this, you need to verify that the matrices in (23) and (19) are mutually inverse.

A typical row in equation (23) says

$$
x_t \;=\; y_t \;+\; \sum_{\tau > 0} a_\tau\, y_{t-\tau}
\tag{24}
$$

Change the signs of all terms in equation (24) and move some terms to the opposite side:

$$
y_t \;=\; x_t \;-\; \sum_{\tau > 0} a_\tau\, y_{t-\tau}
\tag{25}
$$

Equation (25) is a recursion to find $y_t$ from $x_t$ and the values of $y$ at earlier times.

In the same way that equation (4) can be interpreted as $Y(Z) = A(Z)\,X(Z)$, equation (23) can be interpreted as $A(Z)\,Y(Z) = X(Z)$, which amounts to $Y(Z) = X(Z)/A(Z)$. Thus, convolution amounts to polynomial multiplication, while the backsubstitution we are doing here, called deconvolution, amounts to polynomial division.

A causal operator is one that uses its present and past inputs to make its current output. Anticausal operators use the future but not the past. Causal operators are generally associated with lower triangular matrices and positive powers of $Z$, whereas anticausal operators are associated with upper triangular matrices and negative powers of $Z$. A transformation like equation (23), but with the transposed matrix, would require us to run the recursive solution in the opposite direction in time, as we did with leaky integration.

A module to backsolve equation (23) is recfilt.

api/c/recfilt.c
```c
    /* zero the temporary workspace */
    for (ix=0; ix < nx; ix++) { tt[ix] = 0.; }

    if (adj) {
	/* adjoint: run the recursion backward from the final time */
	for (ix = nx-1; ix >= 0; ix--) {
	    tt[ix] = yy[ix];
	    for (ia = 0; ia < SF_MIN(na,ny-ix-1); ia++) {
		iy = ix + ia + 1;
		tt[ix] -= aa[ia] * tt[iy];
	    }
	}
	for (ix=0; ix < nx; ix++) { xx[ix] += tt[ix]; }
    } else {
	/* forward: causal recursion (25), using earlier outputs */
	for (iy = 0; iy < ny; iy++) {
	    tt[iy] = xx[iy];
	    for (ia = 0; ia < SF_MIN(na,iy); ia++) {
		ix = iy - ia - 1;
		tt[iy] -= aa[ia] * tt[ix];
	    }
	}
	for (iy=0; iy < ny; iy++) { yy[iy] += tt[iy]; }
    }
```

The more complicated an operator, the more complicated is its adjoint. Given a transformation from $\mathbf{x}$ to $\mathbf{y}$ that is $\mathbf{y} = \mathbf{A}^{-1}\mathbf{x}$, we may wonder if the adjoint transform really is $\tilde{\mathbf{x}} = (\mathbf{A}^{-1})^{\mathrm T}\,\mathbf{y}$. It amounts to asking if the adjoint of $\mathbf{A}^{-1}$ is $(\mathbf{A}^{\mathrm T})^{-1}$. Mathematically, we are asking if the inverse of a transpose is the transpose of the inverse. This is so because transposing $\mathbf{A}\mathbf{A}^{-1} = \mathbf{I}$ gives $(\mathbf{A}^{-1})^{\mathrm T}\mathbf{A}^{\mathrm T} = \mathbf{I}$, in which the parenthesized object must be the inverse of its neighbor $\mathbf{A}^{\mathrm T}$.

The adjoint has a meaning that is nonphysical. It is like the forward operator, except that we must begin at the final time and work backward toward the first. The adjoint pendulum damps as we compute it backward in time, but that, of course, means that the adjoint pendulum diverges when viewed moving forward in time.