
Complex Resonator

How can numbers, put in a feedback loop, get caught in sinusoidal vibration, as is the case in resonating filters? Complex integration can do that. In its proper form, complex integration is rarely used for filters, because most input signals are not complex. Yet it shines the clearest light on the mechanism of resonance, and it was only after studying complex integration that I started to understand how resonating filters can work at all.

On this page, I will illustrate discrete complex integration with signal flow graphs, plots on the complex plane, and translations to signal output on the real plane. To demonstrate the action of the integrator, I will feed a single-sample pulse into it each time. The output is what is called the 'filter impulse response', and for a resonator the impulse response could even be employed in a musical sense.

The scheme of complex integration is somewhat complicated, therefore I will start with an introduction to real-number integration. An integrator simply sums the previous output value with the current input value. If x[n] is the input and y[n] the output of a discrete integrator, then integration is written:

y[n] = x[n] + y[n-1]

The previous output value holds the history of the output, and the weight of that history damps fast fluctuations in the output. The effect of real-number integration is therefore: low-pass filtering.


The integrator keeps a running sum of its input. Imagine you feed the number 1 in, followed by an infinite series of zeros. That single number 1 will keep circulating through the delay and summation forever, and a constant series of ones will appear at the output. A pulse is transformed into a DC level, which can only be stopped by sending in a negative number of the same absolute value.
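The running sum is easily verified in code. Here is a minimal Python sketch (the function name is mine, just for illustration) that feeds a single pulse into a perfect integrator:

```python
def integrate(x):
    """Perfect discrete integrator: y[n] = x[n] + y[n-1]."""
    y, out = 0.0, []
    for sample in x:
        y = sample + y          # sum current input with previous output
        out.append(y)
    return out

# A single pulse followed by zeros becomes a constant DC level.
print(integrate([1, 0, 0, 0, 0]))   # -> [1.0, 1.0, 1.0, 1.0, 1.0]
```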

The scheme sketched above is a perfect integrator. For financial applications it should be done like this, or else we have a case of fraud. But in audio processing the output must be kept within well-defined limits. Therefore the integration should be non-perfect. Including a multiplication factor (slightly) below 1 in the feedback loop will make the integrator 'leaky'. That factor is the filter coefficient:


What happens if you feed a single number 1 into a leaky integrator, followed by a series of zeros? Below is an example with filter coefficient 0.9. If the pulse was sent at time [0], the first output value y[0] is just the input value of 1. For output y[1], y[0] is multiplied by the filter coefficient and summed with the input, which is by then zero. Thus the output value y[1] is 0.9. When the sample circulates the second time it is multiplied by 0.9 again, so you get 0.9 * 0.9, the square of 0.9. The third time it is 0.9 * 0.9 * 0.9, the cube of 0.9. And so on.


The whole series is defined by the function y = 0.9^x for non-negative x. It is an exponential decay, theoretically of infinite length, and in practice limited by the range of the datatype used for the computation.
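The leaky variant only adds the coefficient in the feedback path. Again a small illustrative Python sketch:

```python
def leaky_integrate(x, coeff=0.9):
    """Leaky integrator: y[n] = x[n] + coeff * y[n-1]."""
    y, out = 0.0, []
    for sample in x:
        y = sample + coeff * y      # feedback scaled by the filter coefficient
        out.append(y)
    return out

# A pulse decays exponentially: the n-th output value is 0.9**n.
pulse_response = leaky_integrate([1] + [0] * 9)
```

With coeff = 1 this reduces to the perfect integrator.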


Although the signal duration is extended by the integrator, this is not yet a resonator. Doing the same process with complex numbers will introduce resonance.


Although the scheme itself does not change, it is not at all easy to see how this should be implemented in practice.

Let me start with the simplest case here: a perfect complex integrator that feeds the output y[n] back into the summation with amplitude factor 1. All complex numbers on the unit circle have amplitude 1, and they can be expressed in the form:

e^(i*omega) = cos(omega) + i*sin(omega)

where omega is an angle. Complex numbers with amplitude or radius 1 are called complex unit vectors. The complex number [1+i0] is a unit vector, but filter coefficients with non-zero imaginary parts must be explicitly supported by a complex integrator scheme. A perfect complex integrator with non-decaying impulse response looks like this:


Recall the general form of complex multiplication: (a+ib)*(c+id). If we call the one-sample-delayed complex output (a+ib), that delayed output multiplied by the complex unit vector is expressed as:

(a + ib) * (cos(omega) + i*sin(omega))

This is a regular complex multiplication and it produces the real and imaginary products:

real: a*cos(omega) - b*sin(omega)
imaginary: a*sin(omega) + b*cos(omega)

These products must be summed with the complex input to make the complex integrator. Let me sketch the whole thing as a matrix operation for convenience:

|y_re|   |cos(omega)  -sin(omega)|   |a|   |x_re|
|y_im| = |sin(omega)   cos(omega)| * |b| + |x_im|

The real sum, sent to the real output, is:

y_re = x_re + a*cos(omega) - b*sin(omega)

The imaginary sum, sent to the imaginary output, is:

y_im = x_im + a*sin(omega) + b*cos(omega)

There are dozens of ways to illustrate this in a flow graph. I have chosen a form where the imaginary phase is shown as a duplicate of the real phase. Notice the exchange between real phase and imaginary phase: a positive contribution goes from real to imaginary, and a negative contribution goes the other way round. This is typical for complex multiplication, and plays the essential role in the generation of sinusoidal motion.


The complex integrator has so many feedback routes that it is hard to imagine what will happen to even the simplest of input signals, a single pulse at time [0]. Fortunately, this can be illustrated as a repetition of operations on the complex plane. Let the pulse be a vector [1+i0]. The filter coefficient is somewhere on the unit circle since I restricted it to amplitude 1 for the moment. Here are the pulse and an arbitrary filter vector drawn on the complex plane:


The filter vector I picked for this example is cos(pi/8) + i*sin(pi/8). Written as a complex exponential, this is e^(i*pi/8). The first output value is the pulse itself. Since there is no non-zero input after x[0], the second output value is [1+i0] multiplied by the filter vector. The whole process is analogous to the real-number integrator. The third output value is the filter vector squared, the next one is the filter vector cubed, and so on. The function describing this series is:

y[n] = e^(i*n*pi/8) for all non-negative n

These numbers all lie on the unit circle at angular intervals of pi/8 radians, and the output will keep rotating anticlockwise till the end of time. It is an oscillator. Here are some points on the complex plane:


If I plot the real parts of the output series on the real number plane, it is a cosine wave. Here are some points:


So far, I have deliberately chosen a unit vector as the example filter coefficient, so as not to over-complicate things immediately. Unit vector coefficients make an oscillator with user-definable frequency. Now I want to make a damped oscillator, a resonator. The system must be leaky, here again. The filter vector's amplitude should be (slightly) less than 1. If we use the letter r (for radius or ratio) to represent the vector's amplitude, the complex filter coefficient can simply be expressed:

r * (cos(omega) + i*sin(omega)) = r * e^(i*omega)

The complex integrator flow graph need not be redrawn for this adaptation. Only the radius factor is prepended to the unit vector terms:


At last, here is the resonator scheme. Let me now fill in 0.9 for the radius and plot some output points from pulse excitation on the complex plane:


There is a decay in the radius or amplitude. Here are some points of the real phase on the real number plane:


Extending the plot to more points will show the decaying cosine better:


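With the radius included, the same sketch turns into a resonator. An illustrative Python version with r = 0.9, as in the plots:

```python
import math

def complex_resonator(n, omega=math.pi / 8, r=0.9):
    """Leaky complex integrator: coefficient r*(cos w + i sin w), r < 1.
    The pulse response spirals inward; output k is (r * e^(i*w))**k."""
    c, s = r * math.cos(omega), r * math.sin(omega)
    re, im, out = 0.0, 0.0, []
    for k in range(n):
        x_re = 1.0 if k == 0 else 0.0       # single-sample pulse at time 0
        re, im = x_re + re * c - im * s, im * c + re * s
        out.append(complex(re, im))
    return out

# The real parts form the decaying cosine 0.9**k * cos(k * pi / 8).
```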
I have built this scheme into a Max/MSP object named 'composc~', so I can test the complex resonator in real time.


Before discussing details of parameter settings, I want to show some scope traces of complex resonator output. On the left is a trace in the complex plane with an x-y scope set up, and on the right real phase and imaginary phase are seen on the time axis:

x-y scope | phases

A more detailed view of the phase attacks shows their cosine and sine character respectively. The imaginary output sounds much better because of its smooth start.


The real and imaginary phases are equivalent but not equal, nor symmetric. The system has a well-defined rotational direction, which is anti-clockwise. That is conventionally interpreted as 'positive frequencies'.

It is, by the way, also possible to feed the pulse into the imaginary input, as the complex vector [0+i]. The imaginary phase then has a cosine instead of a sine, meaning a phase shift of -pi/2 radians. The real phase has a negative sine, which is also a -pi/2 radian phase shift.



It is quite easy to set a precise frequency for a complex resonator. The angle or phase parameter, omega, is an angular increment in radians over a one-sample interval. The required angular increment can be computed from a user parameter 'frequency in Hz' and the sampling rate:

omega = frequency * 2 * pi / samplingrate


With the composc~ object I can check whether the above statement is correct, and to what degree of precision.

Because the output is complex, I can compute the phase increment over each one-sample interval and map this to cycles per second, that is, Hertz.

The errors I found were in the order of 0.0001 Hertz, for low and high resonator frequencies, and for short and long decay times. Not so bad.

The accuracy of the instantaneous frequencies indicates that the resonator output must be a pure sinusoid, with very low distortion. It also shows a near-perfect orthogonality of the real and imaginary phases. The slightest shift from orthogonality would result in serious fluctuations in the measured instantaneous frequencies.
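The measurement idea can be imitated offline. This Python sketch (parameter values are just an example, not the composc~ test itself) pulse-excites the complex resonator and converts per-sample phase increments to Hertz:

```python
import cmath
import math

sr = 44100.0
freq = 1000.0                            # user parameter, in Hz
omega = freq * 2.0 * math.pi / sr        # angular increment per sample
coeff = 0.9 * cmath.exp(1j * omega)      # complex coefficient, radius 0.9

# Pulse-excite the integrator and measure the phase step per sample.
y, prev, hz = 0j, None, []
for n in range(1000):
    y = (1.0 if n == 0 else 0.0) + coeff * y
    if prev is not None:
        hz.append(cmath.phase(y / prev) * sr / (2.0 * math.pi))
    prev = y

print(min(hz), max(hz))                  # both extremely close to 1000.0
```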

If the user enters a negative frequency, the resonator will indeed produce negative frequencies, rotating clockwise. The same thing happens for frequencies between Nyquist and the sampling rate. Such settings can do no harm.

decay time

For a resonator, it is more convenient to have a user parameter 'decaytime' instead of radius. But the concept of decay time is ambiguous. Any exponential decay theoretically has infinite length. How could you distinguish one decay from another by specifying their length, if they are all infinite? It is customary to specify a kind of half-life, as is done for radioactive material. Only, for signal decay a different base is chosen, the mathematical constant e, and the decay is roughly to one-third rather than one-half.

We want to set a time interval in which the amplitude will decay by a factor e^-1, around 0.367879. With radius = e^-1 this interval would be a single sample. To specify milliseconds instead of one-sample intervals, the factor 1000/samplingrate is introduced, making e^(-1000/sr). Inserting the user parameter decaytime, the following equation does the mapping from milliseconds to radius:

radius = exp((-1000 / samplingrate) / decaytime)

Here are some examples, showing how radius relates to decaytime at 44100 Hz sampling rate:

radius        decay time in milliseconds

0.9775794     1
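The mapping is easy to check in code. A Python sketch of the formula (the function name is mine):

```python
import math

samplingrate = 44100.0

def decaytime_to_radius(decaytime_ms):
    """Map a decay time in milliseconds to the feedback radius, so that
    amplitude falls to e**-1 (about 0.367879) over that interval."""
    return math.exp((-1000.0 / samplingrate) / decaytime_ms)

# One millisecond at 44100 Hz gives the radius from the table:
r = decaytime_to_radius(1.0)            # about 0.9775794
```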

If infinity could be passed as the decay time parameter, this would make an oscillator. It is also interesting to consider the effect of passing a negative decay time value:

radius = exp((-1000. / 44100.) / -1.) = 1.022935

This produces exponential growth instead of decay, and the resonator amplitude will rise to the stars in a fraction of a second. Exponential amplitude growth emulates the time reversal of a decay. You could musically express the origin of the universe...


With this Max/MSP patch I checked how the above mentioned decaytime-to-radius mapping holds in practice. The decay time was set at 10000 milliseconds, and the amplitude decay was measured over that interval. Theoretically it should be 0.367879. The measured result is 0.367109, a 0.2% error. Further testing showed errors of similar order.

The inaccuracy in the decay time is due to the datatype used, 32 bit floating point. This type has 23 bits in the mantissa, therefore the precision is 2^-23, about 0.0000001 in decimal representation. With radii approaching so near to 1, precision falls short quite soon. If you really need to set a 1000+ millisecond decay time with millisecond accuracy, double precision (64 bit) floating point processing is required. Meanwhile, very long decay times can still be set with the 32 bit datatype: not with millisecond precision, but the decay itself will still be perfectly smooth.
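The precision limit can be demonstrated without any audio processing. This Python sketch (illustrative; it uses struct to imitate 32 bit rounding) compares a double precision radius against its float32-rounded version over a 10 second decay:

```python
import math
import struct

def to_float32(x):
    """Round a double to the nearest 32 bit float."""
    return struct.unpack('f', struct.pack('f', x))[0]

sr = 44100.0
decaytime = 10000.0                              # milliseconds
radius = math.exp((-1000.0 / sr) / decaytime)    # very close to 1
r32 = to_float32(radius)

# Amplitude after one decay time, ideally e**-1 = 0.367879...
n = int(sr * decaytime / 1000.0)
print(radius ** n)    # double precision: right on target
print(r32 ** n)       # float32-rounded radius: visibly off target
```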

other resonator types

In practice, filters are not designed to operate on complex signals, therefore they do not perform complex integration as outlined above. However, to understand how they resonate, I found it useful to compare their operation with complex integration and see how their output moves on the complex plane. In the illustrations below you will find 'phase 1' and 'phase 2', replacing real and imaginary phase.

I was fascinated by the state variable filter, and sort of reverse-engineered it to find its mathematical principle. The oscillator at the heart of an SVF has this flow graph:


It has no cosine terms, and for a long time I wondered how this could work at all. The feedback loops of the second phase are somewhat confusing. It receives its own output via two routes:


This can be written as a factor on b:

(1 - sin^2(omega)) * b

The other feedback factors are straightforward, and the system can be described as a matrix operation, as was done for complex integration. Instead of multiplication with a complex unit vector (cos(omega) + i*sin(omega)), the SVF oscillator has this matrix:

|     1            -sin(omega)    |
| sin(omega)    1 - sin^2(omega)  |

The green parts of the matrix represent the so-called 'identity diagonal', and here something strange has happened. With complex integration, the cosine is on the identity diagonal, but now the entries are not even a constant. The lower entry reminded me of a trigonometric identity:

1 - sin^2(omega) = cos^2(omega)

(The page 'Unit Circle' illustrates how the real part of a point on the unit circle can be defined in terms of the imaginary part, and vice versa). If the matrix had been the one here below, it would have been regular complex multiplication with a unit vector:

| cos(omega)   -sin(omega) |
| sin(omega)    cos(omega) |

In the state variable filter, the scaling factors on the identity diagonal are stuffed in just one entry. The total of the scaling factors is unaltered, since:

1 * (1 - sin^2(omega)) = cos(omega) * cos(omega)

The SVF seems to perform a sort of crippled complex multiplication. I would never have expected this to work, but guess what? It makes an oscillator, describing an ellipse on the complex plane instead of a circle. At low frequencies, the elliptical drift is hardly noticeable. At high frequencies, where sin(omega) is a more important factor, it becomes distinct:

An ellipse on the complex plane indicates sinusoid phases which are not orthogonal. They have a phase relation different from 90 degrees. Each phase can still be a pure undistorted sinusoid however.

The SVF ellipse results from the 'modified complex multiplication'. Just like in complex multiplication itself, there is a balance mechanism controlling centrifugal and centripetal forces, here not on a per-sample basis but on a per-cycle basis.

(The lines in the plot are merely 'interpolations' between the points on the ellipse.)
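For the curious, the oscillating core can be imitated in a few lines. This Python sketch is my reading of the flow graph: the trick is that the second phase uses the freshly updated first phase, which is exactly what makes the sin-only matrix work:

```python
import math

def svf_osc(n, omega):
    """SVF oscillator core: only sin terms, updated sequentially.
    Equivalent to the matrix [[1, -s], [s, 1 - s*s]], s = sin(omega)."""
    s = math.sin(omega)
    a, b, out = 1.0, 0.0, []          # start from the pulse [1 + i0]
    for _ in range(n):
        a = a - s * b                 # phase 1
        b = b + s * a                 # phase 2 sees the updated phase 1
        out.append((a, b))
    return out

# At a high frequency the orbit is distinctly elliptical, yet bounded.
points = svf_osc(500, 2.5)
radii = [math.hypot(a, b) for a, b in points]
```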


The state variable filter's oscillating heart must receive a bypass to make a damped oscillator, that is, a resonator. One more negative feedback route is included halfway. Its coefficient, titled 'c' here, regulates a frequency-dependent decay which could be expressed as a Q factor. The graph sketched below is a resonator which can be driven by single-sample pulse input.


As a resonator, the state variable filter does a job comparable to the full complex integrator, only its decay varies with the frequency setting, and you may or may not appreciate that.

A conventional state variable filter has the sin(omega) factor after the 'real' phase summation. In that case the input will be attenuated according to the frequency parameter setting, which is unwelcome for the resonator application. But for filtering audio signals it is an advantage. This is a regular SVF graph:


The SVF produces a band pass filtered signal in its first integrator phase and a low pass in the second phase. If you take the input signal plus the negative feedback, you have a high pass filtered signal.

Another classic resonator scheme is the two-pole section of a biquad. In its non-damped form, the two-pole needs no more than this simple signal flow:


I wondered whether this routine still has elements of complex integration. Therefore I have redrawn the two-pole graph so that it shows the complex integration similarities and the missing links:


From this graph, it is easy to see what the matrix representation must be:

| 2*cos(omega)   -1 |
|      1          0 |

The scaled cosine term in the top left entry can be any number between -2 and 2. Only when this entry has value 0 does the multiplication matrix represent a defined complex number: [0+i], describing a pi/2 radian angular interval. This case produces the familiar circular motion on the complex plane, after a single pulse input:


For all other cases, the matrix does not hold a defined complex number. Still, the imaginary unit remains there, and this will effectuate some sort of rotation. In fact, for most frequencies the motion has an extreme elliptical drift. To see why, consider the matrix for frequency setting zero, with 2*cos(0) = 2:

| 2   -1 |
| 1    0 |

The impulse response to this setting will be a diagonally ascending series on the complex plane.


Other frequency settings result in something in between an ever-ascending line and a circle. That is, an ellipse.

From the above figure it is also clear that even a single-sample pulse input can lead to huge output values. Only the pi/2 radian case with its perfect circle has radius 1 automatically. For all other frequency settings, the pulse input should be attenuated by a factor abs(sin(omega)). This gives an output peak value of 1 for all oscillator frequencies.

To make a two-pole resonator, the feedback must be reduced in both loops. A factor r between 0 and 1 is included in the first loop, while the second loop gets the same factor, but squared. Together with the input attenuator, this is the scheme for a pulse-driven two-pole resonator:


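As a check, this Python sketch (coefficients written out in direct form, names mine) runs the pulse-driven two-pole resonator and compares it with the closed-form damped sinusoid:

```python
import math

def twopole(x, omega, r=0.9):
    """Two-pole resonator: y[n] = a*x[n] + 2*r*cos(w)*y[n-1] - r*r*y[n-2].
    The input is attenuated by abs(sin(w)) so the pulse response peaks
    near 1 for every frequency setting."""
    b1, b2 = 2.0 * r * math.cos(omega), -r * r
    a = abs(math.sin(omega))
    y1 = y2 = 0.0
    out = []
    for sample in x:
        y = a * sample + b1 * y1 + b2 * y2
        y2, y1 = y1, y
        out.append(y)
    return out

# The pulse response is the damped sinusoid r**n * sin((n+1) * omega).
resp = twopole([1.0] + [0.0] * 99, math.pi / 8)
```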
Although the two-pole is in itself a stable resonator, it can not handle arbitrary modulation of its frequency parameter. Imagine you have this perfect pi/2 radian circle running, and go to a low frequency setting from there. The circle is stretched to an ellipse with a much larger peak value. It is not nearly as bad as exponential growth, which can reach inf and nan within a fraction of a second. But still the ellipse can inadvertently grow to a couple of times unity gain. Normalisation of the output instead of the input does not work either.

If you need a robust resonator that can withstand some abuse, the complex integrator is a better choice.

complex versus real resonators

This page illustrated the impulse response of a complex integrator: a circle or decaying spiral on the complex plane. I also 'abused' the complex plane to illustrate the impulse responses of the SVF and two-pole. The ellipses of the real filters denote that their output is not a complex signal with orthogonal phases. This is logical, because their input is not supposed to be complex and they do not perform complex multiplication, even though there are partial similarities which I exploited to sketch their behaviour. Possibly, their output should not be presented on the complex plane at all. But there is so little to see on the x-axis alone! At least I now have an impression of how resonators can operate.

Normally, the second integrator of a real filter is not handled as the second of two phases, but as a two-sample-delayed real output value. The integration then has a form like:

y[n] = a*x[n] + b*y[n-1] - c*y[n-2]

Both x and y are real numbers. Although y is most certainly phase-shifted with respect to the input, the shift can not be expressed in its value. The shift is implicit. Real filter characteristics are necessarily calculated with the help of complex math. Maybe I should try to illustrate that as well some day. Anyhow, this page is getting too lengthy. There is just one thing I would like to mention now:

beware of subnormal numbers

Integrators, complex or not, are efficient low-pass filters because they work with only a few multiplications. Still I found my resonating filters consuming lots of CPU time under certain conditions. When I saw this happening for the first time, I was puzzled. Precisely at moments where the resonators seemed to be in an idle state, CPU load jumped to 30 percent or more, depending on the number of resonators running. Among DSP developers this is a well known phenomenon, but I was still unaware of it.

What happens when decaying numbers approach zero? For integers this is unambiguous: smaller than 1 is 0. Floating point numbers, however, do not give up so easily. A 32 bit single precision floating point number can go down to almost 2^-126. In the context of audio processing, numbers so small are completely irrelevant, but a naively programmed resonator will happily decay into this range. Then comes the range of the subnormal numbers, for which the datatype has no regular representation. Some processors boldly flush these numbers to zero by default. Others, such as Intel's, handle subnormal numbers with intensive care. And intensive it is. A special flag is set, rendering pipelined instructions useless. This, in turn, tends to block the thread where the resonators should be processed. It is amazing how much time and energy a processor can spend on irrelevant details. Almost human.

This intensive care treatment of subnormal numbers should be avoided. What is the range of your audio card? If it is 24 bit, that is some 144 dB. With unity gain as a reference, between 2^-24 and 2^-126 lies a huge range of never-heard sample values. If the sum of both filter integrators is smaller than, for example, 2^-64, they can safely be flushed to zero long before they get subnormal, and the inefficiency issue is gone.
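In code, such a flush is a one-line guard. A Python sketch of the principle (Python's own floats are doubles, so this only illustrates the idea, not the actual CPU penalty):

```python
# Single precision subnormals start below 2**-126 (about 1.2e-38);
# flushing the filter state to zero far above that range is inaudible.
FLUSH = 2.0 ** -64

def leaky_with_flush(x, coeff=0.9):
    """Leaky integrator that kills the tail before it goes subnormal."""
    y, out = 0.0, []
    for sample in x:
        y = sample + coeff * y
        if abs(y) < FLUSH:       # far below any audible level
            y = 0.0
        out.append(y)
    return out

resp = leaky_with_flush([1.0] + [0.0] * 2000)
print(resp[-1])    # exactly 0.0: the decay was flushed, not left to linger
```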