Spectral Interpolation

Spectral interpolation is an analysis/synthesis technique in which a musical tone is modeled as a set of harmonics with slowly varying amplitudes. As long as the amplitudes vary slowly enough, linear interpolation between wavetables provides an efficient synthesis technique.
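
As a concrete illustration of the idea (not the authors' implementation), the sketch below renders wavetables from harmonic amplitudes and linearly crossfades between them. All names and parameter values are invented for the example:

    import numpy as np

    def make_wavetable(harmonic_amps, size=1024):
        """Render one period of a waveform from harmonic amplitudes."""
        n = np.arange(size)
        table = np.zeros(size)
        for k, a in enumerate(harmonic_amps, start=1):
            table += a * np.sin(2 * np.pi * k * n / size)
        return table

    def interp_oscillator(table_a, table_b, freq, dur, sr=44100):
        """Crossfade from table_a to table_b over dur seconds at freq Hz."""
        num = int(dur * sr)
        # phase advances freq periods per second, truncated to a table index
        phase = (np.arange(num) * freq * len(table_a) / sr) % len(table_a)
        idx = phase.astype(int)
        mix = np.linspace(0.0, 1.0, num)  # linear interpolation weight
        return (1 - mix) * table_a[idx] + mix * table_b[idx]

    # Example: glide from a bright spectrum to a darker one.
    bright = make_wavetable([1.0, 0.8, 0.6, 0.4, 0.3])
    dark = make_wavetable([1.0, 0.3, 0.1])
    signal = interp_oscillator(bright, dark, freq=440.0, dur=0.5)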

To date, we have worked on reproducing sounds from prerecorded examples. Current work uses tables to store representative spectra (or waveforms) as a function of pitch and loudness (and perhaps other parameters in the future). This idea is mentioned in our early papers but not fully realized until the work by Istvan Derenyi.
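
One way to picture the spectrum-table idea is a grid of harmonic-amplitude vectors indexed by pitch and loudness, with bilinear interpolation in between. The sketch below assumes that reading; the grid spacing, toy data, and helper name (lookup_spectrum) are illustrative only:

    import numpy as np

    pitches = np.array([220.0, 440.0, 880.0])   # Hz
    loudness = np.array([0.25, 0.5, 1.0])       # normalized dynamic level
    # spectra[i, j] = harmonic amplitudes at pitches[i], loudness[j]
    spectra = np.random.rand(3, 3, 8)           # 8 harmonics, toy data

    def lookup_spectrum(f0, level):
        """Bilinearly interpolate the stored spectra at (f0, level)."""
        i = np.clip(np.searchsorted(pitches, f0) - 1, 0, len(pitches) - 2)
        j = np.clip(np.searchsorted(loudness, level) - 1, 0, len(loudness) - 2)
        u = (f0 - pitches[i]) / (pitches[i + 1] - pitches[i])
        v = (level - loudness[j]) / (loudness[j + 1] - loudness[j])
        return ((1 - u) * (1 - v) * spectra[i, j]
                + u * (1 - v) * spectra[i + 1, j]
                + (1 - u) * v * spectra[i, j + 1]
                + u * v * spectra[i + 1, j + 1])

    amps = lookup_spectrum(330.0, 0.7)  # spectrum for an intermediate tone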

Sound examples are available.


Dannenberg, Serra, and Rubine, “Comprehensive Study of Analysis and Synthesis of Tones by Spectral Interpolation,” Journal of the Acoustical Society of America, Supplement 1, Vol. 82 (Fall 1987).

This is a short paper on the spectral interpolation technique, documenting a conference presentation.

ABSTRACT: A new approach to the real-time generation of digital sounds uses a completely automated analysis/synthesis technique for natural sounds. This approach leads to a more efficient implementation than classical additive synthesis; moreover it allows dynamic spectral variations to be controlled with only a few high-level parameters. Additive synthesis devices require a large number of oscillators (one for each partial). This technique gives excellent results; however, it requires a large amount of computation, and a large amount of control data. On the other hand, fixed-waveform synthesis uses only one oscillator, but the results are of poor musical quality since there is no dynamic evolution of the spectrum. A new technique has been investigated in which spectral variation is achieved through spectral interpolation. The research shows that spectral interpolation provides high-quality synthesis including controlled timbral variation at little more than the cost of a table-lookup oscillator. The task of analyzing different kinds of instrumental sounds to produce control information for this technique has been automated.

[Acrobat (PDF) version]


Dannenberg, Serra, and Rubine, “A Comprehensive Study of Analysis and Synthesis of Tones by Spectral Interpolation,” Technical Report CMU-CS-88-146, Carnegie Mellon University Computer Science Department, June 1988.

This is the long version of our JASA conference presentation and the most comprehensive presentation of our work on analysis and synthesis. I still have copies; if you would like one, send me email. (A toy sketch of the hybrid attack-splice mentioned in the abstract follows this entry.)

ABSTRACT: This paper presents a technique for the analysis and digital resynthesis of instrumental sounds. The technique is based on a model which uses interpolation of amplitude spectra to reproduce short-time spectral variations. The main focus of our work is the analysis algorithm: starting from a digital recording we are able to automatically compute the parameters of our model. The parameters themselves, harmonic amplitudes at selected times, are small in number and intuitively interpretable. The model leads to a synthesis technique more efficient than classical additive synthesis; moreover it allows dynamic spectral variations to be controlled with only a few high-level parameters in real time.

We have studied two analysis/synthesis methods based on spectral interpolation. The first uses only spectral interpolation. This method has allowed us to compress recordings of orchestral instruments to an average of 400 bytes per second without perceptible loss of realism, and to resynthesize these sounds with about 10 arithmetic operations per sample. The second method is a hybrid in which a sampled attack is spliced onto a sustain synthesized via spectral interpolation.

The spectral interpolation model has been applied successfully to different instruments belonging to the brass and woodwind family. We plan to extend the study to many more instruments.

[Acrobat (PDF) version]
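
The hybrid method in the abstract above can be pictured as a short crossfade from a recorded attack into a synthesized sustain. The function below is a toy sketch under that reading; the arrays and fade length are placeholders, not values from the paper:

    import numpy as np

    def splice(attack, sustain, fade=128):
        """Crossfade from a sampled attack into a synthesized sustain."""
        w = np.linspace(0.0, 1.0, fade)
        head = attack[:-fade]                                   # attack, verbatim
        joint = (1 - w) * attack[-fade:] + w * sustain[:fade]   # crossfade region
        return np.concatenate([head, joint, sustain[fade:]])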


Serra, Rubine, and Dannenberg, “Analysis and Synthesis of Tones by Spectral Interpolation,” Journal of the Audio Engineering Society, 38(3) (March 1990), pp. 111-128.

ABSTRACT: A technique is presented for the analysis and digital resynthesis of instrumental sounds. The technique is based on a model that uses interpolation of amplitude spectra to reproduce short-time spectral variations. The main focus of this work is the analysis algorithm. Starting from a digital recording the authors were able to compute automatically the parameters of this model. The parameters themselves, harmonic amplitudes at selected times, are small in number and intuitively interpretable. The model leads to a synthesis technique more efficient than classical additive synthesis. Moreover it allows dynamic spectral variations to be controlled with only a few high-level parameters in real time. Two analysis/synthesis methods are studied based on spectral interpolation. The first uses only spectral interpolation. This method made it possible to compress recordings of orchestral instruments to an average of 400 bytes per second without perceptible loss of realism, and to resynthesize these sounds with about 10 arithmetic operations per sample. The second method is a hybrid in which a sampled attack is spliced onto a sustain synthesized via spectral interpolation. The spectral interpolation model has been applied successfully to different instruments belonging to the brass and woodwind family. The authors plan to extend the study to many more instruments.

[Acrobat (PDF) version]


Serra, Rubine, and Dannenberg, “The Analysis and Resynthesis of Tones via Spectral Interpolation,” in Proceedings of the 1988 International Computer Music Conference, Cologne, West Germany, September 20-25, 1988. Eds. Christoph Lischka and Johannes Fritsch. San Francisco: International Computer Music Association, 1988. pp. 322-332.

An article for the computer music community. The JAES article is probably easier to come by and more complete.

ABSTRACT: This paper presents a new approach to the real-time generation of digital sounds. Our approach is based on the interpolation of spectra over time, and leads to a completely automated analysis/synthesis algorithm for the reproduction of natural sounds. The technique enables accurate reproduction of the spectral variations of acoustic instruments. Furthermore, spectral interpolation synthesis has a more efficient implementation than that of classical additive synthesis.

We have applied the spectral interpolation analysis/synthesis technique to different instruments and have obtained promising results. The technique greatly reduces the amount of data needed to represent a harmonic sound; however, it fails to convincingly reproduce inharmonic sounds. To reproduce sounds with inharmonic attacks we use a hybrid method which combines sampling and spectral interpolation synthesis.

[Acrobat (PDF) version]


Dannenberg, Pellerin, and Derenyi, “A Study of Trumpet Envelopes,” in Proceedings of the International Computer Music Conference. San Francisco: International Computer Music Association (1998), pp. 57-61.

This paper is the first I know of that reports on envelopes of trumpet tones played in a musical context. "Real" trumpet envelopes exhibit features never seen in the classic studies. We also show statistical relationships between melodic shape and envelope shape, confirming Clynes' general idea, but contradicting some specific details of the Clynes model. This paper is really a companion to Derenyi and Dannenberg 1998 (below).

[Acrobat (PDF) version] [HTML version]


Derenyi and Dannenberg, “Synthesizing Trumpet Performances,” in Proceedings of the International Computer Music Conference. San Francisco: International Computer Music Association (1998), pp. 490-496.

This paper puts into practice what we described as early as 1987 and what we've been aiming for all along: the idea that we can generate envelope information and use it to drive a spectral interpolation synthesizer. Interesting results here include some studies of the validity of the assumption that the spectrum is really a function of amplitude and frequency, and that there is not enough "history" in the system to matter. This paper is really a companion to Dannenberg, Pellerin, and Derenyi 1998 (above). A minimal sketch of this envelope-driven control flow follows the entry.

[Acrobat (PDF) version] [HTML version]
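
Reading the entry above, the control flow might be sketched as follows: amplitude and frequency envelopes (the performance model's output) drive a per-frame spectrum lookup (the instrument model) that feeds the interpolating oscillator. This reuses the illustrative make_wavetable, interp_oscillator, and lookup_spectrum helpers sketched at the top of this page; it is an assumption-laden sketch, not the paper's code:

    import numpy as np

    def synthesize(amp_env, freq_env, frame_dur=0.01, sr=44100):
        """Render one control frame at a time, crossfading between the
        wavetables implied by successive (amplitude, frequency) pairs."""
        out = []
        prev = make_wavetable(lookup_spectrum(freq_env[0], amp_env[0]))
        for a, f in zip(amp_env[1:], freq_env[1:]):
            cur = make_wavetable(lookup_spectrum(f, a))
            out.append(a * interp_oscillator(prev, cur, f, frame_dur, sr))
            prev = cur
        return np.concatenate(out)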


Dannenberg and Derenyi, “Combining Instrument and Performance Models for High-Quality Music Synthesis,” Journal of New Music Research, 27(3), (September 1998), pp. 211-238.

This is a full-length journal article which partially overlaps the two ICMC articles listed above.

ABSTRACT: Convincing synthesis of wind instruments requires more than the reproduction of individual tones. Since the player exerts continuous control over amplitude, frequency, and other parameters, it is not adequate to store simple templates for individual tones and string them together to make phrases. Transitions are important, and the details of a tone are affected by context. To address these problems, we present an approach to music synthesis that relies on a performance model to generate musical control signals and an instrument model to generate appropriate time-varying spectra. This approach is carefully designed to facilitate model construction from recorded examples of acoustic performances. We report on our experience developing a system to synthesize trumpet performances from a symbolic score input.

[Acrobat (PDF) version]


Dannenberg and Matsunaga, “Automatic Capture for Spectrum-Based Instrument Models,” in Proceedings of the 1999 International Computer Music Conference. San Francisco: International Computer Music Association (1999), pp. 145-148.

This paper reports on initial work to automate the capture of Spectral Interpolation models. (A sketch of the period-synchronous analysis stage follows this entry.)

ABSTRACT: Our goal is to automate the analysis of recorded acoustic performances in order to study the relationship between scores and performance. An automated system segments a recorded performance into individual notes. These are then analyzed to determine pitch and amplitude envelopes. Spectral data is also measured. The technique consists of two stages. First, a rough estimation stage performs pitch detection based on MQ analysis. Second, an accurate estimation stage uses period-synchronous analysis. The data will ultimately be used by a machine learning process to build instrument and performance models. Experiments with trumpet tones are described.

[Acrobat (PDF) version]
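
The period-synchronous stage described in the abstract above can be approximated by taking a DFT of one period at a time, so that bin k is exactly harmonic k. The sketch below assumes a constant rough pitch estimate; the actual system tracks pitch per period, and the function name is illustrative:

    import numpy as np

    def period_synchronous_analysis(signal, f0, sr=44100, num_harmonics=8):
        """Return per-period harmonic amplitudes for a quasi-periodic signal."""
        period = int(round(sr / f0))
        frames = len(signal) // period
        amps = np.zeros((frames, num_harmonics))
        for m in range(frames):
            cycle = signal[m * period:(m + 1) * period]
            spectrum = np.fft.rfft(cycle)
            # bin k of a one-period DFT corresponds to harmonic k
            amps[m] = 2 * np.abs(spectrum[1:num_harmonics + 1]) / period
        return amps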