Mittwoch, 11. September 2019

Costa Cordalis: restoration of analog master tapes and vinyl singles


An employee of Bellaphon Records dived into the archive to bring old recordings of Costa Cordalis back to daylight. He came back with the 1/4" master tape of his first album "Folklore Aus Aller Welt" from 1966 and some vinyl singles.
This material now had to be digitized and, where necessary, restored.
To my astonishment, the sound that came off the 1/4" tape was very good: it was stereo and there was almost no audible noise. The entire album was produced with just two acoustic guitars and a lead vocal. Using the reference tone I aligned the azimuth of the tape heads. Then I set the recording level so that the loudest passages hit about -10 to -6 dB. In total there should be 13 tracks on the album. To my surprise there was a bonus track containing additional spoken announcements for the individual titles in the respective languages of the songs. These takes were played at the end of the tape; after digitizing I edited them together with the individual songs. On one particular title the noise in the intro was a bit strong, so at Bellaphon's request I de-noised the section in question. For this job I had several tools from iZotope (RX 7) and Steinberg available.
De-noising is generally about reducing noise without affecting the sound of the music. As plug-ins I had the Voxengo Gliss equalizer in combination with a dynamic equalizer. Both components intervened very little in the program material; together the two tools quickly removed the noise without affecting the substance of the music.
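The idea behind such de-noising can be sketched as a simple spectral gate (a toy illustration, not how the Voxengo or iZotope tools work internally; frame size, threshold factor and residual floor are arbitrary assumptions): estimate the noise spectrum from a noise-only excerpt, then attenuate every FFT bin that does not rise clearly above it.

```python
import numpy as np

def spectral_gate(x, noise, frame=1024, factor=2.0, floor=0.1):
    """Toy spectral gate: attenuate FFT bins close to the noise floor.

    x      -- signal to clean (1-D array)
    noise  -- noise-only excerpt used to estimate the noise spectrum
    factor -- bins below factor * noise magnitude are ducked
    floor  -- residual gain for ducked bins
    """
    # average magnitude spectrum of the noise excerpt
    n_frames = [np.abs(np.fft.rfft(noise[i:i + frame]))
                for i in range(0, len(noise) - frame + 1, frame)]
    noise_mag = np.mean(n_frames, axis=0)

    out = np.zeros_like(x)
    for i in range(0, len(x) - frame + 1, frame):
        spec = np.fft.rfft(x[i:i + frame])
        gain = np.where(np.abs(spec) > factor * noise_mag, 1.0, floor)
        out[i:i + frame] = np.fft.irfft(gain * spec, n=frame)
    return out

# demo: a 440 Hz tone buried in mild hiss
rng = np.random.default_rng(0)
sr = 44100
t = np.arange(sr) / sr
clean = 0.5 * np.sin(2 * np.pi * 440 * t)
hiss = 0.02 * rng.standard_normal(sr)
denoised = spectral_gate(clean + hiss, noise=0.02 * rng.standard_normal(sr))
```

Real de-noisers use overlapping windows and smoother gain curves; this rectangular-frame version would produce audible block artifacts on real material.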

Much more complex was the challenge of optimizing the sound of old single-vinyl records.
There was a whole range of imperfections to work on. The sound was mono and contained plenty of record crackle and sometimes quite unsightly distortion. First of all, I played the vinyls (dry), exactly as they were. The digitization was only about transferring the records into the computer at an appropriate level; again I kept the level at -10 to -6 dB. For eliminating the crackle, software from WaveLab and iZotope is available, and here RX 7 worked very well: I could eliminate the clicks. However, it was necessary to identify and mark each click individually and to render it with the correct threshold. If I chose the section too long, because two clicks sat relatively close to each other, the tool "killed" the punch of the snare drum, because it took the drum's transients for a click. RX 7 was also able to eliminate distortion on some passages, because it preserves the transients while removing the distortion. This worked very well in some places; in others a more specific solution was needed.
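The click-removal principle can be sketched in a few lines (a hypothetical toy, nothing like the RX 7 algorithm; the threshold value is an assumption): a click deviates sharply from the average of its neighbours, so flag the local peaks of that deviation and interpolate over them.

```python
import numpy as np

def declick(x, threshold=8.0):
    """Remove single-sample clicks by interpolating over outliers.

    threshold is in multiples of the median neighbour deviation.
    Set it too low and real transients get 'killed' as well.
    """
    y = x.copy()
    # deviation of each sample from the average of its two neighbours
    dev = np.abs(x[1:-1] - 0.5 * (x[:-2] + x[2:]))
    ref = np.median(dev) + 1e-12
    suspects = np.where(dev > threshold * ref)[0]
    for i in suspects:
        left = dev[i - 1] if i > 0 else 0.0
        right = dev[i + 1] if i < len(dev) - 1 else 0.0
        if dev[i] >= left and dev[i] >= right:   # local peak = the click itself
            y[i + 1] = 0.5 * (x[i] + x[i + 2])   # dev[i] belongs to x[i+1]
    return y

# demo: a smooth tone with one vinyl click
sr = 44100
t = np.arange(2048) / sr
tone = 0.5 * np.sin(2 * np.pi * 220 * t)
clicked = tone.copy()
clicked[1000] += 0.9                             # the click
restored = declick(clicked)
```

Lowering the threshold reproduces the problem described above: drum transients start getting flagged as clicks.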

I experimented with a multiband compressor in combination with a de-esser. In most cases the distortion affected the voice in the frequency range between 2000 Hz and 4000 Hz. With the compressor I reduced the critical frequency range by about 2-3 dB. It turned out, however, that this reduction was not enough, while a stronger reduction made the voice noticeably too quiet. Therefore I additionally used the de-esser to very selectively reduce the tearing frequencies. This narrowband processing allowed a further reduction of 2 to 3 dB. Once again I found that a plug-in can work almost miracles on one specific problem but delivers no usable result on another. In any case it is necessary to engage with the material precisely; simple "plug and play" tends to destroy the program material rather than achieve a good result. I am amazed at what is now possible with modern software.
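The static part of this narrow-band cut can be illustrated with an FFT (a sketch; a real de-esser applies the gain only while the band is loud, and the 500 Hz / 3 kHz test tones are assumptions for the demo):

```python
import numpy as np

def band_cut(x, sr, lo=2000.0, hi=4000.0, cut_db=3.0):
    """Attenuate one frequency band of a signal by cut_db decibels.

    Static sketch of the narrow-band reduction described above;
    a real de-esser applies this gain dynamically.
    """
    spec = np.fft.rfft(x)
    freqs = np.fft.rfftfreq(len(x), d=1.0 / sr)
    band = (freqs >= lo) & (freqs <= hi)
    spec[band] *= 10.0 ** (-cut_db / 20.0)   # -3 dB -> factor ~0.708
    return np.fft.irfft(spec, n=len(x))

# demo: a 3 kHz "tearing" component next to a 500 Hz voice fundamental
sr = 44100
t = np.arange(sr) / sr
x = 0.5 * np.sin(2 * np.pi * 500 * t) + 0.2 * np.sin(2 * np.pi * 3000 * t)
y = band_cut(x, sr)   # only the 2-4 kHz range is reduced
```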

Stefan Noltemeyer

Mittwoch, 16. August 2017

10 Tips for using an audio compressor in the mixdown.

1. The best audio compressor is the one whose presence is not heard at all. (Apart from exceptions such as side-chain compression in EDM).

2. A compressor does not increase the level of a signal; it reduces peaks. Only the make-up gain raises the level.

3. This does not make the loud sections of the music louder, but the quiet ones.

4. The compressor reduces the original dynamics.

5. Transients (fast attacks), for example of an acoustic guitar, are problematic. The transients are compressed first, because they have a relatively high level. This reduces the level, but also the original sound, because transients are very important for it.

6. Voices can easily be compressed heavily because they have very few transients. 12 dB of gain reduction and more (recommended ratio 4:1) are possible.

7. With strong voice compression, however, breaths and other background noises become much louder.

8. With the side-chain input, the work of the compressor is not controlled by the input signal, but by the signal that is present at the side-chain input.

9. For example, a synthesizer in a mix can be automatically quieter when the voice is applied to the side-chain input. This creates space for the voice and makes a mix lively.

10. In parallel compression, the compressed signal and the uncompressed signal are summed phase-locked. This way you can raise low-level passages and keep the original transients. (dry control)
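The points above can be condensed into the compressor's static gain curve (a generic textbook model, not any particular unit; threshold and ratio values are examples):

```python
import numpy as np

def compress_db(level_db, threshold_db=-20.0, ratio=4.0, makeup_db=0.0):
    """Static compressor curve: levels above threshold are scaled by 1/ratio."""
    over = np.maximum(level_db - threshold_db, 0.0)
    return level_db - over * (1 - 1 / ratio) + makeup_db

# tips 2-4: a loud peak is pulled down, a quiet passage is untouched,
# so after make-up gain the quiet parts end up relatively louder
print(compress_db(-8.0))    # -> -17.0 (12 dB over threshold, 4:1 leaves 3 dB)
print(compress_db(-40.0))   # -> -40.0 (below threshold: untouched)

# tip 10: parallel compression, dry and compressed summed phase-locked
def parallel(x, threshold_db=-20.0, ratio=4.0):
    level_db = 20 * np.log10(np.abs(x) + 1e-12)
    gain = 10 ** ((compress_db(level_db, threshold_db, ratio) - level_db) / 20)
    return x + gain * x     # transients of the dry path survive
```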

Stefan Noltemeyer

Freitag, 28. Juli 2017

Technical analysis of a produced song

Before I start mastering a song, I listen and analyze its acoustic-technical problems. I separate the overall sound into its components.

First, I check out the low frequencies. 
Are the basses generated by the bass, the bass drum, or both?
Are there any other instruments that sound below 100 Hz?
Is the low end OK, or should I use a high-pass filter at around 30 Hz? (My spectrum analyzer also shows me this.)
Are there any problems between the bass and the bass drum?

Next, I check out the mid frequencies between 200 Hz and 3000 Hz (where the "music plays").
Is there any instrument or a voice that is too loud?
Is there a single frequency that is too loud?
Is there any instrument or a voice that should be louder?
Are the kick of the bass drum (the attack) and the snare drum powerful enough?

What about the presence range (between 3000 Hz and 8000 Hz)?
Which instruments are still involved in this frequency range?
Does the voice sound present enough?
Are there problems with "S" sounds in the voice?
Are there instruments and overtones that are too strong here?
Is the hihat too loud (a classic error in the mixdown)?
Is the snare drum still there?

Next, I check out the high end above 8000 Hz.
Which instruments play "top"?
(Again) Are the hihat, ride and crash o.k.?

What about the total sound?
Does anything boom in the bass?
Does the title sound too sharp or too dull?
What happens in the side channel, are there phase cancellations (stereo information)?
What is the total level, has the sum already been compressed?
Do we need more loudness?
If I'm unsure, I listen to the song in comparison with other titles of the same genre.
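Part of this checklist can be cross-checked numerically, in the spirit of the spectrum analyzer mentioned above. A hypothetical helper that reports the relative energy per band:

```python
import numpy as np

# Band limits follow the checklist above: lows, mids, presence, highs.
BANDS = {"low (<100 Hz)": (0, 100),
         "mid (200-3000 Hz)": (200, 3000),
         "presence (3000-8000 Hz)": (3000, 8000),
         "high (>8000 Hz)": (8000, None)}

def band_energies(x, sr):
    """Return the relative signal energy per analysis band."""
    mag2 = np.abs(np.fft.rfft(x)) ** 2
    freqs = np.fft.rfftfreq(len(x), d=1.0 / sr)
    total = mag2.sum() + 1e-12
    out = {}
    for name, (lo, hi) in BANDS.items():
        sel = freqs >= lo if hi is None else (freqs >= lo) & (freqs < hi)
        out[name] = mag2[sel].sum() / total
    return out

# demo: a 60 Hz bass-heavy signal should land in the low band
sr = 44100
t = np.arange(sr) / sr
report = band_energies(np.sin(2 * np.pi * 60 * t), sr)
```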

Stefan Noltemeyer

Donnerstag, 10. Dezember 2015

Linear and non-linear audio distortion

Essay from the book "Mastering" (translated by Google)

Stefan Noltemeyer


All components of a sound studio can be divided into four groups:
1. Transducers (microphones, speakers)
2. Amplifiers (preamplifiers, power amplifiers, impedance converters, equalizers, compressors, limiters, effects units)
3. Storage (analog and digital tape machines, computer hard discs)
4. Cables

Except for digital storage, all these components produce distortion to varying degrees.
Distortion inevitably arises in any type of audio transmission.
In audio technology, distortion means the modification or manipulation of an original signal. Distortion may be caused intentionally or unintentionally.

One distinguishes between linear and non-linear distortions.

Linear distortions

Linear distortions can in turn be divided into delay distortions and amplitude distortions. In contrast to non-linear distortions, linear distortions produce no additional frequencies or harmonics.

Delay distortions occur in every transmission chain: a signal always arrives at the receiver later than it was sent. If the full frequency range is offset equally in time, the problem is mostly irrelevant. Exceptions are the latency problems when working with computers. Depending on the processing power and the quality of the audio interface, the time a DSP (digital signal processor) needs to process audio data can amount to several hundred milliseconds. When a musician listens to his instrument or his voice "through the computer" while recording, he will find the delay disturbing.
In analog audio technology, problems arise when several signals with different delays are mixed together.
Delay distortions in which the delay is shorter than the period of a single oscillation are called phase distortions or phase shifts.

If different frequency bands are delayed to very different degrees in a transmission chain, audible effects can result.
I distinguish two different groups of culprits.
On the one hand, individual components in the electronics produce delay distortions; on the other hand, delay distortions arise under certain conditions in acoustics.
In an amplifier stage, high and low frequencies take different times due to the specifications of individual components. A well-known example is the so-called RC element. It consists of a combination of a capacitor and an electrical resistor. Such an RC element is, for example, the main component of an equalizer. It shifts the phase angle between high and low tones within an audio signal. This is audible only in extreme cases; it becomes relevant when many small delays add up. Details are described in the chapter "Equalizer".

On closer examination of the acoustic aspects of audio engineering, one often encounters phenomena that cause delay distortions.
Phase cancellations are disturbing especially with low-frequency signals.
As a graphic example, take two sine waves with a frequency of 50 Hz and equal amplitude, shifted by 10 milliseconds relative to one another. This corresponds exactly to a 180-degree phase shift. One oscillation is placed in the left channel of the stereo image, the other in the right. If you now switch the signal to mono, the two oscillations are added into one sum: the crest of one sine wave is added to the trough of the other. In this case they cancel each other completely and the sum is zero. Silence!

Since practically no music consists of pure sine waves, and since a phase shift of exactly 180 degrees will never occur, this is a theoretical example. But with wavelengths of several meters at low frequencies (6.80 m at 50 Hz), it is easy to see that the phenomenon of phase cancellation changes blatantly if, for example, you move one meter further away from the sound source.
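The textbook example is easy to reproduce numerically, for instance with NumPy (the sample rate is assumed):

```python
import numpy as np

sr = 48000
t = np.arange(sr) / sr                        # one second of time
left = np.sin(2 * np.pi * 50 * t)             # 50 Hz in the left channel
right = np.sin(2 * np.pi * 50 * (t - 0.010))  # same wave, 10 ms later

# 10 ms is half of the 20 ms period of 50 Hz -> 180 degree shift:
# crest meets trough, the mono sum is (numerically) silence
mono = 0.5 * (left + right)
print(np.max(np.abs(mono)))                   # practically zero
```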

When you record a sound source with two or more microphones, phase cancellations occur, since the sound needs different travel times to reach the different microphones. This causes interference when you mix the microphone signals together. When the sound waves cancel each other, it is known as destructive interference; when they add up, it is called constructive interference. In both cases the original sound is distorted. In acoustics this is called a comb filter effect.

The frequency spectrum of an instrument, e.g. an acoustic guitar or a grand piano, picked up by two microphones is sometimes strongly distorted by such a comb filter effect. The decisive factor is the volume balance of the two microphones relative to each other.
If they are identical, the effect is strongest.
If one microphone is, for example, one meter further away from the sound source than the other, the result is a delay difference of about 0.003 seconds (one meter at roughly 340 m/s).

An almost classic problem occurs when recording an upright or grand piano. It arises as follows: it is common to use one microphone for the bass strings and one for the treble. Since both microphones pick up not only the range they are intended for but the full range of the instrument, phase shifts occur due to the different travel times to the two microphones. Because sound travels relatively slowly, even small travel-time differences become a problem. To illustrate this, imagine a single bass string of a grand piano. When it is struck, the resulting sound is recorded by both microphones, which are at different distances from the sounding string. Suppose the microphone intended for the treble is located one meter further away from the string than the microphone for the bass. The sound then needs 1/340th of a second longer to reach the treble microphone. In the mix, the signals of both microphones come together. In character both signals are almost identical, after all they picked up the same sound, but they are slightly shifted in time against each other. One could assume that panning one microphone to the left and the other to the right eliminates the problem. But a listener sitting exactly in the middle between the speakers reproducing the piano will at least be able to hear phase cancellations in the low notes. If you switch this stereo signal to mono, the two slightly offset signals add together, and individual frequencies cancel. Compared with a recording of the same instrument with only one microphone, there will be less bass. How severe the cancellations are depends on the volume ratio of the two signals: the greater the difference in volume, the smaller the problem. If both signals are approximately equal in level, comb filter effects arise.
The human ear perceives such a comb filter effect as a changed timbre.
It is known that the human ear is much less sensitive to delay or phase distortions than, for example, to amplitude distortions. But it can very well perceive the resulting errors; identifying them just takes a little practice.
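The comb filter itself can be sketched as a signal plus a delayed copy of itself; its magnitude response has notches at odd multiples of 1/(2 x delay). The sample rate and the 10-sample delay below are assumptions for illustration:

```python
import numpy as np

sr = 48000
delay = 10                                  # samples (~0.21 ms)
n = 480                                     # analysis length, 100 Hz bins

# impulse response of "signal + delayed copy"
h = np.zeros(n)
h[0] = 1.0
h[delay] = 1.0

H = np.abs(np.fft.rfft(h))                  # magnitude response
freqs = np.fft.rfftfreq(n, d=1.0 / sr)

# first notch at sr / (2 * delay) = 2400 Hz, full addition at 4800 Hz
print(H[24], freqs[24])                     # destructive: near 0 at 2400 Hz
print(H[48], freqs[48])                     # constructive: near 2 at 4800 Hz
```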

To learn how a phase cancellation affects a complex signal, you can experimentally reverse the polarity of one of your speakers. The membrane of a loudspeaker vibrates around a resting point. If the polarity of one box is reversed, its membrane moves outward on a signal while the membrane of the other box moves inward. As a result, one box generates positive sound pressure and the other negative sound pressure.
This is audible: one gets the impression that the bass reproduction has become considerably weaker.

In 1988 I started in a recording studio in Frankfurt as an assistant, coffee maker, pizza fetcher and studio technician. The studio was set up for audiophile sound recording. It had a big recording room with a grand piano and a control room with professionally measured studio acoustics (live end / dead end). We recorded and produced many jazz and rock bands there. In times without bookings I often used the studio for my own sessions and sound-engineering experiments, and much of my knowledge and experience was acquired there. When recording our grand piano I tried many variations, on the one hand to capture the sound of the instrument as well as possible and on the other to minimize the phase shifts that result from using multiple microphones. Incidentally, that trained my ability to perceive even small phase shifts.
Apart from the reduced bass reproduction, a phase shift leaves an indefinable, bizarre feeling in the head. A friend described it as follows: it feels like someone rubbing a sponge over your scalp.
Recently I discovered a plug-in by Universal Audio called "Little Labs". It offers the possibility of rotating the phase of a signal continuously from 0 to 180 degrees, and it can delay the signal continuously from 0 to 4 milliseconds. With this tool you can effectively compensate for the phase problems of a stereo signal. As a control I use a phase correlation meter: the less the indicator moves to the left of center, the better the result.

Possible technical causes of phase cancellation:
I have often soldered XLR cables. An XLR connector has three numbered pins: no. 1 is the connection for the shield, no. 2 for the positive signal wire, no. 3 for the negative signal wire. If I swap pins 2 and 3 on one of the two male ends, I create, in conjunction with a second (properly wired) cable, a 180-degree phase cancellation.
Modulation effects such as chorus also produce phase shifts that can cause cancellations of low frequencies. Therefore a chorus on a bass is critical.

Amplitude distortion

Amplitude distortion refers to all changes to an original signal in which the amplitudes of individual frequencies are amplified or attenuated. If, for example, the high frequencies are raised, the sound will be more brilliant than the original. If the low frequencies are raised, you will feel the sound is "more powerful" than the original.
With an equalizer, amplitude distortions are deliberately introduced in order to shape a sound in a certain way.
The German term for equalizer, "Entzerrer", literally means "de-distorter".
Originally an equalizer was used in broadcast technology to compensate for unwanted amplitude distortions of a transmission chain: the transmitted sound was to be "straightened out" back to the original by means of an equalizer.
There are a number of sources of amplitude distortion.
Every cable produces amplitude distortion through the electrical resistance of its copper wire. If all frequencies are reduced equally, you perceive the change only as a difference in volume. If some frequencies are reduced more than others, you perceive a change in tone. The problem is well known with guitar cables: since the output of an electric guitar is high-impedance, the cable acts as a filter that reduces high frequencies.

Amplitude distortions also arise because an amplifier does not amplify all frequencies of the audio spectrum exactly equally. This is relevant especially at very low and very high frequencies. A quality characteristic of an amplifier is its frequency response (for example 20-20,000 Hz).
This means that the amplifier is capable of transmitting all frequencies between 20 and 20,000 Hz at (almost) the same volume.
Relatively strong unwanted amplitude distortions arise in loudspeakers. In most transmission chains the loudspeaker is the "weakest" element, because it causes the most massive linear distortions.

Non-linear distortions
If a sound contains non-linear distortions, overtones have been added to the original, intentionally or unintentionally. These are mostly integer multiples of the fundamental frequency. The level ratio of the unwanted overtones to the fundamental is called the total harmonic distortion (THD): the louder the harmonics relative to the fundamental, the higher the THD. These distortions are caused by the non-linear characteristic curves of the active components (transistors, ICs, amplifier tubes) of an amplifier. These are the non-linear distortions that are referred to in common parlance simply as "distortion". Given the high quality of today's electronic components, you can assume that you will only perceive this kind of distortion when an amplifier is massively overloaded. The heat resulting from the overload also plays a role, because temperature often influences the characteristic curve of an active component.
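THD as described here, the level of the harmonics relative to the fundamental, can be measured directly from a spectrum. A sketch using a hard-clipped sine as the "overloaded amplifier" (the clip level and tone are arbitrary):

```python
import numpy as np

def thd(x, fundamental_bin, n_harmonics=10):
    """Total harmonic distortion: harmonic RMS over fundamental magnitude."""
    spec = np.abs(np.fft.rfft(x))
    fund = spec[fundamental_bin]
    harms = [spec[k * fundamental_bin]
             for k in range(2, n_harmonics + 2)
             if k * fundamental_bin < len(spec)]
    return np.sqrt(np.sum(np.square(harms))) / fund

n = 4096
t = np.arange(n)
clean = np.sin(2 * np.pi * 16 * t / n)       # 16 full cycles -> bin 16
clipped = np.clip(clean, -0.7, 0.7)          # the overloaded 'amplifier'

print(thd(clean, 16))    # ~0: a pure sine has no harmonics
print(thd(clipped, 16))  # clearly > 0: clipping adds odd harmonics
```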

The most famous use of non-linear distortion is in the world of guitarists. What is usually unintentional is exactly what a guitarist is looking for. Originally, the distortion was created only by setting a guitar amplifier to maximum gain and thereby overloading it. The guitarist enjoys not only the newfound overtones, but also the fact that the sustain is extended by the distortion, in other words, the note rings longer before it dies away. Since this effect could originally be produced only at brutal volume, the industry has built devices since the 1960s that generate this or a similar distortion at low volume. They are called Fuzz, Tube Screamer, Overdrive or Distortion.

Montag, 3. August 2015

10 Tips for the perfect mix

  1. Even though the mix is nearly finished during the production of a song, it makes sense to start the mixdown from scratch. Switch off all tracks; bypass all effects and inserts such as compressors and equalizers. Start with the bassdrum and adjust its level to approx. -10 dBFS.
  2. Continue with the snaredrum and the hi-hat; the bass follows ... (These are the instruments that generate the highest levels.) Next come the harmony instruments, keyboards and guitars. After that you can open the main voice to check whether any sounds cover the vocals.
  3. Divide the mix into subgroups: one group for drums, one for keyboards, one for the voices, and so on. You can edit the overall sound of each group and you get better control of the total level.
  4. When working on each single track, insert the equalizer first, followed by the compressor. After insert effects such as phaser, flanger or delays you can place a second compressor.
  5. When filtering individual tracks, it may well be that an instrument does not sound great on its own but works perfectly in the mixdown. So when setting an equalizer, also listen to the interplay of all instruments, not only the solo track.
  6. Acoustic instruments, voices and guitars absolutely need compression; it limits their dynamic range. A ratio of 4:1 with 8-12 dB of gain reduction is useful for the voice. Watch the low frequencies when compressing the bass (they easily get lost); acoustic guitars tend to pump slightly under compression.
  7. Every instrument, every sound, gets its assigned position. Imagine a three-dimensional coordinate system: left and right (panorama), up and down (treble and bass), and front and rear, determined by reverb and delay. The fourth dimension is time: automation allows changing the level of instruments in the flow of the song.
  8. Whenever multiple instruments sound in the same or a similar frequency range simultaneously, they mask each other. One classic example: the snaredrum and the voice (depending on the pitch); the louder the snare, the quieter the voice sounds.
  9. Effects are important. They are the salt in the soup, but each sound has its moment. Delays can be programmed and switched on and off; it is easy to vary the send level of a reverb effect. Too much of a good thing makes the soup too salty to enjoy.
  10. Never mix into a limiter inserted on the sum; it falsifies the instrument balance. Maximizing or mastering is a separate session. It is no problem to raise a low-leveled mixdown, but if the level is too high, you get in trouble (distortion). Never do mixdown and mastering at once. It is useful to take a break of 24 hours before you do the mastering, or consult a specialist like...
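Tip 10 in numbers, as a small sketch (the -6 dBFS target is an example value): measure the peak in dBFS and raise the low-leveled mixdown cleanly instead of mixing into a limiter.

```python
import numpy as np

def peak_dbfs(x):
    """Peak level of a float signal relative to full scale (1.0)."""
    return 20 * np.log10(np.max(np.abs(x)) + 1e-12)

def normalize_to(x, target_dbfs=-6.0):
    """Scale the mix so its peak sits at target_dbfs (headroom for mastering)."""
    gain = 10 ** ((target_dbfs - peak_dbfs(x)) / 20)
    return gain * x

# demo: a quiet mixdown is raised cleanly to -6 dBFS, no limiter needed
rng = np.random.default_rng(1)
mix = 0.1 * rng.standard_normal(44100)
mastered = normalize_to(mix, target_dbfs=-6.0)
print(round(float(peak_dbfs(mastered)), 2))   # peak now at -6 dBFS
```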

Stefan Noltemeyer

Donnerstag, 25. November 2010

MS-editing in the mastering process


The same way you can split a stereo signal into a left and right signal, you can split it into a middle and side signal. The middle signal is the sum of left and right. The side signal is the difference of left and right.

When we split the signal this way we have other possibilities to edit when mastering. We can edit individual instruments in the stereo panorama, and it is possible to edit the middle signal and side separately. This is useful for editing the hihat or a doubled (left/right) guitar.


M= mid signal
S= side signal
L= left channel
R= right channel
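The split described above, in code (using the common scaling M = (L+R)/2, S = (L-R)/2; other conventions differ only by a constant factor):

```python
import numpy as np

def lr_to_ms(left, right):
    """Encode: mid is the sum of L and R, side the difference (scaled by 1/2)."""
    return 0.5 * (left + right), 0.5 * (left - right)

def ms_to_lr(mid, side):
    """Decode back to left/right; the round trip is lossless."""
    return mid + side, mid - side

# demo: widen the stereo image by boosting only the side signal
left = np.array([0.5, 0.3, -0.2])
right = np.array([0.4, -0.1, -0.3])
mid, side = lr_to_ms(left, right)
wide_l, wide_r = ms_to_lr(mid, 1.5 * side)   # e.g. more doubled-guitar width
```

Processing only `side` affects the doubled guitars and the stereo width while a centered vocal, which lives in `mid`, stays untouched.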

Stefan Noltemeyer

Montag, 5. Oktober 2009

The fine difference with the Tube-Tech SMC 2B and the Studer A 80

There are two devices which are significant in creating our sound: the analogue multiband tube compressor Tube-Tech SMC 2B and the analogue tape recorder Studer A 80.
In the following notes I describe what these machines do and how we use them.
With a multiband compressor it is possible to compress different frequency ranges separately. This is very useful because a "normal" compressor compresses whichever frequencies have the highest level; these are mostly the frequencies between 40 Hz and 400 Hz, so the compression of the complete spectrum is determined by the low end.
With the Tube-Tech I have the choice of compressing three different frequency bands. If a song has a strong bass, it is possible to apply the compression only to these lower frequencies; the vocals and guitars remain untouched.
The increase in volume is realised by the make-up gain of the Tube-Tech's tube stages. There is a separate adjuster for each band (bass, middle and treble). This makes the unit effectively a three-band equalizer as well, so I use it not only to increase loudness but also as a dynamic equalizer: the equalization starts when a defined threshold level is reached.
This makes it possible to raise the level of separate instrument groups and thus to influence the balance of the original mix enormously. This is naturally only done when necessary; out of respect for the original sound of the song, we keep its original character. The goal of mastering is to feature what is best in the music.
So I use the Tube-Tech primarily to bring back instruments or vocals that have been lost in the mix. As a result you get a "tidy" total sound.
With little tweaks of 2-3 dB of compression and a tube-coloured make-up gain, I shape a unique, homogeneously warm sound.

By using our Studer A 80 tape recorder, the mastering process gets a unique direction.
The mastered song is recorded onto magnetic tape and played back just 1/10 of a second later.
The specific character is formed by mapping the audio signal into magnetic energy and converting it back.
Subjectively, this creates the impression that the digital bits are joined together into a homogeneous whole. My impression is also that it enhances the depth staging of the mix.
So this is an additive process that brings the analogue sound into the digital world.

Tape saturation is a popular topic when using analog tape recorders.
You get compression when you record at a very high level onto magnetic tape.
The magnetic flux has a specific limit; if you cross this level, the flux no longer rises in proportion to the input level. This is how you get the famous tape saturation.
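This behaviour of the magnetic flux is often modeled with a smooth saturating curve such as tanh (a common simplification, not a model of the Studer A 80): low levels pass almost linearly, high levels are compressed.

```python
import numpy as np

def tape_saturate(x, drive=1.0):
    """Toy tape-saturation curve: tanh compresses only high levels."""
    return np.tanh(drive * x) / np.tanh(drive)   # normalized so 1.0 -> 1.0

# low level: nearly linear; high level: the effective gain drops,
# i.e. the output no longer rises in proportion to the input
low_gain = tape_saturate(0.01) / 0.01
high_gain = tape_saturate(0.9) / 0.9
print(low_gain > high_gain)    # True: that drop is the 'saturation'
```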

Stefan Noltemeyer