Interview about RAUM


Learn more about RAUM, its development, and reverb effects in general in this interview with Principal Software Engineer Dr. Julian Parker.

What was your role in the RAUM project?

I developed the reverb algorithm itself, and also handled the tuning, giving it an aesthetic perspective.

Can you elaborate on what the reverb algorithm is and how it relates to the final product?

The Grounded and Airy algorithms in RAUM are based on the idea of the feedback delay network, which consists of a lot of delay lines. They are all connected together by what is essentially a big mixer. In technical terms you would call it a matrix: a big grid of numbers that describes the connections between all the delay lines.
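The structure described above can be sketched in a few lines of code. This is a minimal textbook-style feedback delay network, not RAUM's actual implementation; the delay lengths, the feedback gain, and the choice of a Hadamard mixing matrix are all illustrative assumptions.

```python
# Minimal feedback-delay-network sketch (illustrative only, not RAUM's code).
# Four delay lines feed back into each other through a 4x4 mixing matrix --
# here an orthogonal Hadamard matrix, a common textbook choice.

from collections import deque

N = 4
delays = [149, 211, 263, 293]          # hypothetical delay lengths in samples
lines = [deque([0.0] * d, maxlen=d) for d in delays]

# Hadamard matrix scaled by 1/2 so it is orthogonal (energy preserving).
H = [[0.5 * s for s in row] for row in
     [[1, 1, 1, 1], [1, -1, 1, -1], [1, 1, -1, -1], [1, -1, -1, 1]]]

g = 0.85                               # feedback gain < 1 gives a decaying tail

def tick(x):
    """Push one input sample through the network, return one output sample."""
    outs = [line[0] for line in lines]             # oldest sample of each line
    mixed = [sum(H[i][j] * outs[j] for j in range(N)) for i in range(N)]
    for i in range(N):
        lines[i].append(x + g * mixed[i])          # input + mixed feedback
    return sum(outs) / N

impulse = [tick(1.0 if n == 0 else 0.0) for n in range(2000)]
```

Feeding an impulse through `tick` and collecting the output gives a dense, decaying impulse response, which is the reverb tail.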

The smoothness of a reverb is defined by how many resonances you can generate. A metallic-sounding reverb does not have many resonances, just like a boxy room does not have many resonances, whereas a room with a more irregular geometry sounds much smoother. I noticed that the way people were using these feedback matrices was not optimal, because you would end up with resonances placed on top of each other at the same frequency. They just make that part louder instead of adding more density to the reverb. The Dense mode in Airy and Grounded was designed to give the most density possible. That was the initial spark for the project in technical terms.
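The coincident-resonance point can be illustrated with simple arithmetic. A delay of L samples in a feedback loop resonates at multiples of fs/L, so delay lengths that share common factors pile many of those resonances onto the same frequencies. The specific lengths below are hypothetical, chosen only to show the effect; they are not RAUM's tuning.

```python
# Count distinct resonant modes below Nyquist for two sets of delay lengths.
# Lengths with shared factors place resonances on top of each other; pairwise
# coprime lengths spread them out, giving a denser, smoother mode structure.

fs = 48000

def modes(lengths, fmax=fs // 2):
    freqs = set()
    for L in lengths:
        f0 = fs / L                    # fundamental resonance of this loop
        k = 1
        while k * f0 <= fmax:
            freqs.add(round(k * f0, 6))
            k += 1
    return freqs

shared = [100, 200, 300, 400]          # common factors: many coincident modes
coprime = [97, 211, 307, 401]          # pairwise coprime: modes spread out

print(len(modes(shared)), len(modes(coprime)))
```

The coprime set yields noticeably more distinct resonances from the same number of delay lines, which is the density the interview is describing.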

The other thing I wanted to do when I initially designed the algorithms was to try and make the early reflections integral to the reverb network itself. The traditional way of dealing with early reflections was to have two things in parallel. You had the bit that generated your late reverb, which is diffuse. And then you had a completely separate, parallel structure that would do the early reflections, which is very flexible because you can design them specifically, separately. But then you have this really subjective task of balancing the early reflections and making them flow into the tail in a believable way. And that is quite difficult.

So in this case some of the challenging complexity of reverb is solved by a new take on how early reflections are generated.

Yeah, exactly. Instead of having a separate, parallel structure for the early reflections, you actually take the early reflections from inside the main reverb, in a very specific way. So you know, my first echo is going to come from this delay line, which has this length, and my second echo is going to come from that other delay line, which has that length. But they are present in the overall reverb anyway. This means that any echo time that is present in the reflections is also there in the build-up of the late, diffuse tail. Which is of course what would happen in reality, because you have got sound bouncing around and building up until it turns into a wash. So, this is an attempt to unify those two things.
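The unification idea can be sketched as follows: each delay line is tapped directly for an early echo, and the very same taps recirculate to build the tail, so every reflection time is automatically present in the tail's onset. The reflection times, gains, and feedback amount here are hypothetical, purely for illustration.

```python
# Sketch of "early reflections tapped from inside the reverb" (illustration
# of the idea, not RAUM's implementation). The same delay-line taps produce
# the early echoes and feed the recirculating tail.

from collections import deque

delays = [1200, 2900, 4100]            # hypothetical reflection times (samples)
gains = [0.8, 0.6, 0.5]                # hypothetical reflection gains
fb = 0.5                               # feedback that builds the tail
lines = [deque([0.0] * d, maxlen=d) for d in delays]

def tick(x):
    taps = [line[0] for line in lines]
    early = sum(g * t for g, t in zip(gains, taps))   # reflections read as taps
    for line in lines:
        line.append(x + fb * early)    # the same taps recirculate into the tail
    return early

out = [tick(1.0 if n == 0 else 0.0) for n in range(6000)]
```

An impulse produces the designed echoes at 1200 and 2900 samples, and the recirculation then fills in a tail whose build-up contains exactly those times.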

As far as I know, the basic technique of the feedback delay network goes back at least two decades or maybe three. Did any of the existing devices play a role in the development of RAUM?

Only in the broad sense. I mean, I wanted it to be able to do realistic reverb, but I also wanted it to fulfill the role of certain Lexicon reverbs or the Alesis Quadraverb, for example. I wanted it to be able to be an unreal reverb that sounded good. But we did not want to go back to the same algorithms. We wanted a more modern, more hi-fi take on those sounds. By the way, Cosmic is not using a feedback delay network, it is more in the direction of the traditional Lexicon and Alesis reverbs, or the Eventide ones.

Many effects provide flexibility through lots of modes and controls, but RAUM is very focused: only three algorithms, and fewer controls compared to many other reverbs. What was the thought process behind selecting the controls and their ranges to cover all this ground?

What we aimed for was to make sure that every parameter is interesting, and also to try and give them wide ranges, but wide ranges that were sensible. Wide ranges where all the settings are interesting, but different. I think this is especially true for the size control, which has a super wide range. It goes from really tight to so sparse that it is almost like a granular effect. But there is never a point where you would think: "Why would I turn it here, this is not usable." It is a nice gradient of useful things.

We tried to do this with all of the controls, I would say. The modulation is another good example. In the early parts of the control you just get a little bit of movement that breaks up echoes, more like the traditional reverb modulation use case. As you go up to 30, 50 percent you will get chorusing. Once you go above about 70 percent, it will start going into weird, uncanny territory, where it sounds detuned and dissonant. You have got three different use cases, and then you try and pack them all into the same control, and make the transition between them meaningful.
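One way to pack several regimes into a single knob is to map it onto the underlying depth and rate parameters with nonlinear curves, so the character changes gradually across the range. The curves below are entirely hypothetical, just illustrating the design idea, not RAUM's actual mapping.

```python
# Hypothetical mapping of one 0..1 "modulation" control onto depth and rate,
# so the low end gives subtle movement, the middle chorusing, and the top
# detuned, noisy territory -- an illustration, not RAUM's real curve.

def mod_params(amount):
    """Return (depth_ms, rate_hz) for a control value in [0, 1]."""
    depth_ms = 0.1 + 4.9 * amount ** 2     # depth grows slowly, then steeply
    rate_hz = 0.2 + 7.8 * amount ** 3      # rate stays low until the far end
    return depth_ms, rate_hz

for amount in (0.1, 0.4, 0.9):
    print(amount, mod_params(amount))
```

The nonlinear exponents keep most of the knob travel in the subtle and chorusing zones, with the extreme sounds compressed into the top of the range.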

Reverb is often understood as a set and forget effect, but RAUM feels really playable and sounds great when automated. You can throw automation at pretty much any control and it still sounds good. Was this intentional?

Yeah, it is definitely intentional. We first started thinking about reverb like this in the context of MASCHINE, where you want every effect to sound great when using it with the automation view. But I guess it was also just the Zeitgeist. For example, Tom Erbe did the Erbe-Verb, which is heavily invested in real time control issues for reverb. I thought more about this again when we started working on RAUM.

I would say we did focus on this especially with the pre-delay section, and also with the Freeze function. I mean, Freeze does not even make much sense in a set and forget way, you either automate it or play it live. It is an interactive performance element. The filters are also very carefully tuned to feel nice, both for tweaking the tone of the reverb and for playing with them.

I think we were just very aware that it is interesting to allow a reverb to do these things. In a way, it takes it from being a mixing or texture effect to being an instrument. Also, if you are designing parameters to be nicely automated, they just feel better when you are turning the controls. So even in normal usage it contributes to the user experience.

Do theory or tweaking play a bigger role in developing a reverb?

I think tweaking is super important, but not in the way of throwing stuff against the wall to see what sticks. What I like to do, and I do this not just with reverbs, is to understand how the tweaks are going to change the sound. Then I can listen for the right things and tweak until they sound how I want them to. It is about having a clear idea of cause and effect. With reverbs that sometimes gets difficult as it is such an abstract system, there are so many different things interacting. I try to maintain a strong connection between what I am trying to achieve and how I would have to listen to actually hear it.

Neither is really more important than the other. The algorithm sets the bounds of what you can do, but it is going to have a lot of bad sounds in that space. So tuning and giving it meaningful parameters is what makes it sound good, I would say. If the algorithm is not very capable, you still might be able to make it sound great for one specific situation, but it is not going to have a wider applicability.

Tell me one of your favorite uses of RAUM.

I definitely like using the pre-delay section as a resonator or a comb filter.

Then let us talk about pre-delay.

Every reverb has pre-delay, but it is usually just a utility function, unless you can maybe sync it. We wanted to have it syncable, so we put that in, and then I thought, hang on. We just have this delay line here and it is not really doing anything apart from delaying the input. All we have to do is add one feedback path and then suddenly we have got something quite interesting. So that is how we ended up having the feedback control.
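A delay line with a single feedback path added, as described above, is exactly a comb filter. A minimal sketch (illustrative, not RAUM's implementation; the delay length and feedback amount are arbitrary):

```python
# Pre-delay plus one feedback path = a feedback comb filter.
# An impulse repeats every delay_samples, scaled by the feedback gain each time.

from collections import deque

def comb(signal, delay_samples, feedback):
    buf = deque([0.0] * delay_samples, maxlen=delay_samples)
    out = []
    for x in signal:
        delayed = buf[0]                     # oldest sample in the buffer
        buf.append(x + feedback * delayed)   # the one added feedback path
        out.append(delayed)
    return out

echoes = comb([1.0] + [0.0] * 999, delay_samples=100, feedback=0.5)
```

With short delay times this resonates like a tuned comb filter; with long times it behaves as discrete echoes, which is the range of behavior the feedback control opens up.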

But there are a lot of other things going on in there. The sound of the pre-delay is very specific. It is a bit like the Modern mode in REPLIKA, but not exactly. Most of the time when you are doing delays, you use interpolation to exactly hit a particular delay time. But the problem is that any time you use interpolation, it introduces filtering. Especially if you have feedback, you can hear that as a degradation of the signal as it recirculates.

In this case we basically decided that we do not care if there is a tiny fraction of a millisecond of error in the delay time. We are going to quantize all the available delay values to an integer number of samples, which means we do not have any loss from interpolation. So you have an absolutely lossless delay line, until you start introducing filtering yourself. And I think you can really tell the difference here, especially in the comb filtering use cases, because it has this high frequency clarity which is a bit unusual.
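The interpolation loss is easy to demonstrate numerically. Worst-case linear interpolation, a half-sample offset, is the filter y[n] = 0.5·(x[n] + x[n-1]), which nulls the Nyquist frequency entirely, so highs die off fast when the signal recirculates through it; an integer-sample delay is a pure shift and preserves everything. This is a generic sketch of the phenomenon, not RAUM's code.

```python
# Recirculating a signal through a half-sample linear interpolator: the
# averaging filter it implies bleeds away high frequencies on every pass,
# while DC (and an integer-sample delay, a pure shift) is left untouched.

def half_sample_interp(x):
    # y[n] = 0.5 * (x[n] + x[n-1]) -- linear interpolation at a 0.5 offset
    return [0.5 * (a + b) for a, b in zip(x, [0.0] + x[:-1])]

nyquist = [1.0, -1.0] * 8          # the highest representable frequency
dc = [1.0] * 16                    # the lowest, for comparison

sig_hi, sig_lo = nyquist, dc
for _ in range(20):                # twenty trips around a feedback loop
    sig_hi = half_sample_interp(sig_hi)
    sig_lo = half_sample_interp(sig_lo)
```

After twenty passes the Nyquist-rate signal has almost vanished while DC is essentially intact, which is exactly the recirculating dullness that quantizing to integer sample counts avoids.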

We also did not want to use distortion to tame the peaks when you are doing comb filtering, because this would also make it less clean than we wanted it to be. But it is still necessary, because if you have a completely linear comb filter, you will get really strong resonances that are super loud. So instead, we put a limiter in the feedback path. Not in the main signal path, but when the delay is recirculated, then it goes through a limiter. The release time of the limiter is adapted based on the delay time, so it never really sounds like limiting. It just means that it stops the feedback from getting out of hand, while still allowing it to resonate nicely. I think you can really hear that. I never really got those kinds of cool sounds out of high frequency comb filtering before.
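The idea of limiting only the feedback path can be sketched like this. For simplicity the sketch uses a hard clamp where a real limiter with a delay-adapted release would be much smoother; the delay length, feedback amount, and ceiling are illustrative assumptions.

```python
# Comb filter with a limiter in the feedback path only (illustration of the
# idea, not RAUM's limiter). Even at >100% feedback the loop stays bounded,
# while the dry input path is never touched by the limiting.

from collections import deque

def comb_limited(signal, delay_samples, feedback, ceiling=1.0):
    buf = deque([0.0] * delay_samples, maxlen=delay_samples)
    out = []
    for x in signal:
        delayed = buf[0]
        recirculated = feedback * delayed
        # Limiter sits only here, on the recirculated signal:
        recirculated = max(-ceiling, min(ceiling, recirculated))
        buf.append(x + recirculated)
        out.append(delayed)
    return out

# 110% feedback, yet the resonance settles instead of exploding.
excited = comb_limited([1.0] + [0.0] * 2000, delay_samples=50, feedback=1.1)
```

A purely linear version at the same setting would grow without bound; limiting only the recirculation keeps the resonance ringing indefinitely without runaway gain.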

What are some of the other use cases where this clean pre-delay sound comes into play?

I mean, obviously it is just better even as a normal pre-delay, because you do not have any high frequency filtering if you do not want it. It is also nice when you use it as a looper. You can use extremely long delay times, I think four bars is the maximum. This interacts really nicely with the limiter as well, because then the limiter kind of acts like an auto overdub feature.

The sound is completely clean and will recirculate forever at 100% feedback, and then you can play on top and introduce bits into the buffer. If you had interpolation, that would not work because after 20 repeats or something you would start to hear the filtering building up and building up. But this will hold a loop forever, unless you dial in the filters.

You mentioned earlier that Cosmic is a different algorithm.

Cosmic is completely different. It is basically a new flavor of the REPLIKA Diffusion mode. We changed the modulation; its range is much wider in RAUM. Also, in REPLIKA you have one control for both the amount of modulation and the frequency, which is nice but also a bit restrictive. We wanted to give it much more control. You have pretty huge modulation depths available and also a wide frequency range, all the way up to low audio rates. It gets fast enough to produce noisy sounds. This was done to give it a wider variety of tones.

And the pre-delay feedback includes it in some way, right?

Yeah, the thing about Cosmic is that it is just a lot of cascaded all-pass filters. It needs to sit within a delay loop with feedback in order to make longer reverbs. Grounded and Airy are sitting after the pre-delay, and the feedback does not include them. But in Cosmic the reverb part is put inside the feedback loop.
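The structural difference can be sketched as a chain of Schroeder all-pass diffusers sitting inside the pre-delay's feedback loop, so the signal is diffused anew on every recirculation. The all-pass lengths, coefficients, loop length, and feedback amount below are hypothetical; this illustrates the topology, not Cosmic's actual algorithm.

```python
# Cascaded all-pass diffusers placed *inside* a delay feedback loop, so each
# echo gets more blurred than the last (topology sketch, not RAUM's code).

from collections import deque

class Allpass:
    """Schroeder all-pass: y[n] = -g*x[n] + x[n-d] + g*y[n-d]."""
    def __init__(self, d, g=0.5):
        self.x = deque([0.0] * d, maxlen=d)
        self.y = deque([0.0] * d, maxlen=d)
        self.g = g

    def tick(self, xin):
        yout = -self.g * xin + self.x[0] + self.g * self.y[0]
        self.x.append(xin)
        self.y.append(yout)
        return yout

diffusers = [Allpass(d) for d in (142, 107, 379, 277)]  # hypothetical lengths
loop = deque([0.0] * 2000, maxlen=2000)                 # the pre-delay line
fb = 0.6

def tick(x):
    s = x + fb * loop[0]            # feedback from the pre-delay
    for ap in diffusers:            # diffused again on every recirculation
        s = ap.tick(s)
    loop.append(s)
    return s

tail = [tick(1.0 if n == 0 else 0.0) for n in range(4000)]
```

With Grounded and Airy the reverb sits after the pre-delay, outside this loop, so repeated echoes stay clean; putting the diffusion inside the loop, as here, makes each pass progressively blurrier.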

The nice thing about Cosmic is that each time you get an echo with the pre-delay it gets more blurred and more diffuse. Which is great sometimes, but then Grounded and Airy are nice because you can add space on top of whatever you are doing with the delay, without it all getting smeared out and completely mushed. The two approaches have different kinds of strengths and use cases. That is why we wanted to have them both.