
Best smoothing time for Knobs?

Discussion in 'Building With Reaktor' started by Quietschboy, Jul 29, 2018.

  1. Quietschboy

    Quietschboy NI Product Owner

    Messages:
    465
    Hi all,
    I have some knobs here controlling synth parameters or volume. Smoothers are needed to prevent audible steps.
    On one hand, smoothing should be soft enough for slow movements of a knob; on the other hand, fast movements should not feel sluggish.
    So, what's the typical value you would prefer?

    What about adaptive solutions that track the speed of knob movement?
    Does anyone have experience with those?

    Thanks,
    Mark
     
  2. Chet Singer

    Chet Singer NI Product Owner

    Messages:
    804
    It depends on what you're planning to drive them with.

    I long ago began using a linear 20ms smoother because it's what I deduced the Nord Modulars were using, and if it was good enough for Clavia it was good enough for me. But if that's steppy for some functions I'll increase it to as long as 100ms.

    In the old days I used the Primary Event Smoother. Nowadays I use the Core Lin Smoother [A].
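For reference, a fixed-time linear smoother of the kind described here can be sketched in a few lines of Python (an illustrative sketch, not NI's Lin Smoother macro; the class name and the unit control range are assumptions):

```python
# Illustrative sketch of a fixed-time linear smoother (NOT NI's Lin Smoother
# macro): each sample, the output moves toward the target at a constant rate
# chosen so that a full-range jump completes in `time_ms` milliseconds.

class LinearSmoother:
    def __init__(self, time_ms=20.0, sample_rate=44100.0, full_range=1.0):
        # Per-sample increment for a full-range sweep in `time_ms`.
        self.step = full_range / (time_ms * 0.001 * sample_rate)
        self.value = 0.0
        self.target = 0.0

    def set_target(self, target):
        self.target = target

    def tick(self):
        # One fixed-size step toward the target, clamped at arrival.
        if self.value < self.target:
            self.value = min(self.value + self.step, self.target)
        elif self.value > self.target:
            self.value = max(self.value - self.step, self.target)
        return self.value
```

Raising `time_ms` from 20 to 100 trades steppiness for sluggishness, which is exactly the compromise discussed in this thread.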
     
    • Like x 2
  3. herw

    herw NI Product Owner

    Messages:
    6,403
    Thanks, that helps me :)
     
  4. colB

    colB NI Product Owner

    Messages:
    3,246
    I've tried adaptive before. Can work well, but there are lots of gotchas in there ;)

    Basic concept is to count the time between knob events and adjust the smoothing rate to match.

    Problems with a simple 'naive' implementation:
    * on initialization or after a knob hasn't been touched for a while, the time can be very long.
    * if the knob input slows down, you still get some stepping.
    * glitchy/jumpy/jittery input can cause glitchy/jumpy/jittery output

    Options to mitigate these problems:
    * limit the maximum time
    * allow slow smoothing ramps to be interrupted and update the ramp rate accordingly
    * reset ramp rate to some standard 'default' (e.g. 20ms) on initialisation or if the time is longer than some much bigger max threshold
    * make the ramp time longer than the input event delta by some fudge factor (or some factor of the ratio of previous delta to the one before that...etc.)
    * smooth or filter the ramp time

    It's always going to be a bit of a compromise, but for some things adaptive can be less of a compromise than picking a fixed smoothing time and having either zippering or slow response.
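The naive version and its gotchas can be sketched like this (illustrative Python, not the actual ensemble; the class name, the 1-second first-event fallback, and the explicit `now` parameter are assumptions):

```python
import time

class NaiveAdaptiveSmoother:
    """Naive form of the adaptive idea described above (an illustrative
    sketch, not colB's ensemble): the ramp time simply equals the time
    between the last two knob events, with none of the mitigations."""

    def __init__(self, sample_rate=44100.0):
        self.sr = sample_rate
        self.value = 0.0
        self.target = 0.0
        self.step = 0.0
        self.last_event = None

    def on_event(self, target, now=None):
        now = time.monotonic() if now is None else now
        # Gotcha: on the first event, or after the knob has been untouched
        # for a while, delta_t is undefined or huge, producing an absurdly
        # long ramp (here an assumed 1-second fallback stands in for that).
        delta_t = (now - self.last_event) if self.last_event is not None else 1.0
        self.last_event = now
        self.target = target
        # Per-sample step sized so the ramp covers the delta in delta_t.
        self.step = abs(target - self.value) / max(delta_t * self.sr, 1.0)

    def tick(self):
        if self.value < self.target:
            self.value = min(self.value + self.step, self.target)
        elif self.value > self.target:
            self.value = max(self.value - self.step, self.target)
        return self.value
```

Note how jittery event timing feeds straight into `self.step`, which is why the glitchy-input gotcha above needs the filtering and fudge-factor mitigations.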
     
    • Informative x 2
  5. Quietschboy

    Quietschboy NI Product Owner

    Messages:
    465
    Thank you guys for your responses!
    Yes Colin, I had some similar thoughts about an adaptive smoother, though of course nothing as worked out as your explanation ;)
    Linear smoothing is about the cheapest thinkable.
    So, purely subjectively: is it worth accepting much more CPU load for one (or many...) adaptive smoothers?
    I know this can't be answered definitively.
    The load also depends on the options you mentioned above.
    Do you have a rough idea how much higher the CPU load of a fully equipped adaptive smoother is?
     
  6. colB

    colB NI Product Owner

    Messages:
    3,246
    Here's a comparison between 20ms linear, 100ms linear and an adaptive implementation.

    You can hear clear zippering with the 20ms linear.
    The 100ms gives reduced zippering, but it's still noticeable when you go slowly. 100ms also slows down the control response significantly.
    Adaptive reduces zippering still further, and the slow down in response is vastly reduced. It's not perfect, but definitely an improvement for some applications.

    For this comparison I chose to control the pitch of a sine wave to make glitches most noticeable. I also set the knob for 100 steps. It's interesting to set the steps lower to say 64 or even 16 to see how the adaptive approach handles these extremes.

    I expect this implementation could be improved upon both in terms of functionality and efficiency, but it shows the value of the approach.
    I developed it using my M-Audio Axiom MIDI keyboard, and it works particularly well on the rotary encoders that have some built-in acceleration. It's possible to go from very fast filter tweaks to super slow, smooth changes.
     

    Attached Files:

    • Like x 3
  7. Quietschboy

    Quietschboy NI Product Owner

    Messages:
    465
    That´s wonderful!
    It feels and sounds so much more analog.
    If you switch back to Lin or no smoothing, it´s like being dropped into the stone age :eek:
    I added "no smooth" and a pulse OSC to your ens, just to compare. With pulse it sounds more dramatic. The Vol knob follows the smoothing mode, too.

    In adaptive mode the smoothing is, depending on the movement speed and width, sometimes very laid back, and maybe I still heard some stepping at medium movement tempo (I´m not sure).

    There are fewer sample-rate-clocked modules than I expected.

    I feel it´s worth banging my head against this :)
    Very nice!!! :thumbsup:
     

    Attached Files:

    • Like x 1
  8. sellotape

    sellotape NI Product Owner

    Messages:
    345
    In most cases I'm content with lin smoothers between 20-50 ms, depending on the sensitivity and stepping of the control. Making them adaptive sounds nice at first thought, but it comes with lots of disadvantages, as colB already mentioned. I do a lot of poly stuff, so a smoother has to be as simple and CPU-friendly as possible.
    Imho the scaling and stepping of knobs is much more important. That's why I really dislike Reaktor's MIDI implementation (no NRPN), or MIDI in general.
     
  9. colB

    colB NI Product Owner

    Messages:
    3,246
    Control knobs are inherently monophonic, so the CPU worries of polyphonic processing aren't really relevant here. The main extra cost of an adaptive approach is one extra divide per input event, and that's a pretty cheap addition. Those control events are not coming in anywhere close to audio rate, so it's really not that expensive.
    And that's the problem with fixed time solutions - many users are stuck using MIDI controllers that only offer a resolution of 127 steps.
    In that situation, an adaptive approach beats a fixed time linear smoother hands down.

    The problems I listed are mostly solvable.

    The only gnarly issue is that the adaptive smoothing time lags the input by 1 event. This means that when making slow changes, the target parameter keeps changing a little after you stop moving the knob, and when making faster changes, some minor zippering can still be audible. Both of these symptoms are inherent in the approach but can be mitigated by tuning the implementation and accepting compromises elsewhere.
     
    Last edited: Aug 1, 2018
    • Like x 1
  10. Chet Singer

    Chet Singer NI Product Owner

    Messages:
    804
    Thanks for this; your solution intrigues me. I'm going to investigate its use with a breath controller. Breath control has the same requirements: fast response when changing quickly, and the elimination of zipper noise when changing slowly, especially when blowing very softly and the MIDI values are in single digits.
     
  11. sellotape

    sellotape NI Product Owner

    Messages:
    345
    I know, but unfortunately it's hard to deal with mono/poly inside Core. Sometimes it's better to put the smoother in an external mono CC, and sometimes it's better to put it straight in the poly CC and save the extra input. If Core were smart enough to realize that when all the voices have the same value it only needs to calculate once and share the result with all voices...
    OT: MIDI is so '80s that it can really ruin the joy of playing your stuff. It might be good for controlling old gear or an Atari, but by today's standards you will miss so many sweet spots. Once you have played an old monosynth or a modular, you realize how restrictive digital controls are. I really don't know why a 7-bit control is still an industry standard in the 21st century. If Reaktor could at least handle NRPN, we could go up to 16383 steps.
     
  12. colB

    colB NI Product Owner

    Messages:
    3,246
    I totally agree. Unfortunately, though, we're still stuck with it, at least when using the majority of hardware controllers with Reaktor.
    The MIDI implementation in Reaktor is surprisingly limited. There's really no excuse for not having more flexibility with the channel message modules. As you say, it's '80s tech and there's not a whole lot to it, so it's hard to rationalise why we can't have a full implementation.
     
    • Like x 1
  13. colB

    colB NI Product Owner

    Messages:
    3,246
    I've added in min and max smoothing.

    This example limits the longest fade time to 250ms. This is better unless very slowly changing inputs are important. I also changed the fudge factor to 3 which seems to help reduce some of the subtle stepping when a medium speed knob tweak slows down, but without a noticeable impact on the 'feel'.
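With the mitigations folded in, the scheme might look like this (a Python sketch reconstructed from the description, not the attached ensemble; the 5 ms minimum and 1 s pause threshold are assumptions, while the 250 ms cap, 20 ms default, and fudge factor of 3 come from the posts above):

```python
import time

class AdaptiveSmoother:
    """Sketch of the mitigated adaptive smoother (reconstructed from the
    thread, not the attached ensemble): event-interval-driven ramp time,
    fudge factor, clamped fade time, and a reset after long pauses."""

    def __init__(self, sample_rate=44100.0, min_ms=5.0, max_ms=250.0,
                 default_ms=20.0, fudge=3.0, pause_s=1.0):
        self.sr = sample_rate
        self.min_s = min_ms * 0.001        # assumed minimum fade time
        self.max_s = max_ms * 0.001        # 250 ms cap from the post
        self.default_s = default_ms * 0.001
        self.fudge = fudge                 # fudge factor of 3 from the post
        self.pause_s = pause_s             # assumed "untouched" threshold
        self.value = self.target = self.step = 0.0
        self.last_event = None

    def on_event(self, target, now=None):
        now = time.monotonic() if now is None else now
        if self.last_event is None or now - self.last_event > self.pause_s:
            ramp_s = self.default_s        # reset after init or a long pause
        else:
            # Ramp a bit longer than the event interval, within [min, max].
            ramp_s = min(max((now - self.last_event) * self.fudge,
                             self.min_s), self.max_s)
        self.last_event = now
        self.target = target
        # An interrupted ramp restarts from the current mid-ramp value.
        self.step = abs(target - self.value) / max(ramp_s * self.sr, 1.0)

    def tick(self):
        if self.value < self.target:
            self.value = min(self.value + self.step, self.target)
        elif self.value > self.target:
            self.value = max(self.value - self.step, self.target)
        return self.value
```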
     

    Attached Files:

    • Like x 2
    • Informative x 1
  14. colB

    colB NI Product Owner

    Messages:
    3,246
    Is it purely breath strength, or also lip pressure?

    The only downside of eliminating zippering is that when changing a controller slowly, then stopping, there will be a lag before the final value is reached that lasts as long as the time between the last value and the one before. I guess in the case of a wind instrument this might not be too much of an issue, as long as the difference between blowing gently and stopping is more than a single value; then the last little bit will be a fast fade from gentle breath to none.
    If the last audible breath level is 1 and stopping is 0, there might be issues, as the sound would fade out rather than stop when the player stops blowing. If possible, it would be good to configure the hardware so that pppp-level playing is still a few clicks away from 0 in terms of controller output value.

    Once it is tuned correctly I think it will work pretty well, and should definitely feel better than a fixed linear smoothing time.
     
  15. herw

    herw NI Product Owner

    Messages:
    6,403
    I am testing smoothers in EMSCHER. To probe the limits, nearly all knobs are smoothed (at 6000 Hz), which means about 140 smoothers.
    When I choose bank 1, snap 001 (all containers activated) without any smoothers, I get about 53% CPU usage. Quietschboy and I are testing several versions (quite boring and time-intensive); many thanks to him for his impressive test versions, his imagination is incredible.
    I have implemented a version from MONARK.

    upload_2018-9-11_14-20-15.png

    upload_2018-9-11_14-20-39.png

    After many tests it works perfectly, BUT the CPU usage is terrible: nearly 80%, and often „over”. Nevertheless it is interesting to see what happens when using so many smoothers.
    MONARK uses a range of [0|1], and I had to transform the original range. I am using a table which is very simple to use.

    Now I am testing several other versions and will report any progress.
     
    Last edited: Sep 11, 2018
    • Like x 1
  16. colB

    colB NI Product Owner

    Messages:
    3,246
    Do you mean the full commercial Monark, or the Monark Blocks?
    I notice you have some variations of adaptive smoothers in there as well, are they using a different approach from the one I posted? Or the same thing but with various optimizations?

    Interesting that a table would be efficient. I suppose if all smoothers can refer to the same table then it could stay in cache and be nice and fast...
     
  17. herw

    herw NI Product Owner

    Messages:
    6,403
    No, the macros are not yours but parts of the full commercial Monark. I didn't know they are different from the MONARK Blocks, so I am not allowed to publish them, only to use them privately, not to share with you :(
    MONARK's smoothers are optimised for [0|1].
    I have made two macros: if the range of a knob is [0|1], I can use MONARK's smoother directly (I can gate the clock by the container); if not, I have to transform the range before MONARK's smoother and retransform at the output. That's very simple.
    MONARK's smoother is adaptive too, so I called the macro "adaptive smoother". Mark knows the details better.
    But as I said, the CPU usage is terrible. A gated clock helps, but not for publishing.

    The table is very simple. I didn't want to add a constant at the inputs. It's nothing special.

    I will also test with the MONARK Blocks' smoother, because it is commonly available.
     
    Last edited: Sep 12, 2018
  18. colB

    colB NI Product Owner

    Messages:
    3,246
    That's annoying. I don't have Monark, just the Blocks. I had a good look through their code when they were released and didn't notice anything special about the smoothers - that's why I asked. It would be interesting to know how the Monark ones work.

    You should email NI - they might be totally happy for you to use the Monark smoothers.
     
    Last edited: Sep 11, 2018
  19. Quietschboy

    Quietschboy NI Product Owner

    Messages:
    465
    The Monark smoother´s simplicity is very interesting and impressive.
    I haven´t fully worked through the maths, just this much:
    It doesn´t need to count the time between the last two incoming events. Instead it "guesses" the ramp from the delta value between events: the larger dVal is, the shorter the ramp must be. For this, the fixed value range (here 0 to 1) is needed as an anchor. The ramp time calculation uses an x^8 term in some way.
    upload_2018-9-11_23-27-49.png

    "sc" is the clock at settable rate "sr".
    The "cheap slew limiter" is a simple ramp.
    The constant 75 seems to be some sort of fudge factor.

    I compared its smoothing curve visually to my attempt, which is based on yours, Colin, and I have to say: it´s quite nice! Soft and fast. Not better or worse than the time-counting approach; both have their pros and cons. So I´d say they are very similar, and to your ears they are mostly equal.

    Unfortunately, as Herwig said, this smoother falls short at higher clock rates, meaning at least everything above 1000 Hz. Talking about base CPU load, not CPU load while smoothing!

    I never thought that building and implementing a relatively simple macro like a smoother could take this much effort (several weeks...!).
    It´s not the smoothing process itself.
    It is the environment you want to build it in!
    In big core cells like Herwig´s modular, the compile time strikes back! Additionally, a smoother may be used massively in a synth with tons of knobs. So we´re searching for the golden mean: compilation time versus CPU load...
    NI´s linear smoother was a great help in getting in the right direction.
     
  20. herw

    herw NI Product Owner

    Messages:
    6,403
    „Guesses” isn't quite right; rather it „calculates”:
    dVal = |$ − z|
    I have another interpretation:
    upload_2018-9-12_13-52-19.png
    Later, in the cheap slew limiter, you add the value |$−z|·k·(75/sr), where k is a correction value (see below). As the difference |$−z| decreases, the added steps get lower and lower, so z increases (or decreases) more and more slowly until the steps fall below 1e-05 and are clipped to 1e-05. It is like charging or discharging a capacitor - a recursion. The steps are multiplied by 75/sr = 75/6000 = 1/80.
    So what is the trick of the calculation with x^8 and the correction value k?
    upload_2018-9-12_14-2-30.png
    Remember that MONARK needs a range of [0|1] to work right. The value z draws nearer and nearer to $, so dVal decreases.
    First it calculates 1−dVal, then (1−dVal)^8, and finally 1−(1−dVal)^8, which it multiplies by dVal = |$−z|. At the beginning of the recursion |$−z| is relatively high, e.g.:
    dVal = |$−z| = 0.75: the multiplication gives 0.75·(1−(1−0.75)^8) ≈ 0.749989
    dVal = |$−z| = 0.5: you get ≈ 0.498047
    dVal = |$−z| = 0.25: you get ≈ 0.224972
    The result is always lower than dVal.

    All these values are multiplied by 1/80 to give the next step width. So the nearer z gets to $, the smaller the steps. The clip at 1e-05 limits how small the steps can get, so at the end of the recursion you get a linear increase (z<$) or decrease (z>$).
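Condensed into one update step, the recursion described above looks like this (a Python reconstruction from this post, not NI's actual MONARK macro; the function name is an assumption):

```python
def monark_style_step(z, target, sr=6000.0):
    """One clock tick of the MONARK-style smoother described above
    (reconstructed from the post, not NI's core macro).
    Assumes the parameter range is [0, 1]."""
    d = abs(target - z)               # dVal = |$ - z|
    if d == 0.0:
        return z
    k = 1.0 - (1.0 - d) ** 8          # correction: ~1 for large d, ~8d near target
    step = d * k * (75.0 / sr)        # capacitor-like approach toward $
    step = max(step, 1e-05)           # clip: end the recursion linearly
    if step >= d:                     # arrived: snap exactly to the target
        return target
    return z + step if target > z else z - step
```

Because the step depends only on the remaining delta, no event timing is needed; the price is the fixed [0|1] range assumption that herw works around with his transform table.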
     

    Attached Files:

    Last edited: Sep 12, 2018