
Core : clock optimization

Discussion in 'Building With Reaktor' started by Tiramitsu_, Sep 17, 2019.

  1. Tiramitsu_

    Tiramitsu_ NI Product Owner

    Messages:
    28
    In this context, I gain 1% when the router blocks the clock, but if it lets it pass, I get 0.5% more than with no routing at all.

    Kind of hard to deal with this tradeoff o_O
    Can somebody explain to me what happens here? I doubt a single routing can take that much computation. Or does it come from the bundle?

    Thanks
     


  2. colB

    colB NI Product Owner

    Messages:
    3,014
    How are you measuring the 1% and the 0.5%?

    Is it in a realistic deployment setting?
    If so, how can you be so accurate?
    I find that I'm lucky to be able to really tell with a +/- 2% accuracy due to readings moving around so much - and even variation from one test to the next.
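    The run-to-run noise colB describes is easy to demonstrate outside Reaktor. Here is a minimal Python sketch (the workload and the numbers are made up for illustration, not Reaktor code) that times the same function repeatedly and reports the spread; differences smaller than that spread can't be trusted:

```python
# Hypothetical sketch: time an identical workload many times and
# measure how much the readings move around between runs.
import time
import statistics

def workload():
    # stand-in for one block of DSP-style processing
    acc = 0.0
    for i in range(200_000):
        acc += (i % 7) * 0.001
    return acc

samples = []
for _ in range(20):
    t0 = time.perf_counter()
    workload()
    samples.append(time.perf_counter() - t0)

mean = statistics.mean(samples)
spread_pct = 100 * (max(samples) - min(samples)) / mean
print(f"mean {mean * 1000:.2f} ms, spread {spread_pct:.1f}% of mean")
```

    On a typical desktop the spread is easily a few percent of the mean, which is exactly why a 0.5% difference on a meter is hard to take at face value.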
     
    • Like x 1
  3. Tiramitsu_

    Tiramitsu_ NI Product Owner

    Messages:
    28
    Er... I just trusted the Reaktor readout :p
    I tested that thing on other projects and the bad effect is sometimes not noticeable.
    I was trying to save resources by bypassing unused modules, and was surprised in the moment...
     
  4. colB

    colB NI Product Owner

    Messages:
    3,014
    In a real world situation, the 'bad effect' might not exist at all.
    Mini examples can't tell you much really about performance, because there are so many things that can have an effect.
    Compiler optimisations, hardware cache algorithms, Operating system...
    Unless you are getting a massive cpu hit in a real world context, it's probably best to ignore it.

    I don't think the Reaktor readout is faulty, it just doesn't tell you anything useful if you're not in a realistic context.

    e.g. if you are building a Block, set it up in an ensemble with a bunch of other Blocks, and do tests with the whole Block running, not just part of it, then make changes and check how they affect the cpu meter.
     
    • Like x 2
  5. Catman Dude

    Catman Dude NI Product Owner

    Messages:
    361
    Yes, those busy-meter readings just give you a ballpark figure which, compared against other instruments and ensembles, provides a loose and general sense of the cpu hit of a new project.
     
    • Like x 1
  6. Tiramitsu_

    Tiramitsu_ NI Product Owner

    Messages:
    28
    Thank you for the clarification. I have a little synth project to fit my workflow, and sure, I can delay optimizations until all the features are implemented. A little aside from the subject: I wonder how an ensemble like Photone with 8 voices only hits 8% on my cpu, that's crazy!
     
  7. colB

    colB NI Product Owner

    Messages:
    3,014
    That's not delaying optimisations. You should only be optimising when you can demonstrate the need (so ideally not at all), and you can't demonstrate the need without real-world testing.
    I'm working on a large project at the moment. It's getting there, but not all desired features are implemented.
    As it progresses, it's taking more cpu and longer to compile. Unless the cpu usage or compile times become unacceptable, I will keep going without optimisation, because when it's feature complete, I will need to decide which of those is worse - knowing that some cpu optimisations might make compile times worse, and some compile-time fixes might make cpu usage worse... or not... the main thing is to get there first, then decide what the priority is, and work on that... otherwise I would have wasted a LOT of time early in the project on stuff that might change or be thrown out completely...

    e.g. right now, I'm completely re-writing some major functionality because after usability testing (playing with the thing, making music) I have learned that I'm not happy with part of it, and it's now clear how it could be improved. I'm really glad I didn't waste hours optimising the part that is now in the bin!

    It may turn out that compile time becomes such an issue that I have to ditch some features completely - it would be terrible to have wasted time performance testing and optimising those sections earlier on in the development!

    ---------------------------------------------------------------------
    Of course there are some exceptions to this general rule

    *If you are working on a library with code that is intended for reuse in many different contexts, then it should be optimised. This is very unusual in Reaktor, because the lack of functions, object-oriented mechanisms, etc. means that libraries aren't as powerful or useful a concept as they are in more traditional, fully featured languages.
    *If you are working on code that you _know_ will be crucial and time critical, and you know more than one way to implement it, and you know which way is most efficient (you must _know_ and not be guessing), then obviously you would use the more efficient code. I wouldn't call this optimisation though - more like good programming :))
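    The "know, don't guess" point can be made concrete with a quick measurement. This is a hypothetical Python sketch (the two functions are illustrative stand-ins, not Reaktor code): first verify both implementations agree, then time them, so the choice between them rests on data rather than intuition:

```python
# Hypothetical sketch: two implementations of the same computation,
# checked for agreement and then timed against each other.
import timeit

def sum_loop(n):
    # explicit loop version
    total = 0
    for i in range(n):
        total += i
    return total

def sum_builtin(n):
    # built-in version of the same computation
    return sum(range(n))

# both ways must produce the same answer before comparing speed
assert sum_loop(10_000) == sum_builtin(10_000)

t_loop = timeit.timeit(lambda: sum_loop(10_000), number=200)
t_builtin = timeit.timeit(lambda: sum_builtin(10_000), number=200)
print(f"loop: {t_loop:.4f}s  builtin: {t_builtin:.4f}s")
```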

    ========================================================
    A true story:

    Once upon a time (about 15 years ago, give or take), I was doing some Gameboy Advance development. We had decided not to use the official Nintendo toolkit because of the cost, so were using 'community' dev tools. For the most part these were acceptable (after I had located and fixed a bug in the interrupt handler to allow multiple interrupts :))... Anyway, the project was progressing, and starting to look good, but there was a problem. Every now and then, the screen would stutter and glitch. It was fairly random and unpredictable, but it was consistently happening.
    It took ages, but I tracked it down to the compiler chain's library memory allocator. It was 'optimised' so that it just handed out memory from its heap until that was exhausted, then all at once it would process all the different chunks of used memory that had been handed back, and start again. This garbage processing stage was too cpu intensive for the poor wee gameboy (16.8MHz cpu), and resulted in those game-play glitches.
    So I wrote a custom memory allocator that immediately processed the used memory as it was released and the problem was gone.
    My memory allocator was not 'optimised', and the library one was, but the algorithmic optimisations they had used to save average cpu usage had made it less general - no good for close-to-real-time coding on a low-end system.
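    The allocator idea can be sketched in a few lines. This is a hypothetical Python illustration (not the original GBA C code): a fixed pool whose free() recycles a block immediately, so the cost is a small constant per call and there is never a deferred batch-reclamation pass to cause a spike:

```python
# Hypothetical sketch: a fixed-size block pool that pays the cost of
# reclamation immediately on free(), instead of batching it up.
class BlockPool:
    def __init__(self, block_count):
        # every block starts out on the free list
        self.blocks = [bytearray(64) for _ in range(block_count)]
        self.free_list = list(range(block_count))

    def alloc(self):
        if not self.free_list:
            return None              # pool exhausted
        return self.free_list.pop()  # O(1) pop, constant cost

    def free(self, handle):
        # block becomes reusable right away; no deferred pass later
        self.free_list.append(handle)

pool = BlockPool(4)
a = pool.alloc()
b = pool.alloc()
pool.free(a)
c = pool.alloc()  # reuses the block that was just freed
```

    The trade is exactly the one in the story: slightly more work on every free() in exchange for never stalling the frame with a big cleanup.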

    Just an example of how optimisation can cause unexpected negative consequences if you don't do it in the context of your near feature complete application running in a close to realistic scenario.

    Reaktor is of course a different thing from GBA, but the same general rules apply.
     
    Last edited: Sep 18, 2019
    • Like x 2
  8. Tiramitsu_

    Tiramitsu_ NI Product Owner

    Messages:
    28
    The malediction is that when you can make your own tool like a synth, the process is never-ending, as you always have new ideas to incorporate! And finally, I lose my mind in edit mode instead of just making music in a constrained environment... :rolleyes:

    If you are working on code that you _know_ will be crucial and time critical, and you know more than one way to implement it, and you know which way is most efficient (you must _know_ and not be guessing), then obviously you would use the more efficient code.


    I'm kind of a maniac, and even when I code, I see source code as poetry: each line, like a verse, must be perfect! Therefore, a big project (like a video game! :D ) often becomes a nightmare.

    Thanks Colin, I'll take your words as a credo!
     
    Last edited: Sep 20, 2019