
only a joke or more? fromvoice in core

Discussion in 'Building With Reaktor' started by herw, May 8, 2009.

Thread Status:
Not open for further replies.
  1. herw

    herw NI Product Owner

    Messages:
    6,421
    I have made an interesting experiment: I am able to isolate voices in event core and audio core cells.

    As a joke example I have isolated an LFO (from voice 1), a square oscillator (from voice 2) multiplied with voice 2 of an envelope, and an envelope (from voice 3), and added them, so that one polyphonic audio signal carries three monophonic audio signals.

    Only a joke (?) but possible. To Voice is also possible if you accept a one-sample offset and manage it yourself.

    An interesting application would be to use a saw at voice 1, a square at voice 2, a tri at voice 3 ... to get different timbres on the individual voices.
    BTW: (1,2,3,4) is a polyphonic (!) constant. I don't use any From Voice/To Voice at primary level.
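
    Not Reaktor code, but a rough Python sketch of what this per-voice routing does, treating a polyphonic signal as one value per voice; all function names and sample values are only illustrative:

        def isolate(poly_signal, voice):
            """Keep only the given voice of a poly signal; the other voices become 0."""
            return [s if v == voice else 0.0
                    for v, s in enumerate(poly_signal, start=1)]

        def combine(*poly_signals):
            """Add poly signals voice by voice into one poly signal."""
            return [sum(vals) for vals in zip(*poly_signals)]

        # one hypothetical sample of three poly sources (one value per voice)
        lfo      = [0.5, 0.1, -0.2]
        square   = [1.0, -1.0, 1.0]
        envelope = [0.9, 0.7, 0.3]

        out = combine(
            isolate(lfo, 1),                                        # LFO from voice 1
            isolate([s * e for s, e in zip(square, envelope)], 2),  # square * envelope, voice 2
            isolate(envelope, 3),                                   # envelope from voice 3
        )
        print(out)  # [0.5, -0.7, 0.3] - three monophonic signals in one poly signal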

    ciao herw
     

    Attached Files:

  2. Horuschild

    Horuschild NI Product Owner

    Messages:
    1,635
    WOW! Thanks, herw. It's not a joke for me. I will have to try this out on something I was struggling with some time back.
     
  3. Chet Singer

    Chet Singer NI Product Owner

    Messages:
    822
    herw, this is a *really* useful technique to know. Thanks much!
     
  4. Aleksandr Smirnov

    Aleksandr Smirnov NI Product Owner

    Messages:
    1,539
    Sounds really interesting; I would like to try it out. What do you have for the (1,2,3,4) inputs - just a constant or some knob? Also, how many voices were in the instrument? I don't clearly understand this, as you don't separate voices, just conditions through which your audio elements go.
     
  5. Chet Singer

    Chet Singer NI Product Owner

    Messages:
    822
    He's driving it with the V output of the Voice Info module, I believe.
     
  6. BertAnt

    BertAnt NI Product Owner

    Messages:
    414
    Nice idea! Only Voice Info can't be made in core, I guess. This reminds me of a mixing console; we can make a 'submix' of a poly signal too, before adding the voices together..
    ---
    The great part of this approach is that To Voice can accept a poly signal, which is impossible with the primary one. Cool!
     
  7. Aleksandr Smirnov

    Aleksandr Smirnov NI Product Owner

    Messages:
    1,539
    That's really interesting then!
     
  8. BertAnt

    BertAnt NI Product Owner

    Messages:
    414
    Voice Router - I think that's the appropriate name for this function :)
     
  9. Aleksandr Smirnov

    Aleksandr Smirnov NI Product Owner

    Messages:
    1,539
    Great! Check out this short ensemble - it really works.

    Inf - polyphonic event input that selects the voice
    V - the voice number
    In - the signal passes here when Inf = V
    Alt - the signal passes here when Inf is not equal to V

    :)

    Don't know if I got it right, though, but for me the sine goes through only if it is the chosen voice - so it works. :cool:
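
    A small Python sketch of what this router does per voice (the names mirror the ports above; the signal values are just examples):

        def voice_router(signal, inf, v):
            """Split a poly signal per voice into (In, Alt) outputs."""
            in_out  = [s if sel == v else 0.0 for s, sel in zip(signal, inf)]
            alt_out = [s if sel != v else 0.0 for s, sel in zip(signal, inf)]
            return in_out, alt_out

        sine = [0.3, -0.7, 0.9]   # one sample of a poly sine, one value per voice
        inf  = [1, 2, 3]          # the polyphonic constant (1,2,3)
        in_out, alt_out = voice_router(sine, inf, v=2)
        print(in_out)   # [0.0, -0.7, 0.0] - the sine passes only on the chosen voice
        print(alt_out)  # [0.3, 0.0, 0.9]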
     

    Attached Files:

  10. cookiemonster

    cookiemonster NI Product Owner

    Messages:
    151
    This sincerely is not a joke.
    It's the way it's been done for years now.
    It's astonishing again and again how even highly experienced builders are sometimes led past pretty basic things by their main field of work.
    There's a lot to discover concerning voice work in Core.
    Things that work quite well, and others that have driven me mad.
    Good luck, Gerald.
     
  11. herw

    herw NI Product Owner

    Messages:
    6,421
    No, you are wrong: I didn't use it, but for this example it would be possible to use the Voice Info module too.
    I am driving it with polyphonic constants, and as cookiemonster says, these possibilities have been sleeping in R5 (I don't know anyone who has used this technique of polyphonic compare-router structures, but perhaps that was my failure).
    The trick is to define constants with different values per voice.
    You only need a preparation ensemble to define these constants.
    Normal constants have only one value, which means all voices carry the same value. For instance, for this example I have defined the constant (1,2,3,4), which has the value 1 on voice 1, the value 2 on voice 2, etc. With the Voice Info module you get such a polyphonic constant ready-made, but perhaps you need other polyphonic constants.
    You can define them simply by using the To Voice module in that preparation ensemble. Then I store this polyphonic constant in a snap value module (!), set its properties to isolated, and keep only the snap module (or copy it). This snap value module is my polyphonic constant.
    Now I can use it everywhere.
    The second trick is the routing, because I can compare a normal constant (1)=(1,1,1,1) with other polyphonic constants, e.g. (1,0,1,1), and get an individual routing for every voice.
    In my example I have defined the constant (1,2,3,4), but I used it in a three-voice instrument - no problem. I compare (1,2,3,4) with (1,1,1) (only a three-voice instrument) and get the logical result (true, false, false). As I don't use the false routes, REAKTOR sets those voices to 0.
    You can use the constant in ensembles with more than four voices too; R5 adds a 0 for the extra voices.
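
    As a Python sketch of this compare trick (only a model of the behaviour described here, with made-up names and example values):

        def poly_constant(values, num_voices):
            """Fit a stored constant to the instrument's voice count (pad or truncate with 0)."""
            return (list(values) + [0] * num_voices)[:num_voices]

        def route_by_compare(signal, const_a, const_b):
            """Per voice: pass the signal where const_a equals const_b, otherwise 0."""
            return [s if a == b else 0.0 for s, a, b in zip(signal, const_a, const_b)]

        num_voices = 3
        c1234 = poly_constant([1, 2, 3, 4], num_voices)   # -> [1, 2, 3]
        ones  = poly_constant([1, 1, 1, 1], num_voices)   # -> [1, 1, 1]

        signal = [0.5, -0.2, 0.8]                          # one sample per voice
        print(route_by_compare(signal, c1234, ones))       # [0.5, 0.0, 0.0] - (true, false, false)
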
    The ensemble I uploaded here shows a comparison of this technique. At the top I use a polyphonic constant and manage the voice routing inside the core cell. At the bottom I use the traditional technique with a polyphonic signal (yellow box).
    Outside, at primary level, I added From Voice modules to show the result coming from the core cell.
    The CPU usage is a little higher than with traditional routing (set to 100 voices) - yes, because the voice routing is solved very well in REAKTOR itself.
    The interesting point could be to create applications which are not so simple and which have no traditional signal path. I think my example is only of theoretical interest, for discovering some sleeping possibilities.

    There was another starting point two or three years ago, I think, with stopping the SR clock for unused voices to reduce CPU usage.

    ciao herw

    BTW: to return to the start of my post, I think Chet's idea of using the Voice Info module is a good one. From this module you can create other polyphonic constants using routers and latch modules.
    Perhaps someone will now make use of the interesting primary module Voice Shift?
     

    Attached Files:

    Last edited: May 11, 2009
  12. BertAnt

    BertAnt NI Product Owner

    Messages:
    414
    Thanks for sharing this great technique! So I guess it doesn't matter how many constants are defined in the poly snap value, because only the constants within the voice range of the instrument are available - say I define the constants 1-100 in the snap value inside a 100-voice instrument, it still gives only the constants 1-4 in a 4-voice instrument?
    ---
    And about the SR clock: isn't it automatically stopped by the routers for false compare results?
    ---
    Another interesting application: selecting Min and Max of an audio poly signal (using the 'unique' behaviour of the core Merge module), which in primary is only available for events.. :)
     
  13. herw

    herw NI Product Owner

    Messages:
    6,421
    yes
    yes

    ciao herw
     
  14. herw

    herw NI Product Owner

    Messages:
    6,421
    Here is an example of how to realize a core/primary solution for TO VOICE.
    The LFO signal is routed to voice 1 and then to output v1 at primary level. The Audio Voice Combiner changes the poly signal (lfo, 0, 0) into a monophonic audio signal (lfo), which is led back into the core cell (input unisono). The router routes the monophonic signal to voice 3. All voices are added (out) to show the effect.
    There is an automatic unit delay outside the core cell, caused by the voice combiner.

    It is not possible to realize the same thing only inside the core cell :(
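
    A rough Python model of this detour (not Reaktor code; the names and the three-sample loop are only for illustration), showing how the signal re-enters on voice 3 one sample late:

        def route_to_voice(mono_value, target, num_voices):
            """Place a mono value onto one voice of a poly signal."""
            return [mono_value if v == target else 0.0
                    for v in range(1, num_voices + 1)]

        def voice_combine(poly_signal):
            """Sum all voices into a mono signal (the primary Audio Voice Combiner)."""
            return sum(poly_signal)

        NUM_VOICES  = 3
        lfo_samples = [0.1, 0.4, 0.7]   # a few mono LFO samples
        delayed     = 0.0               # the detour costs one sample of delay

        for lfo in lfo_samples:
            poly_v1 = route_to_voice(lfo, 1, NUM_VOICES)        # (lfo, 0, 0) inside the core cell
            mono    = voice_combine(poly_v1)                    # mono again at primary level
            poly_v3 = route_to_voice(delayed, 3, NUM_VOICES)    # back in on voice 3, one sample late
            print(poly_v1, poly_v3)
            delayed = mono                                      # the unit delay outside the core cell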

    The second ensemble shows an example with the Voice Shift module. The shift from voice 1 to voice 3 (a shift of 2) is calculated inside the core cell (here with constants only). So you can realize dynamic voice changing if you want.

    A newer version of REAKTOR should solve this inside core.

    ciao herw
     

    Attached Files:

  15. BertAnt

    BertAnt NI Product Owner

    Messages:
    414
    I don't understand.. you could simply route the LFO to voice 3 inside the core cell..
     
  16. herw

    herw NI Product Owner

    Messages:
    6,421
    yes - it is only a veeeery simple example to show that this is possible.

    The point is that you can manipulate several monophonic signal paths. I am interested in REAKTOR theory.
    Whether these possibilities are useful, I am not able to show quickly.

    One application could be self-FM modulation of an oscillator:
    voice 1 is, for instance, a sine wave, and voice 2 too, with another frequency. Now send voice 2 out and back in again, changed to voice 1. Then you can multiply both signals. I didn't try it out, but it should work!

    Or you can hard-sync voice 1 by voice 2 with only one oscillator.

    Perhaps some audio experts can help me and list some applications?

    ciao herw
     
  17. herw

    herw NI Product Owner

    Messages:
    6,421
    Here is a simple example:
    a polyphonic LFO (voice 1: f = 0.4 Hz, voice 2: f = 0.6 Hz) is frequency-modulated by itself.
    I shifted voice 1 and voice 2 to voice 3 and multiplied them (voice 3 = voice 1 * voice 2).
    So I need only one LFO to get FM modulation.
    The polyphonic LFO modulates a tri oscillator.
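
    A small Python sketch of the per-voice product step only (not the full FM patch; the sample rate is an assumption, the frequencies are taken from the description above):

        import math

        SR    = 44100                  # assumed sample rate
        freqs = [0.4, 0.6, 0.0]        # LFO frequency per voice; voice 3 will hold the product

        def poly_lfo(sample_index):
            """One sample of the poly LFO, with voice 3 = voice 1 * voice 2."""
            t = sample_index / SR
            voices = [math.sin(2 * math.pi * f * t) for f in freqs]
            voices[2] = voices[0] * voices[1]   # the 'shift to voice 3 and multiply' step
            return voices

        print(poly_lfo(0))
        print(poly_lfo(11025))   # a quarter of a second later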

    ciao herw
     

    Attached Files:

    Last edited: May 11, 2009
  18. BertAnt

    BertAnt NI Product Owner

    Messages:
    414
    Seems like the primary Voice Shift module made things simpler in the end. Thanks for the idea of using a poly snap value as a constant source; I find it very useful for quickly creating shift values for the Voice Shift module.
     
  19. herw

    herw NI Product Owner

    Messages:
    6,421
    The second method, with Voice Info, is easier.
    As I mentioned before, it is only of theoretical interest, because the voice management of primary is more CPU-efficient.

    ciao herw
     
  20. BertAnt

    BertAnt NI Product Owner

    Messages:
    414
    I think with Voice Info I still have to replace each of the values, because I'm going to connect it to the 'Shft' input of the Voice Shift module. With the prepared snap value I can make different, not necessarily ordered, shift values for each voice, just by changing the poly snap values within the preparation instrument.
     