
How to sort and merge several tables into one table? (and Partial Framework)

Discussion in 'Building With Reaktor' started by Quietschboy, Aug 24, 2019.

  1. Quietschboy

    Quietschboy NI Product Owner

    Messages:
    436
The complete task I have to solve is a little bit over my head, so I need your help to find the most efficient solution.
At first I'll simplify my question to the root of the task and get deeper into its complexity later if necessary:

For a polyphonic ACEW debug tool I need to merge a polyphonic table (Core Array) containing the data for pre-selected voices into one monophonic table for GUI read-out.
This polyphonic data is recalled by an iteration. So with each iteration step, one entry of a column of each voice is given to the receiver, voice by voice.

For now, let's have only one entry per column and per voice: a time stamp.
It's just a number. These numbers must be in ascending order in the target table (a Core Array, too).
Unfortunately, they come in an unpredictable order from the source table and must be sorted.

    Example:
    Source table 1 from voice 1:
    2, 4, 5
    Source table 2 from voice 2:
    1, 3, 6

    resulting iteration stream:
    2, 1, 4, 3, 5, 6

    desired order in target table:
    1, 2, 3, 4, 5, 6

There could also be gaps between numbers, and voices could hold the same numbers. The latter case must be merged into one target column, but that is a task which can be discussed later.

The optimal solution would be to process the sorting in "real time", without buffering and re-reading or other delay-based processes.
I think additional lists must be built up containing pointers to the next and previous entries, or something like that.
Unfortunately I have no experience with such things.
Thank you in advance!
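In general-purpose languages this task is the classic k-way merge of per-voice sorted lists. A minimal Python sketch of the principle (the heap and all names are illustrative, nothing here is Reaktor-specific):

```python
import heapq

def merge_voices(voice_tables):
    """k-way merge: repeatedly emit the smallest pending entry among all voices.

    voice_tables: one ascending list of time stamps per voice.
    Runs in O(n log k) for n total entries and k voices, and needs
    no buffering beyond one pending entry per voice.
    """
    heap = []  # (time_stamp, voice_index, position_in_voice_table)
    for v, table in enumerate(voice_tables):
        if table:
            heapq.heappush(heap, (table[0], v, 0))

    merged = []
    while heap:
        stamp, v, i = heapq.heappop(heap)
        merged.append(stamp)
        if i + 1 < len(voice_tables[v]):
            heapq.heappush(heap, (voice_tables[v][i + 1], v, i + 1))
    return merged

# The example from above:
print(merge_voices([[2, 4, 5], [1, 3, 6]]))  # [1, 2, 3, 4, 5, 6]
```

Gaps and duplicate stamps fall out naturally; duplicates simply end up adjacent, ready for a later merge step.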
     
  2. colB

    colB NI Product Owner

    Messages:
    3,014
    Hmm, maybe not enough information.

    I could probably help with the sorting, but I wonder if you can find a solution that doesn't need any sorting?
    It depends on the details of the 'unpredictable order' and how the target table is used.

A few years ago I had a similar problem trying to figure out how to have multiple instances of an instrument that used an event table with multiple clients. The problem was that when a second instance of the instrument was used, it shared the same table data, so the two corrupted each other's data.

    The solution was to use a 2D table and have each instance use a different offset into the table to avoid contention. I was stuck for a while because I was trying to find a way to guarantee the order of initialisation of the different instances so they could be assigned different IDs to be used as Y offset.

    The solution was that I or the wider system didn't need to know about order or what ID a particular instance was using - that was only needed internally... Seemed simple after finding it, but not initially - human instinct was that I needed IDs, so I needed to know what they were or have some guarantees about them... maybe to view them for testing etc.

    Anyway, enough anecdotes... is there a way to have it all work out of order as long as all sections of the code know their own position in the order?

    maybe you could post some simplified example code?
     
    • Like x 1
  3. colB

    colB NI Product Owner

    Messages:
    3,014
Can you use a longer iteration to drive the process, then use Primary-level voice manipulation to interleave the table reads, guaranteeing their order?
     
  4. colB

    colB NI Product Owner

    Messages:
    3,014
    maybe some variation on this idea?
    voice interlieve 01.PNG
    (the upper iterator just fills an array to be read.)
     
  5. colB

    colB NI Product Owner

    Messages:
    3,014
    Ok, here's a version that fills a polyphonic array. voice 1 gets 0,1,2 & 3, voice 2 gets 4,5,6 & 7 etc.
They are interleaved because the code was simpler from a quick get-it-tested point of view.
    voice interlieve 02.PNG
    I'm not sure if this helps, but using an iterator and gating the input to the polyphonic core cell by applying modulo/divide to that iterator makes it easy to guarantee the polyphonic write and read order - at least for this simple example.

    Would need more info to know if it can help with your problem though.
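The div/mod gating described here can be sketched like this (Python; the voice count and table size are assumed example values):

```python
# Assumed sizes for illustration; in Reaktor these come from the patch.
NUM_VOICES, TABLE_SIZE = 2, 4

def interleaved(i):
    """Voice alternates every step (posts #4/#5): v0, v1, v0, v1, ..."""
    return i % NUM_VOICES, i // NUM_VOICES   # (voice, entry)

def sequential(i):
    """One voice's whole table first (post #6), then the next voice."""
    return i // TABLE_SIZE, i % TABLE_SIZE   # (voice, entry)

# One iterator of length NUM_VOICES * TABLE_SIZE drives all reads;
# the (voice, entry) pair gates which voice of the poly cell reacts.
print([sequential(i) for i in range(NUM_VOICES * TABLE_SIZE)])
```

Either mapping makes the polyphonic read/write order deterministic, because the single iterator index decides which voice acts on each step.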
     

    Attached Files:

  6. colB

    colB NI Product Owner

    Messages:
    3,014
Here's the same thing but not interleaved, so the output is sequential.
     

    Attached Files:

  7. Quietschboy

    Quietschboy NI Product Owner

    Messages:
    436
Hehe, I remember that shared event table issue :)
But I don't think it is comparable with my topic.

Yes, maybe. If there were an ordered list (which my first question above asks for ;) ) which points into an unordered, merged table's columns, this could be a solution.
In ACEW 4.1 there are already post-processes which access this "GUI" table. E.g. a second table for the shrunk view is generated on the fly (while new columns come in), and a function named "keep" which stores part of the columns across a reset of the ACEW's view. Some more interactive functions are planned which need to search and select in the table.
E.g. hiding rows (inports). Then the event view has to move up.
Or selecting columns for hiding/deletion or keeping (delete/hide the remaining ones).
There's a lot of table-content-dependent stuff I can think of. All that is far from being worked out, but having that ordered list is a must.

There are more nested conditions for the ordered list, like the time state (simultaneous / asynchronous) which is stored in another row of the polyphonic source table's columns, or, e.g., the asynchronous order between voices :eek: (a solution for that is prepared).
But, hey, this is going much too deep for now. Step by step, so that I can get the principle, too :D

Hmm, I'm not sure about your thoughts. Is it about the order of the returning answers per voice from the polyphonic array? I think and hope this is always ascending from voice 1 to n!?
     
  8. colB

    colB NI Product Owner

    Messages:
    3,014
Maybe I have misunderstood your problem then. I thought you had some issue where you can't guarantee what order you are getting data from individual voices.
    My idea (see above posts) was to generate an iteration tableSize x numVoices and use div/Mod to query the voices individually. But if that's not the problem in the first place then this is no solution...

    So, more detail about the problem please :)
     
  9. colB

    colB NI Product Owner

    Messages:
    3,014
    More detail about this - what is the source table, and why is the order unpredictable?
    is the voice order predictable?
    how are you querying the source table?
     
  10. Quietschboy

    Quietschboy NI Product Owner

    Messages:
    436
You're too fast (as feared :p ).
I'll look into your examples tomorrow. It's late.
A polyphonic Core Array containing prepared {b} messages :p
The order depends on the first two values (time in seconds and time in samples) and a third condition (state) which is bit-multiplexed into the first value (time in seconds), which in turn is demultiplexed in the target receiver... ;)
Don't ask such questions...
Each time stamp is that of an incoming event (or simultaneous events on several inports) at the ACEW's Sensor (Core macro).
As the Array is only filled with a new message per voice if an event arrived, and the read iteration reads all voices "simultaneously", the order of the messages relative to each other, meaning their time stamps, gets messed up in the receiver. Or in other words: the entries are shifted to the left of the table; no spaces are added when an entry occurs in another voice. Like in the example in my very first post.

I really, really, REALLY hope so! 1, 2, 3, ...n

The first iteration event asks for the number of entries in the array (FIFO) per voice and feeds it back to the iterator's inport N. So the iteration stops when all "open" message floats are recalled... per voice. Then each following iteration event queries one float of the {b} message(s), per voice.

Leaving the id and # from the Partial Framework bus aside, the initial example can be expanded like this (this is the order I expect, not tested yet):
- Tsec+State from voice 1, 2, 3, ...n
- Tsamples from voice 1, 2, 3, ...n
- the port flags (OR) from voice 1, 2, 3, ...n
- value 1 from voice 1, 2, 3, ...n
..
- value n from voice 1, 2, 3, ...n

These messages are only sent as long as a voice's array contains data!
The length of each message per voice is variable!
There are also other communication messages with another id and length, but that really doesn't matter for now...

I'll send the latest ACEW version tomorrow. But it does not include any polyphonic development, only the preparations like the iteration and a few other things.
     
    Last edited: Aug 25, 2019
  11. colB

    colB NI Product Owner

    Messages:
    3,014
OK, getting clearer now. It's like a bunch of separate lists, one per voice, each one locally in the correct order, but putting them together is not so trivial, because if they are simply interleaved, the order can get messed up.
It would be simpler to drop from poly to mono before this, but maybe you can't, because some of these events will be simultaneous, which wouldn't work so easily?

    Brainstorming:-------------------

    ...what if an event on any voice creates a 'spacer' dummy-event with the same timestamp on the other voices, that can be discarded after this stage, but enables simpler ordering?

    ...your example from the first post would look like:

    Source table 1 from voice 1:
    *1 ,2,*3, 4, 5,*6
    Source table 2 from voice 2:
    1, *2 ,3,*4,*5, 6
    (where * denotes a dummy event with that timestamp)

    resulting event stream:
*1, 1, 2, *2, *3, 3, 4, *4, 5, *5, *6, 6

    After the *n events are ignored, you end up with
    1, 2, 3, 4, 5, 6
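The spacer idea can be sketched like this (Python; `pad_with_spacers` is a hypothetical helper, not part of any Reaktor patch):

```python
def pad_with_spacers(voices):
    """Give every voice a (possibly dummy) entry for every time stamp.

    After padding, plain voice-by-voice interleaving is already in
    order; dummies (marked True) are discarded at the end.
    """
    all_stamps = sorted({t for v in voices for t in v})
    return [[(t, t not in v) for t in all_stamps] for v in voices]

voices = [[2, 4, 5], [1, 3, 6]]
padded = pad_with_spacers(voices)

# interleave position by position, voice 1 first, then drop dummies
merged = [t for i in range(len(padded[0]))
            for v in padded
            for t, dummy in [v[i]] if not dummy]
print(merged)  # [1, 2, 3, 4, 5, 6]
```

The catch, as noted, is that creating the spacers requires exactly the inter-voice communication that is hard to get in Core.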

    ...Or you could use another buffer. So you do drop from poly to mono earlier, and when there are simultaneous events from different voices on the same tick, you insert them into the buffer in voice order... and at the end of the chain, you have a process that constantly tries to empty the buffer... (I've used this idea for sending Multiple sequential MIDI messages from processes driven by the same audio tick)... this is problematic because it's difficult not to have a fixed hard coded voice limit...

...the obvious option is some kind of sorting system... I suppose you would need to collect the next event from each voice and put them in order.
I think the easiest approach is an insertion sort, not a fancy tree-based one; a simple linear iterative one would work. But you would need a list-type structure so you can insert easily... Or you could use a 'bubble sort' type of thing. That's a bit slower, but would still be pretty fast for dealing with voices. Whichever you pick, it would always be slower than using dummy/shadow spacers in the data (if that's even possible?)
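A linear insertion sort of the kind meant here, as a Python sketch (array-based for brevity; in Reaktor the insert would be a write loop over a Core array or a list structure):

```python
def insert_sorted(table, stamp):
    """Insert one time stamp into an already sorted table.

    Scans linearly from the end (newest entries), which is cheap when
    incoming stamps usually belong near the end, as expected here.
    """
    i = len(table)
    while i > 0 and table[i - 1] > stamp:
        i -= 1
    table.insert(i, stamp)

table = []
for stamp in [2, 1, 4, 3, 5, 6]:   # the iteration stream from post #1
    insert_sorted(table, stamp)
print(table)  # [1, 2, 3, 4, 5, 6]
```

With a flat array the `insert` shifts entries; with a linked list the same scan only relinks two pointers.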
     
  12. herw

    herw NI Product Owner

    Messages:
    6,374
I would use a monophonic array from the start. This means that you have more information about every received event (sec, smp, state, voice, $) or (voice, sec, smp, state, $), but you don't need to manage different tables, and all event messages have the same length, I think.
If you want to filter a special voice for reading, it is possible to get only this information from the whole array.
I would try to use core macros from partials framework/standard_library/core/polyphonic:

PF polyphonic.png

Perhaps recall all voices.mdl (see attachment) helps to write polyphonic data directly into a monophonic core array.

In the example of your first post you get messages like [voice, time stamp]. I use the entry number (from your ACEW table on the GUI) as the first index here. So you would write the message [entry, sec, smp, state, voice, $] into the monophonic array. You don't need an array for every voice.

    With your example the monophonic array would be filled with the messages
    [1, sec, smp, state, 2, $],
    [2, sec, smp, state, 1, $],
    [3, sec, smp, state, 2, $],
    [4, sec, smp, state, 1, $],
    [5, sec, smp, state, 2, $],
    [6, sec, smp, state, 1, $], ...
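Sketched in Python, this single tagged array might look as follows (field names and values are invented for illustration, not real ACEW bus data):

```python
# Each received event becomes one fixed-layout record in a single
# monophonic list, tagged with its voice (the [entry, sec, smp,
# state, voice, $] layout from above, minus the payload).
records = [
    {"entry": 1, "sec": 0.10, "smp": 4410,  "state": 0, "voice": 2},
    {"entry": 2, "sec": 0.20, "smp": 8820,  "state": 0, "voice": 1},
    {"entry": 3, "sec": 0.30, "smp": 13230, "state": 1, "voice": 2},
]

def filter_voice(records, voice):
    """Read back only one voice's events from the shared array."""
    return [r for r in records if r["voice"] == voice]

print([r["entry"] for r in filter_voice(records, 2)])  # [1, 3]
```

The trade-off is that every record carries the voice tag, but no per-voice table management is needed.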
     

    Attached Files:

    Last edited: Aug 25, 2019
  13. Quietschboy

    Quietschboy NI Product Owner

    Messages:
    436
    Exactly!

    Whenever poly-mono merging happens, a sorted list must be made. So it plays no role when or where that happens.

This is definitely not possible, for two reasons:
- no inter-voice communication in Core
- no event-loop capability of Core (in case one thinks inter-voice communication could be done in Primary and then fed back into the Core cell)
To be clear: the information "event happened on voice x" would have to be processed immediately on the other voices.

You mean exporting the bus message for each event immediately when it arrives in the Core sensor macro?
Yes, that would be possible. Then no buffering and iteration for read-out would be required. But then you would need several wires instead of only one from the Core cell to the Primary macro... All event information like time, state, ports and values would have to be processed immediately and thereby given out over separate wires. I absolutely want to avoid this.
But indeed, for the inter-voice order of asynchronous events, this is an approach that has a good chance of working and which is planned:
Each event from the structure which reaches the sensor asynchronously does not happen while the read iteration is running - Primary event paradigm! (I leave thread safety out for now; it looks good, btw.) So these events could send a trigger to the Primary world to build a monophonic ordered list of all asynchronous events on all voices. This list could contain time and voice per entry. Unfortunately this is not possible for init events, as they would leave the Core cell simultaneously on all voices. It would be possible for normal, SR-clocked simultaneous events on different voices, because they leave the Core cell asynchronously when the outport is set to "allow Audio Events". But I deactivated that option because I don't like it when setting ACEW up in a Core cell, because of its costs, and maybe another essential condition I have forgotten already.

As you see, the sorting algo must take care of three conditions:
- time must be sorted
- state is a condition
- the asynchronous event order between voices, which is found in a separate list.

Time and state must be read, and depending on them, the message could be merged into another one in the target table. If state says "asynchronous", the algo needs to look in the "asynchronous order list", too. I haven't worked out that part yet. It seems to be another complex task...


I have to think about your idea to recall the buffer messages in sorted order. This needs a lot more handshake communication between Primary and the Core cell while the iteration is running. All queries/conditions would have to be made in the core sensor, voice by voice, and then be sorted, inserted and ordered in a list again for the correct voice routing for the read iteration. So in the end it doesn't change anything about the need for a sorting/inserting algorithm?

    Yep, something like this :D

The source array can be monophonic or polyphonic, depending on the user's setting for the Core cell the sensor is placed in.
The target array must be monophonic in the end. The way to get there is the topic ;)

Just to clarify:
- the polyphonic core part doesn't need a VoiceID or the voice number for each message being transported via {b} to the Primary part.
Up to ACEW 4.1 the voice is pre-selected by the user. Only events of that voice are let through to the monophonic table.
In future, the user will be able to select a voice per row (meaning ACEW inports) on the GUI. So the messages from different voices must be merged.

Another idea I have in mind is to deliver and store everything from all inports on all voices in a polyphonic table and then, for GUI display, depending on the GUI settings, pick just those entries which fit the needs. Settings could be port visibility (On/Off, Show/Hide), voice number and maybe more.
Unfortunately, delivering everything over {b} could make the CPU explode, I think. And this approach needs sorting again!
Time is over, I'll come back in the evening.
     
  14. colB

    colB NI Product Owner

    Messages:
    3,014
If I'm understanding correctly, the maximum number of events that need sorting is the number of ACEW channels, not the number of voices - is that right?

How far out of sync can two channels get?

I.e. can there be multiple events on channel 1 that arrive before some event on channel 2 that has an earlier timestamp?
Or is the worst case just the order that the channels are read in?

Maybe you should just describe the worst possible case in terms of out-of-order events that might need sorting:
most events out of order, most channels/voices needing sorting, etc.
     
  15. Quietschboy

    Quietschboy NI Product Owner

    Messages:
    436
Not easy to understand the question in the manner you might have meant it... So I'll go deeper:
The first approach, which is prepared in the attached ACEW, should be the one which seems easiest to solve and has the lowest CPU cost.
To ensure this, only data which is of interest to the user should be captured and then transmitted to the Primary receiver. To do so, the first macro in the Core sensor, named "port gates", was pimped. For each test inport there is one individually accessible gate. These exist for every voice, just to remind you. In the end, these gates will represent the button and voice-selector state for each row (or "channel") on the GUI.
That means only channels which are activated (On) on the GUI let events through the appropriate gate in the "port gates" macro, and only voices which are selected on the GUI can have an active inport.
A voice can have several activated channels. A channel can be assigned to only one voice.
All incoming events on all channels and all voices that do not match the appropriate channel settings are discarded and not stored!

To get back to your question: the Core buffer contains messages on at most 12 voices (8 for the 8-port ACEW and 4 for the 4-port ACEW...). I'm a little bothered by the term "event" in the context of "sort". Each incoming test event builds one complete message, or better, "entry", with all information catchable in Core. Simultaneous test events on one voice also build one message only. In principle, each message corresponds to the so-called entry in the GUI display. Additional information is added in the receiving part - the event color, for example, or the voice number and more.
In the end it is entries which must be sorted, not events, and in the case of messages from initialization or core-clock-synchronous test events on different voices, a merging algorithm is needed.
To get back to the question: the maximum number of entries that must be sorted is 1024. That is the maximum number of entries that can be kept by the monophonic table for GUI read-out. But I also want to implement loop recording - that was your wish, Colin ;). Then sorting must happen endlessly.
As buffer read-out happens in chunks of the size (better: fill level) of each voice's buffer, the number of entries that must be sorted at once is not endless, of course.
I think your question aimed at the number of input sources, better "tables", meaning voices, that must be sorted: it is at most 4, 8, or 12, depending on the ACEW variant. If the same voice is selected for all channels, then no sorting is necessary.


I don't get your question, sorry. You are talking about channels, shown as rows on the GUI, not voices? With "arriving events" you mean those delivered by {b}, not test events arriving at the ACEW's inports?
Looking at the sensor, time stamps within one voice are solid. The resulting messages are stored in correct order in the FIFO buffer. This also applies to the order of asynchronously incoming test events (which may have the same time stamp). The inport/channel number plays absolutely no role for that - inside one voice.

Maybe your question indeed concerns voices, when looking at the bus receiving part?:
Only voices that contain data (depending on their gate settings made on the GUI) send it over {b}.
If voice 1 contains entries with time stamps 2 to 100 and voice 2 contains one single entry with time stamp 1, then {b} delivers in this order (simplified, i.e. non-interleaved - counted from 1, not 0, in this case):
    - voice 1, Entry 1, time is 2
    - voice 2, Entry 1, time is 1
    - voice 1, Entry 2, time is 3
    ...
    - voice 1, Entry 99, time is 100

When modifying the example so that voice 2 has one single entry with time stamp 101, it looks like this:
    - voice 1, Entry 1, time is 2
    - voice 2, Entry 1, time is 101
    - voice 1, Entry 2, time is 3
    ...
    - voice 1, Entry 99, time is 100
    ..

As promised, I attach the current ACEW version (4.2.2.15) here. I send only the 12-port Primary variant, which is my master. As this is in principle the same thing as the coreACEW toolkit, you can change the Core cell's type (R5 Legacy, R6) and mode (normal, GUI Core cell) as you wish. Just to mention...

Dear Colin, I am really happy that you are digging into this! Honestly, I hoped you would come in. And I don't want to keep you from asking such detailed questions!!! I'll answer them all, even if I have to explain the whole ACEW in detail. Many, many thanks to you, man! :thumbsup:
My first and main intention with this thread was not to dive that deep into ACEW that early. I do not expect specific solutions in Reaktor for that very specific and complicated task. It is more that I want to understand typical solutions for such "simple" tasks as the one in my very first post before getting deeper. How is this or that done in other languages? Which of these solutions can be translated to Reaktor in terms of efficiency? And then: how can I combine them to fit the special needs?
I need a starting point, just for myself. Trying to find solutions/principles on the net while not knowing what exactly I am searching for is very exhausting to me.
So, as said, I have no problem with going on as before (and nobody else except Herwig follows...), but maybe you can point me to one or two standard procedures that will do the requested task? A term, a page or something?
I need more time, too. Some of your questions reached the boundaries of what I have worked out yet. And I need time to let pictures (diagrams, tables, screenshots, etc.) speak.
     

    Attached Files:

    Last edited: Aug 26, 2019
  16. Quietschboy

    Quietschboy NI Product Owner

    Messages:
    436
Herwig, I reworked the ACEW 2.0 circuit diagram to fit the 4.2 state. It is not complete, but it reflects the event flow up to the GUI table in the Primary part reasonably well.
    ACEW Schaltbild 4.2 - 03b .jpg
     
    Last edited: Aug 27, 2019
    • Like x 2
    • Informative x 1
  17. Quietschboy

    Quietschboy NI Product Owner

    Messages:
    436
I'll go on summarizing and clarifying some details, just so that we talk about the same things ;-)

All of the following relates to the already working ACEW 4.2 approach / intermediate state towards ACEW 5.

Event messages in the FIFO buffer:
- are Partial Framework block messages {b}
- one message is built for one or several simultaneously incoming events
- this message is attached to the end (write pointer)
- depending on the number of simultaneously arriving events, the messages can be of different length!
- if an overflow of the FIFO occurs, this is also sent as an event message (ID = 1, whereas control messages have ID = 2)
- are built separately on each voice where events arrived and passed the gate

    An event message contains these floats:
    ID
    # (number of following floats)
tsec & state (time in seconds and state, bit-muxed)
tsmp (time in samples)
ports (ports where the event(s) arrived, bit-muxed (by Max Zagler))
    val1
    ..
    val n

- an overflow message contains no values, only ID, #, tsec/state, tsmp
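The variable-length message layout above can be illustrated with a small parser sketch (Python; the numeric values are invented examples, not real ACEW data):

```python
def split_messages(stream):
    """Split a flat float stream into messages using the '#' length field.

    Layout per message, as listed above: ID, #, then '#' further floats
    (tsec&state, tsmp, ports, val1..valN; an overflow message carries
    only tsec/state and tsmp).
    """
    messages, i = [], 0
    while i < len(stream):
        msg_id, count = stream[i], int(stream[i + 1])
        messages.append((msg_id, stream[i + 2 : i + 2 + count]))
        i += 2 + count
    return messages

# Two messages of different length in one stream:
stream = [1, 4, 3.25, 143325, 5, 0.5,   # event: tsec&state, tsmp, ports, val1
          1, 2, 3.30, 145530]           # overflow: tsec/state, tsmp only
print(split_messages(stream))
```

The '#' field is what lets the receiver walk a stream of mixed-length messages without any other framing.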

Read iteration:
- starts clocked in intervals, not "event-triggered"
- runs polyphonically and reads all voices "simultaneously" (following the Primary asynchronous paradigm for voices)
- asks for the number of entries in the FIFO first. The answer "N" sets the iterator's N
- after that, the iteration reads the FIFO buffer (if N > 0)
- the iteration macro also sets the router in the Primary part (by "Gate"):
--> no iteration is running, G=0: polyphonic and asynchronous-order events for the future ACEW 5 are awaited (not used for now / I'll comment on this later)
--> iteration starts, first event (value-coded for the sensor), G=1: an "N" event is awaited from Core to set the iteration length
--> iteration, further trigger events, G=2: incoming events are awaited that belong to {b} from the FIFO
- if the Primary GUI table is full before the end of the iteration is reached (the ordinary case), a BREAK signal is sent to the iterator
- on BREAK, an additional fake iteration event (coded by value) is sent to the sensor to close all port gates ("P full" or "Primary table full") and to clear the read pointer and fill state (level) of the FIFO.

It was not possible for me to break the iteration from the core sensor. It would be possible to send such a break trigger from the sensor, of course, but the router would still be set to await a {b} event! The router would have to know that the next event is a break trigger before the break trigger arrives. But it can't, because it is controlled by the iteration macro, which sits behind a router outport and would have to receive the break signal first to set the router. :confused:
So that's why the iteration length must be set beforehand. Nevertheless, maybe there is a way under different circumstances for ACEW 5?
     
    Last edited: Aug 27, 2019
    • Informative x 1
  18. colB

    colB NI Product Owner

    Messages:
    3,014
Maybe my choice of words wasn't great... Rather than rewording the question, I'll explain the reason for it!!!

I wanted to know whether it is possible to guarantee some sort of reasonable upper limit on the number of items being sorted.

If the maximum is 8 or 24 or something, then you could use an unrolled loop, and then you don't have to worry about iterators and managing their synchronisation on top of everything else.
If the maximum is 400 or 1024 or some large number, then you would be forced to use an iterator...
If it were me, in the context of ACEW, I would be trying to find a solution that sorted the data at a point where I could minimise the number of items to sort, so I could use an unrolled loop.

Now that you have posted your nice diagram, it seems that the limit would be the maximum number of voices. That makes it difficult. Although I know you wouldn't be asking if it wasn't difficult :)

    --------------------------------------------------------------


It seems that, as you suggested in your initial post, you need a linked list at the output. Then, as each event comes out of the pipe, you iterate over the rest of the data until you find an entry that has an older timestamp (hopefully mostly the first one you check, or one of the first few), then insert the new entry there, and if you've already drawn the newer ones, redraw them along with the new entry... (Or just redraw every display clock as long as any new data has arrived in that time - might be a better option, and definitely easier.)

If you want to go this way, the data structure isn't too hard; I already have some code for a linked list that works well in the situation where you have a fixed number of entries, e.g. 1024 voices. (Am I right that that is the voice limit? Maybe it's changed?) And you can use the Partials Framework iteration library, or just a plain iterator, to drive it.

If this simple approach uses too much CPU, it might be possible to streamline it by keeping track of the position of the latest entry per voice; then each voice starts its search there rather than at the end of the list, and searches in the other direction. I'm not sure it makes much difference though, and it is more complex...
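A fixed-capacity linked list of this kind maps onto flat arrays, which is also how it would look in Core arrays. A Python sketch (capacity and names are illustrative; deletion is not handled):

```python
# Fixed-capacity linked list backed by flat arrays.
CAP = 8
stamp = [0.0] * CAP   # payload: time stamps
nxt   = [-1]  * CAP   # index of the next (newer) entry, -1 = end
head  = -1            # index of the oldest entry, -1 = empty
free  = 0             # next unused slot (no deletion in this sketch)

def insert(t):
    """Insert time stamp t, keeping the list in ascending order.

    Walks from the head until the successor is newer than t, then
    relinks two pointers - no data is moved, unlike an array insert.
    """
    global head, free
    slot = free
    free += 1
    stamp[slot] = t
    if head == -1 or stamp[head] > t:      # new oldest entry
        nxt[slot], head = head, slot
        return
    cur = head
    while nxt[cur] != -1 and stamp[nxt[cur]] <= t:
        cur = nxt[cur]
    nxt[slot], nxt[cur] = nxt[cur], slot

for t in [2, 1, 4, 3, 5, 6]:   # the iteration stream from post #1
    insert(t)

# read back in order by following the next pointers
out, cur = [], head
while cur != -1:
    out.append(stamp[cur]); cur = nxt[cur]
print(out)  # [1, 2, 3, 4, 5, 6]
```

Because slots are never moved, the per-voice "latest entry" optimisation mentioned above is just one extra index array remembering where each voice last inserted.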
     
  19. herw

    herw NI Product Owner

    Messages:
    6,374
Is this the point of your question?

    point of question.png
     
  20. Quietschboy

    Quietschboy NI Product Owner

    Messages:
    436
Ah, no, it's mainly the Voice Merge. Maybe the iteration and the message splitter must be changed, too.

    ACEW Schaltbild 5.0 - 01 _cr.jpg
     
    Last edited: Aug 28, 2019