
challenge for math/programming literati

Discussion in 'REAKTOR' started by ashwaganda, Mar 4, 2005.

Thread Status:
Not open for further replies.
  1. ashwaganda

    ashwaganda Forum Member

    Messages:
    2,191
    humble composer that i am, i bow to the genius of the math/programming literati out there ...

    <bowing>

    i'm looking for a generator (macro) that will spit out an endless sequence of integers between 0 and 999999.

    so far, so easy. now here's the challenge:

    among the macro's controls is a rand knob. when rand=100 the generator spits out a 100% random sequence of integers. when rand=0, the generator outputs a 100% non-random sequence. between 0 and 100, the output moves gradually from less to more random.

    what are "non-random" values? here's where it gets cool/sexy:

    they are patterns or series that exhibit order and intentionality and beauty and intelligence. ;-)

    for example:

    - fibonacci series
    - logarithmic series
    - prime numbers
    - fractal series/patterns
    - repetitive patterns of all ilk
    - inverse/retrograde variants

    and so on.

    think of it this way ... if a musical SETI program were to analyze the numbers being output, when rand=100 (full randomness), it would detect 0% patterning/intentionality, and when rand=0 it would detect 100% patterning/intentionality.

    the creative part of this challenge is to create beautiful, richly varied, and expressive intentionality (patterns, series) for the non-random component ... and to find a way to move gradually between 0% and 100% randomness.

    if you haven't already guessed, i plan to use this generator to randomize ensembles in all possible ways: patch-creation, automation, minute analog-like fluctuations (jitter) in pitch/timbre, etc.

    thanks in advance, mathematicos. (and don't forget to watch the new episode of NUMB3RS tonight!)

    rick
     
  2. apalomba

    apalomba NI Product Owner

    Messages:
    267
    This would be a very cool macro. I have a suggestion for a simple solution. You essentially have two sets of data points: on the left the ordered/intentional series, on the right the random series. Each time you need a new data point, you grab the current point from each set and linearly interpolate between them. The amount of interpolation would be controlled by a knob.
     
  3. hirnlego

    hirnlego Forum Member

    Messages:
    82
    I think the problem is how to mix the two sources. Adding values? Averaging? Alternating them? Using which logic?
    You can simply put a crossfade with linear interpolation... I assume that with a value of 0.1 in the inputX, the output mix is 90% input0 and 10% input1. Am I wrong?

    b.
     
  4. apalomba

    apalomba NI Product Owner

    Messages:
    267
    To do simple interpolation you would take the two values, calculate their offset relative to each other, and multiply by the percentage.

    x = ordered series
    y = random series
    d = percentage
    x+((x-y)*d)
     
  5. stereomax

    stereomax NI Product Owner

    Messages:
    170
    Three questions (to understand things better):

    1. Should the output of the macro at 0% randomness be the same every time, or should it produce a randomly chosen "meaningful" sequence? (I'm afraid you want the latter... ;-)

    2. Do you think this "interpolation" proposal would work? I'm afraid the result of interpolation would "look" random even at very low levels of randomness. The "alternating" proposal seems more logical to me: at a setting of 50% randomness, the sequence would have 50% "meaningful" data points (i.e. "meaningful" sub-sequences of variable length interrupted by "true random" sub-sequences, so that statistically half of the total sequence consists of "meaningful" parts).

    3. What's your concept of "randomness"? As far as I know it's really difficult to generate "true random" sequences. I don't know how Reaktor handles this. Isn't the "random" sequence Reaktor generates already something that SETI would detect as "intelligent"?
     
  6. jh019i

    jh019i Forum Member

    Messages:
    24
    "Do you think that this "interpolation" proposal would work? I'm afraid the result of interpolation would "look" random at very low levels of randomness."

    I think this is right on. Anytime we add any random value to the ordered sequence, aren't we essentially creating a random sequence? I mean, because you are adding the ordered series to the random series, is 100% random actually more random than 10%? Are there actual degrees of randomness?

    Or am I misunderstanding this?

    The suggestion of an alternating series makes more sense I think.
     
  7. ashwaganda

    ashwaganda Forum Member

    Messages:
    2,191
    > This would be a very cool macro. I have a suggestion for a simple solution. You essentially have two sets of data points: on the left the ordered/intentional series, on the right the random series. Each time you need a new data point, you grab the current point from each set and linearly interpolate between them. The amount of interpolation would be controlled by a knob.

    could work. but i wonder how the interpolation would affect the "intelligent" stream? if a fractal sequence is averaged with a random sequence, does the semi-random-fractal hybrid bear enough resemblance to the fractal version to preserve enough of its beauty and intelligence?

    > I think the problem is how to mix the two sources. Adding values? Averaging? Alternating them? Using which logic? You can simply put a crossfade with linear interpolation... I assume that with a value of 0.1 in the inputX, the output mix is 90% input0 and 10% input1. Am I wrong?

    no, that sounds right. the trick is: what does 10% mean in these terms?

    > To do simple interpolation you would take the two values and calculate the relative offset to each other and multiply by the percentage.

    > x = ordered series
    > y = random series
    > d = percentage
    > x + ((x-y) * d)

    wouldn't the formula be:

    x + ((y-x) * d)

    ?
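    for what it's worth, the corrected formula really is a straight crossfade, which a couple of lines of Python (not Reaktor, just to check the arithmetic) confirm: x + (y - x) * d hits x at d = 0 and y at d = 1, whereas x + (x - y) * d moves away from y.

```python
def lerp(x, y, d):
    """Blend ordered value x toward random value y by fraction d (0..1).
    d = 0 returns x (pure order), d = 1 returns y (pure randomness)."""
    return x + (y - x) * d

print(lerp(10, 500, 0.0))  # 10.0  (pure ordered series)
print(lerp(10, 500, 1.0))  # 500.0 (pure random series)
print(lerp(10, 500, 0.5))  # 255.0 (halfway)
```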

    > Three questions (to understand things better):

    > 1. Should the output of the macro at 0% randomness be the same every time, or should it produce a randomly chosen "meaningful" sequence? (I'm afraid you want the latter... ;-)

    of course i want the latter! ;-) what beauty/mystery is there in the former?

    > 2. Do you think this "interpolation" proposal would work? I'm afraid the result of interpolation would "look" random even at very low levels of randomness.

    that's what i was saying above. i'm with you ...

    > The "alternating" proposal seems more logical to me: to have a sequence that - at a setting of 50% randomness - have 50% "meaningful" data points (i.e. "meaningful" sequences of variable length interrupted by "true random" sequences so that statistically half of the total sequence consists of "meaningful" sequence parts).

    this is how i'm thinking also ... but i must admit to having a naive take on mathematics. that's why i'm wondering if the interpolation would, in fact, "work" (whatever that means).

    > 3. What's your concept of "randomness"? As far as I know it's really difficult to generate "true random" sequences. I don't know how Reaktor handles this. Isn't the "random" sequence Reaktor generates already something that SETI would detect as "intelligent"?

    again, i'm coming at this as a composer, not a mathematician. my ideal of randomness is a stream of numbers that, upon analysis by our musical SETI machine, would be found to contain 100% randomness: no nontrivial patterns, repetitions, intentionality, etc.

    as to whether reaktor's random generators produce nontrivial patterns/repetitions... i have no idea. perhaps someone else does?

    rick
     
  8. apalomba

    apalomba NI Product Owner

    Messages:
    267
    Ideally, if you wanted to alternate between sequences, you would want some probability macro controlling a switch between the two streams. But this macro would need per-event resolution, so that any time you read an event it would stochastically decide which stream to use. Does Reaktor give you per-event resolution?
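    sketched outside Reaktor, the per-event switch might look like this (the two generators below are just stand-ins for whatever ordered and random sources you actually use):

```python
import random

def stochastic_switch(ordered, rand_stream, p_random, n):
    """Per event, take the next value from the random stream with
    probability p_random, otherwise from the ordered stream."""
    out = []
    for _ in range(n):
        source = rand_stream if random.random() < p_random else ordered
        out.append(next(source))
    return out

def fib():
    """Stand-in ordered source: the Fibonacci series."""
    a, b = 0, 1
    while True:
        yield a
        a, b = b, a + b

def uniform():
    """Stand-in random source: uniform integers in 0..999999."""
    while True:
        yield random.randint(0, 999999)

seq = stochastic_switch(fib(), uniform(), p_random=0.5, n=16)
```

    at p_random = 0 this emits the pure ordered series; at 1 it is pure noise; in between, "meaningful" runs are interrupted by random values, as stereomax described.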
     
  9. EMISnode

    EMISnode Forum Member

    Messages:
    235
    You know what's fun? Raising the rate of a probability generator to audio rate, and outputting the values as audio. It's really cool to hear how the noise changes tonality with parameter changes.

    Yes, Reaktor does give you per-event resolution.
     
  10. ashwaganda

    ashwaganda Forum Member

    Messages:
    2,191
    mein bruder

    my brother the mathematician (electrical engineer, actually) agreed to help me with this ... so i should have some interesting approaches for developing a continuum between randomness and non-randomness soon.

    his first suggestion:

    A good beginning point would be white-noise driven finite impulse
    response filters. For instance the second-order case is:

    x(n)=a1*x(n-1)+a2*x(n-2)+w(n)

    ummmmmmmmmmm ...

    (like i say, he's the mathematician of the family. ;-) )
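    for the curious, that recursion can be sketched directly. (One note: because the output feeds back into itself, it is strictly an autoregressive, all-pole filter rather than finite-impulse-response; that feedback is exactly what lets noise "ring" toward an almost-pure tone. The pole radius and angle below are illustrative, not from the thread.)

```python
import math
import random

def ar2(a1, a2, n):
    """Generate n samples of x(n) = a1*x(n-1) + a2*x(n-2) + w(n),
    where w(n) is uniform white noise.  Poles close to the unit
    circle make the noise ring at the pole frequency, so the output
    drifts from noise toward an almost-pure tone."""
    x1 = x2 = 0.0
    out = []
    for _ in range(n):
        w = random.uniform(-1.0, 1.0)
        x = a1 * x1 + a2 * x2 + w
        out.append(x)
        x2, x1 = x1, x
    return out

# Illustrative pole placement: radius r (< 1 for stability), angle th.
r, th = 0.99, 2 * math.pi * 0.01
samples = ar2(a1=2 * r * math.cos(th), a2=-r * r, n=1000)
```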

    rick
     
  11. CList

    CList Moderator

    Messages:
    3,299
    I'd like to propose a couple of things...

    Firstly, the "% of intelligible vs. random"...
    I think it would be cool if 10% random did the following: 85 of the events out of 100 would be exactly on the existing pattern, and 15% would be "off" by a random factor, but that random factor would only be a +/-5% variation of the value. ("variation" is relative to the whole range you want numbers to fall in - so if you were going up to 1000, a 5% variation would be +/-50). 20% random would have about 25% of the events affected by randomness, with the randomness being up to +/-15% of the range, etc... This way the pattern still follows its flow; every event does not get randomized by 10% or 25%. At 100%, every event would be totally randomized, up to +/-100% of the range. Perhaps the variation percentage should be R^2 (where "R" is the randomness from 0.0 to 1.0); this way 50% randomness could only vary 50% of the events by 25% of the range. Alternately you could make the range of randomness simply +/- some % * the current value - but then 100% randomness is not really random - hmm, gotta think about a happy medium there.
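    one possible reading of that scheme, sketched in Python rather than Reaktor (the exact "fraction of events ~ R, width ~ R squared" mapping below is a guess at the numbers above, not a faithful transcription):

```python
import random

def graded_randomize(values, R, lo=0, hi=999999):
    """One reading of CList's scheme: with probability ~R an event is
    perturbed at all, and a perturbed event moves by at most R**2 times
    the full range -- so low settings nudge a few events a little, while
    R = 1 fully scrambles every event."""
    span = hi - lo
    out = []
    for v in values:
        if random.random() < R:
            v = v + random.uniform(-1.0, 1.0) * (R ** 2) * span
            v = max(lo, min(hi, v))   # clamp back into the allowed range
        out.append(v)
    return out
```

    the nice property is the one CList names: the pattern keeps its flow at low settings because most events stay exactly on it.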

    Secondly, for the series: I think I'd probably just create an event table where each row is pre-populated with a different "intelligent" series. If you wanted to limit the number of values in a series, you could make everything after the last value something like -999999, so that if the table is read for a given row and the returned value is -999999, it jumps back to the start of the row and reads from there. E.g. if one of your series were factorials, you'd very quickly get to huge useless numbers and Reaktor would start running into calculation problems, so you might want to only do 1! up to 7! and then repeat. In that case cells 7 to the end of the row would be filled with -999999. If your series were "even numbers" you might fill the whole row (however wide the table is) with even numbers. You might have another series that says "count 1 to 12 over and over", so that row would have cells 0 through 11 filled with the numbers 1 through 12, then the rest set to -999999.

    The nice thing about filling the rest of the row with -999999 is this: let's say you're on position 100 of the even-numbers sequence, then you turn a knob to switch to the "1 to 12" sequence - it'll jump back to the beginning of the row right away! So you may ask, "why not just fill the row with 1 to 12 over and over?" Ah, because then each sequence would repeat according to the length of the table: if the table were 100 wide, it wouldn't be a clean sequence of 12 over and over, because after 8 iterations it would only do 1 to 4 and then go back to the next 1 to 12.
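    the table-plus-sentinel scheme, sketched in Python (the table width and row contents here are hypothetical, just to show the wrap-on-sentinel lookup):

```python
END = -999999              # sentinel marking "end of this row's series"
WIDTH = 16                 # hypothetical table width

def pad(row):
    """Fill the rest of a row with the sentinel, as described above."""
    return row + [END] * (WIDTH - len(row))

TABLE = [
    pad([1, 2, 6, 24, 120, 720, 5040]),   # factorials 1! .. 7!, then wrap
    pad(list(range(1, 13))),              # count 1 to 12 over and over
    [2 * i for i in range(WIDTH)],        # even numbers fill the whole row
]

def read(row, pos):
    """Return (value, next_pos).  Hitting the sentinel or the row's end
    wraps back to position 0, so short series loop cleanly and switching
    rows mid-way restarts the new series immediately."""
    if pos >= WIDTH or TABLE[row][pos] == END:
        pos = 0
    return TABLE[row][pos], pos + 1
```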

    Just some ideas for you...
    Shouldn't be very hard to build at all, but don't look at me, I'm packing up to move back to NYC this weekend and will be living out of a suitcase for the next week.

    - CList
     
  12. lxint

    lxint NI Product Owner

    Messages:
    764
    some thoughts :


    I wonder if any SETI or NSA program would actually be able to detect a pattern in any of those series you deem "sexy" (I really wonder - I have no idea, though my guess, based on nothing, is that they can't).

    and hirnlego is right - simply adding / mixing in plain noise isn't very useful; you need some weighting, as in apalomba's suggestion.

    but you could use pattern-creation algorithms similar to those used in hirnlego's ensembles, and add the noise to the pattern rules themselves: no noise - the pattern is laid out strictly by the rules; more noise - the rules are obeyed with certain deviations.

    and like stereomax, I agree that you first have to get an idea of how the randomness, as well as the pattern or meaningful behaviour, is supposed to behave.

    Rachmiel, you recently said somewhere that you spent a lot of time adjusting your random parameters - so obviously plain 0..1 randomness is not really what you want. In the same way I think your "sexy" patterns are not really what you want, unless you have really fallen in love with one or two of them and just want those and nothing else.

    jh019i:

    >> "Are there actual degrees of randomness?"

    well, there are - to entities that perceive or react to patterns (humans, for instance). With only a little deviation (a little noise) you can still predict the next value to a certain degree. Imagine a staircase that is a few hundred years old: not everyone puts their feet at exactly the same spot as everybody else, but you can still see that the stairs are really worn out at certain spots, while elsewhere they seem unaffected.

    >> "as to whether reaktor's random generators produce nontrivial patterns/repetitions... i have no idea. perhaps someone else does?"

    they produce a pattern which is only "random" in that it is different on each machine, each time you run the program (depending on the implementation they will depend on other things too), but they are, like anything else around us, a pattern.

    the "MeinBruder" approach is interesting, but two things are to consider : you cannot feed this filter with white noise and get a pure sine ( or an exact match of a pattern ) out of it unless the filter very long
    the reasons are : a fir has to be rather long to be precise, to have a "sharp" fq resolution, and your random white noise sequence does not really contain all fqs, and not with an equal energy either, unless the sequence is infinite, ( either in resolution and duration ) the sequence cant contain frequncies that are lower than the age of the universe, for instance ( just added this remark to trigger some metaphysical thoughts )

    but besides of that you can do intersting things with this approach cell phone communication, for instance


    to end with: since CList already posted a complete recipe for Reaktor, I want to mention that for analogue-like deviations and fluctuations, some fractal series like the Hénon map should work very well. Personally, though, I find things with complex, more organic interactions more interesting:

    a good example seems to be JClarks latest upload, or weedwhacker - I haven't looked very much into Clark's, maybe it doesn't even do anything of what I have in mind:

    what I am thinking of, and what I am very fond of, are systems that behave similarly (and maybe even as simply and exactly so, without any other "hacks") to a bunch of springs coupled together in some way.

    the behaviour gets unpredictable very soon, but will in any case always depend on the system's own internal state, and on the interaction of that state with the outside world (just like you and me). If you tick a simple spring a second time, it will wobble differently; if you tick it a second time in a very special way that matches the spring's state, it will come to a halt. More springs react in more complex ways.

    on the other hand, you can design such a system so that it behaves very much like a violin or a flute, for instance - very predictable in some aspects, if you want.
     
  13. herw

    herw NI Product Owner

    Messages:
    6,421
    It is correct to separate the production of the sequences from the random variation. In my opinion the random variation should still have a recognizable effect at middle percentages. I.e. I find it meaningful that each value of the sequence is varied (in dependence on the percentage). So it becomes necessary to calculate the variation separately for each event of the sequence.

    One can imagine different approaches:
    • the value is replaced proportionally by a random value. I.e. one replaces the original value, with some probability, by an arbitrary random value. The changed sequence thus consists of a proportional mixture of original data and random values.
    • the value varies randomly around the original value, where the range of variation depends on the possible values and the percentage (CList's suggestion, if I understood it correctly). I.e. arbitrary random values are not permitted; only a fluctuation within a certain interval is allowed.
    • the value varies around the original value according to a probability distribution (e.g. a binomial distribution). I.e. one takes the original value as the expected value and varies it around this value based on the binomial distribution. Compared to the other two solutions, the binomial distribution has the advantage that - as mentioned - outlier values are relatively rare (CList accounts for this by using a factor r^2), so the original sequence remains recognizable even at middle percentages.
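    the third option can be sketched with a binomial draw built from coin flips (the width parameter below is an assumption, not from the post; it just shapes how bell-like the jitter is):

```python
import random

def binomial_jitter(value, percent, width=20):
    """Vary a value around itself using a binomial(width, 0.5) draw -- a
    sum of coin flips -- which clusters around its mean so large outliers
    are rare.  `percent` (0..1) scales how far the jitter can reach;
    `width` is an assumed shape parameter (larger = more bell-like)."""
    draw = sum(random.random() < 0.5 for _ in range(width))   # 0 .. width
    offset = (draw - width / 2) / (width / 2)                 # roughly -1 .. 1
    return value + percent * offset * value
```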

    ciao herw
     
  14. herw

    herw NI Product Owner

    Messages:
    6,421
    examples of binomial distributions

    ciao herw
     


  15. ashwaganda

    ashwaganda Forum Member

    Messages:
    2,191
    > Firstly, the "% of intelligible vs. random"... I think it would be cool if 10% random did the following; 85% of the events out of 100 would be exactly on the existing pattern, 15% percent would be "off" by a random factor, but that random factor would only be a +/-5% variation of the value. ("variation" is the whole range that you want numbers to fall in - so if you were going up to 1000, then a 5% variation would be +/-50). 20% random would have about 25% of the events affected by randomness, with the randomness being up to +/-15% of the value, etc... This way the pattern still follows it's flow, every event does not get randomized by 10% or 25%. At 100%, every event would be totally randomized up to +/-100% of it's value. Perhaps the variation percentage should R^2 (where "R" is the "randomness from 0.0 to 1.0) this way a 50% randomness could only vary 50% of the events by 25% of the range. Alternately you could make it so that the range of randomness is simply +/- some % * the current value - but then 100% randomness is not really random - hmm, gotta think about a happy medium there.

    sounds like a viable way to make randomness "serve" order ... i.e., so that bits of order were preserved all the way to 99.99% randomness. i'll have to think about it.

    > Secondly, for the series: I think I'd probably just create an event table where each row is pre-populated with a different "intelligent" series. If you wanted to limit the number of values in a series, you could make everything after the last value something like -999999, so that if the table is read for a given row and the returned value is -999999, it jumps back to the start of the row and reads from there. E.g. if one of your series were factorials, you'd very quickly get to huge useless numbers and Reaktor would start running into calculation problems, so you might want to only do 1! up to 7! and then repeat. In that case cells 7 to the end of the row would be filled with -999999. If your series were "even numbers" you might fill the whole row (however wide the table is) with even numbers. You might have another series that says "count 1 to 12 over and over", so that row would have cells 0 through 11 filled with the numbers 1 through 12, then the rest set to -999999.

    sounds good, using -999999 (or similar) as an "end of array" marker. i also like the creation of an "order" table populated with a wide variety of intelligent patterns. it's one of the ways i thought of doing this, and i think it could work well.

    but, you know, somehow it feels a bit like ... cheating to me. ideally, if it were possible, i'd like to create an algorithm that generated "intelligent" patterns/series on its own, from a few simple rules that got combined, varied, permuted, etc. something like:

    - increment/decrement the previous value. variables: how to increment/decrement (by 1, by 2, by 1-2-3-4, etc.); how many times to increment/decrement; whether to loop (decrement to a low point, then increment to a high point, back down to low, up to high, etc.).

    - repeat a sequence of values. variables: how to repeat (forward, looping), how many values in sequence, how many repeats, are repeats exact (or are new values thrown in or some left out), etc.

    - follow a value with its symmetric double. variables: what is the axis of symmetry?

    and so on.
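    those rules could be sketched as small composable generators, something like this (names, parameters, and the combination at the end are purely illustrative):

```python
import itertools

def ramp(start, step, low, high):
    """Rule 1: increment until the top of the range, then decrement to
    the bottom, looping forever (one reading of the first rule)."""
    v, d = start, step
    while True:
        yield v
        if not (low <= v + d <= high):
            d = -d
        v += d

def repeat_seq(seq, times):
    """Rule 2: repeat a fixed sequence of values a number of times."""
    for _ in range(times):
        yield from seq

def mirror(source, axis):
    """Rule 3: follow each value with its reflection about `axis`."""
    for v in source:
        yield v
        yield 2 * axis - v

# Rules combined: a looping ramp, each step followed by its mirror about 50.
pattern = list(itertools.islice(mirror(ramp(40, 5, 0, 100), 50), 8))
# pattern == [40, 60, 45, 55, 50, 50, 55, 45]
```

    because the rules are generators, they chain freely: any rule's output can feed any other rule's input, which is where the "combined, varied, permuted" richness would come from.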
     
  16. ashwaganda

    ashwaganda Forum Member

    Messages:
    2,191
    some thoughts :

    > I wonder if any SETI or NSA program would actually be able to detect a pattern in any of those series you deem "sexy" (I really wonder - I have no idea, though my guess, based on nothing, is that they can't).

    if there is significant repetition in it, mSETI (musical SETI analyzer) definitely would detect it. and, i'm guessing, if there were a linear/geometric/whatever progression (even numbers, fibonacci series, prime numbers, etc.), it would detect it too.

    but, yes, beautiful/sexy ... do these things have discernable patterns?

    > and hirnlego is right - simply adding / mixing in plain noise isn't very useful; you need some weighting, as in apalomba's suggestion.

    > but you could use pattern-creation algorithms similar to those used in hirnlego's ensembles, and add the noise to the pattern rules themselves: no noise - the pattern is laid out strictly by the rules; more noise - the rules are obeyed with certain deviations.

    yes.

    >> "as to whether reaktor's random generators produce nontrivial patterns/repetitions... i have no idea. perhaps someone else does?

    > they produce a pattern wich is only random in that that it is different on each machine each time you run the programm depending on the implementation they will depend on other things too but are, like anything else around us, a pattern

    sounds like we're moving into chaos theory here ... which is, perhaps, appropriate. but, as always, for me: musicality trumps math. :)

    > the "MeinBruder" approach is interesting, but two things are to consider : you cannot feed this filter with white noise and get a pure sine ( or an exact match of a pattern ) out of it unless the filter very long the reasons are : a fir has to be rather long to be precise, to have a "sharp" fq resolution, and your random white noise sequence does not really contain all fqs, and not with an equal energy either, unless the sequence is infinite, ( either in resolution and duration ) the sequence cant contain frequncies that are lower than the age of the universe, for instance ( just added this remark to trigger some metaphysical thoughts )

    ummmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmm ... (math!)

    > what I am thinking of, and what I am very fond of, are systems that behave similarly (and maybe even as simply and exactly so, without any other "hacks") to a bunch of springs coupled together in some way.

    > the behaviour gets unpredictable very soon, but will in any case always depend on the system's own internal state, and on the interaction of that state with the outside world (just like you and me). If you tick a simple spring a second time, it will wobble differently; if you tick it a second time in a very special way that matches the spring's state, it will come to a halt. More springs react in more complex ways.

    > on the other hand, you can design such a system so that it behaves very much like a violin or a flute, for instance - very predictable in some aspects, if you want.

    sounds interesting. i'll run it by meinBruder. :)

    rick
     
  17. ashwaganda

    ashwaganda Forum Member

    Messages:
    2,191
    > It is correct to separate the production of the sequences from the coincidence dependent variation. In my opinion the coincidence dependent variation must have a certain recognizing effect also with a middle percentage. I.e. I find it meaningfull that each value of the sequence (in dependency of the percentage) is varied. Thus the necessity places itself to execute for each event of the sequence its own calculation of the variation.

    i think something has been lost "in translation" with the above ... i think i know what you mean, but am not sure.

    > One can imagine different beginnings:

    > the value is replaced proportionally by a random value. I.e. one replaces the original value, with some probability, by an arbitrary random value. The changed sequence thus consists of a proportional mixture of original data and random values.

    the danger here is that order sprinkled with full randomness might end up sounding/feeling very random.

    > the value varies randomly around the original value, where the range of variation depends on the possible values and the percentage (CList's suggestion, if I understood it correctly). I.e. arbitrary random values are not permitted; only a fluctuation within a certain interval is allowed.

    yes, i think this will create more of a sense of "loosening order."

    > the value varies around the original value according to a probability distribution (e.g. a binomial distribution). I.e. one takes the original value as the expected value and varies it around this value based on the binomial distribution. Compared to the other two solutions, the binomial distribution has the advantage that - as mentioned - outlier values are relatively rare (CList accounts for this by using a factor r^2), so the original sequence remains recognizable even at middle percentages.

    similar to the above idea. could work ... :)
     
  18. CList

    CList Moderator

    Messages:
    3,299
    Good programming is always about cheating - after all, realtime 3D rendering on the order of Unreal or FarCry couldn't be done w/o a hell of a lot of it! It really makes no sense to use up the processing power to generate the more complex series we talked about - you might as well just put them in a table. You *could* still manipulate them into many variations *after* reading the table. Things like:
    - multiply series by X
    - alternately add and subtract x from the series
    - out(n) = out(n) / out(n-1)
    - always restart the series after "X" steps
    - etc.
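    those post-table tweaks might look like this (plain Python lists standing in for table rows; "out(n) = out(n) / out(n-1)" is read here as dividing each value by its predecessor, with the first value passing through):

```python
def multiply(series, x):
    """Multiply every value in the series by x."""
    return [v * x for v in series]

def alternate_add_sub(series, x):
    """Alternately add and subtract x along the series."""
    return [v + (x if i % 2 == 0 else -x) for i, v in enumerate(series)]

def ratio(series):
    """out(n) = in(n) / in(n-1); the first element passes through."""
    return [series[0]] + [series[i] / series[i - 1]
                          for i in range(1, len(series))]

def restart_after(series, x):
    """Always restart the series after x steps, looping its first x values."""
    return [series[i % x] for i in range(len(series))]
```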

    Having those kinds of tweaks takes very little work to build and very little CPU, and gives the user more flexibility while still giving them a rich library of series to start with (in the table). You'd have two drop-lists, one named "Base Series" and another called "Series Variation"... or something like that. Don't forget that you'll still need to get the entries into the table to begin with, so you'll have all the "fun" of building the series-generating structures anyway!

    - CL
     
  19. ashwaganda

    ashwaganda Forum Member

    Messages:
    2,191
    > Good programming is always about cheating - after all, realtime 3D rendering on the order of Unreal or FarCry couldn't be done w/o a hell of a lot of it! It really makes no sense to use up the processing power to generate the more complex series we talked about - you might as well just put them in a table. You *could* still manipulate them into many variations *after* reading the table. Things like:

    > - multiply series by X
    > - alternately add and subtract x from the series
    > - out(n) = out(n) / out(n-1)
    > - always restart the series after "X" steps
    > - etc.

    this makes sense. it's how wavetable-lookup synthesis is done: grab 1-N cycles of a wave from a table, then have at it.

    > Having those kinds of tweaks takes very little work to build and very little CPU, and gives the user more flexibility while still giving them a rich library of series to start with (in the table). You'd have two drop-lists, one named "Base Series" and another called "Series Variation"... or something like that. Don't forget that you'll still need to get the entries into the table to begin with, so you'll have all the "fun" of building the series-generating structures anyway!

    yes! good point.

    i think that this is a viable, efficient (cpu-wise), and musically fruitful way of creating the randOrder generator. if done well, it could definitely pass the test of generating beautiful, meaningful sequences of numbers.

    another, perhaps entirely different version of randOrder, could use techniques of emergent behavior (flocking, for example) to generate numeric sequences.

    for example, to generate a "meaningful" sequence of midi pitch values between 24 and 108 (3 octaves below middle C to 4 octaves above) -- i.e., a "compelling" melody -- you could use three simple rules:

    1. generated numbers "want to" fill the entire range (24-108).

    2. generated numbers "want to" move to a gravitational pitch center (say 72, one octave above middle C).

    3. generated numbers "want to" be close to previously generated numbers.

    the interaction of these three primal drives -- i.e., how they resolved their conflicts -- would, i think, create a sequence of pitches that would exhibit "melody" (because pitches would tend to clump near each other, rather than jumping haphazardly all around) and would span the entire range with a bell-curve like distribution around 72.
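    a rough sketch of how the three drives might be weighed against each other (the pull weights are entirely made up; this is one guess at an implementation, not the real thing):

```python
import random

def melody(n, lo=24, hi=108, center=72, w_center=0.3, w_prev=0.5):
    """Sketch of the three drives: roam the range [lo, hi], gravitate
    toward `center`, and stay near the previous pitch.  Each step
    proposes a random pitch, then pulls it toward the center and the
    previous note by the (hypothetical) weights, then clamps and
    quantizes to a MIDI note number."""
    prev = center
    out = []
    for _ in range(n):
        p = random.uniform(lo, hi)            # drive 1: fill the range
        p += w_center * (center - p)          # drive 2: gravity toward center
        p += w_prev * (prev - p)              # drive 3: cling to previous note
        p = int(round(max(lo, min(hi, p))))   # clamp + quantize to MIDI
        out.append(p)
        prev = p
    return out
```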

    but the only way to know what would *really* happen would be to implement it and see. that's the fun of rule-based self-organizing generation: you're not fully in charge. :)

    rick
     
  20. ashwaganda

    ashwaganda Forum Member

    Messages:
    2,191
    response to lxint from meinBruder

    > the "MeinBruder" approach is interesting, but two things are to consider : you cannot feed this filter with white noise and get a pure sine ( or an exact match of a pattern ) out of it unless the filter very long the reasons are : a fir has to be rather long to be precise, to have a "sharp" fq resolution, and your random white noise sequence does not really contain all fqs, and not with an equal energy either, unless the sequence is infinite, ( either in resolution and duration ) the sequence cant contain frequncies that are lower than the age of the universe, for instance ( just added this remark to trigger some metaphysical thoughts )

    The long transient problem can be circumvented... if you take a second-order recursive filter with a highly peaked frequency response and fire it up from rest, it will take it a righteous long time to build up to its almost-pure steady-state tone. But if you precompute and store initial conditions derived from the steady-state portion of one of its previous responses, and initialize it that way, there is no transient period at all: you are at the pure tone right away. The other problem, that we are not sampling frequencies which are real high or real low, is interesting intellectually but has no effect on the acoustic properties of the output. We can't hear them, and the gain of the filter is zilch at those frequencies anyway.
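    the no-transient trick is easiest to see in the limiting case of a lossless two-pole resonator: seed the recursion with history values taken from the target sinusoid itself, and the output is the steady-state tone from sample 0 (plain Python sketch; the frequency is given as a fraction of the sample rate):

```python
import math

def sine_oscillator(freq_norm, n):
    """Run the two-pole recursion y(n) = 2*cos(th)*y(n-1) - y(n-2) with
    initial conditions precomputed from the sinusoid itself, so there is
    no transient: the output is sin(th * k) exactly, from sample 0."""
    th = 2 * math.pi * freq_norm
    a1 = 2 * math.cos(th)
    y1 = math.sin(-th)        # y(-1): steady-state history, precomputed
    y2 = math.sin(-2 * th)    # y(-2)
    out = []
    for _ in range(n):
        y = a1 * y1 - y2
        y2, y1 = y1, y
        out.append(y)
    return out
```

    started from rest (y1 = y2 = 0) the same recursion would just output zeros, which is the degenerate version of the slow-buildup problem described above.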
     