More music less hassle?

Discussion in 'REAKTOR' started by fm 2030, Jan 21, 2003.

Thread Status:
Not open for further replies.
  1. fm 2030

    fm 2030 Forum Member

    Messages:
    190
    Hello,

    This may sound a little techy / complicated to some of the people who could most enjoy actually using this approach - in fact, it should be a way of avoiding 'technical' decisions and rather focussing on the aesthetics of the sounds, in a very direct way IMHO.

    Would anyone be interested in using Reaktor in a way that involved parameters being automatically randomised to a given extent, then evaluating the results and producing another set of variations, attempting to 'learn' which characteristics you like down separate paths, which could then be bred into each other? And then morph between them, either on a timeline or in an xy pool where they can be placed arbitrarily, and continually morph in an expressive way (or one controlled with modulation)? You would?
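    To make the "randomise, rate, breed" idea concrete, here is a minimal sketch in Python. The parameter names are made up for illustration, and a real version would drive Reaktor over MIDI or OSC rather than just shuffling dictionaries:

```python
import random

# A "mutant" is just a fixed set of synth parameter values in [0, 1].
# These names are hypothetical placeholders, not real Reaktor parameters.
PARAM_NAMES = ["cutoff", "resonance", "lfo_rate", "env_decay"]

def random_mutant():
    """A fresh mutant with every parameter randomised."""
    return {name: random.random() for name in PARAM_NAMES}

def mutate(parent, amount=0.1):
    """Offspring: each parameter nudged by up to +/- amount, clamped to [0, 1]."""
    return {name: min(1.0, max(0.0, value + random.uniform(-amount, amount)))
            for name, value in parent.items()}

def breed(a, b):
    """Hybrid: each parameter inherited from one parent at random."""
    return {name: random.choice([a[name], b[name]]) for name in PARAM_NAMES}

def next_generation(liked, size=8, amount=0.1):
    """From the mutants the listener liked, produce a new batch of variations:
    sometimes a mutated hybrid of two liked parents, sometimes a mutated copy."""
    children = []
    for _ in range(size):
        if len(liked) >= 2 and random.random() < 0.5:
            child = mutate(breed(*random.sample(liked, 2)), amount)
        else:
            child = mutate(random.choice(liked), amount)
        children.append(child)
    return children
```

    The user only ever rates sounds; the "learning" is simply that the next batch is drawn from the neighbourhood of whatever was liked.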

    I've been using free versions of Reaktor since 2.3, because of a program my Dad (Stephen Todd) wrote while working with William Latham (now of http://www.artworks.co.uk/ fame, check out Organic Art and forthcoming Organic Sound to Light), which I was interested in using to control Reaktor. They expressed some initial interest, and provided the program, but I guess are busy with other things, and to be fair, we haven't really provided them with much stuff to encourage them to take notice. (The words "thank you for your Reaktor related activities" or similar stuck in my mind as they agreed to provide R3 and two dongles). Indeed, Stephen's pretty busy with work; I guess the weak link in a way is me not providing any good working examples to them etc.

    As it stands, the interaction between the two programs is far from perfect, and setting things up for mutation can be a pretty arduous task even when you know what you are doing (and it's severely restricted in that it's limited to MIDI CCs). So it's not really in an ideal form, and anyone who used it WOULD have some problems getting started. It hasn't been mentioned on any forums up to this point, and I thought it would be interesting to put it up for discussion. The new methods for snapshot management in R4 could be a step towards a scenario where it became integrated into Reaktor, if NI were sufficiently interested. Alternatively (and more likely), OSC could, if properly implemented (hopefully in R4), all but eliminate the current problems and limitations it has as two separate entities interacting via MIDI. We are looking into getting Mutator into a form where it can be made publicly available, with easy-to-set-up Reaktor support. I don't know if this will happen between now and Reaktor 4's release.

    Are you still reading? That's it... Feedback appreciated.
     
  2. ecook

    ecook NI Product Owner

    Messages:
    24
    Are you just throwing the general topic out for discussion, gauging interest to see if there's an audience, or actually looking for people to help test a system like the one you're describing?

    In any case, I'm interested, tell me more. The prospect of applying the level of detail and self-organization that appeared visually in Organic Art to sound/music would be tremendous.

    -E. Cook
    http://www.simulated.net
     
  3. leehurley

    leehurley Forum Member

    Messages:
    45
    are you talking like feeding it training data and having it learn based on that? sounds like you're getting into the area of ai / genetic algorithms possibly. sounds fun. OSC could probably do it. have you tried at all with max/msp or supercollider?
     
  4. DoubleWah

    DoubleWah NI Product Owner

    Messages:
    62
    This sounds like a very interesting project, although I would guess that the devil will be in the details... In general, you might want to have a look around at the growing literature on Artificial Life and Music. In recent years, all the major Artificial Life academic conferences have had papers delivered by people interested in music, and there have been some workshops on this topic. The general motivation is to explore ways of using computational models of "life-like" systems to make sound and/or music. This would include the sort of evolutionary search that you are talking about.

    I am pretty sure that there are research papers on using genetic-algorithms (systems that try and evolve parameter settings) to design software synthesisers. It would be great to see this work "come out of the lab" as it were, and using a popular but hugely flexible synthesiser like Reaktor would be a great way to bring these ideas to a wider audience.

    Another popular technique has been to use "cellular automata" (simulations of many locally interacting "cells") to create sound. There are some synthesisers that are available on the net that use these (check out the work of Eduardo Reck Miranda), and there's even a simple example in the user library for Reaktor (I think).
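    For a flavour of the cellular-automaton approach Simon mentions, here is a toy sketch: an elementary 1D automaton whose live cells are mapped to pitches. The rule number and the pentatonic note mapping are arbitrary choices for illustration, not taken from any particular published system:

```python
def ca_step(cells, rule=90):
    """One step of an elementary cellular automaton (wrap-around edges).
    Each cell's next state is looked up from the rule number's bits,
    indexed by the (left, self, right) neighbourhood pattern."""
    n = len(cells)
    return [(rule >> (cells[(i - 1) % n] * 4 + cells[i] * 2 + cells[(i + 1) % n])) & 1
            for i in range(n)]

def ca_to_notes(cells, scale=(0, 2, 4, 7, 9)):
    """Map live cells to MIDI pitches in a pentatonic scale,
    climbing one octave for every five cells."""
    return [36 + 12 * (i // len(scale)) + scale[i % len(scale)]
            for i, alive in enumerate(cells) if alive]
```

    Stepping the automaton each beat and playing `ca_to_notes` of the current row gives the kind of locally-interacting, globally-patterned behaviour these systems are used for.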

    Anyway - enough rambling on... have a search around for web pages on "alife" and "music". This is a really new area, and I think there's great work still to be done!

    Simon
     
  5. dongledogg

    dongledogg New Member

    Messages:
    5
    I'm quite interested in such things. I am currently searching for software that converts video into audio and vice versa. So far I've found GEM for PD, VideoDSP for JMAX, and JItter for Max. Does anyone know of other worthwhile software that does this type of thing?

    :Dongledogg
     
  6. Sprengart

    Sprengart New Member

    Messages:
    2
    Hey,

    Check out www.steim.nl and their product Big Eye. It takes video information and converts it into MIDI messages. There are other programs which convert video information into MIDI messages; I don't remember the names, but you can start a little research at www.audiovisualizers.com or www.vjcentral.com.
    Some time ago I had the idea to control a visual synthesizer (in my case Videodelic from U&I Software) via MIDI from a Reaktor instrument. I posted it in the other audio software section of this forum. Never got any answers.
    Though I am not too deep into the whole MIDI thing, I never got further than running the visual synthesizer from a Reaktor-built sequencer.
    The field of turning music into visuals (not illustrating music like MTV video clips!) is very interesting.

    videodelic greetings
    Michael
     
  7. fm 2030

    fm 2030 Forum Member

    Messages:
    190
    Hi Everybody,

    Good to hear from you all. I don't really know what I was trying to achieve by putting this thread up, but I thought it was better up than down. It's exactly as DoubleWah said: "I am pretty sure that there are research papers on using genetic-algorithms (systems that try and evolve parameter settings) to design software synthesisers. It would be great to see this work "come out of the lab" as it were, and using a popular but hugely flexible synthesiser like Reaktor would be a great way to bring these ideas to a wider audience." In fact, it is something that one of my uni lecturers is apparently researching - I haven't talked to him about it (yet). It seems to me that a vast number of Reaktor users are using the software at a level where they're not going really heavily into the nitty gritty, but mainly want a system that gives them some access to interesting sounds, being mostly interested in making music. Stephen was recently one of the chairs at a genetic algorithms conference seminar, and it makes you realise that there are several people working in the area, sharing the fruits of their labours a little between themselves, and discussing whether or not it could or should be of mainstream interest. Of course it F$^&ing should, get it out there. That's what I say.

    I'd like to let people actually use the program, even as it is now (and, as I've mentioned, proper implementation of OSC will be a godsend).
    Actually, Stephen started working on a thing that would convert video to MIDI a while ago, on the basis that £130 was far too expensive for the PhatBoy controller I was planning on buying! Also, as an impressionable youngster (must've been '96 or so), I attended an underwater concert (underwater speakers sound good), where the music was controlled by Mutator and Max. They planned on having a video feed so that someone in a bright red swimsuit would control Mutator, but didn't get it up and running. I was reading a bit about http://www.eyesweb.org/ which people (dongledogg, I'm looking in your direction) might be interested in. Actually, has anyone tried using CsoundAV, and if so, can they tell me why it's been glitchy and unstable on the three computers I've tried it on, and why it doesn't consistently respond to command line arguments? Maybe this isn't the place; I'll get onto Mr Moldano (or whatever his name is).

    I'm doing a sonic arts degree (pretentious, moi?), and have a module on Interactive and Algorithmic Composition this semester (teaching starts in a few days), so I will learn more about artificial life (which I just started looking into a couple of weeks ago, then got distracted; will do...) and chaotic systems. I'm also quite interested in doing a bit with physical modelling (as in mechanics, not waveguide). I just loved CList's Newtonian Bouncer, and indeed implemented the two balls exerting force on each other (I'll put it up soon), and got Stephen to put a similar thing in Rover (the Mutator xy space where you can drop in mutants and it interpolates between them according to mouse position or whatever) where each mutant exerts its own gravity. Fun fun fun.

    Anyway, the thing about genetic algorithms (of which Mutator is one of the original implementations <swells with pride> ;-), in the Mutator context, is that the user has subjective control: you just listen, and rate things / encourage them in different directions. Each mutant is a static set of parameter values, and these mutants can be arranged in a fairly 'standard' compositional manner (so you think about the specific shape of a piece of music), or controlled gesturally (in Rover). These aspects are essentially separate from the genetic algorithm aspect, admittedly, but the way I see it, the thing about genetic algorithms and this kind of system is that it's all about human aesthetic decisions (not that I've got anything against things that aren't ;-).
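    The Rover-style xy pool described here, where each dropped mutant "pulls" on the parameters according to its distance from the cursor, can be sketched as inverse-distance weighting. This is a guess at plausible underlying maths, not Rover's actual code:

```python
def interpolate(mutants, cursor, power=2.0):
    """Inverse-distance weighting over mutants dropped at (x, y) positions:
    the closer the cursor is to a mutant, the harder that mutant's parameter
    values pull on the result. mutants is a list of ((x, y), params) pairs."""
    weighted = []
    for (mx, my), params in mutants:
        d2 = (mx - cursor[0]) ** 2 + (my - cursor[1]) ** 2
        if d2 == 0.0:
            return dict(params)  # cursor sits exactly on a mutant
        weighted.append((d2 ** (-power / 2.0), params))
    total = sum(w for w, _ in weighted)
    names = weighted[0][1].keys()
    return {n: sum(w * p[n] for w, p in weighted) / total for n in names}
```

    Raising `power` makes each mutant's "gravity" more local, so the morph snaps between neighbourhoods instead of blending everything at once.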

    I will start trying these things in max/msp and supercollider very soon (I will probably have to submit work in max/msp format at the end of the semester), and I know that Max/MSP has built-in low-level 'mutation' objects for doing genetic algorithms.

    Oh, I seem to have gone on a bit.

    <singing> Oh... I'm so happy / you're so fine...
    I'd like to let anyone who's interested try out the program, bear with me...
     
  8. ecook

    ecook NI Product Owner

    Messages:
    24
    I'm certainly interested; let us know when you've got something ready for people to try out.

    -E. Cook
    http://www.simulated.net
     
  9. leehurley

    leehurley Forum Member

    Messages:
    45
    you could always write your own max externals too.
     
  10. stodd

    stodd New Member

    Messages:
    2
    A few comments.

    I am trying to arrange with IBM to be able to make 'Mutator' code public (on Alphaworks) {also with Computer Artworks for the icons}. I will post here if I manage and once Mutator is posted.

    Much of the code is over 10 years old, and some of it has been refreshed recently. The original Mutator was the first system to use genetic techniques as a subjective user interface (based on the Biomorph demo by Richard Dawkins). The first version was in 1989, and ran on a mainframe. The current version is PC only - and there's no likelihood of any other version (as it is written in Visual Basic).

    It only acts as a genetic interface. It does not have other artificial life aspects, or a 'genetic' algorithmic music aspect. (It does include a demo of very basic/feeble algorithmic music).

    Mutator has interacted with many other programs (music, art, financial planning) by a variety of underlying mechanisms (simulated keystrokes, DDE, COM, MIDI and others). OSC looks very promising (but not the Reaktor 3 implementation, which is both very limited and buggy). Unfortunately, very little music software seems to be written with well-designed external programming interfaces that would make interoperation with something like Mutator fairly clean and easy. Reaktor (strong as it is in other ways) is one of the worst.

    Mutator has operated with Max some years ago; in a collaboration with Michel Redolfi at CIRM in Nice. Indeed, this was the basis of the midi interaction I am using with Reaktor now. There were various problems (like Mutator needing to run on a PC, and MAX on a Mac, with a real midi cable rather than a virtual one) and lack of time on both sides, so we never did too much. It was used at one concert. We've also got it operating with PD.

    Greetings
     
  11. lxl:::;xl::

    lxl:::;xl:: Forum Member

    Messages:
    222
    sounds very nice, but do you mean that you want to use midi cc or not?

    -lx.
     
  12. Thom

    Thom NI Product Owner

    Messages:
    114
    randomising is great!

    I use some very ancient software (from mid 80s) to control an old Ensoniq SQ80 synth. It allows me to choose which parameters and the extent of randomisation I wish apply to generate new sounds or adapt existing ones. Although this control is probably very rudimentary compared to what is being described in this thread, I can say that for me, the SQ80 becomes a far more useful and inspiring instrument to work with because of it. However, I believe that the appeal of the ability to use randomisation in sound design depends very much on your preferred working style.

    I have also noticed that the ensembles that are uploaded to the user library that include patch randomisation features (such as the ones by rachMiel) are generally faster to work with. This is because I can hear how changes to subsets of the ensemble controls affect the sound, in a very fast and immediate way. This improves my ability to learn how the ensemble works and how to generate the sounds that I am interested in. I can do major edits with the randomisation and then fine tune promising results by hand. For me this is an excellent way of working. I am really looking forward to seeing the patch randomisation that is being included in Reaktor V4. Any other system for randomising sounds would be very welcome as well!
     
  13. citrusonic

    citrusonic NI Product Owner

    Messages:
    51
    Beta Tester R Us

    send me a copy
     
  14. stodd

    stodd New Member

    Messages:
    2
    lx asks: "sounds very nice, but do you mean that you want to use midi cc or not?"

    I don't want to use midi cc, but for Reaktor 3 I don't know of any alternative -- it's midi cc [or poly aftertouch, with all the same problems] or nothing.

    I would much rather use something with sensible names and a much more continuous value range (for example gripd + send/receive messages in PD). With any luck, OSC in Reaktor 4 will provide us with that. OSC is certainly up to the job.
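    To illustrate why OSC is up to the job: an OSC message carries a human-readable address and a full 32-bit float, where a MIDI CC carries only an anonymous controller number and a 7-bit value (128 steps). A minimal hand-packed message in Python, following the OSC 1.0 encoding (the address shown is just an example name):

```python
import struct

def _osc_pad(raw):
    """OSC strings are null-terminated, then padded to a 4-byte boundary."""
    raw += b"\x00"
    return raw + b"\x00" * (-len(raw) % 4)

def osc_message(address, value):
    """A minimal OSC message carrying one 32-bit float: a named address
    pattern, a ',f' type tag string, and a big-endian float argument."""
    return _osc_pad(address.encode()) + _osc_pad(b",f") + struct.pack(">f", value)
```

    So `osc_message("/filter/cutoff", 0.5)` yields a 24-byte packet with the parameter named in full and the value at float resolution - exactly the "sensible names and continuous value range" asked for above.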

    Greetings, Stephen
     
  15. trash80

    trash80 Forum Member

    Messages:
    33
    ...

    i dont know if this is on topic, but itd be pretty easy to create a Reaktor MIDI CC ensemble that would generate random values for a set of CCs when you hit a "randomize CCs" button. then you could store the CCs you liked along an event table's Y axis by hitting an "i like this CC" button, hit the random button again, store the next liked CCs at the next X axis position, etc, etc, etc, till you have a good range of liked CCs. then finally you could grab the lowest and highest values for each CC in the table, and hit the "generate random CCs from the ranges i like" button. and there you have it. really simple without getting into AI.

    and most results in theory would be quite nice. maybe... ;)

    ps, that was the longest run on sentence ive ever written. :)
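    The scheme trash80 describes - store the liked CC sets, then draw new ones uniformly from the per-CC min/max ranges - fits in a few lines of illustrative Python (CC numbers here are arbitrary; in the ensemble the "table" would be a Reaktor event table):

```python
import random

def cc_ranges(liked):
    """Per-CC (lowest, highest) over every stored 'i like this' CC set."""
    return {cc: (min(s[cc] for s in liked), max(s[cc] for s in liked))
            for cc in liked[0]}

def random_in_ranges(liked):
    """A new CC set drawn uniformly inside the liked ranges (0-127 values)."""
    return {cc: random.randint(lo, hi) for cc, (lo, hi) in cc_ranges(liked).items()}
```

    As noted, this is range narrowing rather than genetic search: it captures the bounds of what you liked, but not correlations between CCs.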

    // timothy lamb -_- trash80.net

     
  16. burgessa23

    burgessa23 NI Product Owner

    Messages:
    73
    Hey Timothy, what's up with your website?
    it ain't there...
     
  17. Idealator

    Idealator New Member

    Messages:
    3
    This sounds like it would work if you presented things in a way people could grasp (i.e. a nice interface), so it would attract those who don't like to use stuff like Max/MSP, and it might also attract people who do but find this cuts to the chase. As you say, more music less hassle.
    And those words bring to mind one program in particular: Softstep...

    http://algoart.com/web/softstep.htm
     
  18. chadnellis@joimail.com

    chadnellis@joimail.com Forum Member

    Messages:
    70
    sounds to me like it would only be any use if you literally went through hundreds or thousands of synth param combinations.

    Wouldn't this be better if it was automated - you could have a constraint-based system for analysing the output of the synth, then mutate the input values until your constraints are met. This, I think, would be a far more useful application of evolutionary algos with Reaktor.

    Of course, developing an analysis component would probably require some nifty FFT coding - and a constraint-based system with high-level natural language rules would be very hard also. Maybe NI will inevitably end up doing it?? At any rate, I would definitely consider for the future how your program might be able to propagate automatically without the need for user input.

    For me, I like patches that have a broad spectrum of possible sounds, where you can morph from one state to another and the sound stays good and changes in a functional / useful way - having to battle with things like filter tracking can make finding this kind of dynamic capability in a synth hard, and a tool like this could be invaluable.
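    A toy version of the analyse-then-mutate loop described above, assuming a hypothetical render(params) callback standing in for the synth. It hill-climbs on spectral distance to a target; a real system would use a proper FFT library and richer constraints than this naive DFT:

```python
import cmath
import math
import random

def dft_mag(signal):
    """Naive DFT magnitudes (fine for short analysis windows)."""
    n = len(signal)
    return [abs(sum(signal[t] * cmath.exp(-2j * math.pi * k * t / n)
                    for t in range(n))) for k in range(n // 2)]

def fitness(candidate_spectrum, target_spectrum):
    """Lower is better: squared spectral distance to the target."""
    return sum((c - t) ** 2 for c, t in zip(candidate_spectrum, target_spectrum))

def evolve(render, target, population, generations=20, amount=0.05):
    """Hill-climb synth parameters until the rendered spectrum nears the
    target. render(params) -> list of samples (the synth would go here)."""
    target_spec = dft_mag(target)
    best = min(population, key=lambda p: fitness(dft_mag(render(p)), target_spec))
    for _ in range(generations):
        child = {k: min(1.0, max(0.0, v + random.uniform(-amount, amount)))
                 for k, v in best.items()}
        if fitness(dft_mag(render(child)), target_spec) < \
           fitness(dft_mag(render(best)), target_spec):
            best = child
    return best
```

    The selection step here is fully automatic - no listener in the loop - which is exactly the trade-off against the subjective-rating approach discussed earlier in the thread.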

    Ed.
     
  19. fm 2030

    fm 2030 Forum Member

    Messages:
    190
    Hello again,

    Just to comment on what people have said / ramble in the usual way...
    It did occur to me that automating the selection based on some kind of analysis could be quite interesting (it probably isn't going into Mutator in the foreseeable future - FFT or none), but it's probably more suited to other kinds of process (I might look into this in max/msp for college, though). It could be particularly interesting / useful in situations where the results can't be heard quickly (particularly controlling some kind of sequencer), which makes evaluating permutations much more lengthy (not to mention confusing!).
    Timothy's suggestion sounded like getting into AI just a little bit, rather than not at all - I don't know much about AI, but as far as I know it's not voodoo witchcraft ;-). The way the algorithms work, the computer tries to work out which combinations of CCs you like, follows them down separate paths, creates hybrids, etc.

    Hopefully you (Windows users) will all be able to give Mutator a try soon, if you like. I feel that, maybe, it could be quite an accessible, useful and / or fun thing to play with. The interface is pretty simple, anyway.

    Cheers,

    Peter
     