Author
|
computer programming, generative music, and.. psytrance?
|
pH_
IsraTrance Junior Member
Started Topics :
12
Posts :
154
Posted : Jan 14, 2010 09:49:56
|
This is a thread for fellow computer programmers.
I've long wanted to use computer programming and its philosophies in a psytrance project: "modular" songs built from many "plug-and-play" elements, so that playing them live, in an improvised fashion, would result in a totally unique experience each time, since it would be hard to trigger the dozens of "extras" in the same sequence twice. I imagine it would be quite a live show in terms of uniqueness.
Until now I was drawing a blank when it came to realizing this dream. Then I discovered generative music and "live programming" when I found this article: http://www.perl.com/pub/a/2004/08/31/livecode.html?page=1
I think this approach could go great with psytrance. Are there any other coders out there who see some potential here, or is it just me? |
|
|
Speakafreaka
IsraTrance Junior Member
Started Topics :
18
Posts :
779
Posted : Jan 14, 2010 14:38
|
No need to delve to this level, you'll be pleased to know...
In Live you can already do exactly this with audio clips, 'Follow Actions' and the 'Random' MIDI effect.
With a decent MIDI controller... off you go!
http://www.soundcloud.com/speakafreaka |
|
|
carbonic
IsraTrance Junior Member
Started Topics :
10
Posts :
115
Posted : Jan 14, 2010 14:44
|
Well, I read the article (maybe a little too fast), but to me it doesn't make much sense. It's like going from Windows 7 back to DOS.
Modern production software already offers this: tweaking around and playing with virtual instruments and effects does it, right? The hundreds of scripts get written in the background, but the GUI makes it easy for us. It's just a kind of 'easy programming'.
  http://soundcloud.com/morphous |
|
|
voidstar
Started Topics :
0
Posts :
16
Posted : Jan 14, 2010 14:59
|
Interesting article. I've considered similar ideas myself in the past, but instead thought about using genetic algorithms and neural nets, so that a computer hooked up to MIDI instruments and samplers might learn to create meaningful music by itself. The problem, though, is that I can't think of any easy or meaningful way to train such a program. Maybe there is a half-and-half solution, where the computer is given pre-defined musical components or scripts and is trained to arrange and play them by itself, creating a new and unique arrangement each time the program is run. Interesting stuff (well, at least if you're a computer geek like me ;p)
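Just to sketch the half-and-half idea (a toy genetic algorithm in Python, completely untested; the component names are invented and the fitness function is exactly the part I can't see how to train):

import random

# Pre-defined building blocks; in reality these would map to clips or MIDI patterns.
COMPONENTS = ['kick', 'bass', 'hats', 'lead', 'fx', 'silence']
BARS = 16

def random_arrangement():
    return [random.choice(COMPONENTS) for _ in range(BARS)]

def fitness(arrangement):
    # Made-up scoring: reward bars with a kick, penalise repeating the same
    # element twice in a row. A real score would need a human (or a model
    # trained on arrangements people liked) deciding what actually sounds good.
    score = arrangement.count('kick')
    score -= sum(1 for a, b in zip(arrangement, arrangement[1:]) if a == b)
    return score

def evolve(generations=200, pop_size=30):
    population = [random_arrangement() for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=fitness, reverse=True)
        parents = population[:pop_size // 2]
        children = []
        while len(parents) + len(children) < pop_size:
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, BARS)            # one-point crossover
            child = a[:cut] + b[cut:]
            if random.random() < 0.2:                  # occasional mutation
                child[random.randrange(BARS)] = random.choice(COMPONENTS)
            children.append(child)
        population = parents + children
    return max(population, key=fitness)

print(evolve())

Each run spits out a different 16-bar arrangement of the pre-defined components; the open question is still how the program would learn which arrangements are actually worth playing. |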
|
|
pH_
IsraTrance Junior Member
Started Topics :
12
Posts :
154
Posted : Jan 14, 2010 19:01
|
voidstar is on the right track here. I haven't thought about using NNs for music; that's above my head, but it's interesting too.
I know that modern software is able to do this to an extent; I just don't think Ableton can do it to the extent I want. As a programmer for a decade, I think that adding some custom programming could make things even more interesting. Plus, being a programmer, I'm never really satisfied with the way other people's software (even VSTs) works; there's always a feature I want, or one I don't want.
To continue, though: I'm unsure whether you could control a VST from, say, Python, so that executing the script plays or tweaks that VST. If that works, then following the "live programming" idea, a song would have a few scripts to play the kicks, bass, etc., and it would just be a matter of choosing which musical function to play during the performance. You could take it a step further and program functions that tweak a VST's parameters (a simple example would be a "climb" function that tweaks the VST to make it climb). That way you could improvise not only which add-ons are played, but also how they are tweaked and moved.
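Something like this rough, untested sketch is the kind of thing I have in mind. It assumes the Python mido library and a loopback/virtual MIDI port that the DAW or VST host is listening on; the port name, note number and CC number below are made up:

import time
import mido

BPM = 145
BEAT = 60.0 / BPM

# Assumed loopback port; in practice check mido.get_output_names() and pick
# whatever port the VST host is actually listening on.
port = mido.open_output('LoopMIDI Port')

def kick(bars=1, note=36, channel=0):
    # Four-on-the-floor kick for the given number of bars.
    for _ in range(bars * 4):
        port.send(mido.Message('note_on', note=note, velocity=110, channel=channel))
        time.sleep(BEAT / 2)
        port.send(mido.Message('note_off', note=note, channel=channel))
        time.sleep(BEAT / 2)

def climb(control=74, lo=20, hi=120, beats=8, channel=0):
    # Ramp a CC (e.g. a filter cutoff) upwards over a number of beats: the "climb".
    steps = beats * 8
    for i in range(steps):
        value = int(lo + (hi - lo) * i / (steps - 1))
        port.send(mido.Message('control_change', control=control, value=value, channel=channel))
        time.sleep(BEAT / 8)

The idea would be to sit in a Python shell during the set and call kick(4), climb() and so on, the way the livecoding article triggers functions on the fly.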
I'll have to play around a bit more; I know this is possible. |
|
|
carbonic
IsraTrance Junior Member
Started Topics :
10
Posts :
115
Posted : Jan 14, 2010 21:32
|
Quote:
|
On 2010-01-14 19:01, pH_ wrote:
voidstar is on the right track here. I haven't thought about using NNs for music; that's above my head, but it's interesting too.
I know that modern software is able to do this to an extent; I just don't think Ableton can do it to the extent I want. As a programmer for a decade, I think that adding some custom programming could make things even more interesting. Plus, being a programmer, I'm never really satisfied with the way other people's software (even VSTs) works; there's always a feature I want, or one I don't want.
To continue, though: I'm unsure whether you could control a VST from, say, Python, so that executing the script plays or tweaks that VST. If that works, then following the "live programming" idea, a song would have a few scripts to play the kicks, bass, etc., and it would just be a matter of choosing which musical function to play during the performance. You could take it a step further and program functions that tweak a VST's parameters (a simple example would be a "climb" function that tweaks the VST to make it climb). That way you could improvise not only which add-ons are played, but also how they are tweaked and moved.
I'll have to play around a bit more; I know this is possible.
|
|
But isn't that just automation? I mean, even if the most complex algorithm controls, say, a VST, it was all designed in advance, in detail, by the programmer! The exception, of course, is if it uses some random function to generate something, but when it comes to randomness, mhhh, I wouldn't call that art. On the other hand, making computers "feel" the music... my brain can't see that as possible! So apparently someone has to be behind it.
  http://soundcloud.com/morphous |
|
|
phila
Started Topics :
5
Posts :
18
Posted : Jan 14, 2010 23:48
|
SuperCollider (http://supercollider.sourceforge.net/) is good! It is *very* complex in my opinion, but you can do pretty much *anything* with it... I know it at a decent level; I also took a course on it last year... It is cool, but I don't have hundreds of hours to spend on it..
However (I'm not sure about this), Pd (Pure Data) can host VST instruments, and Pure Data can be controlled from Python, so you could write Python scripts that interact with VSTs.. not sure.
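If that route works, the Python side could be as small as this untested sketch. It assumes the python-osc package on the Python side and, on the Pd side, a patch that receives OSC (the mrpeach [udpreceive]/[unpackOSC] objects, I think) and routes it on to whatever is hosting the VST; the port and address names are made up:

from pythonosc.udp_client import SimpleUDPClient

pd = SimpleUDPClient('127.0.0.1', 9000)  # Pd patch listening for OSC on UDP port 9000

pd.send_message('/kick/play', 1)         # trigger a pattern in the patch
pd.send_message('/lead/cutoff', 0.75)    # sweep a hosted VST parameter, 0..1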
But even without Python, I know that Pd is cool: you can make your own custom interfaces, audio generators and so on.. You just need some time to spend on it! |
|
|
nonseq
Started Topics :
0
Posts :
18
Posted : Jan 15, 2010 22:52
|
I'd say it's probably more useful for micro-articulations than for 'launching' samples, which can quickly lead to chaos in the typical, strictly regimented dance track.. |
|
|
realtime
Started Topics :
5
Posts :
350
Posted : Jan 16, 2010 16:20
|
|
epoplive
Started Topics :
0
Posts :
1
Posted : Jan 24, 2010 08:38
|
@ph_
I'm a programmer myself and this is something I've thought about quite a bit. I've noticed a lot of the same patterns in psytrance, and I think it could definitely be auto-generated to some degree. My guess is there would be quite a bit of stuff that didn't come out quite right to the human ear, though, so you would still need to do some tweaking after it created the base for you.
Unfortunately this type of project would also take all of my time, and thus isn't something I can pursue. Perhaps when I retire. :/ |
|
|
braininavat
Started Topics :
5
Posts :
233
Posted : Jan 24, 2010 18:41
|
The problem with machine learning and music is that good music is deceptively hard to quantify.
There is also the problem that, while it may seem simple with only 12 notes to work with, there is a massive combinatorial explosion once you factor in note length and silence, even for a 4-bar melody.
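A rough back-of-the-envelope in Python (deliberately oversimplified: 16th-note steps only, each step either one of the 12 pitches or a rest, ignoring octave, velocity and note length entirely):

steps = 4 * 16           # 4 bars of 16th-note steps
options = 12 + 1         # 12 pitches or a rest
print(options ** steps)  # 13**64, a 72-digit number of possible melodies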
The reality is that, for the amount of time this would take to get something useful, you would have been better off just clicking notes into a piano roll.
That's also why it's almost impossible to find generative music that doesn't sound like exactly what it is: a computer randomly spitting out notes. |
|
|