If we use the definition “music is organized listening,” then the imperative, if we want to create music, is to create listening; without listening, we are not making music.
Interaction is one way to compel listening and engagement with music. The music becomes a live experience. Different every time. Personal. The listener is an integral part of the experience. Listening is not optional.
Is the use of technology imperative in creating Interactive Music?
Why make something interactive?
Why make interactive music?
Why NOT make interactive music?
I automate whatever can be automated to be freer to focus on those aspects of music that can’t be automated. The challenge is to figure out which is which.
How does the system we use to make music dictate the music we make with it?
How important is that level of control to you?
Robert Henke is a composer and co-developer of Ableton Live. His views on composition and music have affected TONS of music producers and millions of music listeners.
How much of the music are you creating (Poiesis) and how much of it are you just recreating (Anapoiesis)?
Does that matter to you?
The browser affords many opportunities for interaction: touch, gyroscope, location, microphone, camera, keyboard, external APIs, and more. The list is constantly expanding.
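As a rough illustration, here is a minimal sketch wiring a few of these inputs to a single handler. All of the APIs below are standard browser APIs; the `onGesture` handler and what it does with the values are my own placeholder, not from any particular piece.

```javascript
// Minimal sketch: subscribing to a few of the browser's input sources.
// onGesture is a hypothetical handler; in a real piece it would drive
// synthesis or playback parameters.
function onGesture(source, value) {
  console.log(source, value);
}

// Touch and mouse, unified as pointer events
window.addEventListener("pointerdown", (e) => {
  onGesture("pointer", { x: e.clientX, y: e.clientY });
});

// Gyroscope / device orientation
window.addEventListener("deviceorientation", (e) => {
  onGesture("gyro", { alpha: e.alpha, beta: e.beta, gamma: e.gamma });
});

// Keyboard
window.addEventListener("keydown", (e) => {
  onGesture("keyboard", e.key);
});

// Location (permission-gated: the browser asks the listener)
navigator.geolocation.getCurrentPosition((pos) => {
  onGesture("location", pos.coords);
});

// Microphone (also permission-gated)
navigator.mediaDevices
  .getUserMedia({ audio: true })
  .then((stream) => onGesture("microphone", stream));
```

Notice that location and microphone require the listener's explicit permission, which folds the listener into the piece before a single sound is made.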
The web is (for now) free and open!
The browser is the means of production and distribution in one, so you are creating with the same tools other people use to view what you create. This lends itself easily to remixing and forking, and to the dialogues those practices open up.
There is a low barrier to distributing software. Programming in other languages requires compiling for specific architectures, download-and-install scripts, and additional security measures. Javascript, on the other hand, is executed as soon as someone lands on your page.
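To make that concrete, here is a hedged sketch of a complete web “instrument” using the standard Web Audio API: paste it into a page and it runs for anyone who opens the URL, no compiler or installer involved. The pitch mapping and envelope times are arbitrary choices of mine. (Browsers do suspend audio until a user gesture, which conveniently forces interaction.)

```javascript
// A complete click-to-play instrument in plain Javascript.
// Anyone who lands on the page can play it immediately.
const ctx = new AudioContext();

document.addEventListener("pointerdown", (e) => {
  ctx.resume(); // browsers keep audio suspended until a user gesture

  const osc = ctx.createOscillator();
  const gain = ctx.createGain();

  // Map horizontal click position to frequency (an arbitrary mapping)
  osc.frequency.value = 100 + (e.clientX / window.innerWidth) * 900;

  osc.connect(gain).connect(ctx.destination);

  // Short envelope so each click is a discrete note
  gain.gain.setValueAtTime(0.2, ctx.currentTime);
  gain.gain.exponentialRampToValueAtTime(0.001, ctx.currentTime + 0.5);

  osc.start();
  osc.stop(ctx.currentTime + 0.5);
});
```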
What are the downsides of working on the web?