|vanDongen-Gilcher||Apr 10, 2003 4:42 am|
|Marc Lavallée||Apr 10, 2003 5:02 am|
|Maurizio Umberto Puxeddu||Apr 10, 2003 12:00 pm|
|Jordan J||Apr 10, 2003 1:13 pm|
|vanDongen-Gilcher||Apr 10, 2003 1:37 pm|
|vanDongen-Gilcher||Apr 10, 2003 2:24 pm|
|Hans-Christoph Steiner||Apr 10, 2003 4:24 pm|
|e skogen||Apr 10, 2003 4:40 pm|
|Marc Lavallée||Apr 10, 2003 7:05 pm|
|Patrick Pagano||Apr 10, 2003 7:49 pm|
|devnull||Apr 10, 2003 8:20 pm|
|Pall Thayer||Apr 11, 2003 12:58 am|
|Patrick Pagano||Apr 11, 2003 3:37 pm|
|Maurizio Umberto Puxeddu||Apr 16, 2003 11:08 am|
|Maurizio Umberto Puxeddu||Apr 17, 2003 4:01 am|
|vanDongen-Gilcher||Apr 17, 2003 4:11 am|
|kiilo||Apr 17, 2003 7:01 am|
|Hans-Christoph Steiner||Apr 17, 2003 7:06 pm|
|vanDongen-Gilcher||Apr 18, 2003 3:12 am|
|Mathieu Bouchard||Apr 18, 2003 4:02 pm|
|Yves Degoyon||Apr 18, 2003 4:05 pm|
|Maurizio Umberto Puxeddu||Apr 18, 2003 4:43 pm|
|Maurizio Umberto Puxeddu||Apr 18, 2003 4:52 pm|
|J. Scott Hildebrand||Apr 18, 2003 5:40 pm|
|jose manuel berenguer||Apr 18, 2003 5:48 pm|
|Maurizio Umberto Puxeddu||Apr 18, 2003 6:06 pm|
|jose manuel berenguer||Apr 18, 2003 7:11 pm|
|Chris McCormick||Apr 18, 2003 8:53 pm|
|Michal Seta||Apr 18, 2003 9:58 pm|
|Michal Seta||Apr 18, 2003 10:00 pm|
|Marc Lavallée||Apr 18, 2003 10:28 pm|
|Bryan Jurish||Apr 19, 2003 12:52 am|
|kiilo||Apr 19, 2003 3:06 am|
|vanDongen-Gilcher||Apr 19, 2003 6:00 am|
|Maurizio Umberto Puxeddu||Apr 19, 2003 6:08 am|
|Chris McCormick||Apr 19, 2003 1:16 pm|
|alex cook||Apr 21, 2003 12:37 pm|
|Subject:||Re: [PD] Re:[OT] How do your performance environments looks like?|
|Date:||Apr 17, 2003 4:11:20 am|
Maurizio Umberto Puxeddu said at "Re: [PD] Re:[OT] How do your performance environments looks like?" [2003/04/16 18:08]:
>> For an interesting example of a haptic musical interface, check out the University of York's Cymatic: http://www-users.york.ac.uk/~smr12/
> This gives me a better idea. It is not by chance that they focus on physical modeling. There are many ways to make sounds with a computer where haptic interfaces make much less sense.
Of course. Although I think there are many "pure" synthetic synthesis methods where haptic feedback can be useful as well. For example: a MIDI fader also gives some form of haptic feedback; you know its position by touch without having to look at the screen. What I am thinking of are situations where a single controller axis controls multiple interconnected parameters. One parameter is obviously perceivable through the position, pitch for instance. But an external sound source might determine the portamento or some kind of modulation effect on that sound. Using force feedback I can make the response of the joystick reflect the synthesis better. For me it is all about designing a performance interface that matches my musical concepts.
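Something like this rough Python sketch is what I have in mind (only pseudo-code to illustrate the idea, not a Pd patch; all the names and mapping curves are made up): one axis drives pitch, an external analysis level stretches the portamento, and a force-feedback strength is derived so that the stick's resistance reflects how far the synthesis lags behind the hand:

# Hypothetical sketch: one joystick axis drives pitch, an external analysis
# value (say, the amplitude of another sound source) sets the portamento time,
# and a force-feedback strength is computed so the stick's resistance reflects
# the state of the synthesis. Names and curves are illustrative only.

from dataclasses import dataclass

@dataclass
class AxisState:
    pitch_hz: float = 220.0    # current (smoothed) pitch
    glide_s: float = 0.05      # portamento time, set from the external source
    ff_strength: float = 0.0   # 0..1 resistance to send to the joystick

def update(axis_pos, ext_level, state, dt=0.01):
    """axis_pos in [-1, 1]; ext_level in [0, 1] from external sound analysis."""
    target_hz = 220.0 * (2.0 ** (axis_pos * 2.0))    # +/- two octaves around 220 Hz
    state.glide_s = 0.02 + 0.5 * ext_level           # louder external sound -> slower glide
    coeff = min(1.0, dt / max(state.glide_s, 1e-3))  # one-pole smoothing toward the target
    state.pitch_hz += coeff * (target_hz - state.pitch_hz)
    # let the player feel the glide: more lag -> more resistance on the stick
    lag = abs(target_hz - state.pitch_hz) / target_hz
    state.ff_strength = min(1.0, lag + 0.3 * ext_level)
    return state

# usage: call update() every control period with the current stick position
s = AxisState()
for _ in range(5):
    s = update(axis_pos=0.5, ext_level=0.8, state=s)
    print(round(s.pitch_hz, 1), round(s.ff_strength, 2))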
> During a recent improvisation workshop, some people complained that I was totally inexpressive while playing, even though I was able to make "gestural" sounds and use a dynamic range more extreme than that of the rest of the (instrumental) players.
> I noticed that my teacher can (on occasion) use an artificial gestuality (that is, not justified by strict interaction with the device). For me this is not necessarily a problem, this split being part of the nature of playing electronic devices.
This split is also part of acoustic instruments, I think. It is also an old discussion in classical music. With a piano the only factors determining the sound are the speed with which the keys are depressed and the position of the pedals. All movements before and after are "artificial". There are a lot of bad pianists with fake expressionist movements; there are also very good pianists who move a lot, and very good ones who sit like a statue. And then there are conductors and solo guitarists in rock bands, singers... :)
But I think movement does have an effect, even if only indirectly. It is often easier to get into the rhythm if your body moves with it, for example. I like to move when I perform, and I always stand when I play electronics.
My dislike of laptop performances is more complex than this. I think that a mouse is too simple and too one-dimensional a controller for serious music performance and improvisation. It is for me, anyway. And I think that I can hear this in performances, and see it reflected in the way of sitting at a desk behind a laptop. The attention of the performer seems focused on moving the cursor to place X on the screen, and not on making sound Y in the room.
What I am more interested in is finding ways of making a computer a musical instrument without losing its possibilities. The biggest difference between a computer and a more traditional instrument lies in the past and future time of the performance. A traditional instrument acts in the now. A computer has access to what has gone before, and you can project into the future. What I am trying to do is make an interface that gives me the flexibility of a traditional instrument (instant change and reaction) without losing the extended time-scale of the computer program. To do this I want the interface to reflect the state of the machine to the player in an intuitive, or at least learnable, way. One way is graphical, another is haptic. In the example above, the external sound source could also be what I played one minute ago, or something chosen from what I played by pattern matching with neural nets.
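To give an idea of the "what I played one minute ago" part, here is another rough sketch (again just illustrative Python, not the neural-net matching I mentioned): keep a rolling one-minute history of analysis frames and pull back the past fragment that most resembles what is being played now:

# Illustrative sketch: store a rolling history of recent analysis frames and,
# on request, return the past fragment most similar to the current playing.
# The nearest-neighbour distance stands in for fancier pattern matching.

from collections import deque

HISTORY_SECONDS = 60
FRAME_RATE = 20                                   # analysis frames per second
history = deque(maxlen=HISTORY_SECONDS * FRAME_RATE)

def record(frame):
    """frame: a short feature list, e.g. [pitch, loudness]."""
    history.append(list(frame))

def recall(current, length=8):
    """Return the past stretch of `length` frames whose start is closest to `current`."""
    frames = list(history)
    if len(frames) <= length:
        return frames
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    best = min(range(len(frames) - length), key=lambda i: dist(frames[i], current))
    return frames[best:best + length]

# usage: feed frames in while playing, then ask for a matching past fragment
for t in range(100):
    record([220 + t, 0.5])
print(recall([260, 0.5]))

A real version could swap the nearest-neighbour distance for trained pattern matching, but the buffering idea stays the same.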
So, that is a long mail. Hope you find it interesting and not too rambling.