The aim of this project is to design a human-computer interface that lets the user play music as on an instrument. There should be a non-trivial mapping between the user's actions and the resulting music, and an advanced user should be able to produce usable music after some training.
We finally decided to build a sword interface that lets fighters create music while fencing. A fight can be given a coordinated choreography in order to learn or rehearse a particular piece of music. Each sword has sensors to detect the hit position on the blade; this position is then translated into a tone from a basic scale assigned to that sword. The computer generates additional tones to create a more complex sound, and a vibration sensor on each sword determines the volume.
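The mapping just described can be sketched in a few lines of Java. The scale, base note, and sensor range below are illustrative assumptions, not the values used in the project:

```java
// Sketch of the hit-position → tone mapping: ten blade contacts mapped onto
// two octaves of a C-major pentatonic scale, and a vibration-sensor reading
// mapped to MIDI velocity. Scale, base note, and sensor range are assumed
// here for illustration.
public class SwordScale {
    // Semitone offsets of a C-major pentatonic scale over two octaves.
    private static final int[] OFFSETS = {0, 2, 4, 7, 9, 12, 14, 16, 19, 21};
    private static final int BASE_NOTE = 60; // MIDI middle C

    /** MIDI note for the contact segment (0..9) that was hit. */
    public static int noteFor(int contactIndex) {
        return BASE_NOTE + OFFSETS[contactIndex];
    }

    /** Map an analog vibration reading (assumed 0..1000) to MIDI velocity (0..127). */
    public static int velocityFor(int sensorValue) {
        return Math.min(127, Math.max(0, sensorValue * 127 / 1000));
    }
}
```

A harder hit yields a larger sensor reading and therefore a louder note, while the hit position alone decides the pitch.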
Read more about the construction of the swords.
Read more about the hardware of the swords.
Read more about the software of the swords.
Here is a list of the problems we encountered during the project:
How to model the electrical contacts between the swords?
We first tried metallic fly screens, but it was too difficult to attach them to the sword in a robust way; even nailing them onto the wood did not prevent them from coming loose.
We then found metallic foil that can be stuck onto the swords. This solved our problem: the foil stickers can easily be connected to our hardware by sticking them over the wire.
How to connect two Phidgets of the same type (InterfaceKit) to the software at the same time?
Last but not least, this was not a real problem, but we had to find out how to create instances of a Phidget in Java, addressing the right interface by its serial ID. Max/MSP can plug in one InterfaceKit by itself with the help of external plugins, but we needed to address two different kits; that is why we still depend on our own external. After digging into the implementation of the iStuff ToolKit, we found out how to address the interfaces and run them in a correct thread beside our working thread.
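The idea of telling the two kits apart by serial ID can be sketched as follows. The `KitDispatcher` class and the serial numbers are made up for illustration; the actual Phidget calls are only indicated in comments, since they require the Phidget library and attached hardware:

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch: one InterfaceKit per sword, each identified by its
// serial number, so events can be routed to the right sword. Serial numbers
// and class names here are assumptions for illustration.
public class KitDispatcher {
    private final Map<Integer, String> serialToSword = new HashMap<>();

    public KitDispatcher() {
        // With the Phidget Java library, each kit would be opened with its
        // own serial so the instance binds to exactly one board, roughly:
        //   InterfaceKitPhidget kit = new InterfaceKitPhidget();
        //   kit.open(SERIAL_SWORD_A);
        //   kit.waitForAttachment();
        // Input-change events then arrive on the library's event thread,
        // separate from our working thread.
        serialToSword.put(123456, "sword A"); // assumed serial IDs
        serialToSword.put(654321, "sword B");
    }

    /** Route an event to the sword whose kit reported it. */
    public String swordFor(int serial) {
        return serialToSword.getOrDefault(serial, "unknown");
    }
}
```

Keeping the serial-to-sword mapping in one place means the event handlers themselves stay identical for both kits.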
There are several things that could be added, and existing features that could be improved. Our swords can only generate discrete tones, because we have ten separate contacts on each sword. It would be interesting to hear how it sounds with more tones, using a continuous covering of the blades. To modify the sound further, more sensors could be added, for example a tilt sensor, which would allow a lightsaber-like effect. By doing user tests we could find out what the typical movement and contact patterns are, and the sound generation might then have to be adapted to the results.
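A continuous covering could map the hit position directly to a frequency instead of picking from ten fixed notes. A minimal sketch, assuming a one-octave range starting at 220 Hz (both values are arbitrary choices for illustration):

```java
// Sketch for a continuous blade covering: hit position as a fraction of the
// blade length in [0,1] is mapped exponentially onto one octave, so equal
// distances along the blade correspond to equal musical intervals.
public class ContinuousBlade {
    static final double BASE_HZ = 220.0; // assumed base pitch (A3)

    /** Frequency in Hz for a hit at the given fraction of the blade length. */
    public static double frequencyAt(double position) {
        return BASE_HZ * Math.pow(2.0, position);
    }
}
```

The exponential mapping matters: a linear one would compress the perceived pitch steps toward the tip of the blade.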