Implementation of the Multi-Touch Engine and the Control Gestures
Touch Detection
Basic touch detection and handling is done by the chair's own multi-touch solution, developed and implemented by Malte Weiß. An application called the Multi-Touch Agent runs in the background and handles the transformation of raw camera data into usable touches. These touches are then distributed to all applications registered with the agent.
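Since the agent's actual interface is not reproduced here, the following is only a minimal sketch of what the application side of such a distribution scheme could look like; Touch, TouchListener, MultiTouchAgent, and all member names are illustrative and not the real API.

#include <vector>

// Hypothetical sketch of the application-side view of a touch-
// distributing agent; all names are illustrative, not the real API.

struct Touch {
    int   id;   // stays constant while the finger remains on the surface
    float x, y; // position on the 2D surface, in screen coordinates
};

class TouchListener {
public:
    virtual ~TouchListener() = default;
    virtual void touchDown(const Touch& t)  = 0;
    virtual void touchMoved(const Touch& t) = 0;
    virtual void touchUp(const Touch& t)    = 0;
};

// The agent forwards every touch it extracts from the raw camera
// data to all applications that registered a listener with it.
class MultiTouchAgent {
public:
    void registerListener(TouchListener* l) { listeners.push_back(l); }

    void dispatchDown(const Touch& t) {
        for (TouchListener* l : listeners) l->touchDown(t);
    }

private:
    std::vector<TouchListener*> listeners;
};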
Transforming a 3D Space Into a 2D Interactable Surface

Selecting an object in 3D while interacting only on a fixed 2D surface introduces several difficulties, first and foremost occlusion. We follow the approach "what you see is what you get": we will try to implement the game such that it never offers too many choices at once, and we will especially avoid choices that would be unreachable due to occlusion. The game grid is a Cartesian space divided into cubes of equal size. Assuming we know the center of such a cube (and we do!), how do we decide whether a touch on the surface belongs to this particular cube?
Here we benefit from our user-centered perspective: since we do all the projection math ourselves and don't leave it to OpenGL, all we need to do is save the current transformation matrix each frame and put the 3D point we want to project through it. We take the center of a selectable cube, multiply it with the four-by-four transformation matrix, and get its exact position on the 2D screen.
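As a concrete illustration, here is a minimal sketch of this projection and the resulting hit test, assuming a row-major four-by-four model-view-projection matrix that was saved for the current frame; Mat4, project, hitsCube, and the pixel tolerance are hypothetical names and values, not the game's actual code.

#include <cmath>

// Sketch: project a cube center with the saved 4x4 matrix and test
// whether a touch lands on it. All names here are illustrative.

struct Vec2 { float x, y; };
struct Vec3 { float x, y, z; };
struct Mat4 { float m[4][4]; }; // row-major model-view-projection matrix

// Multiply (x, y, z, 1) by the saved matrix, do the perspective
// divide, and map normalized device coordinates to pixels.
Vec2 project(const Vec3& p, const Mat4& mvp, float screenW, float screenH) {
    float v[4] = { p.x, p.y, p.z, 1.0f };
    float r[4];
    for (int i = 0; i < 4; ++i) {
        r[i] = mvp.m[i][0] * v[0] + mvp.m[i][1] * v[1]
             + mvp.m[i][2] * v[2] + mvp.m[i][3] * v[3];
    }
    float ndcX = r[0] / r[3]; // perspective divide
    float ndcY = r[1] / r[3];
    // NDC runs from -1 to 1; map it to pixel coordinates,
    // flipping y because the screen origin is top-left.
    return { (ndcX + 1.0f) * 0.5f * screenW,
             (1.0f - ndcY) * 0.5f * screenH };
}

// A touch selects a cube when it lands close enough to the cube's
// projected center; the 30-pixel radius is an illustrative tolerance.
bool hitsCube(const Vec2& touch, const Vec3& cubeCenter,
              const Mat4& mvp, float screenW, float screenH,
              float radius = 30.0f) {
    Vec2 c = project(cubeCenter, mvp, screenW, screenH);
    float dx = touch.x - c.x, dy = touch.y - c.y;
    return std::sqrt(dx * dx + dy * dy) <= radius;
}

If several cubes project close to the same touch, the game could simply pick the one whose projected center is nearest; combined with the rule of never offering occluded choices, this keeps the selection unambiguous.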