FTIR Multitouch and Display Device
- A Guide to build your own -
Experiments with Processing, OSC
 
 

hello dear reader. on this site you will find some notes and tips on building your own FTIR (Frustrated Total Internal Reflection) input device that actually works! i searched the net for quite some time to gather all the information needed to get it up and running. the other part is pure software, and the possibilities to create funny and also useful things are immense.

this site does not run any forum. there are good blogs / forums, and many snippets of information around. for example:

[http://tinker.it/now/2007/02/28/multitouch-table-experiment/]

[http://www.multitouch.nl] (Harry van der Veen, one man behind nuigroup)

nuigroup is a general site with good information. "we are an interactive media group researching and creating open source machine sensing techniques to benefit artistic and educational applications". there are good forums with tips on hardware and software in different flavours:

[http://nuigroup.com/forums/]

[http://stud.hgkz.ch/tim.roth/wordpress/] (Tim Roth, using one of the first bloody MTAK IIIs i sold; i got good improvement tips from him)

it took me about 3 weeks (somewhat more by now) to get all the right parts, build it up and write the basic 'test-if-it-works' code, which controls a virtual synth running in Reaktor over OSC (Open Sound Control). since i couldn't have built it without the experiences others shared, i want to share my experiences with you as well.

the inventor / pioneer of low-cost FTIR multitouch devices is, to my knowledge, Jeff Han:

[http://cs.nyu.edu/~jhan/]

btw: this is what inspired me initially, it's a short demo of ReacTable (using ReactiVision):

[http://www.youtube.com/watch?v=0h-RhyopUmc]

after digging deeper into the subject for a while, i realized that there are some important differences between a 'Jeff-Han-style' table/wall and the 'ReacTable-style' tables. the following notes are mainly on the plain FTIR approach. diffused illumination (DI) aspects and physical object tracking/pattern recognition are related topics. the following clip of Jeff Han shows a perfect combination of hardware and software:

[http://www.youtube.com/watch?v=9zGDNFpOMcA]

if you have any comments on the following notes, don't hesitate to contact me directly: tangible [at] 001.ch

started 1. april 2007, last update: 21. august 2007

new multitouch shop here:

[http://www.lowres.ch/mts]


but now, let's get started!

 
basic setup, some notes on (F)TIR  
 

Total Internal Reflection (TIR) is described here:

[http://en.wikipedia.org/wiki/Total_Internal_Reflection]

an overview of the electromagnetic spectrum:

[http://en.wikipedia.org/wiki/Image:Electromagnetic-Spectrum.png]

radiation with wavelength < 400 nm is "bluer than blue", ultraviolet

radiation with wavelength > 700 nm is "redder than red", infrared

everything in between is visible to the human eye.

for the following setup, we are operating roughly in the 'near IR' spectrum. the following site gives some interesting insights, from the photographer's point of view:

[http://www.kenrockwell.com/tech/ir.htm]

back to FTIR. in short: when light enters a transparent material such as acrylic (glass not tested) at a certain angle, it won't leave the material again but is reflected internally. if a finger is pressed on the plate, the light rays reflected at the top surface of the acrylic underneath the finger get 'frustrated' (that's the F in FTIR) and some portion of the light no longer stays in the material. this means that some light leaves the plate directly under the point where a finger is pressed, optimally at a 90° angle to the plate. we can now capture all pressure points (IR light spots) digitally and then do something with this information.
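how steep 'a certain angle' is follows from Snell's law: rays hitting the surface at more than the critical angle arcsin(n2/n1) stay inside. a quick sketch (python for brevity; the refractive indices are typical textbook values, not measurements):

```python
import math

# TIR: light stays inside a medium when it hits the boundary at more than
# the critical angle, theta_c = arcsin(n_outside / n_inside).
def critical_angle_deg(n_inside, n_outside):
    return math.degrees(math.asin(n_outside / n_inside))

N_ACRYLIC = 1.49  # assumed typical value, check your material's datasheet
N_AIR = 1.00
N_SKIN = 1.44     # rough value; skin contact raises the critical angle

print(round(critical_angle_deg(N_ACRYLIC, N_AIR), 1))   # ~42 deg: flatter rays stay trapped
print(round(critical_angle_deg(N_ACRYLIC, N_SKIN), 1))  # ~75 deg at an acrylic/skin boundary
```

a finger pressed on the plate effectively replaces the acrylic/air boundary with an acrylic/skin boundary, so rays between roughly 42° and 75° are no longer totally reflected. that is the 'frustration'.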

the acrylic plate is flooded with IR (infrared) light from LEDs (Light Emitting Diodes) in a non-human-visible wavelength spectrum of about 900 nm. the camera that captures the plate from beneath has its IR filter removed (so it can see all IR light) and another filter mounted that blocks visible light, so it only 'sees' the points where the light leaves the plate (and not any other visible stuff like a projected image on the same surface). with imaging techniques, namely blob detection, these light spots are identified and tracked. normally, the plate is also used as the projection surface for the application. presses can give direct visual feedback on the plate, but of course this is not mandatory.

[http://www.cs.nyu.edu/~jhan/ftirsense/ftirschematic.gif]

 

above pictures show action on MTLF-I, raw pictures from the webcam without processing.

check out this very cool java applet that dynamically shows how the light from a LED is refracted at different angles (LED rays, LED position, acrylic edge cut) and distances to the acrylic:

(click on a picture above to start the applet).

the applet was written by [Jason Modisette], thanks a lot for this contribution!

 
list of tools and materials needed  
 

materials:

- acrylic plate, in a size that works for you. make it compatible with the screen size you will use, so 4:3, 16:9 or so. i took a 640 * 480 * 10 mm plate.
**TIP** don't go under 8 mm of thickness. i started with a 5 mm plate, without success. with 10 mm, you have enough surface for a nice polish and for the LEDs to shine in. buy at any glass/acrylic shop.

- IR LEDs. i bought some TSAL 6200, I(F) 100 mA, V(F) ~ 1.5 V, peak wavelength 940 nm. depending on the size of the plate, you scale the number of LEDs. i mounted 42 LEDs each on the top and the bottom (the long sides), with a spacing of 1.5 cm. with a good configuration, fewer are probably sufficient, but i wanted to be sure that the plate is flooded well. buy at any electronics shop, for instance [http://www.farnell.com].

- resistors. to run the LEDs, we also need resistors. the number of resistors and the value of the resistance depend on the electronic circuit you build. i took an old PC power supply, used the 12 V output, and then put several blocks (7 x LED + 1 resistor) in parallel. the resistor should be around 30 ohm to get the best results. measure the output of your power supply: if it shows only 10 volts at the 12 volt output, take that into consideration. best buy the resistors along with the LEDs.

- some sort of frame the LEDs can be mounted to. i took an aluminium L-profile and just drilled in the 5 mm holes for each LED. the acrylic plate is then put on this L-profile, with the LEDs positioned right in the middle so they shine directly into the edge (radially).

- some sort of stand that lets you put on the frame with the acrylic plate and that is of course open at the bottom for camera and projection.

- a mirror - you can't go wrong. just don't buy those plastic mirrors. it should not be too small; i took the same size as the acrylic plate. 30. June 2007: you can go wrong just a bit. if you want the best possible results, you must buy an 'optical' mirror, for instance here [http://www.screen-tech.de]

- some wires, soldering material, Molex connector. this connector simplifies the connection to a PC power supply [http://www.molex.com] (search for "disk drive power connector"). if you have another source, you may attach it in other ways.

- sand paper, from quite fine to very fine

- polishing paste, of the kind sometimes used to make watches with acrylic glass shine again.

- projection surface: thin, milky paper works very well. you get this at office shops. it is difficult to find off-the-shelf sizes that are large enough to cover the plate. it may be necessary to cover it using more than one sheet, which is a bit awkward because the fixation then doesn't take place only at the edges. the deluxe variant is Rosco rear projection screen [http://www.rosco.com/]. maybe you will need silicone for a better touch feeling. i got some Elastosil M4641 here [http://www.formensilicone.ch] to do basic testing. maybe some IR-blocking foil is handy, for instance from here [http://www.ifoha.de]. i will order a surface from Harry van der Veen ([http://www.multitouch.nl/]).

devices:

- video source: webcam (simplest) or analog video cam whose signal is digitized by a capture card. if you have a sony cam with nightshot function, you are lucky concerning the IR filter. if you have an old webcam, use that one. the modification to remove the filter is doable.

[http://www.consumer.philips.com/consumer/catalog/....]

[http://www.unibrain.com/Products/VisionImg/tSpec_Fire_i_BC.htm].

[http://www.ptgrey.com/products/fireflymv/index.asp]

<<
Q: What is CMOS camera. What is it different from CCD camera?

A: CMOS sensor is a kind of sensor that is normally 10 times less sensitivity then CCD sensor. As human eye can see object under 1 lux illumination (full moon night ). CCD sensor normally will see better or as good as human eye in the range of 0.1 ~3 lux and are 3 to 10 times more sensitive then CMOS sensor.
>> [http://www.ktnc.co.kr/support_01.asp#6]

to get the best results, the IR wavelength of the diodes used should match the range where the camera operates with the highest IR sensitivity (spectral sensitivity characteristics). the following picture shows an example of such a graph:

be careful when reading the specs on the box of cameras standing on the shelf. for instance, 30 fps does not automatically mean that this frame rate is available at max resolution. some cameras even seem to duplicate frames to claim higher frame rates (??).

- optional: an alternative lens to the one that is normally delivered along with the camera. take one with a suitable focal length and F-number for the target setup.

[Focal Length and Aperture Explained for the Photography Novice]

normally optional lenses are sold at the same place as the camera but there are a lot of e-bayers selling different styles of them.

- optional: IR bandpass filter

[http://www.thorlabs.com//NewGroupPage9.cfm?ObjectGroup_ID=1001]

august 07: first tests with an IR bandpass filter in both FTIR and DI setups were not satisfying. only IR light that comes from the LEDs directly and at a narrow angle seems to pass the filter; all indirect light does not. i don't get this one straight.

- video projection: a beamer (projector), if you want to display onto the acrylic, or a monitor

- an old PC powersupply or a good 12 Volts source

tools:

- drilling machine

- jigsaw

- hot glue

- screwdriver

- soldering iron

- multimeter

it's not yet time to mention any software stuff here. let's move on to the development of the multitouch FTIR hardware!

 
preparing the acrylic plate: let it shine!  
 

if you get the acrylic plate from a good place, you can have them cut it right to the size you want. ask your seller to finish the edges as well as possible (planing).

now rub over the edges (at 90° ..) with sandpaper, from xxx up to 600 grit, then switch to a polishing paste. the paste is applied with cotton wool (you may use the sticks you tend to use for ear-cleaning).

when you think it's good enough, take off the protective foil from the plate.

 
fixing the projection surface: just paper  
 

there are different products for multimedia projection purposes which i didn't test, but i read that they are not much better than a suitable paper: the milky paper that is used to copy something while holding the paper over the subject. i think architects also use a kind of milky paper when they work old-school.

if the projection surface is big and the paper small, more than one sheet must be used. the problem is, every fixation point of the paper on the acrylic plate provokes the FTIR effect, so these should generally be at the edges. if that is not avoidable, use just a very thin line of transparent tape. i managed to cover the 640 x 400 mm plate with some A3 papers.

although the sharpness is good, it still looks a bit crappy. it's better now..

august 07: a little how-to on building up an easy acrylic plate for FTIR use

basic layering profile:

====(0.8 cm reflective top cover / aluminium tape)
-------------------...
| acrylic plate
-------------------
...
-------------------...
| diffuser (non-IR-blocking), for instance milky paper
-------------------...
========(1 cm aluminium tape, also used for fixation of diffuser)
***********(supporting of plate)

this setup is not suited for pattern recognition since patterns cannot be seen sharply enough from beneath. it works well when touching directly on the acrylic plate. improvements in the drag feeling can be achieved with a very thin oil film on top of the acrylic. the diffuser is fixed with aluminium tape at the edges. it is important that the top coverage is a bit less wide than the bottom coverage; otherwise light rays would be reflected at an unwanted angle and seen by the camera.

another possible layering profile is the following:

....------------------...
....| diffuser, for instance Rosco projection screen
....------------------...
....------------------...
....| silicone rubber
....------------------...
====(0.8 cm reflective top cover / aluminium tape)
------------------...
| acrylic plate
------------------...
***********(supporting of plate, reflective)

depending on the diffuser used, pattern recognition is possible through the layers with rear illumination.

 
putting together the LEDs and resistors, mounting to frame  
 

there is no generic cook book for this, but the following setup works without something beginning to boil and smell.

a good tool that helps creating circuits is for instance qucs:

[Qucs project: Quite Universal Circuit Simulator]

an online tool just for LED arrays is found here:

[LED series parallel array wizard]

it's a good idea to first mount the LEDs to the L-profile, so you can connect them more easily. drill a hole every 1.5 cm with a 0.5 cm borer (the same diameter as the LEDs) and put in a LED.

a LED has two legs, where the longer one is called the anode and the shorter one the cathode. now first connect the series resistor to the first LED (anode). then the following 6 LEDs: cathode of the first with anode of the second, cathode of the second with anode of the third and so on. after the 7 LEDs, one block is done. now build 12 of those blocks (.....:) that's 2 x 6, for the top and for the bottom 'LED-strip'. then connect the blocks in parallel to a 12 volt power source; this means the series resistor of each block goes to the +12 volt pole, and the cathode of the last LED of each block goes to ground.
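the electrical numbers behind these blocks can be sanity-checked with Ohm's law on the voltage left after the LED forward drops. a python sketch with the values from this guide (reading the ~30 ohm figure as a way to run the LEDs at about half their 100 mA maximum is my interpretation, not something measured):

```python
# series resistor for one block: R = (V_supply - n * V_led) / I_target
def series_resistor(v_supply, v_led, n_leds, i_target):
    v_rest = v_supply - n_leds * v_led  # voltage the resistor has to drop
    if v_rest <= 0:
        raise ValueError("too many LEDs in series for this supply")
    return v_rest / i_target

# TSAL 6200 block: 7 LEDs at V(F) ~1.5 V on the 12 V rail
print(series_resistor(12.0, 1.5, 7, 0.100))  # 15 ohm at the full 100 mA
print(series_resistor(12.0, 1.5, 7, 0.050))  # 30 ohm -> a gentler ~50 mA

# the whole frame: 12 parallel blocks with ~30 ohm resistors
N_SERIES, N_BLOCKS, R = 7, 12, 30.0
i_block = (12.0 - N_SERIES * 1.5) / R   # current through one block
i_total = i_block * N_BLOCKS            # load on the 12 V rail
print(N_SERIES * N_BLOCKS)              # 84 LEDs in total
print(round(i_total, 2))                # ~0.6 A, fine for a PC power supply
```

if your supply really delivers only 10 volts under load, plug that value in instead and the resistor comes out smaller.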

now we're done with this part!

first steps for a prototype before going big

prepared aluminium profiles

the LEDs mounted on the L-profile, with a second one behind to cover the LEDs' legs

now also covered from the top. as you can see, there is space so you can put in the acrylic plate easily.

i wanted a frame that i can fold up, so i made the corners flexible.

 
hacking the camera: dumping IR filter, mounting visible light filter  
 

i suggest you start with a relatively cheap webcam and 'hack' it. basically you must remove the IR filter and mount a visible light filter, which can be a little square of magnetic foil from a good old floppy disk or the black part of a developed photographic film (put 2-3 layers together).

there are good guides helping you in this task for your camera here:

[http://www.hoagieshouse.com/IR/]

[http://www.multitouch.nl/?p=9]

cameras:

[http://www.consumer.philips.com/consumer/catalog/....]

[http://www.unibrain.com/Products/VisionImg/tSpec_Fire_i_BC.htm].

[http://www.ptgrey.com/products/fireflymv/index.asp]

CCD is 'better' than CMOS in our context. the sensitivity of CCDs to IR light is dependent on wavelength. to get the best results, the IR wavelength of the diodes used should match the best operating level of the camera (check the 'spectral sensitivity characteristics').

optical filters (IR bandpass):

[http://www.thorlabs.com//NewGroupPage9.cfm?ObjectGroup_ID=1001]

a paper on optical filter design:

[https://www.omegafilters.com/pdfs/oem_guide.pdf]

several board-level cameras have a microlens holder using an M12 microlens. other popular standards are CS and C mount. lens mounting standards:

[http://en.wikipedia.org/wiki/List_of_lens_mounts]

custom adapters:

[http://www.truetex.com/micad.htm]

almost every webcam can be opened one way or the other, and the lens(-plastic-screw-thing) can be unscrewed. the IR filter is on the bottom of this 'screw', meaning it can be seen when the lens is upside down. sometimes it is glued in. in my experience, you can simply cut 1 mm off this 'screw-lens' with a good knife, cutting off the IR filter together with a small part of the plastic cover (you will get a plastic ring and the lens). then the lens can be screwed back into the place it was, a little bit shorter now.

do the test: use your TV remote control. if you film it with the modified webcam, it should flicker very brightly. if you now put on the visible light filter, you should see only the light from the remote control. depending on the environment, you may also have the sun or other light sources bringing in IR light.

the camera part is maybe not the most time-consuming but the trickiest one (cutting off the IR filter...). you could damage your camera. another way the community proposes is to simply buy a new IR-ready lens on eBay, so you just have to exchange it for the original lens (you still need to open the camera).

to enable IR separation for different illumination models or for general optimization, a bandpass IR filter can be used (see links above).

my crappy old webcam did the job quite well. since the adjustment is a question of millimeters, you will need a good holder for your cam.

14. May 2007: another camera / lens solution

camera and lens from unibrain. in my experience, for most applications 15 fps is nearly enough if all other parts perform well. be sure that you use the right drivers to get the best results.

the remote lens holder and the lens (upside down) with a little square of photofilm as the visible light filter.

 
doing the first success test  
 

put the acrylic plate in the frame and power up the LEDs. start any tool that shows the capture of your modified webcam. hold the webcam beneath the plate. press the plate with your fingers. if you see white blobs, that's cool! it's now a question of calibrating, fine-tuning, software and performance of your PC(s).

basic setup of the stand

it's a good idea to make a fixation so you always build up the stand with the same distances. otherwise it's a question of luck to get the optics fitting right.

[http://www.brainbell.com/tutors/A+/Hardware/Power_Supply_Connectors.htm]

[http://www.tavi.co.uk/ps2pages/ohland/95pwrconn.html]

[http://dundee.cs.queensu.ca/wiki/index.php/Building_a_Multi-Touch_Sensitive_Table]

[http://www.whitenoiseaudio.com/touchlib/faq.html]

14. May 2007: another beamer/mirror/table setup

beaming from above, over a relatively small mirror. the mounting board is a stand-alone part with a little surface on top covering the beamer's connectors.

the beamer board from the front. the mug on the table is in fact a rotator control (experimenting with reacTIVision and Processing... more soon).

 
coming to the software part: processing pixels  
 

to recognize the light spots, some image processing is needed. there are several tools that simplify these procedures. i've tried 'vvvv' and now use 'Processing' because i like its java style and good documentation. i found a library for Processing that does just blob detection. everything is built around this functionality.
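conceptually, blob detection here means: threshold the IR frame, group bright pixels into connected regions, and report each region's centroid and size. this is not the API of the v3ga library, just a minimal sketch of the idea (python for brevity):

```python
from collections import deque

def detect_blobs(frame, threshold=200, min_size=4):
    """Threshold a grayscale frame, flood-fill connected bright regions
    (4-connectivity) and return (centroid_x, centroid_y, size) per blob."""
    h, w = len(frame), len(frame[0])
    seen = [[False] * w for _ in range(h)]
    blobs = []
    for y in range(h):
        for x in range(w):
            if frame[y][x] >= threshold and not seen[y][x]:
                pixels, queue = [], deque([(x, y)])
                seen[y][x] = True
                while queue:
                    px, py = queue.popleft()
                    pixels.append((px, py))
                    for nx, ny in ((px+1, py), (px-1, py), (px, py+1), (px, py-1)):
                        if 0 <= nx < w and 0 <= ny < h and not seen[ny][nx] \
                                and frame[ny][nx] >= threshold:
                            seen[ny][nx] = True
                            queue.append((nx, ny))
                if len(pixels) >= min_size:  # drop noise specks
                    cx = sum(p[0] for p in pixels) / len(pixels)
                    cy = sum(p[1] for p in pixels) / len(pixels)
                    blobs.append((cx, cy, len(pixels)))
    return blobs

# tiny synthetic 'IR frame': one 2x2 bright spot
frame = [[0] * 8 for _ in range(8)]
for y in (2, 3):
    for x in (5, 6):
        frame[y][x] = 255
print(detect_blobs(frame))  # [(5.5, 2.5, 4)]
```

the min_size filter plays the role of the min/max blob size sliders mentioned further down: tiny one-pixel specks are usually noise, not fingers.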

[http://dev.processing.org/]

[http://v3ga.net/blog/2006/10/blobdetection-site/]

touchlib from whitenoise turned out to be a very powerful out-of-the-box multitouch project. it runs on windows, but ports (it is written in C++) are welcome. one of its great features is the calibration tool. if you have an initial FTIR setup and want to test it quickly, your best bet is touchlib running a demo app like 'smoke' or 'pong' after doing the calibration. with the osc extension, virtually anything else can be done. touchlib really does what it says.

[http://www.whitenoiseaudio.com/touchlib/]

 
using OSC to control devices  
 

once we have the raw data from the FTIR input device (the coordinates of the pressure points and maybe the pressure strength/size), we can do with it what we like. a common way of controlling music instruments is MIDI (Musical Instrument Digital Interface), but many software-based music instruments also support OSC (Open Sound Control). since there is a good OSC java library, it only took a short while until such commands could be sent over the net to a sound generator accepting OSC. i made a test with a synth running in Reaktor, and it works nicely. **TIP**: not all applications accept OSC bundles. in this case, just send the messages serially.
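for illustration, this is roughly what a single OSC message (not a bundle) looks like on the wire: a null-padded address string, a type tag string, then big-endian arguments. a python sketch of the encoding (in the actual sketch the oscP5 library does this for you; /Pitch1 is just an example address):

```python
import struct

def osc_pad(b):
    """OSC strings are null-terminated and padded to a 4-byte boundary."""
    b += b"\x00"
    while len(b) % 4:
        b += b"\x00"
    return b

def osc_message(address, *floats):
    """Build one raw OSC message with float32 arguments, ready to be
    sent as a single UDP datagram (serially, without bundling)."""
    msg = osc_pad(address.encode("ascii"))
    msg += osc_pad(("," + "f" * len(floats)).encode("ascii"))
    for f in floats:
        msg += struct.pack(">f", f)  # big-endian 32-bit float
    return msg

packet = osc_message("/Pitch1", 60.0)
print(len(packet))  # address (8) + type tag (4) + float (4) = 16 bytes
```

sending each message in its own datagram like this is exactly the 'serial' fallback for applications that don't understand bundles.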

[http://www.cnmat.berkeley.edu/OpenSoundControl/OSC-spec.html] OSC spec V1.0 by Matt Wright

[http://www.sojamo.de/] oscP5, controlP5 by Andreas Schlegel

 
a handy sketch for the toolkit 'Processing' as a starting point  
 

although i like linux very much, i started this project on a win32 machine. since Processing is also available for linux and mac OSX, most libs and code can be used there as well. on the win32 side, the bridge from the webcam to the application (for capturing) is done via WinVDIG, which works in conjunction with the QuickTime libs.

prerequisites:

- Processing (java included)

- a running WinVDig (you can test this with the QTCap application included)

[http://www.old-versions.net/] QuickTime 6.5

[http://www.vdig.com/WinVDIG/] by Tim Molteno. V1.01 works well with QT6.5

download the sketch: lowres_all.pde

also download needed libs:

[http://www.sojamo.de/]

[http://www.shiffman.net/2006/05/18/moviemaker/]

download simple stupid lowres help lib: pressure.jar

download a movie with pressures as test input: tangible-test.mov

the sketch will need a font named BistreamVeraSansMono-Roman-20.vlw. just create a font of your flavour with the font creation function in Processing and replace it.

use it, modify it, let improvements flow.

some of the switches cannot be controlled while the sketch is running. tweak the booleans at the beginning of the file to choose whether you process a live capture signal or a movie. the variable polyphony is used in conjunction with the OSC logic and defines how many pressure points will be accepted. it defines the number of available channels, which are managed in a kind of pool. an assigned channel stays the same while pressing and dragging and is put back into the pool when the finger is released. the figures below show the assigned channel textually at the last position: x/y/channel

figure 'calibration': simplifies beamer>mirror>projection surface and camera>mirror>acrylic plate adjustments

figure 'main screen': with the toggle controls, you can turn on/off several functions such as binary cut (adaptive thresholding algorithm to turn the original color image into a black & white one), osc, create movie, rotate input 180°, show calibration help, show debug. with the slider controls, you can adjust the recognition parameters: binary cut level, blob luminosity, min/max blob size, blur radius.
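the idea behind an adaptive 'binary cut' can be sketched very compactly (python for brevity; the sketch's actual algorithm and parameters may differ): a pixel turns white only when it is clearly brighter than the mean of its local neighbourhood, which makes the cut robust against uneven illumination.

```python
def adaptive_threshold(img, radius=1, offset=10):
    """Binary cut sketch: white where pixel > local mean + offset, else black."""
    h, w = len(img), len(img[0])
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            # mean over the (2*radius+1)^2 window, clipped at the borders
            vals = [img[j][i]
                    for j in range(max(0, y - radius), min(h, y + radius + 1))
                    for i in range(max(0, x - radius), min(w, x + radius + 1))]
            if img[y][x] > sum(vals) / len(vals) + offset:
                out[y][x] = 255
    return out

# a dim background with one bright spot: only the spot survives the cut
img = [[10, 10, 10, 10],
       [10, 10, 200, 10],
       [10, 10, 10, 10]]
result = adaptive_threshold(img)
print(result[1][2])  # 255
print(result[0][0])  # 0
```

the offset parameter plays the role of the 'binary cut level' slider: raise it and fewer pixels make the cut.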

if you have a setup, where the beamer is in front of you, beaming the signal over a mirror to the plate you are touching, the beamer is configured to display 180° rotated. normally you will need some keystone correction. the camera can be installed 180° rotated, or you can use the software switch (at some performance cost of course).

another goodie is the switch to create a movie. everything except the controls is recorded into a motion jpeg file. tweak file format, frame rate etc. in the source. click the toggle to start recording, and again to finish the movie.

with controlP5, it is *extremely* simple to add controls that influence a variable of choice. **TIP**: with alt+h you can hide all controls. for instance, you toggle on calibration mode and then hide all controls. another example: press alt+s, and your current set of controls and its values are saved to an xml file. press alt+l to load a previously saved gui-set! try also alt+k to get a keyboard-only input possibility. this lib is great! you can even drag a control to another place on the gui while the sketch is running (press alt).

[http://v3ga.net/blog/2006/10/blobdetection-site/]

[http://www.sojamo.de/] oscP5, controlP5 by Andreas Schlegel

[http://www.shiffman.net/2006/05/18/moviemaker/] by Daniel Shiffman

 
the application: controlling a simple polyphonic synth  
 

i had the possibility to test the Processing sketch with a very simple synth running in Reaktor 5. it is built up extremely simply and is scalable to more channels.

one channel is basically an oscillator and a prebuilt block providing gate, cutoff and ADSR. every channel has its pitch configured to accept OSC messages of the form /Pitch<channelnumber> <value>. i put the pitch control on the x-axis of my multitouch panel. the y-axis can be configured, for instance, to control cutoff.

the gate on/off controlling is one of the tricky parts. the goal is that if a finger pressure stays at the same or almost the same place over several frame processing cycles, the oscillator should stay gated, and stop if the finger is released or the deviation from the previous coordinates is too high. ok, this involves some kind of motion tracking. it is done straightforwardly, by keeping a vector with the previous pressure points and comparing the new ones with the old ones, respecting the deviation setting. if a previous pressure point is found inside the deviation limit, the assigned channel will be reused and the gate will stay on. otherwise, the channel is released back to the 'pool' and gate off will be sent to this channel.

the channel assignment depends on the polyphony. polyphony must be set to the number of channels your synth will be able to handle in parallel. if not using OSC, set this value to at least 20 (= 2 persons pressing all 10 fingers on the plate). only a number of pressure points equal to polyphony will be assigned and processed.

in short: pressure points must be tracked. channel assignment must be done respecting the previous assignment map. channel assignment depends directly on polyphony. a polyphony=3 example:

cycle/channel1/channel2/channel3
0/free/free/free -> press finger 1
1/used(1)/free/free -> press finger 2
2/used(1)/used(2)/free -> release finger 1
3/free/used(2)/free -> press finger 3
4/used(3)/used(2)/free -> press finger 4
5/used(3)/used(2)/used(4) -> press finger 5
6/used(3)/used(2)/used(4) -> blocking

one could change the algorithm so it behaves in a FIFO way, like most MIDI synths do: the note that has been on the longest is dropped, to assign new note-on events to the freed channel.
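the pool behaviour shown in the table above can be sketched like this (python for brevity; the deviation value and the coordinates are made up for the example, and the real sketch's code may differ in detail):

```python
def dist2(a, b):
    return (a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2

class ChannelPool:
    """A touch keeps its channel while it stays within `deviation` of its
    last position; released touches free their channel (gate off);
    touches beyond the polyphony are blocked."""
    def __init__(self, polyphony, deviation=10.0):
        self.free = list(range(1, polyphony + 1))  # available channel numbers
        self.active = {}                           # channel -> last (x, y)
        self.dev2 = deviation ** 2

    def update(self, points):
        assigned, leftover = {}, []
        # 1. try to match each point to a previously assigned channel
        for p in points:
            match = None
            for ch, prev in self.active.items():
                if ch not in assigned and dist2(p, prev) <= self.dev2:
                    match = ch
                    break
            if match is None:
                leftover.append(p)
            else:
                assigned[match] = p
        # 2. channels of released touches go back to the pool
        for ch in self.active:
            if ch not in assigned:
                self.free.append(ch)
        self.free.sort()
        # 3. new touches take channels from the pool; the rest is blocked
        for p in leftover:
            if self.free:
                assigned[self.free.pop(0)] = p
        self.active = assigned
        return assigned

pool = ChannelPool(3)
print(pool.update([(0, 0)]))              # finger 1 -> channel 1
print(pool.update([(0, 0), (100, 0)]))    # finger 2 -> channel 2
print(pool.update([(100, 0)]))            # finger 1 released, channel 1 freed
print(pool.update([(100, 0), (200, 0)]))  # finger 3 reuses channel 1
```

run through all six cycles of the table and this sketch reproduces them, including the blocking of a fourth simultaneous finger at polyphony=3.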

the following movies are recorded with a digital camera. they're AVI, sorry. watch movie testing the multitouch device:

[beamer-and-monitor.AVI] for some scenarios, it's nice to have the signal displayed both on the plate and on a separate monitor. i do basic testing here, and the calibration is not yet what one could call precise.

[funny-blob-test-reverse-light.AVI] when having lots of ir light, you can take advantage of the reverse effect. when holding a hand over the plate, it's like holding the hand into a large round blob.

[overall-test-with-synth.AVI] plate used just for input. in the back, you see the synth, Processing displaying debug info, and (bad quality) audio of what the synth plays (a bit delayed relative to the gestures).

you can see that my hardware (an old Dell Inspiron 7500 notebook running win2k) lacks speed. i will test it on faster hardware some day.

august 07: here is a better performing setup (using MTLF-I):

[MTLF-I_touchlib_smoke_demo.mpeg]

 

MULTITOUCH LED FRAME & ASSEMBLY PARTS
*NOW ON SALE* *NEW MULTITOUCH SHOP*
 
 

if you are interested, check out the products [here]

[electronic and mechanical plans of the MTLF-I (multitouch LED-frame)]

[some pictures]

(legacy MTAK III).

 
some thoughts on diffused illumination (DI)  
 

here is a little comparison i did recently to get a better view of the FTIR/DI topic. i must admit that i have only little practical experience with DI, so take it with caution.

Diffused Illumination (rear; front DI not discussed)

pros:
-out-of-the-box LED illumination can be used ('LED lamps'), therefore maybe cheaper than FTIR
-works smoothly, triggers just a little before the finger reaches the surface (from what i've heard)
-closer to a combined multitouch/pattern recognition system than the classic FTIR setup

cons:
-very dependent on ambient light; the lighting system must be variable / different lighting systems must be in place
-difficult to light up the surface evenly
-more complex image processing needed

Frustrated Total Internal Reflection

pros:
-works robustly; well/better explored compared to DI (?)
-the needed amount of light in the acrylic scales well with the number of LEDs
-can be made to work independently of ambient light
-works with a relatively bad vision system, needs only simple image processing to detect blobs

cons:
-dependent on surface layering, no pattern recognition possible
-custom-made LED illumination along the edges of the acrylic must be available, therefore more expensive than DI

verdict: it depends on what the target application requires. DI sounds interesting!

 
further directions, miscellaneous  
 

generalization: binding presses and gestures to actions. many gui tools nowadays are of course designed for one-pointer-at-a-time operation. an odd way would be to convert touches to mouse events, which are processed serially. i could imagine a markup language that describes the pressure points of one frame; tools could then build on this format. ok, it's a bit of an overhead, but defined interfaces are good for modular building. i didn't search the net deeply on multitouch input handling, but it is an interesting topic.

this looks interesting (26. April 07):

[TUIO: A Protocol for Table-Top Tangible User Interfaces]

TUIO is based on OSC: "2.2 Message Format: Since Tuio is implemented using Open Sound Control (OSC) [4] it follows its general syntax. An implementation therefore has to use an appropriate OSC library such as [5] and has to listen to the following message types"

since there are lots of impressive videos of apps using reacTIVision around, i assume that the reacTIVision software is one of the most mature (using TUIO):

[reacTable / reacTIVision]

(4. May 2007) the site says "For the multi-touch finger tracking use the small finger stickers from the file "finger.pdf". Please note that the finger tracking is only available with the default amoeba set. Future versions of reacTIVision will support plain finger tracking without the need of these finger stickers.". if i get it right, the tracking of finger presses/drags in reacTIVision is done using a little marker on the finger, which implies that you would have 5 stickers on your hand for 5-finger pressing. the concept of the fiducial markers (representing any logical unit) is cool when playing with physical objects on the table. i am looking forward to the "plain finger tracking".

a good paper on reacTIVision is this:

[Improved Topological Fiducial Tracking in the reacTIVision System]

i had some open questions concerning the needed IR illumination of a combined variant. how are fiducial markers placed on the table illuminated? in an FTIR setup, the light is mainly captured inside the acrylic and won't illuminate a marker (not tested). the above paper says: "Physically, the table is illuminated from below with infrared light.". this would influence the FTIR setup in an unwanted way. so... why not use two illumination models, where one uses a slightly different wavelength (but both in the non-visible spectrum above ~800 nm), so that they can be separated by two cameras (each capturing only a very narrow band using a bandpass filter)? still, there is a difficult issue in this combination: the milky projection surface has to be on top so the fiducial markers are in close contact with it and are clearly visible from underneath the plate. when the projection surface is mounted on the bottom of the acrylic (as in the FTIR setup), fiducial markers are too blurry to be tracked. when touching on the projection surface, other issues arise: it should be as easy to clean as the surface of the acrylic plate and be resistant to stuff like greasy fingers. and it should not permanently provoke the FTIR effect, which makes it difficult to fixate on the acrylic.

Robert William King came up with the idea to combine an LED-backlit LCD television with an FTIR setup. such an LBLCDTV would use LEDs that draw less power, producing less heat and not needing a noisy fan like a regular beamer does. it would just light an LCD panel (a regular LCD disassembled to a certain degree) from behind. hot topic.. first sight here:

[Supersize Your TV for $300: Build Your Own XGA Projector!]

high intensity coupled light source:

[http://www.thorlabs.com/NewGroupPage9.cfm?ObjectGroup_ID=884]

getting less hasty and more handy: if you had a large, light, low-cost and low-power-consumption multitouch canvas with no equipment beneath it like beamers, cameras etc., it would be mass capable. several multitouch techniques will improve, and it is open which one will prevail.

imagine you had some transparent material that can be used to capture the IR light, something like an IR-sensitive foil. now put this foil on a thin (...) LCD flatpanel, without the beamer and the camera. put the acrylic canvas on top: now we have a thin thing. if anybody has such a magic foil, let me know :) also let me know if you have any corrections.

 
about the author  
 

Thomas M. Brand

contact me at tangible [at] 001.ch

march - august 2007