Saturday, 6 August 2011

Virtual Reality Virtually Complete

Many hours were spent finding the correct image transformation, but it seems I've finally made significant progress. A fair amount of fine-tuning is still required, but the transformation and setup seem sound.

Virtual Reality in all its glory.
To create and run the virtual reality, I use Blender 3D (free, open-source software), which also allows me to warp the output. The image transformation function we use was contributed to Blender 3D by Dalai Felinto, based on work by Paul Bourke; both deserve a mention here. Dalai helped me directly with advice on image transformation and Blender 3D. Further, Hans-Jürgen Dahmen (not mentioned for the first time in this blog) kindly provided me with information on the image transformation used in his setup (Hölscher et al., 2005).

The Setup
As described previously in this blog, the virtual reality has three components: projector, mirror system and screen. The dimensions and specifications of all of these components are crucial for finding the right transformation. But even after all the calculations were done, it wasn't easy to arrange all the parts exactly: mounting the projector in just the right place is tricky, and so is getting the mirrors at the correct angle and distance relative to each other.

Projector mount. Quite wobbly, but works if nobody touches it... or exhales too close to it.
The projector has to be positioned at the point of origin of the rays (top left, where the yellow lines meet).

Projector and screen. On the right you can see the pressurised air line taking an adventurous route across the room before it feeds into the treadmill and air table.
We are using a Vision Techmount ceiling bracket with a 500mm extension pole. It does the job, but a fair amount of play means I will have to go back and stabilise it; otherwise it will stay too wobbly. That shouldn't be too difficult, though.

Screen and mirror construction are described in detail here. 

First tests have shown that the material the screen is made of (canvas paper) doesn't have an ideal surface texture, which I think reduces sharpness and contrast. Further, you can see in the picture above that the borders between the paper strips are quite visible, which might present confounding visual stimuli to the mouse. I am looking into ways of mitigating this effect.

The mirrors had been arranged prior to mounting the projector. However, minor adjustments to the flat mirror could be made to correct for slight misalignment of the projector without introducing a noticeable error in the virtual reality.
 
Image Transformation
This part is what gave me a headache for a while, not because it's so difficult but because I'm very incompetent when it comes to geometry. Paul Bourke's website (link) contains a very comprehensive description of his virtual reality system (which is different from our system, but the explanations are nevertheless very helpful). Better still, the image transformation functions for his system are available in Blender 3D and can be adapted to work for ours.

The goal is to get a 360° view in a ring shape: a ring because the centre of the projector output hits the apex of the convex mirror. A point's angle around the ring becomes its horizontal position on the screen, and its radius becomes its height, so concentric rings around the centre translate into horizontal lines on the screen. I hope that makes sense; see the pictures below if it doesn't.

To achieve this, the 'spherical panoramic' function of Blender 3D and the input-output warpmesh are crucial for my application. The spherical panoramic creates a 360° view from six virtual cameras and stitches them together (similar to stitching a panorama photo). In a second transformation, an input-output mapping is applied to the picture. In other words, two meshgrids are created: meshgrid A is placed over the spherical panorama and meshgrid B has the shape I need. Each node in grid A has a corresponding node in grid B, and the picture is warped into the shape of grid B. I hope this will make more sense after my illustrations with the many pictures below:

This is an overall view of the maze. The cube is where the observer is positioned, facing the red wall.
This is the normal perspective view as it initially appears on the screen. Never mind the monkey face.
First transformation: the spherical panoramic. This is a built-in function of Blender 3D, which makes things a lot easier. If you don't know how this picture is created, try this: stretch your arms out forward and touch the middle fingertip of your left hand to the middle fingertip of your right. Now make a ring with your arms by bending your elbows to the side while your fingertips still touch. Imagine an observer standing in the middle of that ring, looking at your chest and seeing only what's in front of him/her. Now, in a final step, part your fingertips and stretch your arms out to either side. The observer can now see 360° in front of him/her (your hands correspond to the green wall in this case).

After this comes the crucial step: to translate the spherical panoramic into a ring shape, we use two meshes; one is laid over the spherical panoramic and the other has the desired shape (a small code sketch of this idea follows the pictures below).

The square grid is laid over the spherical panoramic. Each node has a corresponding node in the grid below. I've reduced the number of nodes to 12x12 for illustration; the actual warpmeshes used have 48x48 nodes.
The ring-shaped mesh. The bottom-left node of the square grid corresponds to the bottom node of the innermost ring of this grid. From there it goes around counter-clockwise.
And this is the result. The picture is upside down because the projector is mounted upside down.
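To make the warp step a bit more concrete: in Blender the mapping is done via the warpmesh, but for this particular pair of grids it amounts to an inverse polar lookup, i.e. for every output pixel in the ring you work out which panorama pixel it came from. Here is a minimal sketch of that idea in MATLAB-style code; note that the file names, resolutions and radii are made-up placeholders, and that the real system warps per mesh node, not per pixel:

% Sketch: map a 360-degree panorama strip onto a ring.
% Assumptions: 'panorama.png' wraps horizontally over 360 degrees,
% and its rows run from the inner to the outer edge of the ring.
pano = imread('panorama.png');
[ph, pw, ~] = size(pano);
out = zeros(800, 800, 3, 'uint8');   % assumed output resolution
cx = 400; cy = 400;                  % centre of the ring
r_in = 80; r_out = 380;              % assumed inner/outer radii in pixels
for y = 1:800
    for x = 1:800
        r  = hypot(x - cx, y - cy);     % radius of this output pixel
        th = atan2(y - cy, x - cx);     % angle, -pi..pi
        if r >= r_in && r <= r_out
            u = round((th + pi)/(2*pi) * (pw - 1)) + 1;           % angle -> panorama column
            v = round((r - r_in)/(r_out - r_in) * (ph - 1)) + 1;  % radius -> panorama row
            out(y, x, :) = pano(v, u, :);
        end
    end
end
imwrite(out, 'ring.png');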

The maths behind this is reasonably simple and can be found on Paul Bourke's website (link). First, the square meshgrid is constructed. In our example above it is 12x12; note, though, that it is actually 12x13: in the circular meshgrid we have to come full circle, so the first and the last point on each ring are the same.

For the circular meshgrid one can simply take the square grid's coordinates, turn the row index into a radius and the column index into an angle, and apply cosine and sine respectively (more information here):

transmatx(i,j) = y_radmap * cos(theta);   % y_radmap: radius of ring i
transmaty(i,j) = y_radmap * sin(theta);   % theta: angle of node j on the ring

(you can find the full listing on the "gridscript listing" page, linked in the blue bar running across the top of this site)
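For reference, a minimal self-contained version of the grid construction could look something like this. The variable names follow the snippet above; the inner and outer radii are assumptions of mine, and the full gridscript with the real 48x48 mesh is in the listing linked above:

% Sketch: build the ring-shaped destination grid from the square grid's
% row and column indices. nth = nr + 1 so the ring closes: the first
% and last node on each ring coincide.
nr = 12; nth = 13;              % example size from the pictures above
r_in = 0.25; r_out = 1.0;       % assumed inner/outer ring radii
transmatx = zeros(nr, nth);
transmaty = zeros(nr, nth);
for i = 1:nr
    % the row index of the square grid becomes the ring radius
    y_radmap = r_in + (r_out - r_in)*(i - 1)/(nr - 1);
    for j = 1:nth
        % the column index becomes the angle, counter-clockwise,
        % coming full circle at j = nth
        theta = 2*pi*(j - 1)/(nth - 1);
        transmatx(i,j) = y_radmap * cos(theta);
        transmaty(i,j) = y_radmap * sin(theta);
    end
end
plot(transmatx(:), transmaty(:), 'k.'); axis equal   % quick visual check of the ring mesh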


There is more to say about how the transformation is implemented in Blender 3D, but I don't feel like writing an essay about it here. If anyone has questions, feel free to e-mail me (the address is at the top right of this website).

Final Words
Image transformation and virtual reality construction are complete now; what's left to do is a fair amount of fine-tuning. How much will depend not only on my subjective sense of what is good enough, but also on first tests with animals. After the amount of work and research that has gone into this system, I'm happy to see it finally working.

Here are a few impressions of the glowing ball:





References:
Hölscher, C., Schnee, A., Dahmen, H., Setia, L., & Mallot, H. A. (2005). Rats are able to navigate in virtual environments. Journal of Experimental Biology, 208(Pt 3), 561-569. http://www.ncbi.nlm.nih.gov/pubmed/15671344
