Exploring Leap Motion Integration with zSpace

For my independent study project, I am interested in adding an additional input device, the Leap Motion, to the zSpace. The zSpace is a holographic display that lets the user manipulate objects in the virtual environment (via a stylus) as if they were right in front of them. By integrating the Leap Motion, I hope to bring a more natural way to manipulate objects, using both hands.

Specific hand gestures are still being thought out, but I envision that the Leap Motion could also be used to draw objects directly with the index finger.
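As a starting point, here is a minimal sketch of reading the index fingertip position with the Leap Motion C++ SDK (v2). This is an illustrative example only: the class name is mine, and mapping the millimetre coordinates into zSpace's world space is left out.

```cpp
#include <iostream>
#include "Leap.h"

// Prints the index-finger tip position each frame; these coordinates could
// later be mapped into the zSpace scene to drive drawing or manipulation.
class IndexFingerListener : public Leap::Listener {
public:
  virtual void onFrame(const Leap::Controller &controller) {
    const Leap::Frame frame = controller.frame();
    if (frame.hands().isEmpty())
      return;

    const Leap::Hand hand = frame.hands().frontmost();
    const Leap::FingerList index =
        hand.fingers().fingerType(Leap::Finger::TYPE_INDEX);
    if (index.isEmpty())
      return;

    // Tip position is in millimetres, relative to the Leap Motion device.
    const Leap::Vector tip = index[0].tipPosition();
    std::cout << "Index tip: " << tip.x << ", " << tip.y << ", " << tip.z << std::endl;
  }
};

int main() {
  Leap::Controller controller;
  IndexFingerListener listener;
  controller.addListener(listener);  // onFrame is called on the SDK's thread

  std::cin.get();                    // run until Enter is pressed
  controller.removeListener(listener);
  return 0;
}
```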

[Images: the Leap Motion controller and the zSpace display]

eTextiles Project Proposal & Timeline

Topics:

The main area I would like to pursue in this independent study is a deeper dive into the integration of textiles and technology. Some possibilities I would like to explore include:

  • Creating fiber/textile sensors

  • Nitinol “muscle” wire

  • Conductive textiles

  • Screen printing conductive paint

  • Acid etching copper taffeta

I would also like to refine the programming on my existing Arduino projects (the Moodie and the Robe a la Foudre) and further explore how biometric sensor data can be applied to my designs.
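Most knit, crocheted, or otherwise handmade conductive-fiber sensors behave as variable resistors, so the Arduino side of reading them is straightforward. The sketch below is a minimal, illustrative example (assuming the sensor is wired as a voltage divider with a fixed resistor into pin A0) for reading and calibrating such a sensor.

```cpp
// Reads a resistive textile sensor wired as a voltage divider into A0
// and streams the raw value over serial for calibration.
const int SENSOR_PIN = A0;

void setup() {
  Serial.begin(9600);
}

void loop() {
  int raw = analogRead(SENSOR_PIN);  // 0-1023 on a 5 V board
  Serial.println(raw);
  delay(50);                         // about 20 readings per second
}
```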

Final Project(s):

I will explore and experiment with each technique before choosing the three most successful/relevant ones. I will then expand each of those three techniques into a small “proof of concept” project suitable for exhibition.

Calendar:

Week 1 [2/3 – 2/9]

  • Order supplies

  • Test vinegar etching on copper taffeta

Week 2 [2/10 – 2/16]

  • Spinning conductive fibers

  • Knit/crochet conductive yarn

Week 3 [2/17 – 2/23]

  • Continue knit/crochet sensor experiments

  • Test options with other conductive textiles

  • Weaving?

Week 4 [2/24 – 3/2]

  • Test nitinol wire uses/limitations

Week 5 [3/3 – 3/9]

  • Test conductive paint screen printing

  • Test screen printing resist on copper taffeta

Week 6 [3/10 – 3/16]

  • Test dyeing etched copper taffeta

  • Vat disperse dye vs. heat set

Week 7 [3/17 – 3/23]

  • Project 1 begin

Week 8 [3/24 – 3/30]

  • Project 1 cont.

Week 9 [3/31 – 4/6]

  • Project 1 completed

  • Project 2 begin

Week 10 [4/7 – 4/13]

  • Project 2 (cont.)

Week 11 [4/14 – 4/20]

  • Project 2 completed

Week 12 [4/21 – 4/27]

  • Project 3 begin

Week 13 [4/28 – 5/4]

  • Project 3 (cont.)

Week 14 [5/5 – 5/9]

  • Project 3 completed

Frontier Fellow Project: In Focus Anatomy

In Focus Anatomy (working title)

Through the Wisconsin Institute for Discovery, I will be developing anatomically accurate drawings structured behind a “Google Maps”-style interface, functioning as a virtual microscope. This interactive educational tool will allow the viewer to zoom in to the cellular level on a variety of contrasting organisms, encouraging an active investigation of various life forms.

First image in progress: Onion (Allium)

I am starting with a basic drawing, then using that outline to guide the higher-detail and cellular-level stages. I will be visiting the Dept. of Botany’s collection in the next few days to look at Allium root slides for reference, to ensure a more accurate depiction of the cellular structure.

Project in development: a test map will be up very soon!

ReKinStruct – Abstract & First Look

Kinects are generally used to obtain depth maps of an environment, using the reflection of an infrared speckle pattern along with a color stream similar to a video camera. This data is typically used to track the position and movement of the human body, enabling motion-controlled gaming. Beyond that traditional use, a Kinect can also be used to obtain 3D point clouds, much like a LiDAR scanner. The striking difference is that a Kinect can be moved in any direction and continue to capture point cloud data, while a LiDAR scanner has to be kept stationary during its operation. The point cloud data can be processed, simultaneously or subsequently, to enhance it, reconstruct missing points, and texture it, and it can also serve as a map for SLAM (Simultaneous Localization and Mapping). The Point Cloud Library (PCL) and C++ will be used for almost the entire process; OpenCV and OpenNI are the next likely frameworks.
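As a first concrete step, PCL’s OpenNI grabber can pull RGB-D point clouds straight from the Kinect. The sketch below is adapted from PCL’s OpenNI grabber tutorial and is only a rough starting point (it assumes PCL was built with OpenNI support and a Kinect is attached; the class and file names are illustrative): it captures one cloud and writes it to a PCD file.

```cpp
#include <boost/bind.hpp>
#include <boost/function.hpp>
#include <boost/thread/thread.hpp>
#include <pcl/io/openni_grabber.h>
#include <pcl/io/pcd_io.h>
#include <pcl/point_cloud.h>
#include <pcl/point_types.h>

// Captures a single RGB-D point cloud from the Kinect and saves it as a PCD file.
class KinectCapture
{
public:
  KinectCapture () : saved_ (false) {}

  void cloud_cb (const pcl::PointCloud<pcl::PointXYZRGBA>::ConstPtr &cloud)
  {
    if (!saved_)
    {
      pcl::io::savePCDFileBinary ("kinect_frame.pcd", *cloud);
      saved_ = true;
    }
  }

  void run ()
  {
    pcl::Grabber *grabber = new pcl::OpenNIGrabber ();

    boost::function<void (const pcl::PointCloud<pcl::PointXYZRGBA>::ConstPtr&)> f =
      boost::bind (&KinectCapture::cloud_cb, this, _1);

    grabber->registerCallback (f);
    grabber->start ();                 // frames arrive on the grabber's thread

    while (!saved_)
      boost::this_thread::sleep (boost::posix_time::seconds (1));

    grabber->stop ();
    delete grabber;
  }

private:
  volatile bool saved_;
};

int main ()
{
  KinectCapture capture;
  capture.run ();
  return 0;
}
```

Capturing a continuous stream would then just be a matter of not stopping after the first callback and accumulating the incoming clouds.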

And to get this project kicked off, I got Windows, Visual Studio, and the Kinect SDK installed on my Mac (it took much longer than I expected). With a little help from the Kinect SDK sample apps, I was able to obtain the color and depth maps of my laptop inside my laptop inside my laptop inside. . .

Voila!

From here, the next step would be to obtain a continuous point cloud from the Kinect, which would give us something like a 3D image. And from there, it all looks exciting.

Naveen