Camera Position and Gestures

As a starting point for gestures, I am now able to use the transition from an open hand to a closed hand to move the camera position within the world builder scene. Here's a quick video demonstrating this:

For next week I would like to find out how I can grab the object and rotate it.
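For anyone curious, the transition detection itself is fairly simple. Below is a minimal sketch using the Leap C++ API, not the actual world builder code: it assumes the v1-style behaviour where a closed fist reports no tracked fingers, and cameraOffset_ is a stand-in for whatever the scene actually consumes.

```cpp
#include "Leap.h"

// Minimal transition detector: anchor the palm position the moment the hand
// closes, then drag the camera while the fist is held.
class GrabCamera : public Leap::Listener
{
public:
  GrabCamera() : wasClosed_(false) {}

  virtual void onFrame(const Leap::Controller &controller)
  {
    const Leap::Frame frame = controller.frame();
    if (frame.hands().isEmpty()) { wasClosed_ = false; return; }

    const Leap::Hand hand = frame.hands()[0];
    const bool closed = hand.fingers().isEmpty();  // fist hides the fingers (v1 heuristic)

    if (closed && !wasClosed_)
      anchor_ = hand.palmPosition();                  // open -> closed: remember start
    else if (closed)
      cameraOffset_ = hand.palmPosition() - anchor_;  // drag while the fist is held

    wasClosed_ = closed;
  }

private:
  bool wasClosed_;
  Leap::Vector anchor_;
  Leap::Vector cameraOffset_;
};

// Usage: Leap::Controller controller; GrabCamera grab; controller.addListener(grab);
```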

From crisis comes opportunity…

(Sorry for the lateness of the post – I didn’t realize that internet connectivity was going to be such a problem in Canada.)

The exhibition piece has been unfortunately plagued with failure.

1) Muscle wires:  I’ve come to agree with one blogger who called muscle wires “a solution in search of a problem.”  They’re extremely interesting in theory and have great potential.  However, their limitations make them only useful in very specific circumstances, none of which apply to my project.

2) Felt:  Even in the “easy” part of the project, I managed to utterly fail.  Although the print looked outstanding as I executed it, after steam setting and rinsing I found that the first color had bled and run all over the fabric, and the thiox had eaten holes in the wool.  When I attempted to reprint on a new panel, my screen emulsion blew out during rinsing, and I accidentally over-processed my felt, creating a piece that was unusably small.

Plan B:

Since the piece is being installed on Thursday, I have obviously moved on to Plan B.  I ordered 5 meters of addressable RGB LED strip.  These will be mounted in rings and programmed to light in a circular “chase” pattern when triggered, thereby imitating the circular pulse from the inspiration video.  Adafruit has a very user-friendly set of libraries available that I am using for the Arduino code.  As for the actual textile, I am in the process of printing a new panel using pigments rather than the dye/thiox combination.
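For the curious, the chase itself is only a few lines with Adafruit's NeoPixel library. This is just a minimal sketch of the idea, not the final exhibition code; the pin number, ring size, color, and timing are placeholders, and the trigger logic is omitted.

```cpp
#include <Adafruit_NeoPixel.h>

#define LED_PIN   6    // strip data pin (assumption)
#define RING_SIZE 24   // pixels per ring (assumption)

Adafruit_NeoPixel ring(RING_SIZE, LED_PIN, NEO_GRB + NEO_KHZ800);

void setup() {
  ring.begin();
  ring.show();         // all pixels off to start
}

void loop() {
  // One lit pixel runs around the ring, imitating the circular pulse.
  for (int i = 0; i < RING_SIZE; i++) {
    ring.clear();
    ring.setPixelColor(i, ring.Color(0, 0, 255));  // a single blue dot
    ring.show();
    delay(40);         // speed of the chase
  }
}
```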

Robe a la Foudre:

In other news, I showed the completed lightning gown at the convention I attended this weekend.  It worked even better than I could have hoped and it looked amazing on stage!  I received a Judges’ Choice award for “Blowing the Judges Away” as well as a special commendation for excellence in dye work.  I’ll post video as soon as I have access to it.


ReKinStruct : Shifting Gears

As I said in the last post, I am trying to obtain colour and depth streams from two Kinects at once. Apparently, OpenNI is not the best way to do it, so I am going back to the Kinect SDKs. I have successfully installed the software. There were a lot of dependency conflicts with the PrimeSense Kinect sensors I already had, so I had a bunch of installing and uninstalling to do. Now the Kinect obtains colour and depth images as in one of my first posts:

https://blogs.discovery.wisc.edu/projects/2014/02/09/rekinstruct-abstract/

I am learning how to capture these frames programmatically at selected intervals so I can build a time-varying point cloud. Will keep you updated!
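The shape of that interval capture, at least on the PCL/OpenNI side I was using before, is roughly the following. This is a sketch under assumptions (PCL built with OpenNI support, XYZRGBA clouds, hypothetical file names); the Kinect SDK version will look different.

```cpp
#include <pcl/io/openni_grabber.h>
#include <pcl/io/pcd_io.h>
#include <pcl/point_types.h>
#include <boost/bind.hpp>
#include <boost/thread/thread.hpp>
#include <ctime>
#include <sstream>

class IntervalCapture
{
public:
  IntervalCapture (int interval_s) : interval_ (interval_s), last_ (0), count_ (0) {}

  void cloud_cb (const pcl::PointCloud<pcl::PointXYZRGBA>::ConstPtr &cloud)
  {
    std::time_t now = std::time (0);
    if (now - last_ < interval_) return;     // not time for a snapshot yet
    last_ = now;

    std::ostringstream name;
    name << "snapshot_" << count_++ << ".pcd";
    pcl::io::savePCDFileBinary (name.str (), *cloud);   // one time step on disk
  }

  void run ()
  {
    pcl::OpenNIGrabber grabber;
    boost::function<void (const pcl::PointCloud<pcl::PointXYZRGBA>::ConstPtr&)> f =
      boost::bind (&IntervalCapture::cloud_cb, this, _1);
    grabber.registerCallback (f);
    grabber.start ();
    while (true)
      boost::this_thread::sleep (boost::posix_time::seconds (1));
  }

private:
  int interval_;
  std::time_t last_;
  int count_;
};

int main ()
{
  IntervalCapture capture (1);   // one second between PCDs, as in the dataset
  capture.run ();
  return 0;
}
```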

A good and simple tutorial that I found for installing the Kinect SDK:

http://www.packtpub.com/article/getting-started-with-kinect-for-windows-sdk-programming

Meanwhile, I captured a dataset of a candle burning, with an interval of one second between consecutive PCDs. The following link has all 400 of them:

https://filelocker.discovery.wisc.edu/public_download?shareId=8ab2882502ec2aea65d711cfec4bbdd8

Password: ReKinStruct

ClusterVisualization


The last couple of weeks I have been working on visualizing CONDOR cluster data in our systems, that is, the CAVE, the dev lab, etc.

Currently, the output looks something like this:

Current cluster monitor.

Each cell represents a slot, which can run one assigned job. Multiple slots are part of a node (a single computer), and multiple nodes are formed into groups. The coloring depicts the load, from blue (low load = bad) to green (high load = good, efficient). Red marks offline nodes, black ones are missing information, and grey ones are occupied otherwise. This display is updated every 20 minutes.
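Conceptually the data boils down to a small hierarchy plus a load-to-color mapping, something like the sketch below. The field names are my own guesses, not the actual ClusterVisualization code.

```cpp
#include <string>
#include <vector>

struct Color { float r, g, b; };

// Load colouring as described above: blue at low load fading to green at
// high load. Offline/red, missing/black, and occupied/grey are handled
// separately in the real system.
Color loadColor (float load)            // load in [0, 1]
{
  Color c = { 0.0f, load, 1.0f - load };
  return c;
}

struct Slot  { char jobClass; float load; };                  // one job slot
struct Node  { std::string name; std::vector<Slot> slots; };  // a single computer
struct Group { std::string name; std::vector<Node> nodes; };  // e.g. hep or cae
```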

The approach I took was separating everything into their hierarchical structure: slots into nodes into groups:

The high-energy physics group

This is the high-energy physics group. It has many computers, and you can tell at a glance that they are quite powerful, as they stack really high. Most of them are running jobs at different efficiencies.

The cae group

The cae pool, on the other hand, looks completely different: many low-powered computers with 4-8 cores, many of which are also not accessible (grey or black). If you move closer to the nodes, a single letter per slot shows which job group that slot's job belongs to. Many jobs are recurrent and therefore have their own letter (for example, Y or G). The short node name is overlaid in white.

The cluster in action

The best thing is that we have multiple log files and can cycle through them. Not only can you see how jobs move through the cluster and the load changes, but also how the groups grow and shrink as new computers join a group or go offline.

There are also two main data highlights in here: the load view, as above, and the job view, in which the same job class is highlighted throughout the system.

Seeing this visualization in the CAVE is very interesting, as you are surrounded by these massive towers. The CHTC group has liked it very much so far, and we will demo this system next week during CONDOR Week 2014.

Continuing with hand gestures

This week I will be working on implementing hand gestures in the world builder application.

The gesture that I would like to work with would allow the user to select and hold an object. The first gesture would be putting your hand near the object, like this:


The second gesture would be to close the hand into a fist:

During this gesture I would use the palmPosition() and palmVelocity() functions to determine the positions of the palm and the object, and to rotate the object based on the change in its velocity vector.
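Roughly, the per-frame logic would look like the sketch below (Leap C++ API). The SceneObject and the velocity-to-rotation scaling are hypothetical stand-ins for the world builder's scene code, and the fist test assumes the v1 behaviour where a closed hand reports no tracked fingers.

```cpp
#include "Leap.h"

// Hypothetical stand-in for a world builder scene object.
struct SceneObject
{
  Leap::Vector position;
  Leap::Vector spin;   // accumulated rotation impulse, one component per axis
};

void processFrame(const Leap::Controller &controller, SceneObject &obj)
{
  const Leap::Frame frame = controller.frame();
  if (frame.hands().isEmpty())
    return;

  const Leap::Hand hand = frame.hands()[0];
  if (!hand.fingers().isEmpty())
    return;                                  // only act while the fist is closed

  obj.position = hand.palmPosition();        // the held object follows the palm (mm)
  obj.spin = obj.spin + hand.palmVelocity() * 0.001f;  // palm velocity drives rotation
}
```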

Leap Motion Integration

Sorry for the late update this week. Exams and projects have taken up most of my time.

Last week I was able to get the skeletal viewer code for the Leap Motion put into the world builder application. This was actually a pretty painless process, except that now the Leap Motion is not drawing the hand. I'll be working on getting this problem resolved, hopefully today or this weekend.

After that bug is fixed, I will work on getting gesture movements implemented, as I explained in my last post.

Since I don't have a picture or video to provide this week, here's a link to some cool Leap Motion apps that just came out recently:

http://blog.leapmotion.com/post/81491756746/vimeo-at-your-fingertips-4-more-new-apps

Update: Right now it seems that the Leap Motion and the zSpace don't get along. When I unplug the zSpace, the Leap Motion enters the onFrame function (a function that is called every frame). Currently working on resolving this strange bug.

Update: It's working! I just had to plug the USB for the zSpace into a different port. Now the world builder with Leap Motion integration is completely independent from the skeletal viewer!



Getting Closer…

My new Flexinol finally arrived.  I wired up a new test using the thinner music wire I purchased and the results were…unsuccessful, but encouraging.

https://www.youtube.com/watch?v=OqKqs0836uM

You can see in the video that the Flexinol works as anticipated and does cause some movement in the paper.  However, either the wire is still too stiff or the Flexinol simply doesn’t contract enough to create the range of motion I’m looking for.  Currently I’m researching ways of amplifying that range of motion, possibly through some sort of actuator/pulley system.  This site has some interesting options.
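To put numbers on the problem: Flexinol only contracts a few percent of its length (Dynalloy quotes roughly 4-5%), so extra travel has to come from more wire or from mechanical advantage. A quick sanity check, with my own assumed numbers:

```cpp
// Back-of-envelope check (my assumptions, not measurements from the test).
double strokeMm (double wireLengthMm, double contraction, double leverRatio)
{
  // A lever or pulley multiplies travel at the cost of force.
  return wireLengthMm * contraction * leverRatio;
}

// strokeMm (200.0, 0.04, 1.0) ->  8 mm of raw pull from a 20 cm wire
// strokeMm (200.0, 0.04, 3.0) -> 24 mm with a 3:1 lever, at a third of the force
```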

On the non-electronic front, the felt panel has been created and will be printed tomorrow.

ReKinStruct : Candle Dataset

I have uploaded the candle dataset to the file locker system. You can download it from here:

https://filelocker.discovery.wisc.edu/public_download?shareId=1d35394ba86c9251776af95cb2821f46

Password: ReKinStruct

These are 40 PCD files of snapshots of two candles, taken with a 10-second interval between consecutive shots.

I have also uploaded a video of the ReKinStruct viewer playing back the candle dataset here. The video, however, is like a fast-forwarded version of the candles burning: the 40 PCD files are displayed back to back with a time interval of 1 second. This makes a close-to-reality rendering of candles burning, only 10x faster.

http://youtu.be/zwA9J8xv248
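The playback logic is essentially just loading the PCDs back to back. A minimal PCL sketch of the idea (the file naming scheme is a placeholder, not the dataset's actual names):

```cpp
#include <pcl/io/pcd_io.h>
#include <pcl/point_types.h>
#include <pcl/visualization/pcl_visualizer.h>
#include <sstream>

int main ()
{
  pcl::visualization::PCLVisualizer viewer ("ReKinStruct playback");
  pcl::PointCloud<pcl::PointXYZRGBA>::Ptr cloud (new pcl::PointCloud<pcl::PointXYZRGBA>);

  for (int i = 0; i < 40 && !viewer.wasStopped (); ++i)
  {
    std::ostringstream name;
    name << "candle_" << i << ".pcd";               // placeholder naming scheme
    if (pcl::io::loadPCDFile (name.str (), *cloud) < 0)
      break;                                        // stop when a file is missing

    pcl::visualization::PointCloudColorHandlerRGBField<pcl::PointXYZRGBA> rgb (cloud);
    if (!viewer.updatePointCloud<pcl::PointXYZRGBA> (cloud, rgb, "candle"))
      viewer.addPointCloud<pcl::PointXYZRGBA> (cloud, rgb, "candle");

    viewer.spinOnce (1000);                         // hold each frame for one second
  }
  return 0;
}
```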

I believe I kept the candles too close to the Kinect and didn't get their depth data perfectly. However, the shadows and the melting candles show how the time-varying datasets look in 3D.

I have been trying to set up two Kinects to grab data simultaneously and have been getting the classic segmentation fault.


I wonder if it goes back to the original issue where the OpenNI grabber was not sensing the Kinect. I have looked online for some tutorials, and most people seem to have used two OpenNI grabbers simultaneously. Will dig into this a little more and post progress!
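For reference, the two-grabber pattern from those tutorials looks roughly like the sketch below, selecting each device by its OpenNI index. Whether this actually sidesteps the segfault depends on the driver and USB setup, so treat it as untested.

```cpp
#include <pcl/io/openni_grabber.h>
#include <pcl/io/pcd_io.h>
#include <pcl/point_types.h>
#include <boost/bind.hpp>
#include <boost/thread/thread.hpp>
#include <string>

typedef pcl::PointCloud<pcl::PointXYZRGBA> Cloud;

// Overwrite the latest frame from each device, just to see both are alive.
void save_cb (const Cloud::ConstPtr &cloud, const std::string &prefix)
{
  pcl::io::savePCDFileBinary (prefix + "_latest.pcd", *cloud);
}

int main ()
{
  pcl::OpenNIGrabber grabber_a ("#1");   // first device by OpenNI index
  pcl::OpenNIGrabber grabber_b ("#2");   // second device

  boost::function<void (const Cloud::ConstPtr&)> fa =
    boost::bind (save_cb, _1, std::string ("kinectA"));
  boost::function<void (const Cloud::ConstPtr&)> fb =
    boost::bind (save_cb, _1, std::string ("kinectB"));

  grabber_a.registerCallback (fa);
  grabber_b.registerCallback (fb);
  grabber_a.start ();
  grabber_b.start ();

  boost::this_thread::sleep (boost::posix_time::seconds (30));   // stream for 30 s

  grabber_a.stop ();
  grabber_b.stop ();
  return 0;
}
```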

Back ordered

The Bad News:

The Flexinol I ordered was out of stock, which put me a week behind on my timeline.

The Good News:

The revised order has already shipped and I should have it by the end of the week!

In other news…

I started doing some testing on the textiles for my exhibition piece.  I felted a test piece and did a Thiox discharge, which worked great!

On the topic of other projects, I finally finished adding the LEDs to the Lightning Gown and ran a test program – they all work!!  I have a fabric bend sensor kit coming soon that I will install to get the gestural response I was looking for.

The “Moodie” has been accepted to the fashion show and has also been submitted to Design Gallery 2014.  Rather than have the sensors freak out while on display, I found some code to create a looping crossfade of colors in the RGB LEDs, which looks great.
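The crossfade idea is simple enough to sketch. This isn't the code I found, just the general pattern, assuming a common-cathode RGB LED on three PWM pins:

```cpp
const int PINS[3] = {9, 10, 11};   // R, G, B on PWM pins (assumption)

void setup() {
  for (int i = 0; i < 3; ++i) pinMode(PINS[i], OUTPUT);
}

void loop() {
  // Fade each channel up while the previous one fades down, endlessly.
  for (int c = 0; c < 3; ++c) {
    for (int v = 0; v <= 255; ++v) {
      analogWrite(PINS[c], v);                  // next colour in
      analogWrite(PINS[(c + 2) % 3], 255 - v);  // previous colour out
      delay(10);                                // crossfade speed
    }
  }
}
```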