Team Exploring Interfaces – Final Post


[Image: DS501 team interfaces poster]

What does your project do, and how does it work?

We’ve built an experiment-in-a-game, allowing people to test different input devices by playing 4-second minigames displayed in the Oculus Rift.  The minigames require participants to complete four atomic tasks: selecting, moving, and rotating objects in the virtual world, plus simple spatial navigation.  The current form of the project is designed to run as four stations, each with one of four input devices: a conventional mouse, an Xbox 360 controller, a Wiimote pointing device, and hand tracking via Leap Motion.  Participants rotate through the four stations, trying each device in turn; we record the success or failure of each minigame trial, as well as the time elapsed.
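For a sense of what we record per trial, here is a hypothetical sketch of the record shape; the type and field names are illustrative, not our actual code:

```csharp
// Hypothetical sketch of a per-trial record (names are illustrative).
public struct TrialResult
{
    public string Device;   // "mouse", "xbox", "wiimote", or "leap"
    public string Task;     // "select", "move", "rotate", or "navigate"
    public bool Success;    // completed before the timer ran out?
    public float Elapsed;   // seconds from trial start to completion or timeout
}

public static class TrialLog
{
    // One CSV line per trial is plenty for analysis afterwards.
    public static string ToCsv(TrialResult r)
    {
        return string.Format("{0},{1},{2},{3:F3}",
            r.Device, r.Task, r.Success, r.Elapsed);
    }
}
```

Appending one such line per trial to a file gives a flat table that’s easy to compare across devices and tasks.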

We built the project on Unity 5, which also handles mouse and Xbox controller input directly.  The official Unity integration package for Leap Motion provides hand tracking; the open-source Unity-Wiimote library provides Wiimote tracking.  All input methods control a screen-space cursor, mapped to: the pointing direction of the Wiimote; the Xbox controller’s left joystick; and, on the Leap, the position of the right palm.
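The shared-cursor idea can be sketched as a single Unity component; this is a minimal illustration rather than our actual code, assuming Unity’s default "Horizontal"/"Vertical" input axes for the joystick, with the Wiimote and Leap branches left as comments because the exact library calls are version-specific:

```csharp
using UnityEngine;

// Sketch: every input method ends up driving the same screen-space cursor.
public class CursorController : MonoBehaviour
{
    public RectTransform cursor;        // UI element drawn over the scene
    public float joystickSpeed = 800f;  // cursor speed in pixels per second

    private Vector2 pos;

    void Update()
    {
        // Mouse: use the pointer position directly.
        pos = Input.mousePosition;

        // Xbox controller: integrate the left stick instead
        // (Unity's default input axes):
        // pos += new Vector2(Input.GetAxis("Horizontal"),
        //                    Input.GetAxis("Vertical"))
        //        * joystickSpeed * Time.deltaTime;

        // Wiimote: project its pointing direction to screen coordinates.
        // Leap: map the right palm's position into screen space.

        // Keep the cursor on screen regardless of input method.
        pos.x = Mathf.Clamp(pos.x, 0f, Screen.width);
        pos.y = Mathf.Clamp(pos.y, 0f, Screen.height);
        cursor.position = pos;
    }
}
```

Because everything funnels into one cursor, the minigames themselves don’t need to know which device is attached.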


What did each team member contribute?

In the beginning phases of the project, we all explored different tasks, as well as input methods and interfaces.

Alex built the base code, handled Leap input, and did overall mop-up.

Dave built the final rotation and avoid tasks, integrated the Xbox controller, and did the initial exploration of showing a cursor in VR.

Ryan built the final sorting task, the early Wiimote integration code, and the VR-visible timer, and explored ways to get VR and input devices onto Linux.

Nicky explored ways to get VR and input devices onto Linux.


How does the team feel about the project?

It could use more polish.  The minigames are playable on a normal monitor, but too hard in VR.  Some of these difficulties illustrate the differences between the two display modalities, which is the point of the project; others, however, are just bugs or rough edges, which can muddy the sometimes subtle issues we want to highlight.

People still had fun trying out the different input devices and pointed out some interesting differences, so overall we consider it a success.


What were your largest hurdles in the project?

Getting Data from Input Devices

There are lots of open-source projects online that attempt to connect exotic input devices to specific programs (like Unity), or to specific OSes, or to specific combinations of program, OS, and Bluetooth stack, and sometimes to specific versions of each piece of software and hardware.  Sorting through them to find libraries that provided the input data we wanted, in a form we could use, was nontrivial.  Many claimed to do what we needed but either hadn’t yet implemented the feature or introduced some incompatibility.  The only real solution was lots of trial and error.


Unity

Unity didn’t do enough to bridge the gap between knowing nothing about real-time interactive graphics and building the same things in VR.  For every problem it solved, it introduced at least one more bug or weird system to run afoul of.  Looking things up online yields solutions for a mix of different versions, with inconsistent compatibility; UI was a particular problem.  Leap integration complicated things further: it seems to be tuned to an older Unity version and bundles some unknown version of the Oculus Unity integration, yet still depends on the new VR checkbox in Unity … it’s a weird Frankenstein mess.  Unity packages for other input methods behaved similarly, in that it was unclear which version of Unity they were designed for.  Unity also released at least three upgrades during the course of the project, each fixing old bugs and introducing new ones.  This made things generally unstable: restarting Unity or rebooting the machine was often one of the first steps in debugging, which is not a good way to work.


Git + Unity

Some files in Unity don’t play nicely with Git.  Scene files were binary (though near the end Ryan found a setting that probably makes them a merge-able text format; we didn’t have time to investigate).  We sidestepped this by doing as much as possible programmatically, working in separate scenes, and doing an in-Unity merge only when absolutely necessary.  Some other things (controller settings, the VR checkbox) would mysteriously change; it’s unclear whether this was due to Unity rebuilding defaults, something accidentally sneaking into the repo, or cross-platform Unity differences.
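The setting in question is most likely Unity’s Asset Serialization Mode (Edit → Project Settings → Editor → Force Text), which writes scenes as YAML instead of binary.  Unity also ships a merge tool, UnityYAMLMerge, that Git can be pointed at; a sketch of the wiring (the path to UnityYAMLMerge varies by install, and we didn’t actually try this):

```
# .gitattributes: route Unity scene and prefab merges through Unity's tool
*.unity  merge=unityyamlmerge
*.prefab merge=unityyamlmerge
```

```
# .git/config: define the merge driver (driver line per Unity's Smart Merge docs)
[merge "unityyamlmerge"]
    name = Unity SmartMerge
    driver = UnityYAMLMerge merge -p %O %B %A %A
```

With that in place, two people editing the same scene would at least get a structured merge attempt instead of an unmergeable binary conflict.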


How well did your project meet your original description and goals?

We met our original baseline goal of presenting atomic tasks and gathering data, and we went a little further in wrapping things up as minigames to make the final presentation a bit more fun.


If you had more time, what would you do next on your project?

We could add more minigames, more input methods, and more measures, plus, of course, a bit more polish.  Also, we’ve only explored interfaces that use 2D screen-space cursors; VR allows for interesting 3D interfaces, and with more time (and more 3D-oriented input devices) we could explore those as well.