Initial Pilot: Thoughts on Procedure

I ran a few people through the baseline pilot last week; the data's not ready yet, but here are some thoughts on the general process and some participant feedback:

Things are running two to three times longer than expected; automating measurement may cut this in half.

There may be a kind of fatigue effect from being in the virtual environment too long, losing what sounded like a sort of calibration between proprioceptive and spatial senses; only one of four participants reported this, so there may also be different internal models at work between them.

Participants report being aware that only two distances were being presented; it was suggested we might test continuous distances along our range of interest.
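Presenting continuous distances could be as simple as drawing each trial's distance uniformly from the range. A minimal sketch (the range endpoints and trial count here are placeholders, not the study's actual values):

```python
import numpy as np

# Hypothetical range of interest (meters) and trial count -- placeholders.
lo, hi = 1.0, 5.0
n_trials = 40

# Draw continuous distances uniformly over the range, so participants
# can't anticipate a small set of fixed presentation distances.
rng = np.random.default_rng(seed=1)
distances = rng.uniform(lo, hi, size=n_trials)
```

A fixed seed keeps the sequence reproducible across participants if that's wanted; dropping it gives a fresh sequence per run.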

End of Semester Recap – Spring 2015

Summary & Accomplishments

This semester started with reading papers; there’s a large body of work about perception in AR/VR, and my research would benefit from a ready set of references. Some of the topics I now have supporting references for:

  • decline of accommodation with age
  • a variety of calibration techniques
  • various depth cues in 2D and 3D
  • feedback effects (tentative; these need more investigation)
  • techniques for measuring perceived distance
  • theories of perception in virtual spaces (though these seem a bit thin)

The current experiment has also been refined, and a new pilot in Unity should be complete in the coming weeks.

I’ve also done some work for my optimization class project that might be helpful to other projects in the lab; for xZ = y, generating matrices Z given known good x and y should be simple; it may also be possible to generate xs, given a promising set of ys.
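A minimal sketch of the first part (hypothetical x and y values, numpy only): treating x and y as row vectors, the minimum-norm Z satisfying xZ = y falls out of the pseudoinverse.

```python
import numpy as np

# Hypothetical known-good x (1 x n) and y (1 x m) -- placeholder values.
x = np.array([[1.0, 2.0, 3.0]])
y = np.array([[4.0, 5.0]])

# Minimum-norm Z with x @ Z = y: the system is underdetermined
# (one constraint row, a full matrix of unknowns), so pinv picks
# the smallest-norm solution, here an outer product.
Z = np.linalg.pinv(x) @ y

print(np.allclose(x @ Z, y))  # True
```

The same pinv call handles stacked rows of xs and ys at once, which is one route to the "generate xs given a promising set of ys" direction (via the pseudoinverse of Z instead).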

Challenges

The lit review almost became a paper in its own right, but finding a recent survey complicated that task. There’s still something there: some inconsistencies across papers, angles the survey doesn’t cover; but the path to a paper discussing them is less clear. Rather than a single paper, they may become separate investigations. There may also be room to adapt other lit-aggregation techniques, like meta-analysis, though it remains unclear how directly these techniques apply.

Organizing information found in the literature has been a challenge. There’s a lot to keep straight: chains of support, who knows what from which papers, open questions, conflicts, and other things bearing further investigation … it’s already a complex web of interrelationships, and I suspect I’ve still only done a rather shallow exploration of the space of relevant papers. I need a better way to organize this.

Pilot development stalled a bit while I decided what tools to use. An increase in the demands of my classwork about halfway through the semester derailed attempts at custom code; a shift to Unity means less labor-intensive, but perhaps less adaptable, experimental tool development.

My Feelings on the Results

I’m a little frustrated that things aren’t moving faster, and that I don’t have more paper write-up posts to better record what I’ve found in my readings; I’m also frustrated that I don’t have a better command of the things I’ve read, a better system to lead me quickly back to the bits of interest as interest arises.

But I’m happy to have had the exposure: I have a better sense of where the field is at, and where the current research fits. And somewhere in a giant (virtual) pile of papers, I have the references to support whatever I might write in the future; finding them might not be as efficient as I’d like, but it should still be effective.

Next Steps

In the next few weeks, the Unity pilot should happen. Based on its results, the research will develop in whatever way seems most promising.

Some paper and notes organization system will remain a side project, as might my custom experiment code.

Final Weeks of Classes

Mostly classwork this week and next — final homeworks, exam, and project.

For project: the simple base case is solved (as linear, which should be fast). Next is either a branch-and-bound exploration of possible set selections / correspondences via MIP, or modeling x -> y mappings as a series of Z matrices over “time” and minimizing change of the Zs.
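A sketch of the second option, under assumptions of my own (hypothetical (x_t, y_t) row-vector pairs, numpy only): penalizing change between consecutive Zs in a ridge style gives a closed-form per-step solve, rather than a full joint optimization over the whole sequence.

```python
import numpy as np

def fit_Z_sequence(xs, ys, lam=1.0):
    """For each (x_t, y_t) row-vector pair, fit Z_t minimizing
    ||x_t Z - y_t||^2 + lam * ||Z - Z_{t-1}||_F^2 (with Z_0 prior = 0).
    Setting the gradient to zero gives the linear system
    (x^T x + lam*I) Z = x^T y + lam * Z_prev, solved per step."""
    n = xs.shape[1]
    Z_prev = np.zeros((n, ys.shape[1]))
    Zs = []
    for x, y in zip(xs, ys):
        x, y = x[None, :], y[None, :]          # keep as 1 x n, 1 x m
        A = x.T @ x + lam * np.eye(n)
        B = x.T @ y + lam * Z_prev
        Z_prev = np.linalg.solve(A, B)
        Zs.append(Z_prev)
    return Zs
```

With small lam each pair is reproduced nearly exactly; larger lam trades per-step fit for smoothness of the Zs over "time".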

For the pilot: I looked at what a one-axis distance metric does. Vertex-sensitive distortions are less of an issue than with the 2D metric, though curves are noticeably distorted in the periphery; overall it’s less distracting.

Switch to Unity 5

Switched to Unity 5 and was quickly able to get a basic shader doing something like what we want. There are distortions, sensitive to mesh shape; it’s unclear at what magnitude of change these become a problem.

We also have a choice of different distance metrics (sphere, circle, one axis — anything more interesting?), which would also influence the kinds of distortions we see.
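For concreteness, the three candidate metrics, computed on a point p relative to a camera at the origin (a Python sketch outside any shader, assuming Unity's convention of z as the forward axis and y as up):

```python
import numpy as np

def sphere_dist(p):
    """Full 3D Euclidean distance from the origin (sphere metric)."""
    return float(np.linalg.norm(p))

def circle_dist(p):
    """Distance in the horizontal (x, z) plane, ignoring height (circle metric)."""
    return float(np.hypot(p[0], p[2]))

def one_axis_dist(p):
    """Distance along the forward (z) axis only (one-axis metric)."""
    return float(abs(p[2]))

p = np.array([3.0, 4.0, 12.0])
print(sphere_dist(p), circle_dist(p), one_axis_dist(p))  # 13.0, ~12.37, 12.0
```

The metrics agree on points straight ahead and diverge toward the periphery, which is presumably where the differing distortions would show up.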

Still need:

  • to test in the Rift
  • input for manual adjustments (test code is just oscillating on a sine)
  • better scene(s)

Optimization Project and Continuing Work on Pilot Codebase

I’m looking into wedging some lab work into my optimization class project; it seems like I’ll get something together, but I’m not quite sure what yet.  The full problem may be too large a space, but we may end up with the ability to describe limits given some set of points, or evaluate the quality of some candidate correspondences, etc.  I’m fairly sure I can solve at least the simple base-case version; I suspect I’ll need something more complex for the project itself.

Work also continues on the pilot code.  I worry I’m getting bogged down in development niceties; while it would be nice to have pretty code I can use quickly in the future, even an ugly pilot gives data.

I’ll dust off some old Ogre code and see if it can be quickly adapted to this pilot. If so, the pretty code can be worked on in parallel with running the pilot.

The Coding Continues

Sidetracked a bit by classwork and feature creep, the coding continues.

The features are for more efficient debugging and development flow — things I was hoping to put off until after the pilot, but that needed to happen eventually anyway.

There’ll be a burst of homework late this week, so I probably won’t finish cleaning up the stimulus environment and cramming everything back into the Oculus until next week.

The classwork is partly selecting a project.  Anything around the lab we need optimized?

Still coding

Still building the base code. Rediscovering the joys of const correctness and circular inheritance. Should be done this week.

Also found something in the papers below that I should look into later — they suggest that people perceive a “stable” scene even under varying “camera calibrations”. It looks like there may be a series of papers to chase down to get the full idea, but the gist (from the abstract of the 2006 paper):

“[…] observers are more willing to adjust their estimate of interocular separation or distance walked than to accept that the scene has changed in size.”

Which sounds like active stimulus overriding past experience. Which is maybe a constructive way to think about adaptability.


  1. Gilson, S. J., Fitzgibbon, A. W., & Glennerster, A. (2011). An automated calibration method for non-see-through head mounted displays. Journal of Neuroscience Methods, 199(2), 328-335.

  2. Glennerster, A., Tcheang, L., Gilson, S. J., Fitzgibbon, A. W., & Parker, A. J. (2006). Humans ignore motion and stereo cues in favor of a fictional stable world. Current Biology, 16(4), 428-432.

Building Code

I’m in the middle of building some C++ OpenGL foundational stuff. I expect it to be done in a week to a week and a half.

The goal is to have something lightweight for quick prototyping, without disrupting the lab’s main codebase. Exotic input (Leap, Kinect) and data (large pointclouds) may not make it into the initial version, but the goal is for clear paths to add them.

I also looked into:
– streaming from various mobile video sources: found some android source, vague indications RaspberryPi can be used with effort, but no clear guarantees of performance
– ways to rig passive steadycam / gimbals: found three main designs (an inverted “U”, gyroscope-style with a T-bar, and a “bow + pendulum”), but there are some questions as to where one wants the center of mass. It makes me want to rig simple simulations to decide if different designs do what we need.

I don’t have enough organized thoughts on these for a writeup here, but may dump some links into the Google Doc.