Accomplishments
This semester, I’ve explored issues relevant to stereoscopic rendering in general, and the Oculus Rift in particular. I’ve also surveyed the current state of software packages that offer rendering to the Rift. We’re on the cusp of having a viable test platform for our calibration experiments, and I have a better understanding of the problem we’re trying to solve.
I also investigated what the Oculus SDK does with its calibrated values, and whether we can leverage them for our own work. The answer is mostly no, though we may need to force the SDK’s FOV to some fixed value before we manipulate ours.
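For concreteness, here is a minimal sketch of what forcing the FOV to a fixed value might look like against the 0.4.x-era libOVR C API. This is an illustration under assumptions, not a settled procedure: the 45-degree half-angles are placeholders, and the exact calls depend on which SDK release we land on.

    // Sketch: override the SDK's calibrated per-eye FOV with a fixed, symmetric one.
    // Assumes the 0.4.x-era libOVR C API; error handling omitted for brevity.
    #include <OVR_CAPI.h>
    #include <cmath>
    #include <cstdio>

    int main()
    {
        ovr_Initialize();
        ovrHmd hmd = ovrHmd_Create(0);
        if (!hmd)
            return 1;

        // Instead of taking hmd->DefaultEyeFov (the SDK's own choice for this
        // headset), build a fixed FOV port and use it for both eyes.
        // The 45-degree half-angles are an arbitrary placeholder value.
        const float halfAngleTan = std::tan(45.0f * 3.14159265f / 180.0f);
        ovrFovPort fixedFov;
        fixedFov.UpTan   = fixedFov.DownTan  = halfAngleTan;
        fixedFov.LeftTan = fixedFov.RightTan = halfAngleTan;

        ovrFovPort eyeFov[2] = { fixedFov, fixedFov };

        // The fixed FOV then drives render-target sizing and the projection
        // matrix, so the SDK's distortion and our geometry stay consistent.
        for (int eye = 0; eye < ovrEye_Count; ++eye)
        {
            ovrSizei texSize = ovrHmd_GetFovTextureSize(hmd, (ovrEyeType)eye,
                                                        eyeFov[eye], 1.0f);
            ovrMatrix4f proj = ovrMatrix4f_Projection(eyeFov[eye], 0.1f, 1000.0f, 1);
            std::printf("eye %d: render target %dx%d, proj[0][0]=%f\n",
                        eye, texSize.w, texSize.h, proj.M[0][0]);
        }

        ovrHmd_Destroy(hmd);
        ovr_Shutdown();
        return 0;
    }

The important point is only that the fixed FOV replaces the SDK-supplied one everywhere it’s consumed (texture sizing, projection, distortion setup), so we’re manipulating a known baseline rather than whatever the SDK calibrated.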
Challenges
There are a lot of options for rendering to the Rift, and they bore exploring.
A fair chunk of time was spent repurposing code inherited from other lab projects: becoming familiar with their structure, and paring them down to be a bit more nimble and debuggable. Most of “nimble” here means file size; some of our projects have huge data sets or library collections that weren’t immediately relevant to the current effort (and didn’t fit in the storage I had available). The rest was restructuring them to not share files with other actively developed projects, so my changes don’t compete with other lab members’. This is a normal part of code reuse, and there’s nothing about this code that made it especially difficult; it just took time to work out what everything did, and which parts I needed.
Engines like Unity and Unreal seemed promising, but weren’t quite ready.
The Oculus SDK is in a phase of rapid development. New versions usually provide enough improvement that we want to use them, and enough changes that reintegration takes some effort. The major shift was from DK1 to DK2, but the minor shifts still cause problems (the newest version may be the source of some current code woes, but may also solve issues with OpenGL direct rendering, as well as jitter in Unity; both of those fixes could make development much faster).
Also, we’d like to use as much of the Oculus-supplied rendering pipeline as possible (for easier reproducibility, and thereby greater validity), but it’s been a pain to wedge more of our changes into it, or more of it into our in-lab engine, particularly as it keeps changing. We’re currently at a relatively happy medium.
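To make “the Oculus-supplied rendering pipeline” concrete: under the 0.4.x C API, the SDK owns frame timing, distortion, and presentation, and the application just fills per-eye textures. Below is a hedged sketch of that per-frame structure; ConfigureRendering and the per-eye GL textures are assumed to have been set up during initialization, and drawSceneForEye is a hypothetical hook into our engine, not an SDK call.

    // Sketch of a per-frame loop that leans on the SDK's distortion and
    // presentation path. Assumes the 0.4.x-era libOVR C API.
    #include <OVR_CAPI.h>

    // Hypothetical hook into our engine: render the scene for one eye into
    // that eye's texture, using the SDK-provided pose. Body elided.
    static void drawSceneForEye(ovrEyeType /*eye*/, const ovrPosef& /*pose*/)
    {
        // ... our engine's per-eye scene rendering would go here ...
    }

    void renderOneFrame(ovrHmd hmd,
                        const ovrEyeRenderDesc eyeRenderDesc[2],
                        const ovrTexture eyeTextures[2])
    {
        // The SDK owns frame timing from here on.
        ovrHmd_BeginFrame(hmd, 0);

        // Ask the SDK where each eye will be at the predicted display time.
        // (ovrHmd_GetEyePoses appeared partway through the 0.4.x line; earlier
        // releases used per-eye ovrHmd_GetEyePose -- one example of the
        // minor-version churn mentioned above.)
        ovrVector3f hmdToEyeViewOffset[2] = { eyeRenderDesc[0].HmdToEyeViewOffset,
                                              eyeRenderDesc[1].HmdToEyeViewOffset };
        ovrPosef eyePoses[2];
        ovrHmd_GetEyePoses(hmd, 0, hmdToEyeViewOffset, eyePoses, nullptr);

        // Our only job: fill the per-eye render targets.
        for (int i = 0; i < ovrEye_Count; ++i)
        {
            ovrEyeType eye = hmd->EyeRenderOrder[i];
            drawSceneForEye(eye, eyePoses[eye]);
        }

        // The SDK applies distortion (and timewarp) and presents the frame.
        ovrHmd_EndFrame(hmd, eyePoses, eyeTextures);
    }

The last step is where most of the friction comes from: ovrHmd_EndFrame wants to own the buffer swap, which an engine with its own frame loop also expects to do, so the more of our pipeline we hand to the SDK, the more of the engine’s assumptions we have to unpick.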
There were also some problems finding someplace for my code to live; the code bases I’m working from are big, even after paring them down, and have moderate hardware demands; they proved too much for my poor laptop and the initial spare lab workstation. However, the new computer in my office has more than enough hard drive space and GPU muscle for my current needs.
There’s also a shift away from “I read a bunch of interesting papers” posts as the semester goes on. This is because much of my reading time was taken up by other classes, in areas not immediately relevant to this work. I expect that next semester, a lighter class load will leave more time for reading in this space.
Next Steps
There’s some polish to be done on the code: adding experimenter controls and cleaning up the participant stimulus. Then we can pilot with different point-cloud environments and investigate different calibration procedures. Then, proper experiments.