The Coding Continues

Sidetracked a bit by classwork and feature creep, the coding continues.

The features are for more efficient debugging and development flow — things I was hoping to put off until after the pilot, but that needed to happen eventually anyway.

There’ll be a burst of homework late this week, so I probably won’t finish cleaning up the stimulus environment and cramming everything back into the Oculus until next week.

The classwork is partly selecting a project.  Anything around the lab we need optimized?

Still coding

Still building the base code. Rediscovering the joys of const correctness and circular inheritance. Should be done this week.

Also found something in the papers below that I should look into later — they suggest that people perceive a “stable” scene even under varying “camera calibrations”. It looks like there may be a series of papers to chase down to get the full idea, but the gist (from the abstract of the 2006 paper):

“[…] observers are more willing to adjust their estimate of interocular separation or distance walked than to accept that the scene has changed in size.”

Which sounds like active stimulus overriding past experience. Which is maybe a constructive way to think about adaptability.


  1. Gilson, S. J., Fitzgibbon, A. W., & Glennerster, A. (2011). An automated calibration method for non-see-through head mounted displays. Journal of Neuroscience Methods, 199(2), 328-335.

  2. Glennerster, A., Tcheang, L., Gilson, S. J., Fitzgibbon, A. W., & Parker, A. J. (2006). Humans ignore motion and stereo cues in favor of a fictional stable world. Current Biology, 16(4), 428-432.

Building Code

I’m in the middle of building some C++ OpenGL foundational stuff. I expect it to be done in a week, week-and-a-half.

The goal is to have something lightweight for quick prototyping, without disrupting the lab’s main codebase. Exotic input (Leap, Kinect) and data (large pointclouds) may not make it into the initial version, but the goal is to leave clear paths for adding them (see the sketch below).
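By “clear paths” I mean something like the following extension point. This is a minimal sketch; the names and the glm dependency are my own invention, not anything that exists in the lab code yet:

    #include <glm/glm.hpp>

    // Hypothetical extension point: exotic devices (Leap, Kinect, ...) would
    // implement this and register with the core loop, so the base build stays
    // free of their dependencies until we actually need them.
    struct InputSource {
        virtual ~InputSource() = default;
        virtual void poll() = 0;             // called once per frame
        virtual glm::mat4 pose() const = 0;  // latest tracked pose, if any
    };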

I also looked into:
– streaming from various mobile video sources: found some Android source and vague indications that a Raspberry Pi can be used with effort, but no clear guarantees of performance
– ways to rig passive steadicam / gimbal mounts: found three main designs (an inverted “U”, gyroscope-style with a T-bar, and a “bow + pendulum”), but there are open questions about where one wants the center of mass. It makes me want to rig simple simulations to decide whether the different designs do what we need (a rough sketch of what that might look like follows this list).
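The simulation wouldn’t need to be fancy. Treating a rig as a damped physical pendulum with an adjustable pivot-to-center-of-mass distance might answer the basic “does it settle, and how fast” question. A minimal sketch, with every constant invented for illustration:

    #include <cmath>
    #include <cstdio>

    // Toy model: the rig as a damped physical pendulum. The three designs
    // mostly differ in where the center of mass sits relative to the pivot.
    int main() {
        const double g = 9.81;         // gravity, m/s^2
        const double comOffset = 0.15; // pivot-to-CoM distance, m (made up)
        const double damping = 0.5;    // lumped friction/air drag, 1/s (made up)
        const double dt = 0.001;       // integration step, s

        double theta = 0.3;  // initial tilt, rad (a bump while walking)
        double omega = 0.0;  // angular velocity, rad/s

        for (int i = 0; i <= 5000; ++i) {
            // Physical pendulum: theta'' = -(g / L) * sin(theta) - c * theta'
            double alpha = -(g / comOffset) * std::sin(theta) - damping * omega;
            omega += alpha * dt;  // semi-implicit Euler keeps this stable
            theta += omega * dt;
            if (i % 1000 == 0)
                std::printf("t=%.1fs theta=%.4f rad\n", i * dt, theta);
        }
        return 0;
    }

Swapping comOffset and damping per design would give a first feel for settling behavior before bothering with a real multibody sim.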

I don’t have enough organized thoughts on these for a writeup here, but may dump some links into the Google doc.

Oculus 0.4.3, 0.4.4 want you to draw a tiny rectangle

I found the bug. I’ll do a proper writeup in this space soon, but the gist: the SDK does latency testing as part of its timewarp head-pose prediction, by drawing a small rectangle of varying color in the upper right (out of the user’s view).

Our code doesn’t do this, because it’s only present in some of the D3D-based example code (TinyWorld), not in the OpenGL-based documentation PDF.
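For reference, my best guess at the GL equivalent of what their D3D sample does; the ovrHmd_GetLatencyTest2DrawColor name and signature are what I remember from the 0.4.x headers, so treat the whole thing as a sketch to verify, not a fix:

    // Sketch: ask the SDK what color the latency-test quad should be this
    // frame, then draw it as a tiny scissored clear in the top-right corner.
    void drawLatencyTestQuad(ovrHmd hmd, int fbWidth, int fbHeight)
    {
        unsigned char rgb[3];
        if (!ovrHmd_GetLatencyTest2DrawColor(hmd, rgb))
            return;  // latency tester not active this frame

        glEnable(GL_SCISSOR_TEST);
        // GL scissor origin is bottom-left, so this is ~16 px at the top-right.
        glScissor(fbWidth - 16, fbHeight - 16, 16, 16);
        glClearColor(rgb[0] / 255.0f, rgb[1] / 255.0f, rgb[2] / 255.0f, 1.0f);
        glClear(GL_COLOR_BUFFER_BIT);
        glDisable(GL_SCISSOR_TEST);
    }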

I bet the other demo app code has some surprises lurking, as well.

More thorough writeup in this space, soon.

Oculus lib wrapped; survey review forthcoming

Built the bulk of a simple wrapper for the Oculus lib; it should make debugging faster. It needs a review pass for general organization and naming if I want to share it with others, and I still have questions about what happens when attaching to a window. There’s a weird 2×2 of possible rendering states (direct vs. extended mode, distortion done by the lib vs. in the app), and I don’t see how a state gets selected, how to test which state their tool is requesting, or which params to pass to which function in each of the four states. The docs don’t seem to have answers; their code might, or some quick tests should elucidate. I also still need to do something with their tracking.
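One partial answer to the “which state” question that I should try: if I’m remembering the 0.4.x headers right, there’s a cap bit for the direct-vs-extended half of the matrix. A sketch, to verify:

    // Assumption: ovrHmdCap_ExtendDesktop is set when the HMD appears as an
    // extended desktop display, and clear when the runtime is in direct mode.
    bool extendedMode = (hmd->HmdCaps & ovrHmdCap_ExtendDesktop) != 0;

    // The other axis is our choice, not a query: call ovrHmd_ConfigureRendering
    // for lib-side distortion, or build our own distortion mesh in the app.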

But the end result is a lib that bottles up all the Oculus stuff, allowing bugs to be tested in isolation, and hopefully protecting other code from future lib changes.
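For flavor, the interface I’m converging on looks roughly like this; the class, the method names, and the glm types are mine rather than the SDK’s, and only the two members are real ovr* types:

    #include <OVR_CAPI.h>
    #include <glm/glm.hpp>

    // Rough shape of the wrapper: every ovr* call lives behind this class,
    // so Oculus bugs can be probed in isolation and SDK churn stays contained.
    class RiftContext {
    public:
        bool init();                        // ovr_Initialize + ovrHmd_Create, debug-HMD fallback
        void beginFrame();                  // frame timing + head-pose query
        glm::mat4 eyeView(int eye) const;   // tracked pose -> view matrix
        glm::mat4 eyeProjection(int eye) const;
        void endFrame();                    // hand eye textures to the SDK for distortion + present
        ~RiftContext();                     // ovrHmd_Destroy + ovr_Shutdown

    private:
        ovrHmd m_hmd = nullptr;
        ovrEyeRenderDesc m_eyeDesc[2];
    };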

Also sifting through a survey paper. Not quite ready for a writeup, yet.

Initial Papers: Accommodation and Age

It seems to be well understood that accommodation declines with age, but I’d like a strong reference to cite, and a little better understanding of the how and why.

Still don’t have a ref I really like for citing. Mordi is maybe okay.

What I found:

When optometrists want a citation for this, they reach for a set of slightly conflicting papers. The papers seem to agree that the decline ends somewhere in the 50–60 age range … mostly.

The Mordi papers seem to be a good overview, though they may lean a bit strongly toward the Hess-Gullstrand (lens-centric) theory; they set the tone for my current understanding of things, so if they have a bias, I’ve probably inherited it. They cite at least one other paper they claim is a good overview.

Their take is that most conflicting results are due to small sample sizes and the general difficulty of isolating things for measurement (which might partly just be arguing for their larger sample size and chosen means of measure, but their arguments are plausible to my naiveté). For example, in their dynamics paper, they suggest there’s a linear region of response speed bookended by nonlinear regions (due to different biomechanics), and that many studies mix responses from the linear and nonlinear regions; this skews results for participants with reduced accommodative range, where the nonlinear regions account for a larger relative portion of the total. So, uh, things are fraught with complicated subtleties.

Seems most modern work is either deciding why static accommodation loss happens, or measuring dynamic aspects with some novel tech.

Loss is modeled either by the Hess-Gullstrand theory, which thinks it’s the lens, or the Duane-Fincham extra-lenticular theory, which focuses more on the ciliary muscle, though I think it was Strenk who suggested the muscle story could coexist with Hess-Gullstrand. It’s surprising that things are still so foggy after so many years of study and methods of inquiry (Glasser takes lenses from cadavers and manipulates them in complete isolation; other studies use IR, lasers, or ultrasound for high-frequency imaging of eyes in situ). Medical science sounds messy, on several levels.

Also there’s a Hung-Semmlow “model of accommodation”. (Ophthalmology is a land of hyphens.)

Also, “Duane’s curve” may be the standard against which amplitude of accommodation is compared, though it’s from 1912 and presumably based on subjective measures. (Future reading?)

Other factors explored include increased pupillary response with age (contraction? enough to change observed luminance?), what sounds like an absence of small oscillations in steady-state focus, and changes to tonic accommodation.

I have a vague sense that it might be worth measuring tonic accommodation, but no concrete reason other than that it changes between individuals. Fairly sure other VR studies have done this, but I don’t remember any findings.

Also maybe worth investigating: Mordi claims the decline’s apparent extension to age 60, instead of 50, is due to a “depth-of-focus” contamination effect. Maybe important? They cite a paper.

  1. Glasser, A., & Campbell, M. C. W. (1998). Presbyopia and the optical changes in the human crystalline lens with age. Vision Research, 38(2), 209–229. doi:10.1016/S0042-6989(97)00102-8

  2. Heron, G., Charman, W. N., & Schor, C. M. (2001). Age changes in the interactions between the accommodation and vergence systems. Optometry and Vision Science : Official Publication of the American Academy of Optometry, 78(10), 754–762. doi:10.1097/00006324-200110000-00015

  3. Kasthurirangan, S., & Glasser, A. (2006). Age related changes in accommodative dynamics in humans. Vision Research, 46, 1507–1519. doi:10.1016/j.visres.2005.11.012

  4. Mordi, J. A., & Ciuffreda, K. J. (1998). Static aspects of accommodation: Age and presbyopia. Vision Research, 38(11), 1643–1653.

  5. Mordi, J. A., & Ciuffreda, K. J. (2004). Dynamic aspects of accommodation: Age and presbyopia. Vision Research, 44(6), 591–601. doi:10.1016/j.visres.2003.07.014

  6. Ramsdale, C., & Charman, W. N. (1989). A longitudinal study of the changes in the static accommodation response. Ophthalmic & Physiological Optics : The Journal of the British College of Ophthalmic Opticians (Optometrists), 9, 255–263.

  7. Schaeffel, F., Wilhelm, H., & Zrenner, E. (1993). Inter-individual variability in the dynamics of natural accommodation in humans: relation to age and refractive errors. The Journal of Physiology, 461, 301–320.

  8. Strenk, S. A., Semmlow, J. L., Strenk, L. M., Munoz, P., Gronlund-Jacob, J., & DeMarco, J. K. (1999). Age-related changes in human ciliary muscle and lens: a magnetic resonance imaging study. Investigative Ophthalmology & Visual Science, 40, 1162–1169.

Reference Overview: HMD Calibration and Its Effects on Distance Judgments

Initial paper:

Kuhl, S. A., Thompson, W. B., & Creem-Regehr, S. H. (2009). HMD calibration and its effects on distance judgments. ACM Transactions on Applied Perception (TAP), 6(3), 19.

Experiments testing distance estimation subject to three potential miscalibrations in HMDs: pitch, pincushion distortion, and minification/magnification via FOV. Only FOV is seen to cause a change. Calibration procedures are suggested; the gist is to match against real-world objects, popping the HMD on and off.

List of references, grouped by topic and ordered (loosely) by novelty vs related papers, usefulness, and whim:

— horizon / tilt different in VR / Real?
OOI, T. L., WU, B., AND HE, Z. J. 2001. Distance determination by the angular declination below the horizon. Nature 414, 197–200.
ANDRE, J. AND ROGERS, S. 2006. Using verbal and blind-walking distance estimates to investigate the two visual systems hypothesis. Percept. Psychophys. 68, 3, 353–361.

— support for effect of horizon position / tilt
MESSING, R. AND DURGIN, F. 2005. Distance perception and the visual horizon in head-mounted displays. ACM Trans. Appl. Percept. 2, 3, 234–250.
RICHARDSON, A. R. AND WALLER, D. 2005. The effect of feedback training on distance estimation in virtual environments. Appl. Cognitive Psych. 19, 1089–1108.
GARDNER, P. L. AND MON-WILLIAMS, M. 2001. Vertical gaze angle: Absolute height-in-scene information for the programming of prehension. Exper. Brain Res. 136, 3, 379–385.

— depth in photographs (2D?)
SMITH, O. W. 1958a. Comparison of apparent depth in a photograph viewed from two distances. Perceptual Motor Skills 8, 79–81.
SMITH, O. W. 1958b. Judgments of size and distance in photographs. Amer. J. Psych. 71, 3, 529–538.
KRAFT, R. N. AND GREEN, J. S. 1989. Distance perception as a function of photographic area of view. Percept. Psychophys. 45, 4, 459–466.

— AR calibration (vs real world objects)
MCGARRITY, E. AND TUCERYAN, M. 1999. A method for calibrating see-through head-mounted displays for AR. In Proceedings of the IEEE and ACM International Workshop on Augmented Reality. IEEE, Los Alamitos, CA, 75–84.
GILSON, S. J., FITZGIBBON, A. W., AND GLENNERSTER, A. 2008. Spatial calibration of an optical see-through head mounted display. J. Neurosci. Methods 173, 1, 140–146.
GENC, Y., TUCERYAN, M., AND NAVAB, N. 2002. Practical solutions for calibration of optical see-through devices. In Proceedings of the 1st IEEE and ACM International Symposium on Mixed and Augmented Reality (ISMAR’02). IEEE, Los Alamitos, CA.
AZUMA, R. AND BISHOP, G. 1994. Improving static and dynamic registration in an optical see-through HMD. In Proceedings of the 21st Annual Conference on Computer Graphics and Interactive Techniques (SIGGRAPH’94). ACM, New York, 197–204.

— effects of miscalibration / display properties
KUHL, S. A., CREEM-REGEHR, S. H., AND THOMPSON, W. B. 2008. Recalibration of rotational locomotion in immersive virtual environments. ACM Trans. Appl. Percept. 5, 3.
KUHL, S. A., THOMPSON, W. B., AND CREEM-REGEHR, S. H. 2006. Minification influences spatial judgments in virtual environments. In Proceedings of the Symposium on Applied Perception in Graphics and Visualization. ACM, New York, 15–19.
KUHL, S. A., THOMPSON, W. B., AND CREEM-REGEHR, S. H. 2008. HMD calibration and its effects on distance judgments. In Proceedings of the Symposium on Applied Perception in Graphics and Visualization. ACM, New York.
WILLEMSEN, P., COLTON, M. B., CREEM-REGEHR, S. H., AND THOMPSON, W. B. 2009. The effects of head-mounted display mechanical properties and field-of-view on distance judgments in virtual environments. ACM Trans. Appl. Percept. 6, 2, 8:1–8:14.
WILLEMSEN, P., GOOCH, A. A., THOMPSON, W. B., AND CREEM-REGEHR, S. H. 2008. Effects of stereo viewing conditions on distance perception in virtual environments. Presence: Teleoperat. Virtual Environ. 17, 1, 91–101.
LUMSDEN, E. A. 1983. Perception of radial distance as a function of magnification and truncation of depicted spatial layout. Percept. Psychophys. 33, 2, 177–182.

— effects of feedback (lasts for a week?)
MOHLER, B. J., CREEM-REGEHR, S. H., AND THOMPSON, W. B. 2006. The influence of feedback on egocentric distance judgments in real and virtual environments. In Proceedings of the Symposium on Applied Perception in Graphics and Visualization. ACM, New York, 9–14.

— visual quality
THOMPSON, W. B., WILLEMSEN, P., GOOCH, A. A., CREEM-REGEHR, S. H., LOOMIS, J. M., AND BEALL, A. C. 2004. Does the quality of the computer graphics matter when judging distances in visually immersive environments? Presence: Teleoperat. Virtual Environ. 13, 5, 560–571.

— distortion correction
WATSON, B. A. AND HODGES, L. F. 1995. Using texture maps to correct for optical distortion in head-mounted displays. In Proceedings of the IEEE Conference on Virtual Reality. IEEE, Los Alamitos, CA, 172–178.
BAX, M. R. 2004. Real-time lens distortion correction: 3D video graphics cards are good for more than games. Stanford Electr. Eng. Comput. Sci. Res. J.
ROBINETT, W. AND ROLLAND, J. P. 1992. A computational model for the stereoscopic optics of a head-mounted display. Presence: Teleoperat. Virtual Environ. 1, 1, 45–62.

— camera calibration (spherical distortion, maybe some vision stuff)
TSAI, R. Y. 1987. A versatile camera calibration technique for high-accuracy 3D machine vision metrology using off-the-shelf TV cameras and lenses. IEEE J. Rob. Autom. 3, 4, 323–344.
WENG, J., COHEN, P., AND HERNIOU, M. 1992. Camera calibration with distortion models and accuracy evaluation. IEEE Trans. Patt. Anal. Mach. Intell. 14, 10, 965–980.

— “distance underestimation exists”
WITMER, B. G. AND KLINE, P. B. 1998. Judging perceived and traversed distance in virtual environments. Presence: Teleoperat. Virtual Environ. 7, 2, 144–167.
KNAPP, J. 1999. The visual perception of egocentric distance in virtual environments. Ph.D. thesis, University of California at Santa Barbara.

— measures of perceived distance
SAHM, C. S., CREEM-REGEHR, S. H., THOMPSON, W. B., AND WILLEMSEN, P. 2005. Throwing versus walking as indicators of distance perception in real and virtual environments. ACM Trans. Appl. Percept. 1, 3, 35–45.

—- NOT FOUND —-

CAMPOS, J., FREITAS, P., TURNER, E.,WONG, M., AND SUN, H.-J. 2007. The effect of optical magnification/minimization on distance estimation by stationary and walking observers. J. Vision 7, 9, 1028a.

ELLIS, S. R. AND NEMIRE, K. 1993. A subjective technique for calibration of lines of sight in closed virtual environment viewing systems. In Proceedings of the Society for Information Display. Society for Information Display, Campbell, CA.

SEDGWICK, H. A. 1983. Environment-centered representation of spatial layout: Available information from texture and perspective. In Human and Machine Vision, J. Beck, B. Hope, and A. Rosenfeld, Eds. Academic Press, San Diego, CA, 425–458.

(also of note: Sedgwick seems attached to work on distance judgments vs. spatial relations / disruptions)

GRUTZMACHER, R. P., ANDRE, J. T., AND OWENS, D. A. 1997. Gaze inclination: A source of oculomotor information for distance perception. In Proceedings of the 9th International Conference on Perception and Action (Studies in Perception and Action IV). Lawrence Erlbaum Associates, Hillsdale, NJ, 229–232.

STOPER, A. E. 1999. Height and extent: Two kinds of perception. In Ecological Approaches to Cognition: Essays in Honor of Ulric Neisser, E. Winograd, R. Fivush, and W. Hirst, Eds. Erlbaum, Hillsdale, NJ.

(book)
LOOMIS, J. M. AND KNAPP, J. 2003. Visual perception of egocentric distance in real and virtual environments. In Virtual and Adaptive Environments, L. J. Hettinger and M. W. Haas, Eds. Erlbaum, Mahwah, NJ, 21–46.

(book)
ROGERS, S. 1995. Perceiving pictorial space. In Perception of Space and Motion, W. Epstein and S. Rogers, Eds. Academic Press, San Diego, CA, 119–163.

(requested)
RINALDUCCI, E. J., MAPES, D., CINQ-MARS, S. G., AND HIGGINS, K. E. 1996. Determining the field of view in HMDs: A psychophysical method. Presence: Teleoperat. Virtual Environ. 5, 3, 353–356.

(misc find, not in refs)
Hendrix, C., & Barfield, W. (1994). Perceptual biases in spatial judgements as a function of eyepoint elevation angle and geometric field of view (No. 941441). SAE Technical Paper.

(misc find, not in refs)
Blackwell Handbook of Sensation and Perception
http://onlinelibrary.wiley.com.ezproxy.library.wisc.edu/book/10.1002/9780470753477

Phenomenal Regression: First Look

A participant views a circle placed on a table in front of them, and is asked to describe what they see.  Their answer lies somewhere between what geometry tells us the retinal image should be (or, what we might render in a virtual world), and the “real” version of the circle, undistorted by perspective.  Back in the ’30s, Thouless observed this, and dubbed it “phenomenal regression” — that the observed, “phenomenal” shape is not the expected retinal image, but rather “regresses” to the “real” shape.
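Thouless also quantified this. As I understand it (worth double-checking against the 1931 papers before reusing), his regression index is

    TR = (log P − log S) / (log R − log S)

where S is the value perspective predicts (the retinal/“stimulus” shape), R is the real object’s value, and P is the phenomenal value the participant reports. TR = 0 would be pure perspective; TR = 1 would be complete regression to the real object.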

[Figure: an example of phenomenal regression, from Elner & Wright, 2014.]

This makes some sense for shapes (and orientations) simple enough that the perspective transformation can be described as compression along one axis; that is to say, when the “real” form is unambiguously just a circle, because other orientations are significantly less interesting.  Or perhaps the “real” shape is the one aligned with the plane the object rests on — a mental estimation of an overhead view of the table?

Thouless claims it’s not simply a familiar form, though that experiment bears another read to convince me.  There’s also a bit on properties like brightness/color; Thouless seems to imply shape is not the only property for which we exhibit this regression, and that seems to further confuse how one constructs the “real” form.

Elner and Wright have recently (2014) explored using the concept as a measure of “spatial quality” in virtual environments; they introduce regression as “an involuntary reaction that cannot be defeated even when pointed out”, which could make for a compelling measure.  Their experiment is inconclusive (virtual cues were possibly influenced by a physical tripod), and I’ll need to become more familiar with the lit on size constancy to understand why they claim so strongly that it’s not what they (or Thouless) are measuring.  But it’s a thorough paper, particularly the related work and analysis; I suspect they do know what they’re doing, and I should probably revisit this sometime to better understand the implications.

  1. Elner, K. W., & Wright, H. (2014). Phenomenal regression to the real object in physical and virtual worlds. Virtual Reality, 1-11.

  2. Thouless, R. H. (1931). Phenomenal regression to the real object. I. British Journal of Psychology. General Section, 21(4), 339-359.

  3. Thouless, R. H. (1931). Phenomenal regression to the “real” object. II. British Journal of Psychology. General Section, 22(1), 1–30.

End of semester recap

Accomplishments

This semester, I’ve explored issues relevant to stereoscopic rendering in general, and the Oculus Rift in particular.  I’ve also explored the current state of software packages that offer rendering to the Rift.  We’re on the cusp of having a viable test platform for our calibration experiments, and I have a better understanding of the problem we’re trying to solve.

There was also some investigation into what the Oculus SDK does with its calibrated values, and whether we can leverage them for our investigations.  The answer is mostly no, though we may need to force their FOV to some fixed value before we manipulate ours.
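If we do pin their FOV, the mechanics are probably something like the sketch below. The ovrFovPort fields and ovrMatrix4f_Projection are from the 0.4.x C API as I remember it, and the fixed angle is a placeholder; verify everything against the headers:

    #include <OVR_CAPI.h>
    #include <cmath>

    // Sketch: override the per-user calibrated FOV with a fixed, symmetric one
    // before building projection matrices, so our FOV manipulations start from
    // a known baseline rather than from whatever the config tool stored.
    ovrMatrix4f fixedProjection(float fovRadians, float zNear, float zFar)
    {
        ovrFovPort fov;
        fov.UpTan = fov.DownTan = fov.LeftTan = fov.RightTan =
            tanf(0.5f * fovRadians);
        // The same fov would also go to ovrHmd_ConfigureRendering for both
        // eyes, so the SDK's distortion matches what we actually render.
        return ovrMatrix4f_Projection(fov, zNear, zFar, /*rightHanded=*/1);
    }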

Challenges

There are a lot of options for rendering to the Rift, and they bore exploring.

A fair chunk of time was spent repurposing code inherited from other lab projects — becoming familiar with its structure, and paring it down to be a bit more nimble and debuggable.  Most of “nimble” here is file size: some of our projects have huge data sets or library collections that weren’t immediately relevant to the current effort (and didn’t fit in the storage I had available).  Part is restructuring things to not share files with other actively developed projects, so my changes don’t compete with other lab members’.  This is a normal part of code reuse, and there’s nothing about this code that made it especially difficult — it just took time to decide what everything did, and what parts I needed.

Engines like Unity and Unreal seemed promising, but weren’t quite ready.

The Oculus SDK is in a phase of rapid development.  New versions usually provide enough improvement that we want to use them, and enough changes that reintegration takes some effort.  The major shift was DK1 to DK2, but the minor shifts still cause problems (the newest version may be the source of some current code woes, but may solve issues with OpenGL direct rendering, as well as jitter in Unity; both of these could make development much faster).

Also, we’d like to use as much of the Oculus-supplied rendering pipeline as possible (for easier reproducibility, and thereby greater validity), but it’s been a pain to wedge more of our changes into it, or more of it into our in-lab engine — particularly as it keeps changing.  We’re currently at a relatively happy medium.

There were also some problems finding someplace for my code to live; the code bases I’m working from are big, even after paring them down, and have moderate hardware demands; they proved too much for my poor laptop and the initial spare lab workstation.  However, the new computer in my office has more than enough hard drive space and GPU muscle for my current needs.

There’s also a shift away from “I read a bunch of interesting papers” posts as the semester goes on.  This is because much of my reading time was taken by other classes, in areas not immediately relevant to this work. I expect that next semester, a lighter class load will leave more time for reading in this space.

Next Steps

There’s some polish to be done on the code — adding experimenter controls and cleaning up participant stimulus.  Then we can pilot with different pointcloud environments, and investigate different calibration procedures.  Then, proper experiments.