Rift: Modifying the Projection / FOV part 1

I’ve attempted to recreate the FOV used by the Rift by calling CalculateFovFromEyePosition with defaults found in OVR_Stereo.cpp (mostly in CreateDebugHMDInfo):

 info.ProductName = "Oculus Rift DK2"; 
 info.ResolutionInPixels = Sizei ( 1920, 1080 );
 info.ScreenSizeInMeters = Sizef ( 0.12576f, 0.07074f );
 info.ScreenGapSizeInMeters = 0.0f;
 info.CenterFromTopInMeters = info.ScreenSizeInMeters.h * 0.5f;
 info.LensSeparationInMeters = 0.0635f;
 info.Shutter.Type = HmdShutter_RollingRightToLeft;
 info.Shutter.VsyncToNextVsync = ( 1.0f / 76.0f );
 info.Shutter.VsyncToFirstScanline = 0.0000273f;
 info.Shutter.FirstScanlineToLastScanline = 0.0131033f;
 info.Shutter.PixelSettleTime = 0.0f;
 info.Shutter.PixelPersistence = 0.18f * info.Shutter.VsyncToNextVsync;


 case EyeCup_DKHD2A:
 renderInfo.LensDiameterInMeters = 0.035f;
 renderInfo.LensSurfaceToMidplateInMeters = 0.02357f;
 // Not strictly lens-specific, but still wise to set a reasonable default for relief.
 renderInfo.EyeLeft.ReliefInMeters = 0.010f; 
 renderInfo.EyeRight.ReliefInMeters = 0.010f; 

 renderInfo.EyeLeft.NoseToPupilInMeters = 0.032f; 

const float OVR_DEFAULT_EXTRA_EYE_ROTATION = 30.0f * MATH_FLOAT_DEGREETORADFACTOR;
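For reference, my call site looks roughly like this (a sketch: the horizontal-offset derivation is my guess at how pupil position relates to lens separation, not the SDK’s exact code):

 // Rough sketch of my call site (the offset derivation is my guess, not the SDK's):
 // position the left pupil relative to its lens center, then ask for the FOV.
 float offsetToRightInMeters = renderInfo.EyeLeft.NoseToPupilInMeters
                             - 0.5f * info.LensSeparationInMeters;   // 0.032 - 0.03175

 FovPort fovLeft = CalculateFovFromEyePosition(
     renderInfo.EyeLeft.ReliefInMeters,      // 0.010f
     offsetToRightInMeters,
     0.0f,                                   // assuming no vertical offset
     renderInfo.LensDiameterInMeters,        // 0.035f
     OVR_DEFAULT_EXTRA_EYE_ROTATION );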

I expected to reproduce the default FOV, which yields a scene like so:

[screenshot: not calling the function]

What I see instead is this:

[screenshot: calling the function]

 

It needs some work.

That work may be probing the values via debugger as the default FOV is built; or working through the code in that function and figuring out how things could go awry as displayed above; or arguing with include files some more, so I can probe values via code. (Or finding some bug in my code that’s doing something dumb.)

But soon that work will have to become bypassing the Oculus SDK as much as possible; it’d be nice to standardize to their system, but it’d be even nicer to actually get this working.

More Additions to Mountain

This week I added some models and made the mountain about four times bigger. Among the additions are a few trees, coins, flags, a smaller jump, a tunnel to ski through, and a barrier at the bottom of the hill to stop the user. I also added snow, a reset button, and some temporary textures, and set a terminal velocity so the user doesn’t go flying through obstacles and coins. The sphere was also made bigger to help the user pick up coins. I chose to delay making the side boundaries for the hill because having them in place would make modeling the obstacles in the middle of the mountain more difficult, so that will be one of the last things I do.

Here are some screenshots of the current state of the game:

[screenshots: unityScreenshot1, unityScreenshot2]

Right now there is plenty of room to add more obstacles, so that is what I will focus on for next week. Some possibilities are more trees/jumps/flags, an underground tunnel, maybe a hoop to jump through, and other obstacles as I think of them. Also, I will look into placing skis where the sphere currently is, so that the user knows where to stand while trying to ski down the mountain.

Added Simple Mountain Model

This week, I created a simple mountain model with one jump and added it to the unity project. The sphere seems to respond correctly to the terrain. At first, when the sphere was rolling really fast, it would go through the terrain on perpendicular (or near perpendicular) collisions and fall into the abyss. I think the high speeds were causing the glitch, so I slowed the sphere down and made the terrain thicker. It hasn’t had any problems since then, so hopefully that won’t be an issue in the future. I also changed the lighting color, but that was just to make it a little easier to look at. Sam and I also went through the project and got it added to the Cave Shared directory.

Here are some screenshots of the current state of the game:

[screenshots: unityScreenshot, unityScreenshot2]

For next week, I am going to try to make a more complex mountain model as this one was just a test model. I will hopefully add in trees, more jumps, and obstacles to make the game more entertaining. And I will have something at the bottom of the hill to stop the user from flying off the edge.

The Oculus Rift: CalculateFovFromEyePosition

Brief entry; I’m still kind of beat from quals.

(Maybe I’ll flesh this entry out into a description of the Rift’s particular rendering peculiarities sometime — seems roughly three stages: virtual-to-screen, screen-to-lens, and lens-to-eye. But for now:)

Deep in the 0.4.2 Oculus Rift SDK lurk functions for setting display render properties. One of these functions has some illustrative comments.

FovPort CalculateFovFromEyePosition ( float eyeReliefInMeters,
                                      float offsetToRightInMeters,
                                      float offsetDownwardsInMeters,
                                      float lensDiameterInMeters,
                                      float extraEyeRotationInRadians /*= 0.0f*/ )

Returned is an FovPort, which describes a viewport’s field-of-view as the tangents of the angles between the viewing vector and the edges of the field of view — that is, four values for up, down, left, and right. The intent is summed up in another comment:

// 2D view of things:
//       |-|            <--- offsetToRightInMeters (in this case, it is negative)
// |=======C=======|    <--- lens surface (C=center)
//  \    |       _/
//   \   R     _/
//    \  |   _/
//     \ | _/
//      \|/
//       O  <--- center of pupil

 // (technically the lens is round rather than square, so it's not correct to
 // separate vertical and horizontal like this, but it's close enough

Which shows an asymmetric view frustum determined by the eye’s position relative to the lens. This seems to describe the physical field of view through the lens onto the screen; it’s unclear what other rendering properties this might influence (render target size is implied in a comment in a calling function, and it should also be expected to influence the distortion shader), but I’ve confirmed that it affects the projection matrix.
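To make the diagram concrete, here is a simplified reading of the geometry (my sketch, not the SDK’s implementation; as the comment concedes, it treats the round lens as if it were square):

 // Simplified reading of the diagram above -- a sketch, not the SDK's code.
 struct FovPortSketch { float UpTan, DownTan, LeftTan, RightTan; };

 FovPortSketch FovFromEyePositionSketch ( float eyeRelief, float offsetToRight,
                                          float offsetDownwards, float lensDiameter )
 {
     const float halfLens = 0.5f * lensDiameter;
     FovPortSketch fov;
     // Moving the pupil right exposes more of the lens to its left, and vice versa.
     fov.LeftTan  = ( halfLens + offsetToRight )   / eyeRelief;
     fov.RightTan = ( halfLens - offsetToRight )   / eyeRelief;
     // Moving the pupil down exposes more of the lens above it.
     fov.UpTan    = ( halfLens + offsetDownwards ) / eyeRelief;
     fov.DownTan  = ( halfLens - offsetDownwards ) / eyeRelief;
     return fov;
 }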

But then it gets a bit weird:

// That's the basic looking-straight-ahead eye position relative to the lens.
// But if you look left, the pupil moves left as the eyeball rotates, which
// means you can see more to the right than this geometry suggests.
// So add in the bounds for the extra movement of the pupil.

// Beyond 30 degrees does not increase FOV because the pupil starts moving backwards more than sideways.

// The rotation of the eye is a bit more complex than a simple circle. The center of rotation
// at 13.5mm from cornea is slightly further back than the actual center of the eye.
// Additionally the rotation contains a small lateral component as the muscles pull the eye

Which is where we see extraEyeRotationInRadians put to use; this may imply an interest in eye tracking.
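As a rough illustration of that widening (my sketch, assuming the simple 13.5 mm center-of-rotation model from the comment and ignoring the small lateral muscle pull):

 #include <cmath>   // std::sin, std::cos

 // Sketch only: widen one edge's tangent to account for the pupil translating
 // sideways (and slightly backwards) as the eye rotates by 'extraRotation' radians.
 // Assumes the pupil sits ~13.5mm in front of the eye's center of rotation, per
 // the SDK comment quoted above; the real code is more involved.
 float WidenTanForEyeRotation ( float baseTan, float eyeRelief, float halfLensSize,
                                float pupilOffset, float extraRotation )
 {
     const float pupilToRotationCenter = 0.0135f;                                // 13.5mm
     float lateral  = pupilToRotationCenter * std::sin( extraRotation );         // pupil slides toward the opposite edge...
     float backward = pupilToRotationCenter * ( 1.0f - std::cos( extraRotation ) ); // ...and slightly backwards
     float rotatedTan = ( halfLensSize + pupilOffset + lateral ) / ( eyeRelief + backward );
     return ( rotatedTan > baseTan ) ? rotatedTan : baseTan;
 }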
Also, in the function that calls CalculateFovFromEyePosition:

// Limit the eye-relief to 6 mm for FOV calculations since this just tends to spread off-screen
 // and get clamped anyways on DK1 (but in Unity it continues to spreads and causes
 // unnecessarily large render targets)
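Which, in code, presumably amounts to something like this (my paraphrase, not the SDK’s actual line):

 // My paraphrase of the limit described above -- not the SDK's code:
 // never treat the eye as closer than 6mm for FOV purposes.
 float reliefForFov = ( eyeReliefInMeters > 0.006f ) ? eyeReliefInMeters : 0.006f;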

So you aren’t allowed closer than 0.006 meters. They cite render target size concerns; are there also physical or visual arguments?

How close is even physically plausible? According to a single paper I just looked up at random: “eyelash length rarely exceeds 10 mm”. So at 6 mm, it seems there’s a good chance we’re within uncomfortable eyelash-touching territory — that is, closer than wearers are likely to want to be.

More directly relevant to the current work: what are the visual ramifications for lens-induced artifacts, for the rendered scene, and for any interplay between them? It looks like the Rift has aspirations to precisely calibrate its render to eye position, down to the lateral movement of the eye as it rotates. What happens if we de-sync the virtual and physical (or maybe: scene-to-screen and lens-to-eye) aspects of the calibration they’re so meticulously constructing?

More on that next week.


references:

Thibaut, S., De Becker, E., Caisey, L., Baras, D., Karatas, S., Jammayrac, O., … & Bernard, B. A. (2010). Human eyelash characterization. British Journal of Dermatology, 162(2), 304-310.

Tracking down Rift fusion problems.

For a while I was attempting to build visual stimuli in the Rift, but they always seemed off. The sense of depth was wrong, it seemed difficult to fuse the two images at more than one small region at a time, and there were lots of candidates for why:

  • a miscalculation in the virtual eye position, resulting in the wrong binocular disparity
  • a miscalibration of the physical Rift (the lens depth is deliberately adjustable, and there are multiple choices for lens cups; our DK1 also had a bad habit of letting the screen pop out of place up to maybe an inch on the right side)
  • lack of multisampling causing a lack of sub-pixel information, which may be of particular importance considering the DK1’s low resolution
  • incorrect chromatic aberration correction causing visual acuity to suffer away from image and lens center (which could have been separate, competing problems in the case of miscalibration)
  • something wrong in the distortion shader, causing subtle, stereo-cue-destroying misalignments

Here are two images I used to test. In both images, I tried to center my view on a single “corner” point of a grid pattern, marked with a red dot; image center is marked with yellow axes.

First, from the official OculusWorldDemo:

[screenshot: Oculus Tuscany demo tile grid, focus marked]

 

So, that’s roughly where we want our eye-center red dots to be, relative to image center.

And from our code:

[screenshot: our lab limestone scene, marker 2, 1280x720, with scribbled annotations]

Which matches what I was experiencing with the undistorted images in the Rift — things were in almost the right places, but differences between the images seemed exaggerated in ways that weren’t immediately coherent.

The key difference is in the shape of each eye’s image. In the image from my code, the shape of the right eye’s image is “flipped” relative to the official demo’s; or rather, our code hadn’t flipped the right eye’s coordinate system relative to the left’s.

This is most apparent with the hard right edge — I’d written this off before as a mismatch in screen resolution between the PC and the Rift causing the screen to get cut off, but no: it turns out that, lurking deep within OculusRoomTiny (the blueprint for our Rift integration), the center offset for the right eye was being inverted before being passed to the shader. This happened well away from the rest of the rendering code, so it was easy to miss.
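The fix on our side amounted to something like this (a sketch: the names are mine, standing in for our renderer’s actual variables and shader upload, not OculusRoomTiny’s code):

 // Sketch of the fix -- names are hypothetical, not OculusRoomTiny's:
 // the lens-center offset fed to the distortion shader must be mirrored in x
 // for the right eye.
 float xCenterOffset = distortionXCenterOffset;   // as computed for the left eye
 if ( isRightEye )
     xCenterOffset = -xCenterOffset;              // previously we passed the left eye's sign for both eyes
 SetDistortionLensCenter( xCenterOffset );        // hypothetical stand-in for our shader-constant upload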

I changed our code to match — and the difference was striking. Full stereo fusion came naturally, and general visual acuity and awareness of the scene were improved. And so was awareness of a whole host of new flaws in the scene — the lack of multisampling and the screen-door effect were much more apparent, tracking errors more annoying, and errors in recreations of realistic scenes far more in focus. It’s interesting how thoroughly the distortion misalignment dominated those other visual artifacts.

Related — the misaligned distortion may have been causing vertical disparity, which I hear is the main problem when using toe-in to create stereo pairs. Vertical disparity is accused of decreasing visual acuity; perhaps this takes the form of inattention rather than blurriness, which would explain our suddenly becoming more attentive to other artifacts once the distortion shader was fixed. Maybe more in a future post.

Two conflicting results on distance estimation in virtual environments.

Two studies, both using head mounted displays and realistic environments, seem to have conflicting findings regarding distance estimation in virtual environments.

Sahm et al. (2005) find that distance estimations in the virtual world, measured via blind walking and blind throwing, show consistent and significant underestimates of about 30% relative to the real world. They use a realistic virtual environment meant to reproduce the physical experiment environment, like so:

[figure: stimulus used by Sahm et al. (2005)]

In two papers by Interrante et al. (2006, 2008), no significant difference is found between distance estimations measured by blind walking in the real and virtual worlds. They use a similar realistic environment that reproduces the physical environment:

[figure: stimulus used by Interrante et al. (2006)]

And in the later study, they also test an enlarged and a shrunken version of the virtual environment (moving the walls in or out 10%, while leaving objects like doors the same size):

[figure: stimulus used by Interrante et al. (2008)]

They find no significant difference between real and same-sized virtual world estimations; they see a significant effect in the larger virtual rooms, and a marginally significant effect in the smaller. Both resized rooms cause underestimation. This suggests that some kinds of inaccurate reproduction of (or better: deviation from?) the real world either induce distance underestimation, or prevent whatever state of sync between mental models of the virtual and physical worlds participants otherwise may have entered. It is also odd that neither resized case saw distance over-estimation, only under-estimation.

But of key interest for this post: why did one group see distance under-estimation, and the other not? This may be an interesting area of investigation — are there cues present in the one environment but not in the other? For instance, depth cues: the hallway presents a fairly strong horizon cue, while the room may not. Or is it a “realism” cue, some technique of lighting or texture handling or any other minute rendering detail? Is it specific inaccuracies in the modeling of the spaces — both were hand-made (or, in the resized case, edited), so some inaccuracies are likely inevitable. There may be some artistry in choosing where those inaccuracies are allowed to fall (and perhaps the smaller rooms in the resized case saw only a marginal effect because they chose “better” — perhaps implying a relationship between exocentric and egocentric distance estimation?). Or is it some lower-level perceptual difference: the Sahm hallway has strong black outlines separating walls from floor, whereas the Interrante room has a significantly darker floor. The specifics are unclear, but the two studies’ results suggest that some difference in the environments may be responsible for the reuse (or more accurate application?) of real-world rules or models.

It may be worth keeping an eye out for how these sorts of environments are constructed elsewhere in the literature.


references:

  1. Interrante, V., Ries, B., & Anderson, L. (2006, March). Distance perception in immersive virtual environments, revisited. In Virtual Reality Conference, 2006 (pp. 3-10). IEEE.
  2. Interrante, V., Ries, B., Lindquist, J., Kaeding, M., & Anderson, L. (2008). Elucidating factors that can facilitate veridical spatial perception in immersive virtual environments. Presence: Teleoperators and Virtual Environments, 17(2), 176-198.
  3. Sahm, C. S., Creem-Regehr, S. H., Thompson, W. B., & Willemsen, P. (2005). Throwing versus walking as indicators of distance perception in similar real and virtual environments. ACM Transactions on Applied Perception (TAP), 2(1), 35-45.

Completed Unity Tutorial

This past week I went through some Unity tutorials and got a sample game working where the user controls a sphere and rolls around the scene, collecting cubes until they are all collected. I didn’t go through the tutorials I mentioned in the last post because I found what seemed like a more relevant set, available here: http://unity3d.com/learn/tutorials/projects/roll-a-ball/introduction . I figured that, since the ski slope simulation relies on an invisible ball rolling down a hill, this would be more relevant than the first-person shooter game I mentioned last week. However, I might still go through a few of those tutorials down the road, as they did seem to cover some useful things.

Here is a screenshot of the Unity game:

[screenshot: unityScreenshot]

For things to work on for next week, I am hoping to make a new (possibly larger) model for the mountain, import it into Unity, and get the sphere’s physics to properly respond to the mountain’s terrain. Also, I will hopefully get the project added to my GitHub account.

disabling the HSW in Oculus libOVR 0.4.2

Oculus Rift DK2 and SDK 0.4.2 bring some exciting improvements, and some small problems.

The SDK has switched to a new C API, which probably means restructuring our existing Rift code.

It also shows this at the start of every app:

[screenshot: Oculus 0.4.2 Health and Safety Warning]

HSW shown atop sample code from the Oculus forums.

… which introduces something like a four-second delay to the start of any application. That’s a problem for a developer, who might run their app dozens or hundreds of times in a few hours. This is only an issue when using libOVR’s internal rendering engine (though I assume Oculus frowns on distributing software without something similar).

It looks like there’s some intention to allow it to be disabled at runtime, via:

ovrHmd_EnableHSWDisplaySDKRender( hmd, false );

A good candidate to be triggered with a _DEBUG preprocessor define or something . . . but this isn’t currently exposed.
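If that call does become reachable from client code, the guard I have in mind would look something like this (a sketch under that assumption; whether the function is callable this way in 0.4.2 is exactly what isn’t clear):

 // Sketch only: assumes ovrHmd_EnableHSWDisplaySDKRender becomes callable from
 // client code, which it currently isn't via the public headers.
 #ifdef _DEBUG
     ovrHmd_EnableHSWDisplaySDKRender( hmd, false );   // skip the HSW in debug builds only
 #endif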

The next easiest way seems to be toggling a #define at line 64 of /CAPI/CAPI_HSWDisplay.cpp:

#if !defined(HSWDISPLAY_DEFAULT_ENABLED)
     #define HSWDISPLAY_DEFAULT_ENABLED 0  // this used to be a 1
#endif

… then recompiling libOVR. Which isn’t so bad — VS project files are provided, and worked with only minor tinkering (I had to add the DirectX SDK executables directory to the “VC++ Directories” section of the project properties).

Recompiling the SDK works for what we’re doing, and it’s not hard, and we might need to modify libOVR eventually anyway . . . but it’s still a little silly.

Now that libOVR is building alongside some sample code, there’s some spelunking to do through OculusWorldDemo to find how the new API suggests we modify eye parameters. Digging into the new C API, it looks like there may be other new (C++?) classes inside . . . and that’s probably deeper than we’re meant to access. Some shallower functions skimmed from OculusWorldDemo are currently being investigated.

(Note that this was actually tested with 0.4.1; I still have to install the DirectX SDK on this other machine before testing against 0.4.2. I’ll probably remove this note once that’s done.)

Ski Slope Simulator – First Post (9/5/2014)

For my independent study project, I am planning on making a newer version of the ski simulator with the Unity game engine. The initial steps for this project will be to go through some Unity tutorials and get a sample project running (since I am completely new to Unity), get the physics working in a simple demo of having a ball roll down a hill, and get the original ski simulator working in Unity. From there, hopefully I will be able to make some additions to the program.

This week, I got my blog account set up, looked through the first video in the series of Unity tutorials, and organized some plans for the project.

Next week, I am hoping to go through the rest of the Unity tutorial videos found below and get the tutorial project up and running: http://unity3d.com/learn/tutorials/projects/survival-shooter/environment