2/09/2015 TEB Update

What I accomplished this week

Due to copious amounts of schoolwork and being ill, I did not accomplish as much as initially planned for this week. I did receive the final components needed from Digi-Key to finish populating the board (SMD slide switch, push buttons, and female sockets for the MCU), which I was able to do without any major issues.

Problems

When soldering the slide switch, I noticed the pad spacing in the switch footprint was slightly off. I was still able to solder the switch to the board without a problem, but the spacing should be adjusted for the next iteration of the PCB.

There are also small protrusions on the bottom of the slide switch that were not accounted for when designing the PCB. For this iteration, I just used a fine hobby knife to cut these protrusions off the switch, but holes will have to be added to the next PCB design to accommodate them.

Next Week’s Work

This next week, I would like to test the board to see if it is functioning properly and doesn’t fry any components when you power it up like the last one did. The main concern addressed with iteration 1.2 of the PCB was the H-bridge section, which is responsible for enabling voltage to be applied across the thermoelectric cooler in either direction. If it does work without issue, then the next step will be to start rewriting the code to utilize the hardware interrupts.
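For reference while testing: driving a thermoelectric cooler through an H-bridge typically comes down to two direction inputs plus a PWM enable line. The Arduino-style sketch below is only an illustration of the idea; the pin numbers and function names are hypothetical and not taken from the actual TEB board.

```cpp
// Hypothetical pin assignments -- the real PCB may route these differently.
const int IN_A = 7;  // H-bridge direction input A
const int IN_B = 8;  // H-bridge direction input B
const int EN   = 9;  // PWM-capable enable pin

void setup() {
  pinMode(IN_A, OUTPUT);
  pinMode(IN_B, OUTPUT);
  pinMode(EN, OUTPUT);
}

// Apply voltage across the TEC in one direction (e.g., cooling).
void driveForward(int duty) {  // duty: 0-255
  digitalWrite(IN_A, HIGH);
  digitalWrite(IN_B, LOW);
  analogWrite(EN, duty);
}

// Reverse the polarity across the TEC (e.g., heating).
void driveReverse(int duty) {
  digitalWrite(IN_A, LOW);
  digitalWrite(IN_B, HIGH);
  analogWrite(EN, duty);
}

void loop() {
  driveForward(180);  // ~70% duty in one direction
  delay(5000);
  driveReverse(180);  // then reverse
  delay(5000);
}
```

One design note: the two direction inputs should never be driven to the same active state at the same time, since that can short the bridge (shoot-through), which is exactly the kind of fault that fries components at power-up.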

Week 3

This week got devoured by meetings, so I don’t have a huge project update.  I had a good meeting with Meg Mitchell in which we discussed my direction and plans for my upcoming qualifier show.  She responded particularly well to this piece:

[Image: jcf003]

The intention of this piece is to use the copper taffeta applique on the pants as a capacitive sensor with enough range to make a textile proximity sensor. When the sensor is activated, the 3D-printed spines would lift via muscle wires.

We discussed some additional applications for this sensing technique, and I’ve been sketching ideas. The plan for this week, now that my supplies have arrived, is to test the capacitive sensing (since the pants are already sewn). If I can get that working, I might change up my final show garment plans to all use a similar sensing technology. This would allow me to focus on design completion and responses, rather than losing too much time this semester to testing technology.
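For context on the sensing approach: a common way to do textile capacitive sensing on an Arduino is the CapacitiveSensor library, which measures the RC charge time between a send pin and a receive pin, so wiring the copper taffeta to the receive side turns it into a proximity electrode. This is only a sketch of the general technique; the pin choices, resistor value, and threshold are assumptions, not the actual garment wiring.

```cpp
#include <CapacitiveSensor.h>

// Send pin 4 -> ~1 megohm resistor -> receive pin 2 -> copper taffeta electrode.
// (Placeholder wiring; larger resistors give longer proximity range.)
CapacitiveSensor sensor(4, 2);

const long THRESHOLD = 1000;  // tune empirically for the garment

void setup() {
  Serial.begin(9600);
}

void loop() {
  long reading = sensor.capacitiveSensor(30);  // average of 30 samples
  Serial.println(reading);  // log values to pick a sensible threshold
  if (reading > THRESHOLD) {
    // proximity detected: trigger the muscle-wire actuation here
  }
  delay(50);
}
```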

I have a meeting scheduled on Wednesday with Natalie Rudolph, a professor in the Mechanical Engineering department who works on the 3D printing team.  We’re going to discuss their work and what some possible avenues of collaboration may be.  More to come!

Reference Overview: HMD Calibration and Its Effects on Distance Judgments

Initial paper:

Kuhl, S. A., Thompson, W. B., & Creem-Regehr, S. H. (2009). HMD calibration and its effects on distance judgments. ACM Transactions on Applied Perception (TAP), 6(3), 19.

Experiments testing distance estimation subject to three potential miscalibrations in HMDs: pitch, pincushion distortion, and minification/magnification via FOV. Only the FOV manipulation was found to affect distance judgments. Calibration procedures are suggested; the gist is to match against real-world objects, popping the HMD on and off.
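For intuition, the minification/magnification manipulation can be summarized by the ratio between the display’s physical FOV and the geometric FOV used for rendering. In my notation (not necessarily the paper’s), imagery is scaled by roughly

\[ m = \frac{\tan(\theta_d / 2)}{\tan(\theta_g / 2)} \]

where θ_g is the geometric (rendered) FOV and θ_d the display FOV; rendering with θ_g > θ_d gives m < 1 (minification), and θ_g < θ_d gives magnification.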

List of references, grouped by topic and ordered (loosely) by novelty vs related papers, usefulness, and whim:

— horizon / tilt different in VR / Real?
OOI, T. L., WU, B., AND HE, Z. J. 2001. Distance determination by the angular declination below the horizon. Nature 414, 197–200.
ANDRE, J. AND ROGERS, S. 2006. Using verbal and blind-walking distance estimates to investigate the two visual systems hypothesis. Percept. Psychophys. 68, 3, 353–361.

— support for effect of horizon position / tilt
MESSING, R. AND DURGIN, F. 2005. Distance perception and the visual horizon in head-mounted displays. ACM Trans. Appl. Percept. 2, 3, 234–250.
RICHARDSON, A. R. AND WALLER, D. 2005. The effect of feedback training on distance estimation in virtual environments. Appl. Cognitive Psych. 19, 1089–1108.
GARDNER, P. L. AND MON-WILLIAMS, M. 2001. Vertical gaze angle: Absolute height-in-scene information for the programming of prehension. Exper. Brain Res. 136, 3, 379–385.

— depth in photographs (2D?)
SMITH, O. W. 1958a. Comparison of apparent depth in a photograph viewed from two distances. Perceptual and Motor Skills 8, 79–81.
SMITH, O. W. 1958b. Judgments of size and distance in photographs. Amer. J. Psych. 71, 3, 529–538.
KRAFT, R. N. AND GREEN, J. S. 1989. Distance perception as a function of photographic area of view. Percept. Psychophys. 45, 4, 459–466.

— AR calibration (vs real world objects)
MCGARRITY, E. AND TUCERYAN, M. 1999. A method for calibrating see-through head-mounted displays for AR. In Proceedings of the IEEE and ACM International Workshop on Augmented Reality. IEEE, Los Alamitos, CA, 75–84.
GILSON, S. J., FITZGIBBON, A. W., AND GLENNERSTER, A. 2008. Spatial calibration of an optical see-through head mounted display. J. Neurosci. Methods 173, 1, 140–146.
GENC, Y., TUCERYAN, M., AND NAVAB, N. 2002. Practical solutions for calibration of optical see-through devices. In Proceedings of the 1st IEEE and ACM International Symposium on Mixed and Augmented Reality (ISMAR’02). IEEE, Los Alamitos, CA.
AZUMA, R. AND BISHOP, G. 1994. Improving static and dynamic registration in an optical see-through HMD. In Proceedings of the 21st Annual Conference on Computer Graphics and Interactive Techniques (SIGGRAPH’94). ACM, New York, 197–204.

— effects of miscalibration / display properties
KUHL, S. A., CREEM-REGEHR, S. H., AND THOMPSON, W. B. 2008. Recalibration of rotational locomotion in immersive virtual environments. ACM Trans. Appl. Percept. 5, 3.
KUHL, S. A., THOMPSON, W. B., AND CREEM-REGEHR, S. H. 2006. Minification influences spatial judgments in virtual environments. In Proceedings of the Symposium on Applied Perception in Graphics and Visualization. ACM, New York, 15–19.
KUHL, S. A., THOMPSON, W. B., AND CREEM-REGEHR, S. H. 2008. HMD calibration and its effects on distance judgments. In Proceedings of the Symposium on Applied Perception in Graphics and Visualization. ACM, New York.
WILLEMSEN, P., COLTON, M. B., CREEM-REGEHR, S. H., AND THOMPSON, W. B. 2009. The effects of head-mounted display mechanical properties and field-of-view on distance judgments in virtual environments. ACM Trans. Appl. Percept. 6, 2, 8:1–8:14.
WILLEMSEN, P., GOOCH, A. A., THOMPSON, W. B., AND CREEM-REGEHR, S. H. 2008. Effects of stereo viewing conditions on distance perception in virtual environments. Presence: Teleoperat. Virtual Environ. 17, 1, 91–101.
LUMSDEN, E. A. 1983. Perception of radial distance as a function of magnification and truncation of depicted spatial layout. Percept. Psychophys. 33, 2, 177–182.

— effects of feedback (lasts for a week?)
MOHLER, B. J., CREEM-REGEHR, S. H., AND THOMPSON, W. B. 2006. The influence of feedback on egocentric distance judgments in real and virtual environments. In Proceedings of the Symposium on Applied Perception in Graphics and Visualization. ACM, New York, 9–14.

— visual quality
THOMPSON, W. B., WILLEMSEN, P., GOOCH, A. A., CREEM-REGEHR, S. H., LOOMIS, J. M., AND BEALL, A. C. 2004. Does the quality of the computer graphics matter when judging distances in visually immersive environments? Presence: Teleoperat. Virtual Environ. 13, 5, 560–571.

— distortion correction
WATSON, B. A. AND HODGES, L. F. 1995. Using texture maps to correct for optical distortion in head-mounted displays. In Proceedings of the IEEE Conference on Virtual Reality. IEEE, Los Alamitos, CA, 172–178.
BAX, M. R. 2004. Real-time lens distortion correction: 3D video graphics cards are good for more than games. Stanford Electr. Eng. Comput. Sci. Res. J.
ROBINETT, W. AND ROLLAND, J. P. 1992. A computational model for the stereoscopic optics of a head-mounted display. Presence: Teleoperat. Virtual Environ. 1, 1, 45–62.

— camera calibration (spherical distortion, maybe some vision stuff)
TSAI, R. Y. 1987. A versatile camera calibration technique for high-accuracy 3D machine vision metrology using off-the-shelf TV cameras and lenses. IEEE J. Rob. Autom. 3, 4, 323–344.
WENG, J., COHEN, P., AND HERNIOU, M. 1992. Camera calibration with distortion models and accuracy evaluation. IEEE Trans. Patt. Anal. Mach. Intell. 14, 10, 965–980.

— “distance underestimation exists”
WITMER, B. G. AND KLINE, P. B. 1998. Judging perceived and traversed distance in virtual environments. Presence: Teleoperat. Virtual Environ. 7, 2, 144–167.
KNAPP, J. 1999. The visual perception of egocentric distance in virtual environments. Ph.D. thesis, University of California at Santa Barbara.

— measures of perceived distance
SAHM, C. S., CREEM-REGEHR, S. H., THOMPSON, W. B., AND WILLEMSEN, P. 2005. Throwing versus walking as indicators of distance perception in real and virtual environments. ACM Trans. Appl. Percept. 1, 3, 35–45.

— NOT FOUND —

CAMPOS, J., FREITAS, P., TURNER, E., WONG, M., AND SUN, H.-J. 2007. The effect of optical magnification/minimization on distance estimation by stationary and walking observers. J. Vision 7, 9, 1028a.

ELLIS, S. R. AND NEMIRE, K. 1993. A subjective technique for calibration of lines of sight in closed virtual environment viewing systems. In Proceedings of the Society for Information Display. Society for Information Display, Campbell, CA.

SEDGWICK, H. A. 1983. Environment-centered representation of spatial layout: Available information from texture and perspective. In Human and Machine Vision, J. Beck, B. Hope, and A. Rosenfeld, Eds. Academic Press, San Diego, CA, 425–458.

(also of note: Sedgwick seems attached to work on distance judgements vs spatial relations / disruptions)

GRUTZMACHER, R. P., ANDRE, J. T., AND OWENS, D. A. 1997. Gaze inclination: A source of oculomotor information for distance perception. In Proceedings of the 9th International Conference on Perception and Action (Studies in Perception and Action IV). Lawrence Erlbaum Associates, Hillsdale, NJ, 229–232.

STOPER, A. E. 1999. Height and extent: Two kinds of perception. In Ecological Approaches to Cognition: Essays in Honor of Ulric Neisser, E. Winograd, R. Fivush, and W. Hirst, Eds. Erlbaum, Hillsdale, NJ.

(book)
LOOMIS, J. M. AND KNAPP, J. 2003. Visual perception of egocentric distance in real and virtual environments. In Virtual and Adaptive Environments, L. J. Hettinger and M. W. Haas, Eds. Erlbaum, Mahwah, NJ, 21–46.

(book)
ROGERS, S. 1995. Perceiving pictorial space. In Perception of Space and Motion, W. Epstein and S. Rogers, Eds. Academic Press, San Diego, CA, 119–163.

(requested)
RINALDUCCI, E. J., MAPES, D., CINQ-MARS, S. G., AND HIGGINS, K. E. 1996. Determining the field of view in HMDs: A psychophysical method. Presence: Teleoperat. Virtual Environ. 5, 3, 353–356.

(misc find, not in refs)
Hendrix, C., & Barfield, W. (1994). Perceptual biases in spatial judgements as a function of eyepoint elevation angle and geometric field of view (No. 941441). SAE Technical Paper.

(misc find, not in refs)
Blackwell Handbook of Sensation and Perception
http://onlinelibrary.wiley.com.ezproxy.library.wisc.edu/book/10.1002/9780470753477

Kent State Fashion/Tech Hackathon

This past weekend I drove to Kent State in order to attend the TechStyle Symposium and the Fashion/Tech Hackathon. 

The symposium, held on Friday, was a gathering of apparel professors and graduate students from schools around the world, including Kent State, Iowa State, Loughborough University and others.  The talks in general revolved around various applications of technology in the apparel field.  They touched on topics such as 3D garment simulation, laser cutting, digital fabric printing and building technology into garments in order to assist disabled children.  In addition to the talks, there was a brief poster session featuring presentations from Iowa State graduate students.  Overall, the symposium was interesting and an excellent networking opportunity.

The Hackathon was a much different event, although equally interesting. Over 150 students (undergrads and grads) from across the country were brought together and tasked with creating some sort of wearable technology prototype in 36 hours. Assorted supplies were provided by the organizers (Arduinos, LEDs, Intel Edisons, Myo armbands, Oculus Rifts, etc.) for the hackers to use. We also had access to the Kent State TextileLab facilities, which included a 3D body scanner, 3D printers, a laser cutter and digital fabric printers. Teams could either be formed beforehand or at the event.

Although I had been told in advance that graduate students were welcome, there seemed to be very few actually attending the event. That made trying to find a team a little awkward. In the end, I decided to just work by myself. Without any additional tech help, that meant scaling back some of my experimentation and choosing a project I knew I could complete within the allotted time.

The final outcome is what I called the LightPrint Dress (a terrible name, I know; in my defense I had only had 3 hours of sleep).

I modeled the neckpiece and printed it on a MakerBot Replicator 2, then embedded it with UV LEDs harvested from several small UV flashlights. I designed the fabric and had it printed at the Kent State facilities. I then hand-stencilled UV-reactive liquid (aka Tide) onto sections of the pattern before draping the dress. The intended outcome was that the UV lights would activate the reactive portions of the pattern, thereby changing the appearance of the textile in an interactive way. Unfortunately, due to time and material limitations, the final effect was not what I had hoped.

The project as a whole, however, was well-received by the judges.  I was awarded the prize for “Most Technically Challenging Hack” by one of the event sponsors.  The judges seemed most impressed by the fact that I had completed all aspects of the project by myself, thus showing a broad range of skills.  The prize was a Moto360 Smartwatch, which I am still trying to figure out how to use, lol.

While the project was not hugely challenging by my personal criteria, it is a good proof of concept that I would like to pursue further. A future iteration I would like to explore is one where all of the electronics are fully integrated and encased in the neckpiece, with a recharging port and a wireless connection to some sort of app to enable user programming. Rather than using UV LEDs, I would like to install high-powered RGB LEDs and use white fabric for the actual garment. Theoretically, this would allow me to create user-controlled, color-changing garments.

Overall, the entire event was an excellent experience.  I would definitely participate in another fashion hack in the future.

Phenomenal Regression: First Look

A participant views a circle placed on a table in front of them, and is asked to describe what they see.  Their answer lies somewhere between what geometry tells us the retinal image should be (or, what we might render in a virtual world), and the “real” version of the circle, undistorted by perspective.  Back in the ’30s, Thouless observed this, and dubbed it “phenomenal regression” — that the observed, “phenomenal” shape is not the expected retinal image, but rather “regresses” to the “real” shape.

[Image: phenomenal regression example]

From Elner & Wright, 2014.
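Thouless quantified the effect with a regression index; as it is usually written in the later literature, with P the matched (“phenomenal”) size, S the perspective-predicted (retinal) size, and R the real size,

\[ \text{Thouless ratio} = \frac{\log P - \log S}{\log R - \log S} \]

so a value of 0 means a pure perspective match and 1 means complete regression to the real object.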

This makes some sense with shapes (and orientations) simple enough to describe perspective transformation as compression on one axis; that is to say, when the “real” form is unambiguously just a circle, because other orientations are significantly less interesting.  Or perhaps the real shape is that aligned with the plane the object rests on — a mental estimation of an overhead view of the table?

Thouless claims it’s not simply a familiar form, though that experiment bears another read to convince me.  There’s also a bit on properties like brightness/color; Thouless seems to imply shape is not the only property for which we exhibit this regression, and that seems to further confuse how one constructs the “real” form.

Elner and Wright have recently (2014) explored using the concept as a measure of “spatial quality” in virtual environments; they introduce regression as “an involuntary reaction that cannot be defeated even when pointed out”, which could make for a compelling measure.  Their experiment is inconclusive (virtual cues possibly influenced by a physical tripod), and I’ll need to become more familiar with the lit on size constancy to understand why they claim so strongly that it’s not what they (nor Thouless) are doing.  But, they’ve a thorough paper, particularly related works and analysis; I suspect they do know what they’re doing, and I should probably revisit this sometime to better understand the implications.

 


  1. Elner, K. W., & Wright, H. (2014). Phenomenal regression to the real object in physical and virtual worlds. Virtual Reality, 1-11.

  2. Thouless, R. H. (1931). Phenomenal regression to the real object. I. British Journal of Psychology. General Section, 21(4), 339-359.

  3. Thouless, R. H. (1931). Phenomenal regression to the ‘real’ object. II. British Journal of Psychology. General Section, 22(1), 1-30.

DSCVR and Unity

Hello everyone,

My name is Ted. I am a senior in the Applied Math, Engineering and Physics (AMEP) program with a penchant for architectural design and computer graphics. This semester I’ll be working with Prof. Ponto and the amazing piece of hardware at SoHE known as the DSCVR (Design Studies Commodity Virtual Reality). My goal is to cover the basics of the Unity game engine, get acquainted with C#, and expand my 3DS MAX modeling and rendering knowledge in order to develop an application that uses the DSCVR’s virtual reality features to visualize building designs in real time.

This Week

On Tuesday last week, I got to tour the DSCVR and experienced a hands-on demo of the system. The coolness factor is definitely overwhelming. The amazing thing about it is how accessible it would be to deploy such a system in a variety of settings. Any game developed in Unity can easily be made to run and take advantage of the DSCVR’s features by simply running a couple of script assets on top of your game files.

The following day I downloaded the Unity software to my home computer. Unfortunately, its grey interface can’t be changed to black, which makes text very hard to read unless you change its size. Though it is possible to model within Unity, I am choosing to use my preferred 3D package to do the modeling and simply import the geometry into Unity. This week I spent a good ten hours watching some introductory tutorials on Unity and others on modeling game sets in 3DS MAX.

[Image: Unity UI]

Next Week

I expect to continue watching tutorials and begin making a simple game which will consist of a single room and a playable character.

See you next week!

Ted

2/1/2015 TEB Update

What I accomplished this week

  • Finished moving into the new office
  • Ordered the rest of the equipment and tools needed to complete my research
  • Populated the PCB with all the surface-mount components. The process was as follows:
    • Apply solder paste to component pads
    • Using tweezers, place components onto pads
    • Preheat oven to 400 degrees Fahrenheit
    • Place PCB in oven and wait a few minutes until you see the components ‘pop’ into place
    • Remove PCB and inspect joints. Use solder wick to remove any excess solder and/or solder bridges
    • Clean PCB of remaining flux using toothbrush and isopropyl alcohol

Problems

  • Accidentally snapped a pin off the slide switch, which rendered it useless, so I had to purchase more

Next week’s work

  • Solder the slide switch, push buttons and headers to PCB
  • Test PCB for functionality (Any fried components or smoke coming from PCB?)
  • Conduct electrical measurements using multimeter

Below is an image of the board for prototype 1.2

[Image: IMG_20150131_132557529]

TEB Research Introductory Post

Hello world,

I just thought I’d introduce myself and tell you a little bit about who I am and my research. My name is Jason Sylvestre and I am currently a freshman studying Electrical Engineering here at the University of Wisconsin-Madison. I will be working under Professor Kevin Ponto’s supervision on a project I started last semester for his Wearable Technologies class. What I am trying to build is a thermoelectric bracelet that can be used for personal body temperature regulation. TEB (short for ‘thermoelectric bracelet’) can be thought of as a personal air conditioner that can be used to improve thermal comfort. Research has shown that when you apply a temperature change to a local part of the body, your brain perceives it as a change in your entire body temperature. It is this psychological effect that I am trying to leverage with my device.

Currently I am in the process of building the second prototype, but once I have a fully functional device and have optimized pulse duration and intensity, I will perform a user study to see if this device actually makes a difference in personal comfort and plan to publish the results.

What I accomplished this week

I just moved all my equipment into my cubicle in the basement at the Wisconsin Institute for Discovery and began to populate the rev 1.2 PCB with the SMT components. I also gathered a list of supplies needed to complete this second iteration.

Next week’s work

Depending on how quickly Kevin can get me an oven for the soldering procedure, I am going to try to populate the rest of the board. I will also be modifying the code to make use of the hardware interrupts that the buttons are connected to. By utilizing hardware interrupts, there should be hardly any delay in the program; delay was a minor issue with revision 1.1.
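For the curious, the rewrite mostly amounts to registering an interrupt service routine (ISR) for each button instead of polling them inside loop(). Below is a minimal Arduino-style sketch of the pattern; the pin numbers and variable names are placeholders, not the actual PCB nets.

```cpp
// Placeholder pins; on the real board the buttons are wired to
// interrupt-capable pins.
const int BTN_UP   = 2;
const int BTN_DOWN = 3;

volatile int setting = 0;  // shared with ISRs, so it must be volatile

void onUp()   { setting++; }
void onDown() { setting--; }

void setup() {
  pinMode(BTN_UP, INPUT_PULLUP);
  pinMode(BTN_DOWN, INPUT_PULLUP);
  // Fire on the falling edge (a press pulls the pin low).
  attachInterrupt(digitalPinToInterrupt(BTN_UP), onUp, FALLING);
  attachInterrupt(digitalPinToInterrupt(BTN_DOWN), onDown, FALLING);
}

void loop() {
  // The main loop is free to block or sleep; presses are caught
  // immediately by the ISRs instead of waiting on a polling cycle.
}
```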

I look forward to a great semester of research. Thanks

Jason Sylvestre

Below is an image of prototype 1.1

[Image: 20141210_181231]

End of semester recap

Accomplishments

This semester, I’ve explored issues relevant to stereoscopic rendering in general, and the Oculus Rift in particular.  I’ve also explored the current state of software packages that offer rendering to the Rift.  We’re on the cusp of having a viable test platform for our calibration experiments, and I have a better understanding of the problem we’re trying to solve.

There was also some investigation into what the Oculus SDK does with its calibrated values, and if we can leverage them for our investigations.  The answer is mostly no, though we may need to force their FOV to some fixed value before we manipulate ours.

Challenges

There are a lot of options for rendering to the Rift, and they bore exploring.

A fair chunk of time was spent repurposing code inherited from other lab projects — becoming familiar with their structure, and paring them down to be a bit more nimble and debuggable.  Most of “nimble” here is file size; some of our projects have huge data sets or library collections that weren’t immediately relevant to the current effort (and didn’t fit in the storage I had available); part is restructuring them to not use the same files as other actively developed projects, so my changes don’t compete with other lab members’.  This is a normal part of code reuse, and there’s nothing about this code that made it especially difficult — it just took time to decide what everything did, and what parts I needed.

Engines like Unity and Unreal seemed promising, but weren’t quite ready.

The Oculus SDK is in a phase of rapid development.  New versions usually provide enough improvement that we want to use them, and enough changes that reintegration takes some effort.  The major shift was DK1 to DK2, but the minor shifts still cause problems (the newest version may be the source of some current code woes, but may solve issues with OpenGL direct rendering, as well as jitter in Unity; both of these could make development much faster).

Also, we’d like to use as much of the Oculus-supplied rendering pipeline as possible (for easier reproducibility, and thereby greater validity), but it’s been a pain to wedge more of our changes into it, or more of it into our in-lab engine — particularly as it keeps changing.  We’re currently at a relatively happy medium.

There were also some problems finding someplace for my code to live; the code bases I’m working from are big, even after paring them down, and have moderate hardware demands; they proved too much for my poor laptop and the initial spare lab workstation.  However, the new computer in my office has more than enough hard drive space and GPU muscle for my current needs.

There’s also a shift away from “I read a bunch of interesting papers” posts as the semester goes on.  This is because much of my reading time was taken by other classes, in spaces not immediately relevant to this work.  I expect that next semester a lighter class load will leave more time for reading in this space.

Next Steps

There’s some polish to be done on the code — adding experimenter controls and cleaning up participant stimulus.  Then we can pilot with different pointcloud environments, and investigate different calibration procedures.  Then, proper experiments.

 

Ski Simulator Final Post Questions

-What are your overall feelings on your project? Are you pleased, disappointed, etc.?
Overall, I was pleased with the outcome of the project. After working on the two ski simulator demos this semester, I feel more comfortable developing programs in Unity, creating models with 3ds Max, and designing textures with Paint.NET and the UV editor in 3ds Max. And I got a little taste of what it’s like to develop video games, which was my main goal for the semester.

-How well did your project meet your original project description and goals?
I feel that my project definitely met all my original descriptions and goals. When I first started the project, my goal was to get something similar to the original ski simulator working in Unity. After working with Unity for a little while, development became (surprisingly) very easy and quick. With how fast development was in Unity, it was easy to go beyond my original goals for the project and exceed my expectations.

-What were the largest hurdles you encountered?  How did you overcome these challenges?
I would say the largest hurdle I encountered was simply getting started with Unity. When I started the project, I had no experience with Unity whatsoever. However, after sitting down and watching some tutorials on the Unity website, I was able to figure out how to make a simple program. Beyond that, I found Unity to be very intuitive and easy to learn for all the other features of the program. Any other issues were usually solved with a little bit of thinking and possibly a few Google searches.

-If you had more time, what would you do next?
If I had more time, I would look at designing and implementing animated models with moving limbs in the program as that sounds challenging, but useful. Also, I would attempt to make the project more suited for the web-based and android environments. I might look into these things on my own as I think they might be nice to know and good things to add on to the program for fun.

Also, here is a screenshot from the editor of the entire hill (minus some trees due to the rendering distance of the impostors):

[Screenshot: 001]

And if you want to download the executables, here are the links:

Windows: https://blogs.discovery.wisc.edu/public/apps/SkiSimulator/SkiSimulatorWindows.zip

Mac: https://blogs.discovery.wisc.edu/public/apps/SkiSimulator/SkiSimulatorMac.zip

Linux: https://blogs.discovery.wisc.edu/public/apps/SkiSimulator/SkiSimulatorLinux.zip