In Conclusion

1) Describe your final project. What have you accomplished over the course of the semester?

The goal of my final project was to create an IPD (inter-pupillary distance) calculator, i.e. a program that detects the two pupils of a subject / research participant and calculates the distance between them in millimeters. This is done through circle detection via Hough Circles, plus some camera calibration (pixels to mm) through mouse input. I have accomplished this goal for the most part, with some minor consistency issues.

Other accomplishments were learning how to use the OpenCV library (and coding libraries in general) and the Python language. I came into this project with negligible knowledge of Python and its syntax/rules, and ending the semester I feel I have a much better grasp of the language. Finally, this was the first project that forced me to use and learn the command line, due to the nature of Python.


2) Describe your overall feelings about the project. Are you happy, content, frustrated, etc with the results?

I am content with the results, though frustrated with some inconsistency. Some of the methods I used and the small algorithms I wrote work for some images but don’t work, or work somewhat inaccurately, for others. Overall I am glad to have taken this opportunity to participate in research, and I feel that I have learned a lot, not only about Python/OpenCV but about the Virtual Reality field in general.
One particularly satisfying thing was overcoming my fear of not finishing the project. When I started, everything seemed intimidating because I wasn’t familiar with many of the tools I was using, so it was an uphill struggle.

3) What were some of the largest hurdles you encountered over the semester, and how did you approach these challenges?

As mentioned in answer 2 and in many of my blog posts, the biggest hurdle I encountered was inconsistent results. A temporary, although admittedly not great, solution has been to adjust the parameters of my methods on a per-image basis as needed.

Other big hurdles were finding and understanding documentation and applying it to my project. Most OpenCV resources available were for C++, and the ones I found for Python were often for Python 2 rather than 3. Similarly, I often ran into issues adjusting my code because a lot of the online help I found was written for Python 2.

One scary hurdle involved having to urgently factory-reset my computer a few weeks ago after an onslaught of viruses, which forced me to reinstall OpenCV, NumPy, PyCharm, and some other tools. Fortunately, I had all my code backed up, so none of that was lost.

4) If you had more time, what would you do next on your project?
I would keep working on making it more consistent and try out other methods to detect circles. Since I had issues with blob detection and chessboard pattern detection, I would spend more time figuring out why these wouldn’t work for me and how to get them to work. I would also clean up the code to make it more efficient, make the program more user-friendly and general, and potentially add a webcam feature. If I wanted to go above and beyond, I would try to create a GUI for the project.

Final Week

Last week was spent trying different camera calibration methods: a few days on the checkerboard method, and a few more on sticky notes and other color detection. Many problems cropped up with both of these methods. I couldn’t get the program to reliably recognize the checkerboard, and the way my program is set up made it very difficult to get the blob detection up and running.

This week I tried the last-resort clicking method, which turned out to be the least problematic of all the methods. Some annoying issues I ran into were:
1) finding reliable Python tutorials and documentation, as most resources were only available for C++
2) once a Python resource was found, it was for version 2.7 rather than 3.6, so a lot of time was spent fixing indentation and syntax issues

Once all that was figured out, I wrote a small algorithm that takes in the first two mouse clicks, ignores any further clicks, and finds the distance between the two points. I have also finished the pixel → mm conversion math; all that needs to be done now is testing with a reliable piece of paper / sticky note etc. taped to the forehead. I will probably do that on Sunday and edit this post with results, along with adding the option for command line arguments for more generality.
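
For reference, here is a minimal sketch of how the clicking and conversion pieces could fit together (not my exact code; the image path, reference width, and pixel IPD below are placeholder values):

```python
import math
import cv2

# Placeholder values for illustration only.
IMAGE_PATH = "face.jpg"           # hypothetical input photo
REFERENCE_WIDTH_MM = 76.0         # e.g. width of the sticky note taped to the forehead
clicks = []                       # stores the first two mouse clicks

def on_mouse(event, x, y, flags, param):
    # Record only the first two left-clicks, then ignore further input.
    if event == cv2.EVENT_LBUTTONDOWN and len(clicks) < 2:
        clicks.append((x, y))

img = cv2.imread(IMAGE_PATH)
cv2.namedWindow("calibrate")
cv2.setMouseCallback("calibrate", on_mouse)

while len(clicks) < 2:
    cv2.imshow("calibrate", img)
    if cv2.waitKey(20) & 0xFF == 27:   # Esc to abort
        break
cv2.destroyAllWindows()

if len(clicks) == 2:
    (x1, y1), (x2, y2) = clicks
    ref_pixels = math.hypot(x2 - x1, y2 - y1)       # clicked reference width in pixels
    mm_per_pixel = REFERENCE_WIDTH_MM / ref_pixels  # pixels -> mm conversion factor
    ipd_pixels = 266.27                             # placeholder pixel IPD from the Hough circle step
    print("IPD (mm):", ipd_pixels * mm_per_pixel)
```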

The only remaining issue is consistency: the Hough Circles pixel-distance detection seems to work on only about 50% of the images, and I have yet to find a solution for the short-scalar overflow. It shouldn’t be occurring based on the method I’m using, so I’m not sure what’s wrong.
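
My current guess, and it is only a guess, is that the overflow comes from the np.uint16 cast the Hough Circles tutorial applies to the circle coordinates: subtracting and squaring unsigned 16-bit values can wrap around. If that is the cause, promoting the values to plain Python ints before the distance math should avoid it:

```python
import numpy as np

# Coordinates as they look after the tutorial's np.uint16(np.around(circles)) step
# (made-up example values).
circles = np.uint16([[[120, 305, 30], [386, 298, 31]]])
(x1, y1, _), (x2, y2, _) = circles[0][:2]

# Risky: uint16 arithmetic wraps around instead of going negative or large.
# dist_sq = (x2 - x1) ** 2 + (y2 - y1) ** 2

# Safer: convert to plain ints first, then compute the distance.
dx, dy = int(x2) - int(x1), int(y2) - int(y1)
dist_pixels = (dx * dx + dy * dy) ** 0.5
print(dist_pixels)
```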

Week 11

It took some time to get it to work, but with trial and error and cutting out multiple combinations of checkerboards, I might have gotten OpenCV to detect the board. The problem was that the pattern has to be larger than 2×2 (OpenCV detects only the inner corners and likes having a wide border, so in essence a 3×3 cutout is actually detected as 2×2, however that works).

3×3 was the largest that I could get to fit on my forehead, so I got a printout of a chessboard with smaller squares and cut out a 5×5 piece to test it as 4×4. This is the outcome:
[screenshot: chessboard corners detected on the 4×4 cutout]

I wrote an if statement that draws a pattern around the detected corners of the chessboard, and the screenshot appears to faintly show this; I couldn’t get the pattern colored to be more visible. Here’s the original image:
[screenshot: the original input photo]

Or I might just be seeing things; the closer I look at the gray screenshot, the more I seem to see a pattern, but not from further away. It may just be an optical illusion, so I’ll check ret’s boolean value to know for sure.
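
For my own notes, the check I have in mind is roughly this (a sketch; the filename is a placeholder, and the (4, 4) pattern size is the inner-corner count of the 5×5 cutout):

```python
import cv2

img = cv2.imread("board_on_forehead.jpg")        # placeholder filename
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

pattern_size = (4, 4)                            # inner corners of the 5x5 cutout
ret, corners = cv2.findChessboardCorners(gray, pattern_size)

print("pattern found:", ret)                     # the boolean I need to look at
if ret:
    # Drawing on the colour image (not the gray one) should make the pattern visible.
    cv2.drawChessboardCorners(img, pattern_size, corners, ret)
    cv2.imshow("detected corners", img)
    cv2.waitKey(0)
    cv2.destroyAllWindows()
```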


However, another problem that popped up with this specific input photo is overflow, which the command prompt shows in the screenshot. This leads to an inaccurate result for the pixel distance, even though it works for every other image. This is another hurdle that I spent time trying to fix but wasn’t able to.

Next, I’ll try to finish the process of using the detected chessboard points to get a distance in pixels (minus the white border, perhaps) for a conversion value, then take another photo with the chessboard to get an accurate pixel distance, and complete the calibration from there.

Week 10

This week, a chessboard pattern was printed out and tested with multiple pictures of my face and my roommate Julian’s face. OpenCV’s chessboard detector method wouldn’t quite work on us; the problem seems to be the size of the squares. For the pattern to fit on the forehead, it had to be cut down to 3×2. However, OpenCV’s method detects the inner corners only, which makes it 2×1, and it requires at least 2×2. I will spend more time trying to work with that, and if it doesn’t work out I’ll try other calibration methods.

[screenshot: a test image where the eye detection was off]

In this screenshot, the algorithm didn’t accurately detect my eyes, but that was a rarity, as it works with most other images:
[screenshot: eye detection on Julian’s photo]

[screenshot: eye detection working on another test photo]

Applying the distance method to these photos, the result in pixels actually seems pretty accurate. The distance between the eyes turns out to be around half of the image’s width in pixels, more or less.

After some trial and error with finding the distance in pixels, a (potentially tentative) solution has been found. Actually, two different solutions were found, which yielded different results on the picture I was testing:
[screenshot: the distance calculation output]

In this screenshot I used a method widely recommended by Stack Overflow and other sources for finding the distance between two points, which can be seen in the code. To find the centers of the two circles in the first place, a small and probably inefficient algorithm was used.
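
In rough form, that step looks something like this (a sketch, not my exact code; the filename and the two centre coordinates are made-up examples):

```python
import math
import cv2

img = cv2.imread("test_face.jpg")              # placeholder image
height, width = img.shape[:2]                  # image width, for the sanity check below

# Centres of the two detected pupil circles (made-up example coordinates).
left_pupil = (210, 305)
right_pupil = (476, 298)

# Standard Euclidean distance between the two centres.
ipd_pixels = math.hypot(right_pupil[0] - left_pupil[0],
                        right_pupil[1] - left_pupil[1])

print("IPD in pixels:", round(ipd_pixels, 2))
print("image width in pixels:", width)
```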

The current result from the image that I’m going to go with for now is 266.27 pixels.

The next task is to figure out the width in pixels of the image itself to determine whether this is a plausible solution, as the other solution I was getting was 92485 pixels.

Once that is figured out, I should very soon be able to begin with the checkerboard detection and conversion values to find the distance in mm. I will try to accomplish this with a picture of my own face.

Week 8

This week’s post won’t involve many useful screenshots, as most of my time was spent
1) trying to get things to work
2) looking into how to calculate the IPD in pixels

This post comes a bit late, as most of last Monday through Thursday was spent a bit scattered, with a lack of direction and uncertainty about how the checkerboard pattern was supposed to be used. I tried reading through many, many tabs of documentation and tutorials about measuring distances between objects in OpenCV, but so far I haven’t made much progress:

[screenshot: the many open documentation tabs]

My meeting with Kevin Ponto and Alex Peer on Friday was pretty helpful for me to get back on track and have a better idea of what to do.

Since the meeting, a lot of my time has been consumed trying to get the Hough Circles method to work on a picture of my own face (so far I have been failing a lot and haven’t been able to figure out why; inputting an image of my face doesn’t even output the image when I run the program). So instead I took a picture of my friend’s eyes, which have much more defined irises and pupils, and the method worked on her:

[screenshot: Hough Circles detecting my friend’s pupils]

As work is ongoing, I will keep trying to get a picture of my face working, in preparation for the final step of the camera calibration with the checkerboard. In the meantime, the immediate goal is to figure out how to get the IPD in pixels. I have a good idea of the formula and math to be used, although I am still digging through documentation and tutorials to find how to get the coordinates of the centers of the circles in the image.
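
For what it’s worth, HoughCircles itself returns each detection as an (x, y, radius) triple, so the centre coordinates should come straight out of its result. A rough sketch (the filename and parameter values are placeholders I would still need to tune):

```python
import cv2

img = cv2.imread("eyes.jpg")                     # placeholder image
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
gray = cv2.medianBlur(gray, 5)                   # smooth a little before detection

# Placeholder parameter values; they need adjusting per image for now.
circles = cv2.HoughCircles(gray, cv2.HOUGH_GRADIENT, dp=1, minDist=100,
                           param1=50, param2=30, minRadius=5, maxRadius=40)

if circles is not None:
    # Each detected circle is (x_centre, y_centre, radius).
    for x, y, r in circles[0]:
        print("centre:", (float(x), float(y)), "radius:", float(r))
```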

Weeks 6&7

These two weeks, a lot of time was spent debugging and working on both photos and the webcam (although photos will be the focus from now on).

A lot of time was spent getting face and eye detection working via Haar Cascades. This was accomplished relatively easily for photos with the help of the openly available tutorial, though there was a little hiccup where the program wouldn’t run at all. There was some trouble with the Haar Cascade parameters and learning how to integrate the xml files, as the tutorial was based on a different version of OpenCV than the one I am currently using, but with some digging I was able to find a solution:

[screenshot: Haar Cascade face and eye detection on a photo]

After this I tried applying the same method to the webcam by changing some lines of code with the help of the OpenCV webcam tutorial, and spent a few hours trying to get that to work, but wasn’t able to. I followed the steps at https://realpython.com/blog/python/face-detection-in-python-using-a-webcam/ with some changes (I removed the part where the code uses system arguments because I wasn’t using any), but I still struggled to get it working. Then I came across another helpful resource, https://pythonprogramming.net/haar-cascade-face-eye-detection-python-opencv-tutorial/, and was able to get the face and eye detection working with the webcam after adding the full paths to the xml files for the cascade parameters (apparently an older version of OpenCV didn’t need the full path).
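
Roughly, the webcam version ends up looking like the standard tutorial loop below (a sketch, not my exact code; the xml paths are placeholders standing in for the full paths I had to supply):

```python
import cv2

# Full paths were needed on my install; these are placeholder paths.
face_cascade = cv2.CascadeClassifier("C:/path/to/haarcascade_frontalface_default.xml")
eye_cascade = cv2.CascadeClassifier("C:/path/to/haarcascade_eye.xml")

cap = cv2.VideoCapture(0)                      # default webcam

while True:
    ret, frame = cap.read()
    if not ret:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = face_cascade.detectMultiScale(gray, scaleFactor=1.3, minNeighbors=5)
    for (x, y, w, h) in faces:
        cv2.rectangle(frame, (x, y), (x + w, y + h), (255, 0, 0), 2)
        # Search for eyes only inside the detected face region.
        roi_gray = gray[y:y + h, x:x + w]
        for (ex, ey, ew, eh) in eye_cascade.detectMultiScale(roi_gray):
            cv2.rectangle(frame, (x + ex, y + ey), (x + ex + ew, y + ey + eh), (0, 255, 0), 2)
    cv2.imshow("webcam", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break

cap.release()
cv2.destroyAllWindows()
```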

I still felt like I wasn’t making much progress and my time and effort were scattered in multiple places, so I decided to focus on pupil detection for pictures. I was struggling with blob detection there, so I looked into other methods, and Hough Circles worked out pretty well for me through the tutorial at http://docs.opencv.org/3.1.0/da/d53/tutorial_py_houghcircles.html and, of course, by changing the parameters around to accommodate the objects we care about. (Is Hough Circles a viable alternative?)

Using Hough Circles on the Ronaldo photo wasn’t getting me anywhere, but it soon became clear that the photo wasn’t zoomed in enough and the pupils weren’t visible at all, so now I’m trying the method on other eye-focused pictures, which helped a lot:

[screenshot: Hough Circles working on an eye-focused picture]

If I use Haar Cascades, I am already able to detect the eyes, which is the area we care about, but the challenge there is that I’m not familiar with how to work with only the detected space (there is a square drawn around both eyes, and I’m not sure how I would go about using only that region and working within it to get to the pupils).
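
One idea to try (just a sketch of the concept, untested): since an OpenCV image is a NumPy array, the rectangle the cascade returns can be sliced out and Hough Circles run on that sub-image only, with the resulting coordinates shifted back afterwards. The cascade path and Hough parameters below are placeholders:

```python
import cv2

img = cv2.imread("face.jpg")                   # placeholder image
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

eye_cascade = cv2.CascadeClassifier("haarcascade_eye.xml")   # placeholder path
for (x, y, w, h) in eye_cascade.detectMultiScale(gray, 1.3, 5):
    eye_roi = gray[y:y + h, x:x + w]           # only the detected eye region
    # Placeholder Hough parameters; they would still need tuning.
    circles = cv2.HoughCircles(eye_roi, cv2.HOUGH_GRADIENT, 1, w,
                               param1=50, param2=30, minRadius=3, maxRadius=h // 2)
    if circles is not None:
        # Circle coordinates are relative to the ROI, so shift them back.
        cx, cy, r = circles[0][0]
        print("pupil centre in full image:", (x + float(cx), y + float(cy)))
```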

The goal for next week is to integrate the checkerboard pattern and start on the distance tracking, though right now I’m pretty clueless as to how I’m going to accomplish that.

Week 5

This week turned out to be much slower for me than expected due to some projects being due and a round of midterms.

Overall, the goal was to pick out some pictures of human beings, provide them to the OpenCV program that I have so far, and be able to accurately detect the eyes (hopefully, more specifically, the pupils) in the photo.

That did not go so well despite hours of tinkering with the parameters:
[screenshot: blob detection failing on the test photo]

I spent some time tinkering with all the SimpleBlobDetector parameters to try to get a combination that would detect only his eyes (or at least detect his eyes along with potential other things). Unfortunately, nothing worked. My speculation is that this may be due to the picture quality/shading.
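
For context, the kind of tinkering involved looks like this (a sketch with placeholder filename and threshold values; none of the combinations I tried on this photo actually worked):

```python
import cv2

img = cv2.imread("test_photo.jpg", cv2.IMREAD_GRAYSCALE)   # placeholder image

params = cv2.SimpleBlobDetector_Params()
# All of these values are placeholders of the sort I was experimenting with.
params.filterByArea = True
params.minArea = 30
params.maxArea = 500
params.filterByCircularity = True
params.minCircularity = 0.7
params.filterByConvexity = False
params.filterByInertia = False

detector = cv2.SimpleBlobDetector_create(params)
keypoints = detector.detect(img)

out = cv2.drawKeypoints(img, keypoints, None, (0, 0, 255),
                        cv2.DRAW_MATCHES_FLAGS_DRAW_RICH_KEYPOINTS)
cv2.imshow("blobs", out)
cv2.waitKey(0)
cv2.destroyAllWindows()
```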

For next week I shall try to use this same process with other pictures to get a feel for the parameters and how they work. Once I get used to that, if I’m able to make quick progress, the next big step would be to develop an algorithm / some sort of code that does blob detection on the live webcam.

Ultimately, once I’m able to get to that point, the plan is probably to use OpenCV’s checkerboard features to detect some kind of checkerboard pattern applied to the forehead to calculate the distance between the two detected pupils.

Week 4

This week, I worked on a few different options, as I was indecisive. I spent half my time trying to decide whether I should make use of some open source face detection code that was available: https://realpython.com/blog/python/face-recognition-with-python/
or whether to focus on blob detection instead. With the former option, I thought I could detect the face to narrow down our area of focus and ignore the rest of the image, since all we really care about is the eyes.

However, I had trouble understanding how to use the code and wasn’t able to make use of it yet, so I may revisit that later. Instead, I turned to blob detection; I found a nice resource online that explains how to set parameters for OpenCV’s blob detector: http://www.learnopencv.com/blob-detection-using-opencv-python-c/

I downloaded a new image of Cristiano Ronaldo and saved it to my project folder to try to use it as a starting point for pupil detection and distance-to-pupil calculation before I generalize the program. However, I kept encountering this weird error:

[screenshot: the error message]

So naturally, I turned to the image of the cat that I already had, which was opening just fine before I added any of the blob detection code. Unsurprisingly, I was able to make some progress with that and got it to detect some blobs (but also to incorrectly detect fake blobs). I had to mess around with the params a bit for it to work:

[screenshot: blob detection on the cat image after adjusting parameters]

I later discovered that playing around with the area parameter was helpful in getting the program to detect only real blobs. I have yet to figure out how to get it to detect only the pupils (as opposed to the entire eyeball) and, most importantly, how to get it working with other images, especially of humans. That will be my goal for next week, along with making the blob detection more accurate.

Week 3

This week, I installed PyCharm and attempted to install all the necessary packages to it (numpy and cv2). I tried to learn how to run a file through the interpreter and how to organize my files in one location.

Currently I’m having trouble setting up my project in PyCharm (it won’t detect the necessary modules, and I’ve been trying to figure out how to fix that):

[screenshot: PyCharm failing to find the modules]

EDIT: After spending some hours trying to figure out the issue, I found out that I had Python 2.7 installed from a while ago, and it was getting mixed up with the 3.6 I had installed recently, which may have been causing PyCharm not to recognize the packages. I had the wrong interpreter selected in the IDE settings, so selecting the correct one fixed the issue!

[screenshot: PyCharm recognizing the packages after switching interpreters]

In addition to that, I tested out cv2’s live video/webcam functionality in PyCharm and started working on face/eye/blob detection. I used the code from http://docs.opencv.org/3.0-beta/doc/py_tutorials/py_gui/py_video_display/py_video_display.html and running the program correctly opened the laptop’s webcam (how cool), but I couldn’t get the window to close properly (it would keep reopening).
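
For reference, the tutorial loop is roughly the following; my guess (unverified) is that the reopening had to do with how my quit key and cleanup calls were arranged, so I’m noting the version that releases the capture and destroys the window at the end:

```python
import cv2

cap = cv2.VideoCapture(0)        # laptop webcam

while cap.isOpened():
    ret, frame = cap.read()
    if not ret:
        break
    cv2.imshow("frame", cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY))
    # Quit cleanly on 'q' instead of letting the loop keep redrawing the window.
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break

cap.release()
cv2.destroyAllWindows()
```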


Now that I actually have some foundation and an idea of what’s going on, the plan for next week is what the initial plan was for this week: to actually start playing with blob and face detection from pictures and the live webcam.