{"id":1232,"date":"2016-05-13T00:28:06","date_gmt":"2016-05-13T00:28:06","guid":{"rendered":"http:\/\/blogs.discovery.wisc.edu\/projects\/?p=1232"},"modified":"2016-05-14T19:30:53","modified_gmt":"2016-05-14T19:30:53","slug":"3d-capture-semester-wrap-up","status":"publish","type":"post","link":"https:\/\/blogs.discovery.wisc.edu\/projects\/2016\/05\/13\/3d-capture-semester-wrap-up\/","title":{"rendered":"3D Capture &#8211; Semester Wrap Up"},"content":{"rendered":"<p>Our project on 3D motion capture was certainly a learning experience. \u00a0Although we have yet to make models of phenomena over time, we have certainly improved on our skills in data collection (photographic techniques), post-processing, and model construction and refinement in Agisoft Photoscan.<\/p>\n<p><strong>Data Collection<\/strong><\/p>\n<p>Initially, we took photos of objects indoors lit by a halogen light, using a 30-110 mm lens. As a result, we struggled with a few issues:<\/p>\n<ul>\n<li>The directional light cast harsh shadows over the model, which made it difficult for Photoscan to align the camera angles. In addition, the hue of the light altered the appearance of the final generated textures for the object (everything was tinted yellow).<\/li>\n<li>We were attempting to model white-ish objects on a white countertop, which also made it difficult to align camera angles. With little apparent texture and similar colors throughout, it was hard to distinguish the object from its environment.<\/li>\n<li>With a 30-110 mm lens, we zoomed in close to the object so that it took up the entire frame of the image, but due to our placement of the camera this created a very narrow depth of field. As a result, if we focused on one part of the object, other parts would appear blurry, which interfered with both dense point cloud generation and texture generation. 
(In the Depth of Field and Color Issues image below, you can see how the sword and arm are in focus, but more distant features like the head are out of focus.)<\/li>\n<li>Proper coverage is still something we're working on. We've often forgotten to take high shots of the top of an object, only to be left with an otherwise excellent model that has a frustrating hole in the top of the mesh. The same goes for recessed areas, such as bent arms or the gap between the legs; coverage of these areas is essential to prevent holes or conjoined parts in the model.<\/li>\n<\/ul>\n<p><a href=\"http:\/\/blogs.discovery.wisc.edu\/projects\/files\/2016\/05\/GundamDOF-e1463096760380.jpg\"><img loading=\"lazy\" class=\"alignnone wp-image-1242 size-large\" src=\"http:\/\/blogs.discovery.wisc.edu\/projects\/files\/2016\/05\/GundamDOF-e1463096760380-683x1024.jpg\" alt=\"Depth of Field and Color issues\" width=\"584\" height=\"876\" srcset=\"https:\/\/blogs.discovery.wisc.edu\/projects\/files\/2016\/05\/GundamDOF-e1463096760380-683x1024.jpg 683w, https:\/\/blogs.discovery.wisc.edu\/projects\/files\/2016\/05\/GundamDOF-e1463096760380-200x300.jpg 200w, https:\/\/blogs.discovery.wisc.edu\/projects\/files\/2016\/05\/GundamDOF-e1463096760380-768x1152.jpg 768w\" sizes=\"(max-width: 584px) 100vw, 584px\" \/><\/a><\/p>\n<p>With only a few images taken in this fashion, the result was models we've dubbed &#8220;pudding monsters&#8221; due to their lack of well-defined edges and features.<\/p>\n<p><a href=\"http:\/\/blogs.discovery.wisc.edu\/projects\/files\/2016\/04\/No-Texture-Pudding.png\" rel=\"attachment wp-att-1216\"><img loading=\"lazy\" class=\"alignnone size-large wp-image-1216\" src=\"http:\/\/blogs.discovery.wisc.edu\/projects\/files\/2016\/04\/No-Texture-Pudding-1024x574.png\" alt=\"No Texture Pudding\" width=\"584\" height=\"327\" 
srcset=\"https:\/\/blogs.discovery.wisc.edu\/projects\/files\/2016\/04\/No-Texture-Pudding-1024x574.png 1024w, https:\/\/blogs.discovery.wisc.edu\/projects\/files\/2016\/04\/No-Texture-Pudding-300x168.png 300w, https:\/\/blogs.discovery.wisc.edu\/projects\/files\/2016\/04\/No-Texture-Pudding-768x430.png 768w, https:\/\/blogs.discovery.wisc.edu\/projects\/files\/2016\/04\/No-Texture-Pudding-500x280.png 500w, https:\/\/blogs.discovery.wisc.edu\/projects\/files\/2016\/04\/No-Texture-Pudding.png 1928w\" sizes=\"(max-width: 584px) 100vw, 584px\" \/><\/a><\/p>\n<p>Eventually, we found that shooting outdoors in overcast conditions provides excellent lighting: the light is uniform all around the object, which prevents harsh shadows. This, in conjunction with more apparent surface detail, allowed for much better alignment and mesh generation. We also found that taking close-up shots after getting general coverage really improved the quality of the mesh and texture; it seems one can mix different kinds of shooting, e.g. panning and circling, as long as there is sufficient overlap between images to obtain alignment. On the note of alignment, we really should be using markers: they speed up the alignment process immensely and also help where overlap is ambiguous. In some cases we had to experiment with manually inserting markers into the datasets to get cameras to align, and while we were able to obtain full camera alignment this way, it was very time-consuming and not viable for regular use. Some objects are very difficult to align; we've had issues getting bushes and large trees to align properly, sometimes with only 4-6 cameras aligning out of a 200+ image set. 
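As a crude, illustrative aid for planning that image-to-image overlap (all numbers below are assumptions for the sketch, not settings we actually used): in a circling pass, the angular step between shots is roughly the camera's horizontal field of view scaled by one minus the desired overlap, which gives a shot count per ring.

```python
import math

def shots_per_ring(h_fov_deg, overlap):
    """Rough count of photos for one circling pass around an object,
    given the camera's horizontal field of view (degrees) and the
    desired fractional overlap between neighboring frames."""
    step_deg = h_fov_deg * (1.0 - overlap)  # angular advance per shot
    return math.ceil(360.0 / step_deg)

# Assumed example: ~40 degree horizontal FOV with 75% overlap
print(shots_per_ring(40, 0.75))  # -> 36 shots for a single ring
```

Each additional ring at a different height (including the high shots we keep forgetting) would add a similar count.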
Again, printing out markers could help immensely with this.<\/p>\n<p><a href=\"http:\/\/blogs.discovery.wisc.edu\/projects\/files\/2016\/03\/Power_pole.png\" rel=\"attachment wp-att-1178\"><img loading=\"lazy\" class=\"alignnone size-large wp-image-1178\" src=\"http:\/\/blogs.discovery.wisc.edu\/projects\/files\/2016\/03\/Power_pole-1024x576.png\" alt=\"Power_pole\" width=\"584\" height=\"329\" srcset=\"https:\/\/blogs.discovery.wisc.edu\/projects\/files\/2016\/03\/Power_pole-1024x576.png 1024w, https:\/\/blogs.discovery.wisc.edu\/projects\/files\/2016\/03\/Power_pole-300x169.png 300w, https:\/\/blogs.discovery.wisc.edu\/projects\/files\/2016\/03\/Power_pole-768x432.png 768w, https:\/\/blogs.discovery.wisc.edu\/projects\/files\/2016\/03\/Power_pole-500x281.png 500w, https:\/\/blogs.discovery.wisc.edu\/projects\/files\/2016\/03\/Power_pole.png 1920w\" sizes=\"(max-width: 584px) 100vw, 584px\" \/><\/a><\/p>\n<p><strong>Post-Processing<\/strong><\/p>\n<p>Our final improvements this semester were in editing photos before importing them into Photoscan. Initially, we did no photo manipulation, but later we found we could preprocess the photos to correct the contrast, lighting, and exposure of the original images. We used this approach on the Mother Nature model from Epic (not shown here), which was shot indoors in directional light. The corrections may have helped with alignment accuracy and depth calculation, but the resulting textures were non-uniform because the corrections were stronger at some angles than others. Overall the model turned out well, but there is certainly room for improvement. We are currently using the camera's automatic mode for very quick modeling, but we wonder if it would be better to use manual exposure settings to keep aperture, ISO, and shutter speed constant between photos. 
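To make the manual-exposure idea concrete, here is a small sketch of the standard exposure-value relation (the specific settings below are assumed examples, not values from our shoots): two settings expose equally when log2(N²/t) − log2(ISO/100) matches, so candidate manual settings can be checked for consistency before a shoot.

```python
import math

def exposure_value(aperture, shutter_s, iso):
    """Exposure value referenced to ISO 100: EV = log2(N^2 / t) - log2(ISO / 100).
    Photos shot at equal EV should come out equally bright."""
    return math.log2(aperture ** 2 / shutter_s) - math.log2(iso / 100)

# Assumed examples: halving the shutter time is offset by doubling the ISO
a = exposure_value(8.0, 1 / 125, 100)  # f/8, 1/125 s, ISO 100
b = exposure_value(8.0, 1 / 250, 200)  # f/8, 1/250 s, ISO 200
print(abs(a - b) < 1e-9)  # -> True: equivalent exposures
```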
This manual constancy should allow for very even textures and consistent exposure across photos, but the approach would only be viable for diffuse, well-lit subjects.<\/p>\n<p><strong>Model Construction and Refinement<\/strong><\/p>\n<p>The biggest mistake we made in the earlier stages of the project was constructing meshes from sparse point clouds. Essentially, we were building models from only the limited data points produced when the camera angles are aligned. With dense point cloud generation, Agisoft uses photogrammetry (which Andrew and Bryce should learn about next semester in CS 534 &#8211; Digital Photo Computation) to construct data points for key features and interpolate values between them, creating a much more detailed data set. As a result, our models turned out <em>much<\/em> better, even with low-detail dense point cloud generation.<\/p>\n<p>For example, we were able to achieve good results with our Gundam model after manual camera alignment (necessary because we did not use tracking markers) and ultra-high-quality dense point cloud generation. However, this was very time-consuming, and the end model lacked the fine surface detail of the source; the surfaces tended to be pitted, with few smooth planes. We think this was due to our very shallow depth of field: Photoscan would find corresponding points in two images, but a point could be in focus in one image and out of focus in the other. Compounded across many angles, this mismatch would likely blend the two and introduce errors in the depth calculations. 
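Our shallow depth-of-field hypothesis can be sanity-checked with the standard thin-lens approximation (the focal lengths match our lens, but the aperture, focus distance, and circle of confusion below are assumed for illustration):

```python
def depth_of_field_mm(f_mm, aperture, focus_mm, coc_mm=0.015):
    """Thin-lens DOF approximation: hyperfocal H = f^2 / (N * c) + f, with
    near/far limits around the focus distance. coc_mm is the circle of
    confusion, which depends on the sensor (the value here is assumed).
    Valid for focus distances well inside the hyperfocal distance."""
    h = f_mm ** 2 / (aperture * coc_mm) + f_mm
    near = h * focus_mm / (h + (focus_mm - f_mm))
    far = h * focus_mm / (h - (focus_mm - f_mm))
    return far - near

# Same f/5.6 aperture and 1 m focus distance at each end of a 30-110 mm zoom
print(round(depth_of_field_mm(110, 5.6, 1000), 1))  # ~12 mm in focus at 110 mm
print(round(depth_of_field_mm(30, 5.6, 1000), 1))   # ~182 mm in focus at 30 mm
```

At the long end only a sliver of the subject is sharp, which is consistent with the blur we saw when framing tightly at 110 mm.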
In the future we would like to repeat this with a much wider depth of field, keeping all parts of the model in focus, and we expect superior results.<\/p>\n<p><a href=\"http:\/\/blogs.discovery.wisc.edu\/projects\/files\/2016\/05\/Gundam_High.png\" rel=\"attachment wp-att-1246\"><img loading=\"lazy\" class=\"alignnone wp-image-1246 size-large\" src=\"http:\/\/blogs.discovery.wisc.edu\/projects\/files\/2016\/05\/Gundam_High-1024x515.png\" alt=\"Model from High-Quality Dense Point Cloud (~1,100,000 Vertices)\" width=\"584\" height=\"294\" srcset=\"https:\/\/blogs.discovery.wisc.edu\/projects\/files\/2016\/05\/Gundam_High-1024x515.png 1024w, https:\/\/blogs.discovery.wisc.edu\/projects\/files\/2016\/05\/Gundam_High-300x151.png 300w, https:\/\/blogs.discovery.wisc.edu\/projects\/files\/2016\/05\/Gundam_High-768x386.png 768w, https:\/\/blogs.discovery.wisc.edu\/projects\/files\/2016\/05\/Gundam_High-500x251.png 500w\" sizes=\"(max-width: 584px) 100vw, 584px\" \/><\/a><\/p>\n<p><a href=\"http:\/\/blogs.discovery.wisc.edu\/projects\/files\/2016\/05\/Gundam_Low.png\" rel=\"attachment wp-att-1248\"><img loading=\"lazy\" class=\"alignnone wp-image-1248 size-large\" src=\"http:\/\/blogs.discovery.wisc.edu\/projects\/files\/2016\/05\/Gundam_Low-1024x427.png\" alt=\"Model from Low-Quality Dense Point Cloud (~30,000 Vertices)\" width=\"584\" height=\"244\" srcset=\"https:\/\/blogs.discovery.wisc.edu\/projects\/files\/2016\/05\/Gundam_Low-1024x427.png 1024w, https:\/\/blogs.discovery.wisc.edu\/projects\/files\/2016\/05\/Gundam_Low-300x125.png 300w, https:\/\/blogs.discovery.wisc.edu\/projects\/files\/2016\/05\/Gundam_Low-768x320.png 768w, https:\/\/blogs.discovery.wisc.edu\/projects\/files\/2016\/05\/Gundam_Low-500x208.png 500w\" sizes=\"(max-width: 584px) 100vw, 584px\" \/><\/a><\/p>\n<p>In addition, we refined the point clouds by manually trimming outlying data points, which produced a much more accurate mesh. 
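The manual trimming described above could in principle be automated with a statistical outlier filter. A minimal sketch (assuming numpy and a plain N×3 array rather than Photoscan's own point cloud format) drops points whose mean distance to their nearest neighbors is unusually large:

```python
import numpy as np

def trim_outliers(points, k=8, std_ratio=2.0):
    """Keep points whose mean distance to their k nearest neighbors is
    within (mean + std_ratio * std) of the cloud-wide distribution.
    Brute-force O(n^2); fine for small demo clouds."""
    dists = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    dists.sort(axis=1)                       # column 0 is the distance to self (0)
    knn_mean = dists[:, 1:k + 1].mean(axis=1)
    keep = knn_mean <= knn_mean.mean() + std_ratio * knn_mean.std()
    return points[keep]

# Assumed demo data: a tight cluster plus one stray point far away
rng = np.random.default_rng(0)
cloud = np.vstack([rng.normal(0.0, 0.01, (200, 3)), [[5.0, 5.0, 5.0]]])
print(len(trim_outliers(cloud)))  # -> 200 (the stray point is removed)
```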
This, together with all the prior techniques, allowed us to create a very nice model quite easily:<\/p>\n<p><a href=\"http:\/\/blogs.discovery.wisc.edu\/projects\/files\/2016\/05\/Shoe3.png\" rel=\"attachment wp-att-1251\"><img loading=\"lazy\" class=\"alignnone wp-image-1251 size-large\" src=\"http:\/\/blogs.discovery.wisc.edu\/projects\/files\/2016\/05\/Shoe3-1024x987.png\" alt=\"Right-Side Angle\" width=\"584\" height=\"563\" srcset=\"https:\/\/blogs.discovery.wisc.edu\/projects\/files\/2016\/05\/Shoe3-1024x987.png 1024w, https:\/\/blogs.discovery.wisc.edu\/projects\/files\/2016\/05\/Shoe3-300x289.png 300w, https:\/\/blogs.discovery.wisc.edu\/projects\/files\/2016\/05\/Shoe3-768x741.png 768w, https:\/\/blogs.discovery.wisc.edu\/projects\/files\/2016\/05\/Shoe3-311x300.png 311w, https:\/\/blogs.discovery.wisc.edu\/projects\/files\/2016\/05\/Shoe3.png 1062w\" sizes=\"(max-width: 584px) 100vw, 584px\" \/><\/a> <a href=\"http:\/\/blogs.discovery.wisc.edu\/projects\/files\/2016\/05\/Shoe4.png\" rel=\"attachment wp-att-1252\"><img loading=\"lazy\" class=\"alignnone wp-image-1252 size-large\" src=\"http:\/\/blogs.discovery.wisc.edu\/projects\/files\/2016\/05\/Shoe4-1024x1000.png\" alt=\"Back Angle (Holes in the mesh are from lack of photo coverage)\" width=\"584\" height=\"570\" srcset=\"https:\/\/blogs.discovery.wisc.edu\/projects\/files\/2016\/05\/Shoe4-1024x1000.png 1024w, https:\/\/blogs.discovery.wisc.edu\/projects\/files\/2016\/05\/Shoe4-300x293.png 300w, https:\/\/blogs.discovery.wisc.edu\/projects\/files\/2016\/05\/Shoe4-768x750.png 768w, https:\/\/blogs.discovery.wisc.edu\/projects\/files\/2016\/05\/Shoe4-307x300.png 307w, https:\/\/blogs.discovery.wisc.edu\/projects\/files\/2016\/05\/Shoe4.png 1040w\" sizes=\"(max-width: 584px) 100vw, 584px\" \/><\/a> <a href=\"http:\/\/blogs.discovery.wisc.edu\/projects\/files\/2016\/05\/Shoe1.png\" rel=\"attachment wp-att-1253\"><img loading=\"lazy\" class=\"alignnone wp-image-1253 size-large\" 
src=\"http:\/\/blogs.discovery.wisc.edu\/projects\/files\/2016\/05\/Shoe1-1024x933.png\" alt=\"Left-Side Angle\" width=\"584\" height=\"532\" srcset=\"https:\/\/blogs.discovery.wisc.edu\/projects\/files\/2016\/05\/Shoe1-1024x933.png 1024w, https:\/\/blogs.discovery.wisc.edu\/projects\/files\/2016\/05\/Shoe1-300x273.png 300w, https:\/\/blogs.discovery.wisc.edu\/projects\/files\/2016\/05\/Shoe1-768x700.png 768w, https:\/\/blogs.discovery.wisc.edu\/projects\/files\/2016\/05\/Shoe1-329x300.png 329w, https:\/\/blogs.discovery.wisc.edu\/projects\/files\/2016\/05\/Shoe1.png 1103w\" sizes=\"(max-width: 584px) 100vw, 584px\" \/><\/a> <a href=\"http:\/\/blogs.discovery.wisc.edu\/projects\/files\/2016\/05\/Shoe2.png\" rel=\"attachment wp-att-1254\"><img loading=\"lazy\" class=\"alignnone wp-image-1254 size-large\" src=\"http:\/\/blogs.discovery.wisc.edu\/projects\/files\/2016\/05\/Shoe2-1024x993.png\" alt=\"Front Angle\" width=\"584\" height=\"566\" srcset=\"https:\/\/blogs.discovery.wisc.edu\/projects\/files\/2016\/05\/Shoe2-1024x993.png 1024w, https:\/\/blogs.discovery.wisc.edu\/projects\/files\/2016\/05\/Shoe2-300x291.png 300w, https:\/\/blogs.discovery.wisc.edu\/projects\/files\/2016\/05\/Shoe2-768x745.png 768w, https:\/\/blogs.discovery.wisc.edu\/projects\/files\/2016\/05\/Shoe2-309x300.png 309w, https:\/\/blogs.discovery.wisc.edu\/projects\/files\/2016\/05\/Shoe2.png 1057w\" sizes=\"(max-width: 584px) 100vw, 584px\" \/><\/a><\/p>\n<p>Here, the models were exported from Photoscan as .obj files, processed into JavaScript files, and viewed with WebGL. This simply provided an alternative way to view models outside of Photoscan. (We are unsure how to embed HTML pages with external JavaScript files into WordPress; that would let readers view the model from all angles instead of static images.)<\/p>\n<p><strong>In Conclusion<\/strong><\/p>\n<p>We've had our share of issues and, for the most part, have worked through them. 
We feel confident in our ability to get decent models, and are looking forward to more experimentation with DoF, markers, and using multiple chunks to obtain high quality on small, detailed areas of models, which we will investigate this summer and possibly next fall. We also plan to make models of more challenging subjects, such as very small or very large structures, and possibly begin looking into modeling dynamic events (starting simple, of course).<\/p>\n<p>We also have ideas for &#8220;uses&#8221; for the models, such as importing them into a game engine or 3D printing them, but we'll have to see where it goes!<\/p>\n<p>-Andrew and Bryce<\/p>\n<p>&nbsp;<\/p>\n","protected":false},"excerpt":{"rendered":"<p>Our project on 3D motion capture was certainly a learning experience. \u00a0Although we have yet to make models of phenomena over time, we have certainly improved on our skills in data collection (photographic techniques), post-processing, and model construction and refinement &hellip; <a href=\"https:\/\/blogs.discovery.wisc.edu\/projects\/2016\/05\/13\/3d-capture-semester-wrap-up\/\">Continue reading <span 
class=\"meta-nav\">&rarr;<\/span><\/a><\/p>\n","protected":false},"author":115,"featured_media":0,"comment_status":"closed","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":[],"categories":[3],"tags":[],"_links":{"self":[{"href":"https:\/\/blogs.discovery.wisc.edu\/projects\/wp-json\/wp\/v2\/posts\/1232"}],"collection":[{"href":"https:\/\/blogs.discovery.wisc.edu\/projects\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/blogs.discovery.wisc.edu\/projects\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/blogs.discovery.wisc.edu\/projects\/wp-json\/wp\/v2\/users\/115"}],"replies":[{"embeddable":true,"href":"https:\/\/blogs.discovery.wisc.edu\/projects\/wp-json\/wp\/v2\/comments?post=1232"}],"version-history":[{"count":19,"href":"https:\/\/blogs.discovery.wisc.edu\/projects\/wp-json\/wp\/v2\/posts\/1232\/revisions"}],"predecessor-version":[{"id":1258,"href":"https:\/\/blogs.discovery.wisc.edu\/projects\/wp-json\/wp\/v2\/posts\/1232\/revisions\/1258"}],"wp:attachment":[{"href":"https:\/\/blogs.discovery.wisc.edu\/projects\/wp-json\/wp\/v2\/media?parent=1232"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/blogs.discovery.wisc.edu\/projects\/wp-json\/wp\/v2\/categories?post=1232"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/blogs.discovery.wisc.edu\/projects\/wp-json\/wp\/v2\/tags?post=1232"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}