{"id":964,"date":"2016-12-18T22:47:29","date_gmt":"2016-12-19T04:47:29","guid":{"rendered":"http:\/\/blogs.discovery.wisc.edu\/vr2016\/?p=964"},"modified":"2016-12-18T22:48:15","modified_gmt":"2016-12-19T04:48:15","slug":"final-post-for-vrsurgeon","status":"publish","type":"post","link":"https:\/\/blogs.discovery.wisc.edu\/vr2016\/2016\/12\/18\/final-post-for-vrsurgeon\/","title":{"rendered":"Final post for VRsurgeon"},"content":{"rendered":"<p><b>Motivation<\/b><\/p>\n<p>&nbsp;<\/p>\n<p><span style=\"font-weight: 400\">The motivation for this project was two-fold. \u00a0We wanted to make a realistic enough virtual surgery simulator that people could practice with enough realism to help them in real surgery. \u00a0Practicing now involves cadavers or real-live patients. \u00a0It would be a lot cheaper and less dangerous to practice in real life. \u00a0Easy mistakes can be avoided with repetition. \u00a0For the second part of the project, we also wanted a way for the phone and Vive to interact in 3d space. \u00a0This could allow Google Cardboard to interact in Vive apps. \u00a0It\u2019s a nice, cheap alternative to a Vive HMD, and lets friends or other people experience Vive apps without the cost of adding an additional headset and computer.<\/span><\/p>\n<p>&nbsp;<\/p>\n<p><b>Contributions<\/b><span style=\"font-weight: 400\">:<\/span><span style=\"font-weight: 400\"><br \/>\n<\/span><span style=\"font-weight: 400\"><br \/>\n<\/span><span style=\"font-weight: 400\">Josh &#8211; Josh did the networking between the Vive PC and the Android phone. \u00a0See more about that in the \u201cProblems Encountered\u201d section. \u00a0Josh also got the Vuforia tracking working. \u00a0\u00a0Josh also took care of making everything else on the Android app work. \u00a0He had to make a subset of the room and change some shaders and stuff to keep the framerate. \u00a0Haixiang helped with the simulator integration.<\/span><\/p>\n<p>&nbsp;<\/p>\n<p><span style=\"font-weight: 400\">Dustin &#8211; \u00a0Dustin made the surgery room and some of the instruments used in the simulation. He also helped with object interactions and movement. Using the VR toolkit was a big help.<\/span><\/p>\n<p>&nbsp;<\/p>\n<p><span style=\"font-weight: 400\">Haixiang &#8211; Haixiang borrowed the simulation code from his lab mate. The code is well written and has a very clear interface. The simulation is computationally intensive, therefore we decided to run the simulation in a remote server and communicate with unity through networking. The major contribution is the networking protocol between the simulation server and unity. The secondary contribution of my part is the management of the interaction between the simulation mesh and the VR objects. <\/span><\/p>\n<p>&nbsp;<\/p>\n<p><b>Outcomes: <\/b><\/p>\n<p>&nbsp;<\/p>\n<p><span style=\"font-weight: 400\">Our project is a simulation of a Z-plasty surgery. You can lift and separate the skin on the head of our patient as well as suture it back together. \u00a0There\u2019s a bunch of tools that accomplish these purposes. \u00a0The surgery is done in VR, and it can be viewed from an Android phone.<\/span><\/p>\n<p>&nbsp;<\/p>\n<p><span style=\"font-weight: 400\"> We met our original description and goals with the exception of the cutting of the skin. \u00a0We\u2019re very happy with the results and this is a project we would enjoy perfecting if we had the time. 
The second challenge is that the simulation updates the mesh at about 2-3 frames per second, while VR needs to render at 90 fps. A separate thread is therefore needed to request new mesh positions from the simulation server without hurting the frame rate. But Unity functions can only be called from the main thread, so we created a synchronization protocol between the two threads: when a new mesh position is received, the network thread informs Unity to update the skin mesh, recompute its normals (for rendering), and update its collision mesh (for raycasting).
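A minimal sketch of that handoff, assuming a hypothetical ReceiveVertices call that blocks on the server socket; the main-thread-only Unity calls (setting Mesh.vertices, RecalculateNormals, reassigning MeshCollider.sharedMesh) are the real ones.

```csharp
// Sketch: a background thread receives mesh updates, and the main thread
// applies them. ReceiveVertices is a hypothetical blocking socket read.
using System.Threading;
using UnityEngine;

public class MeshStreamer : MonoBehaviour
{
    public MeshFilter skinFilter;
    public MeshCollider skinCollider;

    Vector3[] pendingVertices;           // written by the network thread, read by Update
    readonly object gate = new object();

    void Start()
    {
        var t = new Thread(NetworkLoop) { IsBackground = true };
        t.Start();
    }

    // Runs off the main thread: block on the server, never touch Unity objects here.
    void NetworkLoop()
    {
        while (true)
        {
            Vector3[] verts = ReceiveVertices(); // hypothetical blocking read
            lock (gate) { pendingVertices = verts; }
        }
    }

    // Runs on the main thread: apply the newest mesh, if one arrived.
    void Update()
    {
        Vector3[] verts;
        lock (gate) { verts = pendingVertices; pendingVertices = null; }
        if (verts == null) return;

        Mesh mesh = skinFilter.mesh;
        mesh.vertices = verts;          // update the skin mesh
        mesh.RecalculateNormals();      // for rendering
        mesh.RecalculateBounds();
        skinCollider.sharedMesh = null; // force the collider to rebuild
        skinCollider.sharedMesh = mesh; // for raycasting
    }

    // Hypothetical: read one frame of vertex positions from the simulation server.
    Vector3[] ReceiveVertices() { return null; }
}
```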
Last, the simulation runs on a machine under the cs.wisc.edu domain; it took us a while to figure out that there is a VPN that can connect off-campus machines to it.

As for the Android app, it uses Vuforia to put the surgery subject's head into augmented reality. Out of the box, Vuforia didn't work too well; some settings had to be tweaked to get it running smoothly on a not-so-new phone (a Nexus 5 from 2013). The setting that affected performance the most was the Camera Device Mode, which should be set to CAMERA_MODE_FAST. It controls whether the phone favors accuracy or speed while running Vuforia. The fast mode has the occasional twitch, but twitches are quickly corrected, and it is quite a bit faster, so the framerate looks a lot nicer.

[This section is an edited version from another project.]

Designing a tag is not for the faint of heart. The biggest barrier was using Adobe Illustrator, a program designed for artists and graphic designers rather than engineers. Each VuMark contains a number of parts.

The algorithm Vuforia uses tracks enclosed shapes with 5-20 sides, whose edges are marked by areas of high contrast. The tags also require a "clear space" inside the shape. The shape can be rotationally symmetric, but tags used for camera pose tracking must not be.

The tag also has preset locations for light/dark markers used to encode information. In our tag there are 31 markers, which encode 127 unique values. The image of the marker actually shows 32 markers, but one (the top, center one) exists only to make the pattern look better. The unique values didn't really do anything here, but we thought we might put different stations or angles on different tags; we never got around to that. The markers we used are black and white, but they are only required to have high contrast when viewed in black and white. While designing the tag, there is a lot of freedom to use different colors and shapes, so developers can design an appealing tag that fits their purposes. The tag we used is shown above.

[Image: the VuMark tag we designed.]

For the simulation, we used almost the same code on the PC and Android to stream the skin meshes; we took out the initialization and everything worked fine.

For the syncing between the Vive and the phone, we set up a relay on the server to forward packets from the Vive to the phone (both are connected to it anyway for the mesh streaming). We first tried Unity's built-in networking, but that was a BAD IDEA: there isn't a good way to use it across two different scenes. We ended up writing our own networking functions over TCP connections, which worked a lot better. To keep things simple, we only streamed the tool selection and the positions and rotations of the Vive HMD and controllers. (There was a small bug where the hook would sometimes be rotated 90 degrees, but it worked really well besides that.)
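A minimal sketch of the sending side, under the assumption of a simple hand-rolled wire format (tool ID, then position and rotation for each tracked transform); the relay host, port, and layout here are hypothetical, while TcpClient and BinaryWriter are standard .NET.

```csharp
// Sketch: streaming tool selection plus HMD/controller poses over raw TCP.
// The wire format and relay address are hypothetical, not a documented protocol.
using System.IO;
using System.Net.Sockets;
using UnityEngine;

public class PoseSender : MonoBehaviour
{
    public Transform hmd, leftController, rightController;
    public int toolId;

    TcpClient client;
    BinaryWriter writer;

    void Start()
    {
        // The relay on the simulation server forwards these packets to the phone.
        client = new TcpClient("relay-host", 9000); // hypothetical host and port
        writer = new BinaryWriter(client.GetStream());
    }

    void Update()
    {
        writer.Write(toolId);
        WritePose(hmd);
        WritePose(leftController);
        WritePose(rightController);
        writer.Flush();
    }

    void WritePose(Transform t)
    {
        Vector3 p = t.position;
        Quaternion q = t.rotation; // a rotation like this is where our 90-degree hook bug lived
        writer.Write(p.x); writer.Write(p.y); writer.Write(p.z);
        writer.Write(q.x); writer.Write(q.y); writer.Write(q.z); writer.Write(q.w);
    }

    void OnDestroy()
    {
        if (writer != null) writer.Close();
        if (client != null) client.Close();
    }
}
```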
To sync the Vive and the phone in real-life coordinates, we put a Vive controller on top of the Vuforia tag and used a button push to trigger the calibration, which moved the head to the position of the controller. Then we just had to make sure the tag was lined up with the Vive scene (since we didn't apply any rotations). This worked pretty well, though it was a little off because the scaling for the Vuforia side was hard to get right. To set it once and for all, we put the Vive controllers on either side of the head and kept scaling the Android scene up until the controllers appeared in the same spot in both views. There is probably a more deterministic way to do this, but it worked as a quick fix. We also didn't have time to find models of the HMD and controllers the Vive uses, so they were replaced by a block and a squashed sphere.

Next Steps:

It would be nice to have some interaction you could do on the phone. We wanted to let the phone load different types of surgeries into the simulator, so an instructor could test a bunch of different ones right there, with instructions for each displayed in the Vive to make it easier for students to remember what to do.

We also wish we could have added real-time cutting. We wanted to, but those types of algorithms are complicated, and there wasn't time. With cutting in place, we wouldn't even have had to pre-load the models!

Another limitation: currently the mesh streaming and the constraint changes (hook/suture) run in the same thread on the server, so the Unity client has to keep sending frame-update requests and wait for the server to send back the updated mesh. Ideally we would use one port for constraint changes (upstream, from Unity to the simulator) and another for pushing meshes to Unity (downstream, from the simulator to Unity); a sketch of that split follows.
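From the Unity side, the split might look roughly like this; the ports, host, and framing are hypothetical, and the mesh loop would hand vertices to the main thread exactly as in the earlier MeshStreamer sketch.

```csharp
// Sketch of the proposed two-port split, seen from the Unity client: one
// connection is write-only for constraint changes, the other is a read-only
// stream the server pushes meshes down without being asked. All names,
// ports, and framing here are hypothetical.
using System.IO;
using System.Net.Sockets;
using System.Threading;
using UnityEngine;

public class SplitChannels : MonoBehaviour
{
    BinaryWriter upstream;   // constraint changes: Unity -> simulator
    BinaryReader downstream; // mesh pushes: simulator -> Unity

    void Start()
    {
        upstream   = new BinaryWriter(new TcpClient("sim-host", 9001).GetStream()); // hypothetical
        downstream = new BinaryReader(new TcpClient("sim-host", 9002).GetStream()); // hypothetical

        var t = new Thread(MeshLoop) { IsBackground = true };
        t.Start();
    }

    // Constraints go out immediately: no round trip, no waiting for a mesh.
    public void SendHookConstraint(Vector3 objectSpacePos, Vector3 bary, int triangle)
    {
        upstream.Write(objectSpacePos.x); upstream.Write(objectSpacePos.y); upstream.Write(objectSpacePos.z);
        upstream.Write(bary.x); upstream.Write(bary.y); upstream.Write(bary.z);
        upstream.Write(triangle);
        upstream.Flush();
    }

    // Meshes arrive whenever the simulator finishes a step.
    void MeshLoop()
    {
        while (true)
        {
            int count = downstream.ReadInt32(); // vertex count, then raw floats
            var verts = new Vector3[count];
            for (int i = 0; i < count; i++)
                verts[i] = new Vector3(downstream.ReadSingle(),
                                       downstream.ReadSingle(),
                                       downstream.ReadSingle());
            // Hand verts to the main thread as in the MeshStreamer sketch above.
        }
    }
}
```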
Video:

Here's the video. Sorry for the weird blurriness in the phone segment; we only had a camera with manual focus, and Josh had to hold the phone and the camera at the same time! Also, the phone died partway through, but we got everything demoed before it died.

The video is 70 MB, so we can't upload it directly to the webpage. Here is the link:

https://drive.google.com/file/d/0B1gxFYloyBO4VFc0LU1jaHlTSFk/view?usp=sharing