{"id":489,"date":"2016-10-27T06:04:38","date_gmt":"2016-10-27T11:04:38","guid":{"rendered":"http:\/\/blogs.discovery.wisc.edu\/vr2016\/?p=489"},"modified":"2016-11-23T14:05:16","modified_gmt":"2016-11-23T19:05:16","slug":"sixth-sense-feel-what-you-read-a-vr-approach-to-spreading-social-awareness","status":"publish","type":"post","link":"https:\/\/blogs.discovery.wisc.edu\/vr2016\/2016\/10\/27\/sixth-sense-feel-what-you-read-a-vr-approach-to-spreading-social-awareness\/","title":{"rendered":"Sixth Sense &#8211; Due 10\/27 &#8211; Project Proposal"},"content":{"rendered":"<p><strong>Title: Sixth Sense &#8211; Feel What You Read &#8211; A VR Approach to Spreading Social Awareness<\/strong><\/p>\n<p><strong>Team Name &#8211; Sixth Sense<\/strong><\/p>\n<p><strong>Group members<\/strong><\/p>\n<ol>\n<li><span style=\"font-weight: 400\">Ameya Raul<\/span><\/li>\n<li><span style=\"font-weight: 400\">Bixi Zhang<\/span><\/li>\n<li><span style=\"font-weight: 400\">Shruthi Racha<\/span><\/li>\n<li><span style=\"font-weight: 400\">Zhicheng Gu<\/span><\/li>\n<\/ol>\n<p><strong>Description<\/strong><b>: <\/b><\/p>\n<p><span style=\"font-weight: 400\">Imagine seeing your own imagination played out in front of your eyes as you read a book. The VR book (traditionally known as the \u201cpicture book\u201d) aims to take you to the world of fairytales, where whatever you read really happens. Imagine a book that can show you the past, like Tom Riddle\u2019s diary in Harry Potter. In this project, we intend to limit our scope to articles pertaining to social causes. 
We also intend to experiment with automatic visualization of articles using deep learning techniques.<\/span><\/p>\n<p><em><span style=\"font-weight: 400\">Virtual Reality in Journalism:<\/span><\/em><\/p>\n<p><iframe loading=\"lazy\" width=\"1170\" height=\"658\" src=\"https:\/\/www.youtube.com\/embed\/BuGUQ6svJyg?feature=oembed\" frameborder=\"0\" allowfullscreen><\/iframe><\/p>\n<p><em><span style=\"font-weight: 400\">Visualisation of Text:<\/span><\/em><\/p>\n<p><img loading=\"lazy\" class=\"alignnone size-medium wp-image-490\" src=\"http:\/\/blogs.discovery.wisc.edu\/vr2016\/files\/2016\/10\/Screen-Shot-2016-10-27-at-5.17.25-AM-300x177.png\" alt=\"screen-shot-2016-10-27-at-5-17-25-am\" width=\"300\" height=\"177\" srcset=\"https:\/\/blogs.discovery.wisc.edu\/vr2016\/files\/2016\/10\/Screen-Shot-2016-10-27-at-5.17.25-AM-300x177.png 300w, https:\/\/blogs.discovery.wisc.edu\/vr2016\/files\/2016\/10\/Screen-Shot-2016-10-27-at-5.17.25-AM-768x454.png 768w, https:\/\/blogs.discovery.wisc.edu\/vr2016\/files\/2016\/10\/Screen-Shot-2016-10-27-at-5.17.25-AM-1024x606.png 1024w, https:\/\/blogs.discovery.wisc.edu\/vr2016\/files\/2016\/10\/Screen-Shot-2016-10-27-at-5.17.25-AM.png 1248w\" sizes=\"(max-width: 300px) 100vw, 300px\" \/><\/p>\n<p><strong>Purpose:<\/strong><\/p>\n<p><span style=\"font-weight: 400\">We often come across charity drives calling out for help during a social crisis. Shocking statistics and graphic photos once worked \u2014 the message was powerful and emotive. But after too many pamphlets and commercials, the message has become plain and lost its intensity. Donors are no longer as invested in philanthropic causes because non-profit organizations fail to create empathy, let alone an understanding of the problem. So what do you do now? Where does the future of fundraising lie for charities? The answer may lie in virtual reality. 
<\/span> <span style=\"font-weight: 400\">In this project, we aim to leverage virtual reality to bridge the emotional and physical disconnect between victims, non-profit organizations, and donors by visualizing articles pertaining to social causes in virtual reality. VR is not only a powerful medium to spread awareness but also helps to fully convey the feelings of the victims of such incidents, thereby reinforcing the value of the message. <\/span><\/p>\n<p><strong>What will people experience:<\/strong><\/p>\n<p><span style=\"font-weight: 400\">We intend to provide a full visualization of the articles within a limited scope. We aim to display the text along with the images. If the text in VR is hard to see, we would opt to render a narration of the article in parallel. We understand that automating article visualization is a hard problem. However, we wish to experiment with deep learning techniques to automatically create a mapping between the text and the actions or objects it describes, and thereby provide a partial visualization of the article. <\/span><\/p>\n<p><span style=\"font-weight: 400\"><strong>What take-aways do you want the user to have:<\/strong> <\/span><\/p>\n<ul>\n<li><span style=\"font-weight: 400\">A strong connection with, and empathy for, the trauma of the people undergoing the social crisis. 
<\/span><\/li>\n<li><span style=\"font-weight: 400\">The ability to realize what difference each one of us can make in another person\u2019s life.<\/span><\/li>\n<\/ul>\n<p><strong>Concept art:<\/strong><\/p>\n<p><img loading=\"lazy\" class=\"alignnone size-medium wp-image-491\" src=\"http:\/\/blogs.discovery.wisc.edu\/vr2016\/files\/2016\/10\/Screen-Shot-2016-10-27-at-5.59.03-AM-300x205.png\" alt=\"screen-shot-2016-10-27-at-5-59-03-am\" width=\"300\" height=\"205\" srcset=\"https:\/\/blogs.discovery.wisc.edu\/vr2016\/files\/2016\/10\/Screen-Shot-2016-10-27-at-5.59.03-AM-300x205.png 300w, https:\/\/blogs.discovery.wisc.edu\/vr2016\/files\/2016\/10\/Screen-Shot-2016-10-27-at-5.59.03-AM-768x525.png 768w, https:\/\/blogs.discovery.wisc.edu\/vr2016\/files\/2016\/10\/Screen-Shot-2016-10-27-at-5.59.03-AM-1024x700.png 1024w, https:\/\/blogs.discovery.wisc.edu\/vr2016\/files\/2016\/10\/Screen-Shot-2016-10-27-at-5.59.03-AM.png 1196w\" sizes=\"(max-width: 300px) 100vw, 300px\" \/><\/p>\n<p><strong>What equipment you plan to use:<\/strong><\/p>\n<p><span style=\"font-weight: 400\">We would like to experiment with both AR and VR and see which experience feels better. 
However, if this is beyond our scope, we would prefer: <\/span><\/p>\n<ul>\n<li><span style=\"font-weight: 400\">VR with an Oculus Rift or any other suitable headset.<\/span><\/li>\n<li><span style=\"font-weight: 400\">A 360-degree camera, in case we decide to capture and narrate a scene of our own.<\/span><\/li>\n<\/ul>\n<p><strong>A description of what you think you know how to do (as a group):<\/strong><\/p>\n<ul>\n<li style=\"font-weight: 400\"><span style=\"font-weight: 400\">Programming: programmers proficient in Java\/Python\/C\/C++, with experience in deep learning and natural language processing (NLP).<\/span><\/li>\n<li style=\"font-weight: 400\"><span style=\"font-weight: 400\">Design: a UX and design expert on the team to work on the relative layout, sizing, and appearance of objects in the VR scene.<\/span><\/li>\n<\/ul>\n<p><strong>A description of things you are less sure you know how to do (as a group):<\/strong><\/p>\n<ul>\n<li style=\"font-weight: 400\"><span style=\"font-weight: 400\">Coding in Unity<\/span><\/li>\n<li style=\"font-weight: 400\"><span style=\"font-weight: 400\">Graphics concepts<\/span><\/li>\n<li style=\"font-weight: 400\"><span style=\"font-weight: 400\">Synchronizing text, the VR scene, and audio.<\/span><\/li>\n<li style=\"font-weight: 400\"><span style=\"font-weight: 400\">Projecting images\/videos in 360 degrees<\/span><\/li>\n<\/ul>\n<p><strong>What are your first steps:<\/strong><\/p>\n<ul>\n<li style=\"font-weight: 400\"><span style=\"font-weight: 400\">A literature survey of prior art and case studies on virtual reality for social causes and story rendering.<\/span><\/li>\n<li style=\"font-weight: 400\"><span style=\"font-weight: 400\">Research and user-experience testing of the available VR headsets, evaluating them on features and suitability for the problem under consideration.<\/span><\/li>\n<li style=\"font-weight: 400\"><span style=\"font-weight: 400\">Shortlist a set of literary 
articles (fictional and journalistic) for which we would aim to create VR visualizations.<\/span><\/li>\n<li style=\"font-weight: 400\"><span style=\"font-weight: 400\">Frame functional requirements, design diagrams\/sketches, and user workflows. For example: <\/span>\n<ul>\n<li style=\"font-weight: 400\"><span style=\"font-weight: 400\">How do we tell where the user is positioned in the VR scene?<\/span><\/li>\n<li style=\"font-weight: 400\"><span style=\"font-weight: 400\">Should we project the text in VR like a subtitle, or should we use audio to narrate the scene in the background? <\/span><\/li>\n<li style=\"font-weight: 400\"><span style=\"font-weight: 400\">How do we transition from one scene to another as the user reads an article? <\/span><\/li>\n<\/ul>\n<\/li>\n<li style=\"font-weight: 400\"><span style=\"font-weight: 400\">Create a prototype\/proof of concept for a short, simple article.<\/span><\/li>\n<li style=\"font-weight: 400\"><span style=\"font-weight: 400\">Follow an agile model to evaluate, analyse, and incorporate changes\/additions to the project as we progress.<\/span><\/li>\n<li style=\"font-weight: 400\"><span style=\"font-weight: 400\">Attempt to leverage machine learning techniques to auto-generate visualizations. Use simple articles initially, and tweak the system accordingly.<\/span><\/li>\n<\/ul>\n","protected":false},"excerpt":{"rendered":"<p>Title: Sixth Sense &#8211; Feel What You Read &#8211; A VR Approach to Spreading Social Awareness Team Name &#8211; Sixth Sense Group members Ameya Raul Bixi Zhang Shruthi Racha Zhicheng Gu Description: Imagine seeing your own imagination played out in front of your eyes as you read a book. 
The VR book (traditionally known as [&hellip;]<\/p>\n","protected":false},"author":175,"featured_media":0,"comment_status":"closed","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":[],"categories":[39,42],"tags":[],"_links":{"self":[{"href":"https:\/\/blogs.discovery.wisc.edu\/vr2016\/wp-json\/wp\/v2\/posts\/489"}],"collection":[{"href":"https:\/\/blogs.discovery.wisc.edu\/vr2016\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/blogs.discovery.wisc.edu\/vr2016\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/blogs.discovery.wisc.edu\/vr2016\/wp-json\/wp\/v2\/users\/175"}],"replies":[{"embeddable":true,"href":"https:\/\/blogs.discovery.wisc.edu\/vr2016\/wp-json\/wp\/v2\/comments?post=489"}],"version-history":[{"count":2,"href":"https:\/\/blogs.discovery.wisc.edu\/vr2016\/wp-json\/wp\/v2\/posts\/489\/revisions"}],"predecessor-version":[{"id":720,"href":"https:\/\/blogs.discovery.wisc.edu\/vr2016\/wp-json\/wp\/v2\/posts\/489\/revisions\/720"}],"wp:attachment":[{"href":"https:\/\/blogs.discovery.wisc.edu\/vr2016\/wp-json\/wp\/v2\/media?parent=489"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/blogs.discovery.wisc.edu\/vr2016\/wp-json\/wp\/v2\/categories?post=489"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/blogs.discovery.wisc.edu\/vr2016\/wp-json\/wp\/v2\/tags?post=489"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}