Sixth Sense – Due 10/27 – Project Proposal


Title: Sixth Sense – Feel What You Read – A VR Approach to Spreading Social Awareness

Team Name – Sixth Sense

Group members

  1. Ameya Raul
  2. Bixi Zhang
  3. Shruthi Racha
  4. Zhicheng Gu

Description:

Imagine seeing your own imagination played out in front of your eyes as you read a book. The VR book (a successor to the traditional "picture book") aims to take you into the world of fairy tales, where whatever you read comes to life. Imagine a book that can show you the past, like Tom Riddle's diary in Harry Potter. In this project we limit our scope to articles pertaining to social causes. We also intend to experiment with automatic visualization of articles using deep learning techniques.

Virtual Reality in Journalism:

Visualization of Text:

(Screenshot: example visualization of text)

Purpose:

We often come across charity drives calling for help during a social crisis. Earlier, shocking statistics and graphic photos worked: the message was powerful and emotive. But after too many pamphlets and commercials, the message has become commonplace and has lost its intensity. Donors are no longer as invested in philanthropic causes because non-profit organizations fail to create empathy, let alone an understanding of the problem. So what do you do now? Where does the future of fundraising lie for charities? The answer may lie in virtual reality. In this project we aim to leverage virtual reality to bridge the emotional and physical disconnect between victims, non-profit organizations, and donors by visualizing articles about social causes in VR. VR is not only a powerful medium for spreading awareness; it also conveys the feelings of the victims of such incidents, thereby reinforcing the value of the message.

What will people experience:

We intend to provide a full visualization of the articles within a limited scope. We aim to display the text along with the images. If the text is hard to read in VR, we would instead render a narration of the article in parallel. We understand that automating article visualization is a hard problem. However, we wish to experiment with deep learning techniques to automatically map the actions and objects mentioned in the text to visual elements, and thereby provide a partial visualization of the article.
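Before any deep learning is involved, the mapping idea can be prototyped with a plain keyword lookup. The sketch below is a minimal stand-in, not our eventual system: the asset names and the `ASSET_CATALOG` dictionary are hypothetical placeholders, and a learned NLP model would eventually replace the keyword match.

```python
import re

# Hypothetical asset catalog mapping keywords to placeholder scene assets.
# In the eventual system this mapping would be learned with NLP/deep learning.
ASSET_CATALOG = {
    "flood": "scene_flood_360",
    "child": "model_child",
    "tent": "model_refugee_tent",
    "food": "model_food_parcel",
}

def map_text_to_assets(sentence):
    """Return the assets whose keywords appear in the sentence,
    giving a partial visualization of that piece of text."""
    words = re.findall(r"[a-z]+", sentence.lower())
    return [ASSET_CATALOG[w] for w in words if w in ASSET_CATALOG]

scene = map_text_to_assets("A child waits for food outside a tent after the flood.")
```

Even this crude lookup lets us test the VR rendering pipeline end to end while the learned mapping is developed.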

What take-aways do you want the user to have:

  • A strong connection with, and empathy for, the people undergoing the social crisis.
  • A realization of the difference each of us can make in another person's life.

Concept art:

(Concept art screenshot)

What equipment you plan to use:

We would like to experiment with both AR and VR to see which experience feels better. However, if that proves beyond scope, we would prefer:

  • VR with an Oculus Rift or any suitable headset.
  • A 360-degree camera, in case we decide to capture and narrate a scene of our own.

A description of what you think you know how to do (as a group):

  • Programming: proficiency in Java/Python/C/C++, plus experience with deep learning and natural language processing (NLP).
  • Design: a UX and design expert on the team to work on the relative layout, sizing, and appearance of objects in the VR scene.

A description of things you are less sure you know how to do (as a group):

  • Coding in Unity
  • Graphics concepts
  • Synchronizing text, the VR scene, and audio.
  • Projecting images/videos in 360 degrees.

What are your first steps:

  • Literature survey of prior art and case studies on virtual reality used for social causes and story rendering.
  • Research and user-experience testing of the available VR headsets, evaluating them on features and fitness for the problem under consideration.
  • Shortlist a set of literary articles (fictional and journalistic) for which we would aim to create VR visualizations.
  • Frame functional requirements, design diagrams/sketches and user workflows. For example:
    • How do we tell where the user is positioned in the VR scene?
    • Should we project the text in VR like a subtitle, or should we use audio to narrate the scene in the background?
    • How should we transition from one scene to another as the user reads an article?
  • Create a prototype/proof of concept for a simple, short article.
  • Follow an agile model to evaluate, analyze, and incorporate changes/additions as the project progresses.
  • Attempt to leverage machine learning techniques to auto-generate visualizations. Start with simple articles, and tune the system accordingly.
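The workflow questions above (subtitle vs. narration, scene transitions) can be prototyped with a simple schedule that pairs each sentence with an estimated narration duration, so scene changes stay in sync with the audio. This is a naive stdlib sketch; the sentence splitter and the assumed reading rate of three words per second are placeholders we would tune during user testing.

```python
import re

def sentences(article):
    """Split an article into sentences (naive splitter, adequate for prototyping)."""
    return [s.strip() for s in re.split(r"(?<=[.!?])\s+", article) if s.strip()]

def scene_schedule(article, words_per_second=3.0):
    """Pair each sentence with an estimated narration duration (seconds),
    giving the VR renderer cue points for scene transitions."""
    schedule = []
    for s in sentences(article):
        duration = len(s.split()) / words_per_second
        schedule.append((s, round(duration, 2)))
    return schedule

cues = scene_schedule("The flood came at night. Families fled to higher ground.")
```

Each `(sentence, duration)` pair would drive one scene: the renderer shows the sentence (or plays its narration) for the estimated duration, then transitions.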