{"id":953,"date":"2016-12-18T22:17:29","date_gmt":"2016-12-19T04:17:29","guid":{"rendered":"http:\/\/blogs.discovery.wisc.edu\/vr2016\/?p=953"},"modified":"2016-12-18T22:19:09","modified_gmt":"2016-12-19T04:19:09","slug":"sixth-sense-due-1218-final-posting","status":"publish","type":"post","link":"https:\/\/blogs.discovery.wisc.edu\/vr2016\/2016\/12\/18\/sixth-sense-due-1218-final-posting\/","title":{"rendered":"Sixth Sense \u2013 Due 12\/18 \u2013 Final Posting"},"content":{"rendered":"<p><span style=\"font-weight: 400\"><em>Title<\/em>: Sixth Sense &#8211; Feel What You Read<\/span><\/p>\n<p><em><span style=\"font-weight: 400\">Team Members: Ameya Raul, Bixi Zhang, Shruthi Racha and Zhicheng Gu<\/span><\/em><\/p>\n<p><b>Motivation &#8211; <\/b><span style=\"font-weight: 400\">What is the goal of your project?<\/span><\/p>\n<ul>\n<li style=\"font-weight: 400\"><span style=\"font-weight: 400\">Charity drive messages which were earlier powerful and emotive now seem plain have lost their \u00a0intensity. <\/span><\/li>\n<li style=\"font-weight: 400\"><span style=\"font-weight: 400\">In this day of age, pictures aren&#8217;t as moving, and statistics aren&#8217;t as impactful since nearly all the shock value has faded away. The issue is that most of us are so greatly distanced both physically and emotionally from those who need help. <\/span><\/li>\n<li style=\"font-weight: 400\"><span style=\"font-weight: 400\">Donors are no longer as invested in philanthropic causes because non-profit organizations fail to create empathy, let alone an understanding of the problem. So what do you do now? Where does the future of fundraising lie for charities? The answer may lie in virtual reality.<\/span><\/li>\n<li style=\"font-weight: 400\"><span style=\"font-weight: 400\">A powerful means to evoke empathy lies in virtual reality. 
<\/span><\/li>\n<li style=\"font-weight: 400\"><span style=\"font-weight: 400\">Aim of the project &#8211; leverage virtual reality to bridge the emotional and physical disconnect between victims, non-profit organizations, and donors by visualizing articles about social causes in VR.<\/span><\/li>\n<li style=\"font-weight: 400\"><span style=\"font-weight: 400\">Journalism in 360 degrees : https:\/\/youtu.be\/BuGUQ6svJyg<\/span><\/li>\n<\/ul>\n<p><b>Part I<\/b><\/p>\n<p><span style=\"font-weight: 400\">In the first part, we display videos on social causes while people read an article. We tried the following two ways of displaying the text.<\/span><\/p>\n<ol>\n<li style=\"font-weight: 400\"><span style=\"font-weight: 400\">Display the text and video at the same time. Users see the text appear on top of the video. <\/span><\/li>\n<li style=\"font-weight: 400\"><span style=\"font-weight: 400\">Display text and video in alternating frames. Users first see a portion of the text on a black background, followed by the video segment corresponding to that text.<\/span><\/li>\n<\/ol>\n<p><b>Part II<\/b><\/p>\n<p><span style=\"font-weight: 400\">In the second part, we attempt to automatically display object animations based on the text content. We considered the following two approaches.<\/span><\/p>\n<ol>\n<li style=\"font-weight: 400\"><span style=\"font-weight: 400\">Automatic object creation and placement in the scene. This requires all object animations to exist beforehand, and the relative placement of objects is tricky.<\/span><\/li>\n<li style=\"font-weight: 400\"><span style=\"font-weight: 400\">Scene-to-phrase mapping. This involves mapping scenes in a video to each sentence in the text. 
It makes use of video tags for the mapping.<\/span><\/li>\n<\/ol>\n<p><b>Related Work and Literature Survey<\/b><\/p>\n<p><span style=\"font-weight: 400\">A literature survey of prior art and case studies on the use of virtual reality for social causes and story rendering.<\/span><\/p>\n<p><span style=\"font-weight: 400\">\u00a0a) <\/span><span style=\"font-weight: 400\"><a href=\"https:\/\/pdfs.semanticscholar.org\/1475\/9c2226d24d3926b91b375d6fcd85cf403813.pdf\">https:\/\/pdfs.semanticscholar.org\/1475\/9c2226d24d3926b91b375d6fcd85cf403813.pdf<\/a><\/span><\/p>\n<p>b) <a href=\"https:\/\/www.edsurge.com\/news\/2016-08-16-stanford-experiments-with-virtual-reality-social-emotional-learning-and-oculus-rift\">https:\/\/www.edsurge.com\/news\/2016-08-16-stanford-experiments-with-virtual-reality-social-emotional-learning-and-oculus-rift<\/a><\/p>\n<p><span style=\"font-weight: 400\">Research and user-experience testing of the various VR headsets available, evaluated on features and aptness for the problem under consideration.<\/span><\/p>\n<p><span style=\"font-weight: 400\">\u00a0 \u00a0a)\u00a0<\/span><a href=\"http:\/\/www.wareable.com\/headgear\/the-best-ar-and-vr-headsets\"><span style=\"font-weight: 400\">http:\/\/www.wareable.com\/headgear\/the-best-ar-and-vr-headsets<\/span><\/a><\/p>\n<p><span style=\"font-weight: 400\">\u00a0 \u00a0b)\u00a0<\/span><span style=\"font-weight: 400\"><a href=\"http:\/\/www.theverge.com\/a\/best-vr-headset-oculus-rift-samsung-gear-htc-vive-virtual-reality\">http:\/\/www.theverge.com\/a\/best-vr-headset-oculus-rift-samsung-gear-htc-vive-virtual-reality<\/a><\/span><\/p>\n<p><b>Contributions &#8211; <\/b><span style=\"font-weight: 400\">Describe each team-member\u2019s role as well as contributions to the project.<\/span><\/p>\n<p><b>Ameya Raul<\/b><\/p>\n<ul>\n<li style=\"font-weight: 400\"><span style=\"font-weight: 400\">Deciding on the scale and positioning of the two spheres for stereoscopic 
vision<\/span><\/li>\n<li style=\"font-weight: 400\"><span style=\"font-weight: 400\">Integration of Oculus with Unity<\/span><\/li>\n<li style=\"font-weight: 400\"><span style=\"font-weight: 400\">Development of code for multiple-object movement.<\/span><\/li>\n<li style=\"font-weight: 400\"><span style=\"font-weight: 400\">Design and development of the simulation for visualizing streaming text.<\/span><\/li>\n<\/ul>\n<p><b>Bixi Zhang<\/b><\/p>\n<ul>\n<li style=\"font-weight: 400\"><span style=\"font-weight: 400\">Trying various video formats and resolutions to determine the best combination to use in the Oculus.<\/span><\/li>\n<li style=\"font-weight: 400\"><span style=\"font-weight: 400\">Interposing of text and video frames<\/span><\/li>\n<li style=\"font-weight: 400\"><span style=\"font-weight: 400\">Design of various text fonts<\/span><\/li>\n<li style=\"font-weight: 400\"><span style=\"font-weight: 400\">Design of different text placement strategies and trying them in the Oculus. <\/span><\/li>\n<\/ul>\n<p><b>Shruthi Racha<\/b><\/p>\n<ul>\n<li style=\"font-weight: 400\"><span style=\"font-weight: 400\">Deciding on the scale and positioning of the two spheres for stereoscopic vision<\/span><\/li>\n<li style=\"font-weight: 400\"><span style=\"font-weight: 400\">Integration of Oculus with Unity<\/span><\/li>\n<li style=\"font-weight: 400\"><span style=\"font-weight: 400\">Development of code for multiple-object movement.<\/span><\/li>\n<li style=\"font-weight: 400\"><span style=\"font-weight: 400\">Design and development of the simulation for visualizing streaming text.<\/span><\/li>\n<\/ul>\n<p><b>Zhicheng Gu<\/b><\/p>\n<ul>\n<li style=\"font-weight: 400\"><span style=\"font-weight: 400\">Conversion of videos from one codec format to another using FFmpeg.<\/span><\/li>\n<li style=\"font-weight: 400\"><span style=\"font-weight: 400\">Extraction of audio from videos<\/span><\/li>\n<li style=\"font-weight: 400\"><span style=\"font-weight: 400\">Development of code 
for multiple-object movement.<\/span><\/li>\n<li style=\"font-weight: 400\"><span style=\"font-weight: 400\">Design and development of the simulation for visualizing streaming text.<\/span><\/li>\n<\/ul>\n<p><b>Outcomes<\/b><\/p>\n<ol>\n<li style=\"font-weight: 400\"><span style=\"font-weight: 400\">Describe the operation of your final project. What does it do and how does it work?<\/span><\/li>\n<\/ol>\n<p><span style=\"font-weight: 400\">Our project comprises two parts &#8211; The Social Aspect and The Automation. For each of these, we developed and compared two approaches.<\/span><\/p>\n<p><b><b>a) The Social Aspect<\/b><\/b><\/p>\n<p><span style=\"font-weight: 400\">Both of our approaches are aimed at figuring out the best user experience with respect to the coordination of text and video.<\/span><\/p>\n<p><img loading=\"lazy\" class=\"alignnone size-medium wp-image-954\" src=\"http:\/\/blogs.discovery.wisc.edu\/vr2016\/files\/2016\/12\/Screen-Shot-2016-12-18-at-10.10.40-PM-300x222.png\" alt=\"screen-shot-2016-12-18-at-10-10-40-pm\" width=\"300\" height=\"222\" srcset=\"https:\/\/blogs.discovery.wisc.edu\/vr2016\/files\/2016\/12\/Screen-Shot-2016-12-18-at-10.10.40-PM-300x222.png 300w, https:\/\/blogs.discovery.wisc.edu\/vr2016\/files\/2016\/12\/Screen-Shot-2016-12-18-at-10.10.40-PM-768x569.png 768w, https:\/\/blogs.discovery.wisc.edu\/vr2016\/files\/2016\/12\/Screen-Shot-2016-12-18-at-10.10.40-PM.png 920w\" sizes=\"(max-width: 300px) 100vw, 300px\" \/><\/p>\n<ul>\n<li><b>Approach 1: Display text as the video runs<\/b><\/li>\n<\/ul>\n<p><span style=\"font-weight: 400\">In this approach, we display text near the center of view while the video progresses. To account for the user turning around, the text is displayed at three angles: 0, 120, and 240 degrees. Text is displayed on top of a semi-transparent background to ensure easy readability. <\/span><\/p>\n<p><span style=\"font-weight: 400\">We experimented with moving the text to the bottom of the video (like a subtitle). 
However, because the video is projected on the inside of a sphere (to guarantee 360-degree vision), the text appears squished if placed anywhere other than near the center. We also attempted to move the text along with the user camera. Users found this particularly irritating because the text blocked out whatever they turned their head to look at.<\/span><\/p>\n<p><span style=\"font-weight: 400\">The major drawback is that the text blocks out some useful information in the video. The text also seems to shatter the presence felt in the virtual world.<\/span><\/p>\n<ul>\n<li><b><b>Approach 2: Display text on separate frames<\/b><\/b><\/li>\n<\/ul>\n<p><span style=\"font-weight: 400\">To overcome the drawbacks of Approach 1, we experimented with placing text intermittently within the video on separate frames. A small paragraph (2 &#8211; 3 sentences) is displayed in white on a black background, followed by the corresponding video segment.<\/span><\/p>\n<p><img loading=\"lazy\" class=\"alignnone size-medium wp-image-955\" src=\"http:\/\/blogs.discovery.wisc.edu\/vr2016\/files\/2016\/12\/Screen-Shot-2016-12-18-at-10.11.50-PM-238x300.png\" alt=\"screen-shot-2016-12-18-at-10-11-50-pm\" width=\"238\" height=\"300\" srcset=\"https:\/\/blogs.discovery.wisc.edu\/vr2016\/files\/2016\/12\/Screen-Shot-2016-12-18-at-10.11.50-PM-238x300.png 238w, https:\/\/blogs.discovery.wisc.edu\/vr2016\/files\/2016\/12\/Screen-Shot-2016-12-18-at-10.11.50-PM.png 598w\" sizes=\"(max-width: 238px) 100vw, 238px\" \/><\/p>\n<p><span style=\"font-weight: 400\">This experience fares better. Unlike the previous approach, where users were compelled to read text and look around at the same time, this approach allows the user to focus on only one thing at a time. 
Moreover, the user can focus on different aspects of the video, thereby increasing immersion.<\/span><\/p>\n<p><b><b>b) The Automation<\/b><\/b><\/p>\n<p>In this phase of our project, we attempt to automatically display object animations based on the text content.<\/p>\n<ul>\n<li><b><b>Auto-Animation<\/b><\/b><\/li>\n<\/ul>\n<p><span style=\"font-weight: 400\">In this approach, we attempt to display objects and apply animations to them based on the text. For example, if the text is \u201cThere is a tree. It is raining\u201d, we would display a tree and make it rain. The subset of objects and move functions is formed by parsing the input text and forming a sub-dictionary from the superset dictionary of pre-created objects, e.g. ball, tree and raining.<\/span><\/p>\n<p><img loading=\"lazy\" class=\"alignnone size-medium wp-image-956\" src=\"http:\/\/blogs.discovery.wisc.edu\/vr2016\/files\/2016\/12\/Screen-Shot-2016-12-18-at-10.13.15-PM-300x254.png\" alt=\"screen-shot-2016-12-18-at-10-13-15-pm\" width=\"300\" height=\"254\" srcset=\"https:\/\/blogs.discovery.wisc.edu\/vr2016\/files\/2016\/12\/Screen-Shot-2016-12-18-at-10.13.15-PM-300x254.png 300w, https:\/\/blogs.discovery.wisc.edu\/vr2016\/files\/2016\/12\/Screen-Shot-2016-12-18-at-10.13.15-PM.png 640w\" sizes=\"(max-width: 300px) 100vw, 300px\" \/><\/p>\n<p><span style=\"font-weight: 400\">We require that all objects are present in the inventory. When the application runs, Unity selects the appropriate object (using a dictionary of mappings) and displays it at the appropriate position. Requiring a library of all possible objects and animations limits scalability.<\/span><\/p>\n<p><span style=\"font-weight: 400\">It is difficult to control the relative placement and movement of objects. Moreover, the absence of environmental constraints such as gravity may limit the application to imaginary lands with magic (e.g. where balls can move up without coming back down). 
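As a rough sketch of the parsing step (in Python rather than Unity C#, and with an illustrative inventory &#8211; the object names, prefab names, and animation labels below are assumptions, not the project's actual asset list), building the sub-dictionary from the text might look like:

```python
import re

# Superset dictionary of pre-created objects and their animations.
# Entries are illustrative placeholders, not the project's real inventory.
INVENTORY = {
    "tree": {"prefab": "TreePrefab", "animation": None},
    "ball": {"prefab": "BallPrefab", "animation": "bounce"},
    "raining": {"prefab": "RainSystem", "animation": "rain_loop"},
}

def build_scene_dictionary(text):
    """Return the subset of inventory entries mentioned in the input text."""
    words = set(re.findall(r"[a-z]+", text.lower()))
    return {w: INVENTORY[w] for w in words if w in INVENTORY}

scene = build_scene_dictionary("There is a tree. It is raining")
# selects the entries for "tree" and "raining"; all other words are ignored
```

Unity would then instantiate each selected prefab and trigger its animation; the hard parts noted above (relative placement, physics constraints) begin after this lookup.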
<\/span><\/p>\n<ul>\n<li><b><b>Video Segment Mapping<\/b><\/b><\/li>\n<\/ul>\n<p><span style=\"font-weight: 400\">To overcome the above drawbacks, we shifted to a simpler, more scalable approach. This involves dynamically displaying different video segments according to the sentences in an article. It requires a collection of video segments, where each segment has a set of \u2018tags\u2019. A tag is a single-word description of an important aspect of a video segment. We then match each sentence in the text to a video segment by choosing the segment that has the most tags in common with the sentence. Finally, we stitch the segments together in the order of the text. <\/span><\/p>\n<p><span style=\"font-weight: 400\">We leverage Unity\u2019s 3D Text for displaying the sentence while the corresponding video plays. Text is displayed at the center of the video only at an angle of 0 degrees (in front of the user). Text is displayed in white, which contrasts with its surroundings, making it easier to read.<\/span><\/p>\n<p><img loading=\"lazy\" class=\"alignnone size-medium wp-image-957\" src=\"http:\/\/blogs.discovery.wisc.edu\/vr2016\/files\/2016\/12\/Screen-Shot-2016-12-18-at-10.14.04-PM-300x252.png\" alt=\"screen-shot-2016-12-18-at-10-14-04-pm\" width=\"300\" height=\"252\" srcset=\"https:\/\/blogs.discovery.wisc.edu\/vr2016\/files\/2016\/12\/Screen-Shot-2016-12-18-at-10.14.04-PM-300x252.png 300w, https:\/\/blogs.discovery.wisc.edu\/vr2016\/files\/2016\/12\/Screen-Shot-2016-12-18-at-10.14.04-PM.png 646w\" sizes=\"(max-width: 300px) 100vw, 300px\" \/><\/p>\n<p><span style=\"font-weight: 400\">For the scope of this problem, we restricted ourselves to a video called \u201cThe Source\u201d, which deals with water shortages in the desert. We segmented this video and manually assigned tags to each segment. We then created multiple articles on water shortages to observe how the experience changed each time. 
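A minimal sketch of this tag-matching step (Python for illustration; the segment file names and tags are hypothetical stand-ins for our manually tagged segments of \u201cThe Source\u201d, and the actual playback lives in Unity):

```python
import re

# Hypothetical tagged segments; in the project these were cut and
# tagged by hand from the video "The Source".
SEGMENTS = [
    {"file": "segment1.mp4", "tags": {"desert", "village", "walk"}},
    {"file": "segment2.mp4", "tags": {"borewell", "dig", "water"}},
    {"file": "segment3.mp4", "tags": {"water", "shortage", "dry"}},
]

def match_segments(article):
    """Pair each sentence of the article with the segment sharing the most tags."""
    playlist = []
    for sentence in re.split(r"(?<=[.!?])\s+", article.strip()):
        words = set(re.findall(r"[a-z]+", sentence.lower()))
        # Greedy choice: the segment with the largest tag overlap wins.
        best = max(SEGMENTS, key=lambda seg: len(seg["tags"] & words))
        playlist.append((sentence, best["file"]))
    return playlist
```

Stitching the chosen segments in sentence order then yields the final video; reordering the sentences of the article reorders the segments, which is how the same footage can tell an entirely different story.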
We were even able to create a completely opposite story, in which the borewell is dug first (and dries up), so people face a water shortage later on.<\/span><\/p>\n<p><span style=\"font-weight: 400\">We observed that the user experience was drastically better. Users were able to relate the sentences to the videos, and the fact that these videos were based on real incidents increased the sense of presence in the environment.<\/span><\/p>\n<p><span style=\"font-weight: 400\">2) How well did your project meet your original project description and goals?<\/span><\/p>\n<p><span style=\"font-weight: 400\">Throughout the course of our project we have stayed in line with our initial vision &#8211; visualization of a social article in VR and text-based automatic animation rendering. The latter was too ambitious to complete within the scope of this project; however, we are content with the <\/span><span style=\"font-weight: 400\">approaches we were able to explore and the extent to which we could convert \u201ctext-based automatic animation rendering\u201d into a working proof of concept. Given the opportunity, we would love to experiment further by introducing machine learning algorithms such as artificial neural networks to automatically map text to objects by leveraging a dataset (e.g. ImageNet). 
Figure [5] highlights the goals we achieved over the course of the project.<\/span><\/p>\n<p><img loading=\"lazy\" class=\"alignnone size-medium wp-image-959\" src=\"http:\/\/blogs.discovery.wisc.edu\/vr2016\/files\/2016\/12\/Screen-Shot-2016-12-18-at-10.15.46-PM-300x137.png\" alt=\"screen-shot-2016-12-18-at-10-15-46-pm\" width=\"300\" height=\"137\" srcset=\"https:\/\/blogs.discovery.wisc.edu\/vr2016\/files\/2016\/12\/Screen-Shot-2016-12-18-at-10.15.46-PM-300x137.png 300w, https:\/\/blogs.discovery.wisc.edu\/vr2016\/files\/2016\/12\/Screen-Shot-2016-12-18-at-10.15.46-PM-768x351.png 768w, https:\/\/blogs.discovery.wisc.edu\/vr2016\/files\/2016\/12\/Screen-Shot-2016-12-18-at-10.15.46-PM-1024x468.png 1024w, https:\/\/blogs.discovery.wisc.edu\/vr2016\/files\/2016\/12\/Screen-Shot-2016-12-18-at-10.15.46-PM.png 1282w\" sizes=\"(max-width: 300px) 100vw, 300px\" \/><\/p>\n<p><span style=\"font-weight: 400\">3) As a team, describe what are your feelings about your project? Are you happy, content, frustrated, etc.?<\/span><\/p>\n<p><span style=\"font-weight: 400\">As a team, we feel happy and satisfied with our progress over the course of the project. We feel we have been able to leverage each team member\u2019s skill set to the maximum in various aspects of the project, ranging from design and programming to time management and team dynamics. We feel more confident when dealing with Unity issues. We all enjoyed playing with the Oculus Rift. Overall, we feel that the project was a great success and one of the salient features of the class.<\/span><\/p>\n<p><b>Problems encountered<\/b><\/p>\n<p><span style=\"font-weight: 400\">We encountered a myriad of issues pertaining either to Unity or to the logic of our project. The following is a brief overview of the significant ones.<\/span><\/p>\n<p><b>Part I<\/b><\/p>\n<ul>\n<li><b><b>Positioning of text within a video<\/b><span style=\"font-weight: 400\"> : This was a design question: which positioning yields the best user experience? 
Moreover, positioning text away from the center resulted in distortions, and positioning was important for increasing presence.<\/span><\/b><\/li>\n<\/ul>\n<ul>\n<li><b>Importing Videos in Unity<\/b><span style=\"font-weight: 400\"> : Adding videos to the asset folder took a large amount of time (e.g. 20 minutes for a 2-minute 240p video), which restricted us to short, low-resolution videos. <\/span><\/li>\n<\/ul>\n<ul>\n<li><b>Ensuring Readability:<\/b><span style=\"font-weight: 400\"> Setting an appropriate text size and text-box width was important to ensure readability in low-resolution videos.<\/span><\/li>\n<\/ul>\n<ul>\n<li><b>Resolution of the video<\/b><span style=\"font-weight: 400\">: We used the highest-resolution video we could obtain, but it still does not look good in the Oculus. This is probably because Unity\u2019s import process, or our editing, caused some loss of quality.<\/span><\/li>\n<\/ul>\n<p><b>Part II<\/b><\/p>\n<ul>\n<li><b><b>Relative Positioning of Objects: <\/b><span style=\"font-weight: 400\">It was difficult to determine where auto-generated objects should be positioned relative to each other in an environment. Auto-generating terrain and detailed scenery was far beyond our scope, as all objects needed to be created beforehand.<\/span><\/b><\/li>\n<\/ul>\n<ul>\n<li><b>Relative Movement of Objects: <\/b><span style=\"font-weight: 400\">Capturing all kinds of object motion was a difficult task in itself, but capturing the interactions between the movements of different objects (e.g. when two objects collide, do they bounce back or do they explode?) was trickier. Moreover, dynamically adding constraints (based on the text, e.g. 
brooms can fly if you are in a Harry Potter-based VR environment) to restrict object movements in a virtual environment is a difficult problem.<\/span><\/li>\n<\/ul>\n<ul>\n<li><b>Gathering Video Segments: <\/b><span style=\"font-weight: 400\">Creating a set of video segments that would pertain to all the different combinations of text is difficult.<\/span><\/li>\n<\/ul>\n<p><b>Next Steps &#8211; <\/b><span style=\"font-weight: 400\">We envision the following next steps for our project: <\/span><\/p>\n<p><b>Part I<\/b><\/p>\n<ul>\n<li style=\"font-weight: 400\"><span style=\"font-weight: 400\">Move the text with the camera, and allow the user to zoom in\/out or move the text using the controller.<\/span><\/li>\n<li style=\"font-weight: 400\"><span style=\"font-weight: 400\">Figure out a way to bypass Unity\u2019s import bottleneck so we can use higher-resolution videos for a better reading experience.<\/span><\/li>\n<li style=\"font-weight: 400\"><span style=\"font-weight: 400\">Experiment with superimposing 3D text on top of videos. <\/span><\/li>\n<li style=\"font-weight: 400\"><span style=\"font-weight: 400\">Allow the user to move the text with a separate controller.<\/span><\/li>\n<\/ul>\n<p><b>Part II<\/b><\/p>\n<ul>\n<li style=\"font-weight: 400\"><span style=\"font-weight: 400\">Creation of multiple objects and corresponding move functions that co-exist in a scene.<\/span><\/li>\n<li style=\"font-weight: 400\"><span style=\"font-weight: 400\">Using machine learning and deep learning for automatic visualization of articles, rather than the current static approach.<\/span><\/li>\n<li style=\"font-weight: 400\"><span style=\"font-weight: 400\">For the experiment, we tailored our own video segments and tags; however, obtaining such a set from YouTube and experimenting with it may be more realistic.<\/span><\/li>\n<\/ul>\n<p><b>Video of the Project in Action<\/b><\/p>\n<p><span style=\"font-weight: 400\">We present the following four 
videos, each demonstrating an approach for a part of the project.<\/span><\/p>\n<p><a href=\"https:\/\/drive.google.com\/open?id=0B2lCWzhongHOSERteU1SVGJJTGM\"><span style=\"font-weight: 400\">https:\/\/drive.google.com\/open?id=0B2lCWzhongHOSERteU1SVGJJTGM<\/span><\/a><\/p>\n<p><a href=\"https:\/\/drive.google.com\/open?id=0B2lCWzhongHOYWVzUmI5S1k1Wnc\"><span style=\"font-weight: 400\">https:\/\/drive.google.com\/open?id=0B2lCWzhongHOYWVzUmI5S1k1Wnc<\/span><\/a><\/p>\n<p><a href=\"https:\/\/drive.google.com\/open?id=0B2lCWzhongHORGRDZXNoWm5lbUE\"><span style=\"font-weight: 400\">https:\/\/drive.google.com\/open?id=0B2lCWzhongHORGRDZXNoWm5lbUE<\/span><\/a><\/p>\n<p><a href=\"https:\/\/drive.google.com\/open?id=0B2lCWzhongHOLXhudS1BejZKNjQ\"><span style=\"font-weight: 400\">https:\/\/drive.google.com\/open?id=0B2lCWzhongHOLXhudS1BejZKNjQ<\/span><\/a><\/p>\n<p>&nbsp;<\/p>\n<p>&nbsp;<\/p>\n","protected":false},"excerpt":{"rendered":"<p>Title: Sixth Sense &#8211; Feel What You Read Team Members: Ameya Raul, Bixi Zhang, Shruthi Racha and Zhicheng Gu Motivation &#8211; What is the goal of your project? Charity drive messages which were earlier powerful and emotive now seem plain have lost their \u00a0intensity. 
In this day of age, pictures aren&#8217;t as moving, and statistics [&hellip;]<\/p>\n","protected":false},"author":175,"featured_media":0,"comment_status":"closed","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":[],"categories":[42],"tags":[],"_links":{"self":[{"href":"https:\/\/blogs.discovery.wisc.edu\/vr2016\/wp-json\/wp\/v2\/posts\/953"}],"collection":[{"href":"https:\/\/blogs.discovery.wisc.edu\/vr2016\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/blogs.discovery.wisc.edu\/vr2016\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/blogs.discovery.wisc.edu\/vr2016\/wp-json\/wp\/v2\/users\/175"}],"replies":[{"embeddable":true,"href":"https:\/\/blogs.discovery.wisc.edu\/vr2016\/wp-json\/wp\/v2\/comments?post=953"}],"version-history":[{"count":3,"href":"https:\/\/blogs.discovery.wisc.edu\/vr2016\/wp-json\/wp\/v2\/posts\/953\/revisions"}],"predecessor-version":[{"id":962,"href":"https:\/\/blogs.discovery.wisc.edu\/vr2016\/wp-json\/wp\/v2\/posts\/953\/revisions\/962"}],"wp:attachment":[{"href":"https:\/\/blogs.discovery.wisc.edu\/vr2016\/wp-json\/wp\/v2\/media?parent=953"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/blogs.discovery.wisc.edu\/vr2016\/wp-json\/wp\/v2\/categories?post=953"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/blogs.discovery.wisc.edu\/vr2016\/wp-json\/wp\/v2\/tags?post=953"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}