Music Visual Final Post

1. For our project we developed a music visualizer set to the song ‘Hack’ by Sam Armus. The experience is immersive, taking advantage of virtual reality to surround the user with the visualizations. The program combines recorded video with a basic spectrum analyzer driven by any music file the user chooses. The video files are played using the video object code provided by Kevin, the music analysis is done using FMOD, and the visuals are generated from the spectrum data and drawn with OpenGL. The rest of the program is based on the default Discover system code.

The project uses the Discover system’s design to give the visualization a 180-degree viewing field. In the center of the screen, columns of squares represent the different frequencies: the longer a column, the more prominent that frequency.
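The core idea of mapping a spectrum to column heights can be sketched as follows. This is an illustration in Python with NumPy, not the project’s actual FMOD/OpenGL code; the window size, sample rate, and number of columns are assumptions for the example.

```python
import numpy as np

def spectrum_columns(samples, num_columns=16):
    """Reduce one window of audio samples to per-column heights in [0, 1].

    Each column aggregates a contiguous band of FFT bins; a taller
    column means more energy at those frequencies.
    """
    # Magnitude spectrum of the window (real FFT keeps positive frequencies).
    magnitudes = np.abs(np.fft.rfft(samples))
    # Split the bins into num_columns contiguous bands and average each.
    bands = np.array_split(magnitudes, num_columns)
    heights = np.array([band.mean() for band in bands])
    # Normalize so the loudest band has height 1 (guard against silence).
    peak = heights.max()
    return heights / peak if peak > 0 else heights

# Example: a pure 440 Hz tone concentrates its energy in the lowest band.
t = np.arange(2048) / 44100
heights = spectrum_columns(np.sin(2 * np.pi * 440 * t))
```

In the actual program, FMOD supplies the per-frame spectrum array and each height would set the length of one column of squares drawn in OpenGL.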

In addition to the spectrum, we created a custom video for the track we selected, making this project specific to a certain song. The video is a compilation of live footage from shows at Moogfest 2014. We played around with the coloring of the video, inverting the colors at certain points to create a more immersive environment for the user.

2. Simon’s role on the project was to code the program, including the graphical elements. This involved learning the basics of OpenGL and adapting some FMOD tutorial code to work with the rest of our program. He also helped come up with concepts for what the program would do near the start of the project.

Tim mainly dealt with finding different aspects of the song to analyze. He found a potential program to use, and while it offered a lot of options, in the end it wasn’t exactly what we were looking for. Other than this, he helped brainstorm ideas for the visualization at the beginning of the project.

Chelsi shot live footage and compiled it into the background video for the visualizer. The video was created in iMovie, using its filters and transitions to create something that captured the feeling we were trying to get across. The video is approximately 7:30 long and runs separately from the spectrum and music tracking; it loops at the end, so the user can experience the visualizer for as long as they’d like.

3. We are happy with how the project turned out. Many elements of its design proved more complicated than we had hoped, but we had planned for this and had simpler ideas to fall back on. OpenGL and the music analysis were the trickiest aspects to implement. The end result, we think, was not overly complex in design but is still effective in achieving our original goal: a cool virtual reality music visualizer.

In addition, we think it’s great that we got to incorporate multiple elements within the visualizer: the song, the spectrum, and the video. We are extremely happy with how the video turned out, because that was something we were very unsure of at the start. Overall, the final product exceeded our expectations and we are proud of the end result.

4. Learning to code with OpenGL was tricky at first, especially because it was hard to test the code at home. We couldn’t find a good C++ development environment for our home computers, and even with one that works, the code has to be tested in the Discover system itself to know whether it is effective. The music analysis also took a long time to figure out, but we found an effective method by the end of the project.

We also had a very difficult time trying to understand Sonic Visualiser, the program we originally planned to use to analyze the music. We think it is designed for more than what we needed; it has a lot of really interesting features that could have been implemented, but it is aimed at someone with more background in this area, and FMOD worked very well as a replacement.

5. The final project did not have as much interactivity as the initial idea, but otherwise we think it remained relatively faithful. We did not have enough time or knowledge to implement music that changes as the user interacts with the virtual scene; without the project files for the music we used, this would have been even more difficult.

One addition we would have liked is letting the user move around in the space, with different positions making different parts of the song more prominent. We also would have liked a greater variety of shapes within the spectrum, but we are happy to at least have color responses within it.

We also had some trouble at the beginning figuring out how to start: none of us had ever worked on a project like this, and we were hesitant to begin without knowing much about the process. In the end we were all able to find a part in the project and put everything together successfully.

6. With more time, we think making the environment more interactive would be exciting. Additionally, building more types of visualizations and including more kinds of recorded video and music would give the program much more variety. We would also have liked to use some of the information we could get from Sonic Visualiser, but that would have changed the way the visualizer was set up significantly. Another plan would be adding more interactivity, with movement driving changes in the audio; some change could also happen as a result of a gesture or hand wave.

Overall, having more time to pay attention to details and perfect the visualizer would have been beneficial. We structured our time around building a simple version first and adapting it as time allowed, so there is still room to improve even after our final presentation.

Course evals

As our class is not overly large, it is extremely important that everyone finishes their course evaluations. We will count this as an assignment, so you will get some credit for completing it. Once you have finished your course evaluation, leave a comment on this post that says “complete” and I can mark you off the list.

Assignment 2: Project Brain Storming

For class 2/19

Brainstorm some project ideas. These need not be refined at this point. Don’t worry about knowing how to accomplish your goal; just try to think of projects that you would be interested in pursuing.

For class, bring in an image or video and describe to the class what you would like to do. Be prepared to answer questions. We will refine these ideas for the next assignment.

Ultimate Presence

Of all the technologies Sterling mentions in the article, I think GPS is the most influential now and will remain so going forward, as it allows a close tethering between the virtual, the augmented, and the real.

The way the article ended merged the sense of wonder these technologies bring with a dramatic reminder that they could be used in ways we might not like.

The article mentions that it would be wonderful if computers could help us understand things like non-uniform fields. At DoIT, the Academic Technology Media Learning Lab has created such a thing: a particle golf game which helps students learn about thermodynamic states.

As for the separation of presence and immersion, I agree that it’s important to distinguish the technical specifications from how humans actually perceive them, since, as the article mentions, there are many opportunities to take advantage of sensory metamers to improve the experience of virtual reality with the same computational power.

The accuracy of presence in a non-real environment might not be good, but if we can’t tell, do we care?

I was curious about the nature of the debate over defining these terms before this article appeared, and whether this 2003 article resolved it. In 1992 Slater wrote An experimental exploration of presence, followed by more than 40 articles with presence or immersion in their titles before he wrote ‘A note on presence technology’. While I can’t find citation numbers for that article, he has almost 12,000 citations in his career, so I think it’s safe to say he was a well-regarded expert on virtual reality. I wasn’t otherwise able to answer my questions within the scope of this assignment. Mel Slater has written a blog about presence at http://presence-thoughts.blogspot.com/, so he has clearly remained interested in the topic.

Article: “SAD? Virtual Reality therapy can help”

http://articles.economictimes.indiatimes.com/2007-06-27/news/27675858_1_phobia-classroom-condition

Video: “A virtual reality dialogue system for the treatment of social phobia”

This article particularly interested me because it addresses virtual reality as a creative way to treat health-related conditions. The US-based company Illumenta introduced a Virtual Reality Exposure Therapy (VRET) system that recreates the situations a patient fears most, geared mainly toward individuals diagnosed with Social Anxiety Disorder (SAD). Say, for instance, you have a fear of public speaking: VRET simulates a situation in which you are giving a speech to a group of people, so you essentially practice your fear. With use over a period of time, VRET aims to gradually treat any type of phobia, fear, or addiction. “We expose them repeatedly to stimulus they fear. Maybe they will have to speak in an auditorium or face interview sessions or be in a conference room,” says Dr Mehta. In a world where pharmaceutical companies try to treat every symptom with medication, it excites me to see other mediums of treatment being used, especially with technology.

Virtuix Omni and the Oculus Rift

Video: http://www.youtube.com/watch?v=dP48cLFeBms

I have been following these two products for a while now. In my opinion, these two devices together can create a fully immersive experience for gamers. Imagine being immersed in a game we are playing, running around and jumping, without requiring a lot of space. This could create a new form of exercise for gamers and non-gamers alike.

Inception

For this assignment, I watched the movie Inception (which had been discussed in class but I had never seen). You can read about the movie at http://en.wikipedia.org/wiki/Inception or at http://www.imdb.com/title/tt1375666/?ref_=nv_sr_1 where you’ll also find trailers.

This movie places a bank heist inside a philosophy question: what is reality and what are dreams? Other movies, such as The Matrix, have also explored this realm (though rarely in such an eerie and unsettling way), as have books such as Tad Williams’ City of Golden Shadow. The closer we get to convincing virtual reality, the deeper we go into that uncanny valley, and the more nervous we get about whether we’ll be able to tell the difference, and even if we can, whether we’ll care.

Sadly, people who have derealization disorder (http://en.wikipedia.org/wiki/Derealization) take these fictions to be their reality: that reality is a fiction.

Data Visualization in VR

As far as data visualization in virtual reality goes, the New York Times’ Cascade system is one of the best examples I could think of. There’s a video on their site that provides a good overview; in short, the system tracks and models the spread of NYT articles as they’re shared across social media and other sites. For popular articles, this creates an immense amount of data. At the NYT R&D Lab, this project lives on a multi-screen TV wall, perhaps the only way to effectively analyze all of its data at once. While not strictly virtual reality, I feel that big data tools like this one are perfect for VR environments; the large screen size and high resolution that VR offers is the best way to present this much data at once.

Also on the NYT labs site, there are more interesting projects, like an augmented-reality mirror. Projects like this are simple enough that I wouldn’t be surprised to see them become commonplace within the next few years.

What comes after Google Glass?

iOptik Contact Lens

Innovega is designing contact lenses that could allow its users to have the same experiences offered by Google Glass, the Oculus Rift, an IMAX movie theater, and more. Unfortunately, you would still need to wear some sort of glasses in addition to the lenses. These contacts allow our eyes to focus on a screen that is only half an inch away, which is the main problem Google Glass and the Oculus Rift have to overcome. By overcoming this limitation, Innovega can make glasses with any sort of screen in them. They currently have prototypes with glanceable displays, like Google Glass, but with much higher resolution and more screen space. A full-view transparent display can also be projected onto the glasses to give a fully immersive augmented reality.

I’m interested in this because there seem to be few limitations on the type of hardware that can be made to work with the lenses. The contacts allow you to focus on images close up without much additional work on the screen or projector providing the image, and you can keep the close-up image and the rest of the world in focus at the same time. Only simple hardware is needed to project images for augmented reality. I think a logical next step is to adapt this for viewing movies or as a virtual reality system, which would provide a cheaper and smaller version of the Oculus Rift. Not to mention that the glasses you would wear in public for augmented reality would actually look like normal sunglasses.

http://news.cnet.com/8301-11386_3-57616459-76/augmented-reality-contact-lenses-to-be-human-ready-at-ces/
http://spectrum.ieee.org/tech-talk/consumer-electronics/audiovideo/innovega-delivers-the-wearable-displays-that-science-fiction-promised
http://www.geekwire.com/2014/beyond-google-glass-innovega-shows-new-augmented-reality-prototype-ces/

Oculus Rift is coming into focus

I am very optimistic about the possibilities of this device in the world of video games. I believe that it could help usher in a new, innovative level of immersion in games, and it could be a major player in the future.

Also, with such a quality VR system made, hopefully at a relatively fair price point, what opportunities are there beyond gaming? How could this device change the way we perform research, especially research on the human brain?

The article I have linked to is a review of the Rift from CES 2014, and according to the writer, the Oculus Rift is continuing to improve and get closer to being released.

Link to original article: http://www.techradar.com/us/reviews/gaming/gaming-accessories/oculus-rift-1123963/review

Video Interview from CES 2014: http://www.youtube.com/watch?v=7hyeUkB44IM