Music Visual Final Post

1. For our project we developed a music visualizer set to the song ‘Hack’ by Sam Armus. The experience is immersive, taking advantage of virtual reality to surround the user with the visualizations. The program combines recorded video with a basic spectrum analyzer driven by any music file the user chooses. The video files are played using the video object code provided by Kevin, the music analysis is done using FMOD, and the visuals are generated from the spectrum data and drawn with OpenGL. The rest of the program is based on the default Discover system code.
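
For anyone curious about the analysis step, here is a minimal sketch of the kind of FMOD call involved, assuming the FMOD Ex API and its built-in getSpectrum; the file name, band count, and other constants are placeholders rather than values from our actual program.

```cpp
#include <fmod.hpp>

int main() {
    FMOD::System  *system  = 0;
    FMOD::Sound   *sound   = 0;
    FMOD::Channel *channel = 0;

    // Start FMOD and load a track (file name is a placeholder).
    FMOD::System_Create(&system);
    system->init(32, FMOD_INIT_NORMAL, 0);
    system->createSound("track.mp3", FMOD_SOFTWARE, 0, &sound);
    system->playSound(FMOD_CHANNEL_FREE, sound, false, &channel);

    // Once per render frame: pull the current spectrum. Each entry is the
    // magnitude of one frequency band, which drives the visuals.
    float spectrum[512];
    system->update();
    channel->getSpectrum(spectrum, 512, 0, FMOD_DSP_FFT_WINDOW_HANNING);

    sound->release();
    system->release();
    return 0;
}
```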

The project takes advantage of the Discover system’s design to create a 180-degree viewing field for the visualization. In the center of the screen, columns of squares represent the different frequencies: the longer a column, the more prominent that frequency.
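
A rough sketch of that drawing idea in immediate-mode OpenGL is below; drawSpectrum is a hypothetical helper, and the square size, spacing, and column scaling are illustrative guesses, not the exact values from our code.

```cpp
#include <GL/gl.h>

// Draw one column of stacked squares per frequency band. 'spectrum' holds
// normalized band magnitudes in [0, 1], e.g. from FMOD's getSpectrum.
void drawSpectrum(const float *spectrum, int numBands) {
    const float size = 0.05f;  // side length of each square (assumed)
    const float gap  = 0.01f;  // spacing between squares (assumed)

    for (int band = 0; band < numBands; ++band) {
        float x = (band - numBands / 2) * (size + gap); // center the columns
        int squares = (int)(spectrum[band] * 20);       // louder band = taller column

        for (int i = 0; i < squares; ++i) {
            float y = i * (size + gap);
            glBegin(GL_QUADS);
            glVertex3f(x,        y,        -2.0f);
            glVertex3f(x + size, y,        -2.0f);
            glVertex3f(x + size, y + size, -2.0f);
            glVertex3f(x,        y + size, -2.0f);
            glEnd();
        }
    }
}
```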

In addition to the spectrum, we created a custom video for the track we selected, making this project specific to a particular song. The video is a compilation of live footage from shows at Moogfest 2014. We played around with the coloring of the video, inverting the color spectrum at certain points to create a more immersive environment for the user.

2. Simon’s role on the project was to code the program, including the graphical elements. This involved learning the basics of OpenGL and adapting some FMOD tutorial code to work with the rest of our program. He also helped come up with concepts for what the program would do near the start of the project.

Tim mainly dealt with trying to find different aspects of the song to analyze. He found a candidate program to use, and while it offered a lot of options, in the end it wasn’t exactly what we were looking for. Other than this, he helped brainstorm ideas for the visualization at the beginning of the project.

Chelsi shot live footage and compiled it into the background video for the visualizer. The video was created in iMovie, using the filters and transitions within the program to create something that captured the feeling we were trying to get across. The video is approximately 7:30 long and runs independently of the spectrum and music tracking. It loops at the end, letting the user experience the visualizer for as long as they’d like.

3. We are happy with how the project turned out. Many elements of its design proved more complicated than we had hoped, but we had planned for this and had simpler ideas to fall back on. OpenGL and the music analysis were the trickiest aspects to implement successfully. We think the end result is not overly complicated in design but is still effective in achieving our original goal: to make a cool virtual reality music visualizer.

In addition, we think it’s great that we got to incorporate multiple elements in the visualizer: the song, the spectrum, and the video. We are extremely happy with how the video turned out, because that was something we were very unsure of at the start. Overall the final product exceeded our expectations and we are proud of the end result.

4. Learning to code with OpenGL was tricky at first, especially because it was hard to test the code at home. We couldn’t find a good C++ development environment for our home computers, and even with one that worked, we needed to test the code in the Discover system itself to know whether it was effective. The music analysis also took a long time to figure out, but we found an effective method by the end of the project.

We also had a very difficult time trying to understand Sonic Visualiser, the program we were originally going to use to analyze the music. We think it is designed for something beyond what we needed; it has a lot of interesting features that could have been implemented, but it seems aimed at someone with more background in this area. FMOD worked very well as a replacement.

5. The final project did not have as much interactivity as the initial idea, but otherwise we think it remained relatively faithful. We did not have enough time or knowledge to implement music that changes as the user interacts with the virtual scene; additionally, without the project files for the music we used, this would have been even more difficult.

One addition we would have liked is letting the user move around the space, with different locations making different parts of the song more prominent. We also would have liked a greater variety of shapes within the spectrum, but we are happy to at least have color responses within it.

We also had some trouble at the beginning figuring out how to start: none of us had ever worked on a project like this, and we were hesitant to begin without knowing much about the process. In the end we were all able to find a part in the project and put everything together successfully.

6. With more time, we think making the environment more interactive would be exciting, for example having the visuals or audio change in response to the user’s movement, a gesture, or a hand wave. Additionally, building more types of visualizations and including more kinds of recorded video and music would give the program much more variety. We would also have liked to implement some of the information we could get from Sonic Visualiser, but that would change the way the visualizer is set up significantly.

Overall, more time to pay attention to details and polish the visualizer would have been beneficial. We structured our schedule around building a simple version first and extending it as time allowed, so there is still room to improve even after our final presentation.

Music Visual

Graphics on Screen

This week:

Simon: Mostly I researched how best to implement timing for beat detection and similar features. I also did some more testing on what we already have in place. There are still some issues that I’m not sure how to fix, like the 3D and the keyboard input, but I think the plan we have in place is on the right track.
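
For reference, the simplest timing scheme that kept coming up in my research compares each frame’s overall sound energy to a short running average and treats a sharp spike as a beat. The sketch below is that textbook approach with guessed constants; it is not code from our project.

```cpp
#include <deque>
#include <numeric>

// Returns true when the current frame's spectrum energy jumps well above
// the recent average. History length and threshold are rough guesses.
bool detectBeat(const float *spectrum, int numBands, std::deque<float> &history) {
    float energy = 0.0f;
    for (int i = 0; i < numBands; ++i)
        energy += spectrum[i] * spectrum[i];

    float avg = history.empty()
        ? energy
        : std::accumulate(history.begin(), history.end(), 0.0f) / history.size();

    history.push_back(energy);
    if (history.size() > 43)     // keep roughly one second of frames (assumed rate)
        history.pop_front();

    return energy > 1.4f * avg;  // 1.4 is a common starting threshold
}
```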

Tim: I am still trying to figure out what to analyze, i.e., how to extract genuinely useful information from the song to visualize.

Chelsi: I am working on rendering video for the background, and attempting to become familiar with OpenGL and find code that lets us render real-time video. I’ll be at a music festival next week, where I plan to get actual footage (lasers, lights, etc.) that will fit cohesively with what we are building.

Overall we have the sphere moving and the ability to change its shapes and color. We are weighing the best way to analyze the music (BPM, frequency, etc.) for simplicity and effectiveness. Simon has prepared the sphere to accept music data, so it is ready to adapt accordingly.

By next week we hope to have decided on the music analysis. We are pretty much on track with our first plan and essentially halfway done.

Music Visuals Updates

This week:

Simon – I attached all the colors in the sphere to an array, in this case initialized to random values. The values in the array can be changed in groups, so, for example, they could change in patterns with the music. I’d like to do testing in the actual lab so I can get to work on the more complicated parts.
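
A simplified sketch of the array idea, with made-up sizes (initColors and setGroupColor are hypothetical helper names, not the ones in our code):

```cpp
#include <cstdlib>

const int NUM_VERTS  = 600;  // total colored vertices on the sphere (assumed)
const int GROUP_SIZE = 60;   // 10 groups, e.g. one per frequency band (assumed)
float colors[NUM_VERTS][3];

// Initialize every color to a random RGB value, as in the current build.
void initColors() {
    for (int v = 0; v < NUM_VERTS; ++v)
        for (int c = 0; c < 3; ++c)
            colors[v][c] = rand() / (float)RAND_MAX;
}

// Re-tint one whole group at once; later, r/g/b could follow the music.
void setGroupColor(int group, float r, float g, float b) {
    for (int v = group * GROUP_SIZE; v < (group + 1) * GROUP_SIZE; ++v) {
        colors[v][0] = r;
        colors[v][1] = g;
        colors[v][2] = b;
    }
}
```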

Tim – This week I worked on familiarizing myself with music analysis and the different features of the program. By next week I hope to start finding different options for analyzing the song we selected.

Chelsi – I researched geometric inspiration now that we have a set shape (videos below). I also contacted the artist about getting the individual track pieces for easier analysis and should receive them over the weekend. I downloaded a sound analyzer and started working with it; I have a few questions about how this will translate into OpenGL. I also briefly looked into using real video footage in OpenGL, and it looks like rendering it is a possibility (example: https://www.youtube.com/watch?v=2AVh1x-Uqjs).

Visualizers created in OpenGL:

Simple Geometric visualization:
https://vimeo.com/47085682

Another based on 3D triangles:
https://vimeo.com/90972800

This one offers a way to include movement:
https://vimeo.com/67248097

We are a bit behind, but hope to catch up within the next week. By next Friday we hope to have the sphere moving and our approach to the music analysis figured out.

Music Visual Team

Name is TBD

We are planning to create a semi-interactive 3D music video. We will have a specific song that the visuals play to, and ideally have it loop multiple times. As of now we are hoping to build a series of shapes, displayed on a black/white/neutral background, that adapt according to the music. A possibility we are also discussing is having the music adjust according to the user’s movements.

General inspiration video: SYNESTHESIA

We have some experience in music production, which should enable us to adjust our sound accordingly. We also know general programming, which will make the starting process easier, and we have a background in design, which may make creating the patterns easier.

All of us are pretty unfamiliar with making 3D objects and using the CAVE and Rift systems.

UPDATE for 3/28

The song has been decided and can be found here: https://soundcloud.com/samarmus/hack-original-mix-soundcloud

Simon went through the tutorials for OpenGL and feels confident about working with it, but has yet to experiment. We are wondering whether we will need to use OpenAL for audio. We plan to experiment with the program on Friday (3/28).

Timeline:

Fri 4/2 – Have basic graphics introduced

Fri 4/9 – Music analysis programmed

Fri 4/16 – Have the basic model completed, following weeks will be spent adding on any effects

Ultimate Presence

I believe the basic functions that made the computer user friendly were the most significant: typing, the mouse, etc. Before these, computers could only be seen as mathematical machines for professionals. Adding more user-friendly features gives the general public access and helps them understand how the machine works.

Slater raises a good point that there should be a better general understanding of virtual reality and how to describe the phenomenon of presence. I don’t think this will happen for the general population for at least 10 or so years. Relating back to the previous question, virtual reality is not something everyone is experiencing, at least not to the point of conceptually understanding it. Once the use of VR becomes more common, the general public will no longer reflect on what exactly is going on.

Sutherland’s ending didn’t even strike me as ‘menacing’ until I read it a second time. He isn’t too far off; with the development of video games and virtual reality, you can almost completely immerse yourself in a game. Although fatality and true presence have not been developed yet, I wouldn’t be surprised if they were to come in 20+ years.

I think the fascinating part of Sutherland’s article is that all of these things are still used today, and you can get pretty much all of them in the palm of your hand. Phones now serve as computers where you can take and edit photos, type to others, make notes for yourself, and play games sans joysticks and controllers, combining all aspects of a computer into one. If this is what exists for us today, what is to come in the next 50 years, or better yet, what predictions like these are being made now?

3D Graphics in Music Production

Flying Lotus ‘Layer 3’

Flying Lotus ‘Layer 3’ – A Red Bull Music Academy Film from Red Bull Music Academy on Vimeo.

As I’ve mentioned, my interests have drifted away from fashion and toward music and event production. I spend a lot of time following electronic music artists, and stage setup is always something that sticks out. A lot of musicians have light shows, projection mapping, and other visuals, but the artist Flying Lotus raised the bar by developing this ‘Layer 3’ projection project. As you can see in the video, he uses three transparent layers to project something different on each, while they cohesively come together to complement his musical production.

I had never seen anything like this before finding this video, and it really is a spectacular way to bring more to the audience, allowing them to be completely immersed in the show. The project isn’t too invasive, as it doesn’t require viewers to wear 3D glasses or put themselves into a secluded environment like the CAVE. Although this takes a different approach from what we typically think of when we hear ‘virtual reality’, I think it is just a peek into what is to come for the future of music production.

Giving Music Geometry

Another example of creating a 3D experience with projection is below: a project designed in Modul8 by Adam Guzman and Julia Tsao. This one really interests me because it appears so simple, yet it still tricks the mind into thinking the projections have another dimension.

Nosaj Thing Visual Show Compilation Test Shoot from Adam Guzman on Vimeo.