Music Visual Final Post

1. For our project we developed a music visualizer for the song ‘Hack’ by Sam Armus. The experience is immersive, taking advantage of virtual reality to surround the user with the visualizations. The program combines recorded video with a basic spectrum analyzer driven by any music file the user chooses. The video files are played using the video object code provided by Kevin. The music analysis is done using FMOD. The visuals are generated from the spectrum data and drawn using OpenGL. The rest of the program is based on the default Discover system code.
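
Our full source is tied into the Discover system, so we won’t reproduce it here, but a minimal sketch of the FMOD side, assuming the classic FMOD Ex API (System::createSound, Channel::getSpectrum) and an illustrative file name, looks something like this:

```cpp
#include <fmod.hpp>

int main() {
    FMOD::System  *system  = nullptr;
    FMOD::Sound   *song    = nullptr;
    FMOD::Channel *channel = nullptr;

    FMOD::System_Create(&system);
    system->init(32, FMOD_INIT_NORMAL, nullptr);

    // Load whatever file the user chooses (the path here is illustrative).
    // FMOD_SOFTWARE is required for getSpectrum to work.
    system->createSound("hack.mp3", FMOD_SOFTWARE, nullptr, &song);
    system->playSound(FMOD_CHANNEL_FREE, song, false, &channel);

    const int BINS = 64;   // one frequency column per bin
    float spectrum[BINS];

    bool playing = true;
    while (playing) {      // once per rendered frame in the real program
        system->update();
        channel->getSpectrum(spectrum, BINS, 0, FMOD_DSP_FFT_WINDOW_HANNING);
        // spectrum[i] is the magnitude of bin i; it drives column i's height.
        channel->isPlaying(&playing);
    }
    system->release();
    return 0;
}
```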

The project uses the design of the Discover system to give the visualization a 180-degree viewing field. In the center of the screen, columns of squares represent the different frequencies: the longer the column, the more prominent that frequency.
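
The drawing step, roughly (a sketch in legacy immediate-mode OpenGL; the square size and per-column cap are illustrative, and our real code runs inside the Discover render loop):

```cpp
#include <GL/gl.h>

// Sketch: draw one column of stacked squares per frequency bin using
// legacy immediate-mode OpenGL. The square size and 16-square cap are
// illustrative, not our exact values.
void drawSpectrum(const float *spectrum, int bins) {
    const int   maxSquares = 16;
    const float size = 0.8f, gap = 0.2f;
    for (int i = 0; i < bins; ++i) {
        int squares = (int)(spectrum[i] * maxSquares);  // taller = louder
        for (int j = 0; j < squares; ++j) {
            float x = i * (size + gap);
            float y = j * (size + gap);
            glColor3f(1.0f - spectrum[i], spectrum[i], 0.5f);
            glBegin(GL_QUADS);
            glVertex2f(x,        y);
            glVertex2f(x + size, y);
            glVertex2f(x + size, y + size);
            glVertex2f(x,        y + size);
            glEnd();
        }
    }
}
```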

In addition to the spectrum, we created a custom video for the track we selected, making this project specific to a particular song. The video is a compilation of live footage from shows at Moogfest 2014. We played around with the coloring of the video, inverting the color spectrum at certain points to create a more immersive environment for the user.

2. Simon’s role on the project was to code up the program, including the graphical elements. This involved learning the basics of OpenGL and adapting some FMOD tutorial code to work with the rest of our program. He also helped come up with concepts for what the program would do near the start of the project.

Tim mainly dealt with finding different aspects of the song to analyze. He found a promising program to use, and while it offered a lot of options, in the end it wasn’t exactly what we were looking for. Other than this, he helped brainstorm ideas for the visualization at the beginning of the project.

Chelsi took live footage and compiled it into the background video for the visualizer. The video was created with iMovie, using the filters and transitions within the program to create something that captured the feeling we were trying to get across. The video is approximately 7:30 long and is separate from the spectrum and music tracking. It loops at the end, allowing the user to experience the visualizer for as long as they’d like.

3. We are happy with how the project turned out. Many elements of its design turned out to be more complicated than we had hoped, but we had planned for this and had many simpler ideas to fall back on. OpenGL and the music analysis were the trickiest aspects to implement successfully. We think the end result is simple in design but still effective in achieving our original goal: to make a cool virtual reality music visualizer.

In addition, we think it’s great that we got to incorporate multiple elements within the visualizer: the song, the spectrum, and the video. We are extremely happy with how the video turned out, because that was something we were very unsure of at the start. Overall the final product exceeded all of our expectations and we are proud of the end result.

4. Learning how to code with OpenGL was tricky at first, especially because it was hard to test the code at home. We couldn’t find a good C++ development environment for our home computers, and even with one that works, it’s necessary to test the code in the Discover system itself to know whether it is effective. The music analysis also took a long time to figure out, but we found an effective method by the end of the project.

We also had a very difficult time trying to understand Sonic Visualiser, the program we originally planned to use to analyze the music. We think it is designed for more than what we needed; it had a lot of really interesting features we could have used, but the program assumes more background in this area than we had, and FMOD worked very well as a replacement.

5. The final project did not have as much interactivity as the initial idea, but otherwise we think it remained relatively faithful. We did not have enough time or knowledge to implement music that changes as the user interacts with the virtual scene; additionally, without the project files for the music we used, this would have been even more difficult.

One addition we would have liked is letting the user move around in the space, with different locations causing different parts of the song to become more prominent. We also would have liked a greater variety of shapes within the spectrum, but we are happy to at least have color responses in it.

We also had some trouble at the beginning figuring out how to start: none of us had ever worked on a project like this, and we were hesitant to begin without knowing much about the process. In the end we were all able to find a part in the project and put everything together successfully.

6. With more time, we think making the environment more interactive would be exciting. Additionally, building more types of visualizers and including more kinds of recorded video and music would give the program much more variety. We would also have liked to incorporate some of the information we could get from Sonic Visualiser, but that would change the way the visualizer is set up significantly. Another plan would be adding more interactivity, with movement and audio changes resulting from that movement; there could be some change that happens in response to a gesture or hand wave.

Overall, having more time to pay attention to details and polish the visualizer would have been beneficial. We structured our time around building a simple version first and extending it as time allowed, so there is still the possibility to improve even after our final presentation.

Music Team week of 5/8

[Image: preview]

This picture is a part of the video that we have playing in the background of the music visualizer.

Chelsi: For this week I compiled live video footage to run behind our visualizer. I also researched stereo systems/wireless headphones for the system.

Simon: This week I refined the spectrum analyzer, added beat detection, and combined the video player code with our project code (which mostly works).
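
As a sketch of the kind of beat detection we mean (a simple energy comparison; the window length and 1.4x threshold here are illustrative, not necessarily what our code uses):

```cpp
// Sketch of simple energy-based beat detection: flag a beat when the
// low-frequency energy of the current frame jumps well above its recent
// average. The ~1s window and 1.4x threshold are illustrative values.
class BeatDetector {
public:
    // 'spectrum' comes from Channel::getSpectrum, called once per frame.
    bool update(const float *spectrum, int bins) {
        float energy = 0.0f;
        for (int i = 0; i < bins / 8; ++i)      // lowest bins: kick/bass
            energy += spectrum[i] * spectrum[i];

        bool beat = false;
        if (count_ == WINDOW) {
            float avg = sum_ / WINDOW;
            beat = energy > 1.4f * avg;         // sudden jump above average
            sum_ -= history_[head_];            // drop the oldest sample
        } else {
            ++count_;
        }
        history_[head_] = energy;
        head_ = (head_ + 1) % WINDOW;
        sum_ += energy;
        return beat;
    }

private:
    static const int WINDOW = 43;  // ~1 second of history at ~43 frames/sec
    float history_[WINDOW] = {};
    float sum_ = 0.0f;
    int head_ = 0, count_ = 0;
};
```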

Timmy: This week I looked into some more information about textures, but we decided that we didn’t need anything more complicated than what we had. I also helped out with demoing the spectrum visualizer and brainstorming ideas for the visualization.

We had a few problems getting the video and the music visualizer to run at the same time. Currently the program will freeze up, but the audio will continue to play.

We are currently close to on schedule, probably a workday or two behind at most. For Monday our plan is to get everything running smoothly and to ensure the video, audio, and music visualizer are properly synced up.

Music Team 5/2

[Image: IMG_3713]

Simon: This week, with some help, I got FMOD to work in the Discover system. The simulation now plays music and has a basic spectrum visualizer working.

Tim: I looked at the code for the texture packs.

Chelsi: I got video coverage and started compiling the clips we will alternate through in the video.

Accomplishments: This week we got the music analysis to work in real time. Also, we now have video footage to play in the background of the simulation. The model is simple now but can be made more complex later.

Problems: Everything actually went pretty smoothly this week. Learning textures in OpenGL is still challenging, but otherwise most of what we want to do is implemented successfully.

Schedule: We are on schedule now that the music analyzer is working. Until class is over we will keep refining what we have to make it fancier.

Next week: Put the video in the simulation, and add more details to the visualizer.


Music Team 4/25

[Image: IMG_3684]


Simon: This week I coded some new shapes into the virtual environment and worked on different ways to change the colors and shapes. I also investigated some ways to use OpenAL in our program.

Tim: This week I have been trying to find a way to export data from a spectrogram of the song into a CSV. However, I haven’t been able to find anything in the current program that I am using, and I tried unsuccessfully to find another program for the job. Next week I plan on finding more information about Sonic Visualiser and other music analysis software.
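
In hindsight, one workaround would have been to collect spectrum frames ourselves (e.g. from FMOD’s getSpectrum, which we ended up using anyway) and write the CSV directly; a hypothetical sketch:

```cpp
#include <fstream>
#include <vector>

// Hypothetical sketch: write per-frame spectrum data to CSV ourselves
// (one row per frame, one column per frequency bin). 'frames' would be
// collected from repeated getSpectrum calls.
void writeSpectrumCsv(const char *path,
                      const std::vector<std::vector<float>> &frames) {
    std::ofstream out(path);
    for (const std::vector<float> &frame : frames) {
        for (std::size_t i = 0; i < frame.size(); ++i)
            out << frame[i] << (i + 1 < frame.size() ? ',' : '\n');
    }
}
```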

Chelsi: Chelsi is gone this week but is researching what type of sound system to get for our experiment. She also got some good background footage to use and is getting more this week.

Accomplishments this week: This week we implemented some more prototype environments and did color testing. Additionally, we tested some more music analysis features and decided on how we want to play recorded video in the environment.

Problems: The methods we have found for music analysis so far don’t really work. We still need to find a good program to use.

Schedule: We are behind schedule because the music features have yet to be implemented. All the other major parts of our program are mostly working.

Next week: Implement recorded video and some form of music analysis.

Music Visual

[Image: graphics on screen]

This week:

Simon: Mostly I researched how best to implement timing for beat detection and similar features. I also did some more testing on what we already have in place. There are still some issues that I’m not sure how to fix, like the 3D and the keyboard input, but I think the plan we have in place is on the right track.

Tim: I am still trying to figure out what to analyze, i.e., how to extract actually useful information from the song to visualize.

Chelsi: Working on rendering video for the background, and attempting to become familiar with OpenGL and find code that allows us to render real-time video. I’ll be at a music festival next week where I plan to get actual footage (lasers, lights, etc.) that will go along cohesively with what we are building.

Overall we have the sphere moving and the ability to change the shapes and colors. We are looking at the best way to analyze the music (BPM, frequency, etc.) for simplicity and effectiveness. Simon has prepared the sphere for music data, so that it is ready to adapt to it.

By next week we hope to have the music analysis decided. We are pretty much on track with our first plan and essentially halfway done.

Music Visuals Updates

[Image: sphere2]

This week:

Simon – I attached all the colors in the sphere to an array, in this case initialized to random values. The values in the array can be changed in groups, so, for example, they could change in patterns with the music. I’d like to do testing in the actual lab so I can get to work on the more complicated parts.
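
A sketch of that grouped color array idea (the group layout and random initialization here are illustrative):

```cpp
#include <cstdlib>
#include <vector>

// Sketch of the grouped color array: every sphere vertex color lives in
// one array, and whole groups (e.g. bands of the sphere) can be recolored
// at once, for example in time with the music.
struct Color { float r, g, b; };

std::vector<Color> colors;  // one entry per sphere vertex

void initRandomColors(int vertexCount) {
    colors.resize(vertexCount);
    for (Color &c : colors)
        c = { std::rand() / (float)RAND_MAX,
              std::rand() / (float)RAND_MAX,
              std::rand() / (float)RAND_MAX };
}

// Recolor one group of vertices together.
void setGroupColor(int group, int groupSize, Color c) {
    int start = group * groupSize;
    for (int i = start; i < start + groupSize && i < (int)colors.size(); ++i)
        colors[i] = c;
}
```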

Tim – For this week I have been working on familiarizing myself with music analysis and the different features of the program. By next week I hope to start finding different options for analyzing the song we selected.

Chelsi – Researched geometric inspiration now that we have a set shape (videos below). Also contacted the artist about getting individual track pieces for easier analysis, and should receive them over the weekend. I downloaded a sound analyzer and started to work with it; I have a few questions about how this will translate into OpenGL. I also briefly looked into using real video footage in OpenGL and it looks like rendering is a possibility (example: https://www.youtube.com/watch?v=2AVh1x-Uqjs).

Visualizers created in OpenGL:

Simple Geometric visualization:
https://vimeo.com/47085682

Another based on 3D triangles:
https://vimeo.com/90972800

This one offers a way to include movement:
https://vimeo.com/67248097

We are a bit behind, but hope to catch up within the next week. By next Friday we hope to have the sphere moving and our approach to the music analysis figured out.

Music Team, 4/4

[Image: sphere1]


Simon: worked on prototyping the virtual environment in OpenGL

Chelsi: worked on concepts and the music we will use

Tim: researched different ways to analyze the music

Accomplishments this week: A basic prototype of the environment was created. We weren’t able to get as much done as we hoped because we couldn’t test in the lab, so many aspects of the program are still waiting on that. The final music file was acquired and we are on track to start analyzing the music.

Problems encountered: The lab computer crashed when we went to use it, so we couldn’t test the code we wanted to. Testing had to be done at home, which had many limitations. This means that many of the more complicated parts of the program are still up in the air until we can test in the lab.

Schedule: We are slightly behind schedule because our program is not running in the lab yet. The music analysis is still on schedule; we will decide on a method of analysis and implement it next week.

Next week: Beginning of music analysis. Testing in lab and adding more parts to the environment. More experimentation in OpenGL.

Music Visual Team

Name is TBD

We are planning to create a semi-interactive 3D music video. We will have a specific song that the visuals play to, and ideally have it loop multiple times. As of now we are hoping to build a series of shapes, displayed on a black/white/neutral background, that adapt according to the music. A possibility we are also discussing is having the music adjust according to the user’s movements.

General inspiration video: SYNESTHESIA

We have some experience in music production, hopefully enabling us to adjust our sound accordingly. We also know general programming, which will make the starting process easier, and have a background in design, which may make creating the patterns easier.

All of us are pretty unfamiliar with making 3D objects and using the CAVE and Rift systems.

UPDATE for 3/28

Song has been decided, can be found here: https://soundcloud.com/samarmus/hack-original-mix-soundcloud

Simon went through the tutorials for OpenGL and feels confident about working with it, but has yet to experiment. We are wondering if we need to use OpenAL for audio. We plan to experiment with the program on Friday (3/28).

Timeline:

Fri 4/2 – Have basic graphics introduced

Fri 4/9 – Music analysis programmed

Fri 4/16 – Have the basic model completed; following weeks will be spent adding effects