CS 699, Final Post

CS 699 Final Post by Dongxia Wu

Project 1: 3D video implementation

Project 2: 3D scene implementation

Description:

This semester I successfully built and tested two Unity projects for the tiled display system, which consists of 6 machines and 11 monitors connected over the network: one for 3D video and one for a 3D scene. For the 3D video project, the goal was to show a high-resolution 3D video on this system. To achieve this, I built the tiled display environment and then attached the 3D video to the scene. For the 3D scene project, the goal was to attach the prefabs controlling the tiled display environment to an existing 3D scene project and to make sure the display system shows the correct picture of the 3D scene.

For both projects, the first step was to build the tiled display environment so that the monitor array looks like a series of windows onto a virtual scene from a given perspective. To achieve this, I used two UniCave prefabs in the projects: Network_Manager and Tri_Monitor. UniCave is a Unity3D plugin for non-head-mounted virtual reality display systems provided by Kevin's group. I set the parameters to the correct IP address of the corresponding host for the head node and all the sub (display) nodes. With Yuzi's help, we measured the relative position and size of every monitor with respect to the head (the audience's eye position). After the adjustment, we set the head at the center of the monitor array (XY plane) and about 3 meters away from the monitors (z axis).
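
The "window onto a virtual scene" geometry can be illustrated with a short sketch: given the eye position and a monitor's physical corners, similar triangles give the asymmetric (off-axis) frustum bounds at the near plane. This is a minimal illustration of the idea only, not UniCave's actual code; the function name, coordinates, and sizes are made up.

```python
# Hypothetical sketch of the off-axis ("window") projection idea behind
# tiled displays. Assumes the monitor is axis-aligned and faces the viewer
# along -z; all units are meters.

def off_axis_frustum(eye, lower_left, upper_right, near):
    """Return (left, right, bottom, top) frustum bounds at the near plane
    for a screen spanning lower_left..upper_right, viewed from eye."""
    ex, ey, ez = eye
    dist = ez - lower_left[2]   # eye-to-screen distance along z
    scale = near / dist         # similar triangles: near plane vs. screen plane
    left = (lower_left[0] - ex) * scale
    right = (upper_right[0] - ex) * scale
    bottom = (lower_left[1] - ey) * scale
    top = (upper_right[1] - ey) * scale
    return left, right, bottom, top

# Head at the array center, ~3 m from a centered 1 m x 0.6 m monitor:
bounds = off_axis_frustum((0.0, 0.0, 3.0), (-0.5, -0.3, 0.0), (0.5, 0.3, 0.0), 0.1)
```

As the eye moves off-center, the bounds become asymmetric, which is what makes each monitor act like a window rather than a plain camera.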

After that, I attached the 3D video, a few-minute animation found online. After adjusting the position and size of the video to fit the display array, I successfully built and tested the 3D video project. To test the project conveniently, I also wrote a Python script for cluster launching, which starts the executable on all of the machines through the head node.
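
The head-node side of cluster launching can be sketched as follows. This is a minimal illustration, not the actual script; the node addresses, port number, and message format are made up, and each sub node is assumed to run a small TCP server that starts the executable on request.

```python
# Sketch of head-node cluster launching over TCP (hypothetical details).
import socket

SUB_NODES = ["192.168.0.11", "192.168.0.12"]  # example sub-node addresses
LAUNCH_PORT = 5005                            # example port

def launch_all(nodes, port, command=b"LAUNCH"):
    """Send the launch command to every sub node; return nodes that replied OK."""
    launched = []
    for host in nodes:
        try:
            with socket.create_connection((host, port), timeout=5) as s:
                s.sendall(command)
                if s.recv(16) == b"OK":
                    launched.append(host)
        except OSError:
            # e.g. the machine is down or a firewall is blocking the port
            print(f"could not reach {host}")
    return launched
```

Running `launch_all(SUB_NODES, LAUNCH_PORT)` from the head node would then start the build on every reachable machine at once.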

For the testing part, there were some spatial misalignments, since Yuzi and I had not measured the relative positions of the monitors precisely; the result looked much better after tuning the monitors. I also tested the time alignment: the tiled display environment automatically checks and tunes each monitor to a similar frame number.

For the 3D scene project, the idea is to make my tiled display prefabs work for another 3D scene project. To achieve this, I used the Courtyard demo, a free asset provided by Unity, as our 3D scene, and added the prefabs I made for the 3D video project to provide the tiled display function, since the two projects share the same physical display system. The method seems straightforward: add the prefabs under the playerController object as its children so that they get the same position and rotation information from the controller. However, the UniCave prefabs get confused about which cameras to select when there is more than one camera serving different purposes. Therefore, I deleted all the cameras except the camera for the player controller.

Feelings:

I feel happy about what I have achieved this semester. Since I didn't have much background in VR, computer graphics, or specialized software like Unity and the Microsoft Mixed Reality tools, I really struggled with my HoloLens project. But things got better after I switched my project to the tiled display system. Thanks to help from Kevin, Ross, and Benny, I successfully finished two projects: the 3D video and 3D scene implementations. During this process, I kept challenging myself and learned new things independently, especially about Unity. Now I feel more confident using this powerful tool for future projects.

Challenges:

One long-standing challenge is the connection process from the head node. Since both projects are based on the tiled display system, they depend heavily on a correct and fast network connection from the head node to each sub node. However, the firewall on each machine can be very troublesome for our system. The cluster launching scripts used to work, but after all the machines were updated to Windows 10, they somehow stopped working.

Another challenge for both the 3D video and 3D scene projects is the frame difference between displays. When the video loops back to the start, the frame difference becomes more apparent. I changed the related parameters, but the performance did not improve much. The problem may result from the different graphics card settings and hardware configurations of the displays.

Finally, the Courtyard demo I chose for the 3D scene project is a very large and complicated scene. There are many cameraHolders and controlling objects, and these objects are related to each other, which makes it challenging to figure out their functions and relations. I tried to delete unnecessary objects to simplify the scene and to find the right place to add the prefab so that it inherits the correct position and rotation from the playerController.

Future work:

I will be very excited if I can apply what I have learned from this semester's work to a new 3D display system. The physical performance of our displays and computers is not very good and becomes a limiting factor; for example, the system sometimes has a serious delay during playback. It would be interesting to see the performance of a tiled display system with new displays and computers.

Also, I want to apply the UniCave prefabs to a heterogeneous device system, such as a virtual reality display environment built in a room that combines displays and projectors. Compared with a 2D display array, I think this would generate a more immersive feeling.

Video Link:

project1_3D_video

project2_3D_scene_control_video

CS 699, week 10

Accomplishment:

This week we successfully achieved synchronous playback for our 3D scene project, the Courtyard project. The main problem last week was that the monitors and the head node showed different views. We solved it by moving our node to the same position as the FPS controller. Then we changed the position and direction of our display array to cover the same area as the camera in the camera holder. We also found that the controller in the original project cannot rotate out of the XY plane. By moving our prefabs under the camera holder so that they inherit more controller information, we successfully solved the problem.

Challenges:

The challenge is a delay problem when we use the controller in this 3D scene. Benny said that the main limitation comes from the hardware, which we cannot improve. But I will try to delete some of the complicated objects in the 3D scene to simplify it and see if that improves the performance.

Plan for next week:

  1. Try to solve or reduce the delay problem
  2. Summarize two projects

CS 699, week 9

Accomplishment:

This week we kept trying to make the UniCave prefabs work in the 3D scene project. For the Courtyard project, we tried to add our monitor prefab under its playerController as a child in order to get the position and rotation information. However, it seems that the UniCave prefabs get confused about which cameras to select when there is more than one camera serving different purposes. Therefore, we tried to delete some unnecessary objects in the Courtyard scene. Still, we have some unsolved problems with the UniCave prefabs, and we will try another simple 3D scene project, such as Tuscany, next week if we cannot figure them out.

Challenges:

The Courtyard scene has so many interrelated objects that it is challenging to figure out each object's function and relations. We tried to delete unnecessary objects to simplify the scene and to find the right place to add our monitor prefab so that it inherits the correct position and rotation of the camera.

Plan for next week:

  1. Try to make the player controller mode work in the tiled display system for the Courtyard scene.
  2. If we cannot figure it out, we will switch to the Tuscany project and add the player controller ourselves later.

CS 699, week 8

Accomplishment:

This week we created prefabs to save the objects from our original 3D video project and started building a new 3D scene project. We used the Courtyard demo, a free asset provided by Unity, as our 3D scene and tried to add the prefabs we made before to provide the tiled display function. The picture shows that the presetting works.

Challenges:

The Courtyard demo seems to be too large for our head node; we may switch to a smaller 3D scene. Also, there are some camera setting problems that we will try to figure out next week.

Plan for next week:

  1. Try to build and run the 3D scene project in our tiled display setting.
  2. Try to create dynamic anaglyphs for this 3D scene.

CS 699, week 7

Accomplishment:

This week we wrote Python scripts for TCP communication to realize cluster launching. We set all the sub nodes as servers and the head node as the client. There were some errors starting the executable from the head node because of the firewall settings, but thanks to Kevin and Ross's help, we solved them. After that, we fine-tuned the position of each monitor for calibration. The performance looks good.
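
A sub-node server along these lines might look like the following sketch. It is a hypothetical illustration of the sub-nodes-as-servers idea, not our actual script; the port, message format, and executable command are placeholders.

```python
# Sketch of a sub-node TCP server that starts the Unity executable when
# the head node sends a launch command (all details hypothetical).
import socket
import subprocess

def serve_launch(port, exe_cmd, host="0.0.0.0", once=False):
    """Listen for a LAUNCH command; start exe_cmd (a command list) and reply OK."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
        srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        srv.bind((host, port))
        srv.listen(1)
        while True:
            conn, addr = srv.accept()
            with conn:
                if conn.recv(16) == b"LAUNCH":
                    subprocess.Popen(exe_cmd)  # fire and forget the executable
                    conn.sendall(b"OK")
                else:
                    conn.sendall(b"ERR")
            if once:
                break
```

Each sub node would run this server at startup, and the head node connects as a client to trigger the launch; the firewall must allow inbound connections on the chosen port.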

Challenges:

Still, the video shown on different monitors has a difference of a few frames. When the video finishes and loops back to the start, the frame difference becomes more apparent. But we plan to move on to the 3D scene project, since the performance of the video without looping looks good.

Plan for next week:

  1. Convert 3D video project’s object to a Prefab.
  2. Create a new 3D scene project and finish the presetting part.
  3. Try to add a Kinect system to the environment for head tracking.

CS 699, week 6

Accomplishment:

Thanks to the support from Kevin and Ross, we successfully built a Unity project that realizes synchronous playback of a 3D video. We successfully connected the head node with all 6 sub nodes. We also set the position of each screen based on its location and the position of the head. The performance looks good.

Challenges:

Sometimes the video shown on different screens has a difference of a few frames, especially when we loop the video. We will try to download the video and test it again. The problem may also result from different graphics card settings and hardware configurations.

Plan for next week:

  1. Try to realize cluster launching using python script.
  2. Tune the positions of displays.
  3. Try to add a Kinect system to the environment for head tracking.

CS 699, week 4

Accomplishment:

Thanks to the support from Kevin and Ross, we successfully built a Unity project that realizes cluster launching and synchronous playback with one head node and one sub node with two screens.

Challenges:

We are trying to connect more than one sub node, but the video does not show up on the added sub node. I have already confirmed that the additional display should show the video from the head's position, since Display 3 shows the video when running the project on the head node. Therefore, I am not sure about the reason.

Plan for next week:

  1. Try to figure out the problem and realize the tiled display video part.
  2. Try to add a Kinect system to the environment for head tracking.

CS 699, week 2

Accomplishments:

I successfully finished my first MR project. I followed the tutorial to set up the camera and project settings, create a cube scene, and build and deploy the project to the HoloLens using Visual Studio. The picture above shows the cube hologram I made.

Challenges:

It is very hard to set up the connection to the HoloLens. I tried two ways. The first was to use Unity Remoting, but it didn't work; I will try to figure it out next week. The second was to build and deploy the project to the HoloLens using Visual Studio. This one also confused me, but thanks to Ross's help, I successfully finished it and generated my first Unity app on the HoloLens.

Plan for next week:

  • Read papers and other sources and try to find a detailed direction of the project.
  • Visualize gaze using a world-locked cursor.
  • Control holograms with the select gesture.
  • Spatial mapping

 

CS 699, week 1

Accomplishments:

The main thing I did this week was background learning. I started with the Windows Mixed Reality documentation. I learned the basic concepts of mixed reality: the relationship between human, environment, and computer, and the distinctions between AR, VR, and MR. I also watched videos about the concepts of holograms, gaze, gestures, spatial mapping, coordinate systems, and spatial anchors. I installed the Windows 10 SDK and checked Unity's version on Thursday. After that, I navigated the HoloLens by wearing the device, went through its apps, and tried using gaze, gestures, and voice commands to operate the holograms.

Challenges:

I found it challenging to install the tools and set up the environment for the HoloLens, since I am not familiar with the Windows system. I tried to run the Unity tutorial, but it seems to have some errors at the beginning. I have also read in the HoloLens Experiment project that I may have difficulty pairing the device, but I will try first.

Plan for the next week:

I will try to finish the setup, go through the MR basic courses, and try to use Unity to:

  • Set up Unity for holographic development.
  • Make a hologram.
  • See the created hologram.
  • Visualize gaze using a world-locked cursor.
  • Control holograms with the Select gesture.
  • Spatial mapping

After that, I am going to think about the main objective I want to achieve for the project.