Final Project Posting – Moral Coders (12/18)




What is the goal of your project?

  • The essence of our project was to create a more natural and realistic experience in which participants are confronted with an ethical dilemma, drawing inspiration from the Moral Machine and the classic Trolley Problem.  Drawing on ideas from our different initial project proposals, we wanted to connect our idea to the actual engineering problems programmers of self-driving cars face, in addition to exploring the effects of participants’ biases based on criteria such as race, age, and gender.

    Thus, what we intended to create was a simulation in which a user takes on the role of a self-driving car programmer while being placed in the driver’s seat of the car. They travel along the road until they are confronted with individuals crossing it. They are forced to make a decision: either hit the pedestrians, sparing the car’s passenger, or swerve off the road to avoid the pedestrians, killing the passenger instead.  We would vary the appearance of the pedestrians to see if this had any effect on the results.



Describe each team-member’s role as well as contributions to the project.

  • Caroline
    • I created all of the scenes for the environment in Unity in which our simulation takes place.  I started by downloading a free road asset to build the track that the car would run on.  I then built the environment around the road, using the program’s terrain tool to create realistic-looking hills for the scenery.  I was able to find a seamless grass texture with a normal map online and supplemented it with the rocky mud texture and normal map that come with Unity’s standard assets to create a mountainous-looking cover.  To add details to the environment, I downloaded a free rocks asset package and placed the different rocks all around, along with the coniferous tree provided in Unity’s standard assets.  I also found a free clouds package in the Unity store that created moving clouds when the scene was played.  I created and coded the main menu along with the messages users receive after they have hit the people/rock.


  • Tony
    • First, I took responsibility for creating the project posts, uploading the media, and making sure we had the lab reserved at times convenient for all of our team members. I did some research, created a Facebook group for better team collaboration, and posted my findings there regarding AI cars and trolley problems in VR. I also researched our code collaboration options, set up the Unity beta collaboration feature, and helped the others so we could share our work right away.
    • When we started planning the project, I chose to implement the HTC Vive controllers as the input method in Unity. I also brought some basic ideas for the scene – a self-driving car and a forced choice between scenario A and scenario B (as in the Moral Machine experiment).
    • Later, I helped with finding some of the objects, editing the people’s textures, and – as Caroline nicely put it – troubleshooting. I tried to make sure we wouldn’t get stuck on a technical issue.
    • At the final stage, I helped make the project ready for the presentation video (slow motion, correct scene switching, and so on). I found software for recording the video.
    • In the end, I also added some final touches to the project video and uploaded it to YouTube, adding it to our VR playlist.


  • Frank
    • I worked on importing human assets into Unity, posing them, and programming them to undergo ragdoll behavior under certain conditions, as well as working on other potential animations for them, though that endeavor did not pan out. Because of the limited number of terminals available to us that could handle Unity with our full project, and the spotty nature of the 5.5 beta collaboration tool, I, like everyone else, spent a fair amount of time looking over other team members’ shoulders, collaborating on the various problems we encountered, and testing the simulation with the Vive headset.


  • Galen
    • I started by building a car asset in Unity. After I got the hang of using wheel colliders and writing scripts to control the car, I noticed that there was a default car asset with controls. I switched to it and found that, besides allowing first-person control, it also allowed defining a path in much the same way I had implemented in my original asset. The default asset was very good in first person, and the path driving made the car seem organic, but it couldn’t follow the path exactly as designed. That made it difficult to place the car into our environment and drive it along our specific route recklessly enough that the user doesn’t question that they are driving, while still moving fast enough to force a decision as it approaches the pedestrians. There were other problems with the physics of our environment and the car that kept it from climbing the hill along the path properly and slowed it down immensely before the final turn. I sped up the game once the car reached that point, making it look like it was moving more quickly as it climbed. Then, immediately before contact, I implemented the slow motion and a popup text (later removed from the simulation) to instruct the user on what choices they had and give them some time to make the choice.
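      The speed-up on the hill and the slow motion before contact can both be done by scaling Unity's global clock. A minimal sketch, assuming hypothetical trigger zones placed along the path (the tag name and scale values are illustrative, not our exact ones):

      ```csharp
      using UnityEngine;

      // Attach to an invisible trigger volume on the road; when the car
      // enters it, the global time scale changes (e.g. 2.0 on the hill
      // climb, 0.3 just before the pedestrians).
      public class TimeScaleZone : MonoBehaviour
      {
          public float timeScale = 0.3f;

          void OnTriggerEnter(Collider other)
          {
              if (other.CompareTag("Player"))
              {
                  Time.timeScale = timeScale;
                  // Scale the physics step too, so slow motion stays smooth
                  // instead of stuttering at the default 50 Hz step.
                  Time.fixedDeltaTime = 0.02f * timeScale;
              }
          }
      }
      ```

      Scaling `fixedDeltaTime` alongside `timeScale` is the usual trick; without it, physics updates become visibly choppy in slow motion.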



Describe the operation of your final project. What does it do and how does it work?

  • We were able to create a virtual world with a road set in a mountainous forest. On the main menu, the user is instructed that they will encounter a situation in which they have to make a decision, and that if they would like to swerve their vehicle, they should pull the trigger on the controller.

    The user is then placed in a self-driving car and travels down the road through the environment. After some time, the user comes around a bend in the road where two individuals are standing. The program switches into slow motion to give the user a brief moment to make their decision. They can pull the trigger and swerve, hitting a rock and killing the passenger (whose perspective they share at that moment), or they can do nothing and hit the people in the road, killing them.

    Based on the choice they make, a new scene appears, explaining which ethical theory their choice is more consistent with. After some time to read the explanation, the user is returned to the menu screen, where they can choose to attempt the simulation again.
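    The outcome flow described above (choice → explanation scene → back to the menu) can be sketched with Unity's scene manager. The scene names and the mapping of choices to theories are placeholders, not necessarily what our project uses:

    ```csharp
    using System.Collections;
    using UnityEngine;
    using UnityEngine.SceneManagement;

    // Loads an explanation scene based on the user's choice, waits while
    // they read it, then returns to the menu. Kept alive across scene
    // loads so the coroutine survives the transition.
    public class OutcomeManager : MonoBehaviour
    {
        void Awake() { DontDestroyOnLoad(gameObject); }

        public void OnSwerved()    { StartCoroutine(ShowOutcome("ExplanationSwerve")); }
        public void OnDidNothing() { StartCoroutine(ShowOutcome("ExplanationNoAction")); }

        IEnumerator ShowOutcome(string sceneName)
        {
            SceneManager.LoadScene(sceneName);
            // Realtime wait, in case the crash left Time.timeScale lowered.
            yield return new WaitForSecondsRealtime(10f);
            SceneManager.LoadScene("MainMenu");
        }
    }
    ```

    Using `WaitForSecondsRealtime` matters here: a plain `WaitForSeconds` would be stretched out if the simulation is still in slow motion when the outcome scene loads.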


How well did your project meet your original project description and goals?

  • We were able to create a simulation that places a user in a car and forces them to make a decision between the passenger in the car and pedestrians outside. This was our first goal for the project but we had hoped to get further. Because of the time constraints and the challenges we encountered, we were not able to vary the appearance of our individuals.  Additionally, the individuals did not have the animations to make them as lifelike as we had hoped.


As a team, describe what are your feelings about your project? Are you happy, content, frustrated, etc.?

  • We are happy with how immersive the technology is and that you can actually look around in the virtual world we created.  We wish we would have had more of the semester in order to further develop a project that incorporated our starting goals.  Because of the time constraints, we had to make compromises on what we were going to focus on creating in our final project.


Problems encountered

What were the largest hurdles you encountered in your project and how did you overcome these obstacles?

  • Overall Problems
    • We faced quite a few hurdles while we were working on our project.  We had a lot of ideas going into it and because we were limited by time, our experience, and a variety of issues we encountered, we had to pick which elements of our original idea were most important and feasible to include in our final project.
    • Technology was one of the biggest obstacles we faced.  We used the Collaboration feature to work on the project from our individual computers and then pushed our work to the cloud.  This worked well at first, but once our project got larger, it didn’t transfer very well between accounts.  This added a lot of time spent troubleshooting exactly what had been lost or corrupted and then fixing it.  We found that working on one computer and taking turns adding our elements to the project was the most successful approach.  Eventually the project got too big to work on our personal laptops, so we had to work in the lab.  This was challenging the week leading up to the deadline because other groups needed the lab and equipment as well.  The best solution we had was working on our own computers as much as we could while another group was using the lab machines, and briefly borrowing the computers and Vive when we needed to test some of our work.
    • There are a lot of great resources out there, and if you look hard enough you can find quality free ones, but the best ones cost money.  For example, there were plenty of free humanoid assets, but none were dressed in a way that made sense in our environment.
    • Each of us used different tutorials, which caused problems when we tried to put the results together. For example, the menu we created was incompatible with the VR controllers because there was no way to add the colliders they needed: the menu was built from a mouse-and-keyboard tutorial, whereas the VR controllers required ordinary scene objects to collide with.


  • Individual Problems
    • Caroline – I had never worked with Unity before and don’t have much coding experience, so I needed to watch a lot of tutorials and read a lot about how to code in Unity.  Just as Unity assets can be purchased, so can much of the help, which meant I often had to piece together parts of different tutorials or scripts; this was confusing and took a lot of time.  Working with the Unity software on my laptop became very challenging.  Once the project got bigger, the program started running very slowly and would often freeze for fifteen minutes at a time.  It crashed my computer multiple times, and I had to keep it plugged in at all times because it drained the battery so quickly.  I would definitely recommend against using a laptop with Unity for anything more than a simple project.
    • Tony – I was dealing with the input controls for the HTC Vive. While integrating them into Unity using the SteamVR plugin was super easy, it was harder to make them interactive. It was easy to add a script that did something, but hard to make it interact with other objects. I had no prior experience with Unity and C#, so it was difficult for me to find a way to affect other objects (like collision detection for the controllers’ lasers). For example, we had a menu that another team member had created from an older, mouse-oriented tutorial, and I could not find a way to make it work with the VR controllers. I would have had to redo the whole menu from a different tutorial, but there was not enough time at that stage. A lesson on how to work with Unity, its objects, and C# would have been very helpful.
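    For reference, reading the trigger with the SteamVR plugin of that era looks roughly like the sketch below. The swerve callback name is hypothetical; only the `SteamVR_TrackedObject`/`SteamVR_Controller` calls come from the plugin itself:

    ```csharp
    using UnityEngine;

    // Attach to one of the Vive controller objects created by the
    // SteamVR plugin's [CameraRig] prefab. On trigger press-down, it
    // notifies any component listening for a (hypothetical)
    // OnSwervePressed message.
    [RequireComponent(typeof(SteamVR_TrackedObject))]
    public class SwerveInput : MonoBehaviour
    {
        SteamVR_TrackedObject trackedObj;

        void Awake()
        {
            trackedObj = GetComponent<SteamVR_TrackedObject>();
        }

        void Update()
        {
            var device = SteamVR_Controller.Input((int)trackedObj.index);
            if (device.GetPressDown(SteamVR_Controller.ButtonMask.Trigger))
            {
                SendMessage("OnSwervePressed", SendMessageOptions.DontRequireReceiver);
            }
        }
    }
    ```

    Getting this far is the easy part, as noted above; making the press affect other objects (menus, the car) is where the real integration work is.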
    • Galen – Most of the difficulties I encountered had to do with interacting with physics in Unity. Although Unity is powerful and offers a great deal of flexibility, I had difficulty getting our standard asset to interact with a simple world in a predictable way, which makes setting up a scripted event difficult. There were also huge framerate problems as soon as we attempted to bring the car asset into the world and drive around. A significant amount of time had to be spent reducing the number of objects in our terrain, without removing any of the realism, in order to get the framerate up to a range where the beta testers (us) wouldn’t get ill when we experienced it. We had many difficulties with controls, especially figuring out how to get past mapping the controller to buttons and move on to ray casting for selection. Our simulation could be completed without this, but it’s something that will be mentioned in future work.
    • Frank – I encountered a great deal of trouble getting the person assets working, and staying working, within our simulation. I had to reimport our person assets, repose them, reprogram them to ragdoll, etc. close to a dozen times (I think we lost count around the 9th time) over the course of the project. Unity just kept losing track of part, or all, of these assets during migrations of the project between computers, between collaboration updates, and sometimes when simply closing and reopening a project after making no changes on the same terminal. We never quite figured out the source of these errors, though I suspect the issue was rooted in a naive search algorithm the assets used to find their components in various directories, and that we ended up getting punished for trying to keep our assets folder from becoming an unmitigated disaster. Once we did get people into the simulation, and got them to behave, animating proved just as mysterious, and ultimately we were never able to get it to work. We managed simple animations, like head-turning and arm-raising, in dummy projects, but when it came time to implement animations in our main project, things fell apart. Our human assets would completely disappear from the Unity scene editor whenever we attempted to animate them: their position would jump out into the middle of nowhere, their collider would remain at the intended location, and their model would vanish completely. That last issue was one we could not rectify, and it forced us to delete the object from the scene entirely and replace it with a new prefab. Fortunately, ragdolling was achieved with a different method from animation, so we were at least able to implement that feature.
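    The conditional ragdoll behavior can be sketched as follows. This is a minimal version of the general technique, not our exact script; the "Car" tag and the force applied on impact are assumptions:

    ```csharp
    using UnityEngine;

    // Attach to the root of a pedestrian whose limbs have Rigidbody
    // components (e.g. from Unity's Ragdoll Wizard). The bones start
    // kinematic so the character holds its pose; on being hit by the
    // car, every bone is handed over to the physics engine.
    public class PedestrianRagdoll : MonoBehaviour
    {
        Rigidbody[] bones;

        void Start()
        {
            bones = GetComponentsInChildren<Rigidbody>();
            foreach (var rb in bones)
                rb.isKinematic = true; // frozen in pose until impact
        }

        void OnCollisionEnter(Collision collision)
        {
            if (!collision.gameObject.CompareTag("Car")) return;
            foreach (var rb in bones)
            {
                rb.isKinematic = false; // let physics take over
                // Carry the car's impact velocity into the limbs so the
                // body is thrown rather than crumpling in place.
                rb.AddForce(collision.relativeVelocity, ForceMode.VelocityChange);
            }
        }
    }
    ```

    Because this toggles rigidbodies rather than driving an Animator, it sidesteps the animation pipeline entirely, which is consistent with the note above that ragdolling succeeded where animation failed.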


Next Steps

If you had more time, what would you do next on your project?

  • While pulling a trigger to swerve is easy for users to understand, it didn’t necessarily contribute to the realism we wanted to bring to our project.  With more time we would design a more realistic input action, such as turning a wheel. There isn’t a completely certain direction to take here, however. A steering wheel would offer the user a more realistic and intuitive way of making the decision at the end of the simulation, but that intuition is a double-edged sword: it could easily lead the user to expect options that aren’t practical for us to implement, or that would ruin the spirit of the experiment. A simple binary device like a lever would more accurately reflect the binary choice the user is being presented with, but our explanation for such an artifact would either have to be incredibly contrived or would implicitly reframe the problem statement, which would undermine the moral inquiry the simulation is trying to conduct.
  • More obvious menu navigation, and the ability to control when individual scenes transition, would be important in the future; moving beyond the two-trigger interaction could benefit the experience greatly.
  • Because of all the problems we encountered animating the characters, they were not as lifelike as we had hoped.  If we had more time, we would add more sophisticated movement to the people being hit, so they could be walking across the street as the car came around the corner.
  • One of our original intentions with the people crossing the street was to vary physical indicators such as race, age, and gender and see if this made a difference in people’s decisions.  If we had more time, we would have liked to add these characteristics.
  • Building on this, our project was meant to be an experiment, so we would have liked to actually test it on a group of people and see what results we got.  With more time, we could have gathered a sample of participants to test once we incorporated the indicators listed above.


Video of the Project in Action