Week 2

This week, I had to make some adjustments to my goals for multiple reasons. The first is that this was another week with a pretty heavy workload from my classes. This shouldn’t be an issue from now on, as I have dropped a class (partly because of other issues) to be able to devote more time to this project.

Secondly, after doing some exploring with OpenCV and Python, it quickly became apparent that, because of my minimal exposure to the language, I had trouble understanding the syntax and concepts. So I took a step back and spent some time refreshing my Python to build a better foundation for the later stages. I did this through some lessons on Codecademy and by skimming the Python Tutorials playlist by YouTuber thenewboston: https://www.youtube.com/playlist?list=PL6gx4Cwl9DGAcbMi1sH6oAMk4JHw91mC_

I highly recommend this channel for learning new technologies; it is my go-to.

Another issue I ran into, and am still trying to figure out, is what kind of coding environment to use. I am unsure whether to write code directly in the Python interpreter and run it there, or to download an editor like Komodo to keep the files organized in case the project ends up involving many, many lines of code. I am leaning towards either that or Sublime Text, and teaching myself how to run the files from the command line. (I am already used to Eclipse and IntelliJ, so I think moving to a Python editor would be a smoother transition than trying to code directly in the command line.) A useful resource I found for getting familiar with these basics of Python programming is the following: https://opentechschool.github.io/python-beginners/en/getting_started.html

Finally, speaking of the command line, I thought it would be a good idea to set up a GitHub repository for the project so that everything is in one place and I can update my code periodically. It is located here: https://github.com/saadbadger/IPD-Detection

Getting started – 1st post

This will be my first of many weekly posts regarding my progress on the semester project I was assigned for independent study – IPD detection.

Coming into the project I have minimal programming experience with Python and VR but have done more extensive work with Java and C, so I am excited for what this semester holds and to learn many things along the way.

For this first week my goal was to set up Python and OpenCV and have it open and display an image. I was hoping to get beyond that, but with a very busy week that unfortunately wasn’t possible. (I embarrassingly spent a few hours dealing with pip and Python installation issues and figuring out the command line.)

Attached are screenshot(s) of what I was able to accomplish so far:
GettingStarted, OpeningImage

For next week I plan to take this further and try to have OpenCV talk to the laptop’s webcam, and also to test its functionality with videos. From there I can move on to playing with the library’s blob detection capabilities.

Entry 9 – Conclusion

To cap off the semester, I would like to reflect on what I accomplished.

The good –

I think I gave a solid performance with respect to finalizing the Processing project I started a year ago. My experience with coding was very limited until I started doing this independent study. Now I am able to code and customize flocking algorithms and other creative code applications to generate interesting visual compositions. I developed three Processing sketches this semester using object-oriented programming. I see myself continuing to code for visual art and as a way of finding new ways to inspire my designs.

The bad –

I would say that I only accomplished about 50% of what I intended to do. I wanted to be able to visualize one of my projects using a real-time game engine. This is the type of skill that is becoming extremely useful in the age of virtual reality. While I modeled the basic shapes of all the blocks in the map below, I still needed to model the details and textures. I think I undertook a very big project in that respect. The next steps would have involved merging all of the scenes together to create the map.

block-1 block-2 block-3 block-4 block-5 block-6 block-7 block-8 block-9

The Future –

It is my intention to continue learning Unity and to create an interactive architectural visualization that lets the user see predefined views of the building within the context of the city and also walk around in first person. It is also my goal to find ways to use Processing sketches as concepts for architectural form creation. My idea is to be able to export sketches as images, lines, or meshes, visualize them in 3D programs, and use them as inspiration for further design.

As I have said before, I have learned a lot doing this and there’s still a lot I want to do.

Thanks for everything.

Entry 8 – Trailing Agents against attractors and repellents

For the second composition, I had to do quite a lot of thinking. Rather than having agents respond to steering forces from other agents, I wanted to give them the ability to react to their previous positions, i.e. the trail they were leaving behind. If there is anything that becomes obvious with regular flocking algorithms, it is that in most cases the movement is very erratic. This happens because the steering vectors act on a given agent whenever the source is within a predetermined radius, which is equivalent to saying that an agent has eyes everywhere.

diagram-8

Rather than using this paradigm, it is convenient to add an angle of vision to the agent’s behavioral methods:

diagram-9

But before we get to how this was done in the behavioral methods, an important difference between this algorithm and the previous one is that the trail drives the movement of the agent. This is done by first extrapolating a future location vector based on the agent’s current velocity:

diagram-11
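As a rough sketch of that step (the lookAhead distance and the method name are mine, chosen for illustration; the class fields are the usual location and velocity):

// Extrapolate a future location by looking ahead along the current velocity.
PVector predictLocation(float lookAhead) {
  PVector futureLoc = velocity.copy();  // copy so we don't modify the real velocity
  futureLoc.normalize();
  futureLoc.mult(lookAhead);            // how far ahead along the heading to look
  futureLoc.add(location);              // predicted point on the agent's path
  return futureLoc;
}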

The other important distinction of this flocking algorithm is that it uses a pseudo path-following behavior directed by the trail position vectors. Essentially, the agent keeps following the path it was set on when its velocity was randomly chosen at initialization. In practice, this gives us almost straight paths unless an attractor or repeller is nearby, in which case the steering force slowly makes the agent change course.

To implement the angle of vision, I had to change the separation and cohesion methods to include a calculation of the angle between the trail positions and the velocity:

diagram-12
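The core of that check is small enough to sketch here (this is a simplified stand-in for what the screenshot above shows; the fov value and names are illustrative):

// Only consider a trail position if it lies inside the agent's cone of vision.
// fov is the total field of view in radians, e.g. PI/2 for a 90-degree cone.
boolean inFieldOfView(PVector trailPos, float fov) {
  PVector toTrail = PVector.sub(trailPos, location);       // vector from the agent to the trail position
  float angle = PVector.angleBetween(velocity, toTrail);   // angle between heading and that vector
  return angle < fov / 2;
}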

The separation method uses the same principle.

Lastly, I needed to include a method for tracing the trail. For some reason, I am getting some artifacts in the composition that I haven’t been able to correct: straight lines appear across the sketch.

diagram-13
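For reference, a minimal version of the kind of trail drawing I am doing looks like this (the trail length and names are illustrative):

// Keep a rolling history of positions and connect consecutive points with lines.
ArrayList<PVector> history = new ArrayList<PVector>();

void recordTrail() {
  history.add(location.copy());
  if (history.size() > 100) {
    history.remove(0);              // cap the trail length
  }
}

void drawTrail() {
  for (int i = 1; i < history.size(); i++) {
    PVector a = history.get(i - 1);
    PVector b = history.get(i);
    line(a.x, a.y, b.x, b.y);
  }
}

One suspect for the straight-line artifact is a segment being drawn across a screen-edge wrap, but I have not been able to confirm that yet.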

The result is a really slick visualization of movement which, for all intents and purposes, comes fairly close to the way we navigate architectural spaces: we mostly walk in straight paths and occasionally make turns.

 

Captures: 0213, 1122, 1150

Entry 7 – Creating my first Flock composition

Having learned the theoretical background behind flocking algorithms in Processing, it is time to compose a sketch.

My idea for this sketch was to introduce a little variation on the typical flocking algorithm. I wanted there to be a competition between two classes of agents: the regular agents and another class I decided to call Enemies.

First, there is an additional class called Enemies which uses the inheritance feature of OOP to acquire and expand the capabilities of the regular Agent class:

diagram-5
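In outline (the real class has more to it than this; the constructor signature and fields are assumptions based on the rest of my code), the inheritance looks something like this:

// Outline of the Enemies class: it extends Agent, reuses its movement code,
// and overrides only what needs to differ.
class Enemies extends Agent {

  Enemies(float x, float y) {
    super(x, y);                 // reuse the Agent constructor
  }

  void display() {
    fill(255, 0, 0);             // give enemies their own look (color here is just an example)
    ellipse(location.x, location.y, r * 2, r * 2);
  }
}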

The separation, alignment, and cohesion behaviors follow the same guidelines explained in my previous entry, but the Agent class gains an additional repel method responsible for steering the agents away from the enemies:

diagram-4
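The idea behind the repel method, roughly (the range parameter and names are illustrative, and the fields location, velocity, maxspeed, and maxforce are the ones used elsewhere in the Agent class):

// Rough sketch of repel: steer away from any enemy closer than 'range'.
void repel(ArrayList<Enemies> enemies, float range) {
  for (Enemies e : enemies) {
    float d = PVector.dist(location, e.location);
    if (d > 0 && d < range) {
      PVector flee = PVector.sub(location, e.location);  // vector pointing away from the enemy
      flee.normalize();
      flee.mult(maxspeed);
      PVector steer = PVector.sub(flee, velocity);
      steer.limit(maxforce);
      applyForce(steer);
    }
  }
}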

By giving this function a different range parameter depending on the type of object, I can tailor how close one object has to be to another before it reacts. For example, I can have the enemies influence the agents over a larger distance. I can also use it to introduce a repelling force among the enemies themselves.

Finally, I must call on all the functionality of the agents and enemies and apply the repelling method accordingly:

diagram-6
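In schematic form, the main loop ends up looking something like this (the list names, ranges, and the flock() shorthand are my own; the actual sketch differs in the details):

// Schematic draw() loop: agents flock with each other and flee from enemies;
// enemies keep some distance from one another. Range values are placeholders.
void draw() {
  background(0);
  for (Agent a : agents) {
    a.flock(agents);           // separation, alignment, cohesion against other agents
    a.repel(enemies, 100);     // enemies influence agents over a larger range
    a.update();
    a.display();
  }
  for (Enemies e : enemies) {
    e.repel(enemies, 25);      // a small repel range keeps the enemies from overlapping
    e.update();
    e.display();
  }
}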

The result is really organic. With the red and green colors, even though we are technically watching a war between organisms, the forms that emerge almost give off a Christmas feeling.

Captures: 2039, 1317

 

In order to automate the process of capturing an image from the sketch, I used the keyPressed() method and connected it to an image function which uses the built-in JPEG export, saveFrame():

 

diagram-7
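The hook itself is only a few lines; a minimal version (the key and filename here are arbitrary) looks like this:

// Pressing 's' exports the current frame as a JPEG; #### is replaced by the frame number.
void keyPressed() {
  if (key == 's') {
    saveFrame("capture-####.jpg");
  }
}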


Entry 6 – Agents Theoretical Framework

How do we code Agents? Before we begin building autonomous agents, we must understand what an agent can and cannot do:

  • An agent has a limited ability to perceive the environment: an agent must have methods to reference other objects in the code. The extent to which it can interact with other objects is entirely up to us, but it will most likely be limited in some way, just like living things.
  • An agent reacts to its environment by calculating an action: actions in this context are forces that drive the dynamics of the agent. The way we have calculated forces before is through vector math, and this will be no exception.
  • An agent is a follower, not a leader: though less central than the other two concepts, it is important to understand that we are writing code to simulate group behavior and dynamics. The trends and properties of the complex system emerge from the local interactions of the elements themselves.

Much of our understanding of how to code agents comes from computer scientist Craig Reynolds, who developed behavioral algorithms for animating characters.

What we want to do with agents is create methods for steering, fleeing, wandering, and pursuing to give the elements life-like substance. These behaviors will build on motion with vectors and forces.

The agents of the system we will build will have limited decision making based on a series of actions. Most of the actions we seek to simulate can be described as ‘steering forces’. These steering behaviors may include seeking, fleeing, following a path, following a flow field of vectors, and flocking with other agents. Flocking can be further dissected into the following steering behaviors: separation, alignment, and cohesion. To get creative with this framework, it is our responsibility to mix and match different behaviors for the agents and see what kind of system we end up simulating.

desired velocity

The most important concept is that of a steering force.

Steering force = desired velocity – current velocity.

So: PVector steer = PVector.sub(desired, velocity);

We use the static subtraction method of the PVector class to compute the desired velocity, which points from the agent’s location to the target:

PVector desired = PVector.sub(target, location);

diagram-2

Furthermore, we must also limit the magnitude of this desired vector; otherwise the agent will move so fast that, depending on how far away the target is, it could appear to simply teleport there. The other key point is that once we have the steer vector, we must apply it to our agent as a force.

To do this we must write an applyForce() method

void applyForce(PVector force) {
  acceleration.add(force);
}

 

We will use the standard Euler Integration motion method to update the agents’ position with velocity.
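For completeness, that update step is the familiar one (a minimal sketch, assuming the usual location, velocity, acceleration, and maxspeed fields):

// Standard Euler integration: acceleration changes velocity, velocity changes location,
// and the acceleration is cleared so forces don't accumulate across frames.
void update() {
  velocity.add(acceleration);
  velocity.limit(maxspeed);
  location.add(velocity);
  acceleration.mult(0);
}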

Another refinement of this steering method is to handle the case where the agent approaches the target: the velocity should depend on the distance between the agent and the target. We can use an if statement on the magnitude of the desired vector:

// Distance from the agent to the target
float d = desired.mag();

if (d < 100) {
  // Map the distance onto a circle of radius 100 around the target: once the agent
  // enters that area, its desired speed scales from 0 up to maxspeed.
  float m = map(d, 0, 100, 0, maxspeed);
  desired.mult(m);
} else {
  desired.mult(maxspeed);
}

 

Flock Behavior

Interesting systems can be created by applying Reynolds’ steering algorithms to simulate particular group behaviors seen in nature. The three main behavioral methods in flocking are separation, cohesion, and alignment.

diagram-3

 

Separation

Separation is the method that gives agents the ability to evaluate how close they should be to their neighbors, depending on the magnitude of the ‘separation force’ we give them.

When dealing with group behavior, we need a method that accepts an ArrayList of all agents.

This is how we will write our setup() and draw()

ArrayList<Agent> agents;

void setup() {
  size(320, 240);
  agents = new ArrayList<Agent>();
  for (int i = 0; i < 100; i++) {
    agents.add(new Agent(random(width), random(height)));
  }
}

void draw() {
  for (Agent a : agents) {
    a.separate(agents);
    a.update();
    a.display();
  }
}

 

In our Agent class we must create the separate() method.

void separate(ArrayList<Agent> agents) {
  // Desired separation distance: whenever another agent is this close, a vector
  // pointing away from it will influence this agent's velocity.
  float desiredseparation = r * 2;
  PVector sum = new PVector();
  // Count of agents that fall within the desired separation
  int count = 0;

  for (Agent other : agents) {
    float d = PVector.dist(location, other.location);
    if ((d > 0) && (d < desiredseparation)) {
      // A vector from the other agent to this agent, in other words a fleeing vector
      PVector diff = PVector.sub(location, other.location);
      diff.normalize();
      // Weight by distance so that the closer the other agent is, the stronger the flee
      diff.div(d);
      // Accumulate the fleeing vectors from all nearby agents
      sum.add(diff);
      count++;
    }
  }

  // Apply the steering behavior: the accumulated vectors become the desired vector.
  if (count > 0) {
    // Take the average of all the fleeing vectors
    sum.div(count);
    sum.normalize();
    sum.mult(maxspeed);
    PVector steer = PVector.sub(sum, velocity);
    steer.limit(maxforce);
    applyForce(steer);
  }
}

Alignment

Alignment is the behavior that makes agents steer in the same direction as their neighbors. Cohesion is the behavior that steers an agent towards the center of its neighbors.

For alignment,

PVector align(ArrayList<Agent> agents) {
  float neighbordist = 50;
  PVector sum = new PVector(0, 0);
  int count = 0;

  for (Agent other : agents) {
    float d = PVector.dist(location, other.location);
    // If the distance is less than a predetermined quantity, collect the neighbor's velocity
    if ((d > 0) && (d < neighbordist)) {
      sum.add(other.velocity);
      count++;
    }
  }

  if (count > 0) {
    sum.div(count);
    sum.normalize();
    sum.mult(maxspeed);
    PVector steer = PVector.sub(sum, velocity);
    steer.limit(maxforce);
    return steer;
  } else {
    return new PVector(0, 0);
  }
}

 

Cohesion

Last but not least, we must code the cohesion behavior. Cohesion is a sort of attractive steering force. We may call it a seeking behavior: it looks for the average location of all neighboring agents and applies a steering vector based on the agent’s location and this target. So we code the seek behavior first and then reference it in the cohesion method.

PVector seek(PVector target) {
  // Make a vector from the agent to the target; the target will be fed in by the cohesion method.
  PVector desired = PVector.sub(target, location);
  desired.normalize();
  desired.mult(maxspeed);
  PVector steer = PVector.sub(desired, velocity);
  steer.limit(maxforce);
  return steer;
}

Now we can establish our cohesion method:

PVector cohesion(ArrayList<Agent> agents) {
  float neighbordist = 50;
  PVector sum = new PVector(0, 0);
  int count = 0;

  for (Agent other : agents) {
    float d = PVector.dist(location, other.location);
    if ((d > 0) && (d < neighbordist)) {
      sum.add(other.location);
      count++;
    }
  }

  if (count > 0) {
    sum.div(count);
    return seek(sum);     // steer toward the average location of the neighbors
  } else {
    return new PVector(0, 0);
  }
}

 

With separation, alignment, and cohesion in place, we can begin to create our first flocking algorithm.
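As a rough sketch of how the three methods might be combined (the weights are something to tune by eye; note that separate() applies its force internally in the version above, while align() and cohesion() return steering vectors):

// Combine the three flocking behaviors into a single call on the Agent class.
void flock(ArrayList<Agent> agents) {
  separate(agents);                 // applies its steering force internally
  PVector ali = align(agents);
  PVector coh = cohesion(agents);
  ali.mult(1.0);                    // alignment weight, adjust to taste
  coh.mult(1.5);                    // cohesion weight, adjust to taste
  applyForce(ali);
  applyForce(coh);
}

Calling flock(agents) from draw() in place of the lone separate() call is then enough to get the full flocking behavior.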

Entry 5 – Back to Autonomous Agents

Before anything I conjured up about what I wanted this project to be, there was one idea: generating code that could create beautiful, organic visual compositions using agents, compositions that could eventually become architectural designs. But the fact of the matter is that, as someone interested in architectural design, I have a very practical mind. I am very much concerned with form, with tectonics, with sensible spatial configurations that humans could actually live in. At first I thought it was possible to go directly from code to design, but I was very disappointed to find out that that was not the case.

Oftentimes, what you produce with agent code is so erratic and impractical that it would never become architecture on its own without personal editing. The concepts generated by code and agents are just that, concepts; they may very well serve as inspiration for a feasible design, but they will always need to be challenged by the personal input of the designer in order to become clear and purposeful. In reality, the designer creates a sketch in Processing, then exports lines (if the sketch is in 3D) or images to a CAD program to be cleaned up, traced, or sometimes built over with something entirely different. With Processing you are also able to export meshes, but the meshes themselves need to be cleaned and expanded to resemble anything close to architecture. This limitation made the whole thing seem a bit less magical at first, so I almost completely abandoned the idea of using agents and looked at other things that could serve a more practical purpose in my learning of architecture. This is how I went back to trying to realize something in Unreal and Unity. I am always struggling with self-doubt, so I had to ask whether it was indeed worth it.

After much reflection on the workflow, I came to the conclusion that creating sketches in Processing is a novel way of finding inspiration for architectural forms, as long as it is clear that a Processing sketch is just part of the concept stage and that a lot of work is needed to translate the visuals into tangible 3D forms that could be used as part of an architectural project.

So I’ve decided to finish what I started and go back to the roots of my project for the next few days. I did some early experimentation with agents and geometrical forms while I was still trying (and failing miserably) to get the game-engine part of the project done, and I think building on that will do.

There are a few things that I am going to do. First, I am going to lay down the theoretical framework for autonomous agents in 2D and 3D, and then create three unique sketches.

Entry 4 – Unreal here we go.

Area to be visualized in Unreal

Things are starting to take off. My goals for the remaining weeks are as follows:

  • Prepare a real-time visualization in Unreal of part of the Regent Street neighborhood in relation to the site of my design for an Italian restaurant.
  • Generate a point cloud of the context from Google Maps imagery (this is something I want to try).
  • Produce an aerial rendered view of the Italian restaurant.
  • Create four Processing sketches, along with architectural forms inspired by them.

This week I spent time modeling one of the colored blocks in the map above. I will be modeling the two center rows of buildings in high detail and the outermost ones in low detail.

blog-captures

I am using the Google Earth ruler to get accurate height and size data for the buildings.

Aerial view to be recreated and rendered

Google Earth 3D

Josie’s Restaurant

I will be updating these blog posts twice a week from now on.


Week 3 – Stripes and Ridges

Week 3 is here and I completed both a Processing sketch and 3D Model.

I completed the following Lynda.com tutorial on Photoshop:

  • Photoshop CC Essential Training

For the Processing sketch, I wanted to continue using particle systems in the hope of creating an architectural composition. I have always been captivated by VFX like shattering and collisions, and I thought I could replicate a similar effect with a program that draws lines between particles as soon as it verifies that they are in the vicinity of one another. The way I achieved this was with a function that checks the distance between two particles, where s is an arbitrary distance in pixels: 10 pixels in the case of my script.

void detectCollision(Particle p) {
  if (PVector.dist(location, p.location) <= p.s / 2) {
    p.velocity = new PVector(0, 0);   // stop the other particle on contact
    p.stopped = true;
  }
}

In the main draw cycle, a new particle is created whenever the mouse is pressed and the frame count is divisible by 10. A line is drawn between two particles as soon as the distance between them is at most s/2. Note that this results in the ridges usually seen when something shatters.

for (int u = 0; u < particles.size(); u++) {
  Particle p2 = (Particle) particles.get(u);
  p.detectCollision(p2);
  if (PVector.dist(p.location, p2.location) <= p2.s / 2) {
    line(p.location.x, p.location.y, p2.location.x, p2.location.y);
  }
}

You can check the script here

For the 3D model, I wanted to experiment with a workflow that would allow me to use polygonal modeling by using my sketches as a start. I created a unique sketch using splines.

Splines used as sketch

I then carefully traced the sketch using polygon planes, extruding edge by edge to conform to the stripes. I let my imagination determine the heights, working from a four-story building. I created human-sized ellipses to maintain a sense of scale.

Wireframe Model

Aerial view of the form

Week 2 – Modular Building

This week I started by getting reacquainted with 3ds Max and Photoshop. I watched the following two video tutorials from Lynda.com.

  • Photoshop CC 2015 One-on-One Fundamentals
  • 3ds MAX 2017 Essential Training

I had already used 3ds Max before, to create a design for the Limnology Building next to Lake Mendota, so I was pretty familiar with a lot of the modeling tools. The 2017 version introduced a new interface though, so things look a little different.

On the other hand, I am completely new to Photoshop, and the idea of compositing images and renders into architectural visualizations is still somewhat daunting. Architecture is fundamentally about drawings and the presentation of 2D still images, so Photoshop will perhaps be the most important tool at my disposal and one that I need to master regardless of how deep I go into 3D real-time visualization. The ability to produce a convincing render and to composite a realistic image of a building, its surroundings, and its materials is absolutely necessary for any architecture student, and this skill will not be phasing out anytime soon.

Last week, I briefly touched upon the concept of modularity in the process of building expansive 3D environments. We won’t get to begin creating a town for a while, but as part of prepping for it, I wanted to create a 3D model of a building that uses modular panels across its skin to create a fun, playful, and plastic façade. So I created a series of modules, shown below:

Modular Pieces

The resulting building looks very interesting and consists of these four modules.

Completed building