Entry 9 – Conclusion

To cap things off for the semester, I would like to reflect on what I accomplished.

The good –

I think I gave a solid performance with respect to finalizing the Processing project I started a year ago. My experience with coding was very limited until I started this independent study. Now I am able to code and customize flocking algorithms and other creative coding applications to generate interesting visual compositions. I developed three Processing sketches this semester using object-oriented programming. I see myself continuing to code for visual art and as a way of finding new inspiration for my designs.

The bad –

I would say that I only accomplished about 50% of what I intended to do. I wanted to be able to visualize one of my projects in a real-time game engine, the type of skill that is becoming extremely useful in the age of virtual reality. While I modeled the basic shapes of all the blocks in the map below, I still needed to model the details and textures. I undertook a very big project in that respect. The next steps would have involved merging all of the scenes together to create the map.

block-1 block-2 block-3 block-4 block-5 block-6 block-7 block-8 block-9

The Future –

It is my intention to continue learning Unity and to create an interactive architectural visualization that would allow the user to have predefined views of the building within the context of the city and also to walk around in first person. It is also my goal to find ways to use Processing sketches as concepts for architectural form creation. My idea is to export sketches as images, lines, or meshes, and to be able to visualize them in 3D programs and use them as inspiration for further design.

As I have said before, I have learned a lot doing this and there’s still a lot I want to do.

Thanks for everything.

Entry 8 – Trailing Agents against attractors and repellents

For the second composition, I had to do quite a lot of thinking. Rather than having agents respond to steering forces from other agents, I wanted to give them the ability to react to their previous positions, that is, the trail they were leaving behind. If there is anything that becomes obvious with regular flocking algorithms, it is that in most cases the movement is very erratic. This happens because the steering vectors act on a given agent whenever the source is within a predetermined radius, which is equivalent to saying that an agent has eyes everywhere.

diagram-8

Rather than using this paradigm, it is convenient to add an angle of vision to the agent's behavioral methods:

diagram-9

But before we get to how this was done in the behavioral methods, an important difference between this algorithm and the previous one is that the trail drives the movement of the agent. This is done by first extrapolating a future location vector based on the agent's current velocity:

diagram-11
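A minimal sketch of that extrapolation (the 25-pixel lookahead is my own assumption, not necessarily the value behind diagram-11):

PVector predict = velocity.copy();
predict.normalize();
predict.mult(25);                  // assumed lookahead distance in pixels
PVector futureLocation = PVector.add(location, predict);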

The other important distinction of this flocking algorithm is that it uses a pseudo path-following behavior directed by the trail position vectors. Essentially, the agent keeps following the path it was originally set on when its speed was randomly selected at initialization. In practice, this gives us almost straight paths unless an attractor or repeller is nearby, in which case the steering force slowly makes the agent change course.

To implement the angle of vision, I had to change the methods for separation and cohesion to include a calculation of the angle between the trail positions and the velocity:

diagram-12
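As a rough illustration of the idea (the names trail and visionAngle are placeholders of mine; diagram-12 shows the actual code), a cohesion method restricted to a vision cone might look like this:

PVector cohesion(ArrayList<PVector> trail) {
  float neighbordist = 50;
  float visionAngle = radians(45);   // assumed half-angle of the vision cone
  PVector sum = new PVector(0, 0);
  int count = 0;
  for (PVector t : trail) {
    float d = PVector.dist(location, t);
    if ((d > 0) && (d < neighbordist)) {
      // Angle between the agent's heading and the direction to the trail point
      PVector toPoint = PVector.sub(t, location);
      float theta = PVector.angleBetween(velocity, toPoint);
      if (theta < visionAngle) {     // only points the agent can 'see' count
        sum.add(t);
        count++;
      }
    }
  }
  if (count > 0) {
    sum.div(count);
    return seek(sum);
  }
  return new PVector(0, 0);
}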

The separation method uses the same principle.

Lastly, I needed to include a method for tracing the tail. For some reason, I am getting artifacts in the composition which I haven't been able to correct: straight lines appear across the sketch.

diagram-13
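For reference, a simple way to trace a tail is to store recent positions and connect consecutive ones (a sketch of mine, not the exact code behind diagram-13). One common source of the straight-line artifacts described above is connecting two trail points across a screen-edge wrap, which the distance check below guards against:

ArrayList<PVector> trail = new ArrayList<PVector>();

void updateTrail() {
  trail.add(location.copy());
  if (trail.size() > 100) {
    trail.remove(0);               // cap the tail length
  }
}

void displayTrail() {
  stroke(0, 80);
  for (int i = 1; i < trail.size(); i++) {
    PVector a = trail.get(i - 1);
    PVector b = trail.get(i);
    // Skip abnormally long segments, e.g. from edge wrapping
    if (PVector.dist(a, b) < 20) {
      line(a.x, a.y, b.x, b.y);
    }
  }
}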

The result is a really slick visualization of movement which, for all intents and purposes, comes fairly close to the way we navigate architectural spaces: we mostly walk in straight paths and occasionally make turns.

 


Entry 7 – Creating my first Flock composition

Having learned the theoretical background behind flocking algorithms in Processing, it is time to compose a sketch.

My idea for this sketch was to introduce a little variation on the typical flocking algorithm. I wanted there to be a competition between two classes of agents: the regular Agents and another class I decided to call Enemies.

First, there is an additional class called Enemies, which uses the inheritance feature of OOP to acquire and expand the capabilities of the regular Agent class:

diagram-5
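A minimal sketch of what that inheritance could look like (the field names maxspeed and r are assumptions of mine; diagram-5 shows the actual class):

class Enemy extends Agent {
  Enemy(float x, float y) {
    super(x, y);          // reuse the Agent constructor
    maxspeed = 3;         // enemies can then be tuned independently
  }

  void display() {
    fill(255, 0, 0);      // enemies render red, regular agents green
    noStroke();
    ellipse(location.x, location.y, r * 2, r * 2);
  }
}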

The separation, alignment, and cohesion behaviors follow the same guidelines explained in my previous entry. But the Agent class carries the addition of a repel method in charge of steering the agents away from the enemies:

diagram-4
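In spirit, the repel method mirrors separation but points away from enemies instead of neighbors. A hedged sketch (diagram-4 holds the real code; range is the configurable influence distance):

void repel(ArrayList<Enemy> enemies, float range) {
  PVector sum = new PVector();
  int count = 0;
  for (Enemy e : enemies) {
    float d = PVector.dist(location, e.location);
    if ((d > 0) && (d < range)) {
      PVector diff = PVector.sub(location, e.location);  // points away from the enemy
      diff.normalize();
      diff.div(d);                   // closer enemies repel more strongly
      sum.add(diff);
      count++;
    }
  }
  if (count > 0) {
    sum.div(count);
    sum.normalize();
    sum.mult(maxspeed);
    PVector steer = PVector.sub(sum, velocity);
    steer.limit(maxforce);
    applyForce(steer);
  }
}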

By having this function take a different range parameter, I can tailor how close an object has to be to another based on its type. For example, I can have the enemies influence the agents over a larger distance. I can also use it to introduce a repelling force among the enemies themselves.

Finally, I must call on all the functionality of the agents and enemies and apply the repelling method accordingly:

diagram-6
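As a sketch of how the pieces could be wired together in draw() (the range values and loop structure are my own guesses; diagram-6 shows the actual loop):

void draw() {
  background(255);
  for (Agent a : agents) {
    a.repel(enemies, 100);   // larger range: enemies influence agents from afar
    a.update();
    a.display();
  }
  for (Enemy e : enemies) {
    e.repel(enemies, 30);    // smaller range keeps enemies apart from each other
    e.update();
    e.display();
  }
}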

The result is really organic. With the red and green colors, even though we are technically watching a war between organisms, the visualized forms almost give off a Christmas feeling.


 

In order to automate the process of capturing an image from the sketch, I invoked the keyPressed() method and connected it to an image function that uses Processing's built-in JPEG export, saveFrame():

 

diagram-7
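The gist is only a few lines. saveFrame() is Processing's built-in frame exporter, and the #### pattern is replaced with the current frame number (the particular key binding here is my own choice):

void keyPressed() {
  if (key == 's') {
    saveFrame("capture-####.jpg");
  }
}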


Entry 6 – Agents: Theoretical Framework

How do we code Agents? Before we begin building autonomous agents, we must understand what an agent can and cannot do:

  • An agent has a limited ability to perceive its environment: an agent must have methods to reference other objects in the code. The extent to which it can interact with other objects is entirely up to us, but it will most likely be limited in some way, just like a living thing.
  • An agent reacts to its environment by calculating an action: actions in this context are forces that drive the dynamics of the agent. The way we have calculated forces before is through vector math, and this will be no exception.
  • An agent is a follower, not a leader: though less important than the other two concepts, it is important to understand that we are implementing code to simulate group behavior and dynamics. The trends and properties of the complex system depend on the local interactions of the elements themselves.

Much of our understanding of how to code agents comes from computer scientist Craig Reynolds, who developed behavioral algorithms to animate characters.

What we want to do with agents is create methods for steering, fleeing, wandering, and pursuing, to give the elements life-like substance. These behaviors build on motion with vectors and forces.

The agents of the system we will build will have limited decision making based on a series of actions. Most of the actions we seek to simulate can be described as ‘steering forces’. These steering behaviors may include seeking, fleeing, following a path, following a flow field of vectors, and flocking with the other agents. Flocking can be further dissected into the following steering behaviors: separation, alignment, and cohesion. To get creative with this framework, it is our responsibility to mix and match different behaviors for the agents and see what kind of system we end up simulating.

desired velocity

The most important concept is that of a steering force:

steering force = desired velocity – current velocity

In code, using the static subtraction method of the PVector class:

PVector steer = PVector.sub(desired, velocity);

where the desired velocity is a vector pointing from the agent's location to the target:

PVector desired = PVector.sub(target, location);

diagram-2

Furthermore, we must limit the magnitude of this desired vector, because otherwise the agent would move extremely fast; depending on how far away the target is, it could appear to simply teleport there. The other key point is that once we have the steer vector, we must apply it to our agents as a force.

To do this we must write an applyForce() method:

void applyForce(PVector force) {
  acceleration.add(force);
}

 

We will use the standard Euler integration method to update each agent's position with its velocity.
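For reference, the standard update looks like this in The Nature of Code (a sketch, assuming the usual location, velocity, and acceleration fields):

void update() {
  velocity.add(acceleration);   // acceleration accumulates the applied forces
  velocity.limit(maxspeed);
  location.add(velocity);
  acceleration.mult(0);         // clear the forces for the next frame
}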

Another refinement for this steering method is to use a limiting case for the velocity as the agent approaches the target, where the speed depends on the distance from the agent to the target. We can use an if statement with the magnitude of the desired vector:

// Distance from the agent to the target
float d = desired.mag();
desired.normalize();
if (d < 100) {
  // Map the distance onto a speed inside a hypothetical circle of radius 100
  // around the target: as the agent enters that area, its speed scales from 0 to maxspeed
  float m = map(d, 0, 100, 0, maxspeed);
  desired.mult(m);
} else {
  desired.mult(maxspeed);
}

 

Flock Behavior

Interesting systems can be created by applying Reynolds's steering algorithms to simulate particular group behaviors seen in nature. The three main behavioral methods in flocking are separation, cohesion, and alignment.

diagram-3

 

Separation

Separation is the method that gives agents the ability to evaluate how close they should be to their neighbors, depending on the magnitude of the ‘separation force’ we give them.

When dealing with group behavior, we are going to have to create a method that accepts an ArrayList of all agents.

This is how we will write our setup() and draw():

ArrayList<Agent> agents;

void setup() {
  size(320, 240);
  agents = new ArrayList<Agent>();
  for (int i = 0; i < 100; i++) {
    agents.add(new Agent(random(width), random(height)));
  }
}

void draw() {
  for (Agent a : agents) {
    a.separate(agents);
    a.update();
    a.display();
  }
}

In our Agent class we must create the separate() method.

void separate(ArrayList<Agent> agents) {
  // Desired separation distance: when any agent is this close to another,
  // a vector pointing away from the neighbor influences its velocity
  float desiredseparation = r * 2;
  PVector sum = new PVector();
  // Count of agents within the desired separation
  int count = 0;
  for (Agent other : agents) {
    float d = PVector.dist(location, other.location);
    if ((d > 0) && (d < desiredseparation)) {
      // A vector from the other agent to this agent, in other words a fleeing vector
      PVector diff = PVector.sub(location, other.location);
      diff.normalize();
      // Weight by distance so that a closer agent flees faster than a farther one
      diff.div(d);
      // Accumulate the fleeing vectors from all nearby agents
      sum.add(diff);
      count++;
    }
  }
  // Turn the accumulated fleeing vectors into the desired vector for the agent
  if (count > 0) {
    // Average all the fleeing vectors
    sum.div(count);
    sum.normalize();
    sum.mult(maxspeed);
    PVector steer = PVector.sub(sum, velocity);
    steer.limit(maxforce);
    applyForce(steer);
  }
}
Alignment

Alignment is the behavior that makes agents steer in the same direction as their neighbors. Cohesion is the behavior that steers an agent toward the center of its neighbors.

For alignment:

PVector align(ArrayList<Agent> agents) {
  float neighbordist = 50;
  PVector sum = new PVector(0, 0);
  int count = 0;
  for (Agent other : agents) {
    float d = PVector.dist(location, other.location);
    // If the distance is less than a predetermined quantity, collect the neighbor's velocity
    if ((d > 0) && (d < neighbordist)) {
      sum.add(other.velocity);
      count++;
    }
  }
  if (count > 0) {
    sum.div(count);
    sum.normalize();
    sum.mult(maxspeed);
    PVector steer = PVector.sub(sum, velocity);
    steer.limit(maxforce);
    return steer;
  } else {
    return new PVector(0, 0);
  }
}

 

Cohesion

Last but not least, we must code the cohesion behavior. Cohesion is essentially an attractive steering force. We may call it a seeking behavior: it looks for the average location of all neighboring agents and applies a steering vector based on the agent's location and this target. So we code the seek behavior first, and then reference it in the cohesion method.

PVector seek(PVector target) {
  // A vector from the agent to the target, which will be fed in by the cohesion method
  PVector desired = PVector.sub(target, location);
  desired.normalize();
  desired.mult(maxspeed);
  PVector steer = PVector.sub(desired, velocity);
  steer.limit(maxforce);
  return steer;
}

Now we can establish our cohesion method:

PVector cohesion(ArrayList<Agent> agents) {
  float neighbordist = 50;
  PVector sum = new PVector(0, 0);
  int count = 0;
  for (Agent other : agents) {
    float d = PVector.dist(location, other.location);
    if ((d > 0) && (d < neighbordist)) {
      sum.add(other.location);
      count++;
    }
  }
  if (count > 0) {
    // Seek the average location of the neighbors
    sum.div(count);
    return seek(sum);
  } else {
    return new PVector(0, 0);
  }
}

 

With separation, alignment, and cohesion, we can begin to create our first flocking algorithm.
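One way to combine them is a flock() method that weights the three steering forces. A minimal sketch, assuming separate() is rewritten to return its steer vector the way align() and cohesion() do:

void flock(ArrayList<Agent> agents) {
  PVector sep = separate(agents);  // assumes a PVector-returning separate()
  PVector ali = align(agents);
  PVector coh = cohesion(agents);
  // Weight the behaviors; separation usually dominates to avoid crowding
  sep.mult(1.5);
  ali.mult(1.0);
  coh.mult(1.0);
  applyForce(sep);
  applyForce(ali);
  applyForce(coh);
}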

Entry 5 – Back to Autonomous Agents

Before anything I conjured up about what I wanted this project to be, there was one idea: the idea of generating code that could create beautiful organic visual compositions using agents, compositions that could eventually become architectural designs. But the fact of the matter is that, as someone interested in architectural design, I have a very practical mind. I am very much concerned with form, with tectonics, with sensible spatial configurations that humans could actually live in. At first I thought it was possible to go directly from code to design, but I was very disappointed to find out that that was not the case.

Oftentimes, what you produce with agent code is so erratic and impractical that it would never become architecture on its own without personal editing. The concepts generated by code and agents are just that, concepts; they may very well serve as inspiration for buildable design, but they will always need to be challenged by the personal input of the designer in order to become clear and purposeful. In reality, what happens is that the designer creates a sketch using Processing, then exports lines (if the sketch is in 3D) or images to a CAD program to be cleaned up or traced, sometimes building something completely different on top of them. With Processing you are also able to export meshes, but the meshes themselves need to be cleaned and expanded to resemble anything close to architecture. This limitation made the whole idea seem a bit less magical at first, so I almost completely abandoned agents and looked at other things that could serve a more practical purpose in my learning of architecture. And this is how I went back to trying to realize something in Unreal and Unity. I am always struggling with self-doubt, so I had to ask whether it was indeed worth it.

After much reflection on the workflow, I came to the conclusion that creating sketches in Processing is a novel way of finding inspiration for architectural forms, as long as it is clear that a Processing sketch is just part of the concept stage and that a lot of work will need to be done to translate the visuals into tangible 3D forms that could be used as part of an architectural project.

So I’ve decided to finish what I started and go back to the roots of my project for the next few days. I did some early experimentation with agents and geometrical forms while I was still trying, and failing miserably, to get the game engine part of the project done, and I think building on that will do.

There are a few things I am going to do. First, I am going to lay down the theoretical framework for autonomous agents in 2D and 3D, and then create three unique sketches.

Entry 4 – Unreal, here we go.

Area to be visualized in Unreal

Things are starting to take off. My goals for the remaining weeks are as follows:

  • Prepare a real-time visualization in Unreal of part of the Regent Street neighborhood in relation to the site of my design for an Italian restaurant.
  • Generate a point cloud of the surrounding context from Google Maps images – this is something I want to try.
  • Produce an aerial rendered view of the Italian restaurant.
  • Create four Processing sketches, plus architectural forms inspired by them.

I spent this week modeling one of the colored blocks in the map above. I will be modeling the two center rows of buildings in high detail and the outermost ones in low detail.

blog-captures

I am using Google Earth's ruler tool to get accurate data on the heights and sizes of the buildings.

Aerial View to be recreated

Google Earth 3D

Josie’s Restaurant

I will be updating these blog posts twice a week from now on.


Week 3 – Stripes and Ridges

Week 3 is here, and I completed both a Processing sketch and a 3D model.

I completed the following Lynda.com tutorial on Photoshop:

  • Photoshop CC Essential Training

For the Processing sketch, I wanted to continue using particle systems in the hope of creating an architectural composition. I have always been captivated by VFX like shattering and collisions. I thought I could replicate a similar effect with a program that drew lines between particles as soon as it verified they were in the vicinity of one another. I achieved this with a function that checks the distance between two particles, where s is an arbitrary distance in pixels, 10 in the case of my script.

void detectCollision(Particle p) {
  if (PVector.dist(location, p.location) <= p.s/2) {
    p.velocity = new PVector(0, 0);
    p.stopped = true;
  }
}

In the main draw cycle, a mouse press triggers the creation of a particle whenever the frame count is divisible by 10. A line will be drawn between two particles as soon as the distance between them is s/2 or less. Note that this results in the ridges usually seen when something shatters.

for (int u = 0; u < particles.size(); u++) {
  Particle p2 = (Particle) particles.get(u);
  p.detectCollision(p2);
  if (PVector.dist(p.location, p2.location) <= p2.s/2) {
    line(p.location.x, p.location.y, p2.location.x, p2.location.y);
  }
}

You can check the script here

For the 3D model, I wanted to experiment with a workflow that would let me do polygonal modeling using my sketches as a starting point. I created a unique sketch using splines.

Splines used as sketch

I then carefully traced the sketch using polygon planes, extruding edge by edge to conform to the stripes. I let my imagination determine the height, based on a four-story building, and created human-sized ellipses to maintain a sense of scale.

Wireframe Model

Aerial view of the form

Week 2 – Modular Building

This week I started by getting reacquainted with 3DS MAX and Photoshop. I watched the following two video tutorials from Lynda.com:

  • Photoshop CC 2015 One-on-One Fundamentals
  • 3ds MAX 2017 Essential Training

I had already used 3ds MAX before, to create a design for the Limnology Building next to Lake Mendota, so I was pretty familiar with a lot of the modeling tools. The 2017 version introduced a new interface, though, so things look a little different.

On the other hand, I am completely new to Photoshop, and the idea of compositing images and renders into architectural visualizations is still kind of daunting. Architecture is fundamentally about drawings and the presentation of 2D still images, so Photoshop will perhaps be the most important tool at my disposal and one that I need to master regardless of how much I delve into 3D real-time visualization. The ability to produce a convincing render and composite a realistic image of a building, its surroundings, and its materials is absolutely necessary for any architecture student, and this skill will not be phasing out anytime soon.

Last week, I briefly touched upon the concept of modularity in the process of building expansive 3D environments. We won't get to begin creating a town for a while, but in the process of prepping for it, I wanted to create a 3D model of a building that used modular panels across its skin to create a fun, playful, plastic façade. So I created a series of modules, as shown below:

Modular Pieces

The resulting building is very interesting looking and consists of these 4 modules.

Completed building

Week 1 – Modular Composition

I am back. It’s been a few months since I last coded in Processing and this represents a new opportunity for me to delve right back into particle systems.

This semester, however, I would like to take it to the next level by making Processing compositions that are “architectural” in nature. In any case, I needed to review the major concepts of object-oriented programming. I spent the majority of the week watching Daniel Shiffman’s very recent YouTube channel, where he provides supplementary lectures to the two books I used in the fall to learn Processing.

So I wanted my first script to be somewhat angular and modular. Modularity will be a big theme this summer as I set out to model expansive urban scenery. The best way to accomplish this is through the use of modular set pieces that are then arranged, repeated, and combined to create buildings. This technique is used to create the diverse open worlds of popular video games like Fallout and Grand Theft Auto.

To do this, I thought of the random walker example at the beginning of Shiffman's The Nature of Code. We can set a variable to be randomly chosen and have that variable determine whether the walker steps up, down, left, or right. To constrain the direction, I made this small function:

void changeDir(){
    float r = random(0, 1);
    float newAngle = angle;
    if(r > 0.5){
      newAngle+=90;
    }else{
      newAngle-=90;
    }

Depending on the value of ‘r’, we set the velocity direction of each new particle using the sine and cosine of the new angle:

velocity = new PVector(sin(radians(newAngle)), cos(radians(newAngle)));
    angle = newAngle;
  }

This confined the trace of the particles to be linear and in one of the 4 cardinal directions, so you end up with particles tracing a network of squares. Pretty neat!

You can see my script here http://www.openprocessing.org/sketch/361605

See you in another post this week.

 

Week 15 – Final Composition and Review of Semester

Final Composition

For my final composition, I really wanted to go all out. I was thinking of a way of shaping a terrain, like a generative landscape circumscribed by beautiful curves and colors. From my experience studying electromagnetism in my junior year, I knew that vector fields of attraction or repulsion can create beautiful compositions. So my goal was to simulate a pseudo vector field (it really isn't physically realistic) with three attractors randomly positioned across the canvas. Toxiclibs is a great resource for simplifying the process of coding such a scenario, because its vector classes and VerletPhysics engine take care of most of the manual setup of forces and the corresponding motion of the particles. You simply set up your grid of particles and attractors as arrays, and the kinematics are calculated automatically.
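A minimal sketch of that setup (assuming the toxiclibs version used in The Nature of Code, where AttractionBehavior lives in toxi.physics2d.behaviors; the radius and strength values are my own guesses, not the ones in my final composition):

import toxi.geom.*;
import toxi.physics2d.*;
import toxi.physics2d.behaviors.*;

VerletPhysics2D physics;

void setup() {
  size(640, 360);
  physics = new VerletPhysics2D();
  // Three attractors, randomly positioned across the canvas
  for (int i = 0; i < 3; i++) {
    Vec2D pos = new Vec2D(random(width), random(height));
    physics.addBehavior(new AttractionBehavior(pos, 250, 0.5f));
  }
  // A grid of particles for the attractors to act on
  for (int x = 0; x < width; x += 10) {
    for (int y = 0; y < height; y += 10) {
      physics.addParticle(new VerletParticle2D(x, y));
    }
  }
}

void draw() {
  background(255);
  physics.update();                // the engine integrates the motion
  stroke(0, 60);
  for (VerletParticle2D p : physics.particles) {
    point(p.x, p.y);
  }
}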

 

Prof. Ponto has asked me to prepare a brief review of the semester by answering some questions.

1- What are your overall feelings on your project?

– I am very satisfied with what I accomplished this semester. This was my first time coding, and each week I felt that both the complexity of my compositions and my flow in writing code got a lot better. At the end of the semester I started getting a bit sidetracked by my other academic responsibilities, but I am very glad that I ended up with 14 different sketches, some of which could be used for the conceptualization of architectural form.

2- How well did your project meet your original project description and goals?

From a logistics point of view, according to my syllabus I should have ended up with 16 different sketches. I was only able to complete 14 compositions and 12 blog entries, though I am finishing up two additional ones before Monday. So that was an acceptable performance, in my opinion, but not stellar. I was not able to do Chapter 5 of “The Nature of Code”, which would have introduced me to boids and flocking behaviors, the real core of autonomous agent systems. That said, I was able to get a lot of practice with particle systems.

3- What were the largest hurdles you encountered? How did you overcome these challenges?

Coding is difficult. It takes me a lot of time. I feel like I can code, but probably not for a living; I am not expedient enough. But there is something that really pushes me to want to continue doing this. I really want to prove that I can be part of the next generation of generative designers. I want to be able to use these simple algorithms in the conception of a building. When I look at the images that the code generates, I always find myself extracting the form and making it architecture in my mind. That intrigue of what it could become if I were to use the curves or point clouds as the basis of a building plan, landscaping, or actual 3D forms was motivating enough to keep pushing harder.

4- If you had more time, what would you do next?

I definitely want there to be a next time, next semester. I have set my mind on coding boids and creating meshes from their trajectories; Processing, with the addition of another library, has the capability of creating isosurfaces from point data that can then be exported to a 3D modeling package for polishing and sculpting. I want to develop a workflow between my code and the software I already use, like Rhino and 3DS MAX. I want to continue the study of agent-based systems as another avenue of form exploration.

Thank you so much.