Entry 9 – Conclusion

To cap things off for the semester, I would like to reflect on what I accomplished.

The good –

I think I made solid progress in finalizing the Processing project I started a year ago. My experience with coding was very limited until I began this independent study. Now I am able to code and customize flocking algorithms and other creative coding applications to generate interesting visual compositions. I developed three Processing sketches this semester using object-oriented programming. I see myself continuing to code for visual art and as a way of finding new inspiration for my designs.

The bad –

I would say that I only accomplished about 50% of what I intended to do. I wanted to be able to visualize one of my projects using a real-time game engine. This is the type of skill that is becoming extremely useful in the age of virtual reality. While I modeled the basic shapes of all the blocks in the map below, I still needed to model the details and textures. I think I undertook a very big project in that respect. The next steps would have involved merging all of the scenes together to create the map.

block-1 block-2 block-3 block-4 block-5 block-6 block-7 block-8 block-9

The Future –

It is my intention to continue learning Unity and to create an interactive architectural visualization that would allow the user to see predefined views of the building within the context of the city and also walk around in first person. It is also my goal to find ways to use Processing sketches as concepts for architectural form creation. My idea is to export sketches as images, lines, or meshes, visualize them in 3D programs, and use them as inspiration for further design.

As I have said before, I have learned a lot doing this and there’s still a lot I want to do.

Thanks for everything.

Entry 8 – Trailing Agents against attractors and repellents

For the second composition, I had to do quite a lot of thinking. Rather than having agents respond to steering forces from other agents, I wanted to give them the ability to react to their previous positions, or the trail they were leaving behind. If there is anything that becomes obvious with regular flocking algorithms, it is that in most cases the movement is very erratic. This happens because the steering vectors act on a given agent as long as the source is within a predetermined radius. This is equivalent to saying that an agent has eyes everywhere.

diagram-8

Rather than using this paradigm, it is convenient to add an angle of vision to the agent's behavioral methods:

diagram-9

But before we get to how this was done in the behavioral methods, an important difference between this algorithm and the previous one is that the trail drives the movement of the agent. This is done by first extrapolating a future location vector based on the current velocity of the agent:

diagram-11
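The idea is roughly the following (a minimal sketch; the lookAhead parameter and the method name are my own, not the original code):

PVector futureLocation(PVector location, PVector velocity, float lookAhead) {
  PVector dir = new PVector(velocity.x, velocity.y);  // copy so we do not modify the real velocity
  dir.normalize();
  dir.mult(lookAhead);                                // how far ahead the agent "looks"
  return PVector.add(location, dir);
}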

The other important distinction of this flocking algorithm is that it uses a pseudo path-following behavior directed by the trail position vectors. Basically, the agent keeps following the path it was set on when its velocity was randomly selected at initialization. In practice, this gives us almost straight paths unless an attractor or repeller is nearby, in which case the steering slowly makes the agent change course.

To implement the angle of vision, I had to change the methods for separation and cohesion to include a calculation for the angle between the trail positions and the velocity:

diagram-12

The separation method uses the same principle.
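As a rough sketch of what that check could look like (the visionAngle value and the method name are assumptions on my part):

float visionAngle = PI / 3;   // hypothetical half-angle of the field of view

// Returns true if a trail position falls inside the agent's cone of vision.
boolean canSee(PVector point) {
  PVector toPoint = PVector.sub(point, location);
  float theta = PVector.angleBetween(velocity, toPoint);
  return theta < visionAngle;
}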

Lastly, I needed to include a method for tracing the trail. For some reason, I am having some artifacts in the composition which I haven't been able to correct: straight lines appear across the sketch.

diagram-13
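For reference, a minimal trail-drawing sketch might look like the following, assuming the agent stores its past positions in an ArrayList<PVector> called trail (my naming). One hedged guess about the artifact: if the agents wrap around the screen edges, the segment connecting the last point before a wrap to the first point after it would draw exactly this kind of straight line across the canvas, so that may be worth checking.

void displayTrail() {
  stroke(0, 40);
  for (int i = 1; i < trail.size(); i++) {
    PVector prev = trail.get(i - 1);
    PVector curr = trail.get(i);
    line(prev.x, prev.y, curr.x, curr.y);   // connect consecutive trail positions
  }
}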

The result is a really slick visualization of movement which for all intents and purposes comes fairly close to the way we navigate architectural spaces. We mostly walk in straight paths and occasionally make turns.

 


Entry 7 – Creating my first Flock composition

Having learned the theoretical background behind flocking algorithms in Processing, it is time to compose a sketch.

My idea for this sketch was to introduce a little variation on the typical flocking algorithm. I wanted there to be a competition between two classes of agents: the regular agents and another class I decided to call Enemies.

First, there is an additional class called Enemies which uses the inheritance feature of OOP to acquire and expand the capabilities of the regular Agent class:

diagram-5
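A minimal sketch of what that inheritance might look like (the constructor signature, the red fill, and the assumption that Agent has location and r fields are mine, not the original code):

class Enemies extends Agent {
  Enemies(float x, float y) {
    super(x, y);                // reuse the Agent constructor
  }

  void display() {
    fill(255, 0, 0);            // draw enemies in red so they read differently from agents
    noStroke();
    ellipse(location.x, location.y, r * 2, r * 2);
  }
}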

The separation, alignment, and cohesion behaviors follow the same guidelines as explained in my previous entry. But the Agent class carries the addition of a repel method in charge of steering the agents away from the enemies:

diagram-4
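A hedged sketch of such a repel method, patterned on the separation code from the previous entry (the range parameter and field names are assumptions, not the original implementation):

void repel(ArrayList<Enemies> enemies, float range) {
  PVector sum = new PVector();
  int count = 0;
  for (Enemies e : enemies) {
    float d = PVector.dist(location, e.location);
    if ((d > 0) && (d < range)) {
      PVector diff = PVector.sub(location, e.location);  // vector pointing away from the enemy
      diff.normalize();
      diff.div(d);                                       // closer enemies push harder
      sum.add(diff);
      count++;
    }
  }
  if (count > 0) {
    sum.div(count);
    sum.normalize();
    sum.mult(maxspeed);
    PVector steer = PVector.sub(sum, velocity);
    steer.limit(maxforce);
    applyForce(steer);
  }
}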

By having this function take a different range parameter, I can tailor how close an object has to be to another based on its type. For example, I can have the enemies exert a greater influence on the agents over a larger distance. I can also use it to introduce a repel force among the enemies themselves.

Finally, I must call on all the functionality of the agents and enemies and apply the repelling method accordingly:

diagram-6
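A sketch of how the main draw() loop might tie this together, under the assumption that align() and cohesion() return steering vectors as in the previous entry (the range values are placeholders of my own):

void draw() {
  background(255);
  for (Agent a : agents) {
    a.separate(agents);                 // separation applies its own force internally
    a.applyForce(a.align(agents));
    a.applyForce(a.cohesion(agents));
    a.repel(enemies, 120);              // enemies influence agents over a larger distance
    a.update();
    a.display();
  }
  for (Enemies e : enemies) {
    e.repel(enemies, 60);               // enemies also keep some distance from each other
    e.update();
    e.display();
  }
}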

The result is really organic. With the red and green colors, even though we are technically watching a war between organisms, the forms that emerge almost give off a Christmas feeling.


 

In order to automate the process of acquiring an image from the sketch, I invoked the keyPressed() method and connected it to an image function which uses the built-in saveFrame() export to write out a JPEG.

 

diagram-7
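A minimal version of that hook might look like this (the key binding and the file name pattern are my own choices):

void keyPressed() {
  if (key == 's' || key == 'S') {
    saveFrame("frames/composition-####.jpg");   // #### is replaced by the current frame number
  }
}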

 

 

Entry 6 – Agents Theoretical Framework

How do we code Agents? Before we begin building autonomous agents, we must understand what an agent can and cannot do:

  • An agent has a limited ability to perceive the environment: An agent must have methods to reference other objects in the code. The extent to which it is able to interact with other objects is entirely up to us, but will most likely be limited in some way, just like living things.
  • An agent reacts to its environment by calculating an action: Actions in this context are forces that drive the dynamics of the agent. The way we have calculated forces before is through vector math, and this will be no exception.
  • An agent is a follower, not a leader: Though less central than the other two concepts, it is important to understand that we are implementing code to simulate group behavior and dynamics. The trends and properties of the complex system depend on the local interactions of the elements themselves.

Much of our understanding for coding agents comes from computer scientist Craig Reynolds who developed behavioral algorithms to animate characters.

What we want to do with agents is create methods for steering, fleeing, wandering, and pursuing to give the elements life-like substance. These behaviors will use motion with vectors and forces.

The agents of the system we will build will have limited decision making based on a series of actions. Most of the actions we seek to simulate can be described as ‘steering forces’. These steering behaviors may include seeking, fleeing, following a path, following a flow field of vectors, and flocking with the other agents. Flocking can be further dissected into the following steering behaviors: separation, alignment, and cohesion. In order to get creative with this framework, it is our responsibility to mix and match different behaviors for the agents and see what kind of system we end up simulating.


The most important concept is that of a steering force.

Steering force = desired velocity – current velocity.

So PVector steer = PVector.sub(desired, velocity);

We use the static subtraction method of the PVector class:

PVector desired = PVector.sub(target, location);

diagram-2

Furthermore, we must also limit the magnitude of this desired vector, because otherwise the agent will move really fast and, depending on how far the target is, it could appear to simply teleport there. The other key point is that once we have the steer vector, we must apply it to our agents as a force.

To do this we must write an applyForce() method

void applyForce(PVector force) {
  acceleration.add(force);
}

 

We will use the standard Euler Integration motion method to update the agents’ position with velocity.
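For reference, the standard Euler-integration update step looks roughly like this (assuming maxspeed is a field of the class):

void update() {
  velocity.add(acceleration);   // acceleration changes velocity
  velocity.limit(maxspeed);
  location.add(velocity);       // velocity changes position
  acceleration.mult(0);         // clear accumulated forces each frame
}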

Another refinement for this method of steering is to use a limiting case for the velocity as the agent approaches the target, where the speed depends on the distance of the agent to the target. We can use an if statement with the magnitude of the desired vector:

// Distance from the agent to the target
float d = desired.mag();
desired.normalize();
if (d < 100) {
  // Map the distance onto a hypothetical circle of radius 100 around the target.
  // Once the agent enters that area, its desired speed scales from 0 up to maxspeed.
  float m = map(d, 0, 100, 0, maxspeed);
  desired.mult(m);
} else {
  desired.mult(maxspeed);
}

 

Flock Behavior

Interesting systems can be created by applying Reynolds's algorithm for steering to simulate particular group behaviors seen in nature. The three main behavioral methods in flocking are separation, cohesion, and alignment.

diagram-3

 

Separation

Separation is the method that gives agents the ability to evaluate how close they should be to their neighbors, depending on the magnitude of the ‘separation force’ we give them.

When dealing with group behavior, we are going to have to create a method that accepts an ArrayList of all agents.

This is how we will write our setup() and draw()

ArrayList<Agent> agents;

void setup() {
  size(320, 240);
  agents = new ArrayList<Agent>();
  for (int i = 0; i < 100; i++) {
    agents.add(new Agent(random(width), random(height)));
  }
}

void draw() {
  for (Agent a : agents) {
    a.separate(agents);
    a.update();
    a.display();
  }
}

In our Agent class we must create the separate() method:

void separate(ArrayList<Agent> agents) {
  // The desired separation distance: when any agent is this close to another,
  // we want a vector pointing away from that agent to influence its velocity.
  float desiredseparation = r * 2;
  PVector sum = new PVector();
  // Count of agents that are closer than the desired separation
  int count = 0;
  for (Agent other : agents) {
    float d = PVector.dist(location, other.location);
    if ((d > 0) && (d < desiredseparation)) {
      // A vector from the other agent to this agent, in other words a fleeing vector
      PVector diff = PVector.sub(location, other.location);
      diff.normalize();
      // Divide by distance so that a closer agent flees faster than one farther away
      diff.div(d);
      // Add up the fleeing vectors from all nearby agents
      sum.add(diff);
      count++;
    }
  }
  // Apply the steering behavior so that the average of the fleeing vectors
  // becomes the desired velocity for the agent.
  if (count > 0) {
    sum.div(count);
    sum.normalize();
    sum.mult(maxspeed);
    PVector steer = PVector.sub(sum, velocity);
    steer.limit(maxforce);
    applyForce(steer);
  }
}

Alignment

Alignment is the behavior that makes agents want to steer in the same direction as their neighbors. Cohesion is the behavior that steers the agent towards the center of its neighbors.

For alignment,

PVector align(ArrayList<Agent> agents) {
  float neighbordist = 50;
  PVector sum = new PVector(0, 0);
  int count = 0;
  for (Agent other : agents) {
    float d = PVector.dist(location, other.location);
    // If the distance is less than a predetermined quantity, collect the neighbor's velocity
    if ((d > 0) && (d < neighbordist)) {
      sum.add(other.velocity);
      count++;
    }
  }
  if (count > 0) {
    sum.div(count);
    sum.normalize();
    sum.mult(maxspeed);
    PVector steer = PVector.sub(sum, velocity);
    steer.limit(maxforce);
    return steer;
  } else {
    return new PVector(0, 0);
  }
}

 

Cohesion

Last but not least, we must code the cohesion behavior. Cohesion is essentially an attractive steering force. We may call this a seeking behavior: it looks for the average location of all neighboring agents and applies a steering vector based on the agent's location and this target. So we code the seek behavior first and then reference it in the cohesion method.

PVector seek(PVector target) {
  // Make a vector from the agent to the target, which will be fed in by the cohesion method
  PVector desired = PVector.sub(target, location);
  desired.normalize();
  desired.mult(maxspeed);
  PVector steer = PVector.sub(desired, velocity);
  steer.limit(maxforce);
  return steer;
}

Now we can establish our cohesion method:

PVector cohesion(ArrayList<Agent> agents) {
  float neighbordist = 50;
  PVector sum = new PVector(0, 0);
  int count = 0;
  for (Agent other : agents) {
    float d = PVector.dist(location, other.location);
    if ((d > 0) && (d < neighbordist)) {
      sum.add(other.location);
      count++;
    }
  }
  if (count > 0) {
    sum.div(count);
    // Seek the average location of the neighbors
    return seek(sum);
  } else {
    return new PVector(0, 0);
  }
}

 

With separation, alignment, and cohesion, we can begin to create our first flocking algorithm.
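As a closing sketch, here is one way the three behaviors could be bundled into a single flock() method that each agent calls every frame (the weighting values are placeholders of my own, not from the original sketches):

void flock(ArrayList<Agent> agents) {
  separate(agents);                 // separation applies its own force internally
  PVector ali = align(agents);
  PVector coh = cohesion(agents);
  ali.mult(1.0);                    // the behaviors can be weighted differently
  coh.mult(1.5);
  applyForce(ali);
  applyForce(coh);
}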

Entry 5 – Back to Autonomous Agents

Before anything I conjured up about what I wanted this project to be, there was one idea: the idea of generating code that could create beautiful, organic visual compositions using agents that could eventually become architectural designs. But the fact of the matter is that, as someone interested in architectural design, I have a very practical mind. I am very much concerned with form, with tectonics, with sensible spatial configurations that humans could live in. At first I thought it was possible to go directly from code to design, but I was very disappointed to find out that that was not the case.

Oftentimes, what you produce with agent code is so erratic and impractical that it would never become architecture on its own without personal editing. The concepts generated by code and agents are just that, concepts, and may very well serve as inspiration for buildable design, but they will always need to be challenged by the personal input of the designer in order to become clear and purposeful. In reality, what happens is that the designer creates a sketch using Processing, then exports lines (if the sketch is in 3D) or images to a CAD program to be cleaned up, traced, or sometimes used as the base for something else entirely. With Processing you are also able to export meshes, but the meshes themselves need to be cleaned and expanded to resemble anything close to architecture. This limitation made the whole approach seem a bit less magical at first, so I almost completely abandoned the idea of using agents and looked at other things that could serve a more practical purpose in my learning of architecture. And this is how I went back to trying to realize something in Unreal and Unity. I am always struggling with self-doubt, so I had to ask whether it was indeed worth it.

After much reflection on the workflow, I came to the conclusion that creating sketches in Processing is a novel way of finding inspiration for architectural forms, as long as it is clear that a sketch in Processing is just part of the concept stage and that a lot of work will need to be done in order to translate the visuals into tangible 3D forms that could be used as part of an architectural project.

So I’ve decided to finish what I started and go back to the roots of my project for the next few days. I had done some early experimentation with agents and geometrical forms while I was still trying, and failing miserably, to get the game-engine part of the project done, and I think this will do.

There are a few things that I am going to do. First, I am going to lay down the theoretical framework for autonomous agents in 2D and 3D, and then create three unique sketches.