Panel Rationalization (OWL)

never has panel rationalization been so straightforward! k-means clustering to group similar panels!

panels are clustered by two parameters:

  • panel area
  • surface normals

each panel is then replaced with its cluster’s set dimensions (the average of that cluster) plus a 20mm offset. the results are pretty decent, with minimal overlap even at the steep bits of the surface.
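for the curious, the core of it looks roughly like this outside of grasshopper (a minimal sketch using numpy and scikit-learn rather than the OWL component; the panel numbers are made up, and each panel is assumed to arrive as a row of [area, nx, ny, nz]):

# rough sketch of the panel rationalization idea: cluster panels by
# area + surface normal, then take one "set" panel per cluster (the
# cluster average). all panel values below are placeholders.
import numpy as np
from sklearn.cluster import KMeans

# one row per panel: [area (m2), nx, ny, nz]
panels = np.array([
    [1.20, 0.00, 0.00, 1.00],
    [1.18, 0.10, 0.00, 0.99],
    [0.65, 0.70, 0.00, 0.71],
    [0.70, 0.68, 0.05, 0.73],
    [0.66, 0.72, 0.02, 0.69],
])

k = 2  # number of panel types to rationalize down to
km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(panels)

for label in range(k):
    members = panels[km.labels_ == label]
    avg = members.mean(axis=0)
    # in the actual definition, the averaged panel dimensions also get
    # the 20mm offset added before the replacement panels are placed
    print("panel type", label, ":", len(members), "panels,",
          "avg area", round(float(avg[0]), 2), "avg normal", np.round(avg[1:], 2))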

panel types

number of clusters (types of rectangles) from 2 – 50, iterations = 3

clustering iterations

k (number of clusters) = 25, iterations running from 1 – 30

EDIT: a little extra definition showing colour clustering to reduce the number of colour variations needed from 1124 to 10-50.


colour variations from 10-30, 30 iterations


10 colours, iterations from 10-30, showing how the clustering works in realtime

plugins used: OWL


Machine Learning with OWL

Just attended a workshop last week on machine learning in grasshopper, and here are some results!

clustering final

slightly edited version of the final presentation board

The above is a combination of a few techniques in machine learning, used to find clusters of correlated sites in Dubai, based on a given combination of parameters.

here’s the breakdown:

The intention was to identify hotspots in Dubai, based on a few parameters that affect the popularity of a given site, and then to create a set of street furniture that would be contextually sensitive to each site.

graph dark

here’s a pretty relationship graph to whet your appetite

1. Mapping the popularity of places in Dubai

Circular_Graph_Places_Properties

graph of all the parameters used and their numerical values in a circle

Mapping the popularity of areas in Dubai uses k-means clustering to find groups of places based on a few factors (a rough code sketch follows below):

  • ratings (taken from the Google Places API)
  • number of reviews and their scores
  • the security presence on site (to gauge how private a given building is)
  • building capacity (size)
  • type of building (commercial, residential, utility, etc)
  • distance of that building from other buildings (to gauge the density of the area)


the k-means clusters and the averaged characteristics of each cluster (e.g. in the light blue cluster, the metro stations, power plant, burj al arab, and the police station generally have a strong security presence, get a rating of about 4.3 stars, and for some reason are considered small buildings relative to the other sites)
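in plain python, step 1 boils down to something like this (a hedged sketch: the place names and numbers are placeholders, the real thing runs OWL’s clustering on data pulled via the Google Places API, and the per-column scaling is my own assumption so that review counts don’t drown out everything else):

# sketch of the "hotspot" clustering: one feature row per place,
# scale every column, then k-means. all numbers are placeholders.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import MinMaxScaler

places = ["metro station", "power plant", "burj al arab", "police station",
          "dubai mall", "residential block"]

# columns: rating, review count, security presence (0-1),
#          capacity, building type code, mean distance to neighbours (m)
features = np.array([
    [4.3,   120, 0.9,  2000, 2, 300],
    [4.1,    15, 0.9,  5000, 3, 900],
    [4.4,  8000, 0.8,  1500, 1, 400],
    [4.2,    40, 1.0,   800, 3, 350],
    [4.5, 90000, 0.3, 50000, 1, 200],
    [3.8,    10, 0.1,   300, 0, 150],
])

# scale every column to 0-1 so review counts don't dominate the distance metric
scaled = MinMaxScaler().fit_transform(features)

km = KMeans(n_clusters=3, n_init=10, random_state=0).fit(scaled)
for place, label in zip(places, km.labels_):
    print(label, place)

# averaged characteristics per cluster (what the circular graph summarizes)
for label in range(3):
    print("cluster", label, "mean:",
          np.round(features[km.labels_ == label].mean(axis=0), 1))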

 

2. Design of chairs that would directly correlate to the clusters on the map

a parametric model of a street bench is created (the old-fashioned way, in grasshopper) with a set of 12 parameters defining width, divisions, backrest height, etc., and then run through an autoencoder to reduce its parametric dimensionality to 2.

this means that with just two sliders, one is able to create a set of street furniture that captures the relationships between all 12 of the parameters used to create it.

this also means that by creating a 2d grid of points, one can see the entire series of permutations of the design in question, be it a chair, a house, or a city.
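stripped down to plain numpy, that autoencoder looks something like this (a rough sketch, not the OWL component we actually used: one tanh hidden layer of 2 ‘latent’ sliders trained to reconstruct its own 12 inputs, with randomly generated chairs standing in for the real parametric model):

# tiny autoencoder: 12 chair parameters -> 2 latent values -> 12 parameters.
# trained to reproduce its own input, so the 2 latent values end up
# capturing the main relationships between the 12 sliders.
import numpy as np

rng = np.random.default_rng(0)
X = rng.random((200, 12))          # placeholder: 200 chairs x 12 parameters in 0-1

n_in, n_latent = 12, 2
W_enc = rng.normal(0, 0.1, (n_in, n_latent))
b_enc = np.zeros(n_latent)
W_dec = rng.normal(0, 0.1, (n_latent, n_in))
b_dec = np.zeros(n_in)
lr = 0.05

def encode(x):
    return np.tanh(x @ W_enc + b_enc)       # 12 -> 2

def decode(z):
    return z @ W_dec + b_dec                # 2 -> 12 (linear output)

for epoch in range(2000):
    Z = encode(X)
    X_hat = decode(Z)
    err = X_hat - X                         # reconstruction error

    # backpropagate the error through the decoder, then the encoder
    grad_W_dec = Z.T @ err / len(X)
    grad_b_dec = err.mean(axis=0)
    d_hidden = (err @ W_dec.T) * (1 - Z**2)  # tanh derivative
    grad_W_enc = X.T @ d_hidden / len(X)
    grad_b_enc = d_hidden.mean(axis=0)

    W_dec -= lr * grad_W_dec; b_dec -= lr * grad_b_dec
    W_enc -= lr * grad_W_enc; b_enc -= lr * grad_b_enc

print("reconstruction MSE:", round(float((err**2).mean()), 4))
print(decode(np.array([0.3, -0.7])))        # one latent point -> one chair

once trained, the two sliders from the post are simply the two entries of z fed into decode().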

3 versions of chairs were defined by the designer (well, someone from our team) and fed into the autoencoder to find the strongest correlations between all three designs (the designs themselves are at the top left, bottom left, and bottom right corners of the graphic below).

these correlations are then fed back into the trained autoencoder network to ‘decode’ the relationships between the 3 objects. hence every permutation between these three given designs carries some part of the characteristics of each designed object.

say the top-left design is a simple long bench, the bottom-left a short bench, and the bottom-right a large circular bench with backrests and divisions in between. The network then finds a set of ‘morphed’ objects in between that each have a bit of ‘division-ness’, ‘long-ness’, and ‘backrest-ness’.
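the ‘morphing’ itself is then just interpolation in that 2-value latent space, something like this (again a sketch: the decoder weights and the three corner codes are placeholders standing in for the trained network and the three encoded designs):

# interpolating between three encoded designs: every blend of the three
# 2d latent codes decodes back into a chair that mixes their characteristics.
import numpy as np

rng = np.random.default_rng(1)
W_dec = rng.normal(0, 0.5, (2, 12))   # stand-in for the trained decoder weights

def decode(z):
    return z @ W_dec                   # 2 latent values -> 12 chair parameters

# latent codes of the three designer-made chairs (placeholders; in practice
# these come from encoding the long bench, short bench and circular bench)
z_long = np.array([0.9, 0.1])
z_short = np.array([-0.8, 0.2])
z_circular = np.array([0.0, -0.9])

# barycentric blend: the weights sum to 1, so each result is "a bit of" each design
steps = 5
for i in range(steps + 1):
    for j in range(steps + 1 - i):
        a, b = i / steps, j / steps
        c = 1.0 - a - b
        z = a * z_long + b * z_short + c * z_circular
        print(np.round(decode(z), 2))  # one morphed set of 12 parameters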

the entire set is then run through another k-means clustering pass to find out which version of seating is most suitable for which areas in Dubai, based on a different set of related parameters, this time the amount of seating area, the number of divisions, and the bench length. e.g. Dubai Mall and Emirates Mall have the highest traffic (gauged by the number of reviews and the size of the building), so they would require seating with the largest amount of area.

chair

the graphic above shows the entire array of all possible permutations of the street furniture that fit within our given definition.

I hope you can see the potential of using machine learning in architecture: more than ever, it allows real, big data to be directly linked to design in a way that goes beyond a designer’s intuition alone. This technique can be extended to include all the parameters that a modern building should satisfy, like sustainability targets, cost, environmental measures, passivhaus implementations, fire regulations.. the list is endless. We as architects should start incorporating them into design automatically, so that we give ourselves more time to do the things we enjoy, like conceptual input, or our own flair in design.

the above statement doesn’t apply to people who really really enjoy wrangling with fire regulations and sustainability issues manually, or believe in the nostalgia of the architect-as-master-builder who is able to handle all the different incoming sources of requirements and create something that satisfies them ALL. disclaimer: i’m not one of them.

P.S. oh yes, and since we had a bit of free time during the workshop, and we didn’t know what to call our project, we decided to machine-learn our team name using a Markov chain.


the project name. it’s a set of slightly random babbling that the definition below spat out after reading an article about machine learning on wikipedia. kinda sounds like english though, so it’s all fine.

project name process
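for reference, a word-level markov chain is only a handful of lines in plain python (a sketch: the actual definition was wired up in grasshopper with OWL/ghpython/anemone, and the wikipedia article is stubbed out with a short placeholder string):

# word-level markov chain: learn which word tends to follow which,
# then babble by sampling from those transitions.
import random
from collections import defaultdict

# placeholder corpus: in the workshop this was a wikipedia article on machine learning
corpus = ("machine learning is the study of algorithms that improve "
          "through experience machine learning algorithms build a model "
          "based on sample data in order to make predictions").split()

transitions = defaultdict(list)
for current_word, next_word in zip(corpus, corpus[1:]):
    transitions[current_word].append(next_word)

random.seed(3)
word = random.choice(corpus)
name = [word]
for _ in range(5):                       # six-word project name
    followers = transitions.get(word)
    if not followers:                    # dead end: jump to a random word
        word = random.choice(corpus)
    else:
        word = random.choice(followers)
    name.append(word)

print(" ".join(name))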

plugins used: OWL, ghpython, anemone

Backpropagation, Machine Learning and all that jazz (anecdotal)

backpropagation in grasshopper

backpropagation in grasshopper! (in ironpython actually)

(sorry about the small text in the gif, i’ll make a nicer one in the near future)

recently went on a quest to fully understand backpropagation (used in training neural networks (Machine Learning/AI)), and came upon this amazing blog post detailing an implementation of backpropagation in python.

I spent one day following the code line by line, but still wasn’t able to grasp the structure of how it worked, so I did what I usually do when I can’t understand a difficult theory: break it up into bits and implement them in grasshopper!

The result of the deconstructed backpropagation algorithm looks pretty straightforward in hindsight, but then hindsight is always 20/20.

Actually running the loop through 500 epochs (as mentioned in the blog post) took ridiculously long, so the definition doesn’t do ‘real’ learning, but even after 25 epochs one could see the accuracy actually climbing.

By comparison, here’s a gif of the same implementation in pure(r) python (it’s ironpython, inside grasshopper, inside rhino3D). note that both implementations left out cross-validation, and instead did a simple 66% train / 33% test split:

backpropagation in ironpython

computing the same dataset for 25 epochs took 0.8 seconds. T_T
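for reference, the skeleton of that training loop in plain python/numpy looks something like this (a condensed sketch of the same idea, not the code from the blog post or my grasshopper definition: one sigmoid hidden layer, a made-up two-blob dataset, 25 epochs, and the same 66/33 split):

# one-hidden-layer network trained by backpropagation on a toy dataset.
import numpy as np

rng = np.random.default_rng(0)

# toy dataset: 2 features, 2 classes (two blobs), one-hot targets
X = np.vstack([rng.normal(0, 1, (100, 2)), rng.normal(3, 1, (100, 2))])
y = np.zeros((200, 2)); y[:100, 0] = 1; y[100:, 1] = 1

# simple 66% train / 33% test split, as in the post
idx = rng.permutation(200)
train, test = idx[:132], idx[132:]

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

n_hidden = 4
W1 = rng.normal(0, 0.5, (2, n_hidden)); b1 = np.zeros(n_hidden)
W2 = rng.normal(0, 0.5, (n_hidden, 2)); b2 = np.zeros(2)
lr = 0.5

for epoch in range(25):                       # 25 epochs, as in the gif
    # forward pass
    H = sigmoid(X[train] @ W1 + b1)           # hidden layer activations
    out = sigmoid(H @ W2 + b2)                # output "probabilities"

    # backward pass: output error, then blame shared back to the hidden layer
    d_out = (out - y[train]) * out * (1 - out)
    d_hidden = (d_out @ W2.T) * H * (1 - H)

    W2 -= lr * H.T @ d_out / len(train);        b2 -= lr * d_out.mean(axis=0)
    W1 -= lr * X[train].T @ d_hidden / len(train); b1 -= lr * d_hidden.mean(axis=0)

    # accuracy on the held-out 33%
    pred = sigmoid(sigmoid(X[test] @ W1 + b1) @ W2 + b2).argmax(axis=1)
    acc = (pred == y[test].argmax(axis=1)).mean()
    print("epoch", epoch + 1, "test accuracy:", round(float(acc), 2))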

However, the grasshopper definition was, among other things, slow enough for the simple human mind to make some really important observations.

Here are a few interesting observations so far:

First, neural networks are in fact… a dictionary of weights! that’s it! (oh wait, see the update below) (it was so mind-blowingly simple to me I had to walk around for a few minutes wondering if i’d missed something). In actual fact, it is a network of weights that incrementally updates over many iterations to resemble an ‘abstraction’ of what the dataset is. the weights are then used to extrapolate data (among other things). The dictionary bit was just down to the code being written in Python.

(update 20170820: actually, it is a dictionary of weights and an ‘activation function’, as I found out just a few days after this post but didn’t get around to updating it. an activation function is a function applied to each neuron’s weighted sum that puts it on a sloped graph (like a sigmoid/tanh, or even y=mx+b, which makes a sloped line) so that the network knows whether going up or down is a good thing)

Think of training a neural network as ‘finding a polynomial(ish) function that fits a given set of numbers’. Once you find such a function, you can extrapolate from it to guess new data, like below:

polynomial maker

I found this to be extremely valuable in helping me intuitively grasp what a neural network ‘looks like’ (the gif above is the implementation of it in python).
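the analogy itself is only a few lines of numpy (a sketch with made-up points):

# fit a polynomial to some known points, then use it to guess a new one.
import numpy as np

x = np.array([0, 1, 2, 3, 4, 5], dtype=float)
y = np.array([1.0, 2.1, 4.8, 9.2, 16.5, 25.9])   # roughly x**2 + 1, with noise

coeffs = np.polyfit(x, y, deg=2)     # "find a polynomial that fits the numbers"
guess = np.polyval(coeffs, 6.0)      # extrapolate: what happens at x = 6?
print(np.round(coeffs, 2), round(float(guess), 1))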

In the same way, a neural network makes a best guess at this ‘function’ again and again. The guess is tested against a given result (labels/classes/actual output that you compare against), and at every iteration an error is calculated (error = how far the current mapping/function is from the right answer) and backpropagated (see the next paragraph). Note that nothing is absolute: the neural network’s ‘guess’ is a set of probabilities: 10% says it’s A, 63% says it’s B, and 27% says it’s C, so the best guess is B.

Secondly, backpropagation means ‘pass that error from the result back layer by layer (in the opposite direction) and find out which neuron is responsible for how much of that error’, and is essentially the ‘blame game’ played by the neurons in the network.

An anecdotal way of thinking about training goes like this:

‘the output neuron at the end of the line gets a result from his team of hard-working neurons and sees that it’s off by -0.34, so the weights (which are numbers) need to arrange themselves so that they move upward by 0.34 to get it right. he shouts back down the line: ‘oi! which of you got it wrong?’ and the neurons huddle together and start assigning blame to each other, the first one saying, ‘i’m only responsible for 0.1 parts of this result, check the neuron before me, he gave me 0.9 parts of the paperwork already’, and the second going ‘i only did 0.3 parts of this, check the guy before me’, and so on and so forth. the output neuron then sees how many parts of the blame fall on which neuron and finds that, among all of them, it was actually the second guy (0.3) who contributed the most error. And he gave him a good whipping. XD The next time round, the second neuron was more careful in doing his paperwork.’

And that, I believe, is my current understanding of how backpropagation works.
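in code, the blame-sharing step from the anecdote is essentially one line: the output error gets split among the hidden neurons in proportion to the weights connecting them, scaled by how sensitive each neuron was (a sketch with made-up numbers):

# one output neuron is off by -0.34; how much of that is each hidden neuron's fault?
import numpy as np

error_at_output = -0.34
weights_to_output = np.array([0.1, 0.9, 0.3])   # "parts of the paperwork" each hidden neuron handed over
hidden_activations = np.array([0.6, 0.2, 0.8])  # what each hidden neuron actually fired

# blame is shared in proportion to the connecting weight,
# scaled by how sensitive each neuron was (sigmoid derivative a*(1-a))
blame = error_at_output * weights_to_output * hidden_activations * (1 - hidden_activations)
print(np.round(blame, 3))   # the neuron with the biggest (absolute) blame gets "the whipping"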

Third, this might be obvious to some people, but I also found that it helps if I think of the layers of the neural network as the ‘lines’ in the network graph rather than the ‘dots’, at least in the process of coding it up. That is because we write each network layer as a function mapping something to something else (input > hidden, hidden > output), and not as a container that stores an ‘abstraction’ or mapping. I realised this after reading something like this:

(image: a typical neural network diagram, with the ‘input layer’, ‘hidden layer’ and ‘output layer’ labels placed on the nodes)

and wondering ‘what do i need to define as my input layer!? you said there’s an input layer, where is it? is it a magical layer that doesn’t exist? or is it just supposed to do neuron = input? and what is the hidden layer doing? is it supposed to make an abstraction of the raw input? I don’t know, it just looks like it’s sitting there accepting neurons from the input layer…‘ (that monologue didn’t go anywhere)

Hence I posit that neural network maps be labeled this way in the future:

(image: the same diagram, but with the layer labels placed on the connections between the nodes instead)

Then it would be quite clear that the hidden layer works by mixing all the raw inputs together and outputting an abstraction, and that the output layer then uses the 3 (or more) abstractions of the data to make a slightly educated guess. Backpropagate and repeat.

Fourth, some observations.

Learning rate. I realised that the neural network’s accuracy basically just ‘sat there’ for the first 20 or so epochs because the weights in the neurons were changing at a rate so slow that it didn’t manage to tip the balance of probabilities (e.g. iteration 1 neuron: ‘oh, i got that wrong. lemme change it a bit’ -> iteration 2: ‘oh, it’s still wrong, lemme change it a bit again…’ and so on, up to 20 epochs). Perhaps just plodding along at the same pace isn’t the most efficient way of doing things. This warrants further investigation.

More Hidden Layers seem a bit counterintuitive at first. Until one reads Geoff Hinton’s slide in this post. Then one realizes he might just need to feed More Data!

Initial weights matter quite a bit. Starting off on the wrong foot means there’s a lot more distance to cover to get to a good accuracy, and more distance means more computation time. Is this where Naive Bayes comes in? By having better initial weights, does the network converge faster? With the caveat of exposing the network to bias? Will try this very soon.

So, my current takeaway from the past two months of reading and practising machine learning:

  • Neural networks are: a set of numbers (called weights) that get updated n times, with an error check that tells the function how best to change the numbers to get fewer errors the next time round.
  • I guess an article called ‘weight updater with error checking’ sounds like something headed for obscurity, so Neural Networks via Backpropagation is used instead. I mean, I wouldn’t want to name a building I designed a ‘terraced house with front porch’ for the same reasons.
  • Of course, network algorithms are the secret sauce that I have yet to learn, so I expect this article to be updated as I go along!

All of the above is purely anecdotal. I hope more knowledgeable readers might point out in the comments below the bits that I intuited wrongly, or bits of fact that have been peddled falsely or might lead to horrible misunderstandings down the road. I will update my beliefs *wink wink Bayesian Inference wink wink* based on these comments so as to arrive at a more accurate frame of understanding.

Some amazingly useless functions


um. rain. yes, a rain generator. nope i haven’t managed to parse weathermap data yet.


and i guess this is a rain occlusion calculator. also, no wind rose. yet. XD

Well.. i mean, i do get the point of calculating rain occlusion on a site. it could be used to drive landscaping decisions, building placement, and perhaps material choices based on the amount of weathering each part of the building would get.

Or even drive the massing itself so as to utilize the natural topographical features of the site and surrounding landscape to create passive water catchment areas for purposes of rainwater collection and water management.

I could probably go on for a while citing non-existent features and their amazing potential, but, but… no.

I would love to write about potential, but only when the corresponding components are ready to be battle-tested. So, not yet.

p.s. mmm Ironpython can be such a can of worms when it wants to be.

EDIT: (2017/03/27) – finally able to split the gif from Weather Underground into separate pngs for passing into grasshopper!


some dirty data
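for anyone trying the same thing, the gif-splitting step can be done with Pillow in a few lines (a sketch: the file names are placeholders, and this runs in regular python outside grasshopper, since Pillow doesn’t play nicely with ironpython):

# split an animated gif (e.g. a weather radar loop) into numbered pngs.
from PIL import Image, ImageSequence

gif = Image.open("radar_loop.gif")                 # placeholder file name
for i, frame in enumerate(ImageSequence.Iterator(gif)):
    frame.convert("RGB").save("radar_frame_%03d.png" % i)

print("wrote", i + 1, "frames")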