How to use Flux

TLDR : sign up for flux, click site extractor, pick region, press extract, open your program, download from flux, have site model (with caveats).

 

Heard of Flux? It's a pretty slick project that came out of Google X in 2014 for the design and construction industry. At the time of writing, it supports connections to many of the industry's stalwart programs – 3ds Max, AutoCAD, Dynamo, Excel, Revit, Google Sheets, Rhino, Grasshopper and SketchUp.

flux01.JPG

In essence, Flux turns your data into digital soup that can then be turned back into data that works for different programs. It seems you can do some interesting things to the soup while it's in that form too, but I haven't explored those myself. Do play with the other stuff too!

Ok, enough digressing. The main reason I want to introduce Flux to you is that it has a really handy OSM-to-your-program-of-choice extractor called Site Extractor.

flux03

Those who use Grasshopper for Rhino may have heard of Elk, a plugin that grabs OSM data (OpenStreetMap ~ the Wikipedia of maps), which can then be used to create site models.

Well, Site Extractor is not really aimed at you. It’s aimed at your colleagues who have not used Elk and have been slaving away making 3d site models the manual way (by hand. with lines from AutoCAD. across a period of days. you’ve done it before, that’s why you now use Elk.)

The Site Extractor can be used to extract data into many different programs, but I will only focus on Rhino and Grasshopper.

Step 0 – Sign Up

yes.

Step 1 – Make New Project

click on, and launch flux projects. this is where you make a folder (project) for you to put your site model information into.

flux04.JPG

flux05

Step 2 – Get Site Model Data

flux06

it’s OSM! pretty rendering courtesy of Mapbox. tick the things you want, click select project, and click extract to ‘your_project_name’.

flux02

Step 2.5 – Download and install plugin for your 3d modelling program

flux07.JPG

Step 3 – Download Site Model from Flux (Rhino)

After you have installed the Rhino plugin (make sure your Rhino is Rhino 5 SR9 or above, or else it won't work), open Rhino and you will see the Flux tab.

a. log in to your account, and then

b. click connect

flux09.JPG

click on the Receive Connections button (download arrow)

flux 09.JPG

download the parts that you want. Note that everything downloads onto the current layer, so if you want to keep the data separate, make the layers beforehand and set each one as the current layer before downloading from Flux.

flux10
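(Side note: if you would rather script that layer housekeeping, here is a tiny rhinoscriptsyntax sketch you could run from Rhino's EditPythonScript before downloading. The layer names are just examples, not anything Flux prescribes.)

```python
import rhinoscriptsyntax as rs

# make one layer per Flux dataset (names are hypothetical), then set one current
# before hitting download in the Flux panel
for name in ["flux_buildings", "flux_roads", "flux_topography"]:
    if not rs.IsLayer(name):
        rs.AddLayer(name)

rs.CurrentLayer("flux_buildings")   # now download the buildings from Flux onto this layer
```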

You have questions.

Question 1 : why do the meshes look so discoloured?

Answer 1 : they are welded. unweld them (refer to GIF in Question 2).

 

Question 2 : why do some buildings look like they’ve got transparent sides?

Answer 2 : some mesh faces (the single pieces of mesh that make up the total mesh) are facing the wrong way (they should all face outward from the building shape). Use UnifyMeshNormals on them.

flux unweld.gif

 

Question 3 : Why are the building heights so strange?

Answer 3 : They are random. When OSM doesn't provide height data, Flux will (optionally) extrude your buildings to random heights. No, Flux doesn't grab building-height information from the Google Maps team, for reasons not explicitly stated. How do we fix that? Try setting up your own building height extrusion in Grasshopper; at least you can set how random you want the heights to be (e.g. random heights of 3-5 floors for suburban areas, and 7-15 floors in town centres).
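For what it's worth, here is a rough sketch of that idea in RhinoPython/GhPython. The floor height and the two footprint lists (suburban_footprints, centre_footprints) are assumptions: you would sort the footprints by zone yourself beforehand.

```python
import random
import rhinoscriptsyntax as rs

FLOOR = 3.2   # assumed floor-to-floor height in metres

def extrude_random(footprints, min_floors, max_floors):
    """Extrude closed footprint curves straight up to a random number of floors."""
    solids = []
    for crv in footprints:
        height = random.randint(min_floors, max_floors) * FLOOR
        solids.append(rs.ExtrudeCurveStraight(crv, (0, 0, 0), (0, 0, height)))
    return solids

# 'suburban_footprints' and 'centre_footprints' are assumed inputs: two lists of
# footprint curves you have already sorted by zone
suburbs = extrude_random(suburban_footprints, 3, 5)    # 3 to 5 floors
centre = extrude_random(centre_footprints, 7, 15)      # 7 to 15 floors
```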

 

Question 4 : Why are there no road widths?

Answer 4 : OSM data doesn't really have that either (there is some, but the formats vary quite a bit). One way is to guess them (when you don't really need correct road widths) with Grasshopper. Here's a script: it takes a guess at road widths by checking for collisions against building footprints. You still have to stick them together yourself though, because region union/solid union in Grasshopper is… something that just doesn't seem to listen to me.

road maker
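To be clear, the sketch below is not the road maker script above, just a stripped-down illustration of the same guessing idea in rhinoscriptsyntax: sample each road centreline and shrink the guessed width wherever it would bump into a nearby footprint. The default width and the layer names are assumptions.

```python
import rhinoscriptsyntax as rs

DEFAULT_WIDTH = 8.0   # assumed default road width in metres
MIN_WIDTH = 3.0

def guess_width(centreline, footprints, samples=20):
    """Shrink the default width wherever the road would otherwise run into a footprint."""
    half = DEFAULT_WIDTH / 2.0
    for pt in rs.DivideCurve(centreline, samples):
        for fp in footprints:
            t = rs.CurveClosestPoint(fp, pt)
            half = min(half, rs.Distance(pt, rs.EvaluateCurve(fp, t)))
    return max(MIN_WIDTH, 2.0 * half)

roads = rs.ObjectsByLayer("flux_roads")          # layer names assumed from the earlier step
buildings = rs.ObjectsByLayer("flux_buildings")
widths = [guess_width(r, buildings) for r in roads]
# offsetting each centreline by width/2 on both sides (and unioning the results)
# is still left to you, as noted above
```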

 

I have personally only used the plugins for Grasshopper, Rhino, and SketchUp, but the steps shouldn't be too different for the other programs.

 

And there you have it! Hope you now know how to make (most of) your site model in 3 steps! Use the extra time to do something more intellectually stimulating. 🙂

Kaggle datasets in Rhino

datasets

melbourne housing dataset.csv from kaggle.

this was using that older dataset (link to kaggle) with 9 features. coordinates were polled from Google's Geocoding API based on the address in the dataset. I believe the updated dataset provides coordinates too, possibly obtained the same way.
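If you're curious, the polling would have looked roughly like this. The endpoint and response shape below are Google's actual Geocoding API, but the CSV column names are my assumptions about the Kaggle file.

```python
import csv
import requests

API_KEY = "YOUR_GOOGLE_API_KEY"
URL = "https://maps.googleapis.com/maps/api/geocode/json"

with open("melbourne_housing.csv", newline="") as f:
    for row in csv.DictReader(f):
        # 'Address' and 'Suburb' are assumed column names in the Kaggle CSV
        address = "{}, {}, Victoria, Australia".format(row["Address"], row["Suburb"])
        resp = requests.get(URL, params={"address": address, "key": API_KEY}).json()
        if resp["status"] == "OK":
            location = resp["results"][0]["geometry"]["location"]
            print(row["Address"], location["lat"], location["lng"])
```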

number of rooms

showing different ranges of number of rooms per unit

kopt

‘if i’m looking for houses that have between 5 and 7 rooms, the newest ones are the light yellow ones on the northeast edge of the city.’

other details that come for free once an Excel datasheet is linked to coordinates in the real world:

contour

height of property, contour of surrounding land

watershed

flow of water through property and general direction of water flow. (flash floods, landslides possibly?)

street view

pictures around the site (from Google Street View)

google directions

of course, different methods of transport to and from the city

clouds

weather data at a given time (historical data is paywalled, so i couldn’t access it)

soil_cropped.gif

soil conditions and suitability for certain forms of construction

aussie

…and some really zoomed out GIS level datasets (i’m still wondering what to do with them.)

Owl in Galapagos

TLDR: predicting rectangles with two neural networks and galapagos.

 

galapagos on owl 3

galapagos used to discover the best shapes for two time series neural networks

just realised that galapagos would potentially be very useful (or actually, another backprop NN might be even faster) for testing out optimal ‘window’ sizes for a time series neural network (the ‘view’ that the neural network sees when it learns to predict a number series).

When predicting with a time series neural network, one of the problems that has bugged me is that we don't know the best window size for prediction (called look_back in this tutorial). Too small and the NN only learns that the series should go up or down; too large and it misses too many details.
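In code, the 'window' is just a sliding slice over the series; a minimal sketch in plain Python/numpy (not the Grasshopper definition):

```python
import numpy as np

def make_windows(series, look_back):
    """Slice a series into overlapping look_back-sized views, each paired with the next value."""
    X, y = [], []
    for i in range(len(series) - look_back):
        X.append(series[i:i + look_back])   # what the network 'sees'
        y.append(series[i + look_back])     # what it should guess next
    return np.array(X), np.array(y)

series = np.sin(np.linspace(0, 12, 200))    # stand-in data
X, y = make_windows(series, look_back=5)    # look_back is the size Galapagos hunts for
```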

This is where galapagos (or any other appropriate learner) comes in to help find the optimal range within which the best predictions can happen.

Galapagos was used to test 7 parameters that directly affected the neural network shape and learning rate (1 for window size, 3 for each NN : number of hidden neurons, learning rate, and steepness of the sigmoid activation function).

param1

after 15 minutes or so, it gave me some pretty decent answers for the parameters required for learning two separate lists of parameters.

It was quite interesting to see that the learning rate varied quite a bit between the two (one was at 0.21, the other at 0.62), and alpha (used to define the steepness of the sigmoid activation function) was at 1.344 and 0.887 respectively (and then I realised that the learning rate is in fact inversely proportional to alpha).

The number of hidden neurons stayed relatively similar, at 4 and 5 neurons respectively. But then, I wouldn't have guessed those values if I had just picked a middling number between the number of inputs and outputs.

param2

the result was a prediction of a series of two parameters that define a rectangle.

Ground truth dataset in Grey, predicted dataset in Yellow.

galapagos on owl 4

the accuracy falloff after training

galapagos on owl 5 initial

before training

galapagos on owl 5 initial2

initial hand tweaking of parameters (didn’t know which ones are best to tweak)

galapagos on owl 5 learnt

so i machine learned those parameters and it got some pretty decent predictions

galapagos on owl 5 shifted

and after shifting some starting rectangles, I realised it predicts up to about 10 rectangles reliably before doing some crazy things.

plugins used: OWL, galapagos

Panel Rationalization (OWL)

never has panel rationalization been so straightforward! k-means clustering to sort similar panels!

panels are sorted by two parameters:

  • panel area
  • surface normals

and then each panel is replaced with a set panel dimension (the average of its cluster) + a 20 mm offset. the results are pretty decent, with minimal overlap even at the steep bits of the surface.
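For anyone who wants the gist without OWL, here is a rough scikit-learn equivalent. The panels list (with area, normal, width and height per panel) is an assumed input from your own panelisation.

```python
import numpy as np
from sklearn.cluster import KMeans

def rationalise(panels, k=25):
    """Group panels by [area, nx, ny, nz] and give each group one standard size (+20 mm)."""
    feats = np.array([[p["area"]] + list(p["normal"]) for p in panels])
    feats = (feats - feats.mean(axis=0)) / (feats.std(axis=0) + 1e-9)  # area and normals on one scale
    labels = KMeans(n_clusters=k, n_init=10).fit_predict(feats)

    types = {}
    for c in range(k):
        members = [p for p, lab in zip(panels, labels) if lab == c]
        types[c] = {
            "width": np.mean([p["width"] for p in members]) + 0.020,    # + 20 mm offset
            "height": np.mean([p["height"] for p in members]) + 0.020,
        }
    return labels, types   # which cluster each panel belongs to, and its replacement size

# panels is an assumed list of dicts, e.g.
# {"area": 1.2, "normal": (0, 0, 1), "width": 1.1, "height": 1.1}
```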

panel types

number of clusters (types of rectangles) from 2 – 50, iterations = 3

clustering iterations

k (number of clusters) = 25, iterations running from 1 – 30

EDIT : a little extra definition showing colour clustering to reduce the number of colour variations needed from 1124 to 10-50.
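The colour version is the same trick on RGB values; a small sketch, again with scikit-learn rather than OWL, and with panel_colours as an assumed N x 3 list:

```python
import numpy as np
from sklearn.cluster import KMeans

# snap every panel colour to its cluster's average so ~1124 unique colours collapse to k shared ones
colours = np.array(panel_colours, dtype=float)      # assumed N x 3 array of RGB values
km = KMeans(n_clusters=10, n_init=10).fit(colours)
reduced = km.cluster_centers_[km.labels_]           # each panel's new, shared colour
```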

18217830_10154861225159064_1385605154_n

colour variations from 10-30, 30 iterations

18197623_10154861722884064_1987930800_n

10 colours, iterations from 10-30, showing how the clustering works in realtime

plugins used: OWL

Machine Learning with OWL

Just attended a workshop last week on machine learning in grasshopper, and here are some results!

clustering final

slightly edited version of the final presentation board

The above is a combination of a few techniques in machine learning, used to find clusters of correlated sites in Dubai, based on a given combination of parameters.

here’s the breakdown:

The intention was to find out different hotspots in Dubai, based on a few parameters that affect the popularity of the given site, and then to create a set of street furniture that would be contextually sensitive to the site.

graph dark

here’s a pretty relationship graph to whet your appetite

1. Mapping the popularity of places in Dubai

Circular_Graph_Places_Properties

graph of all the parameters used and their numerical values in a circle

Mapping the popularity of areas in Dubai utilizes k-means clustering to find groups of places based on a few factors :

  • ratings (taken from the Google Places API)
  • number of reviews and their scores
  • the security presence on site (to gauge how private a given building is)
  • building capacity (size)
  • type of building (commercial, residential, utility, etc)
  • distance of that building from other buildings (to gauge the density of the area)

unnamed

the k-means clusters and the averaged characteristics of each (e.g. in the light blue cluster, the metro stations, power plant, Burj Al Arab, and the police station generally have a strong security presence, get a rating of about 4.3 stars, and for some reason are considered small buildings relative to other sites)

 

2. Design of chairs that would directly correlate to the clusters on the map

a parametric model of a street bench is created (the old fashioned way, in grasshopper) with a set of 12 parameters defining width, divisions, backrest height etc, and then run through an autoencoder to reduce its parametric dimensionality to 2.

this means that with two sliders, one is able to create a set of street furniture that captures the relationships between all 12 parameters used to create it.

this also means that by creating a 2d grid of points, one can see the entire series of permutations of the design in question, be it a chair, a house, or a city.
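The workshop used OWL's autoencoder components; purely as an illustration, here is roughly what the same idea looks like in Keras, with random placeholder data standing in for the sampled bench parameters:

```python
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

# placeholder data: 2000 benches, 12 normalized parameters each (stand-in for the
# parameter samples exported from the Grasshopper definition)
X = np.random.rand(2000, 12)

inp = keras.Input(shape=(12,))
h = layers.Dense(8, activation="tanh")(inp)
z = layers.Dense(2, activation="tanh", name="latent")(h)     # the two 'sliders'
h_dec = layers.Dense(8, activation="tanh")(z)
out = layers.Dense(12, activation="linear")(h_dec)

autoencoder = keras.Model(inp, out)          # 12 -> 2 -> 12
encoder = keras.Model(inp, z)                # 12 -> 2
autoencoder.compile(optimizer="adam", loss="mse")
autoencoder.fit(X, X, epochs=100, batch_size=32, verbose=0)

# rebuild a standalone decoder (2 -> 12) from the two trained decoding layers
z_in = keras.Input(shape=(2,))
decoder = keras.Model(z_in, autoencoder.layers[-1](autoencoder.layers[-2](z_in)))

bench = decoder.predict(np.array([[0.3, -0.6]]))   # two slider values -> a full 12-parameter bench
```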

3 versions of chairs were defined by the designer (well, someone from our team) and fed into the autoencoder to find the strongest correlations between all three designs (the designs themselves are at the top left, bottom left, and bottom right corners of the graphic below).

these correlations are then fed back through the trained autoencoder network to 'decode' the relationships between the 3 objects; hence every permutation between these three given designs carries some part of the characteristics of each designed object.

say the top left design is a simple long bench, the bottom left a short bench, and the bottom right a large circular bench with backrests and divisions in between. The network then finds a set of 'morphed' objects that each have a bit of 'division-ness', 'long-ness' and 'backrest-ness' in them.
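Continuing that hypothetical Keras sketch, the blending step would look something like this (the three corner designs here are just placeholder rows taken from X, not the actual benches):

```python
# encode the three 'corner' designs to 2D latent codes, blend between the codes with
# barycentric weights, and decode every blend back into a full 12-parameter bench
design_a, design_b, design_c = X[0], X[1], X[2]       # placeholder parameter rows
corners = encoder.predict(np.array([design_a, design_b, design_c]))

blends, steps = [], 10
for i in range(steps + 1):
    for j in range(steps + 1 - i):
        u, v = i / float(steps), j / float(steps)
        w = 1.0 - u - v
        blends.append(u * corners[0] + v * corners[1] + w * corners[2])

morphs = decoder.predict(np.array(blends))   # each row: one 'morphed' bench definition
```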

then the entire set is run through another k-means clustering algorithm to find out which version of seating is most suitable for which areas in Dubai, based on a different set of related parameters, this time the amount of seating area, the number of divisions, and the bench length. For example, Dubai Mall and Emirates Mall have the highest traffic (gauged by the number of reviews and the size of the building), so they would require the seating with the largest amount of area.

chair

the graphic above shows the entire array of all possible permutations of the street furniture that fit within our given definition.

I hope you can see the potential of using machine learning in architecture: more than ever, it allows real, big data to be directly linked to design in a way that is not simply a designer's intuition. This technique can be extended to include all the parameters a modern building should handle, like sustainability targets, cost, environmental measures, Passivhaus implementations, fire regulations… the list is endless. We as architects should start incorporating them into design automatically, so that we give ourselves more time to do the things we enjoy, like conceptual input or our own flair in design.

the above statement doesn’t apply to people who really really enjoy wrangling with fire regulations and sustainability issues manually, or believe in the nostalgia of the architect-as-master-builder who is able to handle all the different incoming sources of requirements and create something that satisfies them ALL. disclaimer: i’m not one of them.

P.S. oh yes, since we had a bit of free time during the workshop and didn't know what to call our project, we decided to machine-learn our team name using a Markov chain.

project-name.gif

the project name. it's a set of slightly random babbling that the definition below spat out after reading an article about machine learning on Wikipedia. kinda sounds like English though, so it's all fine.

project name process
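If you want to babble your own project name, a character-level Markov chain is only a few lines. This is a rough stand-in for our definition, and the source file name is made up:

```python
import random
from collections import defaultdict

def markov_babble(text, order=3, length=14):
    """Learn which letter tends to follow each 3-letter chunk of the text, then babble."""
    table = defaultdict(list)
    text = text.lower()
    for i in range(len(text) - order):
        table[text[i:i + order]].append(text[i + order])

    start = random.randint(0, len(text) - order - 1)
    name = text[start:start + order]
    while len(name) < length:
        options = table.get(name[-order:])
        if not options:
            break
        name += random.choice(options)
    return name

# feed it, say, the Wikipedia article on machine learning (the file name is hypothetical)
with open("machine_learning_wiki.txt") as f:
    print(markov_babble(f.read()))
```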

plugins used: OWL, ghpython, anemone

Backpropagation, Machine Learning and all that jazz (anecdotal)

backpropagation in grasshopper

backpropagation in grasshopper! (in ironpython actually)

(sorry about the small text in the gif, i’ll make a nicer one in the near future)

recently went on a quest for the full understanding of backpropagation (used in training neural networks (Machine Learning/AI)), and came upon this amazing blog post detailing the implementation of backpropagation in python.

I spent one day following the code line by line, but still wasn't able to grasp how it was structured, so I did what I usually do when I can't understand a difficult theory: break it up into bits and implement them in grasshopper!

The result of the deconstructed backpropagation algorithm looks pretty straightforward in hindsight, but then hindsight is always 20/20.

Actually running the loop through 500 epochs (as mentioned in the blog post) took hazardously long, so the definition doesn’t do ‘real’ learning, but even after 25 epochs one could see the accuracy actually climbing.

By comparison, here’s a gif of the same implementation in pure(r) python (it’s ironpython, inside grasshopper, inside rhino3D). note that both implementations took out cross validation, and instead did a simple 66% train set and 33% test set split:

backpropagation in ironpython

computing the same dataset for 25 epochs took 0.8 seconds. T_T

However, the grasshopper definition was, among other things, slow enough for the simple human mind to make some really important observations.

Here are a few interesting observations so far:

First, neural networks are in fact… a dictionary of weights! that's it! (oh wait, see the update below) (it was so mindblowingly simple to me I had to walk around for a few minutes wondering if I'd missed something). In actual fact, it is a network of weights that incrementally update over many iterations to resemble an 'abstraction' of what the dataset is. They are then used to extrapolate data (among other things). The dictionary part was simply due to the code being written in Python.

(update 20170820: actually, it is a dictionary of weights plus an 'activation function', as I found out just a few days after this post but didn't get around to updating. an activation function puts each neuron's weighted sum onto a sloped graph (like a sigmoid/tanh, or even y=mx+b, which makes a sloped line) so that the network knows whether going up or down is a good thing)
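Here's what I mean, as a tiny snippet (the alpha parameter is the same steepness knob mentioned in the Galapagos post above):

```python
import math

def sigmoid(x, alpha=1.0):
    """Squash a neuron's weighted sum onto an S-shaped slope between 0 and 1; alpha sets the steepness."""
    return 1.0 / (1.0 + math.exp(-alpha * x))

def sigmoid_slope(y, alpha=1.0):
    # slope written in terms of the sigmoid's own output y; this is what tells
    # backpropagation whether nudging a weight up or down is a good thing
    return alpha * y * (1.0 - y)
```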

Think of training a neural network as 'finding a polynomial(ish) function that fits a given set of numbers'. Once you find that function, you can extrapolate from it to guess new data, like below:

polynomial maker

I found this to be extremely valuable in helping me intuitively grasp what a neural network ‘looks like’ (the gif above is the implementation of it in python).
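The analogy fits in a few lines of numpy too (this isn't the gif's definition, just the same idea with made-up numbers):

```python
import numpy as np

# 'training': fit a polynomial function to some known samples
x = np.arange(8, dtype=float)
y = np.array([1.0, 1.4, 2.3, 4.1, 7.0, 11.2, 16.9, 24.3])   # made-up sample series
poly = np.poly1d(np.polyfit(x, y, deg=2))

# 'prediction': extrapolate past the data to guess values it was never given
print(poly(8), poly(9))
```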

In the same way, a neural network makes a best guess at this 'function' again and again; it is tested against a given result (labels/classes/actual output that you compare against), and at every iteration an error is calculated (error = how far the current mapping/function is from the right answer) and backpropagated (see the next paragraph). Note that nothing is absolute: the neural network's 'guess' is a set of probabilities, e.g. 10% says it's A, 63% says it's B, and 27% says it's C, so the best guess is B.

Secondly, Backpropagation means ‘pass that error from the result back layer by layer (opposite direction) and find out which neuron is responsible for how much of that error’, and is essentially the ‘blame game‘ played by the neurons in the network.

An anecdotal way of thinking about training goes like this:

'the output neuron at the end of the line gets a result from his team of hard-working neurons and sees that it's off by -0.34, so the weights (which are numbers) need to arrange themselves so that they move upward by 0.34 to get it right. he shouts back down the line: 'oi! which of you got it wrong?' and the neurons huddle together and start assigning blame to each other, the first one saying, 'i'm only responsible for 0.1 parts of this result, check the neuron before me, he gave me 0.9 parts of the paperwork already', and the second goes 'i only did 0.3 parts of this, check the guy before me', and so on and so forth. then the output neuron sees how many parts of the blame fall on which neuron and finds that, among all of them, the second guy (0.3) actually gave the most error. And he gave him a good whipping. XD The next time round, the second neuron was more careful in doing his paperwork.'

And that, I believe, is my current understanding of how backpropagation works.
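Put together, the whole blame game fits in a short numpy sketch. This is my own minimal version on a toy XOR dataset, not the tutorial's code, but the structure is the same: guess, measure the error, pass the blame back, nudge the weights.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# toy dataset: XOR, 2 inputs -> 1 output (a stand-in for the tutorial's CSV dataset)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

rng = np.random.default_rng(1)
w_h, b_h = rng.normal(size=(2, 4)), np.zeros((1, 4))   # input -> hidden mapping (the 'lines')
w_o, b_o = rng.normal(size=(4, 1)), np.zeros((1, 1))   # hidden -> output mapping
lr = 0.5

for epoch in range(5000):
    # forward pass: the network makes its guess
    hidden = sigmoid(X @ w_h + b_h)
    out = sigmoid(hidden @ w_o + b_o)

    # how far off is the guess?
    error = y - out

    # backpropagation: pass the error back and work out each neuron's share of the blame
    d_out = error * out * (1 - out)
    d_hidden = (d_out @ w_o.T) * hidden * (1 - hidden)

    # nudge every weight in proportion to the blame it received
    w_o += lr * hidden.T @ d_out
    b_o += lr * d_out.sum(axis=0, keepdims=True)
    w_h += lr * X.T @ d_hidden
    b_h += lr * d_hidden.sum(axis=0, keepdims=True)

print(np.round(out, 2))   # should end up close to [0, 1, 1, 0]
```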

Third, this might be obvious to some people, but I also found that it helps to think of the layers of the neural network as the 'lines' in the network graph rather than the 'dots', at least while coding it up. That's because we write each network layer as a function mapping something to something else (input > hidden, hidden > output), not as a container that stores an 'abstraction' or mapping. I realised this after reading something like this:

9jzpy

and wondering 'what do i need to define as my input layer!? you said there's an input layer, where is it? is it a magical layer that doesn't exist? or is it just supposed to do neuron = input? and what is the hidden layer doing? it's supposed to make an abstraction of the raw input? I don't know, it just looks like it's sitting there accepting neurons from the input layer…' (that monologue didn't go anywhere)

Hence I posit that neural network maps be labeled this way in the future:

9jzpyb

Then it would be quite clear that the hidden layer works by mixing all the raw inputs together and outputting an abstraction, and then the output layer uses the 3 (or more) abstractions of the data to make a slightly educated guess. Backpropagate and repeat.

Fourth, some observations.

Learning rate. I realised that the neural network's accuracy basically just 'sat there' for the first 20 or so epochs because the weights in the neurons were changing so slowly that they didn't manage to tip the balance of probabilities (e.g. iteration 1 neuron: 'oh, i got that wrong. lemme change it a bit' -> iteration 2: 'oh, it's still wrong, lemme change it a bit again…' up to 20 epochs). Perhaps just plodding along at the same pace wasn't the most efficient way of doing things. This warrants further investigation.

More Hidden Layers seem a bit counterintuitive at first. Until one reads Geoff Hinton’s slide in this post. Then one realizes he might just need to feed More Data!

Initial weights matter quite a bit. Starting off on the wrong foot means there's a lot more distance to cover to get to a good accuracy, and more distance means more computation time. Is this where Naive Bayes comes in? By having better initial weights, the network converges faster? With the caveat of exposing the network to bias? Will try this very soon.

So, my current takeaway from the past two months of reading and practising machine learning:

  • Neural Networks are: a set of numbers (called weights) that get updated n times, with an error check that tells the function how best to change the numbers to get fewer errors the next time round.
  • I guess an article called ‘weight updater with error checking’ sounds like something headed for obscurity, so Neural Networks via Backpropagation is used instead. I mean, I wouldn’t want to name a building I designed a ‘terraced house with front porch’ for the same reasons.
  • Of course, network algorithms are the secret sauce that I have yet to learn, so I expect this article to be updated as I go along!

All of the above is purely anecdotal. I hope more knowledgeable readers might point out in the comments below the bits that I intuited wrongly, or facts that I have peddled falsely or that might lead to horrible misunderstandings down the road. I will update my beliefs *wink wink Bayesian Inference wink wink* based on these comments, so as to arrive at a more accurate frame of understanding.