Machine View of the City reveals the mind of a machine as it scans the city and decrypts hidden patterns among millions of buildings. By exposing the inner workings of computer vision, the project speculates on how machine-organizing maps can reveal structures of architectural form that elude human vision alone.
Architects have long assembled catalogs of form to understand precedent and typology. We can now use machine-readable imagery to consider the totality of built form as it currently exists and to systematically compare and quantify it. Machine View of the City compiles comparative libraries of architectural form automatically and computationally, and constructs map-like representations that chart the cognitive space of similarity, affinity, and perception.
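To make the idea of a machine-built "map of similarity" concrete, the sketch below shows one plausible way such a representation could be assembled with off-the-shelf tools; it is an illustrative assumption, not the project's actual pipeline. A pretrained convolutional network embeds a hypothetical folder of rasterized figure-ground images (the folder name `figure_grounds` and all parameters are placeholders), and t-SNE projects the resulting feature vectors onto a two-dimensional map in which formally similar buildings land near one another.

```python
"""
Illustrative sketch only: embed figure-ground images with a pretrained CNN,
then project the embeddings onto a 2-D similarity map with t-SNE.
Paths, filenames, and parameters are hypothetical.
"""
from pathlib import Path

import numpy as np
import torch
from PIL import Image
from sklearn.manifold import TSNE
from torchvision import models, transforms

# Preprocessing expected by ImageNet-pretrained models.
preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

# Pretrained ResNet with its classification head removed, so the pooled
# 2048-dimensional features act as a generic descriptor of building form.
backbone = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)
backbone.fc = torch.nn.Identity()
backbone.eval()

def embed(image_path: Path) -> np.ndarray:
    """Return a feature vector for one figure-ground image."""
    img = Image.open(image_path).convert("RGB")
    with torch.no_grad():
        features = backbone(preprocess(img).unsqueeze(0))
    return features.squeeze(0).numpy()

# Hypothetical folder of rasterized figure-ground plans.
image_paths = sorted(Path("figure_grounds").glob("*.png"))
embeddings = np.stack([embed(p) for p in image_paths])

# Project the high-dimensional feature space onto a 2-D "map of similarity".
# (t-SNE's default perplexity assumes more than ~30 images.)
coords = TSNE(n_components=2, init="pca", random_state=0).fit_transform(embeddings)

for path, (x, y) in zip(image_paths, coords):
    print(f"{path.name}: ({x:.2f}, {y:.2f})")
```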
By using machine learning to scan and synthesize millions of figure-ground shapes and building plans, we can apply data-science techniques to make explicit the formal associations and affinities across the entire corpus of existing buildings. We can systematically and objectively classify the universe of existing architecture into a kind of phylogenetic tree of form.
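The "tree of form" can likewise be approximated with standard data-science tools: hierarchical (agglomerative) clustering over the same feature vectors yields a dendrogram in which formally similar buildings share nearby branches. The snippet below continues the variables of the previous sketch (`embeddings`, `image_paths`) and is an assumption for illustration, not the project's own method.

```python
"""
Illustrative sketch of a hierarchical "tree of form": cluster the feature
vectors so the resulting dendrogram groups buildings by formal affinity.
Reuses the hypothetical `embeddings` and `image_paths` from the sketch above.
"""
import matplotlib.pyplot as plt
from scipy.cluster.hierarchy import dendrogram, linkage

# Agglomerative clustering with Ward linkage: repeatedly merge the two most
# formally similar groups, producing a binary tree over all buildings.
tree = linkage(embeddings, method="ward")

# Plot the dendrogram, labelling leaves with the source image names.
plt.figure(figsize=(12, 4))
dendrogram(tree, labels=[p.stem for p in image_paths], leaf_rotation=90)
plt.title("Hierarchical tree of figure-ground similarity (illustrative)")
plt.tight_layout()
plt.savefig("tree_of_form.png", dpi=200)
```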
Machine View of the City bridges the space between distant and close readings. At its most basic level, the project extends the heuristics of human vision into a means of mapping architecture. It augments human ways of seeing with new machinic lenses and imagines new ways to understand the universe of architectural form.