Urban AI


This year, we've had some long discussions about the emergence of artificial intelligence, including chatbots and image generators. Will they replace our jobs? Can algorithms make our work more efficient? Is this art? Should we be worried?!?

I've been astounded by the images created by deep learning models such as Midjourney, Stable Diffusion, and OpenAI's DALL·E 2. Submit a natural language prompt, and a few seconds later you get back a set of rendered images. Depending on the prompt, this can lead to quirky and sometimes outlandish results, such as these:

What's happening under the hood? In simplified terms, machine learning allows DALL·E to identify what things are and illustrate a concept, attribute, and/or style in a matter of seconds (detailed explanation here). Whether the output is original, or even legal, is a different topic.
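For the curious, that prompt-to-image loop is a single API call. Here's a minimal sketch using OpenAI's Python library (openai>=1.0); the prompt is my own invention:

```python
# Minimal sketch: text-to-image generation with DALL·E 2.
# Assumes OPENAI_API_KEY is set in the environment.
from openai import OpenAI

client = OpenAI()

response = client.images.generate(
    model="dall-e-2",
    prompt="a quiet Portland street converted into a car-free plaza, "
           "watercolor style",   # hypothetical prompt
    n=4,                         # DALL·E 2 returns several variations per request
    size="1024x1024",
)

for image in response.data:
    print(image.url)             # each URL points to one rendered candidate
```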

One potentially useful trend I've noticed is the use of image generators to visualize city streets transformed to be more walkable, bike-friendly, and car-free.

Landscape architects and urban planners have traditionally used collage-style renderings to help stakeholders visualize new urban concepts. These can be time-consuming and expensive to produce, so usually only a handful are created towards the end of a project. What if you could produce a whole bunch, essentially for free, during the early brainstorming phase?

Zach Katz, creator of BetterStreets.ai, is doing this using DALL·E 2's inpainting feature: start with an existing image, erase what you want to replace, come up with a clever prompt, and let the AI do its magic.
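In API terms, that erase-and-replace step maps onto DALL·E 2's image-edit endpoint, where the transparent pixels of a mask mark the region to regenerate. Here's a rough sketch with the same library; the file names and prompt are placeholders:

```python
# Sketch of the inpainting workflow: transparent pixels in the mask
# mark the area DALL·E 2 is allowed to repaint. Both files should be
# square PNGs of the same dimensions.
from openai import OpenAI

client = OpenAI()

response = client.images.edit(
    model="dall-e-2",
    image=open("street_photo.png", "rb"),  # the existing street view
    mask=open("street_mask.png", "rb"),    # erased area = transparent alpha
    prompt="a protected two-way bike lane with street trees and planters",
    n=4,
    size="1024x1024",
)

for image in response.data:
    print(image.url)
```

With that workflow in mind, let's see how this approach could be used for the streets of Portland, OR...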

Eastbank Esplanade

I-405

SW Barbur Blvd

NW 23rd

NE 3rd Ave

Bonus images from BetterStreets.ai:

So will it replace our jobs as designers, visualizers, and urbanists?

I think this tool still has a ways to go before robots are completely designing our cities. DALL·E 2 iterates four versions in less than a minute, so there were a lot of uncanny results. There's also a limited number of free credits before you have to pay extra, so I had to be really selective about the prompts I tried. And to get what you want out of an image, you have to write a long, run-on blurb that the computer translates into a picture... I know I'm repeating myself here, but it's just a completely groundbreaking way of communicating with technology.

Looking at some of the results, the scale may be off, the people definitely look strange, and construction feasibility and cost are not considered... but I think that's okay. I do see a potential use at the conceptual stage: take an image of what exists today and see how different it could be.


Kenneth Zapata is a Designer at Fat Pencil Studio