Q+A: Stephen Wolfram Wants To Compute Everything (No, Really, Everything)

For the past thirty years, Dr Stephen Wolfram, renowned scientist, inventor, and author, has led a series of explorations into the laws of science and mathematics, discovering along the way critical insights into the nature of technology and design. He is the person behind Mathematica and A New Kind of Science. (Here is a very long, good article about him by Steven Levy.)

We talked about his latest project, WolframAlpha, a searchable database of the world's accumulated scientific knowledge, the similarities between designing architecture and writing code, and how algorithms will determine our future cities. The conversation is long and pretty dense but very worth it if you're into that sort of thing.

Brendan McGetrick: In this interview series we're imagining how changes in technology will affect urban life in the future. There are several things related to your work that I think are relevant for architects and urban planners, and to begin I'd like to talk about one of your central projects, the book A New Kind of Science, in which you lay out many of the principles that shape your work. For those of us not familiar, could you provide some background on that book – how it came about and where it led you?

SW: The "new kind of science" is concerned with the general science of computation. It is concerned with exploring the computational universe. When we think about computation today we usually think about specific programs that we write on computers to perform specific tasks, but there's a more general scientific question: if we think about all possible programs out there in the computational universe, what are the characteristics of those programs, what is this computational universe like?

BM: What do you mean by "computational universe"?

SW: The computational universe is the universe of all possible programs. We know about programs from computers. Most programs that do interesting things that are useful – whether it's your CAD program or drawing program or word processor – these are big, complicated programs that are built to perform specific tasks. But these programs are built up from simple instructions and the question is: what if you just start building programs at random? What will these programs do? What type of behavior do you see?

You don't need the idea of a computer to talk about this, but it is the best metaphor we have today. In a sense, what we're talking about is following all possible types of systematic rules. Now, typically today as a practical matter we implement these rules in computers and that's our best modern metaphor for this, but in principle these could be rules that are applied to pieces of mosaic or something else. They don't have to be operating in a computer. We're just talking about looking at the space of all possible rules, the universe of all possible rules.

The main discovery of A New Kind of Science, and this was very unexpected to me, is that you don't have to go very far in this computational universe before you see that even very simple programs can generate incredibly rich and complex behavior. The thing that is exciting as it connects to existing science is that this abstract observation – that out in the computational universe even simple programs produce complex behavior – explains the secret that nature is using to produce a lot of the complexity that we see in the natural world, whether it's in physics or biology or elsewhere.
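
To make this concrete: the elementary cellular automata Wolfram studied are about as simple as programs get, yet some of them, such as the well-known "rule 30," produce elaborate, seemingly random patterns from a single starting cell. Here is a minimal sketch in Python (not Mathematica, and not Wolfram's own code) of running rule 30:

```python
# Minimal sketch: an elementary cellular automaton, the kind of "simple program"
# Wolfram describes. Rule 30 starts from a single black cell and quickly
# produces a complex, seemingly random pattern.

def step(cells, rule=30):
    """Apply one update of an elementary cellular automaton."""
    n = len(cells)
    return [
        (rule >> ((cells[(i - 1) % n] << 2) | (cells[i] << 1) | cells[(i + 1) % n])) & 1
        for i in range(n)
    ]

width, steps = 63, 31
row = [0] * width
row[width // 2] = 1          # single black cell in the middle
for _ in range(steps):
    print("".join("#" if c else " " for c in row))
    row = step(row)
```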

I think the significance of the [new kind of] science at one level is that it lets one understand things about the natural world but it also gives one a new way to get things like technology and art, because it gives one a way to go out into this computational universe. Once one is there, one finds all these rich resources that are represented by simple programs. Then the challenge is: can you mine these resources for something that is useful for technology?

It is sort of analogous to saying, 'go out into the natural world and find magnetic material or liquid crystals or something like that.' Can one take those things that exist in the material world and apply them for human technological purposes? The same question exists for this computational universe: can one take what one finds and apply it to human technological goals? One can do that for traditional technology but also for more artistically-oriented things.

BM: Such as?

SW: We had a little experiment that we started a few years ago, this thing called WolframTones, which is a website that you can go to and generate musical pieces that are obtained by basically plucking them from this computational universe. Each musical piece is a generated program that is found in this universe. It's sort of interesting what happens there, because these musical pieces are often pretty interesting, and in fact that site seems to be widely used by sophisticated composers who find it a good place to get a little bit of inspiration about a possible melody or something. As humans, when we listen to one of these musical pieces, we can tell that they aren't just random. We can tell that it has some inner logic to it. That inner logic comes because it is ultimately generated from a simple program in the computational universe.
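
As an illustration only (this is a toy version of the idea, not WolframTones' actual pipeline), one can get the flavor of "plucking" music from the computational universe by running a simple rule and reading each row as notes drawn from a fixed scale; the scale and note numbers below are arbitrary choices:

```python
# Toy illustration: run a simple cellular automaton and read each row as a
# chord of notes from a pentatonic scale. The melody's "inner logic" comes
# entirely from the underlying rule.

PENTATONIC = [60, 62, 64, 67, 69, 72, 74, 76]   # MIDI note numbers, C major pentatonic

def step(cells, rule=30):
    n = len(cells)
    return [(rule >> ((cells[(i - 1) % n] << 2) | (cells[i] << 1) | cells[(i + 1) % n])) & 1
            for i in range(n)]

row = [0] * 8
row[3] = 1
for t in range(16):
    notes = [PENTATONIC[i] for i, cell in enumerate(row) if cell]
    print(f"beat {t:2d}: {notes}")
    row = step(row)
```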

One might have thought that if one had a simple program generating a piece of music, the piece would be very sterile and boring, not rich. But this isn't the case. You can have this simple underlying program and yet produce a rich, engaging structure that humans latch onto from an aesthetic point of view. And so people have done the same kind of thing with spatial form, for example, using the same kind of methodology, just sort of plucking some particular form that's generated by rules in this computational universe.

BM: How would that work?

SW: Let's say you're building a building: I've always been curious to see whether one can make the whole building make sense, so to speak. Can you have it be the case that, somehow, there is a consistent set of rules that runs from the molding profile to the overall structure of the building? Can one have a consistent rule that we humans can, in some intuitive sense, respond to and recognize logic within, yet one that is rich enough that the result doesn't appear to be just a sterile geometrical box? Those are questions that you come across when you explore this computational universe.

BM: You mentioned that computers provide the best metaphor for understanding these ideas of computation, but isn't it also true that we are reliant on computers to even grasp this universe? Even if what we find is developed and embellished by human sensitivities, don't we require technological muscle to arrive at the starting point?

SW: Right, well, once you know where to look, the experiments you have to do are pretty easy. I've been curious about this, and I've gone back and looked at the history of ornamental art and I wondered, did some Babylonian in fact run these little rules that I've made up and generate a mosaic pattern that looks like one of the things that I just figured out in the early 1980s or something?

It's sort of interesting that that apparently did not happen. One tends to see periodic patterns and the like in ornamental art. So, in retrospect, you can go back and do it without computers, but for me as a practical matter I view my use of computers a little bit like the Galileo approach from 400 years ago: computers had existed as practical tools, just as telescopes had existed as tools for looking at ships before Galileo decided to look at the sky with a telescope. Similarly, I ended up looking at this computational universe with computers, and then even the most obvious things one could ask ended up having very interesting answers, as revealed by using one's computer like a telescope, so to speak.

BM: That's an interesting idea – the computer as a kind of looking glass to reveal systems that were previously hard to see.

SW: Right, but in a sense, once you have the idea it's not that difficult. Precursors of the phenomena that I found had been seen for twenty or thirty years in early experiments with computers, but they tended to be ignored – in fact they were always ignored – because one didn't really have a conceptual framework for thinking about these things. So people would see slightly complicated behavior in some system and they would say, 'Oh, that complicated behavior is a nuisance. It interferes with our model of some piece of the brain,' or something like that.

They weren't really concentrating on this phenomenon of complex behavior being produced, which I think is a central scientific phenomenon; they were just saying that the complexity we see is a nuisance, and that what we're really concentrating on is something closer to what the exact sciences have traditionally dealt with: precise mathematical structures that can be described in fairly easy ways.

It's a typical thing that happens in many fields, and certainly happened in many sciences: there are lots of phenomena that might fall within the purview of that kind of science, but in the actual practice of that science those phenomena don't get studied because the methods that have been developed don't let you say much about those kinds of things. So sciences tend to concentrate on the phenomena where the methods do allow one to say something, and absent "a new kind of science", so to speak, there wasn't really a context for thinking about these kinds of things.

BM: Let's talk specifically about Mathematica, the computational software program you developed. I know that in the past you've described it as a possible successor to CAD, but before we make that leap could you explain the basics of how the program works?

SW: Mathematica is a big, practical program. We released the first version of it twenty-two years ago now. Essentially it is a language for describing computations at a high level. When I started designing it, I thought, there are all these computations that one might want to do, computations about geometry, computations about numbers, computations about all sorts of rules, data, etc. What I viewed as my purpose in building Mathematica was to make the highest level language for performing those computations, a language where as much as possible of what had to be done was being automated.

The notion is: you've got all these computations and there are certain lumps of work that are repeated across a lot of computations; in the Mathematica language we want to give those lumps of computation names, and those become the "primitives" of the language, essentially the words of the language from which we build up all of the things that we're telling the language to do.

Mathematica is this language for describing computation at a high level, as I say, and a critical piece of that is, once you've described the computation that you want to have done, Mathematica does the hard work to take it and find the best algorithms and automatically do it. So you might say, 'make me a plot of this particular data or function.' That's a fairly high level description; part of our goal in Mathematica is to do automated aesthetics to figure out how this plot should best be rendered. That would be part of our job in doing the computations as automatically as possible.
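
The Python/matplotlib call below is not Mathematica, but it shows the same division of labor Wolfram describes: the user gives a high-level description ("plot this function") and the library decides the rendering details automatically.

```python
# The same idea in a Python setting (not Mathematica itself): a single
# high-level request, and the library chooses axis ranges, tick placement,
# and other rendering details on its own.

import math
import matplotlib.pyplot as plt

xs = [i / 100 for i in range(1001)]
ys = [math.sin(x * x) for x in xs]

plt.plot(xs, ys)          # one high-level primitive; the aesthetics are automated
plt.title("sin(x^2)")
plt.show()
```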

BM: Who are the primary users of the program?

SW: Mathematica gets used by I don't know how many millions of people now around the world. It's kind of a staple in the world of research and development across basically every industry. When there's something new being done that's of a technical nature, you can be pretty certain that there will be Mathematica somewhere inside the work that's being done there.

For me personally, when I started building Mathematica I built it in part because I wanted to use it myself and I'd been frustrated by having lots of different computational tools that I had to glue together to do the kinds of things that I wanted to do. That posed a quantitative problem in the sense that there are five different tools that you have to use to achieve one little thing, but it actually became a qualitative issue because there were lots and lots of experiments that I didn't bother to do because they were just a little bit too hard to do. So one of my goals in building Mathematica was to make, once and for all, a system that I could use to explore whatever I wanted to explore about computational kinds of things.

BM: And how do you imagine these sorts of tools being used by architects or planners?

SW: I'm not the biggest expert on how Mathematica gets used in architecture, but I know that there are a bunch of high end R&D-oriented architects who use Mathematica for a whole variety of things. It's used for generation of form, among other things. I can remember a few years ago there were folks who were encoding Chinese building codes and trying to figure out algorithmically what possible structures were consistent with those building codes. So that was a case where one is representing symbolically, in the Mathematica language, these building codes and then effectively trying to solve the constraints and determine what can be built, given those codes.
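
A toy sketch of that constraint-solving idea follows, with entirely invented rules standing in for any real building code:

```python
# Hypothetical sketch: express a few building-code-style constraints, then
# search for structures that satisfy all of them. The specific limits here
# are invented for illustration, not taken from any real code.

from itertools import product

MAX_HEIGHT_M = 40            # invented zoning limit
MIN_FLOOR_HEIGHT_M = 3.0
MAX_FOOTPRINT_RATIO = 0.6    # building footprint / plot area
PLOT_AREA_M2 = 1000

def satisfies_code(floors, floor_height, footprint):
    return (floors * floor_height <= MAX_HEIGHT_M
            and floor_height >= MIN_FLOOR_HEIGHT_M
            and footprint / PLOT_AREA_M2 <= MAX_FOOTPRINT_RATIO)

candidates = [
    (floors, h, fp)
    for floors, h, fp in product(range(1, 15), [3.0, 3.5, 4.0], [400, 500, 600, 700])
    if satisfies_code(floors, h, fp)
]
# rank the feasible designs, e.g. by total floor area
best = max(candidates, key=lambda c: c[0] * c[2])
print(f"{len(candidates)} code-compliant designs; largest floor area: {best}")
```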

Mathematica is also often used in doing engineering associated with buildings and other structures. A bunch of work has been done on designing dynamic structures where you have a structure that can be unfolded or has an element that can dynamically change the structure, and these sorts of things are often modeled with Mathematica. I remember from years ago that the standard technology for making sophisticated roller coasters was based on Mathematica. The program is doing the computational work that allows you to figure out all the curves in the roller coaster; it aids you in doing the differential geometry in order to work out what the acceleration of this particular point will be, etc. I also remember another case where a velodrome for the Olympics was designed with Mathematica because the curve of the course had to be computed as a mathematical structure.
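
For the curve computations he mentions, the core bookkeeping is differential geometry along a parametric path. A rough sketch, using a made-up track curve and simple finite differences:

```python
# Sketch of the differential-geometry bookkeeping described above: given a
# parametric track curve, estimate acceleration at points along it. The
# helix-like curve and speed here are invented stand-ins.

import math

def track(t):
    """A made-up 3D track curve, parameterized by t."""
    return (10 * math.cos(t), 10 * math.sin(t), 2 * t)

def acceleration(t, speed=1.0, dt=1e-4):
    """Finite-difference acceleration along the curve at parameter t."""
    p0, p1, p2 = track(t - dt), track(t), track(t + dt)
    return tuple((a - 2 * b + c) / dt**2 * speed**2
                 for a, b, c in zip(p0, p1, p2))

for t in [0.0, 1.0, 2.0]:
    ax, ay, az = acceleration(t)
    g_load = math.sqrt(ax**2 + ay**2 + az**2) / 9.81
    print(f"t={t:.1f}  acceleration ≈ ({ax:.2f}, {ay:.2f}, {az:.2f}) m/s²  ≈ {g_load:.2f} g")
```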

I think architecture is one of these interesting fields that pulls in a lot of different areas and that's an optimal case for Mathematica, because over the years we've been trying to implement every piece of algorithmic computation that people want to do, and so whether it's image processing or computational geometry or stress analysis, all those things are part of Mathematica and are done in a coherent way.

A large part of my life has been taken up working on the design of Mathematica and trying to make sure that all these concepts, all these pieces of computational work that are represented in Mathematica, all fit together properly so that, when you're combining image processing and data analysis, all the pieces can be used together. I think that that's a really good thing for a field like architecture that itself pulls from many different disciplines and requires that one do something in an integrated way.

It's kind of an interesting process, this process of [computer] language design. It's a design process; a bit different from architectural design, I suppose, but it's a design process where the challenge is to find the appropriate primitives, to find these pieces from which everything can be built conveniently. It's not about building a single structure where you then have to live in that structure. When you do language design, there's a certain number of primitives – hundreds, thousands – from which you can construct all these other things.

BM: It seems to me that there is quite a bit of overlap between architectural design and computer language design. This notion of primitives, of basic building blocks that can be positioned in different ways to create different effects, resonates with modular design. But also it seems that architects and program developers share a common obligation to anticipate the needs of their users and to provide the most efficient, enjoyable, and flexible environment. At its best, I think that that is what architectural and computer language design aspires to do.

SW: Take something like WolframAlpha [the question answering engine that my company is developing]… There is just tons of computation going on inside WolframAlpha. There's tons of computation that you can access with it, but the question is how to connect that raw computation to what humans think about and what humans can understand. In the case of WolframAlpha the idea is to take what is natural for humans, namely their natural language, and use that as a way to access this big storehouse of computational possibilities and then to figure out, at the end, how to prune all the possible computations and present the human, so to speak, with things that resonate with the way that they think so that these computations answer the question that the person asked. At a very high level, that's what we're trying to achieve with WolframAlpha. The connections to something like traditional architecture are a bit abstract but it feels to me like the same challenge: how do you take all these things that you can build and how do you map those into things that fit in with human life?
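
A drastically simplified sketch of that pipeline follows, with a two-entry "knowledge base" and two handlers invented for illustration; the real system's linguistic and computational machinery is of course far larger:

```python
# Heavily simplified sketch: map a bit of natural language onto a computation,
# run it, and present the result. Everything here is invented for illustration.

import re

FACTS = {"france": 68_000_000, "japan": 125_000_000}   # made-up knowledge base

def handle_population(match):
    country = match.group(1).lower()
    if country in FACTS:
        return f"Population of {country.title()}: about {FACTS[country]:,}"
    return "I don't know that country."

def handle_arithmetic(match):
    a, op, b = int(match.group(1)), match.group(2), int(match.group(3))
    return str(a + b if op == "+" else a * b)

ROUTES = [
    (re.compile(r"population of (\w+)", re.I), handle_population),
    (re.compile(r"what is (\d+)\s*([+*])\s*(\d+)", re.I), handle_arithmetic),
]

def answer(query):
    for pattern, handler in ROUTES:
        m = pattern.search(query)
        if m:
            return handler(m)
    return "No interpretation found."   # the "failed query" case discussed later

print(answer("What is the population of France?"))
print(answer("what is 12 * 9"))
```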

BM: Let's talk a bit more about WolframAlpha. The stated goal of that project is to "make all systematic knowledge immediately computable and accessible to everyone." That's obviously an enormous, heroic ambition. I'm very impressed by its scope and also by the fact that it is a project that is developing in a very public way. I think this too has some relevance to architecture, because there was a period, not so long ago, when architects proposed projects of similar levels of ambition, in terms of making major interventions on the world, attempting to provide a physical framework for new ways of life, etc. Those sorts of ambitions have, for the most part, been scaled down, and with that the potential of architects to influence life has also scaled down. But I think it's still important to consider these sorts of bold, seemingly impossible goals. Would you mind describing how you've developed WolframAlpha, how you take an ambition which is so overwhelming as an end point and implement a process through which it can be pursued bit by bit?

SW: The concept of WolframAlpha is: how much of the world's knowledge can be made computable? To what extent can we take all of the knowledge that civilization has accumulated and set it up so that when one wants to ask a specific question it can be answered from the accumulated knowledge by computing it? That's the ambition, and there's obviously a long history of people trying to do this kind of thing. Almost the precise description of what we're trying to do was given by [the German mathematician and philosopher Gottfried Wilhelm] Leibniz in the 1670s, except that at that time he had only mechanical calculators, and he was trying to convince the local dukes and so on to set up libraries to collect the information for him. So he was at least 340 years too early.

When I started this project, I have to say that it was not at all obvious to me that now is the time when such a project would begin to be possible. It requires computers that are powerful enough to actually compute answers in a reasonable amount of time. Even three or four years earlier, what takes one second now on the Web would have taken ten or twenty seconds, and that wouldn't really have been acceptable on a human feedback timescale, so to speak. But that's a fairly superficial issue. Another concern is that there might just be too much data in the world, there might be too many different kinds of knowledge, and there might be no overall framework that allows one to pull all this knowledge together.

I was encouraged by my work in A New Kind of Science to think that, even though what you see is very complicated there can still be simple underlying rules to it. That's the paradigm that got me to think that, yes, this might not be a completely impossible project. When you think about a project like this, there are all these pieces and the thing might be completely daunting, but I guess that I've been lucky in the case of this particular project because I've worked in enough different areas that I have some vague understanding of how much depth there is in these areas. In the early stage of the project you go into a big reference library and you look around at all the books on all the shelves and you say, 'What's it going to take to be able to make a computable version of all this stuff?' And then you just start doing it.

From a purely personal point of view, I've spent my life so far doing roughly three large projects – Mathematica is one, A New Kind of Science is another, and WolframAlpha is another. I think that I've sort of developed a rhythm for doing very large projects without an identifiable finish line. If I'd started on WolframAlpha when I was twenty years old, which is when I started my first big project, I think it would have been too daunting and I wouldn't have seriously considered doing it.

BM: And so then it's a matter of transferring that faith to the people working with you, many of whom probably are younger and haven't taken the same steps.

SW: Projects of this size are definitely not for everybody, so to speak. Because I have a company where I hire lots of creative people and people who do lots of different kinds of projects, I always notice that there are different timescales that people operate on. There are people who are optimized for the one-hour response project – answer a specific question from somebody, get back to them, finish the whole thing in an hour – and then there are people at the other end who are optimized for multi-year projects and they'll always ramp up a lot of structure for any project they'll do. You say, 'You've got to do this project in a day,' and they will have spent a week building the structure to start doing the project before they spend the day doing the project.

Over time I've gotten used to doing these really big projects and I've gotten some idea of the rhythm of what's needed. Obviously this is not a one person project: right now WolframAlpha is about 200 in-house people and a bunch of outside volunteers, and there is a certain form of leadership that's necessary for these sorts of big projects. People can see how big the project is, but many people tend to be daunted by the size of it and one ends up, from a leadership point of view, breaking it up into pieces so that different parts of it get done by different people. Actually, when the people who have been involved in it get to see – my gosh, we actually built something very big! – they're often kind of set for life in terms of viscerally understanding this idea that you can go from nothing to something quite big. That's always fun.

BM: WolframAlpha is also interesting because, besides the hundreds of people formally working on it, you have millions of visitors who indirectly contribute to its development.

SW: Right, in a sense, the world is telling us what we should do, because every day there are millions of people who use WolframAlpha and ninety percent of the time, based on our logs and so on, they get a good result. Ten percent of the time they don't. That ten percent is a giant to-do list for us. The whole dynamic of the system shows us the ways in which we should be expanding. It's not something where we have to guess what's important in the world. We have that plainly visible from the actual queries that people make and the actual ways that people interact with the system.
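
The mechanics of treating failed queries as a to-do list can be sketched in a few lines; the log format and queries here are invented:

```python
# Sketch of turning query logs into a to-do list, as described above.

from collections import Counter

log = [
    ("distance to mars", True),
    ("calories in a bagel", True),
    ("best pizza near me", False),
    ("gdp of atlantis", False),
    ("best pizza near me", False),
]

failed = Counter(q for q, ok in log if not ok)
success_rate = sum(ok for _, ok in log) / len(log)

print(f"success rate: {success_rate:.0%}")
print("most common failing queries (the to-do list):")
for query, count in failed.most_common():
    print(f"  {count}x  {query}")
```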

For this particular project, one of the challenges was, as we built it over the course of a bunch of years, deciding when we should release it into the wild, when should we let people actually start using it? That was a difficult decision, because over time we knew it was going to get better and better, but, on the other hand, we knew that we were going to get a lot of information from letting people in the outside world actually use the system. So we released it when we did because we thought we'd pretty much learned what we could from studying things without actually having live users. We brought it out in 2009 and in the year since we've roughly doubled the amount of knowledge and content in the system, and that will keep going hopefully at an accelerating rate over the coming years.

It's one of these things where the challenge is to make it good enough when you first bring it out that people get the overall vision of what you're trying to do, and I think that worked fairly well. I'm a perfectionist, so I get a twinge with every one of the queries that fails. Of course, by now there are hundreds of millions of those. I would love it if everything worked perfectly all the time, but that's not the nature of a project like this. One just has to keep pushing forward and making it progressively better. The good news is that it is possible to do that and over the years it will get better.

Obviously it is, as you said, quite a heroic project in a sense. I like to think that it's an important project, because it's trying to encapsulate the fruits of a lot of what our civilization has produced in terms of knowledge and put it in a form where we can automatically access it, and do so very broadly. It's an effort to democratize knowledge so that everybody everywhere can get expert-level access to this stuff. I like to think that that's an important project, one that justifies my having spent many years on it.

BM: You mentioned before that your work draws from a wide range of professional typologies, as architecture does. I'm curious to know more about how that works on a practical level. For instance, what are the backgrounds of the people working with you?

SW: Our whole company is about 650 people and we have a full database called "Who Knows What" which contains the backgrounds of everybody. It's kind of funny; I think there are hundreds of PhDs in the company. I think the single largest field is probably physics, but it's very diverse. There are PhDs in lots of different areas, from the sciences, from the liberal arts, etc. PhDs are not quite so common in computer science, so the numbers are weighted against computer science, though among the younger folks at the company there are an increasing number of computer scientists, partly because computer science education has greatly improved in recent years and become much more relevant.

WolframAlpha is probably the most diverse, in terms of the things that we're dealing with. So there are, gosh, many different groups. There are, for instance, the people who do linguistic curation, which means trying to encapsulate all of the variety of language. Those people are tremendously diverse in their backgrounds, and it's been kind of funny, because this linguistic curation skill appears to be largely uncorrelated with anything else. I know that one person there has a master's degree in biology. You wouldn't think that that has anything to do with this. There are other people who have linguistics backgrounds, and that's more expected. It's a real mixture of educational backgrounds, but they are united by this one skill of being able to understand linguistic variety well. Then there are library science people who are involved with finding and validating sources of data and so on. That's another chunk.

BM: Are there any designers involved? Earlier you mentioned "automated aesthetics". I'm curious about this phrase and whether you have people from visual or graphic backgrounds contributing to that.

SW: Absolutely. We have a pretty strong in-house graphic design group of maybe a dozen or so people. At the beginning of the company I, for better or for worse, made the decision that we would build up an in-house design effort, rather than doing what people usually do, which is to send it off to an agency and have somebody do the design outside. That's meant that most of our designers know Mathematica fairly well, and it's routine that they produce interesting interface elements using the program. Outside of the design group there are other people involved as well, for instance Chris Carlson, a user interface person who happens to have a PhD in architecture. And there are a bunch of other people in the company who are supporting design, so to speak, but their backgrounds are typically more technical.

For example, with WolframAlpha, obviously there is a bunch of design that had to be figured out to make the displays and so on. It's a pretty complicated design project, but of course the theory is that you don't notice that it's a complicated design process; it just looks simple and easy. The reality is different, of course, and I was personally pretty involved in that. It was done between me and the graphic design group as well as our usability group, which is made up of people from psychology backgrounds who know about issues like how long people are prepared to wait for things, how much feedback they need while they're waiting, where people look when they've finished reading a list of items, those kinds of issues.

Perhaps untypically for a person in my position, I've been very involved in a lot of detailed visual design for the things we've done. For example, I basically laid out all the pages of A New Kind of Science – much to the horror of the book designers. All of the graphic images there were things that I ended up designing, because, for me, one of the big challenges of the NKS book was what I ended up calling "algorithmic diagrams": things which are diagrammatic in how they communicate, yet far more complicated than what humans would normally draw as diagrams. They're diagrams that are created by figuring out the structure that the diagram should have, then programming that and having it produced algorithmically.

In Mathematica, for example, the internal language design has nothing to do with graphic design. But for the thing I was mentioning, the computational aesthetics of output, yes, design is absolutely involved. There, what we've tried to do is say [to the graphic designers], 'Here are hundreds of examples of graphics; tweak these to make them as good as you can, and then we'll try to interpolate between those human-chosen designs to capture the general algorithmic principles behind them.'

That's a general thing that we've done a lot – particularly in WolframAlpha – taking expert-level knowledge and trying to encapsulate computationally that expert-level knowledge. There are lots of people who know about specific areas, whether it's chemistry or economics or nutrition or whatever else. There's a whole collection of people who we have involved because they have particular expertise that's relevant to domains that we're trying to cover.

We've been lucky enough, because of our work with Mathematica and so on, to get good access to world experts on almost every subject. One of the things that we've had to develop is the methodology for diving in to talk to world experts and getting expert knowledge to guide us in what we're trying to do, because what we're trying to do with WolframAlpha is, in some sense, outrageously ambitious: we're trying to get to the absolute front lines of R&D in thousands of different domains. After you've done lots of these domains, doing the next one is a lot easier, because you have a good workflow internally and because you have a lot of good tools for getting there. But that's one of the challenges.

BM: As you say, your work puts you on the front lines of R&D in many fields, computation and information technology being the most obvious I guess. I'm curious to know, from that position, how you envision life in the not-so-distant future. What do you think will be the role of computation and automation in how our societies operate?

SW: I can say one personal thing: I've been the remote CEO of a company for twenty years. These days I live a thousand miles away from the largest piece of the company. You might wonder, 'How does one do that?' And the answer is that for years I've done my work using technology: conference calls, web conferencing, screen sharing, whatever. And after I set a bad example by being a remote CEO, our R&D workforce, for example, has become scattered more or less randomly around the world.

And it's kind of interesting, because for years now, every day I have lots of meetings, trying to figure things out, and there will be ten people on the phone in eight different countries around the world, each of them leading their own separate life, hanging out in a cafe in Italy or being in the middle of the mountains in Oregon or something. Very different personal situations, yet collaborating on a project in a completely seamless way, and I think that's increasingly true in lots of areas of R&D work. We've been doing this for a long time, and it's interesting because people choose their local environment, but their work is nevertheless factored away from the local environment that they choose to live in.

In terms of what our experience of things will be, the symbiosis of us with our machines will obviously increase. As I see it, we gradually automate more and more things. It used to be the case that it really mattered whether you could read maps well. Irrelevant now; you just use a GPS and it takes you where you should go. So all these skills that have been traditional human skills are, one by one, being automated.

One thing that I think is really on the cusp right now is the thorough outsourcing of human memory. I've been a person who's stored every piece of e-mail and every keystroke on my computer for twenty-something years. So I can go back and search something that I typed into my computer eighteen years ago. I think I have a fairly good memory as far as people's memories go, right now at least, but nevertheless I've extended that with my computer, and I think that the outsourcing of human capabilities will be an increasing development. I suppose you could say that that is the big contribution of technology to civilization: you put down more and more layers of automation and there are more and more things that become easy for people because they've been automated.

Now, predicting which of those will be more important… There are some that are pretty obvious: like real soon – probably in the next couple of months – video calls between mobile devices will become much more prominent. And that will lead to yet another barrier being broken. It used to be the case that you could only see a distant place by getting someone to paint a painting of it and then having the painting shipped to you. That's vastly changed.

Another thing is mass production, which is something that is important to our experience of the world. There are lots of identical objects that we see over and over and over again, and the cost of customization has traditionally been quite high. If you're building something really big, like a house, you might as well customize it because you're spending so much money on building the thing. But for things that are cheaper to make, the cost of a custom design is high relative to the object itself. I think that through the ability to create forms by plucking them from the computational universe, the cost to customize will go way down. So we'll see the mass customization of things. It will be the exception rather than the rule that all the clothes on the rack are identical, for example. It will be possible to much more easily customize based on what one's personal constraints or desires are.
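
A sketch of that generate-then-filter idea, with invented dimensions and constraints standing in for a real product and a real customer:

```python
# Sketch of mass customization: generate many candidate forms from a simple
# rule, then keep only those that satisfy one person's constraints.
# All parameters here are invented.

import random

def generate_form(seed):
    """A 'form' is just a tuple of dimensions produced by a simple rule."""
    rng = random.Random(seed)
    width = rng.uniform(40, 60)      # cm
    length = rng.uniform(60, 90)     # cm
    taper = rng.uniform(0.7, 1.0)
    return (round(width, 1), round(length, 1), round(taper, 2))

def fits(form, max_width=52, min_length=70):
    width, length, _ = form
    return width <= max_width and length >= min_length

candidates = [generate_form(seed) for seed in range(1000)]
personalized = [f for f in candidates if fits(f)]
print(f"{len(personalized)} of {len(candidates)} generated forms fit this customer")
print("example:", personalized[0])
```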

BM: Do you also see these developments extending to how cities function?

SW: I think that that's probably true if we look at mass transit, for example. With mass transit, one is accustomed to structure: the trains run at periodic times and things like this. Increasingly, as we are able to do more computation, the regularity that we're used to will become less common, and there will be many more little automatic micro-taxi sorts of things zipping around, computing just in time that this one should go to this place, because that's where there's a person waiting.

If we could look at the city and its operation from a bird's-eye view, a lot of the regularity that's there today I suspect will not be there in the future. [The current approach] is not optimizing things; it's only there because that's the only way we know how to set things up. I think increasingly it will look much more random, because things can be done based just on the circumstances, rather than in some very structured way.

In cities we have streets and we have things organized in particular ways, but if we look at the natural world, we see that nature much less commonly does that. It seems to be much more random, patchy, ecologically-determined structure. One should ask whether we are setting things up the way that we've been because that's all that our technology has let us do or whether we are setting them up that way because that's what's most comfortable for humans, and so on. I suspect that as we think about adding more computation to what's going on, it becomes possible to have a more complicated set-up that doesn't have the simplicity of a "trains run at half past the hour" type of thing.
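
The micro-taxi idea above amounts to replacing a timetable with a dispatch rule computed on demand. A toy version, with an invented fleet and coordinates, might simply assign each rider the nearest free vehicle:

```python
# Toy just-in-time dispatch, as opposed to a fixed timetable: each waiting
# rider is assigned the nearest free vehicle when the request arrives.

import math

vehicles = {"v1": (0.0, 0.0), "v2": (5.0, 5.0), "v3": (9.0, 1.0)}
requests = [("alice", (1.0, 1.0)), ("bob", (8.0, 2.0)), ("carol", (5.0, 6.0))]

def dispatch(requests, vehicles):
    free = dict(vehicles)
    plan = {}
    for rider, (rx, ry) in requests:
        vid = min(free, key=lambda v: math.hypot(free[v][0] - rx, free[v][1] - ry))
        plan[rider] = vid
        del free[vid]            # that vehicle is now busy
    return plan

print(dispatch(requests, vehicles))
```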

BM: It seems undeniable now that we're experiencing a paradigm shift in the way that organizations and cities run, and we notice this in the dispersal of workers and the decline of shared infrastructure and space. Of course, architects and urban planners have a huge investment in consolidated organizations and shared infrastructure, because they have traditionally been the ones who give these aspirations physical forms. What sorts of insights can we draw from combining computation with urban planning?

SW: We're increasingly going to be able to model how people collect in a city. We're already able to get data from cell phone location services and other sources that show, for instance, what this blob of people does on Friday evenings in the middle of the street in such-and-such city. How do these people diffuse out, how do they flow? Even within a building, how do they flow around? These are things that we are beginning to have some sort of scientific analysis of, and I think the concept of simulating what will actually happen will become increasingly accepted.

You see, once you simulate what will happen, you don't just have to simulate it with respect to your static building, you can simulate it with respect to whatever algorithms are running your building – whether those algorithms have to do with which paths you open at particular times of day or what displays are up on various screens, and so on. So I think we're going to see an increasingly simulation-based approach where the thing you're building is described by an algorithm and then you run simulations to find what the consequences of that algorithm are.
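
A toy version of that kind of simulation: agents arrive at two exits of an imaginary building, and two alternative control algorithms decide which exit to serve each minute. Every number here is invented.

```python
# Sketch of simulating people flow against a building-control algorithm.

import random

def simulate(open_policy, people=200, minutes=30, capacity_per_minute=12):
    rng = random.Random(42)
    queues = {"north": 0, "south": 0}
    exited = 0
    remaining = people
    for minute in range(minutes):
        # a few people arrive at a random exit each minute
        for _ in range(min(remaining, rng.randint(5, 15))):
            queues[rng.choice(["north", "south"])] += 1
            remaining -= 1
        door = open_policy(minute, queues)          # the "algorithm running the building"
        served = min(queues[door], capacity_per_minute)
        queues[door] -= served
        exited += served
    return exited, queues

alternate = lambda minute, queues: "north" if minute % 2 == 0 else "south"
adaptive = lambda minute, queues: max(queues, key=queues.get)

print("alternate policy:", simulate(alternate))
print("adaptive policy: ", simulate(adaptive))
```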

I started life as a physicist and still do physics every so often. You see data in physics and it's got data points and error bars and things like this. When I look at Web analytic data, for example, of people doing things on the Web, the degree of precision of that data and the perfectness of the power laws and the functions that are involved is much greater than I've ever seen in physics. In other words, there are these very quantitative laws of human behavior. We don't understand them very well yet, but they are apparent already. You might think, for instance, that with a million people typing away on keyboards, navigating these websites, there wouldn't be much regularity to what they're doing, but it turns out that, statistically, there is amazing regularity to something like that. And as we understand more about that, it will become part of standard practice to do simulations based on the rules that we know.
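
The standard check for such a power law is that the data falls on a straight line in log-log coordinates, with the slope giving the exponent. A sketch on synthetic data:

```python
# Sketch of checking a power law in web-analytics-style data: if visits per
# page follow visits ≈ C * rank^(-a), then log(visits) vs. log(rank) is a
# straight line whose slope estimates the exponent. The data is synthetic.

import math
import random

rng = random.Random(1)
ranks = range(1, 1001)
visits = [1_000_000 * r ** -1.1 * rng.uniform(0.9, 1.1) for r in ranks]   # synthetic

xs = [math.log(r) for r in ranks]
ys = [math.log(v) for v in visits]
n = len(xs)
mean_x, mean_y = sum(xs) / n, sum(ys) / n
slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) / \
        sum((x - mean_x) ** 2 for x in xs)

print(f"estimated power-law exponent: {-slope:.2f}   (true value used: 1.10)")
```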

WolframAlpha, in the primary instance that everybody gets to see on the Web, is something that is dealing with the world's public data. But there are actually several versions of WolframAlpha that we've been building for specific companies, based on computable data associated with a particular organization or, I think potentially, a particular city. So there's this question: what does it look like when everything about the city is computable? What happens when everybody can ask, 'Where is there a parking space?' 'What was the history of this particular spot?' 'How has the value of this building changed over the last twenty-five years?' 'What were the spots on this street where people were collecting, on average, last Christmas?'

What is going on in a city becomes much more transparent if you can provide a computable interface to the city. Take tree growth, for example: one will be able to know what this oak tree will look like in twenty years and whether or not it will obscure the view from my apartment. These things will all become knowable and I haven't quite thought through what consequence that has for the discipline of urban planning and so on, but I think that there will be widespread access by the citizens to that kind of computable knowledge and this will probably have some interesting effects. What I would assume is that the operation of the city would become more efficient as a result of people being able to know a lot more about it.

I don't know quite what the feedback loop looks like between the designers having to put a definite road in a definite location and hoping that it's going to work well for the next twenty years, when people are readily able to see that actually that road didn't work that well, based on the computable data that they can see about their city. If I were to guess, it will become a little bit more like what happens in the stock market with [quantitative analysis]. This whole thing about the behavior of people and how they interact with their environment, the kinds of things that people have studied for purposes of finance, for example the collective behavior of things, will become more relevant to questions of urban planning, I imagine.

A longer version of this interview originally appeared on Brendan McGetrick’s blog, Very Feel.