The reason designing a building is faster is that there are fewer decisions. The reason there are fewer decisions is because buildings are way better understood than software.
A lot of designing the structure of a building is just the implementation of a few core concepts that have been perfected for thousands of years, like doors and windows and beams and arches.
There is sufficient human knowledge about buildings that people expect every single building to stand up, work properly, and be safe the very first time it is built. There aren't many self-taught building engineers who just picked it up in their spare time during high school.
And a lot of the design is simply choosing fixed options. Architecture and construction firms don't design and engineer the ceiling light fixtures or the faucets. They select a supplier + model, and install them with standard hardware in standard ways. In many cases there is even government code that tells them exactly how it must be done.
Compare to software. Every single decision is up for grabs on every new project. Programmers have no standard certifications, and in fact are often actively hostile to standardization and formal training. The state of the art in making sure things work is just brute-force, constant testing. Of course projects can run into problems.
Software, as an engineering discipline, still has a long way to go toward being a mature, well-understood human endeavor. Which makes sense, since we've been doing it about 1/100th as long as we've been making buildings.
> Programmers have no standard certifications, and in fact are often actively hostile to standardization and formal training.
I hold the unpopular opinion that we need those things. The current state of things is that we're not much more than kids hacking away on computers with minimal supervision.
I'd love to see most (if not all) software written with the rigor required to write safety-critical systems.
The problem, to add to the buildings metaphor, is that we don't have a few thousand years of experience building software under our collective belt yet. We quite literally don't know how to build good software. Or rather: we are still figuring out how to build good software, and we don't know yet how far along we are in this process. Therefore it looks like a questionable idea to set the things that we currently believe to be true in stone.
We're also never building the same software (or at least very rarely). The industry is such that we're always trying to invent something new, something that hasn't been done before.
If you build the same (or nearly the same) piece of software 100 times, you can know almost exactly how long it'll take and you can do it quickly, same as if you were building 100 buildings. But we don't do that, because you build software once and then just copy it 100 times.
You only build software if you're making something new that hasn't been made before.
> You only build software if you're making something new that hasn't been made before.
Or it has been made before, but is proprietary. This is why we should choose copyleft licenses over permissive ones.
This project I am working on is a bit boring, outsourced (or exploited) here in my country. It's about ETL: transferring data from the prod db to the analytics db. I am of the opinion that all this is a solved problem and that I am probably repeating old mistakes. However, due to the nature of Capitalism, exploitation ....
I've been there. I even got a fair way into discussing starting a 'data migration' startup. Customers only care about getting their data migrated, and wouldn't mind you holding the rights to the software tools you write to do it, so you could get better and better as you build up your toolkit.
You'd have to have it mandated to everyone. Otherwise the few conscientious teams would take 10 times as long as the risky ones. And since riskiness often doesn't bear deadly fruit for months or years, the careful teams would never stand a chance in the free and ignorant market.
>The state of the art in making sure things work is just brute-force, constant testing.
I'm pretty sure the state of the art is using good type systems and libraries that use that type system. It takes some of the brute force out of testing. Unfortunately, many places don't even use the full power of the types their chosen language has. Others choose PHP or C++, piling on technical debt because they sell software the way HP sells printers - the initial payment is low, but you'll have to pay for support and bugfixes forever.
The only thing that can make software cheaper is the customers demanding better tools, languages and processes.
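To make the type-system point concrete, here is a minimal sketch (the domain and every name in it are invented for illustration, not taken from the comment above). A discriminated union lets the compiler prove that every case is handled and that invalid combinations can't even be constructed, which removes a class of checks you would otherwise have to brute-force with tests:

```typescript
// Hypothetical domain: a payment's state, modeled so that invalid combinations
// (e.g. "settled" with no settlement date) are unrepresentable.
type PaymentState =
  | { kind: "pending" }
  | { kind: "settled"; settledAt: Date }
  | { kind: "failed"; reason: string };

function describe(p: PaymentState): string {
  switch (p.kind) {
    case "pending":
      return "awaiting settlement";
    case "settled":
      return `settled at ${p.settledAt.toISOString()}`;
    case "failed":
      return `failed: ${p.reason}`;
    default: {
      // Exhaustiveness check: if a new variant is added later and not handled
      // above, this assignment stops compiling instead of failing at runtime.
      const unreachable: never = p;
      return unreachable;
    }
  }
}
```

None of this replaces testing, but it shrinks the space of states a test suite has to cover by brute force.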
> The reason there are fewer decisions is because buildings are way better understood than software.
I think buildings are better understood in large part because they are not nearly as malleable. Because software is highly malleable by nature, the complexity and scope of decisions can grow much faster than in any other engineering discipline.
I'm convinced the malleability of the software isn't the issue; the issue is that the thing the software is modeling is malleable and generally unknown to the level of detail necessary.
Nobody in a business knows all the business rules, no manager is aware of all the data that their underlings create, manipulate or consume, no individual or single location at FAA really knows all the rules on how traffic control actually works, etc etc etc.
But when we create software to automate any of those things, we then need to have a full understanding of all of them. And then we generally discover that the rules we uncover are inconsistent, violate some other rules or laws, etc. And once those are fixed, then and only then do people realize that what they had isn't quite what they wanted. And not only does what they want change, the whole underlying system has been evolving at the same time, so it sometimes feels like you're starting over instead of tweaking the implementation (which may also be why software methodology research focuses obsessively on approaches that make things easy to change at a later date).
When someone builds a multistory building, they will build the first floor to support the second floor. And in general it is exceedingly obvious why that is the case.
The problem with software is the abstraction. The first floor and the second floor aren't connected in any physical fashion. It's easy to rip out the first floor after the second is built, and only later realize you cannot support the necessary load.
Buildings have a far smaller state space, and that space is highly decomposable. There are only so many doors in the building that can be open or closed, lights that can be on or off, HVAC zones that can be on or off, or elevators that can be going up or down or stopped at a floor, etc. And few of those states interact.
Software systems have astronomical numbers of states, and while most of the discipline of software engineering is about how to minimize unintended interactions between them while still producing the intended interactions, we still wind up with lots of the unintended kind.
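A back-of-envelope way to put numbers on that (the figures are illustrative, not from the comment above): with n independent binary components, the global state space is enormous but correctness still decomposes into n local checks; once components are allowed to interact freely, the combinations themselves have to be reasoned about.

```latex
\underbrace{2 \times 2 \times \cdots \times 2}_{n\ \text{independent components}} = 2^{n}\ \text{states, but only}\ O(n)\ \text{checks;}
\qquad
\text{with unrestricted interaction, up to}\ 2^{n}\ \text{combinations matter.}
```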
There's a lot more to the state of a building than its human interfaces. When the ambient temperature changes, the state of the building changes. When the wind blows, the state of the building changes. When it rains, the state of the building changes. As materials age, the state of the building changes.
We're just so much more familiar with these forces and states that we can reliably model and design for them, and then (as with your comment) not worry about them anymore.
We take it for granted that our buildings won't fall down in a storm. But the knowledge of how to do that had to be developed and standardized at some point.
For buildings not collapsing, the main thing that matters is that the structure's strength is larger than the applied forces. Edge cases can be solved by adding more material, or more intelligently by having enough redundancy that you still have enough strength even if a small number of components fail.
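In symbols (generic ones, not taken from any particular building code), that design rule is roughly just an inequality with margin built in:

```latex
R \;\ge\; \gamma \, L_{\max}, \qquad \gamma \approx 1.5\text{--}2\ \text{(a typical factor of safety)}
```

where R is the member's strength and L_max the largest expected load. Adding material or redundancy raises R without requiring a more precise model of the loads.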
This, of course, does not apply to software functionality - you can't fix bugs by "more CPU power". However, if you look in the places where you can apply this methodology - like cloud services - you find that they are indeed very reliable.
If the materials of software are for-loops or text files, I think one can say that "we" are familiar with them.
A simple program, or a simple piece of a large one, is something one can be very familiar with.
Indeed, the particular parts are more predictable than particular parts of buildings, whose behavior changes over time, which literally rather than metaphorically wear out, which have to fulfill a number of functions simultaneously, etc.
So I think it is ultimately a matter of the state-space of the ingredients rather than a lack of familiarity with the materials.
I am not sure malleability is really the core issue... the core issue is that there are many different approaches to building a feature, with different trade-offs and costs that are not readily apparent at the outset. Not to mention that poor design up front causes a plethora of issues down the line if you need to scale it.
This reminds me of a metaphor: "If you want to know the maximum load of a bridge, you don't drive progressively larger trucks over it until it collapses then rebuild the bridge."
I think you're right that thousands of years of experience play a part. But, overall, that metaphor has had me thinking a bit about how much less predictive building software can be compared to engineering and wondering why that's the case.
Assuming building software and traditional engineering are about as complex, and assuming that engineering is easier to predict (construction deadlines slip too), I'm curious whether we can overcome fundamental issues like the halting problem to become as predictive.
> If you want to know the maximum load of a bridge, you don't drive progressively larger trucks over it until it collapses then rebuild the bridge
For most of human history, we basically did this. Bridges have only become very reliable in the past 100 years or so. Before that, bridges collapsed very regularly and people were very wary about going over a newly built bridge.
I think the reason designing a building is faster is that people's standards are lower.
Almost no one wants a building designed uniquely for their lifestyle. They don't even realize you could ask for such a thing. They just pick and choose from what they've already seen.
If that were true of software, it would be just as simple. But people keep asking for things no one has ever done before, exactly, and that leads to unpredictability. We keep seeing unique new software, so we are more likely to ask for the same.
The same is true for buildings when the architect is trying to do something new. Buildings could be just as interesting as software, but most people don't think to ask.
I think in the long term, buildings will be exactly as custom and complicated as software, and designing them will be just as difficult to estimate.
A lot of people hire architects precisely because they want a building designed for their lifestyle. No one goes to an architect and says "I'd like four walls, some rooms - I don't care what they do - and a roof. Can you do that?"
And the architect never says "Maybe. I wish I could be more specific, but it's just hard, you know?"
There's very little genuinely new in software. Even outside of the CRUD treadmill and corporate Java land, there isn't much of a leap between a Visual Basic application and an iPhone app. There are implementation and platform details, and lots of them. But the core concepts are recognisably similar.
The only real difference is that the tools keep changing - often for no good reason.
In architecture, stone is stone and concrete is concrete. In software, C++11 is not C++17, except for the bits that are, mostly, assuming you can find a toolchain that implements the differences properly.
Angular 1.0 is not Angular 2.0. Metal is not OpenGL, even though sometimes it smells like it. React is not jQuery is not a long list of other things, including Haskell, although you can bet someone somewhere is working on Category Theory as the definitive industry-changing conceptual model for MVC on web pages.
Most of the productivity costs associated with the constant churn are self-inflicted - the result of an industry more motivated by ADHD than by empirical analysis of which language and toolchain features make a real difference to getting shit done, and which are just unthinking tradition, random opinion, and noise.
From what I've been told, it's very rare for an architect to find a client who will let them actually creatively design a space for utility. Clients are primarily interested in appearance, surfaces, size, and to some extent layout. Very few will pay an architect to prototype new concepts, custom design amenities, etc.
Not that that's a good idea for most people... it's better for resale if you make a cookie cutter house. Most codebases will never be sold. Almost every house will.
If you had to resell codebases they'd probably be a lot more standardized.
That isn't the majority of people, though. Most of humanity lives in either apartments, cookie cutter developments, decades old homes they didn't build, or shanties. They don't get to choose, and customization is a luxury.