Full Silver Jacket
Last week, I listened to the (then) latest episode of .NET Rocks with guest Mark Seemann. Mark, Carl, and Richard discuss Fred Brooks’ 1986 essay, No Silver Bullet – Essence and Accident in Software Engineering, and Mark expresses his belief that, to a significant extent, Mr. Brooks got it wrong. (Or, at the very least, the conclusions no longer apply.)
As I wrote in my comment on the episode, I became frustrated at not being able to join the conversation, as I felt the participants disregarded some crucial points. I later discovered Mark’s article in which he deals with the topic more thoroughly, and, as he pointed out in his reply to my comment, he does address many of these points, although his viewpoints don’t always coincide with mine.
In this post, I’d like to elaborate on what I agree and disagree with Mark about, as I find the topic fascinating. (Alas, I felt I couldn’t possibly express everything succinctly enough for a mere comment.)
Note: Even though I partly disagree with Mark’s overall perspective, I find both his blog post and the .NET Rocks episode excellent and stimulating, and I’d definitely recommend them both. Having said that, I’ve tried to make this article self-contained with no prerequisite reading or listening.
Where’s the werewolf?
In No Silver Bullet, Mr. Brooks presents a relatively complex line of argumentation that leads to a grim conclusion: You shouldn’t expect technology- or process-driven order-of-magnitude improvements in software engineering productivity (in the decade following the mid-eighties). At the risk of (grossly) oversimplifying the reasoning, I’ll try to summarize (my understanding of) the gist here.
The idea is that software developers tackle two broad categories of problems: Essential and accidental. The theory originates in Aristotelian philosophy, and “accidental,” in this case, doesn’t mean “occurring by chance” but is closer to incidental. The “essence” is the distillation of the problem at hand that you can’t possibly simplify any further; the “accident” is the annoying stuff that comes as a result of the practical limitations and imperfections of the tools we use to fashion a particular solution. (For example, in a mathematical algorithm, the actual mathematical notation would represent the essence of the problem; dealing with the imprecision of floating-point types or integer range constraints would be part of the accident.)
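To make the parenthetical floating-point example concrete, here's a minimal Python sketch (illustrative only): the arithmetic itself is the essence of the problem, while working around binary floating-point representation is pure accident.

```python
import math

# Essence: the arithmetic we actually care about.
# Accident: binary floating point cannot represent 0.1 or 0.2 exactly,
# so a naive equality check fails.
print(0.1 + 0.2 == 0.3)              # False -- accidental complexity leaking through

# Accidental workaround: compare within a tolerance instead.
print(math.isclose(0.1 + 0.2, 0.3))  # True
```

Nothing about the underlying problem ("add two tenths and compare to three tenths") demands the tolerance check; it exists only because of the implementation platform.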
I like this theoretical basis but find the vocabulary confusing: Instead of essential and accidental, I prefer to think of the problems as conceptual and technical. Conceptual (or business) problems are rooted in the problem domain (i.e., the real world), whereas technical ones stem from a given implementation platform.
Mr. Brooks posits that only accidental (i.e., technical) problems are amenable to order-of-magnitude increases in productivity as a result of technological and process advancements. Further, as most of these problems had been eliminated (as of the mid-eighties), one should expect no more such large-scale improvements. (Please note that he defines “order-of-magnitude” as tenfold or better.)
Simply put: Dealing with the “messy real world” is what makes software development hard these days, and you shouldn’t expect any breakthroughs in that department; there won’t be any silver bullets.
If this isn’t silver, why is it so shiny?
Mark doesn’t disagree with the classification of problems (essential/accidental; or, in my nomenclature, conceptual/technical). In his opinion, however, Mr. Brooks (brutally) underestimated the amount of time and effort developers spend grappling with technical (accidental) problems, as opposed to conceptual (essential) ones – not only back in 1986 but even today. (Incidentally, I acknowledged this point in my original comment on the .NET Rocks episode – before I discovered and read Mark’s article.)
He argues that accidental/technical problems still take up a significant proportion of most developers’ time – in his words, “accidental complexity abounds” – and that technology-based order-of-magnitude improvements to productivity – i.e., “silver bullets” – can and do exist. I think I’d agree with some of Mark’s description but not necessarily his conclusion. Before I get to that, though, I’d like to recap and comment on the (candidate) silver bullets he identifies.
World Wide Web
The idea is that back in the 80s, 90s, and early 2000s, when you encountered a technical obstacle (issues with the compiler, framework, libraries, operating system, etc.), you could be stuck for days, weeks, or months on end. Sometimes, you’d have to abandon a given component and use a different one, as you simply couldn’t figure it out no matter what. Today, you’ll likely find a solution within minutes by “Googling” (or “Binging”) your problem. (Obviously, you’ll find all the answers on Stack Overflow – duh!)
Of all the bullets proposed, I think this is easily the “silverest” one – I agree and won’t question Mark here (much) 😊
Automated testing
The classic (and compelling) argument for automated testing is that it gives you a “safety net”: If you’ve changed something and all the tests (whether unit or integration tests) still pass, you know you haven’t introduced a bug (i.e., caused a regression). That, in turn, should give you the confidence to add and release features and bug fixes faster, thus increasing your productivity. I get it, but for reasons I talk about in detail in Quality Assurance Histrionics, I think this is an idealized, somewhat unrealistic vision.
Test suite quality is, unfortunately, completely unmeasurable; code coverage is a necessary but not a sufficient condition of meaningful coverage; it can only prove the absence of quality, not its presence. Especially as a new person on a project, you have no easy way of telling whether its test suite is any good, other than examining it. (I.e., you have to adopt an interpretivist/qualitative approach, which can be far from straightforward.)
Writing tests takes time (and time is money). Opinions vary, but if you want to do it well, you’ll likely write at least as much test code as production code – not only that, you’ll also have to maintain this code in the future. To be fair, Mark acknowledges this in his article, but I think he underestimates the cost of this undertaking (and he’s not the only one). In theory, this investment should pay for itself in the long run – but that assumes there is a long run. (In the words of John Maynard Keynes, “in the long run we are all dead.”) Is it, then, legitimate to question automated testing’s ROI rather than accept its value religiously?
As a contractor/consultant, I’ve seen a lot of codebases, and, in my experience, most test suites leave something to be desired – to put it diplomatically. I think there’s a lot of art to writing useful unit tests, and, unfortunately, I’m afraid relatively few developers can do it well. Anecdotally, I’ve found (automated) integration/system tests more valuable than unit tests, but those typically require a dedicated team.
In my opinion, automated testing can help you achieve higher quality but won’t necessarily lead to increases in productivity – certainly not “order-of-magnitude” increases.
Note: Don’t take my word for it – Dave Thomas, one of the authors and original signatories of the Manifesto for Agile Software Development, appears to question automated testing in Agile Is Dead. (The entire talk is excellent!)
Git
Mark seems to believe that distributed version control systems (Git in particular) represent an order-of-magnitude improvement over centralized version control systems (such as TFVC or Visual SourceSafe). I’ve used Git for some time now, and I love it. I’d say it’s infinitely better than using no version control at all (or Visual SourceSafe) but not that much better than, say, TFVC; at least for small teams, I find TFVC perfectly adequate. (And no, Mark, you certainly shouldn’t end up with “lots of code that was commented out” – any version control should help you with that 😊)
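As a sketch of why version control makes commented-out code unnecessary (a throwaway demo with a hypothetical file name, using standard Git commands): code you delete outright stays recoverable from history, e.g., via Git’s “pickaxe” search.

```shell
#!/bin/sh
set -e
# Throwaway demo repository in a temporary directory.
cd "$(mktemp -d)"
git init -q
git config user.email demo@example.com
git config user.name Demo

# Commit a (hypothetical) function, then delete it outright -- no commenting-out.
printf 'int CalculateTotal() { return 42; }\n' > Demo.cs
git add Demo.cs
git commit -qm "Add CalculateTotal"
: > Demo.cs
git commit -qam "Remove CalculateTotal"

# The pickaxe (-S) lists every commit that added or removed the string,
# so the old code is one "git show <commit>:Demo.cs" away.
git log -S CalculateTotal --oneline
```

The same is true of centralized systems – which is rather the point: keeping dead code in comments duplicates, poorly, what any version control already does.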
Garbage collection
I agree with Mark 100% here – he calls it significant but “hardly a ten-fold improvement in productivity.”
Agile software development
This one’s interesting, in my view, in that it has the potential to address not just the accident but also the essence. (I.e., it doesn’t just deal with technical problems but also conceptual ones.) Specifically, it’s the “iterative” nature of agile development that could help us address this. (Faster feedback enables you to understand the problem domain better.) The difficulty is that finding the right mindset is hard. I’ve seen too much adoption of what I call “cargo cult agile.” People don’t change much about what they do (or how they do it), but they start using “agile vocabulary” to talk about it (and they have daily standup meetings and biweekly retrospectives and whatnot). (Do watch/listen to Agile Is Dead!)
Note: Git, garbage collection, and agile software development are among Mark’s “honorable mentions.”
Statically typed functional programming
Mark doesn’t identify this as an existing silver bullet; rather, it represents his attempt at predicting the future. (I.e., this could be the “next big thing.”) Honestly, I don’t know enough about the subject to have an opinion on it. Mark admits that this is a wild guess and claims that “breakthroughs tend to be unpredictable.” The latter is a statement I’d sign in blood 😊
CI/CD
The article doesn’t mention CI/CD, but Mark, Carl, and Richard do talk about it. Not least because it facilitates “iterativeness,” I think the potential is enormous. The only thing that’s stopping me from leaping for joy (apart from my bad back, of course) is that I believe this has only become indispensable as the software we build has grown in complexity. In other words, this, to an extent, represents a new solution to a new problem, which throws the overall gains in productivity into question. (Mark acknowledges this counterargument at one point in the podcast.)
I recently helped one of my clients maintain a custom WPF-based computer-aided engineering (CAE) application. The product itself is relatively (very) complex – by most .NET applications’ standards – and, in my opinion, well-architected. Given that it’s one of those now rare programs that run on the engineers’ workstations and don’t require integration with a vast ecosystem, though, they’ve never automated the deployment/delivery process and instead publish it to their ClickOnce server manually. Would it be nice to automate this (and move away from ClickOnce)? Absolutely. Should they? Hell, yeah! But I agree that even with a fairly aggressive release schedule (say, once a week), it doesn’t create a significant bottleneck, which is why they’ve never prioritized it. I would never for a millionth of a second consider this approach for a cloud-based web application of any size.
Open-source software
Richard brought this one up, and while Mark appeared to disagree, I believe this came as a result of a misunderstanding. Contrary to Mark’s interpretation, Richard didn’t talk about the productivity of open-source software development; he seemed to be alluding to the boost to software development productivity in general enabled by the existence of open-source frameworks and libraries. I, for one, think this is another huge one 😊
Did you miss me, honey? With every bullet so far!
I’ve critiqued each of Mark’s alleged silver bullets to make a straightforward point: I agree that many/most/all of them are fantastic, but I don’t think they represent “order-of-magnitude” (or better) productivity improvements – I guess I’m more of a pessimist 😊
That said, I think there’s a more fundamental reason why I don’t share Mark’s view (that the tenet of No Silver Bullet is wrong/outdated): I think my understanding of productivity differs from his.
At one point in the talk with Carl and Richard, Mark asks, “what happens if you shoot a silver bullet in the foot?” I don’t have an answer, but I do have a related question: What happens if you discharge a silver bullet and miss your target (or hit the wrong one)? (In the words of Peter Drucker, “management is doing things right, leadership is doing the right things.”)
You can measure productivity by the number of “software artifacts” produced/released per “man-month” or the length of your release cycle (as Richard suggests – the shorter, the better). I find this perspective too “programming-oriented,” as opposed to “engineering-oriented.” It fails to tell you what business or domain value the software creates. As value is notoriously hard to define (and context-specific), let alone measure, this debate is bound to remain subjective. (So is Mark’s argument – by his own admission.)
I agree that software developers spend a lot of time working on accidental/technical problems. But while the “essence” of the problem may, from that perspective, appear small, I believe it’s also way more consequential. Many people have tried to figure out why software projects fail. My impression has been that they typically conclude that the reasons aren’t primarily technical (i.e., accidental). (Take a look at Why software fails?, for example – I know it’s dated, but I believe it’s still relevant, and it’s consistent with virtually everything I’ve read on the subject, as well as my own experience.)
You might argue that I’m stretching the definition here, but Mr. Brooks explicitly talks about “software engineering” productivity, not “programmer” productivity.
Note: If you find my standards a tad too high, I’d like to point out that Mr. Brooks uses advances in hardware as a de facto benchmark. One way to summarize his essay would be ‘do not expect a software equivalent of Moore’s law,’ even though he doesn’t use that term. (A direct quote: “We cannot expect ever to see twofold gains every two years.”)
Coders and pilots
In his response to a reader’s comment, Mark compares software development to piloting an airplane. (Actually, the commenter, Karsten Strøbæk, brings this up initially.) To Mark, software development seems “orders of magnitude more complex” than flying. As a consequence, the latter “is extremely safe … predictable, and manageable,” unlike the former.
OK, Mark – I agree! But why is that? What makes software development so much more complicated than flying?
Here’s my answer: Unlike most software development projects, commercial flying, although extremely challenging, has well-defined requirements – functional (origin & destination, departure & arrival date/time, etc.) and non-functional (safety, security, etc.). As a result, a repeatable, “one right way” to fly has emerged – now we can just keep executing it “cookie-cutter” style.
Software requirements are usually a nebulous hodgepodge of heterogeneous goals, wishes, and dreams. We keep having to invent novel solutions to novel (ill-defined) problems. In other words, the essence of software development is what makes it difficult.
The anecdotal evidence I’ve assembled as a software development contractor/consultant, as well as a user of software, tells me that software projects and products still underachieve: They run over budget, don’t meet user expectations and deadlines, or outright fail. Productivity is still lacking.
The underlying reasons for this deficiency aren’t, in my opinion, technical/accidental, even though technical problems take up the bulk of most software developers’ time. Instead, they’re conceptual, rooted in the “messy real world” – the essence. In that regard, I deem No Silver Bullet as relevant in 2020 as it was in 1986.
Perhaps you believe that only bad teams fall short – because they fail to adopt the silver bullets that do exist. But a silver bullet should, by definition (“a quick solution to a difficult problem”), be easy to employ. If it’s such a no-brainer, why doesn’t everyone do it?
One final note: If you’re frustrated by naysayers who keep repeating “there’s no silver bullet” as an excuse to dismiss new ideas, I suggest the following response: A silver bullet is an order-of-magnitude improvement – tenfold or better; are you telling me that, say, a fivefold boost ain’t worth it for you?