I only recently started experimenting with nullable reference types in C#. (Yes, I’m a laggard.) I decided to convert an ASP.NET Core (API) project to use the feature. So far, I have succeeded in getting rid of all the warnings that swamp you the first time you enable the setting, and, overall, my experience has been very positive. In this post, however, I’d like to make two observations:
- Some values may be required and still missing (i.e., null). I find this problem hard to avoid when dealing with external systems, such as HTTP clients (or even databases); as a result, I caught myself possibly overusing the "overrides" (such as the null-forgiving operator, `!`) that let you effectively suppress the compiler's well-intentioned cautionary voice
- When converting my application, I realized that I (we) often, perhaps subconsciously, fall back on default values in value types, and I started questioning the practice
I love the new feature. But I also wonder if it could benefit from a few accompanying runtime checks. At the risk of possibly (i.e., without a doubt) embarrassing myself, I’m going to show one way this could be accomplished by introducing a new C# construct that, if it existed, would (at least partially) address my concerns – and maybe others.
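To make the first observation concrete, here is a minimal sketch of the kind of code where the compile-time guarantees and the runtime reality part ways. The `Product` DTO and `NullDemo` names are hypothetical, invented for illustration; the pattern (silencing warning CS8618 with `default!` and then receiving incomplete JSON from an external system) is the general one, not necessarily the exact code from my project.

```csharp
#nullable enable
using System;
using System.Text.Json;

// A hypothetical DTO: the compiler treats Name as non-nullable,
// but a deserializer can happily leave it null at runtime.
public class Product
{
    // "default!" silences warning CS8618 (non-nullable property left
    // uninitialized) without actually guaranteeing a value.
    public string Name { get; set; } = default!;
}

public static class NullDemo
{
    public static string Run()
    {
        // JSON from an external system that omitted the "required" field.
        Product? product = JsonSerializer.Deserialize<Product>("{}");

        // The compiler is satisfied, yet Name is null here: this is the
        // gap between compile-time annotations and runtime reality.
        return product!.Name ?? "(null slipped through)";
    }
}
```

Note that both `!` operators above are exactly the kind of "override" I mentioned: the compiler takes my word for it, and nothing at runtime checks whether my word was good.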
To illustrate my points, in a distilled form, I built a simple demo client/server API application. You can find the source code on GitHub. It’s a solution with two console app projects (server and client). Although the server is an actual HTTP server (and the client an actual HTTP client), I decided to implement everything manually, using HttpListener and HttpClient, as opposed to relying on a framework (e.g., ASP.NET Core). The “product” also contains a rudimentary in-memory repository as a stand-in for an actual persistence mechanism. I made these decisions for two reasons:
- I wanted to keep things simple with fewer dependencies (web server, database, etc.)
- Some of the experiments I present below wouldn’t work with the built-in ASP.NET Core JSON serializer, validation engine, Entity Framework Core, etc.
I recently became involved in an interesting exchange of opinions with Mark Seemann (and others) regarding No Silver Bullet – Essence and Accident in Software Engineering, a famous essay by Fred Brooks first published in 1986. I forced my way into the debate after I listened to a .NET Rocks episode that had Mark as its guest, and in which he questioned at least some aspects of the essay’s central tenet and its continued relevance. I later read his article, Yes Silver Bullet, where he elaborates on his ideas. (It predates the podcast.)
As I felt Mark (as well as Carl and Richard, the podcast hosts) had possibly missed some critical points, I wrote Full Silver Jacket as my response. The discussion didn’t end there: Mark kept it alive by writing Modelling versus shaping reality. In addition, we exchanged several comments 😊
In the latest installment to date, Mark steers the conversation to questions about the purpose of software and its relationship with the broader world. Specifically, he ponders whether software products should merely “model reality” and concludes that they should generally be a lot more ambitious: They should aim to actively alter it – shape it.
Last week, I listened to the (then) latest episode of .NET Rocks with guest Mark Seemann. Mark, Carl, and Richard discuss Fred Brooks’ 1986 essay, No Silver Bullet – Essence and Accident in Software Engineering, and Mark expresses his belief that, to a significant extent, Mr. Brooks got it wrong. (Or, at the very least, the conclusions no longer apply.)
As I wrote in my comment on the episode, I became frustrated at not being able to join the conversation, as I felt the participants disregarded some crucial points. I later discovered Mark’s article in which he deals with the topic more thoroughly, and, as he pointed out in his reply to my comment, he does address many of these points, although his viewpoints don’t always coincide with mine.
In this post, I’d like to elaborate on where I agree and disagree with Mark, as I find the topic fascinating. (Alas, I felt I couldn’t possibly express everything succinctly enough for a mere comment.)
Some time ago, I came across three articles that attempt to answer the question from this post’s subtitle. Okay, I lied: I found Eric Sink’s Do elite software developers exist?, which references the other two. I have since spent some time pondering this riddle, and I think I’ve formed an opinion of my own.
Several terms have been used to refer to these (purportedly) extraordinary individuals: Rock star developer, 10x developer, and elite developer – among others. I’ll (mostly) stick to the rather prosaic exceptional developer.
Every one of the articles I mention is well worth reading, and I certainly recommend that you do. I will, however, attempt to very briefly summarize the gist of each of them. I also think I’ve discovered a few deficiencies in their reasoning, which I’ll try to address. Finally, I’ll offer my own contribution to this debate.
Once upon a time, in a land not distant at all, my friend and I had the privilege of maintaining a legacy code base that neither of us had any previous exposure to. As you can probably appreciate if you have any experience with legacy code bases, some of the stuff wasn’t as clean and tidy as you might want it to be. A lot of people had obviously worked on the project. You’d find it difficult to discover the original intent behind many of the design decisions you’d encounter.
There was a suite of unit tests, but at first glance it didn’t look very comprehensive. To my surprise, however, having dabbled in the source code for a few minutes, my friend exclaimed: “The code coverage is actually pretty good – almost 100%!” He had used a tool that tells you what proportion of your code gets executed by your tests…
Once the initial excitement faded and I regained my composure, I realized that you had to give whoever developed the tests credit for one thing: Efficiency. One test of no more than a dozen lines covered thousands and thousands of lines of production code.
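For readers who haven't met this anti-pattern in the wild, here is a hedged sketch of what such a test might look like (the names `CoverageIllusion`, `ProcessEverything`, and `TheOneTest` are invented stand-ins, not the actual code from that project). The point is that a coverage tool measures which lines are *executed*, not which behaviors are *verified*:

```csharp
using System;

// A hypothetical "smoke test" in the spirit of the one described above:
// it drives a large amount of production code but asserts almost nothing,
// so a coverage tool happily reports near-100% coverage.
public static class CoverageIllusion
{
    // Stand-in for thousands of lines of production code with
    // deep call chains, branching, I/O, and so on.
    public static int ProcessEverything(string input)
    {
        return input.Length;
    }

    public static void TheOneTest()
    {
        // Every line of production code above gets *executed*...
        var result = ProcessEverything("sample");

        // ...but nothing about its behavior is actually *verified*:
        // a string's Length can never be negative, so this cannot fail.
        if (result < 0) throw new Exception("Impossible.");
    }
}
```

In other words, code coverage is a useful negative signal (uncovered code is definitely untested) but a nearly worthless positive one.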
In addition to being passionate about software development, I’m interested in, among other things, world politics, as well as economic affairs. When I recently came across Dealing with China: An Insider Unmasks the New Economic Superpower by Henry “Hank” Merritt Paulson, Jr., a former CEO of Goldman Sachs and later U.S. Treasury Secretary, I bought it on a whim and started reading it immediately.
I have found it reasonably insightful and engaging (as of this writing, I haven’t finished it), but I don’t intend to attempt to sell it to you here, nor do I want to review it. I’m bringing it up because of one specific episode in it, a relatively insignificant one (for the overall narrative), that made me think about situations that software developers (as well as other professionals) face, perhaps not on a daily basis, but still relatively frequently.
Note: I would like to assure all those with strong opinions on Mr. Paulson as a politician and businessman, whether negative or positive, that I really want this article to remain completely apolitical. The book is a memoir written by a man with a lot of experience, both as a senior executive of one of the world’s largest investment banks and later a high-ranking United States politician, and, as such, it sparked my interest, irrespective of what I or anyone else may think of the author. The truth is that my view of Mr. Paulson is completely neutral.
When you know your client is wrong…
A long, long time ago, in a dimly remembered past life, I had a client that I developed a software solution for. It was not your typical contract where the consultant gets paid an hourly or daily rate: I actually had the opportunity (and responsibility) to build and deliver the entire product, turn-key style, although the process was long-term, iterative and involved ongoing maintenance.
I essentially developed a custom sales support information system for these people. Apart from converting data from an organically grown database they had used previously, I had to do pretty much everything from scratch. Over the course of several years, the product grew to become quite large and sophisticated. I must admit that I was proud of my achievement – in a way, I think I still am :-)
Now, here’s the interesting part: The client was a global company operating in the medical devices industry, but my solution was built for and used by only a few local branches. These offices had no local IT departments (everything was handled centrally and by a few contractors), and none of the people I negotiated with appeared to understand some very basic software development concepts. And so it happened that our legal agreement didn’t stipulate that they “owned” the source code, nor did it require me to even allow them access to it.
I daresay that any longtime .NET developer must have had at least one dream (or nightmare) about the following two sentences: “Any public static (Shared in Visual Basic) members of this type are thread safe. Any instance members are not guaranteed to be thread safe.” If you have any experience reading the MSDN documentation, you know this piece of text is ubiquitous there – it denotes types whose instances aren’t safe for parallel access from multiple threads (surprise, surprise).
One such type is the generic Dictionary class. Exactly what implications does this “lack of thread safety” have for its instances? In a nutshell, in this particular case, concurrent read operations are safe, but simultaneous write attempts (i.e., addition, modification or removal of elements) could make your dictionary lose its internal consistency. Simply put, you could break it.
Today I would like to compare the Dictionary class and its thread safe counterpart, ConcurrentDictionary. I’ll start, however, by pretending the latter doesn’t exist and writing my own version of it.
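As a taste of the naive approach, here is a minimal sketch (not necessarily the implementation I arrive at below) of the obvious first idea: wrap a plain `Dictionary` and serialize every operation with a single lock. The class name `LockedDictionary` is my own invention for this illustration.

```csharp
using System.Collections.Generic;

// The naive "thread safe dictionary": one coarse lock guards all access.
public class LockedDictionary<TKey, TValue> where TKey : notnull
{
    private readonly Dictionary<TKey, TValue> _inner = new();
    private readonly object _gate = new();

    public void Add(TKey key, TValue value)
    {
        // Writers are serialized, so the internal state can't be corrupted.
        lock (_gate) { _inner.Add(key, value); }
    }

    public bool TryGetValue(TKey key, out TValue value)
    {
        // Readers take the same lock, even though concurrent reads on a
        // plain Dictionary would have been safe on their own.
        lock (_gate) { return _inner.TryGetValue(key, out value!); }
    }

    public int Count
    {
        get { lock (_gate) { return _inner.Count; } }
    }
}
```

This is correct but pessimistic: even pure readers contend for the single lock. `ConcurrentDictionary` does better precisely because it avoids this, which is part of what I want to explore.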
In the article that popularized the leaky abstraction concept, Joel Spolsky claims that “all non-trivial abstractions, to some degree, are leaky.” Although I don’t disagree at all, I also feel that some abstractions leak way more than others. Why is this? I’m sure it depends on a lot of things, but I have one specific theory that I would like to present today. Let’s start, however, by briefly exploring what the idea is all about.
According to Wikipedia, a leaky abstraction is “an implemented abstraction where details and limitations of the implementation leak through.”
Abstractions aim to allow their users not to worry about details not directly related to the problem they’re trying to solve. You use a library, because you want your application to send an email, not to deal with various idiosyncrasies of SMTP. The library’s implementers, in turn, want a guaranteed mechanism for packet delivery, which is why they rely on TCP/IP. The protocol, then, doesn’t need to know whether the data gets transmitted by means of electrical signals, optical signals, or radio waves, as that is the responsibility of the network card’s device driver. I skipped a lot of layers and tiers – but I believe you get the idea.
I recently listened (again) to some ancient .NET Rocks episodes (yes, I do like the podcast), and it got me thinking about regulation in the field of software development. In episode 774, Stephen Bohlen claims that “… somebody who calls themselves a software engineer involved in writing software for life safety systems is going to make a mistake, and that mistake is going to kill some group of people or badly injure them, and there’ll be a hue and a cry, and the industry will have lost its opportunity to regulate itself”. In episode 768, Jay Rockefeller, a United States Senator at the time, is quoted as saying “… all the apps folks, not just the big ones, but the little ones that just may have three or four people… but there are still hundreds of thousands of them, they’re pumping out apps … they’re totally unregulated… And so the question is, what do we do about that? Or what do you do about that? Or do you want us to do something about that? They have to be regulated …” Interesting conversations can also be heard in episodes 934 and 1094 with Uncle Bob Martin, and I’m sure much, much more on the topic exists in countless articles and podcasts.
How should, or perhaps how could, software development be regulated, if at all? I spent quite a number of hours thinking about this (while driving at the same time, endangering everyone around me), and I came to the tentative conclusion that it may not be possible or desirable to regulate the field as a whole. We may need regulation, but it should always apply to specific types of software, and it might and probably should vary widely depending on its “jurisdiction”. I also believe we might be better off regulating software “indirectly” – I explain what I mean by this below.
Look before you leap
For starters, no one in their right mind (with the possible exception of the Thought Police) could argue against people’s right to write software for fun. Now, once I have built an app for myself, who or what will prevent me from publishing it on my blog? I don’t have any personal experience using 1980s computers, but it’s my understanding that many programs written in BASIC back then were published in magazines, and you could manually type them into your Commodore 64. You generally had to pay for those magazines, as they were real, paper publications, and the authors, at least in some instances, were presumably paid for their work. In that case, they were making money writing software and should be subject to any hypothetical industry-wide regulation. Yet, such regulation, should it exist, would violate the First Amendment to the United States Constitution (freedom of expression), as well as the constitutional laws of any other Western democracy.