Four Philosophies behind Technology Choices

This week I have been watching some very enlightening introductory lectures on philosophy. I didn't know much about philosophy, so I found that by contrasting some of the philosophical theories, I gained an insight not just into the meaning of different belief systems, but also into the reasoning that underpins them. Highly recommended.

While posting to a discussion thread about my current technology stack, I thought about how philosophical theories could help to explain the rationale behind how different people make technology choices. Thus, I present to you: The Philosophy behind Technology Choices.

1. Utilitarian (Webformsianism)

A utilitarian believes that when asked to make a technology choice, the right tool is the one that maximises the result. They are much less concerned with the intrinsic qualities of the technology than with the results it can bring them. Technology is a means to an end.

A utilitarian will choose whatever technology they think will maximise their result, and have no qualms choosing "lesser" technologies like Web Forms or Silverlight. They aren't afraid of drag-and-drop tools, and they rarely stop to refactor code. If it ain't broke, don't fix it.

You can spot a utilitarian by their use of copious amounts of third-party UI control libraries and messy code organisation.
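
For illustration, here is a small, entirely hypothetical code-behind written in the utilitarian spirit (the page, control and connection-string names are made up): data access, validation and UI all live in the click handler, because it works and it shipped.

    using System;
    using System.Configuration;
    using System.Data.SqlClient;
    using System.Web.UI;
    using System.Web.UI.WebControls;

    // A hypothetical utilitarian code-behind: everything inline, nothing abstracted.
    public partial class OrderPage : Page
    {
        protected TextBox OrderIdTextBox;
        protected Label StatusLabel;

        protected void SaveButton_Click(object sender, EventArgs e)
        {
            // Open the database, run the update and report back, all in one handler.
            using (var connection = new SqlConnection(
                ConfigurationManager.ConnectionStrings["Main"].ConnectionString))
            {
                connection.Open();
                var command = new SqlCommand(
                    "UPDATE Orders SET Status = 'Shipped' WHERE Id = @id", connection);
                command.Parameters.AddWithValue("@id", int.Parse(OrderIdTextBox.Text));
                command.ExecuteNonQuery();
            }
            StatusLabel.Text = "Done.";
        }
    }

It isn't pretty, but to the utilitarian that is beside the point: the order got shipped.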

2. Paternalist (Redmondism)

Paternalists believe in the power of big vendors to provide. The reasoning behind their choices isn't based on subjective evaluation of the intrinsic qualities of each technology, but rather where it came from. While they may still need to make a choice from vendor solutions, their scope of possible options will be limited.

If asked to choose, for example, an ORM, a paternalist will see their options as being limited to Entity Framework, DataSets or the Enterprise Library Data Access Application Block. The paternalist would prefer to use poor-quality, slow, untestable technologies while they wait for the Vendor to provide in the next framework release. They may end up at the same conclusion as the other philosophies, but for different reasons.

You can spot a paternalist by examining their NuGet packages folder, which will only contain Microsoft-produced packages.
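
If you peeked inside a paternalist's packages.config, it might look something like this hypothetical listing (the package ids are real NuGet packages, but the selection and versions are purely illustrative): nothing that didn't ship from Redmond.

    <?xml version="1.0" encoding="utf-8"?>
    <packages>
      <!-- A hypothetical paternalist packages.config: Microsoft packages only. -->
      <package id="EntityFramework" version="4.2.0" />
      <package id="EnterpriseLibrary.Data" version="5.0.505.0" />
      <package id="Microsoft.Web.Infrastructure" version="1.0.0.0" />
    </packages>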

3. Libertarian (Githubianism)

The opposite of paternalists, libertarians reject the idea of heavy influence from the Vendor. They believe that the Vendor exists to serve them, rather than the other way around. They think developers should be self-sufficient and should be trusted to make good choices without needing to be coerced by the vendor. They see dependence on the vendor as dangerous.

To a libertarian, just as important as the technology choice is the freedom to choose in the first place. When no choice exists, they are happy to write their own. A libertarian would prefer to find open-source solutions to their problems before becoming dependent on a vendor.

You know you are looking at a libertarian because the only Microsoft-produced references in their project are for the .NET framework (and only because they haven't had time to test on Mono).

4. Kantian (Unclebobism)

Followers of Kantianism are the opposite of utilitarians. They believe that a technology choice is right because of the nature of the technology itself, rather than the results it achieves. That is, they believe that some choices are inherently right, and some are inherently wrong, and it doesn't always "depend". While the utilitarian sees technology as a means to an end, Kantianism followers see technology as both an end and a means.

Kantianism followers have a well-defined system for evaluating technology: the SOLID principles. They will choose ASP.NET MVC over Web Forms not because MVC is more productive (a utilitarian reason), nor because it is being hyped more by the vendor (the paternalist's reason), but because MVC is much closer to the SOLID principles, and is therefore right (the libertarian, by contrast, would choose an open-source alternative).
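
To make the Kantian evaluation a little more concrete, here is a small, hypothetical sketch (the IOrderRepository and OrdersController names are my own invention, not anything prescribed by SOLID): the controller depends on an abstraction rather than a concrete data access class, so it can be unit tested without a database - a property the Kantian would call inherently right, whatever the deadline says.

    using System.Web.Mvc;

    // A hypothetical illustration of dependency inversion (the D in SOLID).
    public class Order
    {
        public int Id { get; set; }
        public string Status { get; set; }
    }

    public interface IOrderRepository
    {
        Order Get(int id);
    }

    public class OrdersController : Controller
    {
        private readonly IOrderRepository orders;

        // The repository is injected, so the controller never news up a database
        // connection, and a test can hand it a fake repository instead.
        public OrdersController(IOrderRepository orders)
        {
            this.orders = orders;
        }

        public ActionResult Details(int id)
        {
            return View(orders.Get(id));
        }
    }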

Summary

When we look at the technology stack behind a project, it's important to understand that there are many possible motivations behind the technology choices. A choice might be justified by its expected results alone, by a preference for a vendor, by a preference for retaining control and a mistrust of vendors, or by inherent value that we judge to be in the technology regardless of the results. More often than not, any choice is going to be influenced by more than one of these factors.

Which of these theories had a role in your current technology stack? What does it look like? Did we miss any important motivators?

Welcome, my name is Paul Stovell. I live in Brisbane and work on Octopus Deploy, an automated deployment tool for .NET applications.

Prior to founding Octopus Deploy, I worked for an investment bank in London building WPF applications, and before that I worked for Readify, an Australian .NET consulting firm. I also worked on a number of open source projects and was an active user group presenter. I was a Microsoft MVP for WPF from 2006 to 2013.

04 Jan 2012

I think you forgot one: Objectivism (Pragmatism)

The understanding that all software is transient and fluid. Today's best practice is tomorrow's legacy. An implementation today is technical debt tomorrow. Software moves at a fast rate. Understanding when to use a vendor technology matters; blind disregard is simply bias rather than practicality, and it has implications given the aforementioned fluidity. Disregarding any of the properties of software that fall into each of your four classifications is a decision that I don't believe takes in a broader, more objective view. Of course, this could also be classified as "FenceSitter" :-)

04 Jan 2012

Nice post Paul.

I reckon there might be another group called "Maslowian" :)

I'm probably a mixture of all 4 of the aforementioned groups at different times. At work, I stick to a predominantly Microsoft stack but outside of work I tend to look for "the right solution" more often than not.

I'm also lazy though - and that's probably got more to do with why I would stay in 1 group longer than needed. What motivates me to explore different options from time to time is the concern (fear?) that my knowledge about a given area (e.g. version control) has shrunk and it is therefore time to regain knowledge so I act on that.

I call that Maslowian because I'm probably more concerned about getting stuck in a rut and losing the agility to maintain forward momentum.

04 Jan 2012

@Glav:

I love it :)

Today's best practice is tomorrow's legacy

While best practices and fads may come and go, is it possible to separate those from our underlying principles? For example, the principles of OO programming may evolve slightly, and we might get better at following them, but I'd say they haven't really changed for a very long time. A lot of software still suffers from too many layers of indirection, but there have been principles out there warning us against that for quite some time.

If we can isolate a set of core principles away from the hype and "best practices" - for example, writing (not generating) clean code, separation of concerns, doing one thing well, don't hard-code stuff - could we create a set of criteria that would last for at least a few decades and allow us to objectively evaluate a technology on its merit?

@Darren:

Nice addition. I was considering "CVian" - the people who use 'what will that look like on my CV?' as a driving factor in their technology decisions.

It sounds like your suggestion is different, though. Instead of controlling your decision when evaluating technology choices, it just triggers when you evaluate them. Is that a fair statement?

I guess as an example, would you be more likely to "choose" to commit to using Node.js on a project because of your "Maslowian" influence, or just more likely to spend a few days evaluating it, and dumping it if you decide it's not a good fit once you've learned enough?

04 Jan 2012

or just more likely to spend a few days evaluating it, and dumping it if you decide it's not a good fit once you've learned enough?

A bit like that yeah. It's about agility and adaptability. You don't want to become so set in stone that you can't adapt to change.

05 Jan 2012

Hey Paul,

Re: "If we can isolate a set of core principles away from the hype and "best practices" - for example, writing (not generating) clean code, separation of concerns, doing one thing well, don't hard-code stuff - could we create a set of criteria that would last for at least a few decades and allow us to objectively evaluate a technology on its merit?"

Absolutely we can, but more often than not these core principles seem like common sense and are quite obvious. You will also find unanimous agreement on most points. I think it gets to a point where it is really hard to separate ideals from practical usage, as they are often tied closer than we would like, because getting all the strings, smoke and mirrors to co-operate can often take a less than ideal approach. This of course does not mean we should not strive to adhere to them. Take code gen, for example. Everyone prefers clean code. Often though, the drudgery is monotonous and potentially error-prone, so people try to make it cleaner and repeatable, and template it, and code-gen it, and then it grows and... the cycle continues... ;-)

06 Jan 2012

@Glav, wouldn't the notion of one true best practice be "Platonism"? The effect of those practices resembles the blind watchmaker at work. The disruptive forces described by Dawkins were best kept at bay by stable, modular, reusable and coherent components. It seems to me that the same forces are at work in software development.

David Miller
06 Jan 2012

Hey Paul,

I suggest "Historicism" as a common technology philosophy:

What we've always done has served us well - any change going forward should be small and incremental, or not at all.

The general reluctance to change is something I've seen in many places, but thankfully not in my current job.

BTW - I hope London is treating you well.

vish
10 Jan 2012

Think I like Paul G's opinion. The best practice lies somewhere in the middle, taking into account the specifics of the particular project being worked on.

Clearly a light-hearted post, so I will resist the temptation to criticize, as that comes easily with oversimplification and generalizations.

I should read 'uncle bob'; he seems highly thought of. I opened his book at a random page once, and the particular topic seemed very naive, and almost dangerous. He was stating that every class that is written should implement all methods (and parameters/variants to such methods) that the programmer could reasonably expect might be needed in the future. I used to do this when I was at that dangerous point in time where I had about 5 years' experience out of university and was very confident in my abilities. Over the years since then I came to realize that I was wasting my employer's money doing this, and creating a lot of refactoring / dead code for the future. I flush red with embarrassment now that I think of all those times. I now operate under the '2nd person pays' principle, or YAGNI as I have heard it called. Perhaps the book is much better than that, but I was really disturbed by this particular point and I am not sure I could trust the guidance.

11 Jan 2012

I think I'm a paternalist myself; it has always saved me the overhead of thinking through my decisions when it comes to choosing company hardware.