A picture of me

Welcome, my name is Paul Stovell. I live in Brisbane and work on Octopus Deploy, an automated deployment tool for .NET applications.

Prior to founding Octopus Deploy, I worked for an investment bank in London building WPF applications, and before that I worked for Readify, an Australian .NET consulting firm. I also worked on a number of open source projects and was an active user group presenter. I was a Microsoft MVP for WPF from 2006 to 2013.

When I first started selling Octopus Deploy licenses, the initial orders came directly from small-medium companies, and most were paid using a credit card. Now, approximately a fifth of our orders are coming as purchase orders, and many of those now come from resellers buying software on behalf of corporate clients.

I didn't have any experience dealing with this before going into business for myself, so I've been making it up as I go along. It was all new to me, so perhaps some of it will be new to you, too. In this post, I'll share my current strategy for dealing with purchase orders and resellers. I'd also love your thoughts on how it could be improved.


Octopus Deploy Pty. Ltd. is an Australian company. Licenses for our software are sold online through FastSpring, a Californian company. Approximately half of our sales are to US customers, followed by the UK, then Australia, then other countries.


Usually the first step in the process is that an email comes to our sales email address, asking for a quote. Someone has heard of the product, tried it, decided to buy it, but before they can begin the process of purchasing a license, they need the price written on a quote that they can record in their system.

Our pricing model is simple and on the website, so there's usually no surprise in the price. We do provide discounts to some resellers (which I'll go into shortly) but overall there's nothing complex here.

What does the quote look like? Well, it's just an invoice, except it says "quote" at the top instead. Here's an example.

Quote PDF

I use Xero for accounting (a service I can't recommend enough, it truly makes bookkeeping fun), and initially I also used Xero to create the quotes. But each time I needed to set the customer up as a contact, then create the quote, and so on. Now, I just use a Word document template, and save it as a PDF.

Each quote has a unique quote number at the top. This number is generated by a very sophisticated algorithm:

  1. I type "ODQ", which is short for "Octopus Deploy Quote"
  2. Then I mash the numbers on my keyboard
  3. There's no step 3

I don't currently keep track of the quote numbers, since they don't really matter from my perspective.

W9 forms

The IRS require (some? all?) American companies to record information about companies they purchase software from, which is usually supplied as a W9 form. Customers will occasionally email me to fill in such a form before they can place an order.

The Form W-9, Request for Taxpayer Identification Number and Certification, serves two purposes. First, it is used by third parties to collect identifying information to help file information returns with the IRS. It requests the name, address, and taxpayer identification information of a taxpayer (in the form of a Social Security Number or Employer Identification Number). The form is never actually sent to the IRS, but is maintained by the person who files the information return for verification purposes.

Since the company is actually buying the software from our reseller (FastSpring), it's FastSpring's W9 form that they need. Sometimes I've been asked to provide a W8 form (which is used when purchasing from a non-US company), but this is because the customer assumes they are buying from us directly.

Purchase orders

Once the customer has a quote, they'll create a purchase order in their system. This usually gets emailed to us as a PDF. I manually enter those order details into FastSpring, and the customer gets an email to let them know the order is ready to be paid. I usually also save the invoice as a PDF and email it to the customer.

The invoice is created by FastSpring and usually looks like this:

Invoice example

Finally, while FastSpring do provide an option to enter purchase orders online (so there's no need for me to be involved) I find it's usually easier to just ask customers to email the purchase order to me, especially where resellers are involved.

Deliver before or after payment

Normally, when an order is placed, we don't send the license key until payment has actually been received. Most people pay by credit card, so their license key is generated and delivered within a few minutes of ordering. But customers using a purchase order normally expect to be able to wait 30 days or so before making payment, and they like to pay using check/money order.

When a purchase order arrived, I used to generate a 45-day trial license key to send to the customer manually. That way they could use the product in full, and by the time the trial expired, they would have their real license key because the order should have been paid.

This actually caused a lot of problems for large customers, because they need to record the fact that they're using a trial license of software for production deployments in their configuration management system. And if resellers are involved (more on that later), there might be confusion as to the terms of payment, so the payment process can drag on.

Then I found out that FastSpring provide an option to deliver the license key before payment is received:

Purchase order delivery options in FastSpring

So now, my process is to accept the purchase order and deliver the license key right away, trusting that customers big enough to use a purchase order will eventually pay. I haven't had any problems with this so far, and the customer is usually happier as a result.


A reseller is usually engaged to purchase software on behalf of a customer.

In an ideal world, resellers know about your product and are out there advertising it and promoting it to their customers. When they make the sale, they keep the difference between what you sell it to them for, and what they sell it to their customer for. Usually you might give them a discount to do this.

In reality, all of the resellers I've had have gotten in touch this way:

  • Bob is a developer lead, he learns how wonderful Octopus is for ASP.NET deployment, uses it, and wants his company to buy a license
  • The organization where Bob works has a policy that they only buy software through approved resellers
  • Bob asks his reseller to buy the license for him
  • Bob's reseller contacts me to ask for reseller pricing

Why does Bob's company have a reseller? It seems to be down to accounting: Bob's company would prefer to pay a handful of invoices for software purchases a month rather than hundreds (and I can't fault that). Resellers probably provide value to Bob's company beyond that when it comes to bulk purchases of more mainstream software, but for a small business like ours, that seems to be the extent of it.

When asked to provide reseller pricing, I point them to this page on partners and bulk licenses, which has a table of discounts:

  Licenses          Discount
  1 license         No discount
  2-9 licenses      10%
  10-14 licenses    15%
  15+ licenses      20%

We do offer discounts to anyone who buys multiple licenses. But for one-off licenses (even from resellers), we don't provide any discount. As explained in this Business of Software thread:

When a reseller is contacting you it is because a customer has requested your product. They are buying from you no matter what and there is no reason to offer a discount. This is particularly true for the larger resellers: they are not promoting your product, they just handle the order for the client.

So far we haven't had anyone decide not to go through with an order through a reseller because they didn't get discounted pricing. I don't know whether the resellers charge anything on top (which would make it more expensive to use the reseller than to buy direct), but I suspect that for small purchases like ours they probably don't.


I've learned that quotes, W9 forms, purchase orders and resellers are all part of the business of selling software. They're all pretty simple at the end of the day, though they can be somewhat time consuming to deal with. As my business grows I'll no doubt need to find more streamlined ways to handle them, but for now my current process seems to be working OK.

Do you have a suggestion on how the above could be improved, or a tip to share? Leave a comment in the box below.

A question on StackExchange is "What's the best platform for blogging about coding?", to which the accepted answer is, of course: make your own.

This blog was originally written by me, then rewritten, and that rewrite went on to become FunnelWeb. This weekend I rewrote it again, for the third time, using:

One of the concepts I was going for in this rewrite was to make the home page more of a profile and less of a plain old list of posts. I also wanted to make it more likely that I would blog something, as opposed to just Google+'ing it or Tweeting.

So for example, while the home page looks like this for you:

Home page

I see a box to compose posts quickly:


Is it better than Wordpress/Blogger/all of the other services out there? Probably not. But hopefully you'll see me blogging more frequently.

A few weeks ago I received the following bug report for Octopus from a customer:

We have recently rolled out Octopus deploy [..] and we are having severe memory leak problems on the tentacles. After only 1 deploy one of the tentacles had used 1.2GB ram (Yes gigabytes). The package was quite large - circa - 180MB - but there must be a serious leak somewhere. Other projects are having similar issues - up-to about 600 MB memory usage.

I put together a test harness and created a NuGet package containing a pair of 90MB files. The harness simply used the NuGet.Core library's PackageManager class to install the package to a local folder. 55 seconds later, and having used 1.17GB of memory (as measured by GC.GetTotalMemory(false)), NuGet had finished extracting the package.
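For the curious, the harness boiled down to a GC.GetTotalMemory measurement wrapped around the install call. A rough sketch of the measurement approach (the NuGet.Core call itself is shown only as a comment, since package names and paths here would be made up):

```csharp
using System;

public static class Harness
{
    // Measure roughly how much managed memory an action leaves allocated.
    public static long MeasureManagedBytes(Action action)
    {
        GC.Collect();
        var before = GC.GetTotalMemory(true);
        action();
        return GC.GetTotalMemory(false) - before;
    }

    public static void Main()
    {
        var used = MeasureManagedBytes(() =>
        {
            // The real harness did the install here via NuGet.Core, roughly:
            //   var repo = PackageRepositoryFactory.Default.CreateRepository(source);
            //   new PackageManager(repo, installPath).InstallPackage("MyLargePackage");
        });
        Console.WriteLine("Approximate bytes allocated: {0:N0}", used);
    }
}
```

Nothing clever: collect first, measure, act, measure again.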

The good news is that given time, the memory usage reduced to normal, so the GC was able to free the memory (though much of it stayed allocated to the process just in case). The memory wasn't being leaked, it was just being wasted.

Weirdly, the NuGet API is designed around streams. And the System.IO.Packaging classes which NuGet depends on are also designed around streams. Looking into the implementation, the problem seemed to be down to NuGet.Core's ZipPackage class.

When you ask a NuGet.Core ZipPackage to list the files in its packages (GetFiles()) the implementation looks like this:

private List<IPackageFile> GetFilesNoCache()
{
    using (Stream stream = _streamFactory())
    {
        Package package = Package.Open(stream);

        return (from part in package.GetParts()
                where IsPackageFile(part)
                select (IPackageFile)new ZipPackageFile(part)).ToList();
    }
}

The ZipPackageFile constructor is being passed a PackagePart, which exposes a stream. But what happens with that stream?

public ZipPackageFile(IPackageFile file)
    : this(file.Path, file.GetStream().ToStreamFactory())
{
}

The ToStreamFactory call looks innocuous, but here's the implementation:

public static Func<Stream> ToStreamFactory(this Stream stream)
{
    byte[] buffer;

    using (var ms = new MemoryStream())
    {
        stream.CopyTo(ms);
        buffer = ms.ToArray();
    }

    return () => new MemoryStream(buffer);
}

That's right - it's reading the entire stream into an array, and then returning a new MemoryStream populated by the array anytime someone requests the contents of the file.

The reason for this appears to be that while System.IO.Packaging is designed to use streams which need to be disposed, the NuGet API and classes like ZipPackage are intended to be passed around without needing to be disposed. So instead of opening/closing the .nupkg file to read file contents when required, it copies it to memory.
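For contrast, a stream factory doesn't have to buffer anything: it can simply reopen the file on demand, at the cost of requiring the .nupkg to stay on disk. A minimal sketch (the helper name here is mine, not NuGet's):

```csharp
using System;
using System.IO;

public static class StreamFactories
{
    // Each call to the factory reopens the file, so nothing is
    // held in memory between reads.
    public static Func<Stream> ToDiskBackedStreamFactory(string path)
    {
        return () => new FileStream(path, FileMode.Open, FileAccess.Read, FileShare.Read);
    }
}
```

The trade-off is that each caller must dispose the stream it receives, which is exactly the requirement the NuGet API was trying to avoid.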

This isn't a problem when your packages are less than a few MB, but it's pretty harmful when you're distributing applications.

After spending half a day trying to patch NuGet.Core to avoid reading the files into memory, in hopes of sending a pull request, I found that other people had also tried and been rejected - it seems like this is a problem the NuGet team plan to solve in an upcoming release.

Instead, I gave up and decided to write a package extraction function to suit my needs. This gist extracts the same file in 10 seconds using only 6MB of memory:

public class LightweightPackageInstaller
{
    private static readonly string[] ExcludePaths = new[] { "_rels", "package\\services\\metadata" };

    public void Install(string packageFile, string directory)
    {
        using (var package = Package.Open(packageFile, FileMode.Open, FileAccess.Read, FileShare.Read))
        {
            var files = (from part in package.GetParts()
                         where IsPackageFile(part)
                         select part).ToList();

            foreach (var part in files)
            {
                Console.WriteLine(" " + part.Uri);

                var path = UriUtility.GetPath(part.Uri);
                path = Path.Combine(directory, path);

                var parent = Path.GetDirectoryName(path);
                if (parent != null && !Directory.Exists(parent))
                {
                    Directory.CreateDirectory(parent);
                }

                using (var fileStream = new FileStream(path, FileMode.Create, FileAccess.Write, FileShare.Read))
                using (var stream = part.GetStream())
                {
                    stream.CopyTo(fileStream);
                }
            }
        }
    }

    internal static bool IsPackageFile(PackagePart part)
    {
        var path = UriUtility.GetPath(part.Uri);
        return !ExcludePaths.Any(p => path.StartsWith(p, StringComparison.OrdinalIgnoreCase)) &&
               !path.EndsWith(".nuspec", StringComparison.OrdinalIgnoreCase); // also skip the package manifest
    }
}

There are other parts of NuGet.Core that break when large files are used. For example, some NuGet repositories use SHA hashes so that consumers can verify a package download. This is implemented in NuGet.Core's CryptoHashProvider. The method signature?

public interface IHashProvider
{
    byte[] CalculateHash(Stream stream);

    byte[] CalculateHash(byte[] data);

    bool VerifyHash(byte[] data, byte[] hash);
}

Again, instead of passing the stream (even though the underlying crypto classes accept streams), it will just read the entire file (180MB in this case) into an array just to hash it.
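Hashing doesn't need the file in memory at all; the framework's crypto classes will happily consume a stream in small chunks. A sketch of the stream-based alternative (SHA512 is, as far as I can tell, NuGet's default hash algorithm):

```csharp
using System.IO;
using System.Security.Cryptography;

public static class StreamingHash
{
    // ComputeHash(Stream) reads the input in small buffered chunks,
    // so memory use stays flat no matter how big the package is.
    public static byte[] Sha512Of(string path)
    {
        using (var sha = SHA512.Create())
        using (var stream = File.OpenRead(path))
        {
            return sha.ComputeHash(stream);
        }
    }
}
```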

Here's hoping these problems are fixed soon. For now, at least, Octopus has a workaround that's only a few lines of code and performs much faster. As of Octopus 1.3.5, you'll be able to install large packages without dedicating most of your memory to it.

I'm working from home now, so staying productive is something I'm thinking about a lot. Because I am generous, I shall share my top five productivity improvement tips.

  1. Subscribe to every productivity blog you can find, and spend at least three hours a day reading top-X blog posts on improving productivity. Since the best tips come first, a lower value for X is better. Don't waste your time reading anything that can't be summarized in at most ten bullet points.
  2. Commit yourself to reading at least three books a week on productivity. Large books that really only have one central idea, usually made clear in the title, work best. Only buy books from well-known authors who make a living from running conferences on productivity improvement.
  3. Start every day by committing two hours to making a list of goals for the day. Start with the biggest and hardest items, to increase the chances of never having a 'win' in a single day.
  4. End every day by taking an hour to berate yourself for not achieving anything on your list. Remember: even if you achieved 90% of your goals for the day, you are a failure. The guilt will make you work harder tomorrow (you loser).
  5. Leave comments on other people's posts about productivity, to explain to them why they are wrong. Tell them how some simplistic system you learnt from a book is working wonders for you and they should give it a try. Ensure each comment contains at least 500 words. After all, you're so productive you have time to spare.

Gratuitous Dilbert comic

I hope this post has helped :)

App.net recently caused a stir by promising to create a social network that users pay to use. This is an interesting idea because it means that the network will make money directly from users, rather than through advertisers, so there is more incentive to make the network nice to use.

That said, I'm not sure how new this is. I've been "paying" to use social networks for some time. Not directly to the network, but indirectly through the costs of being on the network:

  • The cost of maintaining my profile and keeping it up to date (photos, current job, location, interests)
  • The cost of responding to friend requests and other emails/notifications
  • The cost of ensuring my privacy settings are set correctly and I am sharing content with the right audiences
  • The risk of my account being compromised and having to respond to events like resetting my LinkedIn passwords

I put up with these costs because there's some value I expect to get out of the social network. Although I deleted my Facebook account a while ago, I re-created it recently just to connect to some relatives, because that's valuable to me. I get value from Google+ in the interesting links people share, and I get a lot of value from Twitter due to the discussions that take place.

LinkedIn, on the other hand, is a social network that I could never figure out. For years, I kept my profile up to date, accepted friend requests, and put up with recruiter emails. I even joined a bunch of discussion lists, only to find they were really just recruiter hangouts too. I would log in and look through my stream, but the only interactions were "Joe is connected to Bob", or occasionally "Joe has a new job" (which, depending on who Joe was, I either already knew or didn't care about).

So after years of "paying" to be a member of LinkedIn, I asked myself, what value am I getting from this network? I have 306 connections, but what does that mean?


Real networking is about interacting with someone - discussing ideas, finding shared interests, learning about who they are and what they stand for. You can interact with many people in many places. Chat with them on Twitter or Jabbr. Join mailing lists that they frequent. Talk to them.

The kind of faux networking created by LinkedIn reminds me of when I was a six year old at school, and kids would ask each other "will you be my friend?". As adults, friendships come about implicitly because of interactions and communication, not from asking to be someone's friend and then accepting said friendship request.

After all, the primary piece of information on LinkedIn is your employment history. And who cares about employment history? Only employers and recruiters. Banks won't hire you unless you've worked in banking (because they only want people who will repeat the same mistakes over and over, not people who will make new mistakes), so they want to see that you worked at a bank in 2003. But the people you really want to network with - the programmer who could expose you to a cool concept in Erlang, or who could collaborate with you on your open source project - they aren't interested.

LinkedIn is a network built around "Joe now works for Acme". But I realized that where Joe works doesn't actually matter. I follow tons of interesting people on Twitter without knowing where they work, because it's their ideas that are interesting. I learn a lot from interacting with other people, not reading about what their job responsibilities were at some company I never heard of in 1998.

In fact, the more I think about it, the more I think LinkedIn is just a giant scam invented by recruiters to improve the chances of finding the right candidate. It's like every other job service/employment gateway, except there's a social graph to improve results. 99% of people get no value from it, but don't want to leave for Fear of Missing Out. So like me, they log on once a month to accept 34 meaningless "connections" from people they either already know or have never heard of, and delete a handful of spam, just for the one-in-a-million chance that this is the week that there'll be a job offer that involves writing Ruby on a tropical island and a six-figure signing bonus.

To cut the story short, I couldn't see how the benefits of being on LinkedIn outweighed the costs, so I deleted my account.

Bye bye LinkedIn

You might be one of the 5% that actually get value from LinkedIn. Maybe you do a lot of short term work, or you are looking for new opportunities (but between us, I think the people who you really want to work for are probably more interested in your blog and Twitter and GitHub than where you worked in 1998). I'm not, so I'm out.

The $50 fee to join app.net may be high, but it's not the only cost of joining a social network. The real question will be what kind of value will the $50 get me? What kinds of meaningful interactions will I get from the service?

(This started as a Google+ post, but since those words are wasted, I decided to blog it instead)

I do all of my work on a laptop, which I recently repaved for the new software Microsoft released last week. I thought it might be interesting to share the configuration.


At the moment I survive on the single LCD monitor plus laptop screen, though when I return to Australia I'll add a second 22" monitor. I've used more expensive keyboards from Logitech in the past but the Microsoft Wired Keyboard has a very nice feel to it when typing.

Here is my WEI rating. The graphics performance is normally much higher, but I'm only using the built-in Windows 8 driver instead of drivers from the graphics vendor because, well, they seem very crashy.

Windows Experience Index


Although I'm using Windows 8, I spend all of my time in the desktop, and haven't installed any Metro-style Windows 8 applications from the application store yet.

So far I've managed to avoid installing SQL Server or any older versions of Visual Studio. I think if that time comes I'll set up a virtual machine using VirtualBox.

What does your 2012 hardware/software configuration look like?

Visual Studio 2012 shipped last week, so I'm working on a branch in Octopus Deploy to upgrade the ASP.NET frontend of Octopus to ASP.NET MVC 4.0. My main reason for upgrading was the inbuilt bundling and minification support, which is awesome.

Octopus provides a RESTful API (browse to /api on your Octopus server), which serves up documents in JSON like this:

    {
      "Id": "DeploymentEnvironments-1",
      "Name": "Production",
      "Description": "A production environment",
      "SortOrder": 0,
      "Links": {
        "Self": "/api/environments/DeploymentEnvironments-1",
        "Machines": "/api/environments/DeploymentEnvironments-1/machines"
      }
    }
Web API wasn't stable when I first built the Octopus API, so I implemented it on top of ASP.NET MVC 3.0 using a custom ActionResult that serialized responses with Json.NET. It doesn't support content negotiation, so everything is JSON-only. When it came time to upgrade, then, I was also excited to try converting my REST API to ASP.NET Web API.
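For context, the custom ActionResult approach only takes a few lines. It looked something like this (a sketch, not Octopus's actual code; it assumes the Json.NET package is referenced):

```csharp
using System.Web.Mvc;
using Newtonsoft.Json;

// Returns any model as indented JSON. No content negotiation: JSON-only.
public class JsonNetResult : ActionResult
{
    private readonly object model;

    public JsonNetResult(object model)
    {
        this.model = model;
    }

    public override void ExecuteResult(ControllerContext context)
    {
        var response = context.HttpContext.Response;
        response.ContentType = "application/json";
        response.Write(JsonConvert.SerializeObject(model, Formatting.Indented));
    }
}
```

A controller action can then simply `return new JsonNetResult(model);`.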

The ASP.NET team has been focussed on "one ASP.NET" for a while. Scott Hanselman sums it up with a nice image, which I'll shamelessly hotlink:

One ASP.NET to bring them all and in the darkness bind them

ASP.NET MVC and Web Forms have definitely done a lot of work to integrate, but ASP.NET Web API, I think, has a long way to go.

For instance, you'll be familiar with routing in ASP.NET:

  routes.MapRoute(
      "Default",
      "{controller}/{action}/{id}",
      new { controller = "Dashboard", action = "Index", id = UrlParameter.Optional }
  );

You can also do routing in ASP.NET Web Forms:

  routes.MapPageRoute(
      "Default",
      "{controller}/{action}/{id}",
      "~/Dashboard.aspx"
  );

Of course, as one would expect, both of these build on top of a shared routing system provided by the green ASP.NET box at the bottom of the diagram.

Naturally, you can also do routing in ASP.NET Web API:

  config.Routes.MapHttpRoute(
      "DefaultApi",
      "api/{controller}/{id}",
      new { id = RouteParameter.Optional }
  );

You would expect this to be built on top of the same routing framework. Right? Right? Wrong!

ASP.NET MVC and Web Forms use classes like RouteCollection and the RouteBase abstract class. ASP.NET Web API on the other hand makes use of HttpRouteCollection and IHttpRoute. Of course, the reason they need a second copy becomes clear when you see the vast differences between the two.

For example, IHttpRoute has methods like:

public interface IHttpRoute
{
    IHttpRouteData GetRouteData(string virtualPathRoot, HttpRequestMessage request);
    IHttpVirtualPathData GetVirtualPath(HttpRequestMessage request, IDictionary<string, object> values);
}

While ASP.NET's routing RouteBase class has completely different methods like:

public abstract class RouteBase
{
    public abstract RouteData GetRouteData(HttpContextBase httpContext);
    public abstract VirtualPathData GetVirtualPath(RequestContext requestContext, RouteValueDictionary values);
}

My guess is that this duplication exists because ASP.NET Web API is designed to also operate outside of IIS. But if we're serious about "One ASP.NET", why not do it properly and make the whole stack run outside of IIS? (It can be done now actually, but it is very painful and breaks a lot).

The madness doesn't stop here. ASP.NET Web API's ApiController class doesn't inherit from the ASP.NET MVC Controller class. ASP.NET gives you a UrlHelper class to generate URIs from the route table, while ASP.NET Web API has its own, completely separate (and less pleasant) UrlHelper class. ASP.NET provides some well-known basic types like HttpRequest and HttpContext; Web API has its own alternatives. Filters are the same, but different. Both stacks have their own dependency resolver. Both have their own model binders. Both have their own model state/validation. The list goes on and on.

Here's a more correct image:

One more ASP.NET

While both frameworks work, it's hard to say they "work together". I'm finding myself creating a bunch of wrapper classes and adapters so that I can do basic things like generate URLs from either context. It all feels very messy.
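As an example of the kind of wrapper I mean, here's the shape of the thing (the interface and names are my own invention, not a framework API): application code depends on a tiny link-building interface, and each stack gets its own implementation wrapping its UrlHelper.

```csharp
using System.Collections.Generic;
using System.Linq;

// Application code asks this for links, never touching either UrlHelper directly.
public interface ILinkBuilder
{
    string Link(string routeName, IDictionary<string, object> routeValues);
}

// Illustrative implementation that expands a simple template. A real MVC-side
// implementation would delegate to System.Web.Mvc.UrlHelper.RouteUrl, and a
// Web API-side one to System.Web.Http.Routing.UrlHelper.
public class TemplateLinkBuilder : ILinkBuilder
{
    public string Link(string routeName, IDictionary<string, object> routeValues)
    {
        var segments = new[] { routeName.ToLowerInvariant() }
            .Concat(routeValues.Values.Select(v => v.ToString()));
        return "/" + string.Join("/", segments);
    }
}
```

It works, but needing this at all is the point: one routing system shouldn't require an adapter layer to talk to "itself".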

Right now, the only reason I can think of for moving to ASP.NET Web API is content negotiation, but making ASP.NET Web API generate nice-looking XML and JavaScript from the same model also seems to be a lot of work, so I'm not sure there's much to gain there.

(Part of me wonders whether content negotiation and a few REST idioms should just have been implemented in ASP.NET MVC, which would really obsolete the whole Web API project.)

If I could have one wish for ASP.NET 5.0, it would be to stop having "one ASP.NET", and instead to have one ASP.NET.

Prior to working full time on Octopus Deploy, I spent a year building a risk system using WPF, for traders at an investment bank. Before that I worked as a consultant and trainer, mostly with a focus on WPF. I've lived and breathed the technology for the last six years, and in this post I'm going to share some thoughts about the past and future of WPF and the XAML-ites.

Six years ago, I wrote an article about validation in WPF on Code Project. I also wrote a custom error provider that supported IDataErrorInfo, since, would you believe, WPF in version 3.0 didn't support IDataErrorInfo. Later, I worked on a bunch of open source projects around WPF like Bindable LINQ (the original Reactive Programming for WPF, back before Rx was invented) and Magellan (ASP.NET-style MVC for WPF). I was even in the MVVM-hyping, Code Project-link sharing club known as the WPF Disciples for a while.

As I look back at WPF, I see a technology that had some good fundamentals, but has been really let down by poor implementation and, more importantly, by a lack of investment. I'm glad those days are behind me.

Back in 2006, here's what the markup for a pretty basic Window looked like (taken from an app I worked on in 2006):

<Window x:Class="PaulStovell.TrialBalance.UserInterface.MainWindow"
  xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
  xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
  Icon="{StaticResource Image_ApplicationIcon}"
  Background="{StaticResource Brush_DefaultWindowBackground}">

I mean, look at all that ceremony! x:Class! XML namespace imports! Why couldn't any of that stuff be declared in one place, or inferred by convention?

Fortunately, it's now 2012, and things have come a long way. Here's what that code would look like if I did it today:

<Window x:Class="PaulStovell.TrialBalance.UserInterface.MainWindow"
  xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
  xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
  Icon="{StaticResource Image_ApplicationIcon}"
  Background="{StaticResource Brush_DefaultWindowBackground}">

Spot the difference? Of course not, it was a trick question; nothing has changed since 2006 that would have made that less verbose.

In contrast, here's what a web page looked like in ASP.NET in 2006 (also taken from a project in 2006):

 <%@ Page Language="C#" MasterPageFile="~/TrialBalance.Master" AutoEventWireup="true" EnableViewState="false" CodeBehind="Builds.aspx.cs" Inherits="PaulStovell.TrialBalance.MainWebsite.Builds" Title="Downloads - TrialBalance" %>
 <asp:Content ID="Content1" ContentPlaceHolderID="MainContentPlaceholder" runat="server">
  <asp:PlaceHolder runat="server" Visible="false" ID="_downloadAreaPlaceholder">
  </asp:PlaceHolder>
 </asp:Content>

What would that markup look like today?

@model BuildsViewModel

@section Main {
}
I originally became a WPF developer because I didn't like ASP.NET Web Forms and models like View State. But now, when I look back at the journey ASP.NET has taken, it has made huge changes. From the Web Forms model to the MVC model, from ASPX syntax to Razor, there's been some real innovation in the ASP.NET camp in that time.

Here's a list of things ASP.NET has done in six years that WPF hasn't:

  1. Created a new, human-friendly markup language (Razor). Razor makes writing markup fun. XAML has never been fun. In fact before ReSharper introduced the 'Import namespace' support for XAML, it was downright torture.
  2. Embraced design patterns. You can't claim MVVM here for WPF - WPF supports data binding, but the core of WPF doesn't actually contain a single feature that helps with MVVM; it's all layered on top through Blend Behaviors and third party frameworks. ASP.NET has an entire stack built on top of MVC.
  3. Embraced the pit of success. You can actually build a maintainable application using ASP.NET MVC using the default project template. In contrast, without third party frameworks, the default WPF project template is the path to misery.
  4. Embraced extensibility. Nearly everything in ASP.NET MVC has an interface or abstract class that you can extend to change how the framework works. There's a beautiful pipeline that you can plug into. I'd swear the WPF team never even heard of an interface, and the only abstract classes have internal constructors.
  5. Embraced open source. ASP.NET MVC bundles jQuery and JSON.NET, and it's designed to work with a ton of open source tools. WPF, despite the litany of MVVM frameworks, and despite it being impossible to develop maintainable WPF applications without one, still hasn't embraced any of them.
  6. Become open source. ASP.NET MVC was open source since early on, but now the entire ASP.NET stack is open source, and accepts contributions. WPF isn't, and frankly, you wouldn't want to look at the WPF code anyway; it's hideous.

On top of all of this, you've got the innovation that's happening on the web stack itself. Don't like CSS? Try Less or Sass. Don't like JavaScript? Try CoffeeScript or Dart. There's a rich ecosystem of innovation happening in the web space at the moment, innovation that simply hasn't happened in WPF since 2006.

Apples and oranges and all that

I'm not contrasting ASP.NET and WPF in an attempt to say ASP.NET is better; that would be ridiculous, since they clearly serve very different purposes. I'm simply trying to show how one has come so far in six years, while the other has barely changed at all. I think it's all down to a lack of investment.

What's disappointing is that WPF started out quite positively for its time. Concepts like dependency properties, styles, templates, and the focus on data binding felt quite revolutionary when Avalon was announced.

Sadly, these good ideas didn't get great implementations when put into practice. Dependency properties are terribly verbose, and could have done with some decent language support. Styles and templates were also verbose, and far more limited than CSS (when WPF shipped, I imagined there would be a thousand websites offering high-quality WPF themes, just like there are for HTML themes; there aren't, because it's hard).
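To show what "terribly verbose" means in practice, here's the standard boilerplate a single dependency property requires (the `RatingControl` class and `Rating` property are hypothetical names, just for illustration):

```csharp
using System.Windows;

public class RatingControl : DependencyObject
{
    // One line of intent (an int Rating property) costs roughly ten lines of
    // ceremony, with the property name and types repeated along the way.
    public static readonly DependencyProperty RatingProperty =
        DependencyProperty.Register(
            "Rating",                 // property name, as a string
            typeof(int),              // property type, again
            typeof(RatingControl),    // owner type, again
            new PropertyMetadata(0)); // default value

    // The CLR wrapper exists purely to forward to GetValue/SetValue.
    public int Rating
    {
        get { return (int)GetValue(RatingProperty); }
        set { SetValue(RatingProperty, value); }
    }
}
```

A language keyword, or even a code snippet shipped in the box, could have collapsed all of this to a single declaration.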

Data binding in WPF generally Just Works, except when it doesn't. Implementing INotifyPropertyChanged still takes way too much code. Data context is a great concept, except it breaks down completely with elements like ContextMenus, which sit outside the visual tree. ICommand was built to serve two masters: the WPF team, who favored routed commands, and the Blend team, who favored the command pattern; it ended up being a bad implementation for both.
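As a reminder of the INotifyPropertyChanged overhead, here's the minimal ceremony for one bindable property (the `PersonViewModel` class and `Name` property are hypothetical names for illustration):

```csharp
using System.ComponentModel;

public class PersonViewModel : INotifyPropertyChanged
{
    private string name;

    public event PropertyChangedEventHandler PropertyChanged;

    // Every bindable property needs a backing field, an equality check,
    // and a change notification carrying the property name as a string.
    public string Name
    {
        get { return name; }
        set
        {
            if (name == value) return;
            name = value;
            OnPropertyChanged("Name");
        }
    }

    protected virtual void OnPropertyChanged(string propertyName)
    {
        var handler = PropertyChanged;
        if (handler != null)
            handler(this, new PropertyChangedEventArgs(propertyName));
    }
}
```

Fifteen-odd lines per property, with a magic string that silently breaks bindings if you rename the property and forget to update it.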


And then there's the failure that is XAML. XAML is so verbose it's hard to imagine humans were ever supposed to write it. And that's because they weren't! In the land of lollipops and rainbows, designers were supposed to use Blend, developers were going to use the Visual Studio designer, and no one would ever look at XAML. Yet it's 2012, and even though Blend has improved, most people are still hand-writing XAML. This won't change in VS 2012.

The biggest failing of XAML wasn't that the tooling was bad, though; it was that the language was never modified to cope with bad tooling. And unlike HTML, XAML isn't semantic, and it isn't interpreted. It's a compiled serialization format, so there's no real separation between the markup and the implementation.

Here are half a dozen things, off the top of my head, that could be done to improve the XAML experience:

  1. Allow XML namespace imports to be declared at a project level rather than redeclared in every single file
  2. Allow binding events directly to methods instead of via commands
  3. Make the binding syntax shorter and more easily memorable
  4. Allow C# expressions like basic boolean logic instead of requiring converters all the time
  5. Allow a boolean to be implicitly converted to the tri-state (dual-state in Silverlight) Visibility enum without a converter
  6. Stop making me use XML prefixes for my own custom controls
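To make items 4 and 5 concrete, here's the dance required today just to hide a button based on a boolean (BooleanToVisibilityConverter ships with WPF; the IsAdmin property is a hypothetical view model member):

```xml
<Window xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
        xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml">
    <Window.Resources>
        <!-- The converter has to be declared as a resource... -->
        <BooleanToVisibilityConverter x:Key="BoolToVis" />
    </Window.Resources>
    <!-- ...and then named in every single binding that needs it. -->
    <Button Content="Delete"
            Visibility="{Binding IsAdmin, Converter={StaticResource BoolToVis}}" />
</Window>
```

All anyone ever wanted to write was `Visibility="{Binding IsAdmin}"`.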

The ASP.NET team were able to create an entirely new parser (Razor) for their platform; why can't even minor changes be made in WPF?


I can't begin to tell you how tired I am of hearing about this pattern, especially from ex-WinForms developers who think it's the bee's knees because they read Silverlight Unleashed and were amazed by MVVM Light.

The reality is that every WPF project I've been brought onto has involved some guy who thought he was smart enough to invent his own MVVM framework, only for it to be a half-baked knock-off of someone's CodeProject article. Every WPF project ends up with a ViewModelBase chock-full of inherited members for threading and progress bars and INotifyPropertyChanged. Showing a view in a dialog takes 20 times more code than it would if you just put the code in Button1_Click, and it's about as well tested: most people adopt MVVM because they claim it's testable, but hardly anyone actually writes unit tests for their view models, except the architects inventing the MVVM frameworks in the first place.

There's plenty of hype about MVVM out there, but the lack of platform support for it means that every WPF developer has to build a few bad, hard-to-maintain WPF applications before they figure out how to do it properly. That's a real shame.


Ultimately, as I look back over six years of working with WPF, I feel that it was a bunch of good ideas that weren't very well implemented. You could say the same about the first versions of ASP.NET (anyone remember Web Parts in ASP.NET 2.0?).

Only there's a big difference: ASP.NET evolved. The web stack evolved. Not only did the code change; the philosophies of the team changed. WPF on the other hand hasn't had a major change since 2006.

What's really sad is that there isn't an alternative. In the web world, ASP.NET competes with Ruby and PHP; if I don't like one, I can switch. On the Windows desktop, I'm pretty much stuck with WPF. And if WPF isn't evolving, that's a real shame.

You might enjoy working with WPF. You might think XAML is a beautiful, terse, fun-to-write language. That's how I felt in 2006, and if the platform is still enjoyable for you, that's great. It's going to be around for a long time, since there's really no alternative yet. But for me, I'm glad my WPF days are behind me, and that my full-time job is now ASP.NET-focused.

This post took quite a while to write, but the result is pretty cool. From my Octoblog:

In this post, I created an ASP.NET MVC 4 site, and checked it in to my hosted TFS Preview instance. Then, I used OctoPack to create NuGet packages from Team Build, using MyGet to host the packages. Finally, I configured Octopus to deploy the application to a staging machine, and two production machines.

Automated deployment from TFS Preview via Octopus Deploy

Today I installed the Windows Server 2012 RC, planning to test Octopus Deploy with it.

First question: How do I open Notepad?

Where is the start menu?


Unlike Windows 8, there's no start button near the bottom (the new-looking icon is for Server Manager). How do I open applications with this? See if you can work it out.