A picture of me

Welcome, my name is Paul Stovell. I live in Brisbane and work full time bootstrapping my own product company around Octopus Deploy, an automated deployment tool for .NET applications.

Prior to Octopus Deploy, I worked for an investment bank in London building WPF applications, and before that I worked for Readify, an Australian .NET consulting firm, where I was lucky enough to work with some of the best in the business. I also worked on a number of open source projects and was an active user group presenter. I've been a Microsoft MVP for WPF since 2006.

In April 2006 I received a Microsoft MVP award for WPF. It was mainly due to some popular articles I wrote on CodeProject when WPF was brand new, and to the user group presentations I was doing. At one point, I think I managed a user group presentation every month for a year. When working at Readify, I also put together and taught a two-day WPF training course.

I kept receiving the award each year until this year, when my re-award date came and went, and I noticed I'm no longer an MVP. I guess that's fair. After all, my last post about WPF was 18 months ago, when I blogged about how nothing has improved in WPF for the last 6 years. I'm probably not the best candidate for being a WPF MVP anymore.

The BizSpark membership for Octopus Deploy is coming to an end soon too, though, so this leaves me wondering: what am I going to do when I don't have an MSDN subscription anymore? Am I supposed to pay for this stuff?! :-)

This post is part of a little series I've been writing about selling software. In this post, I want to talk about the customer experience of using resellers vs. processing orders yourself.

The Digital River experience

This morning I went to fix a bug in our Java-based TeamCity plugin, when IntelliJ IDEA (the IDE that I use) told me that my trial license had expired. I like JetBrains, and I like their products, so I went ahead and bought a license.

After entering my details, I was taken to a payment page, where I was told my payment would be processed by Element 5:

My payment will be processed by Element 5

After placing the order, I got a receipt via email. Except instead of seeing Element 5, like I expected, it said MyCommerce, with "Thank you for placing an order with MyCommerce, an authorized reseller of JetBrains s.r.o...":

Apparently I actually purchased from MyCommerce

There was an invoice attached to the email, though, and it said Element 5, not MyCommerce, and it had Digital River underneath. Confused? So am I. There are so many brands and logos involved in my order that I'm left feeling pretty unsure about who I actually gave my credit card details to.

The invoice from Digital River and Element 5

Now, I understand that Digital River is a large company with a number of different brands that they use to process e-commerce payments (and I suspect that they rename these companies often because they develop not-so-positive reputations). But to me, this confusion around brand identity comes across as very amateurish.

I was in a hurry to get my bug fixed. So I scrolled down to find my license key in the email:

Your product will be delivered in 24 hours

At first I assumed the 24 hours was just a standard disclaimer, but as time went by and no license key came, I realized they were serious.

My license key arrived 5 and a half hours later

Eventually, five and a half hours later, my license arrived. I'm not sure why it took so long to email a license key. Perhaps their license algorithm relies on the unique entropy of Hadi Hariri doing the hammerdance? Who knows.

Having to wait for something that should be automated was disappointing. But that confusing invoice from a company suffering an identity crisis didn't leave me feeling like JetBrains particularly care about the experience of buying from them.

Now, the point of this post isn't to bash JetBrains (I really love their products!) or even Digital River. This experience just reminded me of why I moved away from using resellers.

While outsourcing to resellers might be attractive, and might even make commercial sense, the downside is that you no longer own the customer experience. JetBrains have earned my trust with their brilliant products, and they have a nice website that explains the features in detail and convinces me to buy. But the buying experience was impersonal, confusing and disappointing.

How I used to do it: FastSpring

For Octopus Deploy, I previously outsourced order processing to SWREG (another Digital River company), and then quickly moved to FastSpring. I wrote about how I take orders with FastSpring previously.

But in December I made a change, and decided to process the orders myself. I figured it would allow me to save a few thousand dollars a month in processing fees. And it would make reconciling with my accounting system easier too. But the real driving force behind deciding to process orders myself?

I wanted to own the customer experience.

I hated that FastSpring couldn't actually create quotes, so I had to issue them myself on their behalf (since the order would ultimately be paid to them). I hated that FastSpring's invoices couldn't be turned into PDFs very nicely, since many customers ask for PDF invoices (the invoice email just linked to a web page with the details). I hated that while my store was branded to look like my site, the invoice had FastSpring's color scheme and logo, and couldn't be branded. I hated that when large customers asked for extra documentation or tax forms, I had to send them off to someone else, because I wasn't actually the merchant of record for the transaction.

Now, FastSpring were great, and their support really is as good as they say. And I think at the time, they were the right choice: I didn't even know if anyone would buy my product back then, let alone have the time to implement order processing myself. And I'd recommend them to anyone else who is launching a product. But after 12 months with them, I decided it was time to grow up and begin to own the customer experience.

Current strategy

In December I launched an update to the Octopus Deploy website, and implemented a new order processing system. It is built using ASP.NET MVC and RavenDB, and hosted on EC2. It took about a week of work, but deploying it was easy :)

In our system, quotes, purchase orders and orders are all effectively the same thing:

  • A Quote is a request for pricing that doesn't represent a commitment to buy
  • A Purchase Order is a commitment to buy, that hasn't yet been paid
  • An Order has been paid

For payment methods, we accept Visa/Mastercard (through a merchant facility with NAB, and the Braintree gateway), or PayPal. PayPal also lets customers pay with credit card without registering, so we use that for AMEX and other cards.
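To make that model concrete, here's a rough sketch of how it could be represented; the names below are illustrative, not the actual Octopus Deploy code:

// Illustrative only: one document that moves through three states.
public enum OrderState
{
    Quote,          // a request for pricing; no commitment to buy
    PurchaseOrder,  // a commitment to buy that hasn't been paid yet
    Paid            // payment received; license can be issued
}

public class Order
{
    public string Id { get; set; }
    public string CustomerEmail { get; set; }
    public decimal TotalUsd { get; set; }
    public OrderState State { get; set; }
    public string PaymentMethod { get; set; }   // e.g. "Braintree" or "PayPal"
    public DateTimeOffset? PaidAt { get; set; }
}

A quote becomes a purchase order, and a purchase order becomes a paid order, without ever changing documents.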

Customers can come to the site and buy right away, or they can request a quote, turn it into a purchase order, and then eventually pay it. Here's how the order process looks at a high level (click to zoom):

Order processing system

When a customer requests a quote or places an order, a background task sends the quote/invoice to Xero, using the Xero API. We then download that invoice as a PDF, and it gets attached to the outgoing invoice email.
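The PDF step can be sketched roughly like this. This is not our actual code or the official Xero SDK, and it assumes an HttpClient that has already been authenticated against the Xero API; the useful detail is that asking for application/pdf gets you the rendered invoice back:

using System.Net.Http;
using System.Net.Http.Headers;
using System.Threading.Tasks;

// Illustrative only: fetch an invoice from Xero as a PDF so it can be attached to the email.
public static async Task<byte[]> DownloadInvoicePdfAsync(HttpClient xero, string invoiceId)
{
    using (var request = new HttpRequestMessage(HttpMethod.Get,
        "https://api.xero.com/api.xro/2.0/Invoices/" + invoiceId))
    {
        // Asking for application/pdf tells Xero to render the invoice using its branding theme
        request.Headers.Accept.Add(new MediaTypeWithQualityHeaderValue("application/pdf"));

        var response = await xero.SendAsync(request);
        response.EnsureSuccessStatusCode();
        return await response.Content.ReadAsByteArrayAsync();
    }
}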

We use three different invoice templates in Xero because quotes, purchase orders and paid orders all look a little different. Click below to see examples:

Quote:

Quote

Purchase order:

Purchase order, unpaid

Paid order:

Invoice, fully paid

The license key is also generated and sent immediately via email. But there’s more.

After placing an order, if you stay on the confirmation page, it will eventually refresh and show your license key and your invoice as a PDF. You don't even need to check your email. It's right there. We generate them in the background while you wait, but usually it only takes a few seconds.

We give you the license key immediately
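A rough sketch of how the confirmation page can do that (hypothetical names, not our actual implementation): the page polls a small status action every few seconds until the background job has produced both the license key and the PDF.

using System.Web.Mvc;

public class OrderStatusController : Controller
{
    readonly IOrderRepository orders;   // hypothetical abstraction over the order store

    public OrderStatusController(IOrderRepository orders)
    {
        this.orders = orders;
    }

    // Polled by the confirmation page; returns whatever has been generated so far
    [HttpGet]
    public ActionResult Status(string orderId)
    {
        var order = orders.Get(orderId);
        return Json(new
        {
            ready = order.LicenseKey != null && order.InvoicePdfUrl != null,
            licenseKey = order.LicenseKey,
            invoicePdfUrl = order.InvoicePdfUrl
        }, JsonRequestBehavior.AllowGet);
    }
}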

What I love about doing it ourselves

Firstly, I love that our invoices and license key emails all arrive immediately, and that they come from us, with our logos and no third parties. It feels clean and professional. And I love that customers don't actually have to wait for the email to get on with things.

Processing the payments ourselves is also great for cash-flow. With a reseller like FastSpring, we could only get paid fortnightly at the most. With our own merchant account, a customer can buy today, and the funds are cleared and ready to be spent tomorrow. And did I mention we’re saving a lot of money in transaction fees? (FastSpring was 5.9%, while accepting cards ourselves is 2-3%)

Finally, I love that everything pretty much reconciles itself. In Xero, as soon as a customer places a purchase order or pays an invoice, I can see it in my profit and loss statement. A few days later, when I import the bank transactions, I tick some boxes and the reconciliation is done. Our accounting system IS the invoicing system. Reconciling accounting with FastSpring and SWREG was much harder.

The downside

The one downside to not using a reseller is that actually accepting payments is harder.

FastSpring allowed us to fix prices in a dozen different currencies, accept pretty much any credit card, and even accept checks or bank transfers as payment methods (our US customers could mail a check to FastSpring, they’d clear it, and a few days later the order gets marked as paid).

Now that we’re doing it ourselves, we’re a lot more limited. To price in different currencies you have to actually have a bank account and merchant facility in that currency (currently in Australia, only NAB can provide this). We price our product in USD, so we have a NAB multi-currency facility and a USD bank account. We can’t accept AMEX in USD because AMEX are, frankly, idiots. We have to send customers to PayPal to use their AMEX cards.

And since our bank accounts are in Australia, it’s really hard for customers overseas to pay by bank transfer. It took about 7 emails for one of our customers in France to be able to pay us via bank transfer, and we lost about $50 in processing fees by the time it arrived.

So far, though, limiting the payment methods hasn't been that bad: sales have kept growing despite the reduced payment options, and I occasionally get emails from people telling me how much they enjoyed the purchasing experience, which never happened with the resellers.

Is it worth it?

If a small company like ours can do it, why hasn't a 300+ person company like JetBrains done it? Your Octopus Deploy license will be ready within seconds of ordering, yet it's been 6 hours since I ordered my IntelliJ IDEA license and I am still waiting. They have 300 employees; we have four.

When it comes to outsourcing, the standard advice is to outsource your non-core competencies. And while processing payments might not be my core competency, it is essential to the customer experience. Purchasing is the last part of your sales funnel, and it's too important to be left to the Digital Rivers of the world if you can help it.

Update: Sounds like JetBrains might be making some changes

I'm a developer by trade, and I've never had any interest in marketing. In fact, at a startup meeting some months ago, I gave a presentation about the business aspects of Octopus and confessed that I simply have no idea about marketing, which was met with laughter.

My attitude was that we'd just build an awesome product, and everyone would use it. The experts (including the hosts of Startups for the Rest of Us) always emphasise that marketing is more important than the product. But I didn't know anything about marketing; I wasn't interested in marketing. And I was fine with that. I figured we'd just hire a "marketing guy/girl" someday.

This year as I thought about my goals for 2014, I realized something:

That was a really dumb attitude to have.

Background

In 2011 I decided to transition to working on my own ideas, and Octopus Deploy was born. In 2012 it became my full time job, and in 2013 revenue had grown to a point where we were also able to bring Nick on full time. I'm really happy with how the last year has gone.

Revenue between 2012 and 2013 tripled, and I'd like to triple it again by the end of 2014. But I don't think that's possible if I continue my Sergeant Schultz approach to marketing.

Most of 2013 was spent on building Octopus 2.0, and I think the product that we have is awesome. My goal for 2014 is to focus on growing the business, by spending 60-80% of my time on growth-oriented tasks, rather than writing code (after all, Nick is way better at it than me).

I'm going to approach this goal in five phases.

Phase 1: Stop wasting money

I have a confession to make: I was spending about $2,500 a month on AdWords for most of the year, and I was too scared to tone it down in case sales dried up. Then a few weeks ago Josh Ames showed me how to see if those clicks were converting to anything in Google Analytics. How much revenue do you think AdWords contributed? A big fat zero.

AdWords might in fact be a good platform, but it turns out that if you don't take the time to measure and optimize the conversions from it, it can be a big black hole for money. So my first step has been to turn off all advertising until I figure it out properly. That was an expensive lesson.

Phase 2: Identify and measure the funnel

The sales funnel for Octopus Deploy looks something like this:

Our sales funnel

Visitors stumble onto our website. If we're lucky, we might convince them to join our mailing list or otherwise get in touch with us, so they become prospects. At some point, they might decide to install Octopus and give it a trial. If they are still using it after three days (and so are getting value from it) they become users. And at some point, they'll be using it enough to decide to buy a license.

The mistake I made in 2013 was to spend money trying to drive more traffic into the top of the funnel, without measuring or optimising the conversions into the subsequent stages. So, for each of these stages, I'm going to start measuring not just the number of people in that part of the funnel, but the conversion rate between them.

Aside: It's not exactly clear to me how to measure this yet: for example, to work out my trial-to-paid conversion ratio, do I divide the number of people who bought a license this week by the number of people who started a trial last week? Or do I use figures from just this week? I'm not really sure, but I'll figure that out.
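One possible approach, sketched with made-up names and numbers, is to compare this week's purchases against the trials that started a few weeks earlier, rather than against this week's trials:

// Illustrative: trial-to-paid conversion using a lagged cohort.
// trialsStartedByWeek[w] = trials started in week w
// purchasesByWeek[w]     = licenses bought in week w
// lagWeeks               = roughly how long people trial before buying
static double TrialToPaidConversion(int[] trialsStartedByWeek, int[] purchasesByWeek, int week, int lagWeeks)
{
    var cohort = trialsStartedByWeek[week - lagWeeks];
    return cohort == 0 ? 0 : (double)purchasesByWeek[week] / cohort;
}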

Since the data for each of these stages comes from different sources (visitors from Google Analytics, trials/customers from our license database, etc.) I'll probably just use a spreadsheet to track it.

Octopus Deploy users can opt in to anonymous usage tracking - I'm going to use that information to track abandonment rates, especially in the critical window between someone installing the product and still being a user a week later.

Phase 3: Optimize the funnel

Once I can measure what the funnel actually looks like, I'm going to look at strategies for optimising it. For example, how can I convince visitors to the site to become prospects? This will probably involve a lot of A/B testing on the landing pages, and adding resources like whitepapers and offering email crash courses.

By optimizing the conversion rate between each stage of the funnel I'm hoping to achieve some growth without actually driving any more traffic. Hopefully this will also result in a better product (e.g., how can we reduce the abandonment rate in the first 7 days? Perhaps more tutorials?).

Phase 4: Stay in contact

We have a mailing list with over 700 subscribers, but I only seem to email it once every 3 months when I remember to. Our Twitter account has a decent following but I rarely post to it. When people ask for a trial key, I don't email them to follow up 30 days later to see how it's going. When people use the free version of Octopus I don't even ask for an email address.

I'm going to focus on using these channels better:

  • Emailing our mailing list monthly like clockwork
  • Crafting really useful content for that mailing list rather than just "here's a new feature we added"
  • Setting up auto responder sequences when people trial or use the free version, to try and help them make the most of it

Phase 5: Amplify

Only once the funnel can be measured and has been optimized, will it make sense to try and drive more people into the top of the funnel. But I'm not just going to re-enable AdWords or try and get a mention on TechCrunch. The goal is to drive the right kind of traffic. That might mean experimenting with different ad networks that are more developer/DevOps oriented, as well as looking at other, non-paid forms of acquisition such as content marketing.

Importantly, I'm going to be measuring this rigorously and applying the same optimizations. There's no point driving traffic that isn't actually converting or isn't the right audience.

Conclusion

The most important thing I learned in 2013 is that "marketing" isn't just something you can spend money on and forget. Like anything worth having, it requires significant time investment and dedication. I'm going to dedicate this year to learning more about marketing and growing businesses, putting that into practice, and spending less time writing code.

I believe that we've built an awesome product, and it could be helping a lot more people to improve their ability to deliver working software to production successfully. I'm going to dedicate 2014 to finding and helping those people.

Last Saturday I went to DDD Brisbane. I was giving a talk on Octopus Deploy and TeamCity.

I caught four other talks that day:

  • Brendan Kowitz, who talked about Real-time Web Applications with SignalR
    The highlight was Brendan demoing SignalR integrated with Excel for real time pivot chart updates. Crazy.
  • Damian Maclennan and Andrew Harcourt who talked about Messaging patterns for scalable, distributed systems
    This was a good introduction for those who have some familiarity with messaging patterns but haven't put them into practice yet. The talk used a lot of good examples and anecdotes to help explain the differences between the patterns. Damian and Andrew also present well together; they're like a comedy duo. They also introduced their open source Azure service bus library to the world.
  • Scott Hanselman on What's new for Web Developers in Visual Studio 2013
    Scott is always entertaining, and he doesn't just present a bunch of demos - he tells a good story using demos to help narrate it.
  • Artem Govorov on JavaScript tracing, debugging, profiling made simple with spy-js
    Artem demonstrated a technology he developed, called spy-js, which uses a proxy server to rewrite JavaScript with instrumentation - the end result is similar to how code coverage works on C# code. Artem now works for JetBrains, and spy-js is being integrated into WebStorm.

One theme at the conference made me very happy. A few years ago, every conference I went to in Australia was all about "What's new in some Microsoft product"; very rarely did anyone speak about something they built. Watching Andrew and Damian present their Azure library, watching Artem present his own technology, and presenting Octopus Deploy ourselves, I noticed that people were finally talking about things they had built themselves.

"Look at what Microsoft built" presentations are important, but in the future, I hope to see a lot more of these "look at what I built" presentations. I think they are a hundred times more interesting.

12 months ago I blogged about how I'd left LinkedIn. At the time, LinkedIn as a service wasn't providing me with any value; it seemed like a resource for recruiters.

My LinkedIn profile

So why did I recreate my profile? Well, Octopus is growing, and I'm also starting to get help from contractors for short pieces of work. When I have good experiences, leaving a recommendation on LinkedIn seems like a good way to pay it forward.

I've been building out the new UI for Octopus 2.0 in Angular.js, which has been a fun process and something I hope to blog more about later.

Angular layouts/master pages and sections

Coming from an ASP.NET MVC background, one of the concepts I found myself missing was that of "sections". A solution I came up with was to use two custom directives:

  • octo-placeholder designates an area on the parent layout where content will be placed
  • octo-section designates content that will be added to the parent layout

My layout page looks like this (simplified):

<... snip ...>
<body>
  <div class='top'>
    <div class='tools' octo-placeholder='tools'></div>

    <div class='breadcrumbs' octo-placeholder='breadcrumbs'></div>
  </div>

  <ng-view></ng-view>
</body>

My view looks like this:

<script octo-section="breadcrumbs" type="text/ng-template" >
  <p>This is the breadcrumbs area {{ someVariable }}</p>
</script>

<script octo-section="tools" type="text/ng-template" >
  <a ng-show="loaded">Hello</a>
</script>

<p>This is the main content</p>

The directives that make this all work are:

.directive("octoPlaceholder", function(octoUtil, $compile, $route, $rootScope) {
  return { 
    restrict: 'AC',
    link: function(scope, element, attr) {
      // Store the placeholder element for later use
      $rootScope["placeholder_" + attr.octoPlaceholder] = element[0];

      // Clear the placeholder when navigating
      $rootScope.$on('$routeChangeSuccess', function(e, a, b) {
        element.html('');
      });
    }
  };
})

.directive("octoSection", function(octoUtil, $compile, $route, $rootScope) {
  return {
    restrict: 'AC',
    link: function(scope, element, attr) {
      // Locate the placeholder element
      var targetElement = $rootScope["placeholder_" + attr.octoSection];

      // Compile the template and bind it to the current scope, and inject it into the placeholder
      $(targetElement).html($compile(element.html())(scope));
    }
  };
})

Unlike solutions that use ng-include, the directives use the current view's scope rather than creating a new scope. This means that you can use bindings within the sections without a problem.

Hopefully this helps someone else!

Octopus Deploy uses X.509 certificates to secure communication between the central Octopus server and the remote agents running the Tentacle service. Upon installation, both services generate a self-signed X.509 certificate. An administrator then establishes a trust relationship between the two by exchanging the public thumbprint of each service's certificate with the other.

The trust relationship

This is a common security model in B2B applications, and it means both services are able to authenticate without exchanging a shared secret or password, or being on the same Active Directory domain.

But dealing with X.509 certificates on Windows is, well, a pain in the ass. It's the source of a lot of bug reports. In this post, I'm going to share what I've learned about dealing with them so far.


Tip 1: Understand the difference between certificates and PKCS #12/PFX files

In .NET, the X509Certificate2 object has properties for the PublicKey and PrivateKey. But that's largely for convenience. A certificate is something you are supposed to present to someone to prove something, and by design, only the public portion of the public/private key pair is ever presented to anyone. When an X.509 certificate is presented, the private key is never included. Having the private key property on the certificate object is a bit of a misrepresentation, especially since, as we'll see, there's a big difference in how the public and private keys are dealt with.

On Windows, a certificate file typically has a .cer extension, and it doesn't contain a private key. You create one like this:

File.WriteAllBytes("Hello.cer", cert.Export(X509ContentType.Cert));

Sometimes it's handy to export the X.509 certificate (which is the public stuff) and the private key into a single file. On Windows we typically use the .PFX extension, which is a PKCS#12 file. In C# we do it like this:

File.WriteAllBytes("Hello.pfx", cert.Export(X509ContentType.Pkcs12, (string)null));

If you are planning to persist a certificate and a private key into a string to store somewhere (like we do), then you can use that Export call above, giving you both the certificate and private key.
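For example, round-tripping a certificate and its private key through a string looks something like this (a sketch; pick the key storage flags to suit, as discussed in the next tips):

// Export the certificate and private key to a string...
var pfxBytes = cert.Export(X509ContentType.Pkcs12, (string)null);
var stored = Convert.ToBase64String(pfxBytes);

// ...and load it back later. The key storage flags matter - see tips 3 and 4.
var restored = new X509Certificate2(
    Convert.FromBase64String(stored),
    (string)null,
    X509KeyStorageFlags.MachineKeySet | X509KeyStorageFlags.PersistKeySet | X509KeyStorageFlags.Exportable);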

Tip 2: Understand the certificate stores

Windows has an MMC snap-in that allows you to manage certificates. You might think that Windows has some special file on disk somewhere that this snap-in manages. In fact, the certificates live in the registry and in various places on disk, and the certificate store just provides convenient access to them.

When you run MMC.exe and go to File->Add/Remove Snap-in..., you can select the Certificates snap-in. When you click Add, you can choose three different stores to manage:

Adding the certificates snap-in

These are the equivalent of the StoreLocation enum that you pass to the X509Store constructor. Each certificate in the store lives in the registry, and the private keys associated with the certificate live on disk.

For example, if I do this:

var store = new X509Store(StoreName.My, StoreLocation.CurrentUser);
store.Open(OpenFlags.ReadWrite);
store.Add(certificate);
store.Close();

StoreLocation.CurrentUser specifies that I want the "My user account" store. StoreName.My maps to the Personal folder in recent versions of Windows. The X509 certificate (not the private key, see the discussion above) is actually added to the registry. Certificates for the current user can go to:

HKEY_CURRENT_USER\SOFTWARE\Microsoft\SystemCertificates

Or to disk, at:

C:\Users\Paul\AppData\Roaming\Microsoft\SystemCertificates\My\Certificates

While certificates for the machine (StoreLocation.LocalMachine, or the "Computer account" option) go to:

HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\SystemCertificates

What exactly is written there? A key exists for each store name (folder), and under the Certificates subkey is a key with a long, random-looking name.

Certificate in the registry

That name is actually the public thumbprint of the certificate. You can verify this by looking at the thumbprint properties from the snap-in.

The certificate thumbprint
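You can also check this from code: a quick sketch that lists the thumbprints in the current user's Personal store, which should match the key names under the registry path above.

var store = new X509Store(StoreName.My, StoreLocation.CurrentUser);
store.Open(OpenFlags.ReadOnly);
foreach (var cert in store.Certificates)
{
    // The thumbprint is the same value used as the registry key name
    Console.WriteLine(cert.Thumbprint + "  " + cert.Subject);
}
store.Close();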

The only value stored against this key is a blob containing the public portion of the X509 certificate:

The certificate blob

There's an MSDN article with more information about these paths if you need more details.

Tip 3: Understand that private keys live somewhere else

As I mentioned, while in .NET you have an X509Certificate2 object containing both a private and public key, the "certificate" is only the public part. While the certificate is stored in the paths above, the private keys are stored elsewhere. They might be stored under the Keys subkey for the store, or they might be stored on disk.

For example, if I do this:

var cert = new X509Certificate2(bytes, password, X509KeyStorageFlags.MachineKeySet | X509KeyStorageFlags.PersistKeySet | X509KeyStorageFlags.Exportable);
var store = new X509Store(StoreName.My, StoreLocation.LocalMachine);
store.Open(OpenFlags.ReadWrite);
store.Add(cert);
store.Close();

Then I'll end up with the private key stored in the registry. Since I'm specifying StoreLocation.LocalMachine, they go to:

HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\SystemCertificates\MY\Keys

However, if I did this:

var cert = new X509Certificate2(bytes, password, X509KeyStorageFlags.UserKeySet | X509KeyStorageFlags.PersistKeySet | X509KeyStorageFlags.Exportable);
var store = new X509Store(StoreName.My, StoreLocation.LocalMachine);
store.Open(OpenFlags.ReadWrite);
store.Add(cert);
store.Close();

Then I have a problem. Keep in mind that I'm adding the certificate to the same place; but I'm using the UserKeySet option instead of the MachineKeySet option. In this case, the key actually gets written to:

C:\Users\Paul\AppData\Roaming\Microsoft\SystemCertificates\My\Keys\62207B818FC553C92CC6D2C2F869603C190544FB 

Umm, that's no good. I'm importing a certificate for the whole machine to use, so the certificate goes to the registry. But the private key is being written to disk under my personal profile folder. If other users on the machine (including service accounts) don't have access to that file (which they won't by default) they'll be able to load the certificate, but not the private key.

That's not all. When the certificate is loaded, the private key is also written to a path that looks like:

C:\ProgramData\Microsoft\Crypto\RSA\MachineKeys\6cf6a27d290e81ccab98cbd34c112cb7_68b198b5-4c92-4b3e-9d30-8e2a81ccb3d7

Or when importing a user key:

C:\Users\Paul\AppData\Roaming\Microsoft\Crypto\RSA\S-1-5-21-992800734-1677258167-2839820197-1001\31c8414d419a75bb6417bc744bf81592_68b198b5-4c92-4b3e-9d30-8e2a81ccb3d7

So again, there's a chance that other accounts don't have access to this file. That leads to a common exception:

System.Security.Cryptography.CryptographicException: Keyset does not exist
 at System.Security.Cryptography.Utils.CreateProvHandle(CspParameters parameters, Boolean randomKeyContainer)
 at System.Security.Cryptography.Utils.GetKeyPairHelper(CspAlgorithmType keyType, CspParameters parameters, Boolean randomKeyContainer, Int32 dwKeySize, SafeProvHandle& safeProvHandle, SafeKeyHandle& safeKeyHandle)
 at System.Security.Cryptography.RSACryptoServiceProvider.GetKeyPair()
 at System.Security.Cryptography.X509Certificates.X509Certificate2.get_PrivateKey()

The stupid thing about this exception is that you'll know you have a private key. Certificate.HasPrivateKey returns true. You might have just loaded the certificate from a blob containing the key. It might even work under other user accounts, or when running interactively rather than as a service. But the cause is probably that the account you're running under doesn't have permission to read that key file.

Here are some examples of times I've seen this:

  • When I forgot to specify PersistKeySet for a certificate that I planned to import once and use many times. I figured the key would be imported. In reality, the file on disk just gets linked to. If the key isn't persisted, it can't be used.
  • When I created the certificate using UserKeySet and then tried to use it from another account
  • When I created the certificate using MachineKeySet, but my user account didn't have access to the default paths above. In one case, the Local System account didn't even have access. That prevented the user from being able to use the key.

The best way to diagnose these issues is to run Process Monitor (Procmon) from SysInternals and monitor the disk and registry access that happens when the key is imported and accessed.

Tip 4: Understand the key storage flags

As you might have gathered from above, getting the key storage flags right is crucial, and there's no one-size-fits-all answer.

  • X509KeyStorageFlags.Exportable - I like to always specify this because it's nice for users to be able to back up the private key
  • X509KeyStorageFlags.MachineKeySet - the key is written to a folder owned by the machine. Note that your user account may or may not have access to this location
  • X509KeyStorageFlags.UserKeySet - the key is written to a folder owned by you. This is more likely to work the first time, but other users will have trouble accessing the key. Also, beware of temporary profiles, which I'll discuss later.

The note on X509KeyStorageFlags.MachineKeySet is important. Sometimes, you can create a certificate from a blob in memory using the X509KeyStorageFlags.MachineKeySet option. But when you try to access the private key, you'll get the "keyset does not exist" error above. That's because the file couldn't be written or read, but you won't actually see an error message about this.

Tip 5: Don't load directly from a byte array

We used to do this in Octopus:

var certificate = new X509Certificate2(bytes);

It turns out that this writes a temporary file to the temp directory that on some versions of Windows doesn't get cleaned up.

That's a big problem because the file is created using GetTempFileName. Once you have more than 65,000 files in the temp directory, the process will stall as it endlessly tries to find a file name that hasn't been taken. End result: hang.

To be safe, create your own file somewhere, and make sure you delete it when done. Here's how I do it:

var file = Path.Combine(Path.GetTempPath(), "Octo-" + Guid.NewGuid());
try
{
    File.WriteAllBytes(file, bytes);
    return new X509Certificate2(file /* , ...options... */);
}
finally
{
    File.Delete(file);
}

Tip 7: Temporary profiles

Sometimes you'll get this error:

The profile for the user is a temporary profile

A user typically has a profile folder like C:\Users\Paul. When you load a key using the UserKeySet option, the key will be written underneath that profile.

But sometimes, a process might be running under an account with a profile path set to C:\Windows\Temp. Since that folder isn't really meant to be a profile folder, the Windows cryptography API will prevent you from trying to write anything.

This commonly happens when you are running under an IIS application pool and the Load User Profile option is turned off on the application pool.

However, it can also just happen sometimes, seemingly at random. Maybe there was a problem with the registry that prevented a profile directory from being created. Maybe someone got a little overzealous with group policy. I've had all kinds of bug reports about this. One option is to stop any services that run under that account (including application pools), then log in to the computer interactively as that user to force a profile to be created, then log out and restart the services.

Tip 8: Know the tools to use

There are two tools that will help you to understand what's going on with certificate issues.

The first is SysInternals Process Monitor, which will show you the file IO and registry access that's happening when you try and use your certificates. This is a good way to see where the certificates and keys are being read from and written to.

The other useful tool is a .NET sample called FindPrivateKey.exe which does what it says on the tin. We're actually going to embed some of this code into Octopus vNext to help provide better log errors when we have certificate problems.
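The core of that lookup is small enough to sketch here for machine-level keys managed by the legacy RSA CSP (user keys live under the profile path shown earlier): ask the key container for its file name, then check the ACL on that file in the MachineKeys folder.

// Sketch: locate the private key file for a certificate (RSA, machine key set)
var rsa = cert.PrivateKey as RSACryptoServiceProvider;
if (rsa != null)
{
    var keyFileName = rsa.CspKeyContainerInfo.UniqueKeyContainerName;
    var machineKeys = Path.Combine(
        Environment.GetFolderPath(Environment.SpecialFolder.CommonApplicationData),
        @"Microsoft\Crypto\RSA\MachineKeys");
    Console.WriteLine(Path.Combine(machineKeys, keyFileName));
    // If you're seeing "Keyset does not exist", check the permissions on that file.
}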

Conclusion

The cryptography capabilities in Windows were obviously designed by someone way smarter than me. But I can't help but feel like they were also designed for someone way smarter than me. There are plenty of ways that permissions, group policies, and other issues can creep in to really mess with your use of X.509 certificates in .NET. I wish I'd known of all these pitfalls when I first started using them in Octopus, and hopefully this post will be useful to you. Happy cryptography!

"We do two week sprints here," said Daniel. "Each fortnight we pick the items from the backlog that we're going to do, and we do them. At the end of the fortnight, we demo it to Michael, and do a retrospective. The project has been going for about 8 months."

"Who is Michael?" I asked.

Daniel pointed, and said: "His title is business analyst, but we made him the product owner. He writes the specs."

"Great!" I exclaimed. "And who are the end users? When do they see the product?"

"They are on the other side of the building; they're too busy to look at the product right now, but going by Harry's Gantt chart they'll start using the product in about 4 months."

I was naive then, but to me this seemed like a great example of an Agile project. Two-week sprints, a well-maintained backlog, reviews and retrospectives. Automated builds on every check-in, and plenty of unit tests. The team had been humming along for 8 months. Every build produced an MSI ready to be used. Someone had even spent days just tweaking the size of the icons used in the MSI. In four months a real user was going to use that MSI to install the software. We were ready!

If you've been around for a while, you'll recognize that this project eventually became a death march. Once real users actually saw the application, everything changed. The system performed terribly. A lot of the assumptions that had been made about how the software was meant to work were flat out wrong. The numbers came out wrong. The forms took way too long to fill in. "You should have just built it in Excel" said the users. In fact the users who ended up using the system weren't even the people we had in mind. Working long hours and weekends, we eventually delivered something of value 6 months later, not too long after the auditors arrived from HQ to find out what on earth was going on.

Feedback

What was missing from this "Agile" project was one thing: feedback. Not just any feedback, but valuable feedback from people that mattered. Delivering software successfully, like anything in life, is all about tightening the feedback loops. The sooner you learn what you are doing wrong, the sooner you can do it better.

The two-week sprints and demos to a BA gave us some feedback, but it was nowhere near as valuable as the feedback real users gave. Software that isn't in the hands of real users is inventory, and the longer it sits on the shelf, the smellier it's going to get.

With the benefit of hindsight, I would have done this project differently. We should have focused on delivering one small part of the product early, something that would be useful to a real user, and convinced them to use it. Then those two week sprints would have delivered software to someone who mattered.

That experience is why I love hearing stories like this about Octopus Deploy:

We are saving about 1.5 man days a week, not to mention that we are no longer zombies the day after the deployment. It used to be impossible to do more than one a week. Now I can easily push code with OctopusDeploy to QA and then do some rudimentary checks and then promote to Production as well as our fail over environment.

If releasing software to real users is hard, people avoid it, which means the feedback loop gets longer. By making deployments consistent and easier, people are more likely to do it. That gets working software to end users faster, which makes a real difference in the quality of the feedback. That's good for everyone.

I mentioned before that I'm rebuilding the Octopus Deploy web portal to be API-first. The goal is that all of the user functionality will be provided through the HTTP API, with the web UI using JavaScript to communicate with the API. This post is an update on how that is progressing.

Lots of resources

The Octopus API is closer to a service like Xero than it is to a service like Twitter, because we have a lot of different types of resources:

  • Environment groups (new in 2.0)
  • Environments
  • Machines
  • Machine roles
  • Project groups
  • Projects
  • Releases
  • Deployments
  • Deployment processes (new in 2.0)
  • VariableSets (new in 2.0)
  • Tenant groups (new in 2.0)
  • Tenants (new in 2.0)
  • Retention policies
  • Events (used for auditing)
  • Feeds
  • Users
  • Teams (new in 2.0)

Most of these resources support basic CRUD operations - get a single item, list them (with pagination), create them, edit them, delete them. Then they also have some custom operations like changing the sort order of a bunch of items, or logging in, or finding resources owned by another resource.

Defining the API

I started out building the API as basic Nancy modules, but quickly found myself copying and pasting and renaming since so many of the actions were the same. I also had no idea how I would document all of these operations.

My next strategy was to look at the commonality between the different API operations to try and create a set of reusable implementations. So far I've been able to reduce the API to a small DSL-like language for API definition.

(You can view a larger example of this here)

Each action type (e.g., Create<,>) returns an object which describes the API operation - mostly metadata. This description also specifies a class that implements the operation, which is used when the operation is invoked.
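The gist linked above isn't reproduced here, so to give a feel for the style, here's a purely hypothetical definition - the names and signatures are illustrative, not the actual Octopus 2.0 code:

// Hypothetical: what a convention-based definition of the environments API might look like.
public class EnvironmentsApi : ApiDefinition
{
    public EnvironmentsApi()
    {
        Add(List<EnvironmentResource>("/api/environments")
            .Description("Lists all environments, with pagination"));

        Add(Get<EnvironmentResource>("/api/environments/{id}"));

        Add(Create<EnvironmentResource, CreateEnvironment>("/api/environments")
            .Implementation<CreateEnvironmentAction>());

        Add(Custom("/api/environments/sort")
            .Description("Changes the sort order of the environments")
            .Implementation<SortEnvironmentsAction>());
    }
}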

This DSL is processed, and then a custom Nancy module acts as an "adapter", surfacing the API operations as Nancy operations.

Documentation

With the DSL in place I can now generate documentation for the API. Here is an example of what the documentation page for environments will look like. The notes and annotations on each of the operations and resource types all come from metadata that describes the different conventions I'm using to define the API.

Testing

Since the API is going to become so central to Octopus 2.0, it's also going to have a test harness. The harness is a C# console application that starts up an Octopus server, and exercises each of the API endpoints:

API test runner

The API tests make use of a C# client for the API that we're also going to provide on NuGet when Octopus 2.0 ships. That means as a .NET developer you'll be able to use our library to integrate with Octopus rather than going to the raw HTTP API.
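As a purely hypothetical illustration of what that enables - these method names are made up, not the real client API:

// Hypothetical usage of the .NET client - illustrative only
var client = new OctopusClient("https://octopus.example.com/api", apiKey: "API-XXXXXXXXXXXX");

var project = client.Projects.FindByName("Website");
var release = client.Releases.Create(project, "1.2.3");
client.Deployments.Create(release, environment: "Production");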

Conclusion

It still has a way to go, and the API probably only covers a third of what it will eventually need to, but the results are looking positive so far. On the one hand, I worry that trying to generalize everything is going to make the application much more complicated. On the other hand, eliminating code duplication is almost always a good thing.

I'd love to hear your experiences when it comes to building APIs in a simple and documentable way.

Tonight I'm having fun setting up a new build server with TeamCity. Since my build server is accessible over the internet, I wanted to use SSL. I also wanted requests using HTTP to be redirected to HTTPS.

I ran into a few problems because I was using a wildcard certificate that had been issued from a root CA, while most of the guides focus on using self-signed certificates, but I eventually got it working. Here are my notes in case they help you, or in case I need them later.

My configuration

I'm using TeamCity 7.1.5. I installed TeamCity into a non-standard location, T:\TeamCity, for reasons I won't go into in this post, so of course the paths I'm using below will be different to yours.

Exporting the certificate

The certificate I'm using is a wildcard certificate that had been issued months ago and installed into the Windows certificate store on a web server. My first step was to export the certificate:

Exporting the certificate

When prompted, the private key needs to be included. The file will be saved as a PFX file:

Export options for the PFX file

I secured the PFX file with a password, which I wrote down. Then I copied the PFX file to the server.

Converting the PFX to a keystore

TeamCity relies on Tomcat, so PFX files aren't much use. The next step is to convert the PFX file into a Java Key Store file.

Since these are Java Key Store files, we'll of course be needing a Java tool to do the conversion. But fear not, conveniently there's a Java runtime bundled inside of TeamCity. In my case it's at T:\TeamCity\jre\bin.

The command to do the conversion was:

T:\TeamCity\jre\bin\keytool.exe ^
  -importkeystore ^
  -srckeystore T:\TeamCity\conf\ssl\OctopusHQ.com.pfx ^
  -srcstoretype pkcs12 ^
  -destkeystore T:\TeamCity\conf\.keystore2

During this conversion you'll be prompted to enter and confirm a new password for the keystore file that you are creating. You'll also be prompted for the password that was used when the PFX file was exported.

While you may think you can enter two different passwords here, don't be fooled; make sure you enter the same password for both the new key store and the imported PFX file. Otherwise you'll get an error message when Tomcat tries to load the private key:

java.security.UnrecoverableKeyException: Cannot recover key

You can verify the contents of the keystore file using:

T:\TeamCity\jre\bin\keytool.exe -list -keystore T:\TeamCity\conf\.keystore2

Changing the connectors

Next, you need to tell the Tomcat server bundled with TeamCity to use SSL. Open T:\TeamCity\conf\server.xml and look for the <Connector> elements. I replaced the existing connectors with:

<Connector port="80" protocol="HTTP/1.1" 
    redirectPort="443"
    />

<Connector port="443" protocol="HTTP/1.1"
    SSLEnabled="true"
    scheme="https" secure="true"
    connectionTimeout="60000"
    redirectPort="8543"
    clientAuth="false"
    sslProtocol="TLS" 
    useBodyEncodingForURI="true"
    keystoreFile="T:\TeamCity\conf\.keystore2" keystorePass="<password used for the PFX and keystore>"
    />

The second connector is my HTTPS listener; the first is the plain HTTP listener, with redirectPort pointing at the HTTPS one. At this stage, after restarting TeamCity, I was able to browse to either http://myserver or https://myserver, but the HTTP endpoint still served the content over HTTP rather than redirecting to HTTPS.

Requiring SSL

The final step was to edit T:\TeamCity\conf\web.xml. At the bottom of this file, just before the closing </web-app> element, I added a constraint to force HTTPS:

<security-constraint>
    <web-resource-collection>
        <web-resource-name>HTTPSOnly</web-resource-name>
        <url-pattern>/*</url-pattern>
    </web-resource-collection>
    <user-data-constraint>
        <transport-guarantee>CONFIDENTIAL</transport-guarantee>
    </user-data-constraint>
</security-constraint>

Diagnostics

If you have problems with your SSL configuration, the best place to look is:

T:\TeamCity\logs\catalina.*.log

Also, don't forget to enable port 443 in Windows firewall!
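If you'd rather do that from the command line, something like this from an elevated prompt will open the port:

netsh advfirewall firewall add rule name="TeamCity HTTPS" dir=in action=allow protocol=TCP localport=443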

This took me a few hours to get working, so hopefully my notes can save you some time.