A picture of me

Welcome, my name is Paul Stovell. I live in Brisbane and work on Octopus Deploy, an automated deployment tool for .NET applications.

Prior to founding Octopus Deploy, I worked for an investment bank in London building WPF applications, and before that I worked for Readify, an Australian .NET consulting firm. I also worked on a number of open source projects and was an active user group presenter. I was a Microsoft MVP for WPF from 2006 to 2013.

In a small but important milestone in the life of my MicroISV, Octopus is finally available for purchase. The final price will be £699, but if you buy before November 1, 2011 (when Octopus 1.0 will ship, I hope!), you'll save over 25%.

When designing the licensing scheme for Octopus, I considered a few options - charging per Tentacle, charging per project, charging per user, and combinations of the above. But I like simple systems, so I settled on two editions: a free version, limited to a single project, and an 'Enterprise' version with no limitations at all. Simple.

Choosing a payment provider was also difficult. Initially I wanted to control the whole experience by integrating with a company like Braintree. Then I looked at what Ayende does with NHProf, using SWReg, which seemed much simpler. I'd be curious to hear any feedback you have on SWReg.

PS: If you'd like to learn a bit more about Octopus, check out the Hanselminutes episode!

When I started building Octopus and released the first beta, I had only a rough idea of what the product would become. The beta experience has definitely shaped it, and I'm much happier with the result, thanks to all the good feedback from the Octopus beta testers. Last week I had a couple of really useful phone calls with Octopus users, and I used that feedback to come up with the list of features that will make the final 'v1' cut of Octopus.

What does v1 actually mean?

Octopus currently has version numbers starting with 0.8, suggesting it's still in beta. Octopus 1.0 will be the version that is ready for production use (though a few companies are already running Octopus in production). The 1.0 builds will also be the first to require a license key to unlock multiple projects. In truth, though, there will still be new builds every few days after 1.0, and the product will continue to evolve incrementally.

V1 features

Before 1.0 is stamped, I'll add the following 'big' features:

  • Auto upgrade Tentacle
    Tentacles will be automatically kept up to date with the Octopus (as per this blog post).
  • Ad-hoc PowerShell execution on Octopus
    As a user, I can define a deployment step that involves executing arbitrary PowerShell scripts on the Octopus, so I can perform tasks like configuring load balancers.
  • Role based security
    Create application-managed groups of Windows groups/users, and give them project/environment permissions - e.g., only 'Release Managers' can deploy Project A to Production (suggestion)
  • Pre-defined variable substitutions
    For example, you can define VarA = Hello${VarB}, and a number of useful pre-defined variables will be available (current path, date, etc.) (suggestion). A rough sketch of this substitution appears just after this list.
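
To make that last one concrete, here's a rough sketch of how such a substitution might work. The syntax and names are illustrative only, not the final implementation:

using System.Collections.Generic;
using System.Text.RegularExpressions;

public static class VariableSubstituter
{
    // Expands ${Name} references in a value by looking them up in the variable
    // dictionary. Pre-defined variables (current path, date, etc.) would simply
    // be seeded into the same dictionary before evaluation.
    public static string Evaluate(string value, IDictionary<string, string> variables)
    {
        return Regex.Replace(value, @"\$\{(\w+)\}", match =>
        {
            string name = match.Groups[1].Value;
            string replacement;
            return variables.TryGetValue(name, out replacement)
                ? Evaluate(replacement, variables) // allow nested references
                : match.Value;                     // leave unknown variables untouched
        });
    }
}

So with VarB set to "World", evaluating Hello${VarB} would produce "HelloWorld". (A real implementation would also need to guard against circular references.)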

Those features will be built first, so they have the most time to be tested and to gather feedback.

The following small features/bug fixes are also going to be added:

  • Download log files
    When viewing the results of a deployment, instead of trying to read the results in my browser, I'd like a link to download the output as a .txt file that I can open in my preferred text editor (suggestion)
  • Show NuGet descriptions
    When viewing a release, I want to see the NuGet package description and release notes in the 'Packages' tab (suggestion).
  • Proxy server support
    Ensure Octopus can contact Tentacles and NuGet repositories via the default proxy server (suggestion)
  • Command line
    A command line Tentacle.exe, and a command line Octopus.exe, for programmatically executing a deployment (either locally or remotely) (suggestion)
  • Project clone
    Create a copy of an existing project, to reduce the effort required. Useful, for example, when setting up deployment of a branch of the same code (suggestion).

There are also some small usability features, like better rendering of times, making it obvious that deployment steps can be sorted using drag and drop, better naming of some buttons, and adding a favicon.

There's also a nice long backlog of post-v1 features, such as automatic deployments and pull-based deployments.

I estimate that this will take 6 to 8 weeks. Tomorrow I'll announce a special discount that will apply until v1 is released.

In Octopus, you use one central web portal to push NuGet packages out to many servers. On each server is a "Tentacle" - a little agent that receives NuGet package uploads, then installs and configures the packages according to conventions.

One of the annoying things about Tentacles at the moment is that as you get more than a few of them, the upgrade experience isn't very nice.

  1. First you download the new Octopus installer
  2. Remote desktop to the Octopus server and install it
  3. Now download the new Tentacle installer
  4. Remote desktop into every single staging/test/production server, and install it

Improving the upgrade process

I'd like to make this process easier, so I'm using this blog post as a way to come up with a solution. I'm going to make use of bootstrapping - that is, I'm going to use Tentacle to install Tentacle.

Currently, the Tentacle is a Windows Service EXE that works something like this:

  1. It starts
  2. It hosts a WCF service (net.tcp on port 10933) - see the minimal hosting sketch after this list
  3. Packages are uploaded to it, and it installs them
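
For context, hosting a net.tcp WCF service like that takes very little code. Here's a minimal sketch; ITentacleService and its single operation are illustrative stand-ins, not the real Tentacle contract:

using System;
using System.ServiceModel;

[ServiceContract]
public interface ITentacleService
{
    [OperationContract]
    void UploadPackage(string packageId, byte[] contents);
}

public class TentacleService : ITentacleService
{
    public void UploadPackage(string packageId, byte[] contents)
    {
        // Save the package to disk, then install it according to the conventions...
    }
}

class Program
{
    static void Main()
    {
        // Listen on net.tcp port 10933, as Tentacle does
        var host = new ServiceHost(typeof(TentacleService), new Uri("net.tcp://localhost:10933"));
        host.AddServiceEndpoint(typeof(ITentacleService), new NetTcpBinding(), "");
        host.Open();

        Console.WriteLine("Listening. Press Enter to exit.");
        Console.ReadLine();
        host.Close();
    }
}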

Bootstrapping the Tentacle

To allow Tentacle to bootstrap itself, I'm going to do two things:

  1. Turn Tentacle into a Console App instead of a Windows Service
  2. Create "Suction cup" - a bootstrapper Windows Service that launches Tentacle

New installs

When you first install Tentacle on a test/staging/production server, the MSI will include a bundled NuGet package. On start-up, Suction Cup will install that package using the standard conventions. Then it will launch Tentacle.exe as a child process (much like IIS launches a worker process for each ASP.NET application pool).

Upgrades

Suppose a few weeks later, you download and install a new version of Octopus. The Octopus will include a Tentacle NuGet package matching the Octopus version. You can then navigate to the Environments page in Octopus, and click a button to deploy the new Tentacle package.

Now, here's the interesting bootstrapping part:

  1. Tentacle (v1) will deploy Tentacle (v2) to a side-by-side folder
  2. Tentacle (v1) will shut itself down (Environment.Exit(0))
  3. Suction Cup will realize that the Tentacle child process has shut down
  4. Suction Cup will find the latest installed version of Tentacle (now v2) and launch the new child process

Notes

This has some added benefits - for example, if something catastrophic happens during a deployment and Tentacle crashes, Suction Cup will automatically restart it.

One thing I'll need to detect is Tentacle crashing repeatedly in a short period of time (e.g., 5 times in 30 seconds); in that case, Suction Cup should wait a while before trying again, so we don't fill up the event log.
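
Here's a rough sketch of how that Suction Cup loop might look, assuming side-by-side version folders. All the paths, names and thresholds here are illustrative, not the real implementation:

using System;
using System.Collections.Generic;
using System.Diagnostics;
using System.IO;
using System.Linq;
using System.Threading;

public class SuctionCup
{
    // Illustrative layout: each Tentacle version is installed side-by-side under this root
    const string VersionsRoot = @"C:\Program Files\Tentacle\Versions";

    public void Run()
    {
        var recentExits = new Queue<DateTime>();

        while (true)
        {
            // Find the highest installed version and launch it as a child process
            var latest = Directory.GetDirectories(VersionsRoot)
                .OrderByDescending(dir => new Version(Path.GetFileName(dir)))
                .First();

            var tentacle = Process.Start(Path.Combine(latest, "Tentacle.exe"));
            tentacle.WaitForExit();

            // Tentacle has exited - either it deployed a newer version of itself and
            // shut down, or it crashed. Either way, loop around and launch whatever
            // the latest version is now.

            // Crash-loop protection: if we've seen 5 exits within 30 seconds,
            // back off for a while so we don't fill up the event log
            recentExits.Enqueue(DateTime.UtcNow);
            if (recentExits.Count > 5)
                recentExits.Dequeue();

            if (recentExits.Count == 5 &&
                DateTime.UtcNow - recentExits.Peek() < TimeSpan.FromSeconds(30))
            {
                Thread.Sleep(TimeSpan.FromMinutes(5));
            }
        }
    }
}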

Finally, since Suction Cup itself isn't upgraded during this process, I'm going to go with the approach that Suction Cup just needs to be 100% perfect from day 1 :)

Since I started working on Octopus, I've been using AgileZen as a task board. The graph below gives an idea of how the work has progressed:

Cumulative flow diagram from AgileZen

There are a few nice things this graph suggests:

  • The backlog is always growing thanks to great suggestions from the beta testing group
  • The archive is also always growing, because work is actually getting completed

(The slowdown during July/August is due to my moving overseas.)

On a fixed-time, fixed-scope project, I'd ideally want to see the backlog growing much more slowly, if at all, as the project nears completion. For product development, however, I think it's actually healthy for the backlog to grow as quickly as the 'done' column.

So far I've been following this methodology, but as I prepare for Octopus version 1.0, I want to get a little more disciplined about my process. One of the things I miss as a one-man band is the "closure" that comes from working in actual sprints - the sprint planning and sprint review sessions in Scrum are a good way to book-end a few weeks of work.

To improve my process, here's what I am going to start doing:

  1. Run two-week 'sprints'
  2. On the first Monday, choose the work to do, and move it from the backlog to the 'Sprint' column
  3. Post the sprint plan on my blog
  4. Do the work
  5. On the last Sunday (since most of the Octopus work is done on weekends), move work from 'done' into 'archive'
  6. Send an email to the beta testing list with what features were completed, so people know what to expect
  7. Celebrate with cake

During the two weeks I'll also spend a good amount of time on support (I'll blog about how I use Tender for that), so each sprint probably won't go exactly as planned.

How do you approach your personal projects? What else could I do to improve my process? My first 'sprint' should start on Monday, 12th September.

I wanted to set up the infrastructure for Octopus properly. This meant that I needed:

  • A SQL Server
  • A TeamCity server
  • A web server or two (for OctopusDeploy.com)
  • A domain controller
  • A handful of servers for an Octopus test farm

Some of these servers would be public; most would be private. I've just moved overseas, so I didn't want to put another server in my living room. I started by looking at some cloud providers. Here are some really rough, back-of-the-envelope calculations (for Windows servers):

Provider        CPU      RAM        Per hour   Per month (750 hours)
Amazon EC2      1 core   1.7 GB     $0.12      $90
Azure Compute   1 core   1.75 GB    $0.12      $90

Discounts are available if you buy spot instances or reserve hours, but from what I can gather, I'd be looking at $400-$500/month to have half a dozen 'small' servers on the cloud.

For my needs, it turned out to be cheaper to rent a high-spec dedicated server and run my own mini-cloud. LeaseWeb came to the rescue: for the last three months I've been renting a quad-core, 24 GB RAM server for €190 a month (about $270 today). With 24 GB I can run 10 'small' Azure-sized instances - about $900 worth of 'cloud'. What I do miss out on is rapid provisioning: if I needed a second server like this, it could take over a week to provision.

If Octopus were a cloud service, I'd definitely look at offloading this to a real cloud provider. But for running TeamCity and a handful of test VMs, putting a server in a data centre still appears to be more cost-effective. Is that your experience?

New builds of Octopus are available immediately from my CI server, and can be downloaded from the Octopus downloads page. Every couple of weeks, though, I'll publish a short list of what's going on.

The last couple of weeks saw a lot of new features added.

  • A dashboard, to give you a high-level overview of which projects are deployed to which environments.
  • Edit release notes for releases that have already been created.
  • Scope variables on a per-package basis, in addition to per-environment and per-machine.
  • A more streamlined release user experience.

A number of bug fixes and tiny enhancements are also included, from a quick list of projects that appears when you hover over the Projects tab, to correctly uninstalling old NuGet packages.

Below is a screenshot of the new dashboard - you can see larger versions of this screenshot and more on the Features page of the new Octopus website.

The Octopus dashboard

Decades from now, I want the IT industry to be known not just for its innovation and creativity, but for its ability to deliver software projects reliably. There's a lot of work to do, and I think the Agile principles of frequent, high-quality collaboration and embracing change are playing a big part in getting us there. Along with the human-to-human improvements, we're also adopting engineering practices to boost our chances of success - source control, continuous integration, and unit testing, for example, make an enormous difference.

Delivering even a small software project is a huge undertaking. There's no limit to the practices we'd like to adopt, but since time is always critical, we have to make trade-offs. In my experience, there's one practice that always falls into the too-hard basket: automated deployment.

The state of the art in deployment on the .NET stack leaves a lot to be desired. At companies big and small, I see programmers routinely remoting into production machines to patch configuration files, or writing long, half-complete deployment documents because automating the deployment was too time-consuming. After months of repeated test deployments, production deployments still fail and are rushed because of bad automation or incomplete documentation. It's hampering our ability to deliver and to give our customers confidence. We can do better.

I believe wholeheartedly that automated, frequent, repeatable deployment is one of the most important practices we can follow. There's no compelling, standardised, complete solution out there, and I don't know if Octopus will ever become one either. But I know that we as an industry would be much better off if we could nail this problem, and I'm going to try my best to do something about it.

.NET 4.0 includes a few new classes called Tasks, which are part of the Task Parallel Library (TPL). You can learn all about them in an article by my friend Sacha on Code Project.

The TPL is useful, but I'm starting to see a lot of coders reaching for the Task class by default. I may be an old fuddy-duddy, but I can't quite see what advantage Task gives me over the plain old ThreadPool from .NET 2.0. Here are some examples of how Task's "features" would be implemented using ThreadPool.

Starting a task

ThreadPool.QueueUserWorkItem(delegate
{
    DoSomeWork();
});

Waiting for a task to complete

var handle = new ManualResetEvent(false);
ThreadPool.QueueUserWorkItem(delegate
{
    DoSomeWork();
    handle.Set();
});
handle.WaitOne();

Returning a result

int result = 0;
var handle = new ManualResetEvent(false);

ThreadPool.QueueUserWorkItem(delegate
{
    DoSomeWork();

    result = 42;

    handle.Set();
});
handle.WaitOne();

Console.WriteLine(result);

Chaining tasks

var handle = new AutoResetEvent(false);
ThreadPool.QueueUserWorkItem(delegate
{
    DoSomeWork();
    handle.Set();
});
handle.WaitOne();

ThreadPool.QueueUserWorkItem(delegate
{
    DoSomeMoreWork();
    handle.Set();
});
handle.WaitOne();

Waiting for multiple tasks to complete

var handle1 = new ManualResetEvent(false);
var handle2 = new ManualResetEvent(false);

ThreadPool.QueueUserWorkItem(delegate
{
    DoSomeWork();
    handle1.Set();
});

ThreadPool.QueueUserWorkItem(delegate
{
    DoSomeMoreWork();
    handle2.Set();
});

WaitHandle.WaitAll(new WaitHandle[] { handle1, handle2 });

Exception handling

var handle = new ManualResetEvent(false);

Exception error = null;
ThreadPool.QueueUserWorkItem(delegate
{
    try
    {
        DoSomeWork();
    }
    catch (Exception ex)
    {
        error = ex;
    }
    finally
    {
        handle.Set();
    }
});

handle.WaitOne();

if (error != null)
    Console.WriteLine("Error! " + error);

Word of caution: you should probably always do this when using ThreadPool, since an exception thrown on a ThreadPool thread will tear down the process if it isn't caught.

Cancellation

var cancel = false;

ThreadPool.QueueUserWorkItem(delegate
{
    while (!cancel)
    {
        Thread.Sleep(100);
    }
});

cancel = true;

Note: this is like a gazillion times more complex in Tasks

Dispatching to UI thread

var dispatcher = Application.Current.Dispatcher;

ThreadPool.QueueUserWorkItem(delegate
{
    DoSomeWork();

    dispatcher.BeginInvoke(new Action(() => progressBar.Value += 10));

    DoSomeMoreWork();

    dispatcher.BeginInvoke(new Action(() => progressBar.Value += 10));
});

Being notified when a task is complete without blocking

Action<int> done = (int x) => Console.WriteLine("Done! " + x);

ThreadPool.QueueUserWorkItem(delegate
{
    DoSomeWork();

    done(42);
});

So help me learn the Task API - how would using Task make the examples above look better?
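
From what I can tell (and corrections are very welcome), the 'returning' and 'exception handling' examples would collapse into something like this using Task.Factory.StartNew from System.Threading.Tasks - a sketch of my understanding, not gospel:

var task = Task.Factory.StartNew(() =>
{
    DoSomeWork();
    return 42;
});

try
{
    // Result blocks until the task completes
    Console.WriteLine(task.Result);
}
catch (AggregateException ex)
{
    // Exceptions thrown inside the task surface here, wrapped in an AggregateException
    Console.WriteLine("Error! " + ex.InnerException);
}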

I originally posted this as an answer to a StackOverflow question.

The original poster wanted to determine the model type used by a Razor view, so that the controller could fetch the right model. I thought it was a fun problem, so I decided to have a go.

Models

First, the models. I decided to create two 'widgets', one for news, and one for a clock.

public class NewsModel
{
    public string[] Headlines { get; set; }

    public NewsModel(params string[] headlines)
    {
        Headlines = headlines;
    }
}

public class ClockModel
{
    public DateTime Now { get; set; }

    public ClockModel(DateTime now)
    {
        Now = now;
    }
}

Controller

My controller doesn't know anything about the views. What it does is return a single model - but that model has the ability to dynamically fetch the right model as required by the view.

public ActionResult Show(string widgetName)
{
    var selector = new ModelSelector();
    selector.WhenRendering<ClockModel>(() => new ClockModel(DateTime.Now));
    selector.WhenRendering<NewsModel>(() => new NewsModel("Headline 1", "Headline 2", "Headline 3"));
    return PartialView(widgetName, selector);
}

Delegates are used so that the correct model is only created/fetched if it is actually used.

ModelSelector

The ModelSelector that the controller uses is pretty simple - it just keeps a bag of delegates to create each model type:

public class ModelSelector
{
    private readonly Dictionary<Type, Func<object>> modelLookup = new Dictionary<Type, Func<object>>();

    public void WhenRendering<T>(Func<object> getter)
    {
        modelLookup.Add(typeof(T), getter);
    }

    public object GetModel(Type modelType)
    {
        if (!modelLookup.ContainsKey(modelType))
        {
            throw new KeyNotFoundException(string.Format("A provider for the model type '{0}' was not provided", modelType.FullName));
        }

        return modelLookup[modelType]();
    }
}

The Views - Simple solution

Now, the easiest way to implement a view would be:

@model MvcApplication2.ModelSelector
@using MvcApplication2.Models
@{
    var clock = (ClockModel) Model.GetModel(typeof (ClockModel));
}

<h2>The time is: @clock.Now</h2>

You could stop here and use this approach.

The Views - Better solution

That's pretty ugly. I wanted my views to look like this:

@model MvcApplication2.Models.ClockModel
<h2>Clock</h2>
@Model.Now

And

@model MvcApplication2.Models.NewsModel
<h2>News Widget</h2>
@foreach (var headline in Model.Headlines)
{
    <h3>@headline</h3>
}

To make this work, I had to create a custom view engine.

Custom view engine

When a Razor view is compiled, it inherits WebViewPage<T>, where T is the @model type. So we can use reflection to figure out which model type the view wants, and select it.

public class ModelSelectorEnabledRazorViewEngine : RazorViewEngine
{
    protected override IView CreateView(ControllerContext controllerContext, string viewPath, string masterPath)
    {
        var result = base.CreateView(controllerContext, viewPath, masterPath);

        if (result == null)
            return null;

        return new CustomRazorView((RazorView) result);
    }

    protected override IView CreatePartialView(ControllerContext controllerContext, string partialPath)
    {
        var result = base.CreatePartialView(controllerContext, partialPath);

        if (result == null)
            return null;

        return new CustomRazorView((RazorView)result);
    }

    public class CustomRazorView : IView
    {
        private readonly RazorView view;

        public CustomRazorView(RazorView view)
        {
            this.view = view;
        }

        public void Render(ViewContext viewContext, TextWriter writer)
        {
            var modelSelector = viewContext.ViewData.Model as ModelSelector;
            if (modelSelector == null)
            {
                // This is not a widget, so fall back to stock-standard MVC/Razor rendering
                view.Render(viewContext, writer);
                return;
            }

            // We need to work out what @model is on the view, so that we can pass the correct model to it. 
            // We can do this by using reflection over the compiled views, since compiled Razor views 
            // inherit WebViewPage<T>, where T is the @model value. 
            var compiledViewType = BuildManager.GetCompiledType(view.ViewPath);
            var baseType = compiledViewType.BaseType;
            if (baseType == null || !baseType.IsGenericType)
            {
                throw new Exception(string.Format("When the view '{0}' was compiled, the resulting type was '{1}', with base type '{2}'. I expected a base type with a single generic argument; I don't know how to handle this type.", view.ViewPath, compiledViewType, baseType));
            }

            // This will be the value of @model
            var modelType = baseType.GetGenericArguments()[0];
            if (modelType == typeof(object))
            {
                // When no @model is set, the result is a ViewPage<object>
                throw new Exception(string.Format("The view '{0}' needs to include the @model directive to specify the model type. Did you forget to include an @model line?", view.ViewPath));                    
            }

            var model = modelSelector.GetModel(modelType);

            // Switch the current model from the ModelSelector to the value of @model
            viewContext.ViewData.Model = model;

            view.Render(viewContext, writer);
        }
    }
}

The view engine is registered by putting this in Global.asax.cs:

ViewEngines.Engines.Clear();
ViewEngines.Engines.Add(new ModelSelectorEnabledRazorViewEngine());

Rendering

My home view includes the following lines to test it all out:

@Html.Action("Show", "Widget", new { widgetName = "Clock" })
@Html.Action("Show", "Widget", new { widgetName = "News" })

On a recent project I had a marker interface like this:

public interface IMessage
{
}

I wanted the ability to attach arbitrary headers to a message - for example:

message.SetHeader("Expires", "2011-09-03 15:30:00");
message.SetHeader("CorrelationId", 45);

I didn't want SetHeader to be a method on the interface, so I created an extension method. But how can we store the arbitrary header values against the object instance?

My solution was to use ConditionalWeakTable. This allows properties to be stored against objects without preventing the object from being garbage collected.

The implementation looked something like this:

// HeaderCollection is just a named dictionary of header name/value pairs
public class HeaderCollection : Dictionary<string, string>
{
}

public static class MessageExtensions
{
    private static readonly ConditionalWeakTable<IMessage, HeaderCollection> headerMap = new ConditionalWeakTable<IMessage, HeaderCollection>();

    public static void SetHeader(this IMessage message, string header, string value)
    {
        var headers = GetHeaders(message);
        headers[header] = value;
    }

    public static string GetHeader(this IMessage message, string header)
    {
        var headers = message.GetHeaders();
        string value;
        headers.TryGetValue(header, out value);
        return value;
    }

    public static HeaderCollection GetHeaders(this IMessage message)
    {
        return headerMap.GetValue(message, x => new HeaderCollection());
    }
} 

I put the extension methods in the global namespace, so that they are available on any IMessage without the coder needing to include the namespace.

Here is a unit test demonstrating that garbage collection still works:

[TestMethod]
public void HeadersDoNotPreventGarbageCollection()
{
    WeakReference reference = null;

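    // Create and use the message inside a delegate, so that no local variable
    // keeps it reachable when we force a collection below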
    new Action(() =>
    {
        var message1 = new MessageA();
        message1.SetHeader("Test", "Foo");

        reference = new WeakReference(message1, true);
        Assert.IsNotNull(reference.Target);
    })();

    GC.Collect();
    GC.WaitForPendingFinalizers();

    Assert.IsNull(reference.Target);
}