
Welcome, my name is Paul Stovell. I live in Brisbane and work on Octopus Deploy, an automated deployment tool for .NET applications.

Prior to founding Octopus Deploy, I worked for an investment bank in London building WPF applications, and before that I worked for Readify, an Australian .NET consulting firm. I also worked on a number of open source projects and was an active user group presenter. I was a Microsoft MVP for WPF from 2006 to 2013.

I've been working with ASP.NET MVC on a few different projects now, and yet I've never been happy with my controllers. My view models were normally pretty simple, but my controllers, especially when saving, were starting to feel like spaghetti.

Let me give you a typical example. Below is a controller for editing a user with first name, last name and country. You can see the full Gist for the details, but the controller looked like this:

public ActionResult Edit(string id)
{
    var user = string.IsNullOrEmpty(id) ? new User() : session.Find<User>(id);
    var countries = session.FindAll<Country>();

    var model = EditUserModel.CreateFrom(user, countries.Select(c => CountryModel.CreateFrom(c)).ToList());
    return View(model);
}

[HttpPost]
public ActionResult Edit(EditUserModel model)
{
    if (!ModelState.IsValid)
    {
        // Since the posted data won't contain the country list, we have to re-fill the model. This code feels hacky
        var countries = session.FindAll<Country>();
        model.Countries = countries.Select(c => CountryModel.CreateFrom(c)).ToList();
        return View(model);        
    }

    var user = string.IsNullOrEmpty(model.Id) ? new User() : session.Find<User>(model.Id);
    session.Store(user);

    user.FirstName = model.FirstName;
    user.LastName = model.LastName;
    // Messy handling for referenced entity
    user.Country = session.Find<Country>(model.CountryId);

    session.SaveChanges();
    return RedirectToAction("Index");
}

Yuck! So tonight, I started looking for ways to clean it up. While researching, I came across a suggestion by Chuck Norris (yes, THE Chuck Norris!) that involved using an intermediary object. I posted a Gist with the pattern applied to my first scenario. Now, my controller was much simpler:

class UserController : Controller
{
    readonly ISession session;
    readonly IModelBuilder<EditUserModel, User> builder;

    public UserController(ISession session)
    {
        this.session = session;
        builder = new EditUserModelBuilder(session);
    }

    public ActionResult Edit(string id)
    {
        var user = session.Find<User>(id) ?? new User();
        return View(builder.CreateFrom(user));
    }

    [HttpPost]
    public ActionResult Edit(EditUserModel model)
    {
        if (!ModelState.IsValid)
        {
            return View(builder.Rebuild(model));
        }

        builder.ApplyChanges(model);
        session.SaveChanges();
        return RedirectToAction("Index");
    }
}

This was achieved by outsourcing the shaping and saving to a 'builder' class. Here is how the builder looked:

class EditUserModelBuilder : IModelBuilder<EditUserModel, User>
{
    readonly ISession session;

    public EditUserModelBuilder(ISession session)
    {
        this.session = session;
    }

    public EditUserModel CreateFrom(User user)
    {
        var model = new EditUserModel();
        model.Id = user.Id;
        model.FirstName = user.FirstName;
        model.LastName = user.LastName;
        model.CountryId = user.Country == null ? null : user.Country.Id;
        model.Countries = GetCountries();
        return model;
    }

    public EditUserModel Rebuild(EditUserModel model)
    {
        model.Countries = GetCountries();
        return model;
    }

    public User ApplyChanges(EditUserModel model)
    {
        var user = string.IsNullOrEmpty(model.Id) ? new User() : session.Find<User>(model.Id); 
        session.Store(user);

        user.FirstName = model.FirstName;
        user.LastName = model.LastName;
        user.Country = session.Find<Country>(model.CountryId);
        return user;
    }

    ICollection<SelectListItem> GetCountries()
    {
        var countries = session.FindAll<Country>();
        return countries.Select(c => new SelectListItem { Value = c.Id, Text = c.Name }).ToList();
    }
}
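
For completeness, the IModelBuilder interface itself lives in the Gist; inferred from the usage above, it would look something like this:

interface IModelBuilder<TModel, TEntity>
{
    // Build a view model from an existing entity (used for the GET action)
    TModel CreateFrom(TEntity entity);

    // Re-populate view-only data, such as select lists, after a failed POST
    TModel Rebuild(TModel model);

    // Map the posted model back onto the entity and return it
    TEntity ApplyChanges(TModel model);
}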

After I posted that Gist, Jay McGuinness pointed out that at his company, they use a slightly different technique: they separate the "read" part of the builder from the "write" part by introducing the command pattern. I posted an example Gist, again applied to my first scenario. The command looks like this:

class UserEditCommand : IModelCommand<UserEditModel>
{
    readonly ISession session;

    public UserEditCommand(ISession session)
    {
        this.session = session;
    }

    public void Execute(UserEditModel model)
    {
        var user = string.IsNullOrEmpty(model.Id) ? new User() : session.Find<User>(model.Id); 
        session.Store(user);

        user.FirstName = model.FirstName;
        user.LastName = model.LastName;
        user.Country = session.Find<Country>(model.CountryId);

        session.SaveChanges();

        // Auditing and other interesting things can happen here
    }
}
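
Again, the IModelCommand interface comes from the Gist; inferred from the usage, it is about as simple as an interface can get:

interface IModelCommand<TModel>
{
    // Apply the changes described by the posted model
    void Execute(TModel model);
}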

Note that the command and the builder take two different model types. This is achieved by splitting the view model in two and using inheritance:

class UserEditModel
{
    public string Id { get; set; }
    public string FirstName { get; set; }
    public string LastName { get; set; }
    public string CountryId { get; set; }
}

class UserViewModel : UserEditModel
{
    public ICollection<SelectListItem> Countries { get; set; }
}

What's great about this is that the view model can add additional view-specific properties, while the command only depends on the basic information needed to save details.

I like this approach for a few reasons:

  1. The controller is more readable
  2. The builder and command can both take dependencies - e.g., the UrlHelper or repositories - without filling up the controller
  3. The commands become a great place to put auditing and permission checking code

Thanks to Jay and Chuck for these suggestions, I certainly feel a lot better about my MVC controllers going forward.

The full Gist of the final solution is below:

This page has been translated into Spanish by Maria Ramos from Webhostinghub.com/support/edu.

Bob Yexley shared a positive experience getting started with Octopus:

Every .NET developer I know has war stories about past horrors of deployments gone wrong. My theory is that’s because there’s not a good, universally accepted way to handle deployments of .NET apps on windows. Octopus steps in and offers a solution that, in my recent experience, dramatically simplifies the .NET deployment problem with a really elegant solution.

Read more on Bob's blog.

Sidenote: I like that his blog is powered by Octopress. Octopuses all around!

I just pushed the first version of Octo.exe to GitHub. Octo.exe is a command-line tool for triggering Octopus releases and deployments, with potentially more uses to come.

For example, Octo.exe allows you to do:

octo create-release --server=http://your-octopus/api --project=HelloWorld --deployto=Production

A common scenario where this is useful is triggering a deployment from your daily build server. The process would work something like this:

  1. Compile and test your code
  2. Publish the package to a NuGet repository
  3. Use Octo.exe to create a release and deploy it to a "Demo" or "Test" environment

Octo.exe is pretty basic at the moment, because it was only designed to perform the above scenario. However, it is also a good demonstration of how the new Octopus HTTP API works. As time goes on it will be extended to include more useful features. If you have suggestions, consider adding them to the issues tab or forking the code yourself!

Happy new year!

For the last month or so I've been working on a new release of Octopus, which is finally out! This release includes a few big features as well as a lot of small bug fixes and nice additions. You can read more about what changed in the release notes, but there are a few important changes that I want to call out in this post.

Docs, docs, docs

I've spent a lot of time on documentation this release. Instead of Tender, Octopus is now using Confluence for product documentation, and the new documentation includes lots of useful pages.

Package creation

NuGet packages used for Octopus don't follow all of the standard NuGet conventions, which has always made them somewhat tricky to create. While speaking with Maarten Balliauw about his upcoming book on NuGet, we bounced some ideas around, and came up with a solution. OctoPack is a tool that makes creating Octopus-flavored NuGet packages from your desktop or build server easy. Give it a try!

Edit: the source for OctoPack is available on GitHub.

API support

My goal this release was to make it possible for other tools to automate Octopus - the primary scenario being:

A CI build just finished; deploy the latest packages to my Test environment!

To that end, most actions that can be done in the Octopus UI can also now be done through a RESTful HTTP API, which Octopus itself uses - Octopus now follows an API-first design.

This week I'll publish a command line tool for triggering deployments, and a class library for building your own tools, to GitHub. Although Octopus itself is still a commercial product, I'm trying to find ways to make many of its "parts" open source, and this is one of them.

Edit: the tool is now published!

Deployment workflow changes

Build servers tend to be pretty sequential:

  1. Get the code
  2. Compile the code
  3. Test the code
  4. Publish artefacts from the code

Release management tools, on the other hand, have a lot of room for parallelism. Let's take this example:

Bob has an environment with five machines - one SQL box and four web boxes. The project he is deploying contains two packages: a database package, which has to go out first to one machine, followed by an ASP.NET package for his website.

Octopus has to work through these high-level steps:

  1. Download the packages from his NuGet server
  2. Securely upload the packages to the applicable machines
  3. Extract and configure the DB package
  4. Extract and configure the Web package

Those steps are also sequential, and for a good reason: package uploading often takes a long time. It wouldn't be good to upload the DB package and extract it, only to find that the web servers are offline and the web package can't be uploaded to them. Since uploading is the slowest part of a deployment, we do it all up front (thanks to DamianM for this idea).

However, within each step there is certainly room for parallelism. We don't need to wait for the upload to machine A to finish before we upload to machine B. We don't have to wait for the web package install on B to finish before we start installing on C.

In addition, there's also no reason why Jill has to wait for Bob to finish deploying project A to Test before she deploys project B to Production.

So from now on, most tasks in Octopus will run in parallel where possible.
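
To illustrate the idea (a sketch only, not the actual Octopus code - UploadPackage, Package and Machine are made-up names here), the upload step can fan out across all machines and only move on once every upload has finished:

// Sketch only: upload to all machines in parallel instead of one at a time,
// and block until every upload has completed before the next step begins.
void UploadToAllMachines(Package package, IEnumerable<Machine> machines)
{
    Parallel.ForEach(machines, machine => UploadPackage(package, machine));
}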

One of the trickiest things about this was actually coming up with a way to capture and display the logs correctly. You can see an example of what the result looks like in this gist. As the deployment runs, a hierarchy of logs appears, all updating at the same time - it's quite cool to watch!

Retrospective

Getting this release out took longer than I planned, which came down to a mix of features taking longer than expected, infrastructure issues, and various holidays. In the middle of the release I migrated everything to Amazon, and then migrated back to LeaseWeb. In future I'm going to try a more disciplined branch-oriented model so that I can make a new release every couple of weeks and work towards Octopus 1.0.

If you'd like to know what I'm working on next, visit the Octopus Trello board. I'm also hanging out on Jabbr a lot lately, and there is an Octopus room if you have any questions.

Happy deployments!

This week I have been watching some very enlightening introductory lectures on philosophy. I didn't know much about philosophy, so I found that by contrasting some of the philosophical theories, I gained an insight not just into the meaning of different belief systems, but the reasoning that underpins them. Highly recommended.

While posting to a discussion thread about my current technology stack, I thought about how philosophical theories could help to explain the rationale behind how different people make technology choices. Thus, I present to you: The Philosophy behind Technology Choices.

1. Utilitarian (Webformsianism)

A utilitarian believes that when asked to make a technology choice, the right tool is the one that maximises the result. They are much less concerned with the intrinsic qualities of the technology, and more about the results the technology can bring them. Technology is a means to an end.

A utilitarian will choose whatever technology they think will maximise their result, and have no qualms choosing "lesser" technologies like Web Forms or Silverlight. They aren't afraid of drag-and-drop tools, and they rarely stop to refactor code. If it ain't broke, don't fix it.

You can spot a utilitarian by their use of copious amounts of third-party UI control libraries and messy code organisation.

2. Paternalist (Redmondism)

Paternalists believe in the power of big vendors to provide. The reasoning behind their choices isn't based on subjective evaluation of the intrinsic qualities of each technology, but rather where it came from. While they may still need to make a choice from vendor solutions, their scope of possible options will be limited.

If asked to choose, for example, an ORM, a paternalist will see their options as being limited to Entity Framework, DataSets or the Enterprise Library Data Access Application Block. The paternalist would prefer to use poor-quality, slow, untestable technologies while they wait for the Vendor to provide something better in the next framework release. They may end up with the same conclusion as the other philosophies, but for different reasons.

You can spot a paternalist by examining their NuGet packages folder, which will only contain Microsoft-produced packages.

3. Libertarian (Githubianism)

The opposite of paternalists, libertarians reject the idea of heavy influence from the Vendor. They believe that the Vendor exists to serve them, rather than the other way around. They think developers should be self-sufficient and should be trusted to make good choices without needing to be coerced by the vendor. They see dependence on the vendor as dangerous.

To a libertarian, just as important as the technology choice is the freedom to choose in the first place. When no choice exists, they are happy to write their own. A libertarian would prefer to find open-source solutions to their problems before becoming dependent on a vendor.

You know you are looking at a libertarian because the only Microsoft-produced references in their project are for the .NET framework (and only because they haven't had time to test on Mono).

4. Kantian (Unclebobism)

Followers of Kantianism are the opposite of utilitarians. They believe that a technology choice is right because of the nature of the technology itself, rather than the results it achieves. That is, they believe that some choices are inherently right, and some are inherently wrong, and it doesn't always "depend". While the utilitarian sees technology as a means to an end, Kantianism followers see technology as both an end and a means.

Kantianism followers have a well-defined system for evaluating technology: the SOLID principles. They will choose ASP.NET MVC over Web Forms not because MVC is more productive (a utilitarian reason), nor because it is being hyped more by the vendor (the paternalist's reason), but because MVC is much closer to the SOLID principles, and is therefore right (the libertarian, by contrast, would choose an open source alternative).

Summary

When you look at the technology stack behind a project, it's important to understand that there are many possible motivations behind the technology choices. A choice might be justified by its expected results alone, by preference for a vendor, by a preference for retaining control and mistrust of vendors, or by inherent value that we judge to be in the technology regardless of the results. More often than not, any choice is going to be influenced by more than one of these factors.

Which of these theories had a role in your current technology stack? What does it look like? Did we miss any important motivators?

When I'm working on a client project, especially one that's been under development for a while, I often find myself wondering what I would do differently given the chance to start over. Octopus, as my own product, is no exception. They say hindsight is 20/20, so it's often interesting to think about how to apply some of the lessons learnt.

Things that didn't go as expected

The IIS/ASP.NET/WCF/SQL stack which Octopus relies on is great for building Enterprise applications. I have plenty of experience with that stack, which is mostly why I went with it in designing Octopus.

I learnt that an enterprise stack isn't necessarily great for building ISV shrinkwrap products like Octopus. The following decisions have been beneficial, but have also come with downsides:

  1. Depending on IIS
    • Getting the configuration right is very tough - people need to make sure that static content serving is enabled, ASP.NET 4.0 is registered, and the extension isn't blocked
    • Permissions: by default IIS AppPools run under the context of a machine-specific AppPool user. If using a remote SQL database, users get all kinds of login problems.
    • Since IIS websites get recycled, I can't use it to run any long-running or scheduled tasks. So I had to create an additional Windows Service, which is just an extra thing to maintain. Sharing configuration between the two is also difficult.
  2. Using SQL server
    • A bundled database like RavenDB or SQLite would probably have been a better choice from a deployment point of view, and a document database would probably suit my application model better. Also, not having to rely on remote Windows authentication would probably avoid some of the IIS issues above.
  3. Using WCF and certificates
    • The Windows Certificate Store caused so many permission-related issues that I eventually gave up, and instead Base64-encoded the certificates and stored them in the registry (see the sketch after this list)
  4. Hosting PowerShell
    • Writing a custom PowerShell host that works is easy. Writing one that really, really works is very hard
    • Not to mention x86/x64 issues and the availability of modules/snapins on each (IIS7/7.5 is painful for this)
  5. NuGet
    • Overall this has been beneficial, though it had teething issues (this caused many problems)
    • CI solutions, especially TFS, are still not very good at bundling an application into a NuGet package
    • Some of the normal NuGet conventions don't make sense in the context of Octopus, but that's hard to educate people on
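
As a rough illustration of the certificate approach in point 3 - not the exact Octopus code, and the registry path and value name below are made up - exporting a certificate to a Base64 string and stashing it in the registry looks something like this:

using System;
using Microsoft.Win32;
using System.Security.Cryptography.X509Certificates;

static void StoreCertificate(X509Certificate2 certificate, string password)
{
    // Export the certificate (including its private key) as a PFX, then Base64-encode it
    var exported = certificate.Export(X509ContentType.Pfx, password);
    var encoded = Convert.ToBase64String(exported);

    // Hypothetical registry location - purely for illustration
    Registry.SetValue(@"HKEY_LOCAL_MACHINE\SOFTWARE\MyApp", "Certificate", encoded);
}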

Lesson 1: If using IIS, have a really sophisticated installer

On the .NET platform, if you want a Windows Service that has a web UI, you have three choices:

  1. Use IIS:
    • Automated configuration is messy
    • Requires users to install it with all the right options selected
    • Can't do long-running tasks reliably; you'll need a Windows Service too, and find a way to share configuration
    • Permissions are going to be a problem
  2. Use HttpListener (aka HTTP.SYS)
    • Also hard to configure due to permissions
    • Miss out on a lot of IIS goodies like SSL configuration
    • ASP.NET makes a lot of assumptions that you have to cater for if hosting it in your own process
  3. Use raw sockets (bypass HTTP.SYS)
    • You'll never be able to get port 80 because HTTP.SYS has probably already taken it
    • Same issues as above

IIS seems like the only reasonable solution. But my experiences supporting a product that depends on IIS have taught me that it can be darn hard to rely on.

If I'm going to stick with IIS, I'll need a much smarter installer. My installer needs to be able to:

  • Verify that IIS is installed (a sketch of this check follows the list)
  • Verify that the right options are installed
  • Verify that ASP.NET 4.0 is installed, registered and enabled
  • Create the site and AppPool
  • Configure the security credentials of the AppPool to run under an account that has permissions to contact whatever SQL database the user selects
  • Verify that said user has permissions to run as an AppPool in IIS
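
To give a flavour of the first check - a sketch only, and a real installer would need to be far more thorough - verifying that IIS is installed can be as simple as reading the InetStp registry key:

using Microsoft.Win32;

static bool IsIisInstalled()
{
    // IIS writes its version information under this key when it is installed
    var majorVersion = Registry.GetValue(
        @"HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\InetStp", "MajorVersion", null);

    return majorVersion != null && (int)majorVersion >= 7;
}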

On the one hand, it seems a shame to put so much effort into creating a clever installer when I could be focussed on the product itself. On the other hand, the solution of simply having pages and pages of documentation and a FAQ seems like a cop-out, and prevents people from easily getting started with the product.

Lesson 2: Don't use SQL Server

A lot of my IIS issues would, strangely enough, probably be resolved by not using SQL Server. Switching from SQL Server to an embedded, local database means I wouldn't have to deal with Windows Auth issues when using a remote SQL server. That means there's no need to run under a custom user account most of the time, which simplifies things a lot. That's something I'm probably going to start looking at seriously over the next few weeks.

Lesson 3: Have a good test environment

I can't begin to count how many issues have been the result of differences between x86/x64 and Server 2003/2008/2008 R2/2008 R2 SP1. I always knew I'd eventually need a test rig with lots of OS versions, but I hoped most things would "just work" and that could wait until later. It turns out that building a server product that runs on more than one version of Windows Server is hard work!

Conclusion

The great thing about all of this is that Octopus is bearing the pain so that others don't have to. Octopus makes it darn easy to take the ASP.NET project you built on your internal CI server, securely upload it to a locked-down server deep in a colocation facility, extract it and execute your PowerShell scripts to configure it. And that makes me happy!

The Octopus beta continues, and this weekend I released version 0.9, which is now available for download.

The big new feature in this release is automatic updates for the Tentacles. I wrote about the design for this feature previously, but in a nutshell, the goal is to make it easy to install new versions of Octopus without having to remote desktop onto dozens of servers to run the Tentacle installer. In a nice example of bootstrapping, Tentacles are upgraded using the same NuGet conventions as the applications that Octopus deploys.

Let's go through how this works.

First, you download the Octopus MSI, and install it manually on the main Octopus server.

Second, Octopus will periodically (every 5 minutes) check the health of each of the Tentacles you have configured. You can also trigger these health checks by manually clicking the Check Health button. If the Tentacles are running older versions than Octopus, you'll see something like this:

[Screenshot: some servers that are out of date]

A button will also appear to upgrade them:

[Screenshot: perform an upgrade]

Clicking that schedules a task that deploys the latest version to all machines. Here's an example of the output:

[Screenshot: output of upgrading the Tentacles]

Once upgraded, you'll see the health and version numbers on the Environments page:

[Screenshot: upgraded]

This feature should make it much easier to manage many machines while keeping up to date with new Octopus features.

Octopus uses log4net, my favourite logging library. It also performs a lot of background tasks - for example, deploying applications.

I wanted the output from log4net to be written to the event log (in the case of warnings/errors), as well as to the UI. However, I didn't want to end up with this:

log.Error(ex);
job.Output.Append(ex);

One of my tricks was to set up a "log tapper" - much like a wire tap. This allows me to "listen in" on the log4net chatter within a specific context. For example:

var output = new StringBuilder();

log.Info("Starting the job");

using (LogTapper.CaptureTo(output))
{
    log.Info("Doing the work...");

    someObject.DoSomethingThatAlsoHappensToLog();
}

log.Info("Finished the job");

The LogTapper sets up a scope - within the current thread, as long as the using block is open, any log messages written by log4net will be appended to the StringBuilder in addition to their normal destinations.

Doing this means I can present the log4net output of a job in the UI, without it being mixed up with the output from lots of other jobs.

The LogTapper implementation is quite simple:

public class LogTapper : IDisposable
{
    LogTapper(StringBuilder builder)
    {
        // Stash the target StringBuilder in log4net's per-thread property bag
        ThreadContext.Properties["LogOutputTo"] = builder;
    }

    public static IDisposable CaptureTo(StringBuilder builder)
    {
        return new LogTapper(builder);
    }

    public void Dispose()
    {
        // End of the 'using' block - stop capturing on this thread
        ThreadContext.Properties.Remove("LogOutputTo");
    }
}

ThreadContext is a log4net class, and it stores properties that are available to all log4net appenders on a per-thread basis.

To make the implementation work, I then set up a log4net Appender. Whenever a log entry is written, if a "log tapper" is active on the current thread, the appender writes the same message to the tapper:

public class LogTapAppender : IAppender
{
    public string Name { get; set; }

    public void DoAppend(LoggingEvent loggingEvent)
    {
        var capture = ThreadContext.Properties["LogOutputTo"] as StringBuilder;
        if (capture == null)
            return;

        capture.AppendLine(DateTime.UtcNow.ToString("yyyy-MM-dd HH:mm:ss") + " " + loggingEvent.Level.DisplayName.PadRight(6, ' ') + " " + loggingEvent.RenderedMessage);

        if (loggingEvent.ExceptionObject != null)
        {
            capture.Append(loggingEvent.ExceptionObject.ToString());
            capture.AppendLine();
        }
    }

    public void Close()
    {
    }
}
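
The appender then just needs to be registered with log4net like any other appender - via the XML configuration or in code. A minimal code-based sketch (not necessarily how Octopus wires it up) would be:

// Register the tap appender alongside whatever other appenders are configured.
// An equivalent <appender> entry in the log4net XML configuration works too.
log4net.Config.BasicConfigurator.Configure(new LogTapAppender());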

What I like about this approach is that all of my components just write useful information to the log4net ILog - they don't need to know if they are being called within the context of a UI, or a job, or any other alternative way of recording their progress. My higher-level controlling classes can siphon the output of any components they call into the place that makes the most sense for them.

One of the things I'm noticing about running my Micro ISV is that I have to constantly change my mind :)

Until today I had been using AgileZen to manage the Octopus backlog. There is also a discussion board on Tender for people to post suggestions/ideas on improving Octopus.

AgileZen is great, but one of the downsides was that a suggestion would get added to the board and people could never see how it was progressing or where it sat in the queue. There was also no voting, so I couldn't gauge how many people thought a suggestion was good.

Today I switched to Trello, which is very similar to AgileZen but also allows public visibility. I'm hoping this will create much more transparency in how Octopus develops:

View the Octopus Trello

If you create an account on Trello (or use your Google account) you can:

  • Vote for suggestions/features you like
  • Add comments to items
  • See how a suggestion progresses

Trello doesn't yet allow you to create new suggestions, so they should still be posted to the Octopus discussion board. I'll then add the suggestion to Trello so you can see how it progresses.