A picture of me

Welcome, my name is Paul Stovell. I live in Brisbane and work on Octopus Deploy, an automated deployment tool for .NET applications.

Prior to founding Octopus Deploy, I worked for an investment bank in London building WPF applications, and before that I worked for Readify, an Australian .NET consulting firm. I also worked on a number of open source projects and was an active user group presenter. I was a Microsoft MVP for WPF from 2006 to 2013.

These are the projects that I work on actively:

  • FunnelWeb
    A blog engine for real developers, built using ASP.NET MVC. It powers this site.
  • Magellan
    A lightweight framework for building WPF navigation applications, inspired by ASP.NET MVC.
  • DbUp
    A tool for streamlining SQL database deployments.

These projects were fun ideas that I put out to see if they gained traction, but I don't actively work on them anymore:

  • Bindable LINQ
    An extension to LINQ that handles collection changed events and propagates the changes into the resulting collections. It's like a "live" LINQ to Objects.
  • Tape
    A simple dependency management scheme for .NET.
  • Pigeon
    A lightweight alternative to WCF designed for high throughput, using Google Protocol Buffers and raw TCP sockets.
  • Observal
    A library for managing observable hierarchies of objects.
  • MicroModels
    A library for building very tiny ViewModels for WPF.
  • StovellMocks
    An excursion into what is involved in building a mocking library. Inspired by RhinoMocks.
  • jQueryPad
    A lightweight tool for prototyping snippets of JavaScript and HTML.

Magellan 2.2 is available from Google Code:

http://code.google.com/p/magellan-framework/

Here are some of the changes in this release:

  • Navigators can be nested - for example, the frame on your main window can create child navigators for a frame in a popup window. You can use the Navigator.Parent property to walk up the tree (see the sketch after this list).
  • Ability to queue a background task that is automatically cancelled when the page is closed
  • An IsBusy flag on ViewModels that changes to true when background operations are executing
  • Ability to "flash" a message on a page
  • Abstractions to create background tasks that run on a timer in ViewModels
  • Various bug fixes
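
For example, here's a minimal sketch of walking from a nested navigator back to the root using the new Parent property (the INavigator type name here is my assumption, not necessarily Magellan's exact interface):

// Sketch: walk up the navigator tree via Navigator.Parent
// (INavigator is assumed; e.g. from a popup frame's navigator
// back to the navigator for the main window's frame)
static INavigator GetRootNavigator(INavigator navigator)
{
    while (navigator.Parent != null)
    {
        navigator = navigator.Parent;
    }
    return navigator;
}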

I'll blog some of these features in more detail later, but for now, the source code includes a new Fluent Validation sample that shows several of them off.

Magellan packages are also finally available from NuGet.

Special thanks go to Steven Nagy for inspiring some of the nested navigator features.

I just released an open source library called DbUp, which embodies some of the goals I talked about in my how to deploy a database article.

A sample application using DbUp

The sample application included in the code shows how the API works.
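
To give a flavour of the API, here's a minimal sketch using DbUp's fluent builder (illustrative only - the sample in the download is the authoritative usage):

using System;
using System.Reflection;
using DbUp;

class Program
{
    static int Main()
    {
        var connectionString = "Server=.;Database=Acme;Trusted_Connection=True;";

        // Build an upgrade engine that runs any embedded SQL scripts
        // that haven't been executed against this database yet
        var upgrader = DeployChanges.To
            .SqlDatabase(connectionString)
            .WithScriptsEmbeddedInAssembly(Assembly.GetExecutingAssembly())
            .LogToConsole()
            .Build();

        var result = upgrader.PerformUpgrade();
        if (!result.Successful)
        {
            Console.WriteLine(result.Error);
            return -1;
        }
        return 0;
    }
}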

Managing dependencies in .NET can be painful. You download the latest version of NServiceBus, only to find that it uses an old version of Castle Windsor, which isn't compatible with the new version of Windsor that you need for Caliburn.

Package managers like OpenWrap and NuGet attempt to solve this problem, but they don't. Unless the NServiceBus team (or someone else) releases a new version of NServiceBus for the latest Castle Windsor, you're out of luck.

Personally, I like the way NDesk.Options goes about things. When you download the ZIP package, there's a single C# file under the ndesk-options folder that has all of the source code combined. You can just copy that C# file and paste it into your solution. No more DLL references.

Thus, I've created Tape, the solution to dependency management in .NET**.

Getting started with Tape

Let's use Tape to set up a project that depends on Autofac.

Step 1: Download tape.exe

Step 2: Download the latest Autofac source code

Step 3: Run tape over the directory containing Autofac.csproj:

The command line interface for Tape

Here I'm telling Tape to take the code in the Source\Autofac directory and package it into Autofac.cs. The -i switch tells Tape to turn all public types into internal.
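
Reconstructed from that description, the command would have looked something like this (the argument order and spelling are my guess - only the -i switch is confirmed above):

tape Source\Autofac Autofac.cs -i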

Step 4: Create a new VS project using Autofac.cs. I like to put my dependencies into a lib folder:

A VS solution with Autofac source code embedded

Step 5: The Autofac source code has a couple of MEF dependencies, so you'll need to add a reference to System.ComponentModel.Composition.

Step 6: Add some code that uses it:

using System;
using Autofac;

interface IFoo
{
    void DoSomething();
}

public class Foo : IFoo
{
    public void DoSomething()
    {
        Console.WriteLine("Done!");
    }
}

class Program
{
    static void Main(string[] args)
    {
        var builder = new ContainerBuilder();
        builder.RegisterType<Foo>().As<IFoo>();

        var container = builder.Build();
        var foo = container.Resolve<IFoo>();
        foo.DoSomething();

        Console.ReadKey();
    }
}

And you are done!

What it does

Tape scans the directory for all .cs files, and turns them into one big, unified .cs file. Along the way, it:

  • Removes [assembly:] attributes
  • Moves using statements into the namespace body
  • If the -i switch is passed, changes public types to internal types

What does the output look like?

Here are a couple of tapes that I taped with tape (wow, a verb AND a noun):

  1. Autofac.cs
  2. Castle.cs
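
For a rough idea of the transformation, here's a before/after sketch of a single file (illustrative code, not actual Autofac source):

// Before - Source\MyLib\Greeter.cs:
using System;

[assembly: CLSCompliant(true)]   // [assembly:] attributes are stripped

namespace MyLib
{
    public class Greeter
    {
        public void Greet() { Console.WriteLine("Hi!"); }
    }
}

// After - the same file as it appears in the combined MyLib.cs,
// taped with the -i switch:
namespace MyLib
{
    using System;            // using statements moved into the namespace body

    internal class Greeter   // public types changed to internal
    {
        public void Greet() { Console.WriteLine("Hi!"); }
    }
}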

Why this approach

If NServiceBus, NHibernate and Castle were available as single .cs files, making the latest version of any library work with another might be much easier.

Also, for really small libraries, it's annoying to have to reference an entire DLL. The ability to download (and embed as internal types) a single .cs file and paste it into my solution is pretty attractive.

Disclaimers

  1. Horn would probably be a better tool to use.
  2. I didn't test this on anything but Autofac and Castle.
  3. It probably doesn't work on anything else.
  4. There are probably a heap of edge cases it doesn't support.
  5. Assemblies contain much more than just code, so yes, a lot of projects won't work with it.

** probably not

I'm still experimenting with building games, and one of my projects is a little client/server game. Rather than using WCF and dealing with the leaky abstractions, I decided to write something small and custom.

Pigeon is an alternative to WCF designed for high throughput.

On my local machine, WCF NetTcpBinding maxes out at about 10,000 messages/second, while Pigeon achieves 40,000-50,000 messages/second.

Messages

Messages are encoded using Google Protocol Buffers. You just have to decorate your C# classes with the following attributes:

[Message(57)]
public class CreateCustomer
{
    [MessagePart(1)] public string FirstName { get; set; }
    [MessagePart(2)] public string LastName { get; set; }
    [MessagePart(448)] public int Age { get; set; }
}

You don't have to use the same class library/DLL on the client and server. Instead, the number 57 in the Message attribute above is used to identify the type. So long as the client and server each have a type numbered 57 with message parts numbered 1, 2 and 448, it will just work, even if the class and property names differ.
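
For example, the server could declare its own version of message 57 under a different name (a sketch of the rule above):

// The server's version of message 57 - the class and property names
// differ from the client's CreateCustomer, but the numbers line up,
// so the serializer treats them as the same message
[Message(57)]
public class NewCustomerRequest
{
    [MessagePart(1)] public string GivenName { get; set; }
    [MessagePart(2)] public string Surname { get; set; }
    [MessagePart(448)] public int Age { get; set; }
}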

Client example

First you configure the client - here I'm connecting to my loopback IP address on TCP port 9001.

var builder = new MessageClientBuilder("127.0.0.1", 9001);
builder.KnownTypes.Add(typeof(CreateCustomer));

var client = builder.Build();

We need the KnownTypes.Add call to make the deserializer aware of the CreateCustomer class, so that if it is told to deserialize 57, it knows which class to create.

After we create the client, we can listen for messages from the server:

client.MessageReceived += MessageReceived;
client.Start();

...

private void MessageReceived(object sender, object message)
{
    Console.WriteLine("Got message: " + message);
}

The call to client.Start creates a new background thread, which sits in a loop raising the MessageReceived event each time a message is read from the TCP socket. Note that this means your MessageReceived handler will be called from a background thread.

Finally, the client can send messages to the server:

client.Send(new CreateCustomer { FirstName = "Paul" });

This will queue the message for sending by another background thread, leaving your application code to continue running uninterrupted.

Server example

Writing a server is a little more complicated, since you need to track which clients are connected, and send messages to specific clients.

The server is configured in a similar way to the client - it needs a TCP port number and known types:

var builder = new MessageServerBuilder(9001);
builder.KnownTypes.Add(typeof(CreateCustomer));

var server = builder.Build();
server.MessageReceived += MessageReceived;
server.Start();

...

private void MessageReceived(object sender, MessageReceivedEventArgs e)
{
    var createCustomer = e.Message as CreateCustomer;
    if (createCustomer != null) 
    {
        // Create the customer
        var newCustomerId = SaveNewCustomerToDatabase(createCustomer.FirstName);

        // Reply back to the client (e.Sender), informing them that the customer 
        // was created
        server.Send(new CreateSuccess(newCustomerId), e.Sender);
    }
}

The server can also broadcast a message to all clients:

server.Broadcast(new HappyNewYear());

FAQ

How many threads are used?

A simple client application would use four threads:

  1. The main application thread
  2. The send thread, which sends outbound messages to the server
  3. The receive thread, which queues received messages from the server for dispatch
  4. The dispatch thread, which raises the MessageReceived event

A simple server application would use five threads:

  1. The main application thread
  2. The listen thread, which accepts incoming socket requests
  3. The send thread, which sends outbound messages to any client
  4. The receive thread, which queues received messages from the client for dispatch
  5. The dispatch thread, which raises the MessageReceived event

Note that each of these threads sleeps when there is no work to do.

Will I run out of memory?

If your application is producing messages faster than they can be written to the sockets, or if you are receiving messages from the socket faster than your MessageReceived event handler can handle them, messages will be discarded. Memory usage should hit a limit, since there will never be more than a fixed number of messages on the queue at once.

To illustrate, imagine an MMORPG. As the characters walk around the online world, they continually send client.Send(new Moved(currentPosition)) messages to the server. Chances are, if the server is struggling to cope with the number of messages, you'd be happy to discard the Moved message that was sent 20 seconds ago in favour of processing the Moved message that was sent 1 second ago.
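
Conceptually, the send and receive queues behave like a bounded buffer that throws away the oldest message when full - something like this sketch (not Pigeon's actual implementation):

using System.Collections.Generic;

// Sketch: a bounded queue that discards the oldest message when full,
// so memory stays capped even if producers outpace consumers
class BoundedMessageQueue
{
    private readonly Queue<object> queue = new Queue<object>();
    private readonly int capacity;

    public BoundedMessageQueue(int capacity)
    {
        this.capacity = capacity;
    }

    public void Enqueue(object message)
    {
        lock (queue)
        {
            if (queue.Count >= capacity)
            {
                queue.Dequeue();   // drop the stale message (e.g. an old Moved)
            }
            queue.Enqueue(message);
        }
    }
}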

Over the Christmas break, my little brother Andrew and I got together and experimented with building games for the Xbox 360 with the XNA framework. We ended up building a game called StarFight. It started as a clone of Asteroids, until we found that killing each other was more interesting than killing big space rocks.

StarFight in action - here you can see me kicking Andrew's butt

The Xbox 360 controllers have two thumb-sticks. In our game, the left thumb-stick controls movement of the ship, while the right thumb-stick controls the direction the ship is facing. The right trigger button fires the ship's cannon.

We also added boosters, which randomly appear and give the player who picks them up a special bonus. Here you can see Andrew picking up a health boost - I think I'm in trouble:

Andrew picked up a health boost

Publishing the game to the app store

I find the game quite fun to play, and I'm curious to see if anyone else does. We've submitted the game to the XNA Creators Club - there, it will be peer reviewed before being published to the Xbox 360 indie app store. I've never published anything to an app store before, so I'm curious to see how the process goes.

Open source

We submitted the game to the app store primarily so that other people can play it (there's no "free" option on the store, and there's no way to play XNA games on the Xbox without buying them from the marketplace or having an XNA Creators Club membership). But if anyone is interested, the source code to the game is on BitBucket.

On Learning XNA

XNA is a very nice framework, and does a good job of straddling the line between making life easy and avoiding leaky abstractions. It handles a lot of the complexity of loading and converting content (textures, 3D models, etc.), but most of the code you write still comes back to triangles, vectors, matrices and trigonometry. All you need to get started with XNA for PC games is XNA Game Studio 4.0, which is free.

On Xbox Development

I'd been writing a few XNA applications for PC, and making them run on Xbox was very easy. First, you'll need to fork out some cash for an XNA Creators Club membership. Once you've done that, install the XNA Game Studio Connector onto your Xbox.

Your PC can then connect to the Xbox via the XNA Game Studio device center. The debugging experience is fantastic - just hit F5 in Visual Studio and the game will be deployed and run on your Xbox in no time.

On Australia

The recent Hanselminutes podcast is what got me into using XNA for Xbox games. While I always thought of the Xbox as a platform for large corporations to publish major game titles, the Indie store makes it more like an app store.

The Indie game store relies on peer review to establish ratings (G, PG, M15+, for example). That's fine in the USA and other countries, but apparently it's not allowed in Australia. Since games are required to be rated before being sold in Australia, the indie game store isn't available here. That's a real shame.

The funny side to this is that as an Australian, I can develop and publish an Xbox game for people in other countries to play, but I can't download and play it myself.

When I'm building an application that stores data, there are a few things I try to make sure we can do to manage the lifecycle of that storage. I'm going to focus on SQL Server, but this applies to other relational databases too.

What's in a database?

Even outside of production, a database is more than schema. A database "definition" consists of:

  • The database schema (tables, views, procedures, schemas)
  • Reference data (ZIP codes, salutations, other things I expect to always "be there")
  • Sample master/transactional data (for developers/testers)
  • Security (roles, permissions, users)

It may even include management job definitions (backup, shrink), though those tend to be left to DBAs.

Transitions, not States

As each iteration progresses, we'll make changes to our database.

One approach to upgrading existing databases is to "diff" the old state of the database with our new state, and then to manually modify this diff until we have something that works. We then employ the "hope and pray" strategy of managing our database, which generally means "leave it to the DBA" for production. Tools like Red Gate SQL Compare and Visual Studio Database Edition encourage this approach.

I've never understood the state- or model-driven approach, and I'll explain why. Here's a simple example of some T-SQL:

create table dbo.Customer (
    Id int not null identity(1,1) constraint PK_CustomerId primary key,
    FullName nvarchar(200) not null
);

See the "create table"? That's not a definition - it's an instruction. If I need to change my table, I don't change the statement above. I just write:

alter table dbo.Customer 
    add IsPreferred bit not null 
        constraint DF_Customer_IsPreferred default(0);

Again, an instruction. A transition. I don't tell the database what the state is - I tell it how to get there. I can provide a lot more context, and I have a lot more power.

You'll note that I used a default constraint above - that's because my table might have had data already. Since I was thinking about transitions, I was forced to think about these issues.

Our databases are designed for transitions; attempting to bolt a state-based approach on them is about as dumb as bolting a stateful model on top of the web (and Visual Studio Database Edition is about as much fun as ViewState).

Keep in mind that making changes to databases can be complicated. Here are some things we might do:

  • Add a column to an existing table (what will the default values be?)
  • Split a column into two columns (how will you deal with data in the existing column? see the sketch after this list)
  • Move a column from one table onto another (remember to move it, not to drop and create the column and lose the data)
  • Duplicate data from a column on one table into a column on another (to reduce joins) (don't just create the empty column - figure out how to get the data there)
  • Rename a column (don't just create a new one and delete the old one)
  • Change the type of a column (how will you convert the old data? What if some rows won't convert?)
  • Change the data within a column (maybe all order #'s need to be prefixed with the customer code?)
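
To make the second item concrete, here's what a split-column transition might look like against the dbo.Customer table from earlier (a sketch - real scripts need to deal with messier data):

-- Transition: split FullName into FirstName and LastName
alter table dbo.Customer add FirstName nvarchar(200) null;
alter table dbo.Customer add LastName nvarchar(200) null;
go

-- Deal with the data in the existing column (naive split on the first space)
update dbo.Customer
set FirstName = left(FullName, charindex(' ', FullName + ' ') - 1),
    LastName = ltrim(substring(FullName, charindex(' ', FullName + ' '), 200));
go

-- Only once the data is migrated do we tighten the schema and drop the old column
alter table dbo.Customer alter column FirstName nvarchar(200) not null;
alter table dbo.Customer drop column FullName;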

You can see how performing a "diff" on the old and new state can miss some of the intricacies of real-life data management.

Successful database management

Here are some things I want from my database deployment strategy:

  1. Source controlled
    Your database isn't in source control? You don't deserve one. Go use Excel.
  2. Testability
    I want to be able to write an integration test that takes a backup of the old state, performs the upgrade to the current state, and verifies that the data wasn't corrupted (a sketch follows this list).
  3. Continuous integration
    I want those tests run by my build server, every time I check in. I'd like a CI build that takes a production backup, restores it, and runs and tests any upgrades nightly.
  4. No shared databases
    Every developer should be able to have a copy of the database on their own machine. Deploying that database - with sample data - should be one click.
  5. Dogfooding upgrades
    If Susan makes a change to the database, Harry should be able to execute her transitions on his own database. If he has different test data from her, he might find bugs she didn't. Harry shouldn't just blow away his database and start again.
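
As a rough shape for the integration test in point 2, using NUnit (RestoreBackup, RunPendingTransitions and CountRows are hypothetical helpers standing in for whatever your project uses):

using NUnit.Framework;

[TestFixture]
public class UpgradeTests
{
    [Test]
    public void UpgradingLastReleasesBackup_PreservesCustomerData()
    {
        // Restore a backup of the previous release's database (hypothetical helper)
        RestoreBackup(@"Backups\Release-1.2.bak", "AcmeTest");

        // Execute every transition script not yet recorded in the Versions table
        // (hypothetical helper)
        RunPendingTransitions("AcmeTest", @"Database\Transitions");

        // The upgrade should not have corrupted or lost data (hypothetical helper)
        Assert.That(CountRows("AcmeTest", "dbo.Customer"), Is.GreaterThan(0));
    }
}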

The benefits to this are enormous. By testing my transitions every single day - on my own dev test data, in my integration tests, on my build server, against production backups - I'm going to be confident that my changes will work in production.

Versions table

There should be something that I can query to know which transitions have been run against my database. The simplest way is with a Versions table, which tells me the scripts that were run, when they were run, and who they were run by.
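
A minimal Versions table might look like this (a sketch - the exact shape is up to you):

create table dbo.Versions (
    Id int not null identity(1,1) constraint PK_Versions primary key,
    ScriptName nvarchar(255) not null,   -- which transition script was run
    AppliedAt datetime not null constraint DF_Versions_AppliedAt default(getdate()),
    AppliedBy nvarchar(128) not null constraint DF_Versions_AppliedBy default(suser_sname())
);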

When it comes to upgrading, I can query this table, skip the transitions that have been run, and execute the ones that haven't.

Sample data

Development teams often need access to a good set of sample data, ideally lots of it. Again, these should be transition scripts that can be optionally run (since I might not want sample data in production), and in source control.

Document Databases

Most of these principles apply to document databases too. In fact, in some ways the problems are harder. While you don't have a fixed schema, you're probably mapping your documents to objects - what if the structure of those objects changes? You may need to run transitional scripts over the document database to manipulate the existing documents. You may also need to re-define your indexes. You want those deployment scenarios to be testable and trusted.

Migration libraries

Rails popularized the approach of using a DSL to describe data migrations. There are a number of .NET ports of this concept, like Fluent Migrator and Machine.Migrations.

Personally, I find T-SQL a perfectly good DSL for describing data migrations. I love my ORMs, but for schema work, T-SQL is perfectly adequate.

Again, these libraries focus on transitions (create table, add column), not states, so they're useful, unlike Visual Studio Database Edition, which isn't.

When using Prism, it's common to end up with code like this:

private void ShowHome() 
{
    var view = CreateView();
    var viewModel = CreateViewModel();
    view.DataContext = viewModel;

    var region = regionManager.Regions["SomeRegion"];

    region.Add(view, null, true);
    region.Activate(view);
}

Using MVVM, we can make the assumption that every view has a view model following a naming convention - HomeView will always have a HomeViewModel.

Here's a shorter way we could write it:

private void ShowHome() 
{
    regionManager.AddViewModel<HomeViewModel>("SomeRegion");
}

The following extension method shows how AddViewModel might be implemented:

public static void AddViewModel<TViewModel>(this IRegionManager regionManager, string regionName)
{
    // Figure out the view type from the ViewModel type, following the
    // HomeViewModel -> HomeView naming convention
    var viewTypeName = typeof(TViewModel).FullName.Replace("ViewModel", "View");
    var viewType = typeof(TViewModel).Assembly.GetType(viewTypeName);

    // Build the view and model, and bind them
    var view = (FrameworkElement)ServiceLocator.Current.GetInstance(viewType);
    var model = ServiceLocator.Current.GetInstance<TViewModel>();
    view.DataContext = model;

    // Render
    regionManager.Regions[regionName].Add(view, null, true);
    regionManager.Regions[regionName].Activate(view);
}

The direct reference to ServiceLocator from within the extension method is a code smell, and makes testing a little messy. There are tricks we could use, like wrapping the RegionManager in some kind of object that has a ViewLocator or ViewModelLocator, but you get the picture.

Parameters

To pass parameters to the view model, we could write this:

private void EditCustomer() 
{
    regionManager.AddViewModel<EditCustomerViewModel>("SomeRegion", customerId => 31);
}

Our view model constructor could look like this:

public class EditCustomerViewModel
{
    public EditCustomerViewModel(int customerId, ILogger logger, IFoo foo, IBar bar) 
    {
        ...
    }
}

The other constructor parameters will be resolved by the IOC container, but the customerId is a parameter we'd like to pass manually. At this point, using the service locator isn't enough - it doesn't support parameter passing.
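
One way to pull the name/value pair out of that customerId => 31 lambda is via the delegate's reflected parameter name - a sketch, not Prism's or any particular container's API:

using System;
using System.Collections.Generic;

static class ParameterReader
{
    // Extracts ("customerId", 31) from a lambda like: customerId => 31.
    // The compiler preserves the lambda's parameter name, so we can read
    // it back via reflection and invoke the delegate to get the value.
    public static KeyValuePair<string, object> Read(Func<object, object> parameter)
    {
        var name = parameter.Method.GetParameters()[0].Name;
        var value = parameter(null);
        return new KeyValuePair<string, object>(name, value);
    }
}

The extension method could then hand that pair to a container that supports mixing manually supplied and container-resolved constructor arguments.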