A picture of me

Welcome, my name is Paul Stovell. I live in Brisbane and work on Octopus Deploy, an automated deployment tool for .NET applications.

Prior to founding Octopus Deploy, I worked for an investment bank in London building WPF applications, and before that I worked for Readify, an Australian .NET consulting firm. I also worked on a number of open source projects and was an active user group presenter. I was a Microsoft MVP for WPF from 2006 to 2013.

I just released some source on Google Code called Observal:

http://code.google.com/p/observal/

Observal was extracted from work on a recent WPF project. In our application, we had a deep hierarchy of view model objects, with some very complicated interrelationships - setting one property over here means adding or removing items from a collection over there - and since WPF applications are so stateful, we had to do it all reactively.

The project home page gives a simple example of how Observal might be used. I'll use the remainder of this post for a deeper example.

Example

Suppose we have an object model to represent an organization chart:

An employee class, with a collection of direct reports
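
The code samples below assume an Employee class along these lines (my reconstruction, so the details may differ from the real project): it raises PropertyChanged when edited, and exposes direct reports as an ObservableCollection so collection changes can be observed.

public class Employee : INotifyPropertyChanged
{
    private string _name;
    private decimal _salary;

    public Employee(string name, decimal salary, params Employee[] directReports)
    {
        _name = name;
        _salary = salary;
        DirectReports = new ObservableCollection<Employee>(directReports);
    }

    public string Name
    {
        get { return _name; }
        set { _name = value; OnPropertyChanged("Name"); }
    }

    public decimal Salary
    {
        get { return _salary; }
        set { _salary = value; OnPropertyChanged("Salary"); }
    }

    public ObservableCollection<Employee> DirectReports { get; private set; }

    public event PropertyChangedEventHandler PropertyChanged;

    private void OnPropertyChanged(string propertyName)
    {
        var handler = PropertyChanged;
        if (handler != null) handler(this, new PropertyChangedEventArgs(propertyName));
    }
}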

We'll build a view to show and edit a hierarchy of employees, and provide a filter to show a list of items from the hierarchy:

A window with a treeview of employees, their details, and a list of employees with under $100,000 salary

Working with the hierarchy in WPF is easy - we just build a hierarchical object model and bind it to the tree view. We could build that view model using code like this:

public partial class OrgChartWindow : Window
{
    public OrgChartWindow()
    {
        InitializeComponent();

        var sampleEmployees =
            new Employee("Ryan Howard", 200000,
                new Employee("Michael Scott", 130000,
                    new Employee("Dwight Schrute", 80000),
                    new Employee("Jim Halpert", 80000,
                        new Employee("Andy Bernard", 75000,
                            new Employee("Stanley Hudson", 70000),
                            new Employee("Phyllis Lapin", 70000)))));

        DataContext = new OrgChartViewModel(new[] { sampleEmployees });
    }
}

That gives us the tree view, along with the ability to add and edit employees. But how do we manage the list of employees earning under $100,000?

Enter Observal

The "Employees with salary < $100,000" panel is effectively a flattened view of the employee hierarchy. To build it, we'd need to subscribe to the CollectionChanged event on every employee's DirectReports collection, and to subscribe to the PropertyChanged event on every employee.

Observal makes this trivial. We can make the following addition to our view model:

public OrgChartViewModel(IEnumerable<Employee> employees)
{
    _rootEmployees = new ObservableCollection<Employee>(employees);

    var observer = new Observer();
    observer.Extend(new TraverseExtension()).Follow<Employee>(e => e.DirectReports);
    observer.Extend(new CollectionExpansionExtension());
    observer.Extend(new PropertyChangedExtension()).WhenPropertyChanges<Employee>(x => FilterEmployee(x.Source));
    observer.Extend(new ItemsChangedExtension()).WhenAdded<Employee>(FilterEmployee);
    observer.Add(_rootEmployees);
}

private void FilterEmployee(Employee employee)
{
    if (employee.Salary < 100000)
    {
        if (!FilteredEmployees.Contains(employee))
            FilteredEmployees.Add(employee);
    }
    else
    {
        FilteredEmployees.Remove(employee);
    }
}

The idea behind Observal is that there is an Observer, which keeps a list of items being observed. Observers can accept IObserverExtensions, which are notified when items are added or removed. In the example above, we make use of four different extensions:

  • TraverseExtension - any time an employee is added to the observer, we'll add its DirectReports collection too.
  • CollectionExpansionExtension - when the DirectReports collection is added, we'll add all of its items to the observer.
  • PropertyChangedExtension - called any time a property on an observed object changes.
  • ItemsChangedExtension - notifies us whenever an item is added or removed.

Each extension is useful by itself, but they become very powerful when combined. In this example, we were able to monitor an entire hierarchy of objects, and to react whenever any part of the hierarchy changes. I'd urge you to check out Observal on Google Code and let me know what you think.

When starting a project, one of the important early steps is to define and agree on what "done" means. There are plenty of good examples around that we can draw on as inspiration for our own lists. On my current personal project, I just put together my own definition of done.

A feature is "done" when:

  1. Users can use the feature, are happy with it, and any major design flaws or bugs are closed
  2. UI for the feature has been styled and designed to look nice
  3. UI for the feature works for normal as well as high-contrast mode, and works equally well for colour blind users
  4. Keyboard accessors/shortcuts and tab order are set
  5. End user help/tooltips work
  6. If necessary, any sample data needed to test the feature is ready
  7. Automated unit tests are written, with a high enough level of coverage
  8. Automated UI "smoke" tests have been recorded and pass
  9. Load tests written and pass
  10. Each button click/interaction results in the minimal number of service calls
  11. All UI components are hardware rendered, virtualization is enabled, and there are no data binding errors
  12. Any auditing/tracing code is added, and the output is useful and readable
  13. Any potential failure scenarios are catered for
  14. Feature is tested for any major memory leaks
  15. Automated database migration scripts are ready and tested
  16. Security permission checks have been implemented and validated via automated tests

Some of this revolves around performance. This isn't about premature optimization, just checking for obvious problems. If I find I'm accidentally doing a SELECT N+1 that isn't a problem on my laptop, but would be against a reasonable set of sample data, I want to resolve it before claiming the feature is "done".

Every project is different. What does your definition of done look like?

I'm kicking off a new project - a WPF application using WCF and SQL Server, with a little ASP.NET MVC portal.

A fun part of new projects is getting to decide which technologies and libraries to use. For me, it's going to be:

  1. Autofac, the IOC container
  2. AutoMapper, for that left-hand = right-hand code
  3. Magellan, the navigation framework
  4. Moq, for mocking
  5. NBuilder for building test objects easily
  6. NHibernate and FluentNHibernate for that pesky database
  7. NUnit, for testing

A long-term goal for this application is to use a message bus for communication. For now, I don't want the infrastructure hassles, and it's not so important, so I'm hoping to make my code feel like I'm using a bus while really using point-to-point WCF. I was hoping to use Agatha, which makes WCF feel more like a messaging layer, but it would have introduced a third logging library that I just don't care for. Perhaps I'll fork the code or just borrow the ideas and write my own.
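
The shape I have in mind is roughly this - all of the names below are hypothetical, not Agatha's API:

// Hypothetical request/response abstraction over point-to-point WCF.
// Application code talks only to IServiceBus, so a real message bus could
// be swapped in later without rippling through the client.
public interface IRequest { }

public interface IResponse { }

public interface IServiceBus
{
    TResponse Request<TResponse>(IRequest request) where TResponse : IResponse;
}

// The single WCF contract behind the facade (known-types configuration
// for the serializer is omitted)
[ServiceContract]
public interface IMessageService
{
    [OperationContract]
    IResponse Process(IRequest request);
}

public class WcfServiceBus : IServiceBus
{
    private readonly IMessageService _service;   // a WCF channel or proxy

    public WcfServiceBus(IMessageService service)
    {
        _service = service;
    }

    public TResponse Request<TResponse>(IRequest request) where TResponse : IResponse
    {
        // Point-to-point today; a real bus implementation tomorrow
        return (TResponse)_service.Process(request);
    }
}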

The ASP.NET MVC app will probably use the default ASP.NET view engine, but I'll switch to Razor if they ever release it.

For database management, I'm planning to use the same database deployment tool that I use for managing my blog. I just can't stand DataDude.

It's interesting to look at this list and compare it to what it might have been a few years ago. There's no trace of patterns & practices, and no third party control libraries - and especially no data grids :)

What does the lib folder on your current project look like?

I gave a presentation titled "Real-world MVVM in WPF" at the DeveloperDeveloperDeveloper event in Sydney. Thanks to everyone who attended the talk, and to the DDD organizers and sponsors for putting on such a fun and well-run event.

You can download the slides and sample code from my file server.

I'll be doing the same talk at CodeCampSA in Adelaide this Saturday, 24th of July.

What might the next competitor to WPF look like? How would I design one?

My overall goals would be:

  1. Easy to learn
  2. Favors composition over inheritance
  3. Great performance
  4. Encourages the use of patterns
  5. Cross platform and open source

I think a basic UI library would be composed of:

Diagram of the major components of a UI toolkit

  1. A markup language that programmers would use to design user interfaces
  2. An object model (like a DOM) that the markup is translated into
  3. A renderer, which sends the scene to the graphics card

What follows are some unordered thoughts about how I'd build the next UI library.

Markup

XAML and HTML are very similar. But if you ask most developers, they'll rate HTML as being much easier to learn than XAML. Markup languages are a fine way to describe a user interface, so I want to keep that concept, but there are some things about XAML that I don't quite like:

HTML is a markup format, while XAML is a serialization format, which means a lot of implementation details leak through. Property element syntax exists because the XAML parser (really just a serialization engine) would otherwise get confused and try to instantiate a ContextMenu object rather than set the ContextMenu property. In contrast, an HTML-style interpreter would just know what ContextMenu meant in that particular context. Tags like DataTemplate would also be a thing of the past.

I would also want to avoid all of the "tax" that applies to XAML files. You know, XML namespaces, x:Class entries, and so on. I should be able to throw <strong>Hello</strong> into a .view file, hit F5, and see it.

My markup language would also be designed for programmers, not designers. HTML was designed to be hand-written, so it's easy to hand write. XAML was designed for tools to emit, so hand-writing XAML is an exercise in verboseness and RSI.

Composition

In WPF, the simplest controls have a huge hierarchy of inheritance, each layer of which adds different capabilities: Button : ButtonBase : ContentControl : Control : FrameworkElement : UIElement : Visual : DependencyObject : DispatcherObject : Object.

If you open the source code (in Reflector or via the recent source code debugging support) you'll find that each of those is made up of thousands of lines of code, if not tens of thousands. And of course most of it is internal, so you have no chance of changing any of it :)

The DependencyObject concept is one from WPF that I'd definitely keep. I think we also want to be able to write code to just new Button() and add it to a scene. I just want to cut out the middle layers of inheritance.

A button should look like this:

public class Button : DependencyObject 
{
    public Button()
    {
        AddCapability(new Visual());
        AddCapability(new Sizeable());
        AddCapability(new Clickable());
    } 
}

Each capability would be an aspect that hooks on to the element. Aspects that are commonly found together (most controls that support hit testing probably also support MouseOver events) could be grouped into composite capabilities.
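
For example, a composite capability might be nothing more than a capability that registers the others - hypothetical code, in the spirit of the Button sketch above:

public abstract class Capability
{
    public abstract void Attach(DependencyObject element);
}

// Bundles the capabilities that interactive controls usually share, so
// they can be added with a single AddCapability call
public class Interactive : Capability
{
    public override void Attach(DependencyObject element)
    {
        element.AddCapability(new HitTestable());
        element.AddCapability(new MouseOverTracking());
    }
}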

Performance

WPF is built on DirectX, which is a very welcome break from GDI+. But I suspect I'd go with OpenGL just to keep it open source. I think I'd like to combine some of the concepts though - my UI stack should allow me to mix a retained mode scene (like WPF) with immediate mode rendering (like GDI+). Concepts like UI virtualization should also be supported.

The other major area of performance I'd look at is having multiple UI threads. The low level graphics systems are capable of doing this, but most UI toolkits don't expose it. Ideally, I'd be able to demarcate a branch of the scene tree as being rendered by a different thread.

For example, in my version of Outlook, each of these panels could have its own rendering thread:

Outlook

Binding as law

In WPF you can do this:

label.Text = customer.FullName;

In my UI toolkit, you can only do this:

Bind(label.Text, customer.FullName);

By forcing binding to be used, we change from a push model (where the UI logic thread can throw things at us), to a pull model, where we only accept updates when ready.

When property changed events are fired, we can mark a UI property as having a pending change, but that is all. When the UI rendering thread is ready, it can read the new value. This means instead of having to lock and limit to one UI thread, we can have many, since each thread reads property values when it is ready.
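
To make that concrete, a pull-model bound property might look something like this sketch (entirely my invention):

public class BoundProperty<T>
{
    private readonly Func<T> _source;       // e.g. () => customer.FullName
    private volatile bool _isDirty = true;
    private T _value;

    public BoundProperty(Func<T> source)
    {
        _source = source;
    }

    // Called from any thread when the source raises a change notification
    public void MarkDirty()
    {
        _isDirty = true;
    }

    // Called only by the rendering thread that owns this property
    public T Value
    {
        get
        {
            if (_isDirty)
            {
                _value = _source();
                _isDirty = false;
            }
            return _value;
        }
    }
}

The thread that owns the property decides when to read Value, so no lock on the wider UI tree is ever taken.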

Enforcing patterns

My UI toolkit wouldn't have the notion of code behind. Instead, it would be geared towards something that looks like Presentation Model/MVVM, and the infrastructure would make it easy to work with. For anything else, you would write a custom Capability to extend the behavior of the object.

The toolkit would also support concepts like DI and IOC, and have out of the box support for mediator, navigation and composition patterns.

Open source and cross platform

I think I'd write this UI toolkit on top of OpenGL, probably using C++ for the rendering portion.

C# is a fine language, but it is too closed to be a good way of writing UI code. The fact that automatic properties still don't support INotifyPropertyChanged tells me C# just isn't meant for UI coding. I'd probably consider a nice dynamic language, like Ruby, or maybe something on the JVM.

Conclusion

This is the point where someone will kindly tell me that [insert Java/Linux widget toolkit] already does all of this :)

A good rule of thumb to live by is that long-lived objects should avoid referencing short-lived objects.

The reason for this is that the .NET garbage collector uses a mark and sweep algorithm to determine whether it can delete and reclaim an object. If it determines that a long-lived object should be kept alive (because you are using it, or because it's in a static field somewhere), it also assumes anything it references is being kept alive.

Conversely, going the other way is fine - a short-lived object can reference a long-lived object because the garbage collector will happily delete it if nothing else uses it.

For example:

  1. You shouldn't add items to a static collection if those items aren't meant to live as long as the collection
  2. You shouldn't subscribe to static events from a short-lived object

The second example often trips up people who aren't familiar with how events work in .NET. When you subscribe to an event, the event keeps a list of subscribers. When the event is raised, it loops through the subscribers and notifies each one - it's a simple form of the observer pattern.
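
That subscriber list is exactly what creates the lifetime problem: each entry is a delegate, and a delegate to an instance method holds a strong reference to the instance through its Target property. A quick illustration (Subscriber is just a stand-in class):

public class Subscriber
{
    public void OnBeeped(object sender, EventArgs e) { }
}

public static void Demonstrate()
{
    var subscriber = new Subscriber();
    EventHandler handler = subscriber.OnBeeped;

    // Prints True - while an event holds this delegate, the subscriber
    // can't be garbage collected
    Console.WriteLine(ReferenceEquals(handler.Target, subscriber));
}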

If you do find yourself needing to write this kind of code, and there isn't a good alternative design, then you generally need to have an unhook option. You might have a way to "remove" the short-lived object from the collection managed by the long-lived object, or you might unsubscribe from an event.

When unsubscribing isn't an option (because you don't trust people to call your Dispose/Unsubscribe method), you can make use of weak event handlers. WPF has its own implementation, but it's too complex for my feeble mind. Here's a simple snippet that I use:

[DebuggerNonUserCode]
public sealed class WeakEventHandler<TEventArgs> where TEventArgs : EventArgs
{
    // Hold the subscriber weakly - the event's subscriber list only keeps
    // this small wrapper alive, not the subscriber itself
    private readonly WeakReference _targetReference;
    private readonly MethodInfo _method;

    public WeakEventHandler(EventHandler<TEventArgs> callback)
    {
        _method = callback.Method;
        _targetReference = new WeakReference(callback.Target, true);
    }

    [DebuggerNonUserCode]
    public void Handler(object sender, TEventArgs e)
    {
        // If the subscriber has been collected, quietly do nothing
        var target = _targetReference.Target;
        if (target != null)
        {
            // Recreate a strongly-typed delegate to the original method
            var callback = (Action<object, TEventArgs>)Delegate.CreateDelegate(typeof(Action<object, TEventArgs>), target, _method, true);
            if (callback != null)
            {
                callback(sender, e);
            }
        }
    }
}

When subscribing to events, instead of writing:

alarm.Beeped += Alarm_Beeped;

Just write:

alarm.Beeped += new WeakEventHandler<AlarmEventArgs>(Alarm_Beeped).Handler;

Your subscriber can now be garbage collected without needing to manually unsubscribe (and without having to remember to). Here are some tests:

[TestFixture]
public class WeakEventsTests
{
    #region Example

    public class Alarm
    {
        public event PropertyChangedEventHandler Beeped;

        public void Beep()
        {
            var handler = Beeped;
            if (handler != null) handler(this, new PropertyChangedEventArgs("Beep!"));
        }
    }

    public class Sleepy
    {
        private readonly Alarm _alarm;
        private int _snoozeCount;

        public Sleepy(Alarm alarm)
        {
            _alarm = alarm;
            _alarm.Beeped += new WeakEventHandler<PropertyChangedEventArgs>(Alarm_Beeped).Handler;
        }

        private void Alarm_Beeped(object sender, PropertyChangedEventArgs e)
        {
            _snoozeCount++;
        }

        public int SnoozeCount
        {
            get { return _snoozeCount; }
        }
    }

    #endregion

    [Test]
    public void ShouldHandleEventWhenBothReferencesAreAlive()
    {
        var alarm = new Alarm();
        var sleepy = new Sleepy(alarm);
        alarm.Beep();
        alarm.Beep();

        Assert.AreEqual(2, sleepy.SnoozeCount);
    }

    [Test]
    public void ShouldAllowSubscriberReferenceToBeCollected()
    {
        var alarm = new Alarm();
        var sleepyReference = null as WeakReference;
        new Action(() =>
        {
            // Run this in a delegate so that the local variable gets garbage collected
            var sleepy = new Sleepy(alarm);
            alarm.Beep();
            alarm.Beep();
            Assert.AreEqual(2, sleepy.SnoozeCount);
            sleepyReference = new WeakReference(sleepy);
        })();

        GC.Collect();
        GC.WaitForPendingFinalizers();
        GC.Collect();

        Assert.IsNull(sleepyReference.Target);
    }

    [Test]
    public void SubscriberShouldNotBeUnsubscribedUntilCollection()
    {
        var alarm = new Alarm();
        var sleepy = new Sleepy(alarm);

        GC.Collect();
        GC.WaitForPendingFinalizers();
        GC.Collect();

        alarm.Beep();
        alarm.Beep();
        Assert.AreEqual(2, sleepy.SnoozeCount);
    }
}

Got to love passing tests

Observant readers will note that this example does keep a small "sacrifice" object alive in the form of the weak event handler wrapper, but it allows the subscriber to be collected. A more complicated API would allow you to unsubscribe the weak handler when the target is null. In my case, I'll keep the simple API and sacrifice the small object.
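
For the curious, that more complicated API might look something like the sketch below - my own extension of the class above, where the caller supplies the unsubscribe action:

public sealed class SelfUnsubscribingWeakEventHandler<TEventArgs> where TEventArgs : EventArgs
{
    private readonly WeakReference _targetReference;
    private readonly MethodInfo _method;
    private Action _unsubscribe;

    public SelfUnsubscribingWeakEventHandler(EventHandler<TEventArgs> callback, Action unsubscribe)
    {
        _method = callback.Method;
        _targetReference = new WeakReference(callback.Target);
        _unsubscribe = unsubscribe;
    }

    public void Handler(object sender, TEventArgs e)
    {
        var target = _targetReference.Target;
        if (target == null)
        {
            // The subscriber is gone - detach this wrapper from the event
            // so it can be collected too
            if (_unsubscribe != null)
            {
                _unsubscribe();
                _unsubscribe = null;
            }
            return;
        }

        var callback = (EventHandler<TEventArgs>)Delegate.CreateDelegate(typeof(EventHandler<TEventArgs>), target, _method, true);
        callback(sender, e);
    }
}

The wiring is slightly circular, since the unsubscribe action needs to refer back to the wrapper's Handler:

SelfUnsubscribingWeakEventHandler<PropertyChangedEventArgs> weak = null;
weak = new SelfUnsubscribingWeakEventHandler<PropertyChangedEventArgs>(Alarm_Beeped, () => alarm.Beeped -= weak.Handler);
alarm.Beeped += weak.Handler;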

As a pattern, there's a lot of flexibility and choice available when implementing the Model-View-ViewModel pattern. However, no matter how you go about it, there are a few things you'll have to do:

  • Instantiate the view
  • Instantiate the view model
  • Connect the view to the view model, so that you can bind to it

There are many ways to accomplish this, and different resources on the pattern show different ways. This page discusses some of those approaches, and attempts to give them a name.

I would love to know if you have seen alternative patterns in the wild, and any pros/cons you have experienced using these approaches.

Option 1: Internal creation:

This is the approach I generally start with when introducing the pattern. In this case, the VM is just a private "implementation detail" of the view, while still being testable.

public CalculatorView()
{
    InitializeComponent();
    DataContext = new CalculatorViewModel(); 
}

Option 2: ViewModel as a dependency:

This is what I usually evolve the first example into, so that we can start to talk about DI and the use of containers.

public CalculatorView(CalculatorViewModel viewModel)
{
    InitializeComponent();
    DataContext = viewModel;
}

Then when you navigate to the view (note: you'd probably want to use an IOC container to instantiate these):

var viewModel = new CalculatorViewModel();
var view = new CalculatorView(viewModel);

Option 3: External creation and assignment:

In this approach, the View doesn't even know how its DataContext will be set - our navigation code "peers in" to the view:

var view = new CalculatorView();
var viewModel = new CalculatorViewModel();
view.DataContext = viewModel;

Option 4: ViewModel as a XAML property value:

Instead of code, some people like to use XAML to create and assign the view model:

<UserControl ...>
    <UserControl.DataContext>
        <local:CalculatorViewModel />
    </UserControl.DataContext>

Some choose to just use this at design time, and replace it at runtime, using one of the options above to override the DataContext.

Option 5: ViewModel as a XAML resource:

Some eschew DataContext altogether, and go with a resource. This approach worked very well with earlier versions of Expression Blend.

<UserControl ...>
    <UserControl.Resources>
        <local:CalculatorViewModel x:Key="Model" />
    </UserControl.Resources>

    <TextBox Text="{Binding Source={StaticResource Model}, Path=...}" />

This is also sometimes replaced at runtime using options 1-3, for example:

public CalculatorView(CalculatorViewModel viewModel)
{
    InitializeComponent();
    Resources["Model"] = viewModel;
}

Option 6: A XAML View Model Locator:

Rather than constructing the view model, some use a locator to resolve the ViewModel, while still allowing it to be used as a resource (and thus get a nice design experience). Others use ObjectDataProvider for a similar purpose. This approach has been popularized by MVVM Light:

<UserControl ...>
    <UserControl.Resources>
        <ViewModelLocator x:Key="ViewModelLocator"/>
    </UserControl.Resources>

    <TextBox 
        Text="{Binding Source={DynamicResource ViewModelLocator}, Path=CalculatorViewModel...}" 
        />

Option 7: DataTemplate as views

Some lunatics don't use a real view at all, but instead just use a DataTemplate. These people should not be allowed near sharp objects :-)

<DataTemplate DataType="{x:Type local:CalculatorViewModel}">
    <... />
</DataTemplate>

<!-- Because of the DataType, this will automatically select the template above -->
<ContentPresenter Content="{Binding Path=Model}" />

Option 8: Data Template and View

Similar to 7, this approach uses a data template to select the appropriate view for a given view model, but the view still has its own class. Thanks to Marek, Ian and Daniel Spruce (see comments below) for pointing out this alternative:

<DataTemplate DataType="{x:Type ViewModels:CalculatorViewModel}">
    <Views:CalculatorView />
</DataTemplate>

Personally, I tend to use Option 2 (ViewModel as dependency) if I'm not using a framework, otherwise Option 3 (External creation and assignment). I either forgo designer support or rely on tools such as the newer Blend sample data support, so the other approaches aren't too useful to me. Magellan uses option 3 by default.

Which approach do you use? Do you do something different? Care to share a sample?

Update: I also posted this to the WPF Disciples list - you can see what some of the other disciples had to say here.

If you have been following the Magellan change log, you might have seen some pretty big changes go through recently. I'm preparing for a Magellan 2.0 preview release with some pretty significant features, which also require some design changes.

Rethinking Magellan

Previously, Magellan was all about the MVC pattern, taking the learnings from ASP.NET MVC and applying them to WPF. Magellan 2.0 builds on that, adding a new layer that makes Magellan a general navigation framework rather than just an MVC framework.

The following diagram gives an idea of the major components of Magellan in 2.0:

Magellan 2.0 component diagram

Routing takes center stage in Magellan 2.0, with each route registered against a route handler. This allows you to associate different handlers with different routes. For example:

var routes = new RouteCollection();
routes.Register("R1", "patients/{action}", new ControllerRouteHandler(controllerFactory));
routes.Register("R2", "message/{message}", new MessageBoxRouteHandler());

This code creates two routes, each with a different route handler - one using Magellan's MVC support, the other using a custom route handler. Route handlers look something like this:

public class MessageBoxRouteHandler : IRouteHandler
{
    public void ProcessRequest(NavigationRequest request)
    { 
        var message = request.RouteValues["message"];
        MessageBox.Show(message);
    }
}

A route collection is then associated with a Navigator Factory. The navigator factory creates and manages navigators associated with frames. For example:

var navigation = new NavigatorFactory(routes);
var navigator = navigation.For(myFrame);

Navigators can be created for:

  1. Frame controls
  2. Plain old ContentControls
  3. Anything implementing INavigationService

A navigator factory is used to create a navigator for a frame - think of it the way NHibernate's ISessionFactory creates an ISession for each unit of work. The navigator then controls all navigation within that frame.

In a multi-tabbed navigation application, you would generally have one route collection, one navigator factory, and many navigators - one per frame.

A key concept is that Navigators only know how to resolve a navigation request to a route, and execute the route handler. They don't know anything about MVC, MVVM, or any other patterns - they just know about IRouteHandlers.

This is nice, because it decouples three different concepts:

  1. Frame management
  2. Request->route matching
  3. Presentation patterns

Navigation-aware MVVM

Magellan 2.0 will include full MVVM support. Previously, you could use external MVVM frameworks with Magellan, but only if you used them alongside MVC. Now, you'll be able to use MVVM without MVC, and I'll include an MVVM framework out of the box (though you can still use your own).

For example, the following routes use MVVM:

var routes = new RouteCollection();
routes.Register("R1", "patients/list", new ViewModelRouteHandler<ListPatientModel, ListPatientView>());
routes.Register("R2", "patients/edit/{patientId}", new ViewModelRouteHandler<EditPatientModel, EditPatientView>());

var navigation = new NavigatorFactory(routes);
var navigator = navigation.For(MainWindow);     // MainWindow is a ContentControl
navigator.Navigate("patients/list");            // Shows the ListPatientView with view model

The ViewModelRouteHandler will resolve the view and view model, and set the model as the data context. Route parameters will be passed as constructor arguments to a view.

If the view or view model implements INavigationAware, it will be given the navigator that it was loaded within. For example, our List Patient view model might navigate to Edit Patient:

public class ListPatientModel : ViewModel, INavigationAware
{
    public ListPatientModel() 
    {
        Edit = new RelayCommand<PatientRecord>(EditExecuted);
    }

    public ICommand Edit { get; private set; }

    public INavigator Navigator { get; set; }

    private void EditExecuted(PatientRecord record)
    {
        Navigator.Navigate(patientId => record.Id);
    }
}
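
Under the covers, a view model route handler conceptually does something like the sketch below. This is my simplification rather than Magellan's actual code: I've assumed NavigationRequest exposes the navigator executing the request, and fed the route values to the view model's constructor as one place they could go.

public class SketchViewModelRouteHandler<TViewModel, TView> : IRouteHandler
    where TView : FrameworkElement, new()
{
    public void ProcessRequest(NavigationRequest request)
    {
        // Create the view model, matching route values such as {patientId}
        // to constructor parameters by name (one constructor, for brevity)
        var constructor = typeof(TViewModel).GetConstructors().First();
        var arguments = constructor.GetParameters()
            .Select(p => request.RouteValues[p.Name])
            .ToArray();
        var viewModel = (TViewModel)constructor.Invoke(arguments);

        // Create the view and make the view model its DataContext
        var view = new TView { DataContext = viewModel };

        // Hand the navigator to navigation-aware view models
        var aware = viewModel as INavigationAware;
        if (aware != null)
        {
            aware.Navigator = request.Navigator;   // assumed property
        }

        // The view would then be presented in the frame or ContentControl
        // that the navigator manages (that part is internal to Magellan)
    }
}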

Summary

Magellan 2.0 will support the following combinations of usage scenarios:

  1. Navigation containers:
    1. You want to build page/frame based applications, with back/forward capability
    2. You just want a place to put views, without back/forward support (ContentControl)
  2. Navigation frames:
    1. You just want one frame of navigation
    2. You have multiple frames of navigation that can be opened or closed dynamically (tabbed browsing)
  3. Presentation patterns:
    1. You want to use an ASP.NET-like MVC framework
    2. You want to use an ASP.NET-like MVC framework, with a ViewModel for each view
    3. You want to use MVVM without MVC
    4. You want to use MVVM without MVC, and you have your own/a third party MVVM framework
  4. Platforms
    1. You want to use WPF
    2. You want to use Silverlight

I am very interested in what you think of these plans, and whether you like the new design for navigation. Let me know what you think in the comments :)

Recently I worked with a team to design a Silverlight client that consumes some WCF services. Most of the talk about services in the Silverlight space is about WCF RIA Services and how we can drag-and-drop our way to SOA. That wasn't a suitable approach for this project.

When designing the services, a trend that we found was that a single service might return a message containing two categories of data:

  • The real data we wanted
  • Metadata, related to the real data, that was needed purely for client purposes

Examples of real data included the name and salutation of a patient, or the results of an investigation.

Metadata included whether the requesting user has seen that record before, whether the requesting user would have permission to view the details of the full record or just the summary, whether the result was safe to be cached, and so on. It was information about the data we were receiving, distinct from the data itself - much like the way an HTTP response includes headers and content.

Normally, this kind of data might appear as headers in the message contract. But since the metadata was related to individual elements within the response, and not the response as a whole, this wasn't much of an option.

Annotated Objects

The eventual design was to use a system of annotations. The idea was that any DTO in our message contracts could be "tagged" with this additional metadata.

The annotated object model

The DTOs don't know what the annotations are for, how they got them, or why they are carrying them. But they faithfully carry them across the wire to the client, ready to be consumed. It is much like the attached dependency property design in WPF and Silverlight.
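
Roughly, the model looks like this - my reconstruction from the usage shown below, with the real serialization details elided:

[CollectionDataContract]
public class AnnotationCollection : List<object>
{
    // Returns the first annotation of a given type, or null if the object
    // wasn't tagged with one
    public T Get<T>() where T : class
    {
        return this.OfType<T>().FirstOrDefault();
    }
}

[DataContract]
public abstract class Annotated
{
    private AnnotationCollection _annotations = new AnnotationCollection();

    // DTOs carry their annotations without knowing what they mean
    [DataMember]
    public AnnotationCollection Annotations
    {
        get { return _annotations; }
        set { _annotations = value; }
    }
}

// An example annotation
[DataContract]
public class PreviouslyViewed
{
    public PreviouslyViewed(bool seenByThisUser)
    {
        SeenByThisUser = seenByThisUser;
    }

    [DataMember]
    public bool SeenByThisUser { get; private set; }
}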

Applying Annotations

The metadata is often sourced from a different location than the real data. Patient records might come from our health system, but information about whether a doctor has seen this patient might come from an auditing system. We might source patient data using many different queries, but the way we apply auditing data is much the same.

Behind our services, the intention is to use a Content Enricher design. We can write different services to fetch patient information in many different shapes as needed, and the content enricher will be automatically applied at the service boundary to apply the annotations.

The content enricher would need to visit every object in the DTO object graph, and add an annotation. Here's how such a content enricher might be implemented:

public class PreviouslyViewedPatientsContentEnricher : IContentEnricher
{
    // Hypothetical service representing the auditing system
    private readonly IAuditService _auditService;

    public PreviouslyViewedPatientsContentEnricher(IAuditService auditService)
    {
        _auditService = auditService;
    }

    public object Enrich(object message)
    {
        var patients = new List<PatientReference>();

        // Find all patient references in the object graph
        var messageVisitor = new MessageVisitor();
        messageVisitor.OnEncountering<PatientReference>(p => patients.Add(p));
        messageVisitor.Visit(message);

        // Enrich the patients by tagging them with a previously viewed annotation
        foreach (var patient in patients)
        {
            // Ask the (hypothetical) auditing service whether this patient
            // has been viewed before
            var viewed = _auditService.HasViewed(patient);
            patient.Annotations.Add(new PreviouslyViewed(viewed));
        }

        return message;
    }
}

A pipeline of content enrichers could be associated with each service. This content enricher pipeline could be applied in a few different ways:

  • Using Policy Injection or any other form of aspect oriented programming
  • Using a WCF extension point, such as IDispatchMessageFormatter.

This decouples the acquisition of the metadata from the acquisition of the real data.
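
The pipeline itself can be very simple - a sketch, with enrichers run in registration order before the response is serialized:

public class ContentEnricherPipeline
{
    private readonly List<IContentEnricher> _enrichers = new List<IContentEnricher>();

    public ContentEnricherPipeline Add(IContentEnricher enricher)
    {
        _enrichers.Add(enricher);
        return this;
    }

    // Each enricher gets a chance to visit and tag the outgoing message
    public object Enrich(object message)
    {
        foreach (var enricher in _enrichers)
        {
            message = enricher.Enrich(message);
        }
        return message;
    }
}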

Consuming Annotations

From the client, we can use the Annotations property on any Annotated object to check if it has an annotation we are interested in. Our Silverlight application might use some Blend Behaviors to check for a specific annotation and apply a visual state group. The styles and templates for a UI element, such as a Hyperlink, could then make use of those visual state groups.

The behavior below checks for the PreviouslyViewed annotation:

public class SeenBeforeBehavior : Behavior<Control>
{
    public static readonly DependencyProperty SeenBeforeProperty = DependencyProperty.RegisterAttached("SeenBefore", typeof(bool), typeof(SeenBeforeBehavior), new PropertyMetadata(false));

    protected override void OnAttached()
    {
        base.OnAttached();

        // Only controls bound to an annotated DTO are interesting
        var annotated = AssociatedObject.DataContext as Annotated;
        if (annotated == null)
            return;

        var previouslyViewed = annotated.Annotations.Get<PreviouslyViewed>();
        if (previouslyViewed == null)
            return;

        AssociatedObject.SetValue(SeenBeforeProperty, previouslyViewed.SeenByThisUser);
    }
}

From WPF, we can then use a simple style trigger to check for the attached property and style our controls accordingly. Or from Silverlight, I'm sure we could do it with the Visual State Manager with enough lines of XAML :)

Summary

In this post I outlined a design for associating metadata with DTOs, using the content enricher pattern on the server to attach annotations, and Blend behaviors on the client to consume them. I'm interested in what you think of the above design and any holes or issues you can see.

I have been using Visual Studio 2010 on a pretty sophisticated WPF application since it RTW'd. Although I still hand-write 90% of my XAML, I have gotten used to the VS 2010 Cider designer view sitting above my XAML, using it to preview my XAML and to quickly navigate around it.

What has really impressed me is how resilient and reliable the designer is - I have to actively go out of my way to break it, even in some pretty complicated places. I don't find much reason to use the toolbox, property window or data features, but the designer surface is proving very useful now that it works so reliably.

My thanks to the team for surprising me by how well the 2010 designer works.