
Welcome, my name is Paul Stovell. I live in Brisbane and work on Octopus Deploy, an automated deployment tool.

Prior to founding Octopus Deploy, I worked for an investment bank in London building WPF applications, and before that I worked for Readify, an Australian .NET consulting firm. I also worked on a number of open source projects and was an active user group presenter. I was a Microsoft MVP for WPF from 2006 to 2013.


Magellan is a lightweight framework that makes it easy to build WPF navigation applications. It is inspired by the ASP.NET MVC framework. The main features are:

  • Model-View-Controller support
  • Action filters for cross-cutting concerns such as authorization and redirection
  • Blend behaviors to make navigation easy
  • Transitions between pages

Magellan was drawn from a number of samples I put together earlier this year, plus some work done on a client project.

The source download includes an "iPhone" application for demonstrating the features.

The sample iPhone application

We start with a simple project structure:

A VS2008 project with a number of folders for controllers, models and views

A controller implementation typically looks like this:

public class PhoneController : Controller
{
    public ActionResult Group(Group group)
    {
        var contacts = _contactRepository.GetContacts(group);

        Model = new GroupViewModel(group.Name, contacts);
        return Page();
    }
}

Views are XAML Page objects, and can optionally have a model. Here's an example:

View models

The idea is that upon navigation, a controller is created, the action is executed, and the view and view model are created. The view then becomes the focus of the frame. Put simply, the view and view model are stateful, and the controller is stateless.
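As a sketch of what such a stateful view model might look like (these members are my illustration, not Magellan's actual API):

```csharp
using System.Collections.Generic;
using System.Collections.ObjectModel;

// Hypothetical stand-in for the Contact entity used by the sample
public class Contact
{
    public string Name { get; set; }
}

// A sketch of the GroupViewModel created by PhoneController above -
// the members are illustrative, not Magellan's actual API
public class GroupViewModel
{
    public GroupViewModel(string groupName, IEnumerable<Contact> contacts)
    {
        GroupName = groupName;
        Contacts = new ObservableCollection<Contact>(contacts);
    }

    // State lives here: the view and view model survive after navigation,
    // while the controller that created them is thrown away
    public string GroupName { get; private set; }
    public ObservableCollection<Contact> Contacts { get; private set; }
}
```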

Navigation between views (with nice transitions) can be done either programmatically:

Navigator.For(Frame).NavigateWithTransition("Home", "Main", "ZoomOut");

Or through Blend behaviors:

Blend Navigate behavior

The framework supports the ASP.NET MVC concepts of Action Filters, Model Binders, View Engines and more - I'll cover them in a later post.


In C# we have anonymous methods. You can read about them, and how they are implemented, in this post on Raymond Chen's blog:


One of the features of anonymous methods is that they allow you to use variables that would normally be out of scope. Raymond explains that this works by generating a class under the hood, populating it with the captured variables, and then passing a method on that class. Here is an example of what C# can do:

public void Main() 
{
    var developers = new List<string>(new[] { "Woody Allen", "Bill Gates" });

    var greatDeveloperFirstName = "Bill";

    var greatDevelopers = developers.FindAll(
        delegate(string developerName) {
            // Note that I am using a variable that would be "out of scope" if 
            // this wasn't an anonymous method:
            return developerName.StartsWith(greatDeveloperFirstName);
        });
}

Since VB.NET doesn't support anonymous methods, it's also hard to pass arguments to methods that expect a Predicate(Of T) (such as List(Of T).FindAll()).

Here's a VB.NET class that you can reuse to "wrap" a Predicate(Of T) in order to pass arguments to it.

Public Delegate Function PredicateWrapperDelegate(Of T, A) _
    (ByVal item As T, ByVal argument As A) As Boolean

Public Class PredicateWrapper(Of T, A)
    Private _argument As A
    Private _wrapperDelegate As PredicateWrapperDelegate(Of T, A)

    Public Sub New(ByVal argument As A, _ 
        ByVal wrapperDelegate As PredicateWrapperDelegate(Of T, A))

        _argument = argument
        _wrapperDelegate = wrapperDelegate
    End Sub

    Private Function InnerPredicate(ByVal item As T) As Boolean
        Return _wrapperDelegate(item, _argument)
    End Function

    Public Shared Widening Operator CType( _
        ByVal wrapper As PredicateWrapper(Of T, A)) _
        As Predicate(Of T)

        Return New Predicate(Of T)(AddressOf wrapper.InnerPredicate)
    End Operator

End Class

To use this class in your VB.NET code:

Sub Main()

    Dim developers As New List(Of String)
    developers.Add("Paul Stovell")
    developers.Add("Bill Gates")

    Dim greatDeveloperFirstName As String = "Paul"

    Dim greatDevelopers As List(Of String) = developers.FindAll( _
        New PredicateWrapper(Of String, String)( _
            greatDeveloperFirstName, _
            AddressOf IsGreatDeveloper))

    For Each greatDeveloper As String In greatDevelopers
        Console.WriteLine(greatDeveloper)
    Next

End Sub

Note that you can now pass arguments to your "FindAll" predicate :)

Function IsGreatDeveloper(ByVal item As String, ByVal argument As String) As Boolean        
    Return item.StartsWith(argument)
End Function
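For comparison, in C# 3.0 the lambda syntax makes the wrapper class unnecessary, because the closure captures the argument for you:

```csharp
using System;
using System.Collections.Generic;

class Program
{
    static void Main()
    {
        var developers = new List<string> { "Paul Stovell", "Bill Gates" };
        var greatDeveloperFirstName = "Paul";

        // The lambda closes over greatDeveloperFirstName - the compiler
        // generates the capture class that the wrapper above emulates by hand
        List<string> greatDevelopers = developers.FindAll(
            d => d.StartsWith(greatDeveloperFirstName));

        foreach (var greatDeveloper in greatDevelopers)
            Console.WriteLine(greatDeveloper);
    }
}
```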

Enterprise applications typically deal with many categories of strings. Human names, reference codes, SKU identifiers, email addresses - the list is huge. There are subtle rules that apply to many of them:

  • Whitespace at the start and end of many strings should probably be ignored
  • Human names probably shouldn't contain newlines, tab characters, the percentage symbol, or 27 dashes in a row
  • For some strings, casing makes no difference when deciding equality, and sometimes it does

It's common to litter our code with these assumptions, which leads to inconsistency. Sometimes we assume that the UI will handle all of these issues, and the domain layer will simply use what it's given.

Recently I have started experimenting with creating custom strings to encapsulate a lot of these subtle things. On my blog, when you browse to a URL like /enforced-strings, instead of the page name being passed around as a string, it's passed as a PageName object. PageName supports implicit conversion operators, so it can be dealt with as a regular string too. Here is part of a unit test:

PageName expected = "hello-world";

// When cast to a PageName, each of these should be converted into the above
var logicallySameAsExpected = new string[] {
    " hello -world ",
    " hello$-world ",
    " hello $-world ",
    " hello-$-world ",
    " hello world ",
    " -   hello   world   - ",
    " -   HeLLo  WoRLD  - ",
    " -   HeLLo  %^@#@#*()[]WoRLD  - ",
    " -   HeLLo  %^@#@#*()[]WoRLD  - $%",
    "@# -   HeLLo  %^@#@#*()[]WoRLD  - $%"
};

foreach (var match in logicallySameAsExpected)
{
    var castMatch = (PageName) match;
    Assert.AreEqual(expected, castMatch);
}

The assumption I make with PageName is that while it may be instantiated with dirty input (a malformed URL, for example), I can probably infer what was meant. PageName is used throughout my domain model and even at the data access layer - in my case, I use a custom IUserType with NHibernate to treat strings from the database as page names.

To build your own enforced strings, here are the key things to consider doing:

  • Create a class to wrap a real string
  • Make it immutable, and ideally sealed
  • In the constructor, massage the input string
  • Override all of the equality operators, GetHashCode, etc., and implement IEquatable<T> and IComparable<T>
  • Override ToString (obviously)
  • Add an implicit cast operator to automatically convert from your string to real strings and back

You can see an example of this in PaulPad - first I set up a base class with most of the overloads, then I inherit from that to set up the specific string type.
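As a rough sketch of that checklist (the normalization rules below are simplified, illustrative ones - not PaulPad's actual rules):

```csharp
using System;
using System.Text.RegularExpressions;

// A simplified enforced string following the checklist above:
// immutable, sealed, normalizes in the constructor, and convertible
// to and from regular strings. The rules are illustrative only.
public sealed class PageName : IEquatable<PageName>, IComparable<PageName>
{
    private readonly string _value;

    public PageName(string input)
    {
        // Massage the input: strip anything that isn't a letter, digit or
        // space, collapse runs of whitespace to single dashes, lower-case
        var cleaned = Regex.Replace(input ?? "", @"[^A-Za-z0-9 ]", "");
        cleaned = Regex.Replace(cleaned.Trim(), @"\s+", "-");
        _value = cleaned.ToLowerInvariant();
    }

    public bool Equals(PageName other) { return other != null && _value == other._value; }
    public override bool Equals(object obj) { return Equals(obj as PageName); }
    public override int GetHashCode() { return _value.GetHashCode(); }
    public int CompareTo(PageName other) { return other == null ? 1 : string.Compare(_value, other._value, StringComparison.Ordinal); }
    public override string ToString() { return _value; }

    // Implicit conversions so a PageName can be treated as a regular string
    // (== and != operator overloads omitted for brevity)
    public static implicit operator PageName(string value) { return new PageName(value); }
    public static implicit operator string(PageName value) { return value == null ? null : value._value; }
}
```

With this in place, `PageName name = " -   HeLLo  WoRLD  - ";` normalizes to "hello-world" and compares equal to any other page name that cleans up the same way.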

Sheldon is a WPF command line control, and code to integrate it with IronPython. It's designed as a sample that demonstrates how a WPF application might be made scriptable:

A screenshot of Sheldon

This sample was created to pitch an idea to a client about enabling a macro system in their application. Users might be able to make use of functions like OpenAccount("ACME"), ExecuteJob("SalesForecast2009"), and so on. Using the Command Pattern, commands could be echoed to an Output window in the application while the user works with the UI - a handy tool for learning the command line.
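Here's a hypothetical sketch of that idea - the command and class names are invented for illustration, not taken from the client's application:

```csharp
using System;
using System.Collections.Generic;

// Hypothetical Command Pattern sketch: every UI action is a command that
// knows its script representation, so using the UI teaches the command line
public interface IAppCommand
{
    string ToScript();   // e.g. OpenAccount("ACME")
    void Execute();
}

public class OpenAccountCommand : IAppCommand
{
    private readonly string _account;
    public OpenAccountCommand(string account) { _account = account; }
    public string ToScript() { return string.Format("OpenAccount(\"{0}\")", _account); }
    public void Execute() { /* open the account in the UI */ }
}

public class CommandInvoker
{
    // Stands in for the application's Output window
    public List<string> Output = new List<string>();

    public void Invoke(IAppCommand command)
    {
        command.Execute();
        Output.Add(command.ToScript());  // echo the script form as a learning aid
    }
}
```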

The demo application shows how object models can be shared between your application and scripting environment. In C#, I set up an AutomationContext, which is made available to IronPython:

public class AutomationContext
{
    public ApplicationDefinition Application { get; set; }
    public IScriptingContext ScriptingContext { get; set; }

    public void Exit()
    {
        // body elided
    }
}

Then in IronPython, I create a friendly "API" that users can consume:

def GetWindow(name):
    return automation_context.Application.MainWindow

def Clear():
    pass   # body elided

def Exit():
    automation_context.Exit()

The ScriptingContext is an object that manages the execution and rendering of scripts, which the command line control can hook into. Multiple controls can talk to a single scripting context - here's a custom shell (I simply overrode the style and control template of the Shell control):

A custom shell style and template

You can download the code here:

I am a big fan of Rhino.Mocks, and like all the good tools that I use, I've often wondered how hard it must have been to implement. The two things that intrigued me most were the fluent interface, and where the mock objects come from.

One night I decided to find out. Before looking at the Rhino.Mocks code first though, I figured I'd have a go at implementing the bits and pieces myself, and then I could compare with the real code when I was finished. When you want to learn something, looking at someone else's code first is like reading the last page of a book first - it spoils all the fun :)

Note: this is not a useful mocking framework. I was so proud of all I'd done, but when I eventually looked at the Rhino.Mocks code, I was amazed - it does so much more than I'd even thought of doing. If anything, this process has given me a deeper respect for how mature and feature-packed Rhino.Mocks is. In fact, don't even bother reading this post - just go read Rhino.Mocks :)

Building It

The fluent interface is pretty simple. You write code like this:

MockRepository mocks = new MockRepository(); 
IAdder adder = mocks.CreateMock<IAdder>(); 
Expect.Call(adder.Add(3, 7)).Returns(10); 
Expect.Call(adder.Add(4, 7)).Returns(11); 

There are a few parts here to create:

  • MockRepository class. I figured this would just hold a List<IMockObject>, and would have the ability to tell them to "Replay". It can call out to some kind of generator to create the mock object.
  • IMockObject interface. I figured that every mock object would implement an interface which would let the mock framework talk to it. I ended up moving away from this approach slightly.
  • Expect class. This is a static class with a few methods, shouldn't be too hard.
  • Expect.Call(). This is a static method that somehow figures out what you were calling, and then returns some kind of interface that allows you to set what you expect the result should be. The big question is: how on earth does it know what you are calling?
  • IMethodCallOptions. Since you're usually invoking a method when you use Expect.Call, I figured the thing that allows you to set the expected return value and other options probably deserves this name. I wasn't sure whether it should be generic, though.

Static class or Extension Method?

As I started stubbing out the Expect class, I decided to play with extension methods. This gives me two options - the classic Rhino.Mocks syntax:

MockRepository mocks = new MockRepository(); 
IAdder adder = mocks.CreateMock<IAdder>(); 
Expect.Call(adder.Add(3, 7)).Returns(10); 
Expect.Call(adder.Add(4, 7)).Returns(11); 

Or the new extension method syntax:

MockRepository mocks = new MockRepository(); 
IAdder adder = mocks.CreateMock<IAdder>(); 
adder.Add(3, 7).ShouldReturn(10); 
adder.Add(4, 7).ShouldReturn(11); 

The extension method just calls the Expect class anyway, so if you wanted to use it with Rhino.Mocks, it wouldn't be hard. I personally find it more natural, but I'm probably biased.

Mock Object Implementations

As I started stubbing out classes, I began to have a think about where mock objects come from. I always figured they were probably generated at runtime, but I wasn't sure what that generated code looked like.

For my first try, I wasn't going to bother generating objects yet. Instead, I'd just write the mocks by hand, but in a way that they could be tracked and have expectations set.

First I created an interface which I would mock:

public interface IGstRateProvider 
{
    decimal? GetGstRate(string productType); 
    decimal DefaultGstRate { get; } 
}

I experimented with a few different approaches, before settling on something that looked like this. Note that this code is what would be automatically generated at runtime:

public class MockIGstRateProvider : IGstRateProvider 
{
    private IMockRecorder _recorder; 

    public MockIGstRateProvider(IMockRecorder recorder) 
    {
        _recorder = recorder; 
    }

    decimal? IGstRateProvider.GetGstRate(string productType) 
    {
        return _recorder.MethodCall<decimal?>(
            MethodBase.GetCurrentMethod(), productType); 
    }

    decimal IGstRateProvider.DefaultGstRate 
    {
        get
        {
            return _recorder.MethodCall<decimal>(MethodBase.GetCurrentMethod()); 
        }
    }
}
This was a bit of a change from my original approach. Instead of generated mock objects implementing an interface (IMockObject), they would instead take an IMockRecorder as a constructor parameter. This probably makes it hard to mock abstract classes, but I only cared about interfaces, so I wasn't too worried.

The IMockRecorder interface just has two methods:

public interface IMockRecorder 
{
    TReturn MethodCall<TReturn>(MethodBase methodInfo, params object[] arguments); 
    void Replay(); 
}

The first method is called from the mock object to either record a method call, or to replay it, depending on the state of the recording. The second method changes the state from recording to replaying. By doing it this way, the generated mock objects would remain pretty simple.
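To make that record/replay state machine concrete, here's a rough sketch of how such a recorder might look - this is my illustration, not the actual implementation from this post:

```csharp
using System;
using System.Collections.Generic;
using System.Reflection;

// Rough sketch of a recorder: while recording it appends expectations;
// while replaying it matches calls in order and returns configured values
public class SimpleRecorder
{
    private class Expectation
    {
        public MethodBase Method;
        public object[] Arguments;
        public object ReturnValue;
    }

    private readonly List<Expectation> _expectations = new List<Expectation>();
    private int _replayIndex;
    private bool _replaying;

    public TReturn MethodCall<TReturn>(MethodBase method, params object[] arguments)
    {
        if (!_replaying)
        {
            // Recording: remember the call; the return value is set afterwards
            _expectations.Add(new Expectation { Method = method, Arguments = arguments });
            return default(TReturn);
        }

        // Replaying: the call must match the next recorded expectation
        var expected = _expectations[_replayIndex++];
        if (expected.Method != method)
            throw new InvalidOperationException("Unexpected call: " + method.Name);
        return (TReturn)expected.ReturnValue;
    }

    // Called by the equivalent of Returns() on the last recorded expectation
    public void SetLastReturnValue(object value)
    {
        _expectations[_expectations.Count - 1].ReturnValue = value;
    }

    public void Replay() { _replaying = true; }
}
```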


As I was implementing the IMockRecorder, I started thinking about how Expect.Call must work. It must return a value, but also perform some kind of trick to know what was called, so that the method information and arguments can be recorded. It was then that a simple solution hit me.

Since I'm passing the call information to the IMockRecorder, all I have to do is have the IMockRecorder record the call and stash it somewhere the Expect.Call method can reach. Then Expect.Call can pull the information out, wrap it in the IMethodCallOptions interface, and return it. I then reasoned that since my IMockRecorder was created by the MockRepository and linked to it, there was no reason I couldn't store the last call as a property on the MockRepository. It wouldn't quite be a global variable, though it still didn't feel quite right - but since you only ever record one method call at a time, it seemed OK.

I figured Rhino.Mocks must have a smarter way, but when I took a look, it turned out quite similar: when you call the mock object method, Rhino Mocks stores that as a LastCall against the MockRepository, and the Expect.Call method pulls it back out. Phew, I wasn't too far off :)

What this meant from the Expect.Call method's perspective is that although it took a parameter, it didn't really care about the parameter - that was just a return value from the method that was being recorded (default(T) when recording). My method ended up looking like this:

public static IMethodCallOptions<TReturn> Call<TReturn>(TReturn ignored) 
{
    IMethodCall methodCall = MockRepository.Current.LastMethodCall; 
    return (MethodCall<TReturn>)methodCall; 
}

The MethodCall<TReturn> class implements IMethodCallOptions<TReturn>. IMethodCall, on the other hand, is used internally. It feels a bit clunky at this point, so I might try to tidy that up.

At this point I almost had something working. All I needed to do was to implement the MockRepository.ReplayAll() method.

My plan for the MockRepository was to store a list of all the mock objects it created. But since my mock objects weren't implementing a specific interface, I had to change it to store the IMockRecorders instead. ReplayAll() could then just loop through all the recorders and tell them to Replay(). Here's what it looks like:

public TMock CreateMock<TMock>() 
{
    IMockRecorder recorder = new MockRecorder<TMock>(); 
    _mockRecorders.Add(recorder); 

    return MockObjectFactory.CreateMock<TMock>(recorder); 
}

public void ReplayAll() 
{
    foreach (IMockRecorder mock in _mockRecorders) 
    {
        mock.Replay(); 
    }
}

At this point I was able to write a basic test case. I had to create the mock objects by hand, using the solution above, but I was able to record/replay to my heart's content:

// Setup the expectations of the mock GST rate provider 
MockRepository repository = new MockRepository(); 
IGstRateProvider provider = repository.CreateMock<IGstRateProvider>(); 

// Test the GST Calculator using the mock provider (instead of, say, the database 
// GST rate provider it would use in production). 
GstCalculator calculator = new GstCalculator(provider); 
Assert.AreEqual(2.57M, calculator.ApplyGst("Bread", 2.57M), "Bread should be exempt from GST"); 
Assert.AreEqual(4.00M, calculator.ApplyGst("Milk", 4.00M), "Milk should be exempt from GST"); 
Assert.AreEqual(2200M, calculator.ApplyGst("Laptop", 2000M), "Laptops should not be exempt from GST"); 
Assert.AreEqual(26400M, calculator.ApplyGst("Car", 24000M), "Cars should not be exempt from GST"); 

And that night, I released it to the masses on Readify Tech.


I figured I might use System.Reflection.Emit to generate the mock objects, rather than hand-writing them, but having not used it before I was tempted to just leave things as they were. That changed when Corneliu sent me his Dynamic WCF ClientProxy class, which uses Reflection.Emit to generate all the boilerplate code for WCF client method calls. Looking at his code, it didn't seem that hard at all, so tonight I gave it a go.

Since I'd written the mock objects by hand, all I needed to do was compile them, open them in Reflector, and view the IL that was generated. Then I just translated those IL statements into Reflection.Emit calls. The whole thing took about 2 hours of fiddling around, having not used the API before, and when all was said and done it totalled just over 160 lines of code. Here's an example of how a method is generated:

private void ImplementMethod(MethodInfo methodToImplement) 
{
    Type[] parameterTypes = methodToImplement.GetParameters()
        .Select(p => p.ParameterType).ToArray(); 
    MethodBuilder methodBuilder = _typeBuilder.DefineMethod(methodToImplement.Name, 
        MethodAttributes.Public | MethodAttributes.Virtual, methodToImplement.ReturnType, 
        parameterTypes); 
    ILGenerator methodILGenerator = methodBuilder.GetILGenerator(); 
    LocalBuilder resultLocalBuilder = methodILGenerator.DeclareLocal(methodToImplement.ReturnType); 
    LocalBuilder parametersLocalBuilder = methodILGenerator.DeclareLocal(typeof(object[])); 

    // Load the _recorder field and the current MethodBase
    methodILGenerator.Emit(OpCodes.Ldarg_0); 
    methodILGenerator.Emit(OpCodes.Ldfld, __recorder0); 
    methodILGenerator.Emit(OpCodes.Call, GetMethodBaseGetCurrentMethod()); 

    // Build an object[] of the arguments (argument 0 is 'this')
    methodILGenerator.Emit(OpCodes.Ldc_I4, parameterTypes.Length); 
    methodILGenerator.Emit(OpCodes.Newarr, typeof(object)); 
    methodILGenerator.Emit(OpCodes.Stloc, parametersLocalBuilder); 
    for (int parameterIndex = 0; 
         parameterIndex < parameterTypes.Length; 
         parameterIndex++) 
    {
        methodILGenerator.Emit(OpCodes.Ldloc, parametersLocalBuilder); 
        methodILGenerator.Emit(OpCodes.Ldc_I4, parameterIndex); 
        methodILGenerator.Emit(OpCodes.Ldarg, parameterIndex + 1); 
        methodILGenerator.Emit(OpCodes.Stelem_Ref); 
    }
    // (boxing of value-type arguments, the call to IMockRecorder.MethodCall<TReturn>, 
    // and the return instruction elided) 
    _typeBuilder.DefineMethodOverride(methodBuilder, methodToImplement); 
}


Doing this experiment gave me a feel for the kinds of challenges that Rhino.Mocks and TypeMock must have faced, and why certain decisions were made in their APIs. It also gave me a sense of just how much work must have gone into those frameworks. It's certainly not for the faint-hearted.

That said, the whole thing didn't take too long to build. The entire solution (with comments, blank lines, etc. included) came out to just over 800 lines, which is more than enough to go live with. I spent three hours one night, and three hours the next night. That's not a bad price for learning a few lessons:

  • Fluent interfaces are easy once you decide on what the language will look like. I suspect designing the language is half the battle.
  • Reflection.Emit is easy, just a little time consuming. Try to emit as little as possible.
  • Extension methods are a good way to make anything look nice :)
  • Don't challenge Ayende or Corneliu to a coding competition
  • Don't roll your own mocking framework
  • Trying to code things yourself before looking is a great way to test your own skills

If you're interested in checking out the code more, you can browse it using the Subversion web interface:


Can you spot the danger in the following code?

public void HandleAdd(IEnumerable<T> addedItems) 
{
    lock (_itemsLock) 
    {
        foreach (T item in addedItems) 
        {
            _items.Add(item); 
        }
    }
}
Don't see it? Consider the following:

  1. Thread A, a background thread, is synchronizing changes made in-memory with the database:
    • It already holds a lock on a class called "ChangeQueue".
    • It calls HandleAdd(), passing in a list of changes.
  2. Thread B, the UI thread, is searching for something:
    • It calls GetItem(), which acquires _itemsLock.
    • The delegate it passed to GetItem() accesses the ChangeQueue.

See the problem now?

This is a deadlock scenario - thread A holds one lock and wants another, while thread B holds the second lock but wants the first. Neither can continue, the window stops processing messages, and Windows kindly renames your application to "(Not Responding)" and gives it a pretty shade of white :)

To lock A, or to lock B: that is the question

The common advice for these scenarios is that you should always acquire locks in the same order. Indeed, that's easy to do within the same class. In the scenario above, however, I only wrote the code for one of the locks. As the author of that code, I don't even know about the other class, or that it takes locks at all. What can I do to avoid deadlocks?

I learnt this the hard way with Bindable LINQ. The code-base was littered with code like the above, until a couple of the unit tests I wrote to test threading started to fail just once every now and then. It took a while, but I eventually tracked it down to this pattern, and created my rule:

Never invoke code you don't control whilst holding a lock.

The danger is that any code supplied externally could try to gain a lock themselves, and if you already hold one, you run the risk of deadlock. Delegates passed to methods, objects implementing an interface, or even calling virtual methods on classes you wrote yourself, can spell danger. For example, the IEnumerable<T> passed into the HandleAdd() method could try to acquire a lock in the GetEnumerator() method.
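For example, here's a contrived enumerable (the class name is invented) that takes its own lock inside GetEnumerator() - exactly the kind of external code that makes holding your own lock during enumeration dangerous:

```csharp
using System.Collections;
using System.Collections.Generic;

// Contrived example: an enumerable that takes its own lock inside
// GetEnumerator(). If HandleAdd() enumerates it while holding _itemsLock,
// the two locks can be acquired in opposite orders on different threads.
public class SynchronizedChanges<T> : IEnumerable<T>
{
    private readonly object _queueLock = new object();
    private readonly List<T> _changes = new List<T>();

    public void Add(T change)
    {
        lock (_queueLock) _changes.Add(change);
    }

    public IEnumerator<T> GetEnumerator()
    {
        lock (_queueLock)   // a lock HandleAdd() never sees in its own source
        {
            // Enumerate over a copy so the lock isn't held during iteration
            return new List<T>(_changes).GetEnumerator();
        }
    }

    IEnumerator IEnumerable.GetEnumerator() { return GetEnumerator(); }
}
```

The caller's source contains no mention of _queueLock, yet enumerating the collection acquires it.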


To avoid this problem, there are three tricks I normally use:

  1. Avoid locking where possible
  2. Take snapshots of the arguments, or invoke the methods before locking
  3. Build snapshots of my internals while holding a lock, then invoke outside methods after

For example, the method above could be re-written as follows:

public void HandleAdd(IEnumerable<T> addedItems) 
{
    // Enumerate the external collection *before* taking the lock
    List<T> items = new List<T>(addedItems); 

    lock (_itemsLock) 
    {
        foreach (T item in items) 
        {
            _items.Add(item); 
        }
    }
}

While not as efficient as the original, by executing the external code outside of the locks, I've avoided any accidental deadlock caused by interaction between my code and external code. You might consider caching the snapshots for next time - then you only pay the cost when they have changed, rather than every time you iterate.

Out of interest, I couldn't find an FxCop rule for this. Anyone interested in writing one? :)

A number of people have asked whether it is possible to implement INotifyPropertyChanged without hard-coding property names as strings inside code.

For example, you would normally write:

public string FirstName
{
    get { return _firstName; }
    set
    {
        _firstName = value;
        OnPropertyChanged(new PropertyChangedEventArgs("FirstName"));
    }
}

Unfortunately the default refactoring tools won't search inside string literals, so if FirstName were renamed, you'd need to update the string manually.

One option to enforce compile-time detection would be the following:

public string FirstName
{
    get { return _firstName; }
    set
    {
        _firstName = value;
        OnPropertyChanged(Property.GetFor( () => this.FirstName ));
    }
}

It could be implemented quite simply with the following static method:

public class Property
{
    public static PropertyChangedEventArgs GetFor(Expression<Func<object>> propertyNameLambda)
    {
        MemberExpression member = propertyNameLambda.Body as MemberExpression;
        if (member != null)
            return new PropertyChangedEventArgs(member.Member.Name);
        return new PropertyChangedEventArgs("");
    }
}

Personally, I don't mind hard-coding property names too much, but it's hardly a best practice, so you may find the above useful. Just beware that it does come with some overhead at runtime.
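One caveat if you take this approach: when the property is a value type (int, DateTime, and so on), the compiler boxes it to satisfy Expression<Func<object>>, so the lambda body is a UnaryExpression (Convert) wrapping the MemberExpression rather than a MemberExpression itself. A variant that handles both cases might look like this (the class name PropertySupport is mine):

```csharp
using System;
using System.ComponentModel;
using System.Linq.Expressions;

// Variant of the Property class that also handles value-typed properties
public static class PropertySupport
{
    public static PropertyChangedEventArgs GetFor(Expression<Func<object>> propertyLambda)
    {
        Expression body = propertyLambda.Body;

        // Value-typed properties are boxed, which inserts a Convert node
        var unary = body as UnaryExpression;
        if (unary != null && unary.NodeType == ExpressionType.Convert)
            body = unary.Operand;

        var member = body as MemberExpression;
        if (member != null)
            return new PropertyChangedEventArgs(member.Member.Name);

        throw new ArgumentException("Expected a property access lambda");
    }
}
```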

When you use the Outlook 2007 search, Vista's start search, or the Search bar in Explorer, there's often a short delay between when you press a key, and when the search begins.

In WPF, we could simulate this through a series of event handlers, timers and code-behind directly on controls, but we usually want to be able to use this alongside WPF's data binding capabilities. To use WPF data binding in a delayed fashion, I created a simple markup extension which creates a binding and manages the timer delay between commits.

Here's how you can use it:

<TextBox Text="{z:DelayBinding Path=SearchText}" />

You can also set an explicit delay. By default, it uses 0.5 seconds, which felt consistent with Outlook, though I didn't spend that much time working out exactly how long Outlook waits. I did look to see if there was a SystemParameters class for something like "SearchDelay", but couldn't find one. Suggestions for a better default are welcome.

<TextBox Text="{z:DelayBinding Path=SearchText, Delay='00:00:01'}" />

Instead of creating a new type of Binding, I'm using the standard WPF Binding, but setting the UpdateSourceTrigger to Explicit. As the text changes, the timer is reset, and when it ticks the value is pushed to the source.

Delay binding - as the user types, the results do not change

After the short delay:

...but after the short delay, the results change

First, the code to the markup extension (XML-doc comments removed):

public class DelayBindingExtension : MarkupExtension
{
    public DelayBindingExtension()
    {
        Delay = TimeSpan.FromSeconds(0.5);
    }

    public DelayBindingExtension(PropertyPath path) 
        : this()
    {
        Path = path;
    }

    public IValueConverter Converter { get; set; }
    public object ConverterParameter { get; set; }
    public string ElementName { get; set; }
    public RelativeSource RelativeSource { get; set; }
    public object Source { get; set; }
    public bool ValidatesOnDataErrors { get; set; }
    public bool ValidatesOnExceptions { get; set; }
    public TimeSpan Delay { get; set; }
    public PropertyPath Path { get; set; }
    public CultureInfo ConverterCulture { get; set; }

    public override object ProvideValue(IServiceProvider serviceProvider)
    {
        var valueProvider = serviceProvider.GetService(typeof (IProvideValueTarget)) as IProvideValueTarget;
        if (valueProvider != null)
        {
            var bindingTarget = valueProvider.TargetObject as DependencyObject;
            var bindingProperty = valueProvider.TargetProperty as DependencyProperty;
            if (bindingProperty == null || bindingTarget == null)
            {
                throw new NotSupportedException(string.Format(
                    "The property '{0}' on target '{1}' is not valid for a DelayBinding. The DelayBinding target must be a DependencyObject, "
                    + "and the target property must be a DependencyProperty.", 
                    valueProvider.TargetProperty, 
                    valueProvider.TargetObject));
            }

            var binding = new Binding();
            binding.Path = Path;
            binding.Converter = Converter;
            binding.ConverterCulture = ConverterCulture;
            binding.ConverterParameter = ConverterParameter;
            if (ElementName != null) binding.ElementName = ElementName;
            if (RelativeSource != null) binding.RelativeSource = RelativeSource;
            if (Source != null) binding.Source = Source;
            binding.ValidatesOnDataErrors = ValidatesOnDataErrors;
            binding.ValidatesOnExceptions = ValidatesOnExceptions;

            return DelayBinding.SetBinding(bindingTarget, bindingProperty, Delay, binding);
        }
        return null;
    }
}

Now the DelayBinding class, which as you can see above is being instantiated by the DelayBindingExtension. You could also create it manually in code:

public class DelayBinding
{
    private readonly BindingExpressionBase _bindingExpression;
    private readonly DispatcherTimer _timer;

    protected DelayBinding(BindingExpressionBase bindingExpression, DependencyObject bindingTarget, DependencyProperty bindingTargetProperty, TimeSpan delay)
    {
        _bindingExpression = bindingExpression;

        // Subscribe to notifications for when the target property changes. This event handler will be 
        // invoked when the user types, clicks, or anything else which changes the target property
        var descriptor = DependencyPropertyDescriptor.FromProperty(bindingTargetProperty, bindingTarget.GetType());
        descriptor.AddValueChanged(bindingTarget, BindingTarget_TargetPropertyChanged);

        // Add support so that the Enter key causes an immediate commit
        var frameworkElement = bindingTarget as FrameworkElement;
        if (frameworkElement != null)
        {
            frameworkElement.KeyUp += BindingTarget_KeyUp;
        }

        // Setup the timer, but it won't be started until changes are detected
        _timer = new DispatcherTimer();
        _timer.Tick += Timer_Tick;
        _timer.Interval = delay;
    }

    private void BindingTarget_KeyUp(object sender, KeyEventArgs e)
    {
        if (e.Key != Key.Enter) return;
        _timer.Stop();
        _bindingExpression.UpdateSource();
    }

    private void BindingTarget_TargetPropertyChanged(object sender, EventArgs e)
    {
        // Restart the delay every time the target property changes
        _timer.Stop();
        _timer.Start();
    }

    private void Timer_Tick(object sender, EventArgs e)
    {
        _timer.Stop();
        _bindingExpression.UpdateSource();
    }

    public static object SetBinding(DependencyObject bindingTarget, DependencyProperty bindingTargetProperty, TimeSpan delay, Binding binding)
    {
        // Override some specific settings to enable the behavior of delay binding
        binding.Mode = BindingMode.TwoWay;
        binding.UpdateSourceTrigger = UpdateSourceTrigger.Explicit;

        // Apply and evaluate the binding
        var bindingExpression = BindingOperations.SetBinding(bindingTarget, bindingTargetProperty, binding);

        // Setup the delay timer around the binding. This object will live as long as the target element, since it subscribes to the change event, 
        // and will be garbage collected as soon as the element isn't required (e.g., when its Window closes) and the timer has stopped.
        new DelayBinding(bindingExpression, bindingTarget, bindingTargetProperty, delay);

        // Return the current value of the binding (it will have been evaluated because of the binding above)
        return bindingTarget.GetValue(bindingTargetProperty);
    }
}
I imagine this would be useful for Silverlight, but since Silverlight doesn't support custom markup extensions, and since Silverlight Bindings can't have an UpdateSourceTrigger (to set it to Explicit), I expect you would end up creating it through an attached dependency property and triggering the binding to push manually. Let me know if you write one.

One difference between XAML in C# projects and VB.NET projects is in specifying the x:Class of your XAML root element. For example, in a C# Window XAML file, the class name is fully qualified with its namespace:

<Window x:Class="BigBank.UI.Window1"
    Title="Window1" Height="300" Width="300"
    >


By contrast, the VB.NET version must leave out the namespace:

<Window x:Class="Window1"
    Title="Window1" Height="300" Width="300"
    >


If you forget to make this change, a typical error message you might encounter is "Name 'InitializeComponent' is not declared", or similar. This occurs because the generated VB.NET file that loads the XAML is placed into a different namespace to your code-behind, so the code-behind can't find its partial class which declares the method.

Where this becomes inconsistent is when you reference other namespaces from your project in the same file - there you do need to use the fully qualified namespace:

<Window x:Class="Window1"
    xmlns:me="clr-namespace:BigBank.UI.Controls" Title="Window1" Height="300" Width="300" 
    >


This issue typically comes up when converting VB.NET XAML files to C# projects or vice-versa.