Rethinking UI programming
What might the next competitor to WPF look like? How would I design one?
My overall goals would be:
- Easy to learn
- Favors composition over inheritance
- Great performance
- Encourages the use of patterns
- Cross platform and open source
I think a basic UI library would be composed of:
- A markup language that programmers would use to design user interfaces
- An object model (like a DOM) that the markup is translated into
- A renderer, which sends the scene to the graphics card
What follows are some unordered thoughts about how I'd build the next UI library.
Markup
XAML and HTML are very similar. But if you ask most developers, they'll rate HTML as being much easier to learn than XAML. Markup languages are a fine way to describe a user interface, so I want to keep that concept, but there are some things about XAML that I don't quite like:
HTML is a markup format, while XAML is a serialization format, which means a lot of the implementation details leak through. Property element syntax exists because the XAML parsers (which are really just serialization engines) would get confused and try to instantiate a ContextMenu object rather than setting the ContextMenu property. In contrast, with HTML, the interpreter would just know what ContextMenu meant in that particular context. Tags like DataTemplate would also be a thing of the past.
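To make that concrete, here is real XAML property element syntax next to the kind of markup I'd want (the second snippet is hypothetical, of course):

<!-- XAML today: property element syntax disambiguates the ContextMenu
     property from the ContextMenu type -->
<Button Content="Options">
  <Button.ContextMenu>
    <ContextMenu>
      <MenuItem Header="Copy" />
    </ContextMenu>
  </Button.ContextMenu>
</Button>

<!-- What I'd want: the interpreter knows ContextMenu is a property here -->
<Button Content="Options">
  <ContextMenu>
    <MenuItem Header="Copy" />
  </ContextMenu>
</Button>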
I would also want to avoid all of the "tax" that applies to XAML files. You know, XML namespaces, x:Class entries, and so on. I should be able to throw <strong>Hello</strong> into a .view file, hit F5, and see it.
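For reference, this is the tax a near-empty XAML file pays before it says anything useful:

<Window x:Class="MyApp.MainWindow"
        xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
        xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml">
    <TextBlock Text="Hello" />
</Window>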
My markup language would also be designed for programmers, not designers. HTML was designed to be hand-written, so it's easy to hand-write. XAML was designed for tools to emit, so hand-writing XAML is an exercise in verbosity and RSI.
Composition
In WPF, the simplest controls have a huge hierarchy of inheritance, each layer of which adds different capabilities: Button : ButtonBase : ContentControl : Control : FrameworkElement : UIElement : Visual : DependencyObject : DispatcherObject : Object.
If you open the source code (in Reflector or via the recent source code debugging support) you'll find each of those classes is made up of thousands of lines of code, if not tens of thousands. And of course most of it is internal, so you have no chance of changing any of it :)
The DependencyObject concept is one from WPF that I'd definitely keep. I think we also want to be able to write code to just new Button() and add it to a scene. I just want to cut out the middle layers of inheritance.
A button should look like this:
public class Button : DependencyObject
{
    public Button()
    {
        AddCapability(new Visual());      // can be drawn
        AddCapability(new Sizeable());    // participates in layout and sizing
        AddCapability(new Clickable());   // hit testing and click events
    }
}
Each capability would be an aspect that hooks on to the element. Aspects that are commonly found together (most controls that support hit testing probably also support MouseOver events) can be grouped into composite capabilities.
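Here's a minimal sketch of what that plumbing might look like. To be clear, ICapability, Attach, and GetCapability are names I'm inventing for illustration, not anything that exists:

using System.Collections.Generic;
using System.Linq;

// Hypothetical plumbing for capability-based composition.
public interface ICapability
{
    // Called when the capability is attached, so it can hook input
    // events, register properties, and so on.
    void Attach(DependencyObject owner);
}

public abstract class DependencyObject
{
    private readonly List<ICapability> capabilities = new List<ICapability>();

    protected void AddCapability(ICapability capability)
    {
        capabilities.Add(capability);
        capability.Attach(this);
    }

    // The framework asks "can this element do X?" rather than
    // "does this element inherit from X?".
    public T GetCapability<T>() where T : class, ICapability
    {
        return capabilities.OfType<T>().FirstOrDefault();
    }
}

The hit testing code, for example, would ask an element for its Clickable capability instead of checking whether it derives from ButtonBase.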
Performance
WPF is built on DirectX, which is a very welcome break from GDI+. But I suspect I'd go with OpenGL, just to keep things open and cross platform. I think I'd like to combine some of the concepts though - my UI stack should allow me to mix a retained-mode scene graph (like WPF) with immediate-mode rendering (like GDI+). Concepts like UI virtualization should also be supported.
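As a sketch of how the two modes might mix (ImmediateModeVisual and IDrawingContext are made-up names), an element could opt out of the retained scene by registering a draw callback that the renderer invokes every frame:

// Hypothetical: an element that draws in immediate mode while the
// rest of the scene stays retained.
public class FrameRateCounter : DependencyObject
{
    public FrameRateCounter()
    {
        // Instead of contributing retained visuals, register a
        // callback the renderer calls on every frame.
        AddCapability(new ImmediateModeVisual(Draw));
    }

    private void Draw(IDrawingContext context)
    {
        context.DrawText(0, 0, string.Format("{0:F0} fps", context.FramesPerSecond));
    }
}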
The other major area of performance I'd look at is having multiple UI threads. The low-level graphics systems are capable of this, but most UI toolkits don't expose it. Ideally, I'd be able to demarcate a branch of the scene tree as being rendered by a different thread.
For example, in my version of Outlook, each of the main panels - the folder tree, the message list, and the reading pane - could have its own rendering thread.
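In the markup language, that demarcation might look something like this (render-boundary is, again, a made-up element):

<!-- Hypothetical markup: each boundary gets its own render thread -->
<dock>
  <render-boundary><folder-tree /></render-boundary>
  <render-boundary><message-list /></render-boundary>
  <render-boundary><reading-pane /></render-boundary>
</dock>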
Binding as law
In WPF you can do this:
label.Text = customer.FullName;
In my UI toolkit, you can only do this:
Bind(label.Text, customer.FullName);
By forcing binding to be used, we change from a push model (where the logic thread can throw things at the UI at any time) to a pull model, where the UI only accepts updates when it's ready.
When property changed events are fired, we can mark a UI property as having a pending change, but that is all. When the UI rendering thread is ready, it can read the new value. This means instead of having to lock and limit to one UI thread, we can have many, since each thread reads property values when it is ready.
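A minimal sketch of that pull model, assuming a hypothetical BoundProperty<T> type that Bind would create behind the scenes:

using System;
using System.Threading;

// Hypothetical: a bound UI property that defers reads to the render thread.
public class BoundProperty<T>
{
    private readonly Func<T> source;   // e.g. () => customer.FullName
    private T cachedValue;
    private int isDirty = 1;           // start dirty so the first frame reads

    public BoundProperty(Func<T> source)
    {
        this.source = source;
    }

    // Called on the logic thread when a PropertyChanged event fires.
    // Note: no lock, no marshalling to a UI thread - just a flag.
    public void MarkDirty()
    {
        Interlocked.Exchange(ref isDirty, 1);
    }

    // Called by whichever render thread owns this part of the scene,
    // whenever it is ready to draw.
    public T GetValue()
    {
        if (Interlocked.Exchange(ref isDirty, 0) == 1)
        {
            cachedValue = source();
        }
        return cachedValue;
    }
}

Since each property is pulled independently, two branches of the scene tree can be rendered by two threads without ever coordinating with each other.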
Enforcing patterns
My UI toolkit wouldn't have the notion of code behind. Instead, it would be geared towards something that looks like Presentation Model/MVVM, and the infrastructure would make it easy to work with. For anything else, you would write a custom Capability to extend the behavior of the object.
The toolkit would also embrace dependency injection and inversion of control, and have out-of-the-box support for the mediator, navigation, and composition patterns.
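So wiring a view to its presentation model might look like this, with the container constructing both and Bind doing the rest (all hypothetical names, of course):

// Hypothetical: no code-behind; the view declares bindings against
// a presentation model that the container resolves for it.
public class CustomerView : DependencyObject
{
    public CustomerView(CustomerViewModel model)
    {
        var name = new Label();
        Bind(name.Text, model.FullName);
        AddCapability(new Visual());
    }
}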
Open source and cross platform
I think I'd write this UI toolkit on top of OpenGL, probably using C++ for the rendering portion.
C# is a fine language, but it is too closed to be a good way of writing UI code. The fact that automatic properties still don't support INotifyPropertyChanged tells me C# just isn't meant for UI coding. I'd probably consider a nice dynamic language, like Ruby, or maybe something on the JVM.
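To be concrete about that complaint: the moment a property needs change notification, the one-line automatic property balloons into this:

using System.ComponentModel;

// The boilerplate C# requires for a single observable property.
public class Customer : INotifyPropertyChanged
{
    private string fullName;

    public string FullName
    {
        get { return fullName; }
        set
        {
            fullName = value;
            OnPropertyChanged("FullName");
        }
    }

    public event PropertyChangedEventHandler PropertyChanged;

    protected void OnPropertyChanged(string propertyName)
    {
        var handler = PropertyChanged;
        if (handler != null)
            handler(this, new PropertyChangedEventArgs(propertyName));
    }
}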
Conclusion
This is the point where someone will kindly tell me that [insert Java/Linux widget toolkit] already does all of this :)