What’s the correlation between .NET 5, WinUI and MAUI (Xamarin.Forms)

Over the last couple of years there have been a couple of key developments in the .NET world. However, despite a lot of rhetoric from Microsoft about building a better developer ecosystem, the reality is that the current landscape for building apps using .NET is a mess and no amount of sugar coating is going to fix that. In this post I’m going to try to position a number of technologies in the hope that I can share where I think things are going.

UWP

If you’re thinking that I missed UWP out of the list of technologies in the title, you’d be 100% right. Microsoft hasn’t confirmed this (and in much the same way that Silverlight was never discontinued, I doubt anyone ever will) but UWP as it stands today is effectively end-of-life. I wouldn’t expect any major updates to UWP any time soon, and I doubt it will get .NET Standard 2.1 or .NET 5 support.

Let’s keep going and we’ll touch back on this point in order to discuss what the replacement is.

.NET 5

From the .NET homepage, it reads that .NET is “Free. Cross-platform. Open source. A developer platform for building all your apps.” Most of this is true but selling .NET 5 as a technology for building cross platform apps is a straight out lie, unless you include being able to run a command line utility on different operating systems. When we talk about cross platform apps we’re talking about building apps for iOS, Android, Windows etc and currently you cannot do this with .NET 5.

Ok, so why stretch the truth? Well, other than it being a good marketing angle, the reality is a bit more nuanced than simply “it’s not possible.” For example, you can use Xamarin.Forms to build apps for iOS and Android (and Windows at a stretch). Alternatively you could use the Uno Platform to build for Windows, iOS, Android, macOS, Linux and Web. However, neither of these technologies is based on .NET 5. Both are limited by the lack of .NET 5 support in Xamarin.iOS, Xamarin.Android and of course UWP.

If not .NET 5, then what’s in store for .NET 6? Well, the good news is that there are tons of great changes coming that will make building cross platform apps using .NET much nicer. Microsoft has committed to combining iOS, Android and Windows into a single SDK-style project format. They’ve also committed to aligning with .NET 6, making use of new TFMs such as net6.0-ios and net6.0-android. This will also rebrand iOS and Android support to .NET for iOS and .NET for Android. These changes alone will have a significant impact on simplifying the development of cross platform apps (although it will need Visual Studio to pull finger and finally provide better tooling for multi-targeted projects).
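To make this concrete, here’s a hypothetical sketch of what a single SDK-style project targeting multiple platforms might look like. The exact TFM names and project shape were still in flux at the time of writing, so treat this as illustrative only:

```xml
<!-- Hypothetical single-project file: one project replaces the
     separate iOS/Android/Windows head projects -->
<Project Sdk="Microsoft.NET.Sdk">
  <PropertyGroup>
    <TargetFrameworks>net6.0-android;net6.0-ios;net6.0-windows</TargetFrameworks>
    <OutputType>Exe</OutputType>
  </PropertyGroup>
</Project>
```

The appeal is that a single `TargetFrameworks` list takes the place of maintaining a separate project (and csproj quirks) per platform.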

.NET MAUI (aka Xamarin.Forms vNext)

Earlier this year the Xamarin.Forms team announced some grand plans to evolve the platform with some fundamental changes to how renderers and other plumbing works. This was also going to include adding tier one support for Windows and MacOS. This was going to be coupled with a name change to .NET MAUI to shake the industry perception that XF was just for building forms based applications. The team have documented most of these advances on the roadmap page.

Whilst it appears that progress is being made, the reality is that there is a long way to go to implement these changes. The roadmap started with an 18 month target but recently Microsoft has acknowledged that this has slipped and in fact will be delivered in a number of phases. Microsoft has committed to providing an update shortly, which should clarify the direction.

So, where does that leave Xamarin.Forms developers today? The good news is that the 5.0 release of Xamarin.Forms has a ton of fixes adding to the stability and performance of the platform. The team has committed to a migration path to .NET MAUI, so there’s no reason for you not to start, or continue, building apps with Xamarin.Forms.

However, one thing to be aware of is that there is no .NET 5 support, and until the stars align with .NET 6, Xamarin developers (whether using Xamarin.Forms or not) are going to feel a little exposed as library developers start dropping support for older .NET Standard versions (the EF Core team, for example, has already dropped .NET Standard 2.0 in favour of .NET Standard 2.1).

Windows UI (aka WinUI)

For those who have built UWP applications, you may already be familiar with the Windows UI library of controls (aka WinUI 2.x). It would be easy to confuse WinUI 2.x with WinUI 3 and assume that WinUI 3 is simply the next iteration of the controls. This is, in part, true – the controls from WinUI 2.x will be available in WinUI 3, along with a slew of new features. However, what sets WinUI 3 apart is that it is effectively an entire UI platform for building apps. Where in the past developers may have referred to themselves as UWP developers, going forward they will refer to themselves as Windows developers, which will translate to building apps using Windows UI.

Currently, there are two flavours of WinUI 3, hosted as a desktop app or a UWP app respectively. Initially you might think that the UWP flavour is the priority, considering that WinUI 2.x was a library of controls for UWP. This is not the case – see my earlier point about UWP being end of life. The reality is that WinUI for desktop is the way forward. WinUI for desktop is built on .NET 5 and will enable scenarios from WinForms and WPF, with an opt-in model for capabilities taken from UWP, like the app container (see Project Reunion).

What’s the release timeline for WinUI? According to the roadmap and recent community call, WinUI is still on track for a 2021 release. I suspect that a bunch of features (including UWP and open source) will be dropped in order to make some artificial go-live date, most likely in first half of 2021 (this is pure speculation based on where the previews are at today and hints from the team in their monthly community calls!!!).

Uno Platform

So far I’ve concluded that UWP is end of life and that WinUI is the future of Windows development but what about cross platform? Well this is where the Uno Platform kicks in – the platform today uses UWP as the definition for the cross platform APIs. In essence, the apps you build for Windows using UWP can be taken cross platform to iOS, Android, Web etc by leveraging Uno.

Since UWP is end of life, where does this leave Uno? Well, they’ve been working alongside the preview releases of WinUI to make available a WinUI-compatible build of Uno. Going forward, developers will build Windows apps using WinUI, and will be able to take them cross platform using Uno.

Where does this leave Xamarin.Forms? The reality is that the availability of WinUI and Uno doesn’t directly impact the trajectory of Xamarin.Forms. However, if you’re comparing technologies, it’s important to remember that at their core Xamarin.Forms and Uno take vastly different approaches. Xamarin.Forms builds on the look and feel of the native controls on each platform; Uno looks to have controls appear uniform across all platforms. Xamarin.Forms allows developers to control the look and feel by adjusting properties and implementing renderers; Uno unlocks the power of XAML templates to allow developers to completely change the layout of controls without impacting their behaviour.

What is consistent across Xamarin.Forms and Uno is that both technologies are reliant on Xamarin iOS/Android being updated to support .NET 6. Going forward Uno (and potentially Xamarin.Forms if they adopt WinUI for desktop) won’t be limited by the constraints of UWP.

Summary

Currently the landscape for building cross platform apps using .NET can be quite confusing. Let’s simplify it:

  • Today: If you’re working on a project that needs to release in the coming 3-6 months, you should pick between Xamarin.Forms 5 and Uno.
  • Future: For projects in the 6-12 month timeframe, you can consider being an early adopter of WinUI (with Uno) and/or .NET MAUI.

Start and Restart Windows (UWP/WinUI) Applications on Windows Startup

A while ago Windows introduced a little-known feature that allows applications to automatically restart when Windows is restarted. However, rather than just look at this feature, which Windows app developers get for free, in this post we’re going to look at different options for starting and restarting a Windows application when Windows starts/restarts.

Launch on Windows Startup

The first thing we’re going to look at is how to configure a Windows (UWP/WinUI) application to automatically start when Windows starts. This is done by adding a StartupTask to the manifest of the application (i.e. the package.appxmanifest file).

<?xml version="1.0" encoding="utf-8"?>
<Package
  xmlns="http://schemas.microsoft.com/appx/manifest/foundation/windows10"
  xmlns:mp="http://schemas.microsoft.com/appx/2014/phone/manifest"
  xmlns:uap="http://schemas.microsoft.com/appx/manifest/uap/windows10"
  xmlns:uap5="http://schemas.microsoft.com/appx/manifest/uap/windows10/5"
  IgnorableNamespaces="uap mp">
	...
	<Applications>
		<Application Id="App"
		  Executable="$targetnametoken$.exe"
		  EntryPoint="LaunchOnWindowsStart.App">
			...
			<Extensions>
				<uap5:Extension Category="windows.startupTask">
					<uap5:StartupTask
					  TaskId="LaunchOnStartupTaskId"
					  DisplayName="My Launchable App" />
				</uap5:Extension>
			</Extensions>
		</Application>
	</Applications>
	...
</Package>

The points worth noting here are:

  • An additional namespace needs to be included for the StartupTask. In this case the namespace has been imported with a prefix of uap5.
  • TaskId – This is a string that you’ll use within the app in order to access the StartUpTask.
  • DisplayName – This is the text that will appear in the list of Windows startup tasks where a user can manually enable/disable startup tasks.

Including the StartupTask extension in the manifest file simply registers a startup task for the application with Windows. By default, the startup task will be disabled but can be enabled by the user either directly using the Windows startup task list, or when prompted by the application. Let’s look at these two options.

Toggling Startup Task Via Settings

Windows currently provides two ways to access the list of registered startup tasks. The first is via the Startup tab of Task Manager (Press Ctrl+Shift+Esc to launch Task Manager).

The other option is the Startup page within the Settings app.

In either location the user can toggle any startup task between Enabled and Disabled (or On/Off in the case of the settings app).

Just to clarify, if you add the StartupTask extension into the package.appxmanifest, a startup task for your application will appear in this list, showing the DisplayName and the publisher. The startup task will be disabled by default. From this list, the user can enable/disable your startup task. If the user enables the startup task for your application, it will launch the next time Windows starts up (or restarts).

Toggling Startup Task via the Application

A more common scenario is that you’ll want to provide an interface within the application itself for the user to toggle the behaviour when Windows starts. For example Spotify provides an option under Settings to customise the behaviour of the application when the user logs into the computer (i.e. Windows startup).

The process for toggling the state (i.e. enabled or disabled) of the startup task is to first, get a reference to the startup task, and then to either request the startup task be enabled, or to disable the task. Let’s see this in code.

In our application we’re going to have a very simple ToggleButton called LaunchOnStartupToggle and for the purpose of this post we’re going to manually set the IsChecked state (Please use data binding to a view model when implementing this in an actual app!!). When the application launches and navigates to the MainPage, we’re going to retrieve a reference to the startup task and update the IsChecked state on the ToggleButton based on the state of the startup task.

protected override async void OnNavigatedTo(NavigationEventArgs e)
{
    base.OnNavigatedTo(e);
    var startup = await StartupTask.GetAsync("LaunchOnStartupTaskId");
    UpdateToggleState(startup.State);
}
private void UpdateToggleState(StartupTaskState state)
{
    LaunchOnStartupToggle.IsEnabled = true;
    switch (state)
    {
        case StartupTaskState.Enabled:
            LaunchOnStartupToggle.IsChecked = true;
            break;
        case StartupTaskState.Disabled:
        case StartupTaskState.DisabledByUser:
            LaunchOnStartupToggle.IsChecked = false;
            break;
        default:
            LaunchOnStartupToggle.IsEnabled = false;
            break;
    }
}

Note that we’re also adjusting the IsEnabled state of the ToggleButton as there are some states where the computer policy will prevent the user overriding the state of the startup task.

Now, we need to handle when the ToggleButton changes state. For this, we’re simply going to handle the Click event on the ToggleButton (and yes, alternatively we could have handled the Checked and Unchecked events). A reference to the startup task can be retrieved using the StartupTask.GetAsync method, passing in the TaskId used in the package.appxmanifest.

private async void ToggleClick(object sender, RoutedEventArgs e)
{
    await ToggleLaunchOnStartup(LaunchOnStartupToggle.IsChecked??false);
}
private async Task ToggleLaunchOnStartup(bool enable)
{
    var startup = await StartupTask.GetAsync("LaunchOnStartupTaskId");
    switch (startup.State)
    {
        case StartupTaskState.Enabled when !enable:
            startup.Disable();
            break;
        case StartupTaskState.Disabled when enable:
            var updatedState = await startup.RequestEnableAsync();
            UpdateToggleState(updatedState);
            break;
        case StartupTaskState.DisabledByUser when enable:
            await new MessageDialog("Unable to change state of startup task via the application - enable via Startup tab on Task Manager (Ctrl+Shift+Esc)").ShowAsync();
            break;
        default:
            await new MessageDialog("Unable to change state of startup task").ShowAsync();
            break;
    }
}

Enabling the startup task requires a call to RequestEnableAsync on the startup task reference. This will display a prompt for the user to choose whether to Enable or Disable the startup task for the app – note that the DisplayName set on the StartupTask in the package.appxmanifest is used in this dialog.

One important thing to note about this dialog – if the user opts to Disable the startup task, the state is changed to DisabledByUser and cannot be Enabled from within the application – calling RequestEnableAsync again will do nothing. Instead, the user should be directed to the startup tasks list in Settings or Task Manager.
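As a convenience, the app can deep-link the user straight to that list rather than just telling them where to go. This is a sketch, reusing the TaskId from earlier; ms-settings:startupapps is the URI for the Startup page in the Settings app:

```csharp
// If the user previously declined the prompt, the task is DisabledByUser
// and RequestEnableAsync is a no-op, so send them to Settings instead.
var startup = await StartupTask.GetAsync("LaunchOnStartupTaskId");
if (startup.State == StartupTaskState.DisabledByUser)
{
    await Windows.System.Launcher.LaunchUriAsync(
        new Uri("ms-settings:startupapps"));
}
```

This keeps the user in control (as Windows intends) while removing the friction of hunting through Settings or Task Manager.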

Disabling the startup task from within the application is done by calling Disable on the startup task reference. Since the startup task has been disabled by the application, it can be enabled again by calling RequestEnableAsync again and allowing the user to select the Enable option.

Automatic Restart on Windows Restart

Most applications don’t necessarily want to register a startup task and have the application launch every time Windows starts. However, what is convenient is that if the application is running when Windows has to restart, the application is launched again (and ideally can resume where the user left off). A lot of desktop applications already do this, but this option wasn’t available to Windows (UWP/WinUI) applications until relatively recently.

Under Sign-in options in the Settings app, there is an option to Restart apps. This option is disabled by default, presumably because it’s being progressively rolled out to avoid disrupting users too much.

When the Restart apps option is switched on, any Windows applications that are running when Windows restarts, or when the user signs out and back in, will get relaunched.

As a Windows application developer you don’t need to do anything in order to take advantage of this. You don’t need to include a StartupTask (as previously described). The StartupTask is only required if you want your application to be launched every time Windows is started (irrespective of whether the application was running prior to Windows being restarted).

NOT WORKING!!!!

Ok, so by now, if you’ve attempted to add a StartupTask, or played with the Restart apps option in Settings, you may well be getting frustrated because your application fails to launch – the splashscreen appears but then the application closes.

This is because the default new project template you get when creating a new Windows (UWP/WinUI) application is significantly broken. Whilst it appears to include the basics required to run an application (i.e. the Launch event), it doesn’t include the code necessary to handle application activation. Application activation, triggered by things like a StartupTask, takes an alternative code path in a Windows application and does not call the OnLaunched method in the App class (why it doesn’t, I’ve never quite understood, since the application is indeed being launched *shrugs*).

Luckily the fix for this is relatively straight forward. You can either copy the code from the OnLaunched method override into an OnActivated method override, or you can move the OnLaunched logic into a separate method, HandleActivation, that can be called by both OnLaunched and OnActivated methods.

protected override void OnLaunched(LaunchActivatedEventArgs e)
{
    HandleActivation(e);
}
protected override void OnActivated(IActivatedEventArgs args)
{
    base.OnActivated(args);
    HandleActivation(args);
}
private void HandleActivation(IActivatedEventArgs e) {
    Frame rootFrame = Window.Current.Content as Frame;

    // Do not repeat app initialization when the Window already has content,
    // just ensure that the window is active
    if (rootFrame == null)
    {
        // Create a Frame to act as the navigation context and navigate to the first page
        rootFrame = new Frame();

        rootFrame.NavigationFailed += OnNavigationFailed;

        if (e.PreviousExecutionState == ApplicationExecutionState.Terminated)
        {
            //TODO: Load state from previously suspended application
        }

        // Place the frame in the current Window
        Window.Current.Content = rootFrame;
    }

    var launch = e as LaunchActivatedEventArgs;
    if (!(launch?.PrelaunchActivated??false))
    {
        if (rootFrame.Content == null)
        {
            // When the navigation stack isn't restored navigate to the first page,
            // configuring the new page by passing required information as a navigation
            // parameter
            rootFrame.Navigate(typeof(MainPage), launch?.Arguments);
        }
        // Ensure the current window is active
        Window.Current.Activate();
    }
}

Hopefully in this post you’ve seen how easily your application can register a StartupTask. Don’t forget to handle the OnActivated method in your application.

Testing Cosmos DB in Azure DevOps Pipeline

As part of writing code to read and write data to Azure Cosmos DB I created a bunch of test cases. However, I didn’t want to have my test cases reading and writing from an actual Cosmos DB instance. For this, the Cosmos DB Emulator is perfect. That is, until you come to want to run the test cases as part of your Azure DevOps Pipeline.

Don’t Do This

Ok, so there is a preview of a Cosmos DB Emulator extension for Azure DevOps that you can install and then invoke by following the instructions, which culminates in adding the following yaml to your pipeline.

- task: azure-cosmosdb.emulator@2
  displayName: 'Run Azure Cosmos DB Emulator'

Unfortunately this doesn’t work with the latest windows host agent, giving the following error:

Error response from daemon: hcsshim::CreateComputeSystem 658b0f0e635e4c5bbdf4c5b3d5a8823da5d3b5183b7a7a10fe5386977cdccb5d: The container operating system does not match the host operating system.

This has also been documented, and left unresolved, on this GitHub issue.

Do This Instead

As someone pointed out in the GitHub issue discussing the problem with the extension, the resolution is actually quite simple. On the latest Windows host agents, the Azure Cosmos DB Emulator is already preinstalled; it just needs to be started. Simply add the following to your yaml build pipeline.

- task: PowerShell@2
  displayName: 'Starting Cosmos Emulator'
  inputs:
    targetType: 'inline'
    workingDirectory: $(Pipeline.Workspace)
    script: |
        Write-Host "Starting CosmosDB Emulator"
        Import-Module "C:/Program Files/Azure Cosmos DB Emulator/PSModules/Microsoft.Azure.CosmosDB.Emulator"
        Start-CosmosDbEmulator

And now you can access the Cosmos DB Emulator from your test cases. In my case I made sure to include the Endpoint and AuthKey (which is a static predefined key) in my .runsettings file.

<?xml version="1.0" encoding="utf-8"?>
<RunSettings>
	<TestRunParameters>
		<Parameter name="CosmosDBEmulatorEndpoint" value="https://localhost:8081" />
		<Parameter name="CosmosDBEmulatorAuthKey" value="C2y6yDjf5/R+ob0N8A7Cgv30VRDJIWEHLM+4QDU5DE2nQ9nDuVTqobD4b8mGGyPMbIZnqyMsEcaGQy67XIw/Jw==" />
	</TestRunParameters>
</RunSettings>
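The test cases can then read these values through the test framework. For example, with MSTest the TestRunParameters surface via TestContext.Properties (a sketch; the parameter names match the .runsettings above, and the database name is purely illustrative):

```csharp
[TestClass]
public class CosmosDbTests
{
    // MSTest injects the TestContext, which exposes the TestRunParameters
    public TestContext TestContext { get; set; }

    [TestMethod]
    public async Task CanCreateDatabaseAgainstEmulator()
    {
        var endpoint = (string)TestContext.Properties["CosmosDBEmulatorEndpoint"];
        var authKey = (string)TestContext.Properties["CosmosDBEmulatorAuthKey"];

        var client = new CosmosClient(endpoint, authKey);
        var response = await client.CreateDatabaseIfNotExistsAsync("testdb");
        Assert.IsNotNull(response.Database);
    }
}
```

Keeping the endpoint and key in .runsettings means the same tests run unchanged on a developer machine and on the build agent.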

Persisting Cloud Events to Cosmos DB in Azure

If you start down the path of implementing Event Sourcing, you’ll most likely come across https://cloudevents.io/ which has the tag line “A specification for describing event data in a common way”. This project seems to be well supported (take a look at the contributors list) and has language projections for a number of different languages. It also has some out of the box support for working with various event systems, such as Azure Event Grid. In this post we’re going to look at how you can simply save and retrieve a CloudEvent (the implementation of the Cloud Events spec) to Azure Cosmos DB.

Getting Started

We’ll start with a basic ASP.NET Core Web API project. Given the .NET 5 release is just around the corner, we’re going to pick the .NET 5.0 (Preview) option. We’re also going to enable the OpenAPI support – this is awesome as it not only gives us OpenAPI (aka next gen of Swagger) documentation for our API, it also gives us a neat test page that we’ll be using to send CloudEvents to our API.

We’re going to remove the default WeatherForecastController and the associated WeatherForecast class. Then we’re going to add a new controller, CloudEventsController, which has a single method, UploadCloudEvent, which will be invoked when a POST is made to the /api/cloudevents endpoint.

[ApiController]
[Route("[controller]")]
public class CloudEventsController : ControllerBase
{
    private readonly ILogger<CloudEventsController> _logger;

    public CloudEventsController(ILogger<CloudEventsController> logger)
    {
        _logger = logger;
    }

    [HttpPost]
    public async Task<ActionResult<string>> UploadCloudEvent(
        [FromBody] CloudEvent cloudEvent)
    {
        return Ok("TBD");
    }
}

At this point if we attempt to build the project, we’ll get a build error because the CloudEvent class isn’t recognised. Luckily Visual Studio can help us with this by recommending an appropriate NuGet Package.

We can build and run the project at this point and attempt to send a CloudEvent, as JSON, using the test UI.

Unfortunately what we get back is an error:

System.NotSupportedException: Deserialization of types without a parameterless constructor, a singular parameterized constructor, or a parameterized constructor annotated with 'JsonConstructorAttribute' is not supported. Type 'CloudNative.CloudEvents.CloudEvent'. Path: $ | LineNumber: 0 | BytePositionInLine: 1. ---> System.NotSupportedException: Deserialization of types without a parameterless constructor, a singular parameterized constructor, or a parameterized constructor annotated with 'JsonConstructorAttribute' is not supported. Type 'CloudNative.CloudEvents.CloudEvent'.

Luckily the C# implementation for CloudEvents also has an ASP.NET Core support package, CloudNative.CloudEvents.AspNetCore. This package includes a Json formatter, CloudEventJsonInputFormatter, which can be registered in the ConfigureServices in order to deserialise CloudEvent objects from Json.

public void ConfigureServices(IServiceCollection services)
{
    services.AddControllers(opts =>
    {
        opts.InputFormatters.Insert(0,new CloudEventJsonInputFormatter());
    });

    services.AddControllers();
    services.AddSwaggerGen(c =>
    {
        c.SwaggerDoc("v1", new OpenApiInfo { Title = "CloudEventsSample", Version = "v1" });
    });
}

At this point we’re pretty close to being able to send CloudEvents to our API. However, if we attempt to POST a CloudEvent, we’ll get the following error:

{ "type": "https://tools.ietf.org/html/rfc7231#section-6.5.1", "title": "One or more validation errors occurred.", "status": 400, "traceId": "00-67ef18df9db33e49bb9aeb0b0c717e32-87b5edd8ba995c4d-00", "errors": { "DataContentType.Name": [ "The Name field is required." ] } }

For those who’ve worked with ASP.NET model validation, this type of error will look fairly familiar. Essentially, it’s failing because we’re not setting the Name property on the DataContentType element. What’s confusing is that neither the CloudEvent class, nor any of its dependent types, has been attributed with any validation attributes (eg Required).

In .NET 5, the default validation has changed, with all non-nullable properties required by default. This is a careless change and one that’s going to cause an inordinate amount of frustration, both on new projects and on existing projects that are upgrading. Unfortunately, the ship has sailed on this one, so the best we can do is code around it. Luckily, there’s a simple fix – add the following line to your ConfigureServices method.

services.AddControllers(options => options.SuppressImplicitRequiredAttributeForNonNullableReferenceTypes = true);

At this point you can run the project and use the test interface to submit a CloudEvent as Json. Everything appears to work and the test interface returns the string “TBD” as the code suggests. However, if you set a breakpoint in the UploadCloudEvent method and inspect the cloudEvent object, you’ll see that none of the properties have the correct value.

It turns out that if you use the default options in the test UI, most of the properties of the CloudEvent are not set. However, if you select application/cloudevents+json from the content type dropdown, the CloudEvent object is correctly deserialised.

As you can see the CloudEvents has a number of standard properties, and then a nested Data property – this is where your application specific data will reside. You’ll also notice that there’s an Extensions property, which we’ll come back to later but is useful for capturing additional metadata about the event.
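For reference, a minimal structured-mode CloudEvent payload, posted with the application/cloudevents+json content type, looks something like the following. The type, source and data values here are purely illustrative:

```json
{
  "specversion": "1.0",
  "type": "com.example.order.created",
  "source": "/sample/orders",
  "id": "a234-1234-1234",
  "time": "2020-11-01T12:00:00Z",
  "datacontenttype": "application/json",
  "data": {
    "orderId": 12345
  }
}
```

Everything outside of data is a standard CloudEvents attribute; data carries the application-specific event body.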

Saving To Azure Cosmos DB

Every application has its own requirements for how to process and store events. In our case, we opted to save CloudEvents to Azure Cosmos DB. In this post I’m not providing advice as to whether this is appropriate for your application; I’m simply going to walk through how to save and retrieve CloudEvent objects to/from Azure Cosmos DB.

I’m not going to walk through the process of setting up Cosmos DB, since there’s some great tutorials in the documentation for the product. What we are going to do is to define a couple of constants that we’ll need.

private string EndpointUrl = "https://yourcosmosdb-westus.documents.azure.com:443/";
private string AuthorizationKey = "Ut5NuthyYUBXlL0bxBY.........";
private string DatabaseId = "cloudeventssample";
private string ContainerId = "cloudevents";
private string PartitionKeyPath = "/type";

The EndpointUrl and AuthorizationKey come from your instance of Cosmos DB. The DatabaseId, ContainerId and PartitionKeyPath are all specific to your application. Note, however, that the PartitionKeyPath has to match a property on the CloudEvent object. For the moment we’re going to use the Type property on the CloudEvent. However, we’ll see later how we can use an extension to define a partition key property that is better suited for this purpose.

We can then add code to the UploadCloudEvent to save the CloudEvent to a Cosmos DB container.

[HttpPost]
public async Task<ActionResult<string>> UploadCloudEvent(
    [FromBody] CloudEvent cloudEvent)
{
    var cosmosClient = new CosmosClient(EndpointUrl, AuthorizationKey);
    var databaseReq = await cosmosClient.CreateDatabaseIfNotExistsAsync(DatabaseId);
    Debug.WriteLine("Created Database: {0}", databaseReq.Database.Id);
    var containerReq = await cosmosClient.GetDatabase(DatabaseId).CreateContainerIfNotExistsAsync(ContainerId, PartitionKeyPath);
    Debug.WriteLine("Created Container: {0}", containerReq.Container.Id);

    var container = containerReq.Container;

    var response = await container.CreateItemAsync(cloudEvent);

    return Ok($"Created {response.StatusCode}");
}

As you can probably predict by now, this fails – nothing worth doing is easy! This time it fails stating:

Microsoft.Azure.Cosmos.CosmosException : Response status code does not indicate success: BadRequest (400); Substatus: 0; ActivityId: 70c79590-fdf2-4bf6-80a1-cd4af23774c5; Reason: (Message: {"Errors":["The input content is invalid because the required properties - 'id; ' - are missing"]} ActivityId: 70c79590-fdf2-4bf6-80a1-cd4af23774c5, Request URI: /apps/548fba61-21b5-4fac-a478-8b7b5a1b3640/services/81adfd47-798e-4614-9c70-64cacf31b2e7/partitions/c9dabeda-0b5b-4ab3-8156-d4c63069599d/replicas/132485574048358760p/, RequestStats: Please see CosmosDiagnostics, SDK: Windows/10.0.19042 cosmos-netstandard-sdk/3.14.0);

This is weird, considering the CloudEvent class does have an Id property, and the instance passed into the UploadCloudEvent method has a non-null string value for Id. This points to an issue with serialization: Cosmos DB requires a lowercase “id” property on each item, but the default serialization leaves the property named “Id”, so the Cosmos DB client doesn’t find the “id” it’s looking for.

The fix for this is to change the serialization options for the CosmosClient, as follows:

var options = new CosmosClientOptions
{
    SerializerOptions = new CosmosSerializationOptions { PropertyNamingPolicy = CosmosPropertyNamingPolicy.CamelCase }
};
var cosmosClient = new CosmosClient(EndpointUrl, AuthorizationKey, options);

Then, just when you thought there couldn’t possibly be anything more to saving a CloudEvent to CosmosDB, think again. Attempting to upload a CloudEvent now generates the following error:

Newtonsoft.Json.JsonSerializationException: Unable to find a constructor to use for type CloudNative.CloudEvents.CloudEvent. A class should either have a default constructor, one constructor with arguments or a constructor marked with the JsonConstructor attribute. Path 'dataContentType', line 1, position 19.

I’m sure there are other ways to resolve this issue but I went the route of handling the serialization and deserialization of the CloudEvent myself. The CloudEvents SDK already has helper methods that can do this, but they rely on reading from and writing to a stream. Luckily, the CosmosClient also has overloads that take a stream for the contents of the item being created.

The following CreateCloudEventAsync method follows a similar structure to the CreateItemAsync method, accepting the same parameters. However, the implementation uses a MemoryStream to serialize the CloudEvent before calling the CreateItemStreamAsync method on the Container. The serialization is done using the JsonEventFormatter that’s part of the CloudEvents SDK.

public static class ContainerHelpers
{
    public static async Task<ResponseMessage> CreateCloudEventAsync(
        this Container container, CloudEvent item, 
        ItemRequestOptions requestOptions = null, 
        CancellationToken cancellationToken = default)
    {
        var formatter = new JsonEventFormatter();
        var bytes = formatter.EncodeStructuredEvent(item, out _);
        using (var ms = new MemoryStream(bytes))
        {
            var createResponse = await container.CreateItemStreamAsync(ms, new PartitionKey(item.Type), requestOptions, cancellationToken);
            return createResponse;
        }
    }
}

That completes the process of saving a CloudEvent to CosmosDB.

Reading CloudEvents from CosmosDB

Unfortunately, for the same reason that we weren’t able to use the CreateItemAsync method to save a CloudEvent, we’re also prevented from using the ReadItemAsync method. Instead we have to use the ReadItemStreamAsync method and then again use the JsonEventFormatter to deserialize the CloudEvent from the returned stream.

We’ll add the ReadCloudEventAsync extension method to the ContainerHelpers class shown earlier.

public static async Task<CloudEvent> ReadCloudEventAsync(
    this Container container, 
    string id, string partitionKey, 
    ItemRequestOptions requestOptions = null, 
    CancellationToken cancellationToken = default)
{
    var responseMessage = await container.ReadItemStreamAsync(id, new PartitionKey(partitionKey), requestOptions, cancellationToken);
    if (responseMessage.StatusCode == HttpStatusCode.NotFound) throw new CosmosException("Missing", responseMessage.StatusCode, 0, null, 0);
    var formatter = new JsonEventFormatter();
    var cloudEvent = await formatter.DecodeStructuredEventAsync(responseMessage.Content, null);
    return cloudEvent;
}

At the beginning of the UploadCloudEvent method, we’ll call the ReadCloudEventAsync method to see if the CloudEvent already exists, before attempting to create it only if it doesn’t exist.

try
{
    var ce = await container.ReadCloudEventAsync(cloudEvent.Id, cloudEvent.Type);

    return Ok($"Item already exists {ce.Id}");
}
catch (CosmosException ex) when (ex.StatusCode == HttpStatusCode.NotFound)
{
    var response = await container.CreateCloudEventAsync(cloudEvent);
    return Ok($"Created {response.StatusCode}");
}

CloudEvent Extensions

Earlier I mentioned that we were using the Type property on the CloudEvent class as the partition key. For a bunch of reasons I don’t think this is a good idea. Luckily, the CloudEvents SDK supports extending the CloudEvent class, not through inheritance (which always causes issues with serialization) but through extensions which can be added to the CloudEvent. In fact, a number of extensions are already provided by the SDK. One of these is the PartitioningExtension, which makes it possible to add a partitionKey property to the CloudEvent JSON object. For example:

{
    "specversion" : "1.0",
    "type" : "com.github.pull.create",
    "source" : "https://github.com/cloudevents/spec/pull",
    "subject" : "123",
    "id" : "A234-1234-1234",
    "time" : "2018-04-05T17:31:00Z",
    "partitionKey" : "defect-123",
    "datacontenttype" : "text/xml",
    "data" : ""
}

To get this to work in our Web API project, there are a couple of things we need to do:

PartitionKeyPath

We’ll need to change the PartitionKeyPath to be /partitionKey instead of /type. Unfortunately this means you’ll need to recreate the container (or just create a new one).

private string ContainerId = "partitionedcloudevents";
private string PartitionKeyPath = "/partitionKey";

PartitionKey for Reading and Writing

Instead of passing cloudEvent.Type into the ReadCloudEventAsync method, we now need to extract the value from the PartitioningExtension.

var ce = await container.ReadCloudEventAsync(cloudEvent.Id, cloudEvent.Extension<PartitioningExtension>().PartitioningKeyValue);

Also, in the CreateCloudEventAsync method, we need to change how the PartitionKey is created:

var createResponse = await container.CreateItemStreamAsync(ms, new PartitionKey(item.Extension<PartitioningExtension>().PartitioningKeyValue), requestOptions, cancellationToken);

Use PartitioningExtension in CloudEventJsonInputFormatter

The one place where it’s not easy to add in the PartitioningExtension is with the CloudEventJsonInputFormatter, which is used to deserialize the CloudEvent that’s being sent to the /api/cloudevents endpoint. Currently, if the posted JSON includes the partitionKey property, it will be ignored and no extensions will be added to the generated CloudEvent object.

Unfortunately the CloudEventJsonInputFormatter that comes as part of the CloudEvents SDK provides no mechanism to include extensions as part of the deserialization process. Luckily, this is also quite easy to fix.

We’ll start by taking a copy of the CloudEventJsonInputFormatter class and add it to our project. To avoid issues with naming conflicts, we’ll rename our class to CloudEventWithExtensionsJsonInputFormatter.

Then we need to add support for instantiating the necessary extensions as part of the ReadRequestBodyAsync method. Here we’re simply going to use a function callback, as this seemed an easy way to generate the extensions each time the method is called – do NOT simply create a single instance of the extension, otherwise all CloudEvent objects will end up sharing the same extension instance.

public class CloudEventWithExtensionsJsonInputFormatter : TextInputFormatter
{
    private Func<ICloudEventExtension[]> cloudExtensionsFactory;
    public CloudEventWithExtensionsJsonInputFormatter(Func<ICloudEventExtension[]> extensionsFactory)
    {
        cloudExtensionsFactory = extensionsFactory;

        SupportedMediaTypes.Add(MediaTypeHeaderValue.Parse("application/json"));
        SupportedMediaTypes.Add(MediaTypeHeaderValue.Parse("application/cloudevents+json"));
        SupportedEncodings.Add(Encoding.UTF8);
        SupportedEncodings.Add(Encoding.Unicode);
    }
    public override async Task<InputFormatterResult> ReadRequestBodyAsync(InputFormatterContext context, Encoding encoding)
    {
        if (context == null)
        {
            throw new ArgumentNullException(nameof(context));
        }
        if (encoding == null)
        {
            throw new ArgumentNullException(nameof(encoding));
        }
        var request = context.HttpContext.Request;
        try
        {
            var cloudEvent = await request.ReadCloudEventAsync(cloudExtensionsFactory?.Invoke());
            return await InputFormatterResult.SuccessAsync(cloudEvent);
        }
        catch (Exception)
        {
            return await InputFormatterResult.FailureAsync();
        }
    }
    protected override bool CanReadType(Type type)
    {
        if (type == typeof(CloudEvent))
        {
            return base.CanReadType(type);
        }
        return false;
    }
}

Now we just need to update our Startup to register the CloudEventWithExtensionsJsonInputFormatter class, with the appropriate callback that returns a PartitioningExtension instance.

opts.InputFormatters.Insert(0, new CloudEventWithExtensionsJsonInputFormatter(() => new[] { new PartitioningExtension() }));
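For completeness, here’s a sketch of where that registration lives – inside the AddControllers call in ConfigureServices in Startup.cs (the surrounding code is the standard ASP.NET Core template; only the formatter registration is specific to this post):

```csharp
public void ConfigureServices(IServiceCollection services)
{
    services.AddControllers(opts =>
    {
        // Insert at index 0 so this formatter takes precedence for
        // application/cloudevents+json (and application/json) payloads
        opts.InputFormatters.Insert(0,
            new CloudEventWithExtensionsJsonInputFormatter(
                // The factory creates fresh extension instances per request
                () => new ICloudEventExtension[] { new PartitioningExtension() }));
    });
}
```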

After all that, we’re good to go – when we POST a CloudEvent with the partitionKey property, we can see that it’s included in the PartitioningExtension.

Hopefully this has given you enough information to work with CloudEvents and CosmosDB.

If there’s anything you get stuck on, or to provide feedback, please don’t hesitate to leave a comment.

Deploy Azure Bicep (ARM Templates) to Multiple Environments using Azure DevOps Pipelines

This post looks at deploying resources defined using Azure Bicep to multiple environments using Azure DevOps Pipelines.

Previously I’ve posted on developing Azure Bicep code using Visual Studio Code and on how to use an Azure DevOps Pipeline to deploy bicep code to Azure. In this post we’re going to go one step further and look at deploying resources defined using Azure Bicep to multiple environments. Our goal is to fully automate this process, so we’re going to leverage Azure DevOps Pipelines to define a build and release process.

Let’s summarise the goals:

  • Azure resources to be defined using Azure Bicep
  • Build and Release process to be defined in yaml in Azure DevOps Pipelines
  • Build process to be triggered whenever the bicep code changes
  • Build process should output the compiled ARM template as an artifact
  • Release process needs to support multiple environments
  • Each environment should use a different resource group
  • Release process should gate the release to particular environments based on a mix of branch and approval gates

You might think these goals are too hard to achieve using a single build and release process but the good news is that most of the heavy lifting is already done by Azure DevOps.

Here’s a quick summary of what we need to set up:

  1. Azure Bicep file – this will define the resource(s) that we’re going to deploy to each environment
  2. Azure DevOps Environments – this will define the different environments we’re going to deploy to. At this stage this is limited to defining the approvals and checks that will be done before code/resources are deployed to an environment
  3. Azure DevOps Variable Groups – this will define variables that are used across all build and deployment steps, as well as variables that are specific to individual environments
  4. Azure DevOps Pipeline – this will define the actual build and release process for the bicep code.

Defining Resources Using Azure Bicep

Let’s start by defining a very simple Azure Bicep (services.bicep) that defines a storage account.

/* 
******************  
    Literals
****************** 
*/

// Resource type prefixes
var storageAccountPrefix = 'st'

/* 
******************  
    Parameters
****************** 
*/

// ** General
param applicationName string 
param location string 
param env string 

/* 
******************  
    Resources
****************** 
*/

var appNameEnvLocationSuffix  = '${applicationName}${env}'

// Storage Account

var storageAccountName  = '${storageAccountPrefix}${appNameEnvLocationSuffix}' 

resource attachmentStorage 'Microsoft.Storage/storageAccounts@2019-06-01' = {
    name: storageAccountName
    location: location
    sku: {
        name: 'Standard_LRS'
        tier: 'Standard'
    }
    kind: 'StorageV2'
    properties: {
        accessTier: 'Hot'
        minimumTlsVersion: 'TLS1_2'
        supportsHttpsTrafficOnly: true
        allowBlobPublicAccess: true
        networkAcls: {
            bypass: 'AzureServices'
            defaultAction: 'Allow'
            ipRules: []
        }
    }
}

A couple of things to note about this bicep file

  • It defines a constant, storageAccountPrefix, that is used to build the full name of the generated resource. This prefix comes from the list of recommended resource-type prefixes in the Microsoft documentation
  • There are three parameters defined: applicationName, location and env. The location parameter defines the region the resource will be created in (alternatively you could use resourceGroup().location to use the same location as the resource group where the resource is being created). The applicationName and env parameters are combined with the storageAccountPrefix to define a unique name for the resource being created.
  • All three parameters are required, meaning that they will need to be supplied when deploying the generated ARM template. To simplify the process you may want to define default values for applicationName and location. It’s important that you supply the env value during deployment to ensure the resources in each resource group are unique across your subscription.
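As a sketch, that might look like the following (the default values here are illustrative only):

```bicep
// Optional: defaults are used when no value is supplied at deployment time
param applicationName string = 'myapp'
param location string = resourceGroup().location

// Required: deployment fails unless a value for env is supplied
param env string
```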

Setting Up Multiple Environments

Next we’ll define the different environments in Azure DevOps. We’ll keep things relatively simple and define three environments:

  1. Development – from the develop branch
  2. Testing – from the release branch
  3. Production – from the release branch but requiring approval

To create these environments, click on the Environments node under Pipelines from the navigation tree on the left side of the Azure DevOps portal for the project. Click the New environment button and enter a Name and Description for each environment.

Branch Control

For each environment we need to limit deployments so that only code from the appropriate branch can be deployed to that environment. To set up a branch control check for an environment, first open the environment by selecting it from the list of Environments. From the dropdown menu in the top right corner, select Approvals and checks. Next, click the Add check (+) button. Select Branch control and then click the Next button.

In the Branch control dialog we need to supply the name of the branch that we want to limit deployments from. For example for the Development environment we would restrict the Allowed branches to the develop branch (specified here as refs/heads/develop).

Note here that we’ve also included refs/tags/* in the list of Allowed branches. This is required so that we can use the bicep template from the Pipeline Templates repository. I’d love to know if there’s a way to restrict this check to only a specific repository, since adding refs/tags to the Allowed branches will mean that any tagged branch in my repository will also be approved. Let me know in the comments if you know of a workaround for this.

The Testing and Production environments are both going to be restricted to the Release branch. However, we’re also going to enforce a check on the branch to Verify branch protection.

What this means is that the Release branch will be checked to ensure branch policies are in place. For example the Release branch requires at least one reviewer and a linked work item.

Approvals

The only difference between the Testing and Production environments is that deployment to the Production environment requires a manual approval. This makes sense in most cases considering you may need to co-ordinate this with a marketing announcement or a notification to existing customers.

Setting up an approval gate starts similar to adding branch control. Open the environment and go to Approvals and checks. Click the Add check button and select Approvals. In the Approvals dialog, enter the list of users that can approve the deployment, adjust any of the other properties and then click Create.

We typically set a very short approval timeout period to avoid the scenario where multiple deployments get queued up behind each other. If we create another deployment before the first has been approved, we prefer to only have the latter deployment pushed to the Production environment. Of course, this is something your team should discuss so you can agree on the strategy you want to employ.

Environment Variable Groups

Since we’re going to be deploying the same set of resources to each of the environments, we’re going to need a way to specify build and release variables, some that are common across all stages of the pipeline, and some that are specific to each environment. To make it easy to manage the variables used in the pipeline we’re going to use variable groups which can be defined within the Library tab within Azure DevOps Pipelines.

Common Build Variables

We’ll create a variable group called Common Build Variables and we’ll add two properties, ResourceGroupLocation and AzureSubscriptionConnectionName.

As you can probably deduce, the ResourceGroupLocation is the region where all the resources will be created. For simplicity this is assumed to be the same across all environments. The AzureSubscriptionConnectionName we’ll come back to; needless to say, it’s the same connection for all stages in the pipeline.

Environment Specific Variables

For each environment we’re going to define a variable group named Common.[environment]. These variable groups will contain variables that are specific to each environment. In this case, we’re going to define the EnvironmentName, the EnvironmentCode and the ResourceGroupName.

We’ll see these variables in action shortly but it’s important to remember that any variable that needs to vary, based on which environment it’s being deployed to, should be defined in the appropriate variable group.

Build and Release Process

The build and release process is going to be defined as a yaml pipeline in Azure DevOps.

Service Connections

Before we can jump in and write some yaml, we need to set up a couple of service connections.

  • Pipeline-Templates – this is a GitHub service connection so that the build process can download the appropriate pipeline template to assist with the compilation of the bicep file.
  • Azure-Subscription – this is a link to the Azure subscription where the resource groups will be created and subsequently the resources created.

I’m not going to step through the process of creating these connections, since you can simply follow the prompts in Azure DevOps. However, it’s important to take note of the names of the service connections.

Build and Deploy Process

Here’s the full build and release pipeline, which we’ll step through in more detail below.

trigger:
  branches:
    include:
    - '*'  # must quote since "*" is a YAML reserved character; we want a string
  paths:
    include:
    - azure/services.bicep
    - pipelines/azure-services.yml
  
resources:
  repositories:
    - repository: pipelinetemplates
      type: github
      name: builttoroam/pipeline_templates
      ref: refs/tags/v0.7.0
      endpoint: Pipeline-Templates
  
pool:
  vmImage: 'windows-latest'
  
variables:
  - group: 'Common Build Variables'
  - name: application_name
    value: inspect
  - name: bicep_filepath
    value: 'azure/services.bicep'
  - name: arm_template_filepath
    value: '$(Pipeline.Workspace)/BicepArtifacts/services.json'
  
stages:
- stage: Compile_Bicep
  pool:
    vmImage: 'windows-latest'

  jobs:
  - job: Bicep
    steps:
      - template: azure/steps/bicep/bicep.yml@pipelinetemplates
        parameters:
          name: Bicep
          bicep_file_path: '$(System.DefaultWorkingDirectory)/$(bicep_filepath)'
          arm_path_variable: ArmFilePath

      - task: CopyFiles@2
        displayName: 'Copying bicep file to artifacts folder'
        inputs:
          contents: '$(Bicep.ArmFilePath)'
          targetFolder: '$(build.artifactStagingDirectory)'
          flattenFolders: true
          overWrite: true

      - task: PublishBuildArtifacts@1
        displayName: 'Publish artifacts'
        inputs:
          pathtoPublish: '$(build.artifactStagingDirectory)' 
          artifactName: 'BicepArtifacts' 
          publishLocation: Container


- template:  templates/deploy-arm.yml
  parameters:
    stage_name: 'Deploy_Development'
    depends_on: 'Compile_Bicep'
    deploy_environment: 'Development'

- template:  templates/deploy-arm.yml
  parameters:
    stage_name: 'Deploy_Testing'
    depends_on: 'Deploy_Development'
    deploy_environment: 'Testing'
  
- template:  templates/deploy-arm.yml
  parameters:
    stage_name: 'Deploy_Production'
    depends_on: 'Deploy_Testing'
    deploy_environment: 'Production'

Trigger – we’ve set up this process to kick off whenever a commit, on any branch, changes either the bicep file or the pipeline definition itself.

Resources – this process leverages the templates from PipelineTemplates, which requires the definition of a resource pointing to the appropriate tagged release in the pipeline templates github repository.

Stages – there’s one build stage, followed by three release (aka deploy) stages. The steps for the build stage leverage the bicep template from pipeline templates, in order to generate the ARM template. The ARM template is then executed in each of the environments and deployed to the appropriate resource group.

The three deploy stages all use the same template, which we’ll see in a minute, coupled with an environment specific parameter value. For example the first stage passes in ‘Development’ as the deploy_environment parameter.

Deploy Stage Template

As I mentioned, the three deployment stages are identical, except for some parameter values that are defined for each environment. The template referenced by each stage takes care of creating the resource group and then deploying the ARM template to it.

parameters:
- name: stage_name
  type: string
  default: 'Deploy_ARM_Resources'

- name: depends_on
  type: string
  default: ''

  # deploy_environment - Environment code
- name: deploy_environment
  type: string

stages:
- stage: ${{ parameters.stage_name }}
  dependsOn: ${{ parameters.depends_on }}
  variables:
  - group: 'Common.${{ parameters.deploy_environment }}'
  
  pool:
    vmImage: 'windows-latest'

  jobs:
  - deployment: 'Deploy${{ parameters.stage_name }}'
    displayName: 'Deploy ARM Resources to ${{ parameters.deploy_environment }}' 
    environment: ${{ parameters.deploy_environment }}
    strategy:
      runOnce:
        deploy:
          steps:
          - task: PowerShell@2
            name: ${{ parameters.stage_name }}
            inputs:
              targetType: 'inline'
              workingDirectory: $(Pipeline.Workspace)
              script: |
                  $envParam = '${{ parameters.deploy_environment }}'
                  Write-Host "Deployment deploy environment parameter: $envParam"

                  $envName = '$(EnvironmentName)'
                  Write-Host "Deployment environment name variable: $envName"

          - task: AzureCLI@2
            displayName: 'Create resource group - $(ResourceGroupName)'
            inputs:
              azureSubscription: $(AzureSubscriptionConnectionName)
              scriptType: ps
              scriptLocation: inlineScript
              inlineScript: |
                Write-Host "Creating RG: $(ResourceGroupName)"
                az group create -n $(ResourceGroupName) -l $(ResourceGroupLocation)
                Write-Host "Created RG: $(ResourceGroupName)"

          - task: AzureResourceGroupDeployment@2
            displayName: 'Deploying ARM template to $(ResourceGroupName)'
            inputs:
              azureSubscription: $(AzureSubscriptionConnectionName)
              action: 'Create Or Update Resource Group' 
              resourceGroupName: $(ResourceGroupName)
              location: $(ResourceGroupLocation) 
              templateLocation: 'Linked artifact'
              csmFile: '$(arm_template_filepath)' # Required when  TemplateLocation == Linked Artifact        
              overrideParameters: '-location $(ResourceGroupLocation) -env $(EnvironmentCode) -applicationName $(application_name)'

Throughout this pipeline, there are various variables referenced. The important thing to note is that the variables need to exist for each environment, or are environment independent.

The Common Build Variables group was imported in the build and release process yaml file. The ResourceGroupLocation is the only variable from this group that’s used within this template.

The variable group for each environment is imported within the deploy stage template. The environment name, which is passed in as the deploy_environment parameter, is concatenated with “Common.” in order to import the correct variable group. The imported variables can be referenced the same way as other locally defined variables.

The deployment pipeline template has three steps: The first simply outputs variables so that it’s clear what environment is being built. Then there’s a task for creating the resource group, and then lastly a task for deploying the ARM template.

Running the Pipeline End to End

In this post we’ve defined three different environments and configured Azure DevOps to have a different variable group for each of the environments. Here you can see an execution of the pipeline with the build stage, followed by the three deployment stages.

In this case note the Testing deployment failed because the resources were being deployed from the wrong branch (develop instead of release). Unfortunately because this one stage failed, the entire pipeline was marked as failed.

Building a Bicep from an ARM Template with Parameters

As I start to work with Azure Bicep I figured I’d share some of my experiences here. First up, is just a quick recap of how to generate bicep code from an ARM template.

Actually, the problem I initially started with was how to start writing bicep code, since I wasn’t really familiar with either the bicep syntax or how to define each of the resources I wanted to create. I had skimmed through the docs that the team have put together, which helped me understand the basic syntax. Then I took a look through the various examples that the team had posted. In fact, they even have an interactive playground where you can enter bicep code and have it generate the corresponding ARM template. In the top right corner there’s a list of sample templates, along with the corresponding bicep code.

Now that I’d brushed up on the syntax and had trawled through a dozen or so of the examples, I figured I was ready to write my first bicep file. Hold up… before I get into writing some bicep code, I actually decided to make sure I could build and deploy my bicep code locally (if you want to build and run your bicep code in an Azure DevOps pipeline, check out this post).

Build and Deploy Bicep Code

Here’s a quick summary of working with Visual Studio Code to write, compile and deploy your Bicep code.

  • Install latest Bicep version (v0.1.37-alpha at time of writing) – I’d recommend using the installer as this will correctly setup path variables etc
  • Install the Visual Studio Code extension – this isn’t available via the extensions marketplace, so you’ll need to download it from the release page on github. Make sure you follow the instructions to install the extension by selecting the downloaded VSIX from within Visual Studio Code (rather than double-clicking the downloaded file)
  • Launch Visual Studio Code and create your first, empty, bicep file eg myfirstbicep.bicep. Note that Visual Studio Code will recognise the file type and show “Bicep” as the language in the bottom right corner of the window.
  • Make sure you have the Azure CLI Tools extension installed for Visual Studio Code
  • Create an azcli file, eg myfirstbicep.azcli that will be used to run Azure CLI commands. Alternatively you can just enter the commands into a Powershell terminal (assuming you have the Azure CLI installed).
  • Add the following code to the azcli file – comments should explain what each line does.
# generates myfirstbicep.json (ie compiles the bicep file to ARM template)
bicep build myfirstbicep.bicep 

# create a resource group to deploy the bicep code to
# this is ignored if resource group already exists
az group create -n rgbicepexample -l westus 

# deploy the ARM template that was generated from bicep file
az deployment group create -f myfirstbicep.json -g rgbicepexample

# delete the resource group
# this will prompt to confirm deletion
az group delete -n rgbicepexample
  • Execute each line in the azcli file by right-clicking and selecting Run Line in Terminal

If you run each of the lines you should see output in the terminal confirming each step.

In this scenario I ran the final command to clean up the resource group. If you’re in the process of writing your bicep code, chances are that you’ll simply create the resource group once and then keep compiling the bicep code to the ARM template (first command in the azcli file) and deploying the updated ARM template.

First Bicep Resource

Now that we have the tooling setup so that we can compile and deploy our bicep file, it’s time to dig in and start to create resources. Despite having some examples to follow, I was still a bit bewildered by not knowing what options I needed to include for any given resource. I decided to take a rather pragmatic approach by using the Azure portal to generate resources, export the ARM template and then convert that to bicep code. The last step, as you’ll see, is very manual and repetitive – hopefully this will get easier when this GitHub issue is addressed, providing us with a tool to at least automatically generate bicep code from an ARM template.

Let’s step through this process and I’ll point out a couple of things along the way. I’ll create a Storage Account that can be deployed to a resource group. This should be enough to give you a flavour for the approach.

  • Start by invoking the “az group create” command in Visual Studio Code to make sure you have a resource group to work with. Alternatively you can work with any existing resource group if you’d prefer (I tend to avoid doing this to ensure I don’t accidentally overwrite, or delete, other resources I might have)
  • Head to the Azure Portal and open the resource group. Click the Add button to launch the New resource wizard.
  • Search for Storage Account and provide the necessary details to create a Storage Account.
  • I’m not going to go through the various settings here but once you get to the Review + Create tab, you’ll see that there is a link at the bottom to Download a template for automation.
  • When you click through to the template, you’ll see that it’s nicely presented with a tree view to allow for easy navigation around the ARM template
  • Rather than simply copying the entire ARM template into our bicep file (and then having to change the syntax from ARM to Bicep), we’re going to do this in steps. We’re going to start with the parameters but instead of using the Parameters in the ARM template, we’re going to grab the json from the Parameters tab.
{
    "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentParameters.json#",
    "contentVersion": "1.0.0.0",
    "parameters": {
        "location": {
            "value": "westus2"
        },
        "storageAccountName": {
            "value": "stbicepexample"
        },
        "accountType": {
            "value": "Standard_RAGRS"
        },
        "kind": {
            "value": "StorageV2"
        },
        "accessTier": {
            "value": "Hot"
        },
        "minimumTlsVersion": {
            "value": "TLS1_2"
        },
        "supportsHttpsTrafficOnly": {
            "value": true
        },
        "allowBlobPublicAccess": {
            "value": true
        },
        "networkAclsBypass": {
            "value": "AzureServices"
        },
        "networkAclsDefaultAction": {
            "value": "Allow"
        }
    }
}
  • Copy the JSON for the parameters into the bicep file and then start trimming the bits we don’t need – remember the bicep syntax is more succinct, so in a lot of cases the change of syntax is simply a matter of removing braces and the verbose ARM template code.

From the parameters JSON, the main things we need are the parameters and their default values; everything else can go. You’ll need to insert the keyword “param” and the type for each parameter – most are obvious from the default value but if in any doubt, you can go back to the ARM template and look in the parameters list. You’ll also need to change all strings from double quotes (") to single quotes ('). What we’re left with is a very compact list of parameters, with their default values.

param location string = 'westus2'
param storageAccountName string = 'stbicepexample'
param accountType string = 'Standard_RAGRS'
param kind string = 'StorageV2'
param accessTier string = 'Hot'
param minimumTlsVersion string = 'TLS1_2'
param supportsHttpsTrafficOnly bool = true
param allowBlobPublicAccess bool = true
param networkAclsBypass string = 'AzureServices'
param networkAclsDefaultAction string = 'Allow'

By providing default values for each of the parameters, we’re making them optional – if a parameter value is provided as part of running the deployment, it will be used; otherwise the default value will be used. If you want to require a parameter to be provided as part of the deployment, simply remove the default value (eg change “param location string = 'westus2'” to just “param location string”).
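This parameter conversion is mechanical enough to script. Here’s an illustrative Python sketch (my own convenience code, not an official tool) that turns the exported deploymentParameters JSON into Bicep `param` declarations, inferring the type from each default value:

```python
import json

def arm_params_to_bicep(params_json: str) -> str:
    """Convert exported ARM deploymentParameters JSON to Bicep param lines."""
    doc = json.loads(params_json)
    lines = []
    for name, entry in doc["parameters"].items():
        value = entry["value"]
        # Check bool before int: in Python, bool is a subclass of int.
        if isinstance(value, bool):
            bicep_type, literal = "bool", "true" if value else "false"
        elif isinstance(value, int):
            bicep_type, literal = "int", str(value)
        else:
            bicep_type, literal = "string", f"'{value}'"
        lines.append(f"param {name} {bicep_type} = {literal}")
    return "\n".join(lines)

example = '''{
  "parameters": {
    "location": { "value": "westus2" },
    "supportsHttpsTrafficOnly": { "value": true }
  }
}'''
print(arm_params_to_bicep(example))
```

Running this over the parameters JSON shown earlier produces exactly the compact `param` list above.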

  • Next up is to copy across the JSON from the ARM template for the resource itself. I simply grab the resources block of JSON:
    "resources": [
        {
            "name": "[parameters('storageAccountName')]",
            "type": "Microsoft.Storage/storageAccounts",
            "apiVersion": "2019-06-01",
            "location": "[parameters('location')]",
            "properties": {
                "accessTier": "[parameters('accessTier')]",
                "minimumTlsVersion": "[parameters('minimumTlsVersion')]",
                "supportsHttpsTrafficOnly": "[parameters('supportsHttpsTrafficOnly')]",
                "allowBlobPublicAccess": "[parameters('allowBlobPublicAccess')]",
                "networkAcls": {
                    "bypass": "[parameters('networkAclsBypass')]",
                    "defaultAction": "[parameters('networkAclsDefaultAction')]",
                    "ipRules": []
                }
            },
            "dependsOn": [],
            "sku": {
                "name": "[parameters('accountType')]"
            },
            "kind": "[parameters('kind')]",
            "tags": {}
        }
    ],
  • Converting this code is a little trickier, but essentially you need to start by declaring the resource, which will look like the following, where [type] and [apiVersion] need to be replaced by the values from the ARM template.
resource myStorage '[type]@[apiVersion]' = {
    properties: {
    }
}

Essentially the resource declaration is just a key-value pair object graph. To convert the ARM template to the corresponding object graph you mainly just need to remove the quotation marks, square brackets and trailing commas. You’ll also need to replace the parameter references with a simple reference to one of the param variables we declared earlier (eg “parameters('kind')” becomes just kind). The converted resource (including the param lines we converted earlier) then looks like this:

param location string = 'westus2'
param storageAccountName string = 'stbicepexample'
param accountType string = 'Standard_RAGRS'
param kind string = 'StorageV2'
param accessTier string = 'Hot'
param minimumTlsVersion string = 'TLS1_2'
param supportsHttpsTrafficOnly bool = true
param allowBlobPublicAccess bool = true
param networkAclsBypass string = 'AzureServices'
param networkAclsDefaultAction string = 'Allow'


resource myStorage 'Microsoft.Storage/storageAccounts@2019-06-01' = {
    name: storageAccountName
    location: location
    sku: {
        name: accountType
    }
    kind: kind
    properties: {
        accessTier: accessTier
        minimumTlsVersion: minimumTlsVersion
        supportsHttpsTrafficOnly: supportsHttpsTrafficOnly
        allowBlobPublicAccess: allowBlobPublicAccess
        networkAcls: {
            bypass: networkAclsBypass
            defaultAction: networkAclsDefaultAction
        }
    }
}

This bicep code is good to go – you can compile and deploy this to your resource group. Don’t forget, once your code is finished and you’ve tested it locally, make sure you commit it to Azure DevOps and deploy it to Azure as part of a CI/CD process.

Important Note: A lot of Azure resources are pay-for-use, so you won’t rack up the dollars just by creating resources. However, there are some resources that will start to cost you as soon as they’re created. I would highly recommend deleting your development resource group whenever you’re done for the day; that way you can be sure you’re not going to continue to be charged.

Thinking Out Loud: Events, Messaging and Mvvm Navigation with XAML Frameworks

This post explores MVVM navigation further, employing the new C# 9 source generators to reduce the boilerplate code that developers have to write.

In my previous post on this topic, Thinking Out Loud: Mvvm Navigation for XAML Frameworks such as Xamarin.Forms, UWP/WinUI, WPF and Uno, I explored using events emitted by a ViewModel to drive page navigation. This post will explore the concept further, employing the new C# 9 source generators to reduce the boilerplate code that developers have to write.

Before we go on, let’s recap where we got to previously:

  • ViewModels are independent, not knowing what’s before or after them in the navigation flow of the application
  • Use events to signify when a ViewModel is complete
  • ViewModel events are converted to navigation methods at an application level

The upshot is that you can have a simple ViewModel that just raises an event to indicate that it’s complete (for example, when the user clicks a submit button on a form).

public class MainViewModel
{
    public event EventHandler ViewModelDone;
    public async Task<int> DoSomething()
    {
        var rnd = new Random().Next(1000);
        await Task.Delay(rnd);
        if (rnd % 2 == 0)
        {
            ViewModelDone?.Invoke(this, EventArgs.Empty);
        }
        ... 
    }
}

There were a couple of pieces of feedback following my previous post:

  • Use an Observable instead of an event
  • Bypass the ViewModel event completely and simply raise a message that could be handled by the application. For example a developer could attach a Behavior to a Button that would send a message to an application wide dispatcher that could determine where to navigate to based on the message type, or perhaps the parameter set.

Whilst I like the first idea of a ViewModel exposing an Observable, I think that this is an idea we’ll explore sometime in the future. Using an event is incredibly simple and gives us the clear separation we’re after. The only downside is that the add/remove handler code required for events is somewhat nasty.

The idea of using messages, and having a central dispatcher for messages, is a great idea and one that I wanted to explore. I didn’t want to change the ViewModel to have to emit a message, since again this just adds additional complexity to the ViewModel. This means that there needs to be some sort of conversion between events and messages. As you can imagine, this is simply adding more code that developers need to write in order to get everything to work.

I’ve just pointed out two areas where developers have to write unnecessary code: adding/removing event handlers, and converting between events and messages. I’ll be using the C# 9 source generators to help eliminate this excess code.
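To make the event-to-message conversion concrete, here’s an illustrative sketch of the idea in Python (the names are mine, not the library’s): a handler is attached to the ViewModel’s “done” event that republishes it as a message to a central dispatcher – exactly the kind of glue code the source generator will write for us.

```python
# Hypothetical sketch: bridging a ViewModel event to an application-wide
# message dispatcher. This is the boilerplate a generator can eliminate.
class Dispatcher:
    def __init__(self):
        self._handlers = {}

    def subscribe(self, message_type, handler):
        self._handlers.setdefault(message_type, []).append(handler)

    def publish(self, message):
        for handler in self._handlers.get(type(message), []):
            handler(message)

class CompletedMessage:
    def __init__(self, sender):
        self.sender = sender

class MainViewModel:
    # The ViewModel only knows about its own event, not about messages.
    def __init__(self):
        self.done_handlers = []

    def raise_done(self):
        for handler in self.done_handlers:
            handler(self)

def bridge_event_to_message(view_model, dispatcher, message_type):
    # The event-to-message glue a developer would otherwise hand-write.
    view_model.done_handlers.append(
        lambda sender: dispatcher.publish(message_type(sender)))

dispatcher = Dispatcher()
vm = MainViewModel()
bridge_event_to_message(vm, dispatcher, CompletedMessage)

received = []
dispatcher.subscribe(CompletedMessage, received.append)
vm.raise_done()
assert received[0].sender is vm
```

The ViewModel stays message-free; only the bridge (the generated part) knows that the event maps to a CompletedMessage.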

TL;DR

In this post I’m not going to walk through the complexities of events, messaging and code generation, because that would make for a long post. Instead, let me walk through a scenario where we’re going to add a new page, FifthPage, to our existing application (which, as you can probably guess, already has four pages). Here’s what we need:

  • Add FifthPage
  • Add a Button to MainPage that navigates to FifthPage when clicked
  • Add a Button to FifthPage that navigates back to MainPage when clicked
  • Add FifthViewModel that will be the DataContext for FifthPage
  • Add string property, Title, to FifthViewModel that returns a page title
  • Add TextBlock to FifthPage that is bound to the Title property on the FifthViewModel.

Create Page and ViewModel

The first step is to create the FifthPage and FifthViewModel. I’ve separated out the application code into different projects, so I have my view models in a project called MvvmNavigation.Core. My pages are still in the MvvmNavigation.Shared project that was created by the Uno solution template.

Whilst we’re creating these classes, we’ll add some of the basics that we’re going to need. On the FifthPage, we’ll add a TextBlock, for the Title, and a Button, to trigger navigation back to the MainPage.

<Page
    x:Class="MvvmNavigation.FifthPage"
    xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
    xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
    xmlns:d="http://schemas.microsoft.com/expression/blend/2008"
    xmlns:mc="http://schemas.openxmlformats.org/markup-compatibility/2006"
    mc:Ignorable="d"
    Background="{ThemeResource ApplicationPageBackgroundThemeBrush}">

    <StackPanel VerticalAlignment="Center"
                HorizontalAlignment="Center">
        <TextBlock Text="page title" />
        <Button Content="Go Back" />
    </StackPanel>
</Page>

In the FifthViewModel we’ll add a Title property and a simple event, FifthDone, that will indicate to the application that the FifthViewModel is done. The application will be responsible for determining how to handle this; which according to our specification, should navigate back to MainPage. We’ll also create a RaiseFifthDone method that can be called to invoke the FifthDone event.

public class FifthViewModel
{
    public event EventHandler FifthDone;

    public string Title => "Page 5";

    public void RaiseFifthDone()
    {
        FifthDone?.Invoke(this, EventArgs.Empty);
    }
}

Navigation to FifthPage

To navigate to the FifthPage we need a new Button on the MainPage that, when clicked, will raise a message, PleadTheFifthMessage, that the application will handle in order to navigate to the FifthPage. Let’s unpack this into steps:

Add Button

Add a Button to the MainPage XAML, along with the NavigationMessageAction behavior, which will raise the PleadTheFifthMessage.

<Button Content="Go To Page 5">
    <Interactivity:Interaction.Behaviors>
        <Interactions:EventTriggerBehavior EventName="Click">
            <builditbehaviors:NavigationMessageAction MessageType="localmessages:PleadTheFifthMessage" />
        </Interactions:EventTriggerBehavior>
    </Interactivity:Interaction.Behaviors>
</Button>

PleadTheFifthMessage

Add a class, PleadTheFifthMessage, that inherits from CompletedMessage.

public class PleadTheFifthMessage : CompletedMessage
{
    public PleadTheFifthMessage() : base() { }
    public PleadTheFifthMessage(object sender) : base(sender) { }
}

Map PleadTheFifthMessage to FifthViewModel

As part of the MvvmApplicationService, we need to register a navigation to the FifthViewModel for the PleadTheFifthMessage. In this case, we’re only handling the PleadTheFifthMessage for when it’s raised by the MainViewModel.

serviceRegistrations.AddSingleton<INavigationMessageRoutes>(sp =>
{
    var routes = new NavigationMessageRoutes()
        .RegisterNavigate<MainViewModel, PleadTheFifthMessage, FifthViewModel>()
        .RegisterNavigate<MainViewModel, CompletedMessage, SecondViewModel>()
	// ... omitted for brevity
        .RegisterGoBack<CloseMessage>();

    return routes;
});

FifthPage – FifthViewModel Mapping

Whilst we’ve defined a mapping from the PleadTheFifthMessage to the FifthViewModel, there needs to be a way for the application to connect the FifthViewModel to the FifthPage. Rather than relying on a naming convention, we’re going to apply an attribute to the FifthPage.

[ViewModel(typeof(FifthViewModel))]
public sealed partial class FifthPage 
{
    public FifthPage()
    {
        this.InitializeComponent();
    }
}

ViewModel Binding

So far we’ve wired up the navigation from MainPage to FifthPage. However, when arriving at FifthPage it’s clear that neither the Title nor the Button event handler has been wired up. The title could easily be wired up by creating an instance of the FifthViewModel in XAML and setting it as the DataContext.

However, this doesn’t scale well for real-world applications, where a view model may depend on any number of services (eg for fetching and/or saving data). It’s preferable to have some sort of dependency injection framework that can be used to instantiate the view model.

ViewModel Instantiation

To this end, we’re going to add a ViewModel property to our FifthPage, along with a partial method, InitViewModel (note also that we’ve added a second parameter to the ViewModel attribute). The implementation of the partial method will be provided by our code generator, which will generate the code necessary to instantiate the FifthViewModel, along with any dependent services.

[ViewModel(typeof(FifthViewModel), nameof(InitViewModel))]
public sealed partial class FifthPage 
{
    partial void InitViewModel();
    public FifthViewModel ViewModel => this.ViewModel(() => DataContext as FifthViewModel, () => InitViewModel());

    public FifthPage()
    {
        this.InitializeComponent();
    }
}

FifthViewModel Registration

Despite providing the mapping between FifthPage and FifthViewModel, there’s currently no way for the dependency injection container to create an instance of FifthViewModel, since we haven’t registered the FifthViewModel type. Rather than have the developer work out where to add the code to register the FifthViewModel type, we can simply attribute the FifthViewModel:

[Register]
public class FifthViewModel
{
    // ... 
}

Bind FifthViewModel With x:Bind

We can then update the XAML of our FifthPage to data bind both the Title property and the RaiseFifthDone method on the FifthViewModel.

<Page x:Class="MvvmNavigation.FifthPage"
      xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
      xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
      xmlns:d="http://schemas.microsoft.com/expression/blend/2008"
      xmlns:mc="http://schemas.openxmlformats.org/markup-compatibility/2006"
      mc:Ignorable="d"
      Background="{ThemeResource ApplicationPageBackgroundThemeBrush}">

    <StackPanel VerticalAlignment="Center"
                HorizontalAlignment="Center">
        <TextBlock Text="{x:Bind ViewModel.Title}" />
        <Button Content="Go Back"
                Click="{x:Bind ViewModel.RaiseFifthDone}" />
    </StackPanel>
</Page>

Navigation Back to MainPage

When we created the FifthViewModel we also created the FifthDone event, which is invoked by the RaiseFifthDone method. However, clicking the Button on the FifthPage currently appears to do nothing – it does in fact raise the FifthDone event, but nothing is listening to that event yet.

Let’s add the EventMessage attribute to the FifthDone event. In this case the attribute references the existing CloseMessage which is the message that will be dispatched when the FifthDone event is raised.

[Register]
public class FifthViewModel
{
    [EventMessage(typeof(CloseMessage))]
    public event EventHandler FifthDone;

    // ...
}

We already have a handler for the CloseMessage for all pages that will simply close the current page.

That completes all the steps necessary to add a new page, along with navigation to and from the page. It seems like a lot of steps, but actually there’s only minimal code required, and it facilitates a high degree of separation between the elements of the application.

If you want to check out the code for this example app, feel free to check out the MvvmNavigation GitHub repo. Note that this is a work in progress and that there will most likely be quite a bit of refactoring over the coming weeks, after which I’ll post about more of the details on how the various mappings work and the code generation that’s used behind the scenes.

How to Debug C# 9 Source Code Generators

Build and debug a source code generator with C# 9 and Visual Studio.

I’ve been working on a follow up to my previous post on a different take on MVVM (Thinking Out Loud: Mvvm Navigation for XAML Frameworks such as Xamarin.Forms, UWP/WinUI, WPF and Uno) and I’ve got the code to a point that I’m mostly happy with. However, there are a couple of things that are just tedious for developers to have to do – something that’s perfect for source code generation. More about that in a later post; in this post I want to cover how you can start to generate code and how you can debug the source code generation process.

Before we get started, for a more detailed set of instructions on how to get started with code generation, the dotnet team has a great post on this from earlier this year – Introducing C# Source Generators.

Let’s jump in and create a simple code generator (remember the point of this post isn’t to talk in detail about code generation, it’s to show you how to debug the generation process).

We’ll start with just a vanilla .NET Standard 2.0 class library, GenerationSample – this is the assembly where we’ll be doing the code generation. I’ve specifically picked a .NET Standard 2.0 class library to point out that even though code generation is part of the C# 9 feature set, you can still use it to inject code into existing libraries. We’ll come back to this library in a minute, once we’ve created our code generator.

Next up, we need to create a new project that will house our code generator. Again, I’ll create a .NET Standard 2.0 library, called Generators. Creating a generator only requires us to reference the appropriate NuGet packages and then create a class that implements ISourceGenerator (and has the Generator attribute).

Here’s the updated csproj file with the NuGet package references we need, as well as LangVersion set to preview:

<Project Sdk="Microsoft.NET.Sdk">
	<PropertyGroup>
		<TargetFramework>netstandard2.0</TargetFramework>
		<LangVersion>preview</LangVersion>
	</PropertyGroup>
	<ItemGroup>
		<PackageReference Include="Microsoft.CodeAnalysis.CSharp" Version="3.8.0-3.final" PrivateAssets="all" />
		<PackageReference Include="Microsoft.CodeAnalysis.Analyzers" Version="3.3.0" PrivateAssets="all" />
	</ItemGroup>
</Project>

To create the code generator we’ll add a class called DebuggableGenerator that implements the ISourceGenerator interface. Note: if you copy the code from the introductory post from April 2020 by the dotnet team, you’ll see a bunch of errors relating to the ISourceGenerator interface.

These errors can easily be fixed by clicking on the dropdown next to the error and selecting Implement Interface, and then removing the old methods – it looks like the ISourceGenerator interface has changed since the post back in April. Our code generator should look similar to the following at this point.

using Microsoft.CodeAnalysis;
using System;
using System.Diagnostics;

namespace Generators
{
    [Generator]
    public class DebuggableGenerator : ISourceGenerator
    {
        public void Execute(GeneratorExecutionContext context)
        {
            Debug.WriteLine("Execute code generator");
        }
        public void Initialize(GeneratorInitializationContext context)
        {
            Debug.WriteLine("Initialize code generator");
        }
    }
}

The last thing to do, to complete the setup, is to add a reference to the Generators library into the GenerationSample library. The csproj should now look similar to the following.

<Project Sdk="Microsoft.NET.Sdk">
	<PropertyGroup>
		<TargetFramework>netstandard2.0</TargetFramework>
		<LangVersion>preview</LangVersion>
	</PropertyGroup>
	<ItemGroup>
		<ProjectReference 
			Include="..\Generators\Generators.csproj"
			OutputItemType="Analyzer"
			ReferenceOutputAssembly="false" />
	</ItemGroup>
</Project>

A couple of things to note: We’ve set the LangVersion to preview and added a couple of additional attributes to the ProjectReference – these are required to link the code generator into the compilation process.

At this point, you’re good to start implementing your code generation. However, as pointed out by the dotnet team in the FAQ section of their post, there’s currently no built-in support for debugging. You can set a breakpoint in Visual Studio but it won’t be hit. You can write Debug.WriteLine or Console.WriteLine statements but they won’t appear in the Output tool window (not even if you set the verbosity to Diagnostic).

Luckily, there’s quite a simple hack that gives you a mostly-full-featured debugging experience. At the start of the Initialize method (you can think of this as the entry point for code generation), add code to launch the debugger.

using Microsoft.CodeAnalysis;
using System.Diagnostics;

namespace Generators
{
    [Generator]
    public class DebuggableGenerator : ISourceGenerator
    {
        public void Execute(GeneratorExecutionContext context)
        {
            Debug.WriteLine("Execute code generator");
        }
        public void Initialize(GeneratorInitializationContext context)
        {
#if DEBUG
            if (!Debugger.IsAttached)
            {
                Debugger.Launch();
            }
#endif 
            Debug.WriteLine("Initialize code generator");
        }
    }
}

To ensure this code doesn’t accidentally end up in my Release builds, I’ve wrapped it in conditional compilation. The code also checks whether the debugger is already attached; otherwise you’ll find that it continually prompts to launch a new debugging session.

Now if we force a rebuild of our GenerationSample library, we’ll see a prompt asking to specify where the code should be debugged.

From this dialog you can either select the existing instance of Visual Studio or open a new instance. My preference is to open a new instance of Visual Studio – it feels a bit too much like inception to debug the code generation in the same instance – but this comes down to whatever works for you.

Once the debugger is attached, you can step through the code, view variables, set breakpoints etc. Unfortunately it appears that Edit and Continue doesn’t work, so in order to make changes you need to stop debugging, make the changes and then rebuild the project to trigger the code generator to run again.

And there you have it – the ability to debug your code generation library using Visual Studio.

Pipeline Templates: Using Project Bicep with an Azure DevOps Pipeline to Build and Deploy an ARM Template

This post covers how you can use a .bicep file in your Azure DevOps pipeline to deploy resources to Azure via an ARM template

It’s great to see that Microsoft listens to the pain that developers and devops professionals go through in working with Azure – specifically, that ARM templates, whilst very powerful, are insanely verbose and cumbersome to work with. Hence Project Bicep, which is at its core a DSL for generating ARM templates. In this post I’m not going to go into much detail on how to work with .bicep files, but rather on how you can use a .bicep file in your Azure DevOps pipeline to deploy resources to Azure.

If you are interested in learning more about Project Bicep and the .bicep file format, here are some posts that provide some introductory material:

In order to deploy a .bicep file to Azure, you first need to use the bicep command line to generate the corresponding ARM template, which you can then deploy to Azure. Rather than having to add multiple steps to your build pipeline, wouldn’t it be nice to have a single step that simply takes a .bicep file and some parameters and deploys it to Azure? Enter the bicep-run template that is part of the 0.7.0 release of Pipeline Templates.

If you haven’t worked with any of the templates from the Pipeline Templates project, here’s the quick getting started:

Add the following to the top of your pipeline – this defines an external repository called all_templates that can be referenced in your pipeline.

resources:
  repositories:
    - repository: all_templates
      type: github
      name: builttoroam/pipeline_templates
      ref: refs/tags/v0.7.0
      endpoint: github_connection

Next, we’re going to use the bicep-run template to deploy our .bicep file to Azure.

      - template: ../../steps/bicep/bicep-run.yml
        parameters:
          name: BicepRun
          az_service_connection: $(service_connection)
          az_resource_group_name: $(resource_group_name)
          az_resource_location: $(resource_location)
          bicep_file_path: '$(bicep_filepath)'
          arm_parameters: '-parameter1name $(parameter1value) -parameter2name $(parameter2value)'

The bicep-run template wraps the following:

  • Downloads and caches the Project Bicep command line. It currently references the v0.1.37 release but you can override this by specifying the bicep_download_url – make sure you provide the url to the windows executable, not the setup file.
  • Runs the Bicep command line to convert the specified .bicep file (bicep_file_path parameter) into the corresponding ARM template
  • Uses the Azure Resource Group Deployment Task to deploy the ARM template into Azure. The arm_parameters are forwarded to the overrideParameters parameter on the Azure Resource Group Deployment task.

I would love feedback from anyone who takes this template for a spin – what features would you like to see added? What limitations do you currently see for Project Bicep and the ability to run it using this task?

Note: The bicep-run template is designed to run on a windows image.

XAML Back to Basics #16: Custom Grouping

How to implement custom grouping


XAML Basics Series Index Page

The next post in the series originally written by Beatriz Stollnitz. Original post available on Github.

How to implement custom grouping

The previous post in this series shows how to group items based on the value of a certain property. In a real-world scenario you may want to group your items based on some other logic. With this in mind, WPF Data Binding provides a way for you to write custom code and specify how you want to group your items. This allows maximum flexibility; you can group your items pretty much any way you can think of.

In this post I’ll look at grouping items based on their type, as an example of how to do custom grouping.

My data source in this sample is of type ObservableCollection<object>, and contains some objects of type GreekGod and others of type GreekHero. My goal is to group all the items of type GreekGod in a group called “Greek Gods” and group all GreekHero items under the group “Greek Heroes”. This is what the markup looks like:

<Window.Resources>
    <local:GreekGodsAndHeroes x:Key="GodsAndHeroes" />
    <local:GroupByTypeConverter x:Key="GroupByTypeConverter"/>

    <CollectionViewSource x:Key="cvs" Source="{Binding Source={StaticResource GodsAndHeroes}}">
        <CollectionViewSource.GroupDescriptions>
            <PropertyGroupDescription Converter="{StaticResource GroupByTypeConverter}"/>
        </CollectionViewSource.GroupDescriptions>
    </CollectionViewSource>
</Window.Resources>

Notice that this time, instead of setting PropertyName on the PropertyGroupDescription, I set the Converter property. This Converter is defined in the code-behind and contains the logic to divide the data items into groups.

public class GroupByTypeConverter : IValueConverter
{
    public object Convert(object value, Type targetType, object parameter, CultureInfo culture)
    {
        if (value is GreekGod)
        {
            return "Greek Gods";
        }
        else if (value is GreekHero)
        {
            return "Greek Heroes";
        }
        return null;
    }

    // Grouping is one-way, so ConvertBack is never called.
    public object ConvertBack(object value, Type targetType, object parameter, CultureInfo culture)
    {
        throw new NotSupportedException();
    }
}

All the items that return the same value from the Converter will be grouped together. In this scenario I am grouping the items based on their type, and my groups are of type string. Remember that you can use a Converter to group your items some other way. Notice also that the groups don’t have to be strings; they can be any object you want.
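The converter’s grouping rule is language-agnostic: map each item to a group key, then collect the items that share a key. A small Python sketch of the same logic (the class names mirror the WPF sample; this is an illustration, not the WPF mechanism itself):

```python
from collections import defaultdict

class GreekGod: pass
class GreekHero: pass

# Mirror of GroupByTypeConverter.Convert: return the group key for an item.
def group_key(item):
    if isinstance(item, GreekGod):
        return "Greek Gods"
    if isinstance(item, GreekHero):
        return "Greek Heroes"
    return None

def group_items(items):
    # Items that map to the same key end up in the same group.
    groups = defaultdict(list)
    for item in items:
        groups[group_key(item)].append(item)
    return groups

items = [GreekGod(), GreekHero(), GreekGod()]
groups = group_items(items)
assert len(groups["Greek Gods"]) == 2
assert len(groups["Greek Heroes"]) == 1
```

As with the Converter, the keys here happen to be strings, but any hashable object would work as a group identity.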

Just like in the previous post, I want to display the groups and items in a TreeView.

<TreeView ItemsSource="{Binding Source={StaticResource cvs}, Path=Groups}" Width="200">
</TreeView>

In this case, however, templating the items is not as obvious. When the items are all of the same type this is really easy to achieve with a chain of HierarchicalDataTemplates and a DataTemplate for the leaf nodes. In this scenario we need a HierarchicalDataTemplate for the groups and one of two DataTemplates for the leaf nodes, depending on their type.

My first approach to this was to have those 3 templates in the resources and set their DataType property instead of giving them a key (with x:Key). This does not work because when you use a HierarchicalDataTemplate to template a group and do not set its ItemTemplate property, that same template is used for the lower levels of the hierarchy. This behavior is useful when all the levels have items of the same type (for example, when using a TreeView to display a hierarchy of directories in a computer).

My second approach was to set the ItemTemplateSelector property of the HierarchicalDataTemplate to a template selector that decides the correct template to use based on the type of the leaf item. Unfortunately there is a bug in the ItemTemplateSelector property of HierarchicalDataTemplate that prevents this from working. Once the bug is fixed, this will be the correct way to specify the templates.

My third and final approach was to move the template selector to the TreeView and add one more “if” branch to decide which template to return for the groups (which are of type CollectionViewGroup).

public override DataTemplate SelectTemplate(object item, DependencyObject container)
{
    string templateKey;
    if (item is CollectionViewGroup)
    {
        templateKey = "GroupTemplate";
    }
    else if (item is GreekGod)
    {
        templateKey = "GreekGodTemplate";
    }
    else if (item is GreekHero)
    {
        templateKey = "GreekHeroTemplate";
    }
    else
    {
        return null;
    }
    return (DataTemplate)((FrameworkElement)container).FindResource(templateKey);
}

<Window.Resources>
    <local:GodHeroTemplateSelector x:Key="GodHeroTemplateSelector" />
    (...)
</Window.Resources>

<TreeView ItemsSource="{Binding Source={StaticResource cvs}, Path=Groups}" ItemTemplateSelector="{StaticResource GodHeroTemplateSelector}" Width="200">
</TreeView>

For each of the items displayed in the TreeView, this template selector looks up the appropriate (Hierarchical)DataTemplate in the resources.

Here is a screenshot of the completed sample:

WPF Source Code

WPF

UWP/WinUI Notes
GroupDescriptions aren’t supported on UWP/WinUI, so the Greek Gods and Heroes are grouped using LINQ in the code-behind of the page.

The TreeView isn’t supported on Uno at the moment – due in v3.1

The TreeView on UWP/WinUI doesn’t use a HierarchicalDataTemplate to define the hierarchy. Instead it uses a TreeViewItem within a DataTemplate to indicate where there are child nodes.

UWP Source Code

UWP

WinUI with Uno and WinUI Desktop Source Code

WinUI – Desktop

Zero Installer, Zero MSIX, Zero Packaging with .NET Single File Apps

A feature that was added to .NET Core apps was the ability to publish as a single file. As we approach the release of .NET 5 I thought it worthwhile taking a look at the options for publishing Windows Forms and WPF applications with dotnet publish.

For this post we’re going to work with a very basic Windows Forms and WPF application: WinFormsSingleFileSample and WpfSingleFileSample. To create these applications I used the Windows Forms (WinForms) Application and WPF Application project templates in Visual Studio 16.8 preview 2 (with .NET 5 RC1 installed). After creating the applications I ran them from within Visual Studio just to confirm there was no issue with the creation process – I always recommend doing this otherwise you end up chasing your tail if something is broken and you only realise after doing hours of work.

dotnet publish

We’ll start off by calling dotnet publish using the Release configuration (the Debug configuration is typically used if the Configuration parameter isn’t specified):

dotnet publish -r win-x64 /p:Configuration=Release

Here’s the output for this command. Note that if this is the first time you’ve called dotnet build/publish for a specific architecture (using the -r win-x64 flag), it may take a few minutes to restore some of the dependencies.

C:\temp\WinFormsSingleFileSample>dotnet publish -r win-x64 /p:Configuration=Release
Microsoft (R) Build Engine version 16.8.0-preview-20451-02+51a1071f8 for .NET
Copyright (C) Microsoft Corporation. All rights reserved.
Determining projects to restore…
Restored C:\temp\WinFormsSingleFileSample\WinFormsSingleFileSample\WinFormsSingleFileSample.csproj (in 86 ms).
You are using a preview version of .NET. See: https://aka.ms/dotnet-core-preview
WinFormsSingleFileSample -> C:\temp\WinFormsSingleFileSample\WinFormsSingleFileSample\bin\Release\net5.0-windows\win-x64\WinFormsSingleFileSample.dll
WinFormsSingleFileSample -> C:\temp\WinFormsSingleFileSample\WinFormsSingleFileSample\bin\Release\net5.0-windows\win-x64\publish\

The final line includes the output folder for the published application. If we look in this folder there are 293 items (280 files plus a number of language folders). You can easily zip this folder, copy it to a different machine and run the application, but it would be nice to have just a single output.

Self Contained

One option to reduce the number of files to be deployed is to set the SelfContained parameter to false. Its default value is true, which means you can copy the output folder to any machine without worrying about whether .NET has been installed.

dotnet publish -r win-x64 /p:Configuration=Release /p:SelfContained=false

Setting SelfContained to false results in a folder containing only 5 items.

Of course, you now need to manage the deployment of .NET dependencies (more information at https://docs.microsoft.com/en-us/dotnet/core/deploying/deploy-with-cli#self-contained-deployment).

PublishSingleFile

Let’s back up and remove the SelfContained parameter, since we’re really interested in distributing an app without having to worry about any dependencies. This time we’re going to include the PublishSingleFile parameter:

dotnet publish -r win-x64 /p:Configuration=Release /p:PublishSingleFile=true

The output on Windows isn’t exactly a single file. There are 11 files in the output folder – these are the native dependencies required in order to run the application.

For more information on why there are multiple files on Windows, check out the preview 8 blog post by Richard Lander from the dotnet team. This is also covered in the comments of this github issue.

IncludeAllContentForSelfExtract

There is still a way to achieve a single file output, which is to include the IncludeAllContentForSelfExtract parameter.

dotnet publish -r win-x64 /p:Configuration=Release /p:PublishSingleFile=true /p:IncludeAllContentForSelfExtract=true

The output of this command is two files, one of which is the symbols file (i.e. the pdb file). To deploy this application you can just copy the .exe to the target machine.

One thing to note about the IncludeAllContentForSelfExtract parameter is that when you run the application for the first time, the native dependencies will be extracted which can result in a small delay in launching the app. The managed libraries will be loaded directly from the exe.

PublishTrimmed

Our single file application is currently ~146MB, which seems a lot for an application that is literally an empty window. Let’s add the PublishTrimmed parameter to see if we can optimise the output.

dotnet publish -r win-x64 /p:Configuration=Release /p:PublishSingleFile=true /p:IncludeAllContentForSelfExtract=true /p:PublishTrimmed=true

This command takes a little longer to run, but now the output size is just ~83MB, which is a significant improvement. More information on application trimming is available in the .NET documentation.

PublishReadyToRun

The last parameter we’re going to add is PublishReadyToRun. This essentially invokes AOT compilation, which in theory improves startup performance for the app. I say in theory because given the incredibly simple app we’re dealing with here, it’s unlikely we’ll see any actual performance improvement.

dotnet publish -r win-x64 /p:Configuration=Release /p:PublishSingleFile=true /p:IncludeAllContentForSelfExtract=true /p:PublishTrimmed=true /p:PublishReadyToRun=true  

Now our application has ballooned back up to ~138MB.
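
Rather than passing each option on the command line, the same publish settings can also be placed in the project file (or a publish profile). A sketch of what that might look like in the csproj, under the same assumptions as the commands above:

```xml
<PropertyGroup Condition="'$(Configuration)' == 'Release'">
  <RuntimeIdentifier>win-x64</RuntimeIdentifier>
  <PublishSingleFile>true</PublishSingleFile>
  <IncludeAllContentForSelfExtract>true</IncludeAllContentForSelfExtract>
  <PublishTrimmed>true</PublishTrimmed>
  <PublishReadyToRun>true</PublishReadyToRun>
</PropertyGroup>
```

With these properties in place, a plain dotnet publish /p:Configuration=Release should pick them up without any extra flags.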

Hopefully in this post you’ll have seen some of the options available to you when publishing Windows Forms (WinForms) and/or WPF applications.

Note: At the time of writing there is a bug with WPF applications where the PublishSingleFile parameter generates a binary that won’t launch in some cases. According to the dotnet team this is to be fixed with RC2 of .NET 5.

Experimenting with .NET 5 Target Framework Names and the Windows platform

Firstly, if you haven’t been following the development of .NET 5 then you should definitely download the latest Visual Studio preview and .NET 5 preview SDK today. Next, you should follow the blogs from the dotnet team and specifically the post by Immo that discusses the future of .NET Standard. The post doesn’t just cover .NET Standard, it also covers the basics of how target framework names (TFMs) work in the .NET 5+ era. In this post we’re going to play around with this and take a look at some examples of different TFMs in action.

NetCoreApp

Let’s start by creating a new project based on the Console Application project template (formerly the Console App (.NET Core) template).

When prompted we’ll select the .NET 5 (Preview) target framework.

Out of the box, this gives us the following project, which as you’d expect targets the net5.0 target framework.

<Project Sdk="Microsoft.NET.Sdk">
  <PropertyGroup>
    <OutputType>Exe</OutputType>
    <TargetFramework>net5.0</TargetFramework>
  </PropertyGroup>
</Project>

What does this actually mean? Well, let’s add some debugging output to this project file.

<Project Sdk="Microsoft.NET.Sdk"  InitialTargets="Init">
	<PropertyGroup>
		<OutputType>Exe</OutputType>
		<TargetFramework>net5.0</TargetFramework>
	</PropertyGroup>
	<Target Name="Init">
		<Warning Text="$(TargetFrameworkMoniker)" />
		<Warning Text="$(TargetPlatformMoniker)" />
	</Target>
</Project>

When we build the project we’ll see two additional lines in the output that will show the actual TargetFrameworkMoniker and the TargetPlatformMoniker.

1>------ Rebuild All started: Project: TFMSample, Configuration: Debug Any CPU ------
1>c:\temp\TFMSample\TFMSample\TFMSample.csproj(7,3): warning : .NETCoreApp,Version=v5.0
1>c:\temp\TFMSample\TFMSample\TFMSample.csproj(8,3): warning : (No message specified)
1>You are using a preview version of .NET. See: https://aka.ms/dotnet-core-preview
1>TFMSample -> c:\temp\TFMSample\TFMSample\bin\Debug\net5.0\TFMSample.dll
1>Done building project "TFMSample.csproj".
========== Rebuild All: 1 succeeded, 0 failed, 0 skipped ==========

What’s interesting here is that the TargetFrameworkMoniker is actually .NETCoreApp. This means that net5.0 is actually the same as writing netcoreapp5.0 (e.g. <TargetFramework>netcoreapp5.0</TargetFramework>).

Windows 7: WindowsForms and WPF

According to the design documents for Target Framework Names in .NET 5, if we want to target Windows-specific APIs then we should be able to add the -windows suffix to the TFM. If we change the TFM to net5.0-windows, this is what we see when we build the project (I’m just going to show the two output lines for brevity).

1>c:\temp\TFMSample\TFMSample\TFMSample.csproj(7,3): warning : .NETCoreApp,Version=v5.0
1>c:\temp\TFMSample\TFMSample\TFMSample.csproj(8,3): warning : Windows,Version=7.0

With this change we’re now targeting the Windows platform, and specifically version 7. As you’d imagine, this aligns with the API set that was available for Windows 7 – it does not mean that the output can be run on a Windows 7 device. In this case, we should be able to access the APIs from Windows Forms and WPF. Let’s try this out by attempting to reference a WinForms API:

static void Main(string[] args)
{
    Console.WriteLine("Hello .NET 5!");
    System.Windows.Forms.Application.SetHighDpiMode(System.Windows.Forms.HighDpiMode.SystemAware);
    Console.ReadLine();
}

This doesn’t compile, yielding the following error.

At this point I started to scratch my head – surely, since net5.0 is just netcoreapp5.0, we should be able to reference WinForms and WPF in the same way as we did when targeting netcoreapp3.1. If you go back and create a WinForms or WPF app using the .NET Core project templates, you’ll see that the project file is basically the same as what we have for net5.0, with the exception that it includes an additional property, UseWindowsForms or UseWPF.

Adding UseWindowsForms to our net5.0 project fixes our build issue. If you look at the Dependencies, you’ll see that adding UseWindowsForms adds a dependency on the Microsoft.WindowsDesktop.App.WindowsForms package.

It’s no surprise that we can do the same with UseWPF, which adds Microsoft.WindowsDesktop.App.WPF as a dependency.
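
Putting that together, a minimal project file for a net5.0 console app that can call into WinForms (or WPF) APIs looks something like this sketch:

```xml
<Project Sdk="Microsoft.NET.Sdk">
  <PropertyGroup>
    <OutputType>Exe</OutputType>
    <TargetFramework>net5.0-windows</TargetFramework>
    <UseWindowsForms>true</UseWindowsForms>
    <!-- or <UseWPF>true</UseWPF> to pull in the WPF desktop package -->
  </PropertyGroup>
</Project>
```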

Windows 10: UWP

Currently we’re using net5.0-windows and, as we saw, this maps to version 7 of the Windows platform. If we change the TFM to net5.0-windows7, it’s no surprise that we see exactly the same output (i.e. we’re still targeting version 7 of the Windows platform). Now let’s change it to net5.0-windows10.

1>------ Rebuild All started: Project: TFMSample, Configuration: Debug Any CPU ------
1>c:\temp\TFMSample\TFMSample\TFMSample.csproj(9,3): warning : .NETCoreApp,Version=v5.0
1>c:\temp\TFMSample\TFMSample\TFMSample.csproj(10,3): warning : Windows,Version=10.0.0.0
1>C:\Program Files\dotnet\sdk\5.0.100-rc.1.20452.10\Sdks\Microsoft.NET.Sdk\targets\Microsoft.NET.TargetFrameworkInference.targets(222,5): error NETSDK1140: 10.0.0.0 is not a valid TargetPlatformVersion for Windows. Valid versions include:
1>C:\Program Files\dotnet\sdk\5.0.100-rc.1.20452.10\Sdks\Microsoft.NET.Sdk\targets\Microsoft.NET.TargetFrameworkInference.targets(222,5): error NETSDK1140: 10.0.19041.0
1>C:\Program Files\dotnet\sdk\5.0.100-rc.1.20452.10\Sdks\Microsoft.NET.Sdk\targets\Microsoft.NET.TargetFrameworkInference.targets(222,5): error NETSDK1140: 10.0.18362.0
1>C:\Program Files\dotnet\sdk\5.0.100-rc.1.20452.10\Sdks\Microsoft.NET.Sdk\targets\Microsoft.NET.TargetFrameworkInference.targets(222,5): error NETSDK1140: 10.0.17763.0
1>C:\Program Files\dotnet\sdk\5.0.100-rc.1.20452.10\Sdks\Microsoft.NET.Sdk\targets\Microsoft.NET.TargetFrameworkInference.targets(222,5): error NETSDK1140: 8.0
1>C:\Program Files\dotnet\sdk\5.0.100-rc.1.20452.10\Sdks\Microsoft.NET.Sdk\targets\Microsoft.NET.TargetFrameworkInference.targets(222,5): error NETSDK1140: 7.0
1>Done building project "TFMSample.csproj" -- FAILED.
========== Rebuild All: 0 succeeded, 1 failed, 0 skipped ==========

As you’d expect, windows10 maps to a version number of 10.0.0.0, which, as the error explains, is not a valid version number. There are a couple of interesting things to point out here. Firstly, the only supported versions of Windows 10 are currently 10.0.17763.0, 10.0.18362.0 and 10.0.19041.0, which limits the backward compatibility of .NET 5 to only those versions of Windows 10 that are still being supported. Whilst it’s not the official documentation on Windows 10 support, Wikipedia has a nice visual representation of supported versions of Windows 10.

The other thing to note from the error messages is that there is a version 8.0 that’s supported. I’m not sure what APIs are included in version 8.0, but I would imagine that they’re the WinRT APIs that align with Windows 8.0/8.1. I’m not sure why you’d want to target version 8.0 rather than Windows 10, so I’m going to skip over version 8.0 in this post.

Let’s update our TFM to target version 10.0.18362.0 (i.e. net5.0-windows10.0.18362.0). Now we see the addition of Microsoft.Windows.SDK.NET.Ref as a dependency.

When we build the project we now see the following output, which is what we’d expect.

1>c:\temp\TFMSample\TFMSample\TFMSample.csproj(9,3): warning : .NETCoreApp,Version=v5.0
1>c:\temp\TFMSample\TFMSample\TFMSample.csproj(10,3): warning : Windows,Version=10.0.18362.0

Let’s update our code to access a WinRT API – in this case the Storage API.

static void Main(string[] args)
{
    Console.WriteLine("Hello .NET 5!");
    var tempFolder = ApplicationData.Current.TemporaryFolder;
    Console.WriteLine("Folder " + tempFolder.DisplayName);
    Console.ReadLine();
}

This compiles but throws an InvalidOperationException when we attempt to run it.

Typically, when you’re accessing WinRT APIs, you’re doing so from within the confines of, say, a UWP application. Here we’re attempting to access the WinRT APIs from a Win32 application, so you can imagine there’s some extra work we need to do in order for our application to be permitted to access those APIs.

The easiest way to grant access to the APIs is to add a Windows Packaging Project to our solution.

Right-click on the Applications folder in the packaging project and select Add Reference. Select the TFMSample project.

If you set the packaging project as the startup project you can attempt to build and run the application. However, in the current Visual Studio preview you’ll see an error in the debug output

The target process exited without raising a CoreCLR started event. Ensure that the target process is configured to use .NET Core. This may be expected if the target process did not run on .NET Core.

Right-click on the packaging project and select Publish, Create App Package. Follow the prompts (you’ll need to select or create a signing certificate) to create the app package. If you then right-click on the packaging project again and select Deploy, this should install the generated package onto your computer. Then you can run the application from the Start menu – it’ll have the name of the packaging project, not your .NET 5 application.

Hopefully this post has given you some insight into how target framework names will work for .NET 5 and beyond.

XAML Back to Basics #15: TreeView

How to display grouped data in a TreeView

XAML Basics Series Index Page

The next post in the series originally written by Beatriz Stollnitz. Original post available on Github.

How to display grouped data in a TreeView

The TreeView control is great at displaying structured data using the HierarchicalDataTemplate (see Karsten’s blog post on this topic). But what do you do if the data you’re given is not structured hierarchically? In this post, I will show you how to create that hierarchy from a flat list of data items, using the grouping feature of data binding.

I am using the same Animal data source I used in my last post. Grouping the Animals by Category is done the same way as in my last sample:

<local:Animals x:Key="animals"/>

<CollectionViewSource x:Key="cvs" Source="{Binding Source={StaticResource animals}, Path=AnimalList}">
    <CollectionViewSource.GroupDescriptions>
        <PropertyGroupDescription PropertyName="Category"/>
    </CollectionViewSource.GroupDescriptions>
</CollectionViewSource>

We now have the data in a hierarchical form. In this particular case it has only one level of groups, and another level with the animals. You can easily imagine that by adding more GroupDescriptions you would end up with a deeper hierarchy.

When binding to a CollectionViewSource, the Binding object knows to grab the CollectionViewSource’s View property. This property returns the custom view (of type ICollectionView) that CollectionViewSource creates on top of the data collection (where the grouping is applied). In our scenario, we want to bind to the hierarchy we created with grouping, or in other words, we want to bind to the groups. We can get to this data by binding to the Groups property in ICollectionView:

<TreeView ItemsSource="{Binding Source={StaticResource cvs}, Path=Groups}" ItemTemplate="{StaticResource categoryTemplate}" Width="200">
</TreeView>

When using data binding’s grouping feature, each group of items is wrapped in a CollectionViewGroup object. We can access the name of the group (the property we’re grouping by) by using CollectionViewGroup’s Name property, and we can get to the items that belong to the group through the Items property. This is all the information we need in order to make a HierarchicalDataTemplate that will display the Category of each animal and specify the animals that belong to it:

<HierarchicalDataTemplate x:Key="categoryTemplate" ItemsSource="{Binding Path=Items}" ItemTemplate="{StaticResource animalTemplate}">
    <TextBlock Text="{Binding Path=Name}" FontWeight="Bold"/>
</HierarchicalDataTemplate>

Finally we need a DataTemplate for the leaf nodes, which specifies how we want the Animal data to be displayed. In this case, we are interested in displaying the Name property of each Animal. Notice that the HierarchicalDataTemplate’s ItemTemplate property points to this template.

<DataTemplate x:Key="animalTemplate">
    <TextBlock Text="{Binding Path=Name}"/>
</DataTemplate>

Here is the result of the completed sample:

WPF Source Code

WPF

UWP/Uno Notes
Since UWP (and thus Uno and WinUI) doesn’t support grouping in the CollectionViewSource, we’ve provided an alternative implementation that makes use of LINQ’s IGrouping and an ItemTemplateSelector to switch between templates based on whether the node in the tree is a Category or an Animal.

Uno doesn’t currently support the TreeView, but it’s expected to land in the v3.1 timeframe.

UWP Source Code

UWP

WinUI with Uno and WinUI Desktop Source Code

WinUI – Desktop

XAML Back to Basics #14: Sorting and Grouping

How to sort groups of data items

XAML Basics Series Index Page

The next post in the series originally written by Beatriz Stollnitz. Original post available on Github.

How to sort groups of data items

With the introduction of CollectionViewSource, we are now able to do basic grouping of data items in an ItemsControl without using code. In this post I will show you how to group items and sort those groups.

The data source of this sample consists of a list of objects of type Animal. Animal has a Name and a Category (which is an enumeration). I want to group the items depending on their Category. This is easily done in markup by using CollectionViewSource:

<Window.Resources>
    <local:Animals x:Key="animals"/>

    <CollectionViewSource x:Key="cvs" Source="{Binding Source={StaticResource animals}, Path=AnimalList}">
        <CollectionViewSource.GroupDescriptions>
            <PropertyGroupDescription PropertyName="Category"/>
        </CollectionViewSource.GroupDescriptions>
    </CollectionViewSource>

    <DataTemplate x:Key="animalTemplate">
        <TextBlock Text="{Binding Path=Name}" Foreground="MediumSeaGreen"/>
    </DataTemplate>
</Window.Resources>

<ItemsControl ItemsSource="{Binding Source={StaticResource cvs}}" ItemTemplate="{StaticResource animalTemplate}"/>

As I explained in a previous post, CollectionViewSource creates a custom View over the source list through markup. A view is a layer on top of a source data list that allows us to group, sort, and filter items, as well as keep track of the currently selected item.

If you try the sample markup above, you will see the names of the animals, but no information about the groups. The next step is to provide a template to display the group titles. CollectionViewSource wraps each group of items in an object of type CollectionViewGroup, and we are interested in its “Name” property, which we can display using the following template:

<DataTemplate x:Key="categoryTemplate">
    <TextBlock Text="{Binding Path=Name}" FontWeight="Bold" Foreground="ForestGreen" Margin="0,5,0,0"/>
</DataTemplate>

In order to use this template for the group titles, we have to add it to the GroupStyle property of ItemsControl (which takes a collection of GroupStyle objects):

<ItemsControl ItemsSource="{Binding Source={StaticResource cvs}}">
    <ItemsControl.GroupStyle>
        <GroupStyle HeaderTemplate="{StaticResource categoryTemplate}" />
    </ItemsControl.GroupStyle>
</ItemsControl>

We could add more GroupStyles to the collection, in which case they would be applied to different levels of groups. (For simplicity, we just have one level of grouping in this sample.)
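
As a sketch of that multi-level case (subCategoryTemplate here is a hypothetical second resource, not part of the sample), two GroupStyle entries would style the first and second grouping levels respectively:

```xml
<ItemsControl ItemsSource="{Binding Source={StaticResource cvs}}">
    <ItemsControl.GroupStyle>
        <!-- Applied to the top-level groups -->
        <GroupStyle HeaderTemplate="{StaticResource categoryTemplate}" />
        <!-- Applied to the second level of groups, if a second
             GroupDescription were added to the CollectionViewSource -->
        <GroupStyle HeaderTemplate="{StaticResource subCategoryTemplate}" />
    </ItemsControl.GroupStyle>
</ItemsControl>
```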

At this point, the groups and items display correctly, but we would like to sort the groups and the items within the groups. I’ve seen a few people approach this by looking for a specific “SortGroups” method or something similar. We didn’t design a special API to sort groups because you can accomplish that simply by sorting the items by the same property by which you are grouping:

<CollectionViewSource x:Key="cvs" Source="{Binding Source={StaticResource animals}, Path=AnimalList}">
    <CollectionViewSource.GroupDescriptions>
        <PropertyGroupDescription PropertyName="Category"/>
    </CollectionViewSource.GroupDescriptions>
    <CollectionViewSource.SortDescriptions>
        <scm:SortDescription PropertyName="Category" />
        <scm:SortDescription PropertyName="Name" />
    </CollectionViewSource.SortDescriptions>
</CollectionViewSource>

Adding two sort descriptions allows us to sort the groups first and then the items within the groups. Notice that because Category is an enumeration, sorting by that property will display the groups in the order they are defined in the enumeration (which may or may not be alphabetically). Name is of type string, so the leaf items will be displayed alphabetically.
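
The enum-ordering point is easy to verify outside of XAML: comparing enum values compares their underlying numeric values, i.e. declaration order, not their names. A small sketch (the Category values here are assumptions, not the sample’s actual ones):

```csharp
using System;
using System.Linq;

// Hypothetical Category enum: note Reptile is declared before Bird.
public enum Category { Mammal, Reptile, Bird }

public static class EnumSortDemo
{
    // Sorting by the enum value yields declaration order,
    // not alphabetical order of the names.
    public static string[] SortedNames() =>
        Enum.GetValues(typeof(Category)).Cast<Category>()
            .OrderBy(c => c)
            .Select(c => c.ToString())
            .ToArray();
}
```

SortedNames() returns Mammal, Reptile, Bird – whereas alphabetical ordering would put Bird first.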

This is a screenshot of the completed sample:

WPF Source Code

WPF

UWP Notes
The UWP CollectionViewSource doesn’t support grouping or sorting, requiring the underlying data source to be grouped and sorted in advance.

UWP Source Code

UWP

Uno Notes
The Uno CollectionViewSource doesn’t handle grouped data sources; instead it presents the items in a single flat list.

WinUI Notes
The WinUI CollectionViewSource for both desktop and UWP is similar to the UWP implementation. As such it requires the data source to be grouped and sorted in advance.

WinUI with Uno and WinUI Desktop Source Code

WinUI – Desktop

XAML Back to Basics #13: DataTemplateSelector

How to display items in an ItemsControl using different templates

XAML Basics Series Index Page

The next post in the series originally written by Beatriz Stollnitz. Original post available on Github.

How to display items in an ItemsControl using different templates

I will show you two ways to display some items of a data bound collection differently from others. The rule of thumb is straightforward: if you want to differentiate items that are of the same type based on one of their properties, you should use DataTemplateSelector; if your data items are of different types and you want to use the types to differentiate them, then using implicit data templating is a simpler way to do this.

Let us consider the scenario where the source collection has elements that are all of the same type. In this case, the goal is to change the way they are displayed based on some property in the data element, and using a DataTemplateSelector is the way to go. In the sample code below, the ListBox is bound to a collection of Places, where Place is an object with properties Name and State. I want places in Washington state to be displayed differently from other places, so I defined two DataTemplates in the resources. Then I wrote a PlaceTemplateSelector that picks the correct DataTemplate based on the State property of a Place. Finally, I instantiated a ListBox whose ItemTemplateSelector DependencyProperty is set to the selector I defined.

<Window.Resources>    
    <local:Places x:Key="places" />

    <DataTemplate x:Key="washingtonTemplate">
        <Border Background="Lavender">
            <TextBlock Text="{Binding Path=Name}" Foreground="CornFlowerBlue" FontWeight="Bold"/>
        </Border>
    </DataTemplate>

    <DataTemplate x:Key="notWashingtonTemplate">
        <TextBlock Text="{Binding Path=Name}" Foreground="DarkSeaGreen" />
    </DataTemplate>

    <local:PlaceTemplateSelector WashingtonTemplate="{StaticResource washingtonTemplate}" NotWashingtonTemplate="{StaticResource notWashingtonTemplate}" x:Key="placeTemplateSelector" />
</Window.Resources>

<ListBox ItemsSource="{Binding Source={StaticResource places}}" ItemTemplateSelector="{StaticResource placeTemplateSelector}" Margin="10"/>

Here is the code for the PlaceTemplateSelector:

public class PlaceTemplateSelector : DataTemplateSelector
{
    private DataTemplate washingtonTemplate;

    public DataTemplate WashingtonTemplate
    {
        get { return washingtonTemplate; }
        set { washingtonTemplate = value; }
    }

    private DataTemplate notWashingtonTemplate;

    public DataTemplate NotWashingtonTemplate
    {
        get { return notWashingtonTemplate; }
        set { notWashingtonTemplate = value; }
    }

    public override DataTemplate SelectTemplate(object item, DependencyObject container)
    {
        Place place = (Place)item;

        if (place.State == "WA")
        {
            return washingtonTemplate;
        }
        else
        {
            return notWashingtonTemplate;
        }
    }
}

Consider now the scenario where the collection has objects with different types added to it. In this case, the goal is to template items differently depending on their type. In the sample code below, the ListBox is bound to a heterogeneous collection that contains both GreekGod and GreekHero objects.

<Window.Resources>
    <local:GreekGodsAndHeros x:Key="godsAndHeros" />
</Window.Resources>

<ListBox ItemsSource="{Binding Source={StaticResource godsAndHeros}}" Margin="10"/>

Sure, a DataTemplateSelector could be used to template the items by picking the correct DataTemplate depending on the type of the item passed to the SelectTemplate method, as I have seen a few people do. However, implicit data templating is a better way to do this because it accomplishes the same thing entirely in XAML (no need for code-behind). To use a DataTemplate implicitly, instead of setting its key (with x:Key), I set the DataType property to the type I want it to be applied to.

<DataTemplate DataType="{x:Type local:GreekGod}">
    <Grid>
        <Grid.ColumnDefinitions>
            <ColumnDefinition Width="100"/>
            <ColumnDefinition Width="*"/>
        </Grid.ColumnDefinitions>
        <Grid.RowDefinitions>
            <RowDefinition Height="Auto"/>
        </Grid.RowDefinitions>
        <TextBlock Text="{Binding Path=GodName}" Grid.Column="0" Grid.Row="0" Foreground="Brown"/>
        <TextBlock Text="{Binding Path=GodDescription}" Grid.Column="1" Grid.Row="0" Foreground="Brown"/>
    </Grid>
</DataTemplate>

<DataTemplate DataType="{x:Type local:GreekHero}">
    <TextBlock Text="{Binding Path=HeroName}" FontWeight="Bold" Foreground="Red"/>
</DataTemplate>

Here is a screenshot of the completed sample:

WPF Source Code

WPF

UWP/Uno Notes

There is no support for implicit templating based on the type of data object. I’ve added an additional template selector which uses the type of the object to determine which template to use. The DataType attribute on the templates has been used to support x:Bind instead of Binding.

UWP Source Code

UWP

WinUI with Uno and WinUI Desktop Source Code

WinUI – Desktop

XAML Islands Getting Started Guide – Adding UWP Controls to Windows Forms or WPF Application

One of the reasons that Microsoft failed to get wide spread adoption of the Universal Windows Platform (UWP) is that there is already a massive investment into Windows Forms (WinForms) and Windows Presentation Foundation (WPF) applications. What’s ironic is that this is true for both existing applications and new applications. Over the last couple of years Microsoft has changed strategy and has been looking at tools and techniques for bridging the gap between these frameworks in order to allow developers to take advantage of the rich controls and capabilities of UWP. In this post we’re going to walk through how you can use XAML Islands to host UWP controls within an existing WinForms or WPF application.

Before we get into working with XAML Islands, here are a couple of reference posts that are worth a read if you want to understand the background and some additional details about XAML Islands:

Windows Forms

Let’s get into this – we’re going to start with Windows Forms and we’re going to be working with a Windows Forms application that’s sitting on .NET Core 3.1. As Miguel discusses in his post, there is support for .NET Framework but there are some limitations for third party controls. If your application is still based on .NET Framework, I would highly recommend looking at migrating to .NET Core.

In Visual Studio, we’ll create a new project using the Windows Forms (WinForms) Application project template. I’m currently using Visual Studio 2019 16.8 preview 2.1, where the project templates have been renamed – this template was formerly called Windows Forms App (.NET Core), which points to Microsoft’s intent to move developers to building Windows Forms apps on .NET Core instead of .NET Framework (the .NET Framework based template is still called Windows Forms App (.NET Framework)).

We’re going to select .NET Core 3.1 for the target framework.

After creating the project we’ll rename Form1 to MainForm, and then proceed with adding four more forms that will host the four scenarios we’re going to look at.

Next I’ll create four buttons on the MainForm, which we’ll use to launch the four forms we just created.

The code behind for these buttons is relatively simple.

private void btnSimpleButton_Click(object sender, EventArgs e)
{
    new SimpleButtonForm().ShowDialog();
}
private void btnCustomControl_Click(object sender, EventArgs e)
{
    new CustomControlForm().ShowDialog();
}
private void btnThirdPartyControl_Click(object sender, EventArgs e)
{
    new ThirdPartyControlForm().ShowDialog();
}
private void btnThirdPartyControlWithStyle_Click(object sender, EventArgs e)
{
    new ThirdPartyControlWithStyleForm().ShowDialog();
}

Standard UWP Button

Now let’s start with the first scenario where we’re just going to display a standard UWP Button inside the SimpleButtonForm. To do this, the first thing we need to do is to reference the Microsoft.Toolkit.Forms.UI.XamlHost NuGet package.

Next, we’re going to add code in the SimpleButtonForm constructor to create the instance of both the Button and the WindowsXamlHost. The WindowsXamlHost is the wrapper that makes it really easy to add UWP based controls to the Windows Forms application.

public SimpleButtonForm()
{
    InitializeComponent();

    var myHostControl = new Microsoft.Toolkit.Forms.UI.XamlHost.WindowsXamlHost();
    myHostControl.Dock = System.Windows.Forms.DockStyle.Fill;
    myHostControl.Name = "hostUwpButton";

    var uwpButton = new Windows.UI.Xaml.Controls.Button();
    uwpButton.Content = "Say Something!";
    uwpButton.HorizontalAlignment = Windows.UI.Xaml.HorizontalAlignment.Stretch;
    uwpButton.VerticalAlignment = Windows.UI.Xaml.VerticalAlignment.Stretch;
    uwpButton.Click += UwpButton_Click;

    myHostControl.Child = uwpButton;
    this.Controls.Add(myHostControl);
}

private void UwpButton_Click(object sender, Windows.UI.Xaml.RoutedEventArgs e)
{
    MessageBox.Show("Hello World!");
}

Important Note: If we run the application at this point we’ll see an error, shown in the following image, that reads “WindowsXamlManager and DesktopWindowsXamlSource are supported for apps targeting Windows version 10.0.18226.0 and later”.

To fix this issue we need to include an app.manifest file with the following content:

<?xml version="1.0" encoding="utf-8"?>
<assembly manifestVersion="1.0" xmlns="urn:schemas-microsoft-com:asm.v1">
	<compatibility xmlns="urn:schemas-microsoft-com:compatibility.v1">
		<application>
			<!-- Windows 10 -->
			<maxversiontested Id="10.0.18362.0"/>
			<supportedOS Id="{8e0f7a12-bfb3-4fe8-b9a5-48fd50a15a9a}" />

		</application>
	</compatibility>
</assembly>

The app.manifest file needs to be set as the Manifest file for the Windows Forms project via the Application tab of the Project Properties (Right-click on the project in Solution Explorer and select Properties).
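If you prefer editing the project file directly, the same result can be achieved with the ApplicationManifest MSBuild property – a sketch, assuming an SDK-style project:

```xml
<!-- Sketch: point the project at the manifest from the .csproj -->
<PropertyGroup>
  <ApplicationManifest>app.manifest</ApplicationManifest>
</PropertyGroup>
```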

Now, we can run the application, click on the button entitled “Simple UWP Button” and then click on the Say Something button.

What we’ve seen so far is simply using the built-in UWP controls. If you want to use your own custom controls, or third party controls, you’ll need to follow some additional steps.

Custom Control

For the custom control scenario, let’s start by creating a new project based on the Class Library project template. Note that you could also use a UWP class library for this and follow the same steps.

In order to add UWP controls to the class library, we’ll update the project file to use the uap10.0.16299 target framework (this step isn’t required if you’re using the UWP class library project template).

<Project Sdk="MSBuild.Sdk.Extras/2.1.2">
  <PropertyGroup>
    <TargetFrameworks>uap10.0.16299</TargetFrameworks>
  </PropertyGroup>
</Project>

Our custom control is going to be very basic with a single Button that’s going to generate a random number that’s displayed in a TextBlock.

<UserControl
    x:Class="UwpControlLibrary.MyCustomControl"
    xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
    xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
    xmlns:d="http://schemas.microsoft.com/expression/blend/2008"
    xmlns:mc="http://schemas.openxmlformats.org/markup-compatibility/2006"
    mc:Ignorable="d">
    <StackPanel>
        <Button Content="Generate Random Number" Click="RandomNumber_Click" />
        <TextBlock Text="[placeholder]" x:Name="RandomNumberOutputTextBlock" />
    </StackPanel>
</UserControl>

The code behind is very simple:

private void RandomNumber_Click(object sender, Windows.UI.Xaml.RoutedEventArgs e)
{
    var rnd = new Random();
    RandomNumberOutputTextBlock.Text = rnd.Next(0, 10000).ToString();
}

Referencing our control isn’t as simple as just adding a project reference to our class library. Instead, we need to provide a context in which our control is going to be instantiated. When our control gets created in a normal UWP application, it does so within the context of the application, which allows for resolution of resources, styles etc. We need to provide a similar context for our control when it’s rendered within a Windows Forms application.

To do this, we need to create a new project based on the Blank App (Universal Windows) project template. I would avoid attempting to use either a UWP class library or multi-targeted class library for this as neither of them will generate the necessary output for the hosting of our custom control in a Windows Forms or WPF application.

In creating the new project, make sure you select 10.0.18362 (version 1903) as the minimum version.

We then need to add a reference to the Microsoft.Toolkit.Win32.UI.XamlApplication NuGet package to the UWP application project.
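With the package referenced, the App class in the UWP application project is reworked to derive from XamlApplication, which provides the hosting context discussed above. A minimal sketch (the class name is an assumption based on this walkthrough):

```csharp
// Sketch: App.xaml.cs in the UWP application project,
// deriving from XamlApplication so islands can resolve resources and styles
public sealed partial class App : Microsoft.Toolkit.Win32.UI.XamlHost.XamlApplication
{
    public App()
    {
        this.Initialize();
    }
}
```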

The UWP application needs to reference the control library.

And the Windows Forms application needs to reference both the UWP application and the control library.

Now we can go ahead and add the code to the CustomControlForm to add an instance of the MyCustomControl.

public CustomControlForm()
{
    InitializeComponent();

    var myHostControl = new Microsoft.Toolkit.Forms.UI.XamlHost.WindowsXamlHost();
    myHostControl.Dock = DockStyle.Fill;
    myHostControl.Name = "uwpHost";

    var customControl = new MyCustomControl();
    customControl.HorizontalAlignment = Windows.UI.Xaml.HorizontalAlignment.Stretch;
    customControl.VerticalAlignment = Windows.UI.Xaml.VerticalAlignment.Stretch;
    myHostControl.Child = customControl;

    this.Controls.Add(myHostControl);
}

At this point, if you try to run the Windows Forms application you’ll see build errors similar to the following:

Microsoft.VCRTForwarders.140.targets(91,9): warning : Because your app is being built as AnyCPU no Microsoft.VCRTForwarders.140 DLLs were copied to your ouput folder. Microsoft.VCRTForwarders.140 only supports x86, x64, or arm64 applications due to a C++ Runtime dependency.

or

error : The OutputPath property is not set for project 'UwpXamlIslandHostApp.csproj'. Please check to make sure that you have specified a valid combination of Configuration and Platform for this project. Configuration='Debug' Platform='AnyCPU'.

The errors are pointing to a disparity between the platforms that the projects are being built for. To work around this, you need to change the Platform for each project to be consistent. Right-click on the solution in Solution Explorer and select Configuration Manager. For each platform, make sure the same Platform is selected. This may mean that you have to create a new configuration for those projects that only have Any CPU, such as in this example.
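If you’d rather not click through the Configuration Manager, a sketch of the equivalent project file change (assuming an x64 development machine) looks like this:

```xml
<!-- Sketch: pin the WinForms project to x64 to match the UWP projects -->
<PropertyGroup>
  <Platforms>x64</Platforms>
  <PlatformTarget>x64</PlatformTarget>
</PropertyGroup>
```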

From the New Project Platform dialog, select the platform and make sure the “Create new solution platforms” option is unchecked.

With this done, we should be able to run the application and click on the Custom Control button to launch the CustomControlForm that hosts the MyCustomControl. In this case the MyCustomControl encapsulates the functionality for handling the Button click and updating the Text on the TextBlock.

Third Party Control

In this scenario we’re going to reference the Telerik UWP control library (Telerik.UI.for.UniversalWindowsPlatform on NuGet) and make use of the RadCalendar. The first step is to simply add the reference to the NuGet package. I’m going to go ahead and add it to both the Windows Forms project, as well as both the UWP application and class library projects.

With the reference added, we can simply create an instance of the RadCalendar inside the constructor of the ThirdPartyControlForm.

public ThirdPartyControlForm()
{
    InitializeComponent();

    var myHostControl = new Microsoft.Toolkit.Forms.UI.XamlHost.WindowsXamlHost();
    myHostControl.Dock = DockStyle.Fill;
    myHostControl.Name = "uwpHost";

    var customControl = new Telerik.UI.Xaml.Controls.Input.RadCalendar();
    customControl.HorizontalAlignment = Windows.UI.Xaml.HorizontalAlignment.Stretch;
    customControl.VerticalAlignment = Windows.UI.Xaml.VerticalAlignment.Stretch;
    myHostControl.Child = customControl;
            
    this.Controls.Add(myHostControl);
}

And without any further changes, we can go ahead and run the Windows Forms application and click on the Third Party Control button. This will show the ThirdPartyControlForm with the RadCalendar visible.

Third Party Control With Style

The last scenario also makes use of the RadCalendar. This time we’re going to combine it with other Windows Forms controls to illustrate how you can use data binding and apply styles.

To begin with we’re going to use the Windows Forms designer to put together a basic layout. Unfortunately even though the WindowsXamlHost control appears in the Toolbox, an exception is thrown by Visual Studio when attempting to add it directly to the Form. Instead, I’ve added a Panel which will act as a placeholder for the WindowsXamlHost, and subsequently the RadCalendar.

I’ve also added a Windows Forms DateTimePicker and a Label. The idea is that the user should be able to use either the RadCalendar or the DateTimePicker to select a date, which will be displayed in the Label below.

We’ll add a very simple class that will be used for data binding.

public class DataModel : INotifyPropertyChanged
{
    private string dateAsString;
    private DateTime myDate;

    public event PropertyChangedEventHandler PropertyChanged;

    public string DateAsString
    {
        get => dateAsString;
        set
        {
            dateAsString = value;
            PropertyChanged?.Invoke(this, new PropertyChangedEventArgs(nameof(DateAsString)));
        }
    }

    public DateTime MyDate
    {
        get => myDate;
        set
        {
            myDate = value;
            PropertyChanged?.Invoke(this, new PropertyChangedEventArgs(nameof(MyDate)));
            DateAsString = myDate.ToString("O");
        }
    }
}

In terms of data binding to the RadCalendar, we have a couple of options. We could manually create the data binding expression. This seems quite archaic, so alternatively we can specify the binding in XAML. However, this only works if the instance of the RadCalendar is being created in XAML, so that we can specify the binding expression. Easily done – by creating a CustomCalendar UserControl in our Control Library, with the following XAML.

<UserControl x:Class="UwpControlLibrary.CustomCalendar"
             xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
             xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
             xmlns:d="http://schemas.microsoft.com/expression/blend/2008"
             xmlns:mc="http://schemas.openxmlformats.org/markup-compatibility/2006"
             xmlns:input="using:Telerik.UI.Xaml.Controls.Input"
             xmlns:telerikCalendar="using:Telerik.UI.Xaml.Controls.Input.Calendar"
             mc:Ignorable="d">
    <UserControl.Resources>
        <telerikCalendar:CalendarDateToSingleDateRangeConverter x:Key="converter" />
    </UserControl.Resources>
    <input:RadCalendar SelectedDateRange="{Binding MyDate, Converter={StaticResource converter}, Mode=TwoWay}"
                       SelectionMode="Single" />
</UserControl>

Note that in this case, being able to do the binding in XAML is particularly useful since we need to create and use an instance of the CalendarDateToSingleDateRangeConverter. This converter allows for binding a single DateTime property (MyDate) to the SelectedDateRange property.
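For completeness, a sketch of the manual, code-based binding we avoided might look like the following (the dependency property and converter names are assumptions based on the Telerik library as used above):

```csharp
// Sketch only: creating the equivalent binding in code rather than XAML
var calendar = new Telerik.UI.Xaml.Controls.Input.RadCalendar();
var binding = new Windows.UI.Xaml.Data.Binding
{
    Path = new Windows.UI.Xaml.PropertyPath("MyDate"),
    Mode = Windows.UI.Xaml.Data.BindingMode.TwoWay,
    Converter = new Telerik.UI.Xaml.Controls.Input.Calendar.CalendarDateToSingleDateRangeConverter()
};
calendar.SetBinding(
    Telerik.UI.Xaml.Controls.Input.RadCalendar.SelectedDateRangeProperty, binding);
```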

Back to the Windows Forms project, the code for creating the instance of the CustomCalendar control and wiring up the data binding with the other controls on the page, looks like this.

public ThirdPartyControlWithStyleForm()
{
    InitializeComponent();

    var myHostControl = new Microsoft.Toolkit.Forms.UI.XamlHost.WindowsXamlHost();
    myHostControl.Dock = DockStyle.Fill;
    myHostControl.Name = "uwpHost";

    var customControl = new CustomCalendar();
    customControl.HorizontalAlignment = Windows.UI.Xaml.HorizontalAlignment.Stretch;
    customControl.VerticalAlignment = Windows.UI.Xaml.VerticalAlignment.Stretch;
    myHostControl.Child = customControl;

    pnlXamlIsland.Controls.Add(myHostControl);

    var data = new DataModel();
    customControl.DataContext = data;
    dtpPickDate.DataBindings.Add(new Binding(nameof(DateTimePicker.Value), data, nameof(DataModel.MyDate), true, DataSourceUpdateMode.OnPropertyChanged));
    lblDate.DataBindings.Add(new Binding(nameof(Label.Text), data, nameof(DataModel.DateAsString), true, DataSourceUpdateMode.OnPropertyChanged));
}

Running the Windows Forms application and clicking on the Third Party Control With Style button shows the ThirdPartyControlWithStyleForm. Either the RadCalendar (nested in the CustomCalendar control) or the DateTimePicker can be used to select a date, which is shown in the Label below.

You’ll notice that the selected date in the RadCalendar has a different style applied with a green background and red border. This has been applied using an implicit style defined in the App.xaml in the UWP application project.

<xamlhost:XamlApplication xmlns:xamlhost="using:Microsoft.Toolkit.Win32.UI.XamlHost"
                          x:Class="UwpXamlIslandHostApp.App"
                          xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
                          xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
                          xmlns:input="using:Telerik.UI.Xaml.Controls.Input">
    <xamlhost:XamlApplication.Resources>
        <Style TargetType="input:RadCalendar">
            <Setter Property="SelectedCellStyle">
                <Setter.Value>
                    <input:CalendarCellStyle>
                        <input:CalendarCellStyle.DecorationStyle>
                            <Style TargetType="Border">
                                <Setter Property="Background"
                                        Value="PaleGreen" />
                                <Setter Property="BorderBrush"
                                        Value="MediumVioletRed" />
                            </Style>
                        </input:CalendarCellStyle.DecorationStyle>
                    </input:CalendarCellStyle>
                </Setter.Value>
            </Setter>
        </Style>
    </xamlhost:XamlApplication.Resources>
</xamlhost:XamlApplication>

That’s it for the Windows Forms application – four different scenarios for hosting UWP controls in a Windows Forms application using XAML Islands.

Windows Presentation Foundation (WPF)

Now we’ll move on to showing the same four scenarios in a WPF application. As we’ve already done a lot of the setup work for the various controls, this section will focus on the differences with the hosting in WPF. To get started we’ll use the WPF Application project template.

Like we did for the Windows Forms application, we’ll create four additional Windows and connect them to four buttons on the main Window of the application.

We’ll need to reference the Microsoft.Toolkit.Wpf.UI.XamlHost NuGet package.

You’ll also need to add an app.manifest file and set it as the manifest file for the WPF application.

Standard UWP Button

The XAML and code behind for the SimpleButtonWindow are as follows.

<Window x:Class="WPFIslandsDemo.SimpleButtonWindow"
        xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
        xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
        xmlns:d="http://schemas.microsoft.com/expression/blend/2008"
        xmlns:mc="http://schemas.openxmlformats.org/markup-compatibility/2006"
        xmlns:xamlhost="clr-namespace:Microsoft.Toolkit.Wpf.UI.XamlHost;assembly=Microsoft.Toolkit.Wpf.UI.XamlHost"
        mc:Ignorable="d"
        Title="SimpleButtonWindow" Height="450" Width="800">
    <Grid>
        <xamlhost:WindowsXamlHost x:Name="XamlHost"/>
    </Grid>
</Window>

public SimpleButtonWindow()
{
    InitializeComponent();

    var button = new Windows.UI.Xaml.Controls.Button();
    button.HorizontalAlignment = Windows.UI.Xaml.HorizontalAlignment.Stretch;
    button.VerticalAlignment = Windows.UI.Xaml.VerticalAlignment.Stretch;
    button.Content = "Say Something";
    button.Click += Button_Click;
    XamlHost.Child = button;
}

private void Button_Click(object sender, Windows.UI.Xaml.RoutedEventArgs e)
{
    MessageBox.Show("Hello World!");
}

Running this and clicking the Simple UWP Button, we see a new Window appear that’s similar to the Windows Forms example.

Custom Control

Adding the Custom Control is actually even simpler, as we can just specify the MyCustomControl using the InitialTypeName property. Don’t forget to add references to the UWP application and class library projects.

<Window x:Class="WPFIslandsDemo.CustomControlWindow"
        xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
        xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
        xmlns:d="http://schemas.microsoft.com/expression/blend/2008"
        xmlns:mc="http://schemas.openxmlformats.org/markup-compatibility/2006"
        xmlns:local="clr-namespace:WPFIslandsDemo"
        xmlns:xamlhost="clr-namespace:Microsoft.Toolkit.Wpf.UI.XamlHost;assembly=Microsoft.Toolkit.Wpf.UI.XamlHost"
        mc:Ignorable="d"
        Title="CustomControlWindow" Height="450" Width="800">
    <Grid>
        <xamlhost:WindowsXamlHost InitialTypeName="UwpControlLibrary.MyCustomControl" />
    </Grid>
</Window>

Again, this looks very similar to the Windows Forms output.

Third Party Control

The ThirdPartyControlWindow is very similar to the CustomControlWindow in that we can just specify the InitialTypeName attribute. In this case using the class Telerik.UI.Xaml.Controls.Input.RadCalendar.

The ThirdPartyControlWithStyleWindow is slightly more complex as we need to establish the data binding. Here’s the XAML and code behind.

<Window x:Class="WPFIslandsDemo.ThirdPartyControlWithStyleWindow"
        xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
        xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
        xmlns:d="http://schemas.microsoft.com/expression/blend/2008"
        xmlns:mc="http://schemas.openxmlformats.org/markup-compatibility/2006"
        xmlns:xamlhost="clr-namespace:Microsoft.Toolkit.Wpf.UI.XamlHost;assembly=Microsoft.Toolkit.Wpf.UI.XamlHost"
        mc:Ignorable="d"
        Title="ThirdPartyControlWithStyleWindow"
        Height="450"
        Width="800">
    <StackPanel>
        <TextBlock Text="Pick a date:" />
        <xamlhost:WindowsXamlHost InitialTypeName="UwpControlLibrary.CustomCalendar" />
        <DatePicker SelectedDate="{Binding MyDate, Mode=TwoWay}" />

        <TextBlock Text="{Binding DateAsString}" />
    </StackPanel>
</Window>

public ThirdPartyControlWithStyleWindow()
{
    InitializeComponent();

    DataContext = new DataModel();
}

Notice how simple this is – the DataContext is applied to both the WPF and UWP controls, making it possible to easily integrate controls from both frameworks into the same layout with minimal fuss.

And that’s how easy it is to integrate UWP controls with both Windows Forms and WPF applications. The source code for this walkthrough is available on GitHub.

XAML Back to Basics #12: Dialogs

How to implement a data bound dialog box

XAML Basics Series Index Page

The next post in the series originally written by Beatriz Stollnitz. Original post available on Github.

How to implement a data bound dialog box

In this post I will show you how to implement a dialog box using data binding. While this may seem like a straightforward task at first glance, when using data binding it can be tricky to get the “OK” button of a dialog to commit the user’s changes and the “Cancel” button to discard them.

One possible approach is to allow the bindings to update the data source as the user is typing information into the dialog box, then undo the work done by the bindings if the user happens to press the “Cancel” button. I don’t like the “Cancel” scenario of this approach because the data source acquires values that are only kept temporarily. Besides, it requires additional logic in the application to remember the data when the dialog box opens and to revert back to that data if the user presses “Cancel”. This is a lot of work and quite confusing. Fortunately, there is an easier way to get the job done – by changing the value of UpdateSourceTrigger in your Bindings.

The main Window in this sample has a Button that launches the dialog box, and Labels that show the contents of the data source. When this app is loaded the Labels are empty. When the user opens the dialog box, enters data in the TextBoxes and presses OK, the Labels in the main Window display the data just entered. If the user presses Cancel instead, the Labels should remain empty.

<Button Click="ShowDialog" Width="100" Height="30">Show Dialog</Button>
<Label Grid.Row="0" Grid.Column="1" Name="Name" Margin="5" Content="{Binding Source={StaticResource source}, Path=Name}"/>
<Label Grid.Row="1" Grid.Column="1" Name="Comment" Margin="5" Content="{Binding Source={StaticResource source}, Path=Comment}"/>

private void ShowDialog(object sender, RoutedEventArgs args)
{
    Dialog1 dialog = new Dialog1();
    dialog.Owner = this;
    dialog.ShowDialog();
}

The dialog box contains TextBoxes data bound to the same data as the Labels and OK/Cancel Buttons. This is the markup that goes in the dialog box:

<TextBox Grid.Row="0" Grid.Column="1" Name="Name" Margin="5" Text="{Binding Source={StaticResource source}, Path=Name, UpdateSourceTrigger=Explicit}"/>
<TextBox Grid.Row="1" Grid.Column="1" Name="Comment" Margin="5" Text="{Binding Source={StaticResource source}, Path=Comment, UpdateSourceTrigger=Explicit}"/>
<Button Click="OKHandler" IsDefault="true" Margin="5">OK</Button>
<Button IsCancel="true" Margin="5">Cancel</Button>

The Binding object allows us to specify how to trigger updates to the data source through its UpdateSourceTrigger property. The default update trigger for the TextBox’s Text DP is “LostFocus”, which means that the data the user types is updated to the source when the TextBox loses focus. This is not what we want for this scenario though; we want the data to be updated only when the user presses the “OK” button. By changing the update trigger to “Explicit”, the data will not be updated to the source until we explicitly call the “UpdateSource()” method on the BindingExpression, which we can do in the handler for the “OK” button:

private void OKHandler(object sender, RoutedEventArgs args)
{
    BindingExpression bindingExpressionName = BindingOperations.GetBindingExpression(Name, TextBox.TextProperty);
    bindingExpressionName.UpdateSource();
    BindingExpression bindingExpressionComment = BindingOperations.GetBindingExpression(Comment, TextBox.TextProperty);
    bindingExpressionComment.UpdateSource();
    this.DialogResult = true;
}

The logic for the “OK” button is simple, but the “Cancel” button is even simpler. Because we never allowed the typed values to update the source, all we have to do is close the Window. This can be done by simply setting IsCancel=true on the Cancel button, no event handler necessary.

Here is a screen shot of the completed sample:

WPF Source Code

WPF

UWP Notes
The WPF code relies on what seems like magic in order to update the content on the Main page. For those familiar with XAML binding, you may be surprised that this example works given that the DataSource class doesn’t implement INotifyPropertyChanged. There are definitely smarts built into WPF that will update all elements bound to the same source if one of its properties is updated, as is the case in this example. This does NOT work with UWP, where you have to be explicit about raising the PropertyChanged event in order for any elements bound to the source to update.
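As a sketch, the UWP version of the data source needs each property to raise the event explicitly (the class and property names here are assumptions, shown only to illustrate the pattern):

```csharp
using System.ComponentModel;

// Sketch: explicit change notification, required for UWP bindings to refresh
public class DataSource : INotifyPropertyChanged
{
    private string name;

    public event PropertyChangedEventHandler PropertyChanged;

    public string Name
    {
        get => name;
        set
        {
            name = value;
            // Without this explicit notification, UWP-bound elements won't update
            PropertyChanged?.Invoke(this, new PropertyChangedEventArgs(nameof(Name)));
        }
    }
}
```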

UWP also requires that the Mode of the Binding for the two TextBox elements be set to TwoWay.

Additionally, the UWP ContentDialog has built in primary and secondary buttons to encourage a standard look and feel for dialogs.
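A sketch of what those built-in buttons look like in markup (the class name and handler are assumptions matching the earlier sample):

```xml
<!-- Sketch: UWP ContentDialog using built-in buttons instead of custom OK/Cancel -->
<ContentDialog x:Class="DialogSample.Dialog1"
               xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
               xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
               PrimaryButtonText="OK"
               CloseButtonText="Cancel"
               PrimaryButtonClick="OKHandler">
    <!-- dialog content goes here -->
</ContentDialog>
```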

Uno Notes
Currently the UpdateSourceTrigger attribute of the Binding expression isn’t respected. This means that any changes made in the dialog will be pushed to the main page immediately, even before the user presses OK.

UWP Source Code

UWP

WinUI Notes

WinUI for Desktop, whilst respecting the UpdateSourceTrigger attribute, ends up looking more like UWP than WPF. The dialog needs to inherit from ContentDialog. There’s a need to explicitly set the XamlRoot and for some reason typing in the TextBox elements doesn’t work.

WinUI for UWP has issues with databound properties being changed in a ContentDialog and having to update on the main page. This is most likely a prerelease issue. Ironically the Uno platforms all update the content but again ignore the UpdateSourceTrigger attribute.

WinUI with Uno and WinUI Desktop Source Code

WinUI – Desktop

XAML Back to Basics #11: Multiple Linked Lists

How to synchronize ListBoxes displaying three levels of hierarchical data

XAML Basics Series Index Page

The next post in the series originally written by Beatriz Stollnitz. Original post available on Github.

How to synchronize ListBoxes displaying three levels of hierarchical data

The master-detail scenario with more than 2 levels is very common, and we made sure we have good support for it in WPF. I will show in this post three ways to sync selection of three ListBoxes, each displaying a different level of a hierarchy of data. In this sample, the first ListBox displays a list of mountain ski resorts. When the user selects a ski resort, the second ListBox gets updated with several lifts from that mountain. When the user then selects a particular lift, the third ListBox gets updated with ski runs that can be taken down from the top of that lift.

Here is the approach some developers might take when trying to get this scenario to work:

<Window.Resources>
    <local:Mountains x:Key="mountains" />
    <CollectionViewSource Source="{StaticResource mountains}" x:Key="cvs" />
</Window.Resources>
<ListBox ItemsSource="{Binding Source={StaticResource cvs}}" DisplayMemberPath="Name" Name="lb1" />
<ListBox ItemsSource="{Binding Source={StaticResource cvs}, Path=Lifts}" DisplayMemberPath="Name" Name="lb2" />
<ListBox ItemsSource="{Binding Source={StaticResource cvs}, Path=Lifts/Runs}" Name="lb3" />

Unfortunately this does not work as expected: lb1 and lb2 are in sync but lb3 is not. When creating a custom view on top of a collection by using CollectionViewSource, selection and currency are in sync by default. This is why lb1 and lb2 are in sync in this scenario. This markup does not use a custom view for the Lifts collection though – a default view is created internally instead. Default views do not have currency and selection in sync by default, which is the reason why lb2 and lb3 don’t sync.

There are at least three ways to have the three ListBoxes in sync.

The most obvious solution is to create a second CollectionViewSource for the Lifts collection and bind lb2 and lb3 to it:

<Window.Resources>
    (...)
    <CollectionViewSource Source="{Binding Source={StaticResource cvs}, Path=Lifts}" x:Key="cvs2"/>
</Window.Resources>
<ListBox ItemsSource="{Binding Source={StaticResource cvs}}" DisplayMemberPath="Name" Name="lb1" />
<ListBox ItemsSource="{Binding Source={StaticResource cvs2}}" DisplayMemberPath="Name" Name="lb2" />
<ListBox ItemsSource="{Binding Source={StaticResource cvs2}, Path=Runs}" Name="lb3" />

The second solution is to ignore CollectionViewSource, and let WPF create default views internally for us. Because default views don’t sync selection and currency by default, we have to override the default behavior by setting IsSynchronizedWithCurrentItem to true:

<ListBox ItemsSource="{Binding Source={StaticResource mountains}}" DisplayMemberPath="Name" IsSynchronizedWithCurrentItem="True" Name="lb1" />
<ListBox ItemsSource="{Binding Source={StaticResource mountains}, Path=Lifts}" DisplayMemberPath="Name" IsSynchronizedWithCurrentItem="True" Name="lb2" />
<ListBox ItemsSource="{Binding Source={StaticResource mountains}, Path=Lifts/Runs}" IsSynchronizedWithCurrentItem="True" Name="lb3" />

The third solution is to rely simply on the items displayed in the previous ListBox. Binding allows us to link not only to XML and objects, but also to other elements in the logical tree. To accomplish this scenario, we set the ElementName property of Binding to the Name of the source element (instead of setting Binding’s Source property), and the Path to the property of the element we’re interested in.

<ListBox ItemsSource="{Binding Source={StaticResource mountains}}" DisplayMemberPath="Name" Name="lb1" IsSynchronizedWithCurrentItem="True"/>
<ListBox DataContext="{Binding ElementName=lb1, Path=Items}" ItemsSource="{Binding Path=Lifts}" DisplayMemberPath="Name" Name="lb2" IsSynchronizedWithCurrentItem="True"/>
<ListBox DataContext="{Binding ElementName=lb2, Path=Items}" ItemsSource="{Binding Path=Runs}" Name="lb3" IsSynchronizedWithCurrentItem="True"/>

In the markup above, we set the DataContext of the second ListBox to the first ListBox’s Items property. Because DataContext is not expecting a collection, internally the binding engine returns the current item of that collection. We can then bind the ItemsSource to the Lifts property of the current Mountain, which returns the list we want.

This sample uses CLR objects as the data source. When using an XML data source, note that only the third solution above will work (for reasons I won’t go into here).

Here is a screen shot of the completed sample:

WPF Source Code

WPF

UWP/Uno/WinUI Notes

It’s highly recommended that you do NOT use IsSynchronizedWithCurrentItem, as it’s likely to cause runtime errors. The Items collection doesn’t maintain a CurrentItem; bind to SelectedItem on the ListBox/ListView instead. It’s also recommended to use the ListView control rather than the older ListBox control.
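A sketch of the SelectedItem-based chaining for UWP/WinUI (element names are assumptions, mirroring the WPF sample):

```xml
<!-- Sketch: syncing three ListViews via SelectedItem instead of currency -->
<ListView x:Name="lv1" ItemsSource="{Binding Mountains}" DisplayMemberPath="Name" />
<ListView x:Name="lv2" ItemsSource="{Binding SelectedItem.Lifts, ElementName=lv1}" DisplayMemberPath="Name" />
<ListView x:Name="lv3" ItemsSource="{Binding SelectedItem.Runs, ElementName=lv2}" />
```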

UWP/Uno Source Code

UWP
WASM

WinUI with Uno and WinUI Desktop Source Code

WinUI – Desktop

XAML Back to Basics #10: List and Details

List-detail scenario

XAML Basics Series Index Page

The next post in the series originally written by Beatriz Stollnitz. Original post available on Github. Original post used terminology of Master-Detail, which has been changed to List-Detail to more accurately reflect what it represents.

List-detail scenario

In the simplest list-detail scenario, clicking a particular item of an ItemsControl causes the details about that item to be displayed in another control. For example, an application may display a list of customer names in a ListBox, and clicking a particular customer causes TextBlocks to be updated with the address, phone number and date of birth of that customer.

In this post I will use a data source with the planets of the solar system: clicking on the name of a planet in the ListBox causes its picture and information to be displayed in a templated ContentControl. The ListBox plays the role of the list and the ContentControl presents the detail.

In the resources section of the Window, I have an XmlDataProvider with the planet data and a CollectionViewSource with the Source property bound to the provider. Here is the markup for the ListBox bound to the CollectionViewSource:

<!-- list -->
<ListBox ItemsSource="{Binding Source={StaticResource cvs}}" DisplayMemberPath="@Name" Padding="5" Margin="0,0,5,0"/>
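For reference, the resources section described above might look like the following sketch. The cvs key comes from the markup in this post; the planets key, the XPath expression and the abbreviated XML island are illustrative assumptions.

```xml
<Window.Resources>
    <!-- Inline XML island holding the planet data (abbreviated) -->
    <XmlDataProvider x:Key="planets" XPath="/Planets/Planet">
        <x:XData>
            <Planets xmlns="">
                <Planet Name="Mercury">
                    <!-- Orbit, Diameter, Mass, Image elements -->
                </Planet>
                <!-- ... remaining planets ... -->
            </Planets>
        </x:XData>
    </XmlDataProvider>

    <!-- CollectionViewSource wrapping the provider; the ListBox and
         ContentControl both bind to this so they share currency -->
    <CollectionViewSource x:Key="cvs"
                          Source="{Binding Source={StaticResource planets}}"/>
</Window.Resources>
```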

I also need a ContentControl, which is used to display the details of the selected item. The markup below may seem a little strange at first: we are binding a ContentControl (which displays a single item) to a collection of items? (Notice that its Content’s Binding is the same as the Binding in the ListBox’s ItemsSource.) This markup works fine because the data binding engine is smart enough to distinguish between the two targets. When binding an ItemsControl to a collection we get the collection; when binding a ContentControl to a collection we get the current item of that collection. This is what makes the list-detail scenario so simple in WPF.

<!-- detail -->
<ContentControl ContentTemplate="{StaticResource detailTemplate}" Content="{Binding Source={StaticResource cvs}}"/>

To specify how the details of the planet data should be displayed in the ContentControl, we use a DataTemplate. The following markup shows the data-binding specific parts of the DataTemplate. Notice that because I am binding to XML, the Binding is using XPath instead of Path.

<DataTemplate x:Key="detailTemplate">
    (...)
    <Image Source="{Binding XPath=Image, Converter={StaticResource stringToImageSource}}" />
    (...)
    <StackPanel Orientation="Horizontal" Margin="5,5,5,0">
        <TextBlock Text="Orbit: " FontWeight="Bold" />
        <TextBlock Text="{Binding XPath=Orbit}" />
    </StackPanel>
    <StackPanel Orientation="Horizontal" Margin="5,0,5,0">
        <TextBlock Text="Diameter: " FontWeight="Bold"/>
        <TextBlock Text="{Binding XPath=Diameter}" />
    </StackPanel>
    <StackPanel Orientation="Horizontal" Margin="5,0,5,5">
        <TextBlock Text="Mass: " FontWeight="Bold"/>
        <TextBlock Text="{Binding XPath=Mass}" />
    </StackPanel>
    (...)
</DataTemplate>

Here is a screen shot of the completed sample:

WPF Source Code

WPF

Uno Notes

Because the CollectionViewSource isn’t supported across the different Uno platforms, I’ve data-bound the SelectedItem on the ListView to a property on the MainPage, which in turn updates the Content property on the ContentControl. This only applies to the non-UWP platforms.
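The approach described above can be sketched like this. The SelectedPlanet property name is hypothetical, standing in for whatever property the MainPage exposes; detailTemplate is the template from this post.

```xml
<!-- Selection is pushed into a property on the page via a TwoWay binding,
     replacing the CollectionViewSource currency used in the WPF version -->
<ListView ItemsSource="{Binding Planets}"
          SelectedItem="{Binding SelectedPlanet, Mode=TwoWay}"/>

<!-- The detail control binds to the same property, so it updates
     whenever the selection changes -->
<ContentControl ContentTemplate="{StaticResource detailTemplate}"
                Content="{Binding SelectedPlanet}"/>
```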

You’ll also note that unlike in my previous post, where I used an XmlElementConverter, in this example I’ve used an XmlWrapper. This leads to binding expressions that are closer to what’s in the original post, as it allows for traversing the elements and attributes of the XML that’s loaded from the associated data file.

UWP Source Code

UWP

WinUI Notes

Whilst WinUI for Desktop is very close to WPF, it doesn’t include the XmlDataProvider. Similar to the Uno project, we’ve used an embedded XML file instead of the inline data.

WinUI with Uno and WinUI Desktop Source Code

WinUI-Desktop

Fix: WinUI Preview 2 with Visual Studio 2019 Error Creating Project

If you’ve recently upgraded to the latest Visual Studio preview (Preview 2 of VS 16.8) then you may run into issues if you attempt to create a new WinUI for Desktop project. You’ll see a prompt similar to the following and only the packaging project will get created.

The full text from the error is as follows.

-------------------------
Microsoft Visual Studio
-------------------------
A problem was encountered creating the sub project 'MasterDetail.Desktop'. The expression "[Microsoft.Build.Utilities.ToolLocationHelper]::GetPlatformSDKLocation('', 10.0.18362.0)" cannot be evaluated. Parameter "targetPlatformIdentifier" cannot have zero length. C:\Program Files (x86)\Microsoft Visual Studio\2019\Preview\MSBuild\Current\Bin\Microsoft.Common.CurrentVersion.targets

The solution to this issue has been covered in this GitHub issue. The main steps are:

  • Download and extract the Install-Dotnet script
  • Open an elevated command prompt or PowerShell window and run Install-DotNet -version 5.0.100-preview.5.20279.10
  • Open Visual Studio and create an empty solution

Note: If you attempt to simply create a new solution using the Blank App template, you’ll see the same error as before. You need to have the global.json file in the root folder before the WinUI project is created.

  • Add global.json file with the following content to the root folder of the solution (the same folder as the .sln file):
{
  "sdk": {
    "version": "5.0.100-preview.5.20279.10"
  }
}
  • Add project using the “Blank App, Packaged (WinUI in Desktop)” template

At this point you’ll be thinking to yourself that everything is looking good and you’ve successfully created your WinUI project. Unfortunately, when you go to run your project you’ll see an error similar to the following.

Severity Code Description Project File Line Suppression State
Error NETSDK1005 Assets file 'c:\temp\App1\App1\obj\project.assets.json' doesn't have a target for '.NETCoreApp,Version=v5.0'. Ensure that restore has run and that you have included 'net5.0' in the TargetFrameworks for your project. App1 C:\Program Files\dotnet\sdk\5.0.100-preview.5.20279.10\Sdks\Microsoft.NET.Sdk\targets\Microsoft.PackageDependencyResolution.targets 234

If you read the GitHub issue, you’ll note that it very clearly says “you need to use VS 16.7.2”.

Make sure you are using Visual Studio 16.7, not Visual Studio 16.8 Preview 2. The point of this post is that if you have both installed side by side, the installation of Preview 2 will break your ability to run WinUI for Desktop projects. Adding the global.json file to your solution folder will fix this issue.