XAML Control Templates for Windows (UWP) and Platform.Uno

Recently there has been a lot of discussion about using code to declare the user interface of an app, such as a recent post I did following the announcement of SwiftUI by Apple. In the Xamarin.Forms world the hashtag CSharpForMarkup has become the latest distraction. CSharpForMarkup encourages developers to move away from XAML to defining their layouts using C#. Whilst I’m not against this, I do feel that we’re all discussing the wrong problem. Whether you use code or XAML to define your layouts, Xamarin.Forms suffers from being tied to the underlying platform controls. Renderers, Effects and Visual are all mechanisms to compensate for not having XAML control templates for every control (more on templates).

Enter Platform.Uno and their mission to take the XAML framework from Windows (UWP) and adapt it to other platforms. As an alternative for building apps for iOS and Android, Platform.Uno was moderately interesting. With their push to support WebAssembly, Platform.Uno opens up a world of opportunities for building rich applications for the web. Their recent example of publishing the Windows calculator to the web (https://calculator.platform.uno) is just the start for this technology.

In this post we’ll walk through how the XAML templating system works and how to go about defining a Control Template to customise the behaviour of controls.

Note: In this post I’ll be using XAML so that I can use Blend to assist with writing XAML. You can choose to use C# if you prefer defining your layouts in code. The point of this post is to highlight the templating capabilities of UWP and Platform.Uno.

Lookless or Templated Controls

One of the foundations of XAML was the notion of a templated, or lookless, control. In his post “What does it mean to be ‘lookless’?” Dave observes that a lookless control is one where the UI is declared in XAML and the behaviour or functionality of the control is defined in code. My view is that a lookless control is one where the functionality of the control isn’t tied to a specific design. By this I mean that each aspect of the control should be defined using a template, thus making it a templated control. A developer using the control should be able to adjust any of the templates without affecting the functionality of the control.

What is a Button?

Let’s look at what this means in practice by examining a Button. If I asked you what you thought a Button in UWP is, you’d probably answer by writing the following element:

<Button Content="Press Me!" />

All you’ve done is declared a Button with content that says “Press Me!” that uses the default Button styles. You haven’t actually answered the question. So let’s start with a simple definition of a button.

A Button is an element that responds to the user tapping or clicking by performing some action.

However, in the context of UWP where a Button is an actual control, this definition is a little too vague. For example, the following Border/TextBlock combination matches the definition:

<Border Background="Gray" Tapped="PressMeTapped">
    <TextBlock Text="Press Me!"/>
</Border>

Tapping on the Border does invoke the PressMeTapped event handler, but is that all we mean when we say a Button? I would argue that one of the significant aspects of a Button is that it gives some visual cues to the user. By this I mean that when the user hovers their mouse over a button they can see the mouse pointer change, or the Button adjust how it looks. Similarly, when the user taps or clicks on the Button they should get some visual feedback confirming their action. This feedback is what separates a Button from just simply a tap or click event handler that can be attached to any element.

What we’ve just observed is that the Button has a number of different visual states. In the context of a UWP button these are the Normal, PointerOver and Pressed states. Using the default Button style in the Light theme, these states are shown in the following image.

States in the Control Template for the UWP Button
UWP Button States

As part of extending our definition of a Button I was careful not to define what feedback, or change to layout, should be used for each state of the Button. UWP ships with Dark and Light themes and a set of default control styles that combine to give you the states shown above. However, the important thing to remember is that the visual appearance isn’t what defines a Button. Perhaps an updated definition might be something like:

A Button is an element that responds to the user tapping or clicking by performing some action. It has Normal, PointerOver, Pressed and Disabled visual states that can provide feedback and guide the user on how to interact with the control.

Content v Text Properties

Xamarin.Forms deviates from most other XAML platforms when it comes to defining elements and attributes. For example, instead of using a StackPanel, Xamarin.Forms has a StackLayout. Not only are they named differently, they also exhibit different behaviour. Similarly, the Button control has a Text property instead of a Content property. Whilst this might seem like a decision to simplify the Button control it highlights the limitation of Xamarin.Forms due to the lack of templating.

Let’s back up for a second and take a look at the Content property. In a lot of cases you might simply set the Content attribute to a string, such as the earlier example where I set the Content to “Press Me!”. However, to override the styling of the text, or add other content to the Button, I can set the Content property explicitly.

<Button>
    <StackPanel>
        <TextBlock Style="{StaticResource HeaderTextBlockStyle}"
                   Text="Press Me!" />
        <TextBlock Style="{StaticResource BodyTextBlockStyle}"
                   Text="or.... perhaps not" />
    </StackPanel>
</Button>

Note: As the Content property on the Button is annotated with the ContentPropertyAttribute, we don’t need to wrap the StackPanel in Button.Content tags.
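
As a rough sketch of how that mechanism works, a custom control can nominate its own content property using the same attribute (the control and property names here are hypothetical):

[ContentProperty(Name = nameof(Body))]
public class CalloutControl : Control
{
    // The XAML parser assigns any direct child elements to this property,
    // so <local:CalloutControl><TextBlock ... /></local:CalloutControl> sets Body.
    public object Body { get; set; }
}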

Even though I’ve changed the content of the Button, I haven’t in any way changed the way the Button functions. In fact, in this case I haven’t even changed how the Button changes appearance for the different states.

Using a Content Template

Before we get into customising the different Button states I just wanted to touch on the use of content templates. In the previous example I showed how the Content of the Button could be set to any arbitrary XAML. What happens if I want to reuse the content layout but just with different text? This is where the ContentTemplate property is useful.

The ContentTemplate property expects a DataTemplate which will be used to define the layout for the Content. At runtime the DataContext of the ContentTemplate will be set to the value of the Content property. Let’s see this in practice by moving the XAML from the Content property into the ContentTemplate.

<Button>
  <Button.Content>
    <local:ButtonData Heading="Press Me!" SubText="or.... perhaps not" />
  </Button.Content>
  <Button.ContentTemplate>
    <DataTemplate>
      <StackPanel>
        <TextBlock Style="{StaticResource HeaderTextBlockStyle}"
                   Text="{Binding Heading}" />
        <TextBlock Style="{StaticResource BodyTextBlockStyle}"
                   Text="{Binding SubText}" />
      </StackPanel>
    </DataTemplate>
  </Button.ContentTemplate>
</Button>

In this code block you can see that the Content property has been set to a new instance of the ButtonData class. The two TextBlock elements are subsequently data bound to the Heading and SubText properties.

public class ButtonData
{
    public string Heading { get; set; }

    public string SubText { get; set; }
}

What this shows is that we’ve separated the data (i.e. the instance of the ButtonData class) from the presentation (i.e. the DataTemplate that is specified for the ContentTemplate property). Now if we wanted to reuse the DataTemplate across multiple elements we can extract it as a StaticResource. The following code illustrates extracting the DataTemplate to a StaticResource, as well as updating the data binding syntax. The x:Bind syntax gives us strongly typed data binding, meaning we get IntelliSense and better runtime performance as there are no reflection calls. Button1 and Button2 are properties on the page that return instances of the ButtonData class.

<Page.Resources>
    <DataTemplate x:Key="MyButtonTemplate"
                  x:DataType="local:ButtonData">
        <StackPanel>
            <TextBlock Style="{StaticResource HeaderTextBlockStyle}"
                       Text="{x:Bind Heading}" />
            <TextBlock Style="{StaticResource BodyTextBlockStyle}"
                       Text="{x:Bind SubText}" />
        </StackPanel>
    </DataTemplate>
</Page.Resources>
<Button Content="{x:Bind Button1}"
        ContentTemplate="{StaticResource MyButtonTemplate}" />
<Button Content="{x:Bind Button2}"
        ContentTemplate="{StaticResource MyButtonTemplate}" />
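
For completeness, here’s a sketch of the code-behind that exposes the Button1 and Button2 properties consumed by x:Bind (the page class name and the text values are placeholders):

public sealed partial class MainPage : Page
{
    public MainPage()
    {
        InitializeComponent();
    }

    // Properties referenced by the x:Bind expressions in the XAML above
    public ButtonData Button1 { get; } = new ButtonData { Heading = "Press Me!", SubText = "or.... perhaps not" };

    public ButtonData Button2 { get; } = new ButtonData { Heading = "Press Me Too!", SubText = "with different text" };
}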

Custom Control Templates

So far we’ve seen how the Button control supports arbitrary Content and the use of ContentTemplates to reuse layouts across multiple controls. Neither of these can be used to adjust the visual appearance of the different Button states. In this section we’re going to start by looking at what properties you can adjust to tweak the existing states. We’ll then bust out Blend and start messing with the way a Button looks. All of this will be done without changing the functionality of the Button.

ThemeResources in the Control Template

In the previous section I added some arbitrary XAML to the Content of the Button. However, when I run the application I still see the default gray background. The background of the Button control is part of the default Template for the control. We’ll delve into this in a little more detail in the next sections. For the time being we’re going to adjust the colour of the background and round the corners on the border.

Adjusting the background of the Button when it’s in the Normal state can be done two ways. You can either specify the Background on the Button element itself, or you can override the ButtonBackground resource with the colour of your choosing.

<Page.Resources>
    <Color x:Key="ButtonBackground">DarkGreen</Color>
</Page.Resources>

<Button Content="{x:Bind Button1}"
        Background="Blue"
        ContentTemplate="{StaticResource MyButtonTemplate}" />
<Button Width="200"
        Height="200"
        HorizontalAlignment="Center"
        Content="{x:Bind Button2}"
        ContentTemplate="{StaticResource MyButtonTemplate}" />

At this point if you run the application on UWP you’ll see that the initial colours for the two Button elements are blue and dark green respectively. However, if you move the mouse over each of them you’ll see that they switch back to a grey background. This is because we’ve only adjusted the colour used to set the initial background colour. To adjust the colour for the PointerOver and Pressed states we need to adjust the appropriate resources, which are aptly called ButtonBackgroundPointerOver and ButtonBackgroundPressed. A full list of the overridable colours is available in the documentation for the Button control. Alternatively you can find the entire list of built-in colour resources in the ThemeResources.xaml file (more information here).

Button Colors in ThemeResources.xaml
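
Following the same pattern as the ButtonBackground override above, here’s a sketch of also overriding the PointerOver and Pressed resources (the colour values are just placeholders):

<Page.Resources>
    <Color x:Key="ButtonBackground">DarkGreen</Color>
    <Color x:Key="ButtonBackgroundPointerOver">ForestGreen</Color>
    <Color x:Key="ButtonBackgroundPressed">LimeGreen</Color>
</Page.Resources>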

In addition to changing the background colour, we also wanted to round the corners of the border that appears around the control when the user moves the mouse over it. This can be done by simply setting the CornerRadius property to 10 on the Button element.
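
For example, a sketch extending the second Button from the earlier snippet:

<Button Width="200"
        Height="200"
        CornerRadius="10"
        HorizontalAlignment="Center"
        Content="{x:Bind Button2}"
        ContentTemplate="{StaticResource MyButtonTemplate}" />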

When we get to talking about visual states we’ll go through how properties on the Button control are modified with each visual state.

Changing the Control Template Layout

Sometimes it’s necessary to make changes that can’t be achieved by either setting the ContentTemplate or overriding the default colour resources. For example, say you wanted to add the Built to Roam logo as a watermark behind the content of the Button. To do this we need to override the Template for the Button. For this I’m going to use Blend as it’s particularly good at editing templates.

To begin editing the control template, locate the Button in the Objects and Timelines window. Right-click on the [Button] and select Edit Template, followed by Edit a Copy.

Editing a Copy of the Default Button Template

Next, you’ll be prompted to give the new template a name and specify where you want the template to be defined. When the new template is created it will include the default Button Template, thus allowing you to customise it. In this case the WatermarkButtonTemplate will be defined in the current document, limiting its availability to just the Button elements on the page.

Naming the New Control Template

Note: In this example we’re copying the default Button template. However, what gets copied into the XAML is actually the entire default Button style. This includes all the default property setters. The value of the Template property is set to a new instance of a ControlTemplate.
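
In other words, what lands in your XAML is roughly the following shape (heavily trimmed here; the real copy contains all of the default setters):

<Style x:Key="WatermarkButtonTemplate" TargetType="Button">
    <Setter Property="Background" Value="{ThemeResource ButtonBackground}" />
    <!-- ... the remaining default property setters ... -->
    <Setter Property="Template">
        <Setter.Value>
            <ControlTemplate TargetType="Button">
                <!-- the template content shown below -->
            </ControlTemplate>
        </Setter.Value>
    </Setter>
</Style>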

I’m not going to go through the changes to the template in detail but the summary is:

  • Wrap the ContentPresenter in a Grid called ContentFrame
  • Move Background, BackgroundSizing, BorderBrush, BorderThickness and CornerRadius from the ContentPresenter to ContentFrame
  • Adjust Visual States so that they reference ContentFrame for border and background properties. Leave any references to Foreground targeting the ContentPresenter element.
  • Add an Image inside the Grid with Source set to an image added to the Assets folder.
  • Set Opacity to 20% (0.2) and Stretch to Uniform on the Image.

Before:

<ControlTemplate TargetType="Button">
    <ContentPresenter
        x:Name="ContentPresenter"
        Padding="{TemplateBinding Padding}"
        HorizontalContentAlignment="{TemplateBinding HorizontalContentAlignment}"
        VerticalContentAlignment="{TemplateBinding VerticalContentAlignment}"
        AutomationProperties.AccessibilityView="Raw"
        Background="{TemplateBinding Background}"
        BackgroundSizing="{TemplateBinding BackgroundSizing}"
        BorderBrush="{TemplateBinding BorderBrush}"
        BorderThickness="{TemplateBinding BorderThickness}"
        Content="{TemplateBinding Content}"
        ContentTemplate="{TemplateBinding ContentTemplate}"
        ContentTransitions="{TemplateBinding ContentTransitions}"
        CornerRadius="{TemplateBinding CornerRadius}">
        <VisualStateManager.VisualStateGroups>                
        ...
        </VisualStateManager.VisualStateGroups>
    </ContentPresenter>
</ControlTemplate>

After:

<ControlTemplate TargetType="Button">
    <Grid
        x:Name="ContentFrame"
        Padding="{TemplateBinding Padding}"
        Background="{TemplateBinding Background}"
        BackgroundSizing="{TemplateBinding BackgroundSizing}"
        BorderBrush="{TemplateBinding BorderBrush}"
        BorderThickness="{TemplateBinding BorderThickness}"
        CornerRadius="{TemplateBinding CornerRadius}">
        <Image
            Opacity="0.2"
            Source="Assets/BuiltToRoamLogo.png"
            Stretch="Uniform" />
        <ContentPresenter
            x:Name="ContentPresenter"
            HorizontalContentAlignment="{TemplateBinding HorizontalContentAlignment}"
            VerticalContentAlignment="{TemplateBinding VerticalContentAlignment}"
            AutomationProperties.AccessibilityView="Raw"
            Content="{TemplateBinding Content}"
            ContentTemplate="{TemplateBinding ContentTemplate}"
            ContentTransitions="{TemplateBinding ContentTransitions}"
            FontFamily="Segoe UI" />
        <VisualStateManager.VisualStateGroups>
        ...
        </VisualStateManager.VisualStateGroups>
    </Grid>
</ControlTemplate>

The net result is that the image is displayed on top of the background colour (which is set on the parent Grid) but beneath the content.

The important thing about defining this ControlTemplate is that it can be reused on any Button, irrespective of the Content that is being displayed. Again, these changes have affected the design of the Button but we still haven’t modified what happens when the user interacts with the Button.

Tweaking Control Template Visual States

The last set of changes we’re going to look at is how we modify the behaviour of the Button when the user does interact with it. In this case we’re going to look at the Pressed state. We’re going to change the default behaviour to slightly enlarge the Button when it’s in the Pressed state. To do this we need to modify the Pressed VisualState that is defined inside the ControlTemplate.

Using Blend we’ll step through adjusting the Pressed Visual State to expand the Button to 150% of its original size (i.e. a scale of 1.5 in both X and Y directions). To get started, in the Objects and Timeline window, right-click the Button element and select Edit Template, then Edit Current. This is similar to what we did previously but this time we already have a ControlTemplate in the XAML for the page.

Edit an Existing Control Template

Next, from the States window, select the Pressed state. If you click where the word Pressed is written you should see both the focus border around the Pressed state and a red dot appear alongside, indicating that Blend is in recording mode. You should also see a red border appear around the main design surface. If you look at the Button on the design surface it should appear like it’s been Pressed to illustrate what the Pressed state looks like.

Switching to the Pressed state

In the Properties window, locate the Transform section. Select the third icon in, which is for adjusting the Scale of the element. Set both X and Y values to 1.5. You should see a white square alongside each of the values indicating that you’ve explicitly set the value.

Adjusting the Scale Transform for X and Y

If you go back to the Objects and Timeline window you should see that under the ContentFrame there are entries for RenderTransform/ScaleX and RenderTransform/ScaleY to indicate that these properties have been modified for the currently selected visual state.

Visual State Properties

If you run the application now and click on the Button you should see that whilst the mouse button is depressed the Button moves to the Pressed state which shows the Button enlarged.

Enlarged Pressed State for Button

If you examine the change to the XAML, what you’ll see is that two Setters were added to the Pressed Visual State.

<VisualState x:Name="Pressed">
    <VisualState.Setters>
        <Setter Target="ContentFrame.(UIElement.RenderTransform).(CompositeTransform.ScaleX)" Value="1.5" />
        <Setter Target="ContentFrame.(UIElement.RenderTransform).(CompositeTransform.ScaleY)" Value="1.5" />
    </VisualState.Setters>
    <Storyboard>
        ... other existing animations ...
    </Storyboard>
</VisualState>
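
For these setters (and the transition animations that follow) to have something to target, the ContentFrame element in the template also needs a CompositeTransform as its RenderTransform. Blend normally adds this for you when you record the transform; if it’s missing, it looks something like this (the RenderTransformOrigin value is an assumption so that the Button scales from its centre):

<Grid x:Name="ContentFrame"
      RenderTransformOrigin="0.5,0.5">
    <Grid.RenderTransform>
        <CompositeTransform />
    </Grid.RenderTransform>
    <!-- ... Image and ContentPresenter as shown earlier ... -->
</Grid>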

Animating Content Template State Transitions

The change we just made to the Control Template was to change the Scale X and Scale Y properties for the Pressed state. However, this is a rather jarring experience as there’s no transition or animation defined. The final step in this process is to add a transition both to and from the Pressed state.

In the States window, click the arrow with a plus sign and select the first item (i.e. *=>Pressed) to add a transition that will be invoked whenever the Button goes to the Pressed state, regardless of the initial state.

Adding Transition into Pressed State

If you look in the Objects and Timeline window you can now see a timeline that will define what animations are run during the transition.

Start of the Transition Timeline

Over in the Properties window, locate the Transform section and click on the empty square alongside the X property. Select Convert to Local Value, which will set the value to 1. The square will now appear as a solid white square. Repeat this for Scale Y.

Setting the Start of the Transition

Back in the Objects and Timeline window you should now see a keyframe marker appear at time 0. Drag the vertical orange line to 0.5 seconds.

End of Transition in Timeline

Now in the Properties window set the Scale X and Scale Y to 1.5, which are the end values for the transition. These should match the values previously set for the Pressed state.

End Transition Values

Repeat this process for a transition for leaving the Pressed state. The transition should be the reverse with the Scale X and Y starting at 1.5 and ending at 1.

Transitions In and Out of Pressed State

And there you have it, an animated Pressed state.

Animated Pressed State in Control Template

The final XAML for the transitions includes two Storyboards that define the transition to and from the Pressed state.

<VisualStateGroup.Transitions>
    <VisualTransition GeneratedDuration="00:00:00"
                      To="Pressed">
        <Storyboard>
            <DoubleAnimationUsingKeyFrames Storyboard.TargetName="ContentFrame"
                                           Storyboard.TargetProperty="(UIElement.RenderTransform).(CompositeTransform.ScaleX)">
                <EasingDoubleKeyFrame KeyTime="00:00:00"
                                      Value="1" />
                <EasingDoubleKeyFrame KeyTime="00:00:00.5000000"
                                      Value="1.5" />
            </DoubleAnimationUsingKeyFrames>
            <DoubleAnimationUsingKeyFrames Storyboard.TargetName="ContentFrame"
                                           Storyboard.TargetProperty="(UIElement.RenderTransform).(CompositeTransform.ScaleY)">
                <EasingDoubleKeyFrame KeyTime="00:00:00"
                                      Value="1" />
                <EasingDoubleKeyFrame KeyTime="00:00:00.5000000"
                                      Value="1.5" />
            </DoubleAnimationUsingKeyFrames>
        </Storyboard>
    </VisualTransition>
    <VisualTransition GeneratedDuration="00:00:00"
                      From="Pressed">
        <Storyboard>
            <DoubleAnimationUsingKeyFrames Storyboard.TargetName="ContentFrame"
                                           Storyboard.TargetProperty="(UIElement.RenderTransform).(CompositeTransform.ScaleX)">
                <EasingDoubleKeyFrame KeyTime="00:00:00"
                                      Value="1.5" />
                <EasingDoubleKeyFrame KeyTime="00:00:00.5000000"
                                      Value="1" />
            </DoubleAnimationUsingKeyFrames>
            <DoubleAnimationUsingKeyFrames Storyboard.TargetName="ContentFrame"
                                           Storyboard.TargetProperty="(UIElement.RenderTransform).(CompositeTransform.ScaleY)">
                <EasingDoubleKeyFrame KeyTime="00:00:00"
                                      Value="1.5" />
                <EasingDoubleKeyFrame KeyTime="00:00:00.5000000"
                                      Value="1" />
            </DoubleAnimationUsingKeyFrames>
        </Storyboard>
    </VisualTransition>
</VisualStateGroup.Transitions>

As you can see from this walkthrough, the XAML templating system is relatively sophisticated. It allows for multiple levels of configuration or tweaking in order to get it looking just right.

In coming posts I’ll be talking more about what differentiates XAML and why it’s still being used across Windows, Xamarin.Forms and of course Platform Uno. If you’re looking for cross platform solutions, please don’t hesitate to contact me or anyone on the team at Built to Roam.

Navigate Flutter Apps with Routes

One of the most important aspects of an app is the flow or journey that the user takes through the app. Apps are often described in terms of pages, or screens, and navigating between them. In this post I’m going to cover dividing your application into routes, and how to work with the Flutter navigation system.

Whilst a Flutter app is constructed out of widgets, there still needs to be a mechanism for the app to respond to user interaction. For example, tapping on a row in a list of items might display the details of that item. Subsequently tapping on the back button should return the user to the list of items. In order to support navigating between different pages, Flutter includes the Navigator class. According to the documentation the Navigator class is:

 A widget that manages a set of child widgets with a stack discipline. 

For someone who’s just got their head around how to lay out widgets, this statement makes no sense. In this post we’re going to ignore this definition and cover the basics of how to navigate between pages. I’m also going to ignore the Flutter definition of a Route. For now we’ll assume that a route is roughly equivalent to a page or a screen in your app. As you can imagine, all but the most basic apps have multiple pages that the user is able to navigate between. Flutter apps are no different, except that you navigate Flutter apps with routes.

Flutter Navigation

Let’s get into navigating between routes. We’ll start with almost the most basic Flutter app you can create. Technically you can create a Flutter app that doesn’t start with the MaterialApp, but then you’re left doing a lot of heavy lifting yourself. The following code sets up a basic app made up of two classes that inherit from StatelessWidget. I could have combined these into a single widget by simply setting the value of the home attribute in the MaterialApp to be the new Container widget. However, I’ve created a separate widget, FirstNoRoutePage, to help make it easy to see how the pages of the app are defined.

void main() => runApp(MyNoRouteNavApp());

class MyNoRouteNavApp extends StatelessWidget {
  @override
  Widget build(BuildContext context) {
    return MaterialApp(
      home: FirstNoRoutePage(),
    );
  }
}

class FirstNoRoutePage extends StatelessWidget {
  @override
  Widget build(BuildContext context) {
    return Container(
      alignment: Alignment.center,
      child: Text('First Page'),
    );
  }
}

Ad-Hoc Routes with Navigator.push()

Now that we have the basic app structure, it’s time to navigate to our first route. We’re going to use the Navigator widget to push a newly created, or ad-hoc, route onto the stack. The following code:

  • Wraps the Text widget in a FlatButton
  • The onPressed calls the push method on the Navigator, passing in a newly constructed MaterialPageRoute.
  • The MaterialPageRoute builder is set to return a new instance of the SecondNoRoutePage widget

class FirstNoRoutePage extends StatelessWidget {
  @override
  Widget build(BuildContext context) {
    return Container(
      alignment: Alignment.center,
      child: FlatButton(
        child: Text(
          'First Page',
          style: TextStyle(color: Colors.white),
        ),
        onPressed: () {
          Navigator.push(
              context,
              MaterialPageRoute(
                builder: (BuildContext context) => SecondNoRoutePage(),
              ));
        },
      ),
    );
  }
}

class SecondNoRoutePage extends StatelessWidget {
  @override
  Widget build(BuildContext context) {
    return Container(
      alignment: Alignment.center,
      child: Text(
        'Second Page',
        style: TextStyle(color: Colors.white),
      ),
    );
  }
}

We now have an application that has two pages that the user can navigate between. On the first page, clicking on the “First Page” text will push a new route that shows the second page (i.e. the SecondNoRoutePage widget). Pressing the device back button (Android only) will navigate the user back to the first page. It does this by popping the route off the stack maintained by the Navigator.

Go Back with Navigator.pop()

For the purpose of this post I’ve kept the pages simple, showing only elements necessary to show which page the user is on, or to allow the user to perform an action. I’ve chosen not to use a Scaffold, which means no AppBar. This also means no back button shown in the AppBar. On iOS this makes it impossible for the user to navigate to the previous page as there is no dedicated back button, unlike Android. If you use a Scaffold in your widget, you’ll see the AppBar. The AppBar adjusts to include a back button if there’s more than one route on the stack.
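
If you do want the automatic back button, a sketch of the second page wrapped in a Scaffold might look like this (the AppBar title is a placeholder):

class SecondNoRoutePage extends StatelessWidget {
  @override
  Widget build(BuildContext context) {
    return Scaffold(
      // The AppBar shows a back button automatically when there is
      // more than one route on the Navigator's stack.
      appBar: AppBar(title: Text('Second Page')),
      body: Container(
        alignment: Alignment.center,
        child: Text('Second Page'),
      ),
    );
  }
}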

Flutter provides built in support for navigating back to the previous route via the AppBar back button or, in the case of Android, the device back button. In addition, the pop method on the Navigator can be used to pop the current route off the stack. This will return the user to the previous route.

class SecondNoRoutePage extends StatelessWidget {
  @override
  Widget build(BuildContext context) {
    return Container(
      alignment: Alignment.center,
      child: FlatButton(
        child: Text(
          'Second Page',
          style: TextStyle(color: Colors.white),
        ),
        onPressed: () {
          Navigator.pop(context);
        },
      ),
    );
  }
}

What’s the Current Route?

Earlier in the post we created a new route when navigating to the second page of the app. However, what we glossed over is the fact that the first page of the app is also a route. To show this, let’s add some code that adds the name of the current route to both pages. This will highlight where in the Flutter navigation stack the user is currently at.

class FirstNoRoutePage extends StatelessWidget {
  @override
  Widget build(BuildContext context) {
    var route = ModalRoute.of(context).settings.name;
    return Container(
      alignment: Alignment.center,
      child: FlatButton(
        child: Text(
          'First Page - $route',
          style: TextStyle(color: Colors.white),
        ),
        onPressed: () {
          Navigator.push(
              context,
              MaterialPageRoute(
                builder: (BuildContext context) => SecondNoRoutePage(),
              ));
        },
      ),
    );
  }
}

class SecondNoRoutePage extends StatelessWidget {
  @override
  Widget build(BuildContext context) {
    var route = ModalRoute.of(context).settings.name;
    return Container(
      alignment: Alignment.center,
      child: FlatButton(
        child: Text(
          'Second Page - $route',
          style: TextStyle(color: Colors.white),
        ),
        onPressed: () {
          Navigator.pop(context);
        },
      ),
    );
  }
}

What gets displayed on the two pages are ‘First Page – /’ and ‘Second Page – null’. This indicates that the first route is assigned a name of ‘/’, whilst the second route does not have a name (i.e. null value). The first route is created from the home property being set on the MaterialApp. It is assigned a name equal to the defaultRouteName constant from the Navigator class (i.e. ‘/’) to indicate it is the entry point for the application.

As we created the route for the second page, it’s our responsibility to name the route. In the above scenario we’re creating the route at the point where it is required (i.e. when navigating to the second page), and as such we don’t need to give it a name. That is, unless we want to see what the current route is by inspecting its name. Let’s update the call to Navigator.push to include a name for the second route.

onPressed: () {
  Navigator.push(
      context,
      MaterialPageRoute(
        builder: (BuildContext context) => SecondNoRoutePage(),
        settings: RouteSettings(name: '/second')
      ));
},

There’s no requirement for the route name to include the ‘/’. However, it is a convention to do so, and there are some scenarios where the ‘/’ has some implied behaviour.

Named Routes in Flutter Navigation

In the previous section we added a name to the ad-hoc route that we created. An alternative to creating routes as they’re required, is to define the routes for the app up front. For example the following code shows three routes that have been defined for an app and can be used for Flutter navigation.

class Routes {
  static const String firstPage = '/';
  static const String secondPage = '/second';
  static const String thirdPage = '/third';
}

class MyNavApp extends StatelessWidget {
  @override
  Widget build(BuildContext context) {
    return MaterialApp(
      routes: {
        Routes.firstPage: (BuildContext context) => FirstPage(),
        Routes.secondPage: (BuildContext context) => SecondPage(),
        Routes.thirdPage: (BuildContext context) => ThirdPage(),
      },
    );
  }
}

The Routes class includes the names of each of the three routes. Inside the constructor of the MaterialApp widget the three routes have been declared. Each route is an association between the route name and the builder method that’s used to construct the widget.

Navigator.pushNamed()

When the user clicks the button, the Navigator.pushNamed method is used to navigate to the corresponding route:

onPressed: () {
  Navigator.pushNamed(context, Routes.secondPage);
},

Skipping Over Routes with Navigator.popUntil

Using named routes not only makes it easier to manage the pages in your app, it also means that you can create conditional logic that is dependent on the route name. For example, the Navigator.popUntil method allows you to keep popping routes off the stack until the predicate returns true. In the following code the predicate looks at the name property of the route to determine whether it is the first page of the app.

class ThirdPage extends StatelessWidget {
  @override
  Widget build(BuildContext context) {
    return Container(
      alignment: Alignment.center,
      child: FlatButton(
        child: Text(
          'Third Page',
          style: TextStyle(color: Colors.white),
        ),
        onPressed: () {
          Navigator.popUntil(context, (r) => r.settings.name == Routes.firstPage);
        },
      ),
    );
  }
}

Setting the Initial Route

Initially the first page of the app was defined by setting the home property. This was used to define the initial route for the app. When the app was updated to use a set of predefined routes, the initial route was inferred by looking for the route where the name matches the defaultRouteName (i.e. ‘/’). If a route with that name isn’t found, an error will be raised and the app will fail to start.

It is also possible to set the initialRoute property on the MaterialApp. However, this doesn’t negate the need to set either the home property, or have a route declared with a name of ‘/’. What’s really interesting about the initialRoute property is that it can be used to launch the app on a route that has other routes in the navigation stack. For example in the following code the initialRoute is set to the thirdPage, which is defined as ‘/second/third’. When this app launches, it will launch showing ThirdPage but if the user taps the back button, they will go to SecondPage and then FirstPage if they tap again.

class Routes {
  static const String firstPage = '/';
  static const String secondPage = '/second';
  static const String thirdPage = '/second/third';
}

class MyNavApp extends StatelessWidget {
  @override
  Widget build(BuildContext context) {
    return MaterialApp(
      initialRoute: Routes.thirdPage,
      routes: {
        Routes.firstPage: (BuildContext context) => FirstPage(),
        Routes.secondPage: (BuildContext context) => SecondPage(),
        Routes.thirdPage: (BuildContext context) => ThirdPage(),
      },
    );
  }
}

On app startup the initialRoute is split on ‘/’ and then any routes that match the segments are navigated to. In this case the app navigates to FirstPage, then SecondPage and finally ThirdPage.

Intercepting Back Button

The last topic for this post looks at how to intercept the back button. This can be done by including the WillPopScope widget and returning an appropriate value in the onWillPop callback.

class ThirdPage extends StatelessWidget {
  @override
  Widget build(BuildContext context) {
    return WillPopScope(
      onWillPop: () {
        return new Future.value(true);
      },
      child: Container(
        alignment: Alignment.center,
        child: Text('Third Page'),
      ),
    );
  }
}

Note that the onWillPop expects a Future to be returned, allowing you to return an asynchronous result. In this case the code is simply returning the value of true to allow the current route to be popped.
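
Because onWillPop is asynchronous, you can also defer the decision, for example by asking the user to confirm before leaving. A sketch (the dialog text and widget choices are placeholders):

onWillPop: () async {
  // Ask the user before allowing the route to be popped.
  final shouldLeave = await showDialog<bool>(
    context: context,
    builder: (context) => AlertDialog(
      title: Text('Leave this page?'),
      actions: <Widget>[
        FlatButton(
          child: Text('Stay'),
          onPressed: () => Navigator.pop(context, false),
        ),
        FlatButton(
          child: Text('Leave'),
          onPressed: () => Navigator.pop(context, true),
        ),
      ],
    ),
  );
  // Popping is only allowed when the user confirms.
  return shouldLeave ?? false;
},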

Summary of Flutter Navigation

This post on Flutter navigation has covered a number of the navigation related methods on the Navigator class. As part of defining how your app looks, you should define all the routes and how the user will navigate between them. If you do this early in the development process you’ll be able to validate the flow of your app as you continue development.

Decompilers for .NET and Windows (UWP) Apps

I think I’ve been living under a rock, as I’ve only just come across dnSpy, a decompiler for .NET!

I’ve been building apps and services with .NET for a long time, so a Twitter thread talking about decompilers amused me. David Kean’s comment pretty accurately reflects my sentiment regarding Reflector.

Reflector was such a simple tool and it just worked. That was until Red Gate took over and I can’t even remember what happened to it. Does it still exist?

The thread started with Jared commenting that developers should be using ilSpy instead of hacking with ildasm. I’ve rocked ilSpy in my toolkit for a while now and it’s always served me well. In fact I go so far as to set it as the default file handler for .dll files. After all, what other program are you going to want to launch when you double-click on a .dll file?

dnSpy: A Decompiler for .NET

What really knocked my socks off was that someone mentioned dnSpy, which I’d never heard of. Thinking it was something similar to ilSpy, I didn’t think much of it but figured I would download it and take a look.

Next thing I knew it was like I had opened an entire development environment. The layout was familiar, all the way down to the Debug menu that allows me to attach to process.

What? Come again? Attach to Process? Yes, that’s right, you can attach to a running process, set breakpoints and intercept exceptions (and I’m sure a whole bunch more things). Ironically, whilst I was attempting to modify an image of dnSpy running, Paint.NET crashed on me. Whilst Paint.NET was hung, I was able to attach to the process using dnSpy and see that it was stuck waiting for a print dialog to return.

dnSpy: a Decompiler for .NET

If you’re using ilspy, you should check out dnspy

For anyone still using ildasm, you should check out ilspy and dnspy

Developers not using a decompiler, what have you been doing? Get yourself a decompiler for .NET, and use either ilspy or dnspy.


Nick Randolph @thenickrandolph
If you need help debugging your application, contact Built to Roam


Fiddler Debugging Http/Https Traffic for Xamarin iOS, Android and Windows (UWP) Applications

Debugging Http/Https Traffic using Fiddler for Xamarin iOS, Android and Windows (UWP) Applications

One of the most frustrating things as a frontend developer is when you are receiving incorrect data. You don’t know whether the problem lies with your application or the backend services. The easiest way to validate this is to pretend to be a hacker and stage a man-in-the-middle attack on your own application. Tools like Fiddler or Charles can be used to inspect the traffic from your application. Unfortunately, the same effort that goes into protecting apps from such attacks also means that it is harder for developers to set up Fiddler debugging.

In this post I’ll walk through setting up Fiddler debugging for a Xamarin.Forms application. The same basic approach will work for a native or Xamarin iOS/Android application as well. For this post I’ve created an application using the Blank Xamarin.Forms template that comes with Visual Studio 2019. I’ve selected to target all three platforms.

In the OnAppearing method in the MainPage of the Xamarin.Forms application, I’ve added some basic code to retrieve a string for a Https endpoint. We’ll use a Https endpoint on the assumption that if we can intercept Https then we can also intercept Http traffic.

protected override async void OnAppearing()
{
     base.OnAppearing();
     var client = new HttpClient();
     var users = await client.GetStringAsync("https://reqres.in/api/users?page=0");
     Debug.WriteLine(users);
}

Setting Up Fiddler Debugging

Before we get started with the individual platforms, it’s worth checking your configuration for Fiddler:

  • You need to make sure you have set up Https traffic decryption. I’m not going to repeat the documentation, so check out how to Configure Fiddler to Decrypt HTTPS Traffic.
  • You’ll also need to set up Fiddler debugging for remote traffic. This can be done by opening the Options window. Under the Connections tab, make sure the “Allow remote computers to connect” checkbox is checked. If you’re running Skype, or some other communication tools, it may have jumped onto the default port configured in Fiddler. In this case you can adjust the “Fiddler listens on port” value.
Fiddler Debugging Options Window
Enabling remote connection in Fiddler

Windows (UWP)

I’ll start with Windows, because it’s relatively straightforward to get set up. With Fiddler running, you can run the UWP project from within Visual Studio and Fiddler debugging will work. Fiddler will capture the Https traffic without any further configuration or setup.

Fiddler Debugging
Capturing network traffic in Fiddler

In some cases, you need to make sure your application is added to the exemption list, so that traffic can be routed to the local machine. For more information see the blog post Revisiting Fiddler and Win8+ Immersive applications. To check this, click the WinConfig button in the top left corner of the Fiddler interface.

WinConfig
Click WinConfig to adjust the exemption list for Windows 10 apps

Locate your application and confirm that there is a check in the box next to your application. If it’s not checked, check the box, and then click Save Changes.

Exemption Utility
Locate your UWP application and check the box to add your application to exemption list

One other issue I’ve seen but can’t reproduce reliably, is that sometimes when you run your application from Visual Studio it unchecks the box in the WinConfig in Fiddler. If for some reason you no longer see traffic in Fiddler for your application, go back and double check the exemption list in Fiddler.

Diagnosing Issues with Fiddler on Windows

When configuring Https traffic decryption, you’ll be prompted to install and trust the Fiddler root certificate. If you don’t accept all the prompts, Fiddler debugging for Windows applications will not work. Windows traffic debugging works without any further configuration because the Fiddler root certificate is trusted on the machine running Fiddler.

If you’re attempting to run the Windows application on a different machine than the one running Fiddler, you’ll need to install and trust the Fiddler root certificate. This is in addition to setting the remote machine as a proxy, which is a topic for another post. Navigate to http://[ipaddress]:[port] where [ipaddress] is the ip address of the machine running Fiddler and the [port] is the port number that Fiddler is listening on. Do NOT use https as this site is http only. You should see a screen similar to the following image – if you don’t, make sure you check that Fiddler is running!!

Fiddler Debugging Echo Service
Click on the FiddlerRoot certificate to install the root certificate

Click on the FiddlerRoot Certificate and install as a Trusted Certificate Authority on the Local Machine. This should allow your Windows application to trust Fiddler.

iOS Fiddler Debugging

Next up is iOS and in this case we’re going to use the iOS simulator. The same process should work with any iOS device that’s on the same local network as the machine running Fiddler. There are two steps to setup iOS for traffic debugging:

  1. Trust the Fiddler root certificate, and
  2. Set the http proxy to use the machine running Fiddler.

To setup the simulator, first launch the iOS application from Visual Studio. If you’re on Windows, this will launch the remote viewer for the simulator. Once the simulator is running, stop debugging and we’ll setup the simulator for traffic debugging.

Trusting the Fiddler Certificate

Navigate to http://[ipaddress]:[port] (e.g. http://192.168.1.109:8888) to load the Fiddler echo page. Then click on the Fiddler Certificate link. Follow the prompts to download and install the certificate.

Fiddler Echo Service, Download Certificate and Download Profile screens on the iPhone

In addition to downloading the certificate you also need to install it. Go to Settings / General / Profile and click through on the FiddlerRoot profile in order to Install it.


The Fiddler root certificate needs to be trusted as a root certificate. Go to Settings / About / Certificate Trust Settings and toggle the switch next to the FiddlerRoot certificate. Fiddler generates a certificate for each site you go to that is derived from the root certificate, so the root certificate needs to be installed as a trusted certificate.


The only difference between a real iOS device and the simulator is that on a real iOS device you can set a network proxy. There are online tutorials, such as this one, with instructions on setting a proxy. Unfortunately, setting a proxy isn’t configurable on the simulator. If you want to use a proxy in the simulator, you can set the proxy on the Mac running the simulator, but this would affect all traffic on the Mac. Alternatively, when running on the simulator, we can adjust the HttpClient to use a WebProxy using the following code:

var handler = new HttpClientHandler();
handler.Proxy = new WebProxy("192.168.1.109", 8888);
var client = new HttpClient(handler);

Running the iOS application should show network traffic in the Fiddler debugging window. You should still see the returned data printed out in the Output window (i.e. from the Debug.WriteLine statement).
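
Since you only want traffic routed through Fiddler whilst debugging, it’s also worth guarding the proxy configuration so it never ships in a release build. A minimal sketch (the address and port are whatever your Fiddler machine uses):

var handler = new HttpClientHandler();
#if DEBUG
// Route traffic through the machine running Fiddler (address and port are placeholders)
handler.Proxy = new WebProxy("192.168.1.109", 8888);
#endif
var client = new HttpClient(handler);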

Android Fiddler Debugging

For Android I’m going to use the Android emulator. Real devices should behave similarly, assuming they’re connected to the same local network as the machine running Fiddler.

Unlike iOS, which will use any proxy configured for the device, on Android you need to explicitly opt in to using a proxy in your code. You’ll need to use code similar to the following on both the emulator and real devices:

var handler = new HttpClientHandler();
handler.Proxy = new WebProxy("192.168.1.109", 8888);
var client = new HttpClient(handler);

What’s a little scary about this code is that it “just works”. You might be thinking that this is a good thing. However, it does raise the question of how much of the system security model the Microsoft-built HttpClientHandler respects. What I would have expected is an SSL failure exception, because the Fiddler root certificate isn’t trusted by the emulator. Furthermore, the application is not configured to use any user certificates.

The other thing to point out here is that you should not be using the HttpClientHandler. I’ve discussed this in my previous post on working with the HttpClient. Let’s change our code by moving it into the OnCreate method of the Android head project. We’ll also change over to using the AndroidClientHandler.

protected override async void OnCreate(Bundle savedInstanceState)
{
     ...
     var handler = new AndroidClientHandler();
     handler.Proxy = new WebProxy("192.168.1.109", 8888);
     var client = new HttpClient(handler);
     var users = await client.GetStringAsync("https://reqres.in/api/users?page=0");
     System.Diagnostics.Debug.WriteLine(users);
}

When we run the application we see the very familiar SSL handshake exception being raised, which is what we should expect. To get things to work, we now need to install the Fiddler certificate and configure the application to use user certificates.

Install the Fiddler Root Certificate

To install the Fiddler root certificate, navigate to http://[ipaddress]:[port] (e.g. http://192.168.1.109:8888) to load the Fiddler echo page. Click on the Fiddler Certificate link in order to download the certificate. Follow the prompts to download and install the certificate.


After installing the certificate, go to Settings / Security & location / Encryption & credentials / User credentials to inspect the certificate.


With the certificate installed into the user store, you need to configure the Android project to allow the use of certificates from the user store. In my post Working with Self Signed Certificates (Certificate Pinning) in Android Applications with Xamarin.Forms, I covered this in detail. The quick summary is that you need to create a network_security_config.xml file which sets the trust-anchors property (set using the debug-overrides element) to include certificates from the user store. You then need to reference this xml file from the networkSecurityConfig attribute on the application element in the AndroidManifest.xml file.

After installing the certificate and adding the network security configuration to the Android application you should now see network requests from the application appear within Fiddler debugging window.

Side Note: Links on how to setup your iOS, Android or Windows device for remote debugging


Whether it be for convenience, or out of necessity, being able to set up remote debugging to your device makes for a better debugging experience. Particularly if you’re working on a Windows machine, it’s really useful to not have to sit within a meter of the machine running the build agent.

iOS on WiFi

On iOS, you need to connect your device to Xcode the first time. Thereafter you can debug across a WiFi network. Note that there are some limitations; specifically, older devices won’t support network debugging.

Configure Remote Debugging for iOS

Android ADB Configuration

For Android, you need to configure ADB to connect to your device. Note that this configuration only lasts whilst the IP address of the device is retained. If a new IP address is leased, you’ll need to reconfigure ADB to allow for remote debugging.
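
For reference, the usual ADB commands are along these lines, run while the device is first connected over USB (the IP address is a placeholder for your device’s address):

adb tcpip 5555
adb connect 192.168.1.50:5555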

Windows Remote Machine Setup for Remote Debugging

With a Windows application, use the debugging tools to connect to your device and then debug using the remote machine option in Visual Studio.

Remote Machine settings

The ability to do remote debugging is particularly useful when developing an Xbox application. With the Xbox running in Dev Mode it’s possible to use the Find option to locate the address of the Xbox. Debugging an Xbox application is then the same as debugging any other Windows 10 application.

If you need to diagnose the network traffic from your application, check out the post on using Fiddler for debugging


Nick Randolph @thenickrandolph

If you need help debugging your application, contact Built to Roam

 


Self Signed Android Certificates and Certificate Pinning in Xamarin.Forms

Working with Self Signed Certificates (Certificate Pinning) in Android Applications with Xamarin.Forms

Next up is looking at working with self-signed certificates in an Android application. Previous posts in this sequence are:

In this post we’re going to briefly talk about non-secure services. Next, we’ll look at how to trust self signed certificates by adding them to the Android bundle. Then lastly we’ll look at intercepting the certificate validation process when making service calls.

One resource that is particularly useful is the network security documentation provided for Android developers. This lists the various elements of the network security configuration file that this post will reference.

Non-Secure (i.e. Http) Services

By default, Android, like iOS, doesn’t allow applications to connect to non-secure services. This means that connecting to http://192.168.1.107 or http://192.168.1.107.xip.io will not work out of the box. You’ll see an error similar to the following, if you attempt to connect to a non-secure service:

Java.IO.IOException: Cleartext HTTP traffic to 192.168.1.107 not permitted occurred

It’s important to note that this behaviour has changed between Android 8.1 and Android 9. Prior to Android 9 there were no default restrictions on calling non-secure, or plain text, services.

Adding Network Security Configuration

You can enable accessing Http/Plain text services by adjusting the network security configuration for the application. To do this we need to add an xml file to the Resources/xml folder which we’ve called network_security_config.xml:

<?xml version="1.0" encoding="utf-8"?>
<network-security-config>
     <base-config cleartextTrafficPermitted="true" />
</network-security-config>

This adjusts the network security for the application both in debug and in production. If you want to access plain text services only during debugging, you should change the base-config open and close tags to debug-overrides (everything else remaining the same). The “Application (Debuggable = true/false)” assembly attribute controls whether or not the application runs in debugging mode.

#if DEBUG
[assembly: Application(Debuggable = true)]
#else
[assembly: Application(Debuggable = false)]
#endif

We also need to add a reference to the network_security_config.xml file. In the application manifest file (AndroidManifest.xml) add the networkSecurityConfig attribute.

<?xml version="1.0" encoding="utf-8"?>
<manifest xmlns:android="http://schemas.android.com/apk/res/android"
          android:versionCode="1"
          android:versionName="1.0"
          package="com.refitmvvmcross"
          android:installLocation="auto">
     <uses-sdk android:minSdkVersion="19"
               android:targetSdkVersion="28" />
     <application
         android:allowBackup="true"
         android:theme="@style/AppTheme"
         android:label="@string/app_name"
         android:icon="@mipmap/ic_launcher"
         android:roundIcon="@mipmap/ic_launcher_round"
         android:networkSecurityConfig="@xml/network_security_config"
         android:resizeableActivity="true">
         <meta-data
             android:name="android.max_aspect"
             android:value="2.1" />
     </application>
</manifest>

You can change the filename of the network_security_config.xml file, so long as it matches the networkSecurityConfig attribute value in the AndroidManifest.xml file.

If you run the application it will connect to the Http/Plain text endpoint.

Plain-text on iOS

In addition to enabling/disabling Http/Plain text services across the entire application, it can also be controlled on a per-endpoint basis. See the documentation on the Network Security Configuration for more information.

Self Signed Certificate Endpoints

Switching the endpoint to a Https endpoint with a self signed certificate (eg https://192.168.1.107:5001) will raise the following error:

Javax.Net.Ssl.SSLHandshakeException: <Timeout exceeded getting exception details> occurred

Which of course is completely meaningless, except that we do know it’s related to establishing the SSL connection.

If we look to the Output window, we can find more information about the exception. Unfortunately, when an Android app crashes it will spew a lot of mostly irrelevant data into the Output window. I’m sure all that debug information is useful in a lot of cases. However, in this case it is a lot of irrelevant information that makes it hard to find the important details. The best way to find more information is to search for the error from the debug session (i.e. Javax.Net.Ssl.SSLHandshakeException). Once found, look at the next 10-15 lines of information. In this case we see:

05-04 08:31:21.756 I/MonoDroid( 5948): UNHANDLED EXCEPTION:
05-04 08:31:21.768 I/MonoDroid( 5948): Javax.Net.Ssl.SSLHandshakeException: java.security.cert.CertPathValidatorException: Trust anchor for certification path not found. ---> Java.Security.Cert.CertificateException: java.security.cert.CertPathValidatorException: Trust anchor for certification path not found. ---> Java.Security.Cert.CertPathValidatorException: Trust anchor for certification path not found.

This points to the fact that the application hasn’t been able to verify the certificate path for the certificate returned by the service. This result is not surprising because the service is using a self signed certificate and neither the Android device or the application trust the certificate.

Trusting Self Signed Android Certificates

In this section we’re going to use the public key from the self signed certificate. This will allow the application to connect to the secure endpoint, without having to write code to intercept the certificate validation. We’ll cover two different ways but they amount to the same thing. In both cases it’s necessary to have the public key of the certificate authority available to the application. The application will use the public key to verify the certificate path of the certificate returned by the service.

Installing to the Android Certificate Store

The first option is to install the public key of the certificate authority onto the Android device. Android maintains two different certificate stores: system and user. We’re going to be adding the certificate to the user store. Adding to the system store requires root access and can be done using ADB as shown in various posts, such as this one.

The public key that we need to use is the public key of the certificate authority. As discussed in previous posts, we’ve used mkcert to generate our self signed certificate. The public key of the certificate authority used by mkcert is available at C:\Users\[username]\AppData\Local\mkcert\rootCA.pem

The first challenge is to get the certificate onto the device, which I typically find easiest via a download link. I add the public key to dropbox and then open the link in Chrome on the Android device:

image image

Installing the Public Key

After downloading the pem file, clicking on the file in the Downloads list does nothing. You actually need to go to Settings / Security & location / Encryption & credentials / Install from SD card

image

The names of the items under Settings may vary from device to device; for example, Install from SD card might be Install from storage. The quickest way is to search for “certificate” and go to the item that says something like Install from SD card.

image

Clicking on the Install from SD card/storage settings item will display a file picker. From the burger menu you can select Downloads, which will reveal the pem file you’ve just downloaded. However, this item will be disabled, presumably because of some security restriction related to downloaded items. I initially thought that I’d downloaded the wrong certificate format. In actual fact, if you go back to the burger menu, browse the contents of the device (see third image above) and click on Download, you’ll see the same rootCA.pem file, but this time it’s not dimmed out and you can click on it.

image

Saving the Root Certificate

If you have a PIN set up on the device, when you click on the rootCA.pem you’ll need to enter it. After entering your PIN you’ll be asked to enter a Name for the certificate and what the certificate is to be used for.

The Name isn’t particularly useful since it doesn’t show up anywhere. It doesn’t appear in the Trusted credentials screen (second image, showing the added certificate), nor in the Security certificate popup that appears if you click on the certificate for more information.

Since we want the certificate to be used by the app we leave the default use, “VPN and apps”. At this point if you open the service endpoint in the browser you should see that the certificate is trusted.

image

Accessing the Android Certificate Store

Now that we’ve installed the certificate, there’s just one change we need to make to the Android application itself. By default Android applications will only use certificates in the system store. However, you can adjust the application to make use of the user certificate store. To do this we need to add a trust-anchors and certificates elements to the network_security_config.xml with the following contents:

<?xml version="1.0" encoding="utf-8"?>
<network-security-config>
     <debug-overrides>
         <trust-anchors>
            <certificates src="user" />
         </trust-anchors>
     </debug-overrides>
</network-security-config>

In this case, the certificates element will only be used when the application is running in debug mode (ie by using the debug-overrides element). Running the application now will return data from the Https endpoint.

image

The issue with this approach is that the public key for the certificate authority has to be installed on the device. Since the public key appears in the user store, the user could remove it at any point. Whilst it is unlikely that the user will delete the individual item, they may decide to clear all cached credentials.

Clearing cached credentials has the unfortunate side effect of clearing out the user store, thus preventing the application from running. If you’re only connecting to endpoints for the purpose of development or debugging, this option may be sufficient and would minimise the changes required to the application.

Adding to Application Package

The second option is to add the public key of the certificate authority to the application package itself. We’ll use the DER format for the public key which we generated from the mkcert public key file in the previous post. Add the kestrel.der file to the Resources / raw folder, with Build Action set to AndroidResource, and the Custom Tool to MSBuild:UpdateGeneratedFiles.

image

You need to link the public key to the network security configuration file. Add the trust-anchors and certificates elements to the network_security_config.xml file as follows:

<?xml version="1.0" encoding="utf-8"?>
<network-security-config>
     <base-config>
         <trust-anchors>
             <certificates src="@raw/kestrel" />
         </trust-anchors>
    </base-config>
</network-security-config>

If you only want to use the certificates during debugging, swap the base-config element for debug-overrides.

That’s all you need to do to be able to access a service that uses a self-signed certificate.

Validating Server Certificates (i.e. Android Certificate Pinning)

The other alternative to working with self signed certificates is to override the certificate validation that is done as part of each service request. On Android you can override the ConfigureCustomSSLSocketFactory and GetSSLHostnameVerifier methods on the AndroidClientHandler. For example, here’s a CustomAndroidClientHandler that inherits from the AndroidClientHandler and overrides these methods:

using System.Net.Http;
using System.Threading;
using System.Threading.Tasks;
using Android.Net;
using Javax.Net.Ssl;
using Xamarin.Android.Net;

public class CustomAndroidClientHandler : AndroidClientHandler
{
    protected override async Task<HttpResponseMessage> SendAsync(HttpRequestMessage request, CancellationToken cancellationToken)
    {
        // Request Http/2 for every call made through this handler
        request.Version = new System.Version(2, 0);
        return await base.SendAsync(request, cancellationToken);
    }

    protected override SSLSocketFactory ConfigureCustomSSLSocketFactory(HttpsURLConnection connection)
    {
        // Returns a socket factory with all SSL security checks disabled
        return SSLCertificateSocketFactory.GetInsecure(0, null);
    }

    protected override IHostnameVerifier GetSSLHostnameVerifier(HttpsURLConnection connection)
    {
        return new BypassHostnameVerifier();
    }
}

internal class BypassHostnameVerifier : Java.Lang.Object, IHostnameVerifier
{
    public bool Verify(string hostname, ISSLSession session)
    {
        // Accepts every hostname - see the warning below about man-in-the-middle attacks
        return true;
    }
}

We’ve also included the class BypassHostnameVerifier, which is a basic implementation of the IHostnameVerifier that needs to be returned from the GetSSLHostnameVerifier method. In the ConfigureCustomSSLSocketFactory method we’re returning a factory generated by the GetInsecure method. According to the method documentation the GetInsecure method “Returns a new instance of a socket factory with all SSL security checks disabled”. The documentation goes on to provide a warning “Warning: Sockets created using this factory are vulnerable to man-in-the-middle attacks!”.

I would like at this point to reiterate this warning. By providing your own handling of the SSL checks, you are effectively taking ownership and responsibility of making sure your application isn’t being hacked. As such, I would recommend only overriding these methods when you need to connect to a service that uses a self signed certificate. Make sure that in the Verify method you validate the service response to ensure only responses with self signed certificates that you trust are processed (ie check for man-in-the-middle attacks).

If you’re just interested in certificate pinning (i.e. ensuring that your application is connecting to a server that is returning a known certificate) you can simply override the GetSSLHostnameVerifier method. This will leave the default SSL verification/security in place but give you the opportunity to validate that the certificate being returned is what you expect.
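To illustrate, here’s a minimal sketch of what a pinning verifier might look like; the class name and the pinned hash value are placeholders (you’d substitute the base64-encoded SHA-256 hash of your server’s public key), and you’d return an instance of it from GetSSLHostnameVerifier instead of the BypassHostnameVerifier above.

using System;
using System.Security.Cryptography;
using Javax.Net.Ssl;

// Placeholder name: compares the SHA-256 hash of the server's public key to a pinned value
internal class PinnedHostnameVerifier : Java.Lang.Object, IHostnameVerifier
{
    // Placeholder: base64-encoded SHA-256 hash of your certificate's public key
    private const string PinnedPublicKeyHash = "REPLACE_WITH_YOUR_PINNED_HASH";

    public bool Verify(string hostname, ISSLSession session)
    {
        var certificates = session.GetPeerCertificates();
        if (certificates == null || certificates.Length == 0)
        {
            return false;
        }

        using (var sha256 = SHA256.Create())
        {
            // Hash the DER-encoded public key of the leaf certificate
            var publicKey = certificates[0].GetPublicKey().GetEncoded();
            var hash = Convert.ToBase64String(sha256.ComputeHash(publicKey));
            return hash == PinnedPublicKeyHash;
        }
    }
}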

Http2 Handling on Android

The bad news is that Http2 combined with self signed certificates doesn’t currently play nicely with the out of the box AndroidClientHandler. The AndroidClientHandler is the default and preferred handler on Android. When you attempt to connect to a service that is Http2 only and it uses a self signed certificate, you’ll probably see an error similar to the following:

Javax.Net.Ssl.SSLHandshakeException: Connection closed by peer occurred

Unfortunately there’s no simple solution without importing another library. Luckily, the ModernHttpClient library has the code necessary to do this, and it has been updated slightly in the modernhttpclient-updated nuget package.

Please do not include the nuget package in your application, as neither the original nor the updated package is being actively maintained. Instead, copy the source code that you need (in this case the NativeMessageHandler) and take ownership of maintaining it within your application logic.

To round this out, here’s an example of a CustomNativeClientHandler that inherits from the NativeMessageHandler and sets the Http version to 2.

public class CustomNativeClientHandler : NativeMessageHandler
{
    protected override async Task<HttpResponseMessage> SendAsync(HttpRequestMessage request, CancellationToken cancellationToken)
    {
        // Request Http/2 for every call made through this handler
        request.Version = new System.Version(2, 0);
        return await base.SendAsync(request, cancellationToken);
    }
}

And when an instance of the CustomNativeClientHandler is created, the EnableUntrustedCertificates property is set to true. This effectively disables any of the SSL checking, so it again opens up the risk of man-in-the-middle attacks.

Mvx.IoCProvider.LazyConstructAndRegisterSingleton<HttpMessageHandler, IServiceOptions>(options =>
{
     return new CustomNativeClientHandler
     {
         AutomaticDecompression = options.Compression,
         EnableUntrustedCertificates = true
     };
});

And when we run this, we can see that the Http protocol used is Http/2.

image

In this post we’ve touched on using both self signed certificates and certificate pinning. This is not an easy topic, but it’s important for application developers to be across it. If you come across issues with your application security, feel free to reach out on Twitter or via Built to Roam’s contact page.


Nick Randolph @thenickrandolph

Built to Roam on building cross-platform applications


Self Signed iOS Certificates and Certificate Pinning in a Xamarin.Forms application

Working with Self Signed Certificates (Certificate Pinning) in iOS Application with Xamarin.Forms

The next post in the app security series looks at working with self-signed certificates in an iOS application, building on the previous posts in this sequence.

In this post we’re going to cover:

1) accessing non-secure services

2) trusting a self-signed certificate and

3) handling certificate validation.

This gives you all the options you should need when assessing which security option to use during development. It will also cover how to implement certificate pinning in the production version of your app.

Non-Secure (i.e. Http) Services

iOS is secure by default, which means that, by default, an iOS application can only connect to services over Https, and the certificate is verified against one of the well-known certificate authorities. Most production services will use a certificate that has been issued by a well-known certificate authority. For example, when you deploy a service to Azure App Service, the generated endpoint (eg myservices.azurewebsites.net) already has an Https endpoint with a certificate that is trusted.

In some cases being able to connect to plain-text services may be useful. For example, when you’re running the services locally, or you’re attempting to connect to a service in an environment where there is no Https endpoint. In these cases, you can adjust the behaviour of the iOS application so that it can connect to a non-secure (ie http) endpoint.

Accessing Non-Secure Services

Let’s see this in action by changing the endpoint of our service request to http://192.168.1.107:5000. The endpoint is configured for both https on port 5001 and http on port 5000. If you are trying this on a new ASP.NET Core 3 project, don’t forget that the template comes with the line UseHttpsRedirection in startup.cs, so if you want to expose an http endpoint you’ll need to remove that line.

image

In the iOS application, if you simply change the endpoint to http://192.168.1.107:5000, the application will operate correctly, despite all the concern that http connections aren’t supported. That’s because there’s a clear set of exceptions to the Https rule on iOS:

image

What if you want to use http but, instead of an IP address (ie the exclusion we just saw), you have a domain name? Let’s try this by changing the endpoint to http://192.168.1.107.xip.io:5000. BIG shout out to the xip.io service, which is super cool: you can put any IP address before the xip.io and a DNS lookup of that name will resolve to the same IP address.

image

When we run the iOS application and it attempts to make the service call we get the following exception raised within Visual Studio:

Unhandled Exception:
System.Net.WebException: <Timeout exceeded getting exception details> occurred

This isn’t very meaningful. However, in the Output window there’s much more information:

Unhandled Exception:
System.Net.WebException: The resource could not be loaded because the App Transport Security policy requires the use of a secure connection. ---> Foundation.NSErrorException: Error Domain=NSURLErrorDomain Code=-1022 "The resource could not be loaded because the App Transport Security policy requires the use of a secure connection." UserInfo={NSLocalizedDescription=The resource could not be loaded because the App Transport Security policy requires the use of a secure connection., NSErrorFailingURLStringKey=http://192.168.1.107.xip.io:5000/api/values

Allowing Insecure Connections

We can combat this exception by including an exception for the domain in the Info.plist:

<key>NSAppTransportSecurity</key>
<dict>
   <key>NSExceptionDomains</key>
   <dict>
     <key>192.168.1.107.xip.io</key>
     <dict>
       <key>NSExceptionAllowsInsecureHTTPLoads</key>
       <true />
     </dict>
   </dict>
</dict>

Adding this to the plist will exclude the listed domain from the App Transport Security policy. Another alternative is to use the NSAllowsArbitraryLoads key. You should avoid using this key as it effectively disables the security policy for any endpoint that the app connects to.

<key>NSAppTransportSecurity</key>
<dict>
   <key>NSAllowsArbitraryLoads</key>
   <true />
</dict>

So that’s it for accessing non-secure, or Http, endpoints. Simply add the endpoint to the NSExceptionDomains element in the Info.plist file and you’re good to go.

Trusting Self-Signed Certificates

Now let’s go back to connecting to a secure endpoint. This time we’re going to keep with using a xip.io address to ensure any security policies are enforced. The secure endpoint would be https://192.168.1.107.xip.io:5001. I’ve reissued the certificate used by the ASP.NET Core application to include 192.168.1.107.xip.io in the alternative names section:

image

You would expect this to work since the endpoint domain is already listed in the Info.plist file (see earlier on doing this using the NSExceptionDomains). Unfortunately there is still an error similar to the following.

System.Net.WebException: The certificate for this server is invalid. You might be connecting to a server that is pretending to be “192.168.1.107.xip.io” which could put your confidential information at risk.

The information in this error is only partially correct. The certificate for the server is actually valid, it’s just that the app/device isn’t able to verify the integrity of the certificate.

Working with Public Keys

We need to find a way for the device to trust the certificate being returned by the server. By far the easiest way to get the certificate to be trusted by applications running on an iOS device is to install the public key for the certificate onto the device. To do this we need the public key, which we can extract from the pfx file used by the ASP.NET Core service using the following openssl command (this site is very useful for openssl commands):

openssl pkcs12 -in kestrel.pfx -out kestrel.pem -nodes

This extracts the public key in pem format. iOS needs der format. Luckily there’s again an openssl command for converting the files.

openssl x509 -outform der -in kestrel.pem -out kestrel.der

To get the public key to the device you can either share a hyperlink (ie upload the der file and share a link) or email the file to yourself and open it on the device. My preference is just to add the file to my Dropbox and then open the corresponding link using Safari on the device. Select the Direct Download option and the file downloads, extracts and attempts to install the certificate.

image

After clicking on Direct download you should see a prompt from the OS about installing a profile that has been downloaded from a website. We understand the risk, so click on Allow. You will then see a confirmation prompt, indicating the Profile has been downloaded.

imageimage

Installing iOS Root Certificate

It’s important to read the second prompt closely, because what it’s saying is that you still need to go to Settings in order to complete the installation of the profile (which in this case is just a certificate). Open Settings / General / Profile and then select the profile for 192.168.1.107.xip.io. You’ll see information about the certificate and the ability to Install it (top right corner).

image

After clicking Install and following the prompts you’ll be returned to this screen. The link to Install will change to Done. However, the certificate will be marked as Not Verified. This is because the root certificate, which in this case is the root certificate used by mkcert, is not trusted by the device.

image

Unfortunately since the certificate isn’t trusted, the application will still fail to connect to this endpoint. For the moment we’ll remove this certificate as it’s not helpful.

Trusting the Root iOS Certificate

Let’s repeat the process of installing the certificate, but this time let’s install the root certificate used by mkcert. The public key can be found at C:\Users\[username]\AppData\Local\mkcert\rootCA.pem and when you attempt to install it on the device you should see something similar to

image image

Note the difference after the certificate has been installed – it is clearly marked as Verified in green.

To prevent profiles that have been accidentally downloaded and installed by users from having full access to the device, it is necessary to manually trust certificates. The certificate settings can be found under Settings / General / About / Certificate Trust Settings. On this screen you can control which profiles (ie certificates) are fully trusted.

image

After toggling the full trust setting we’re good to try our application. We’ve not had to make any changes to the application itself; after installing the correct certificate onto the device, we’re able to connect to the secured services. Marking the root certificate as trusted on this device also removes the need for the NSExceptionDomains section in the Info.plist file.

Validating Server Certificates (i.e. Certificate Pinning)

In this last section we’re going to look at how you can specify a certificate within the application itself. This will allow requests to be made to the service with the self-signed certificate. Before proceeding you should remove any certificates that were previously installed.

Currently (at the time of writing) there is no way to override the certificate validation process for the out of the box NSUrlSessionHandler. There’s been work done in the past to provide a better alternative, such as the ModernHttpClient, but that library is no longer maintained and it no longer appears to work with self-signed certificates.

Using the NSUrlSessionHandler

Even the sample project put together by Jonathan Peppers on SSLPinning doesn’t appear to work. Luckily, with a minor tweak it’s possible to use the revised NSUrlSessionHandler to permit access to the self-signed service. Using the source code in the SSLPinning repository as the starting point, I’ve collated all the pieces of the alternative NSUrlSessionHandler into a single file (Full source code). The main changes are:

- The addition of an UntrustedCertificate property which will accept the raw data from the .der public key

public NSData UntrustedCertificate { get; set; }


- Modification to the processing included in the DidReceiveChallenge method. This essentially installs the root certificate so that it can be trusted by the application.

if (sessionHandler.UntrustedCertificate != null)
{
    var trust = challenge.ProtectionSpace.ServerSecTrust;
    var rootCaData = sessionHandler.UntrustedCertificate;
    var x = new SecCertificate(rootCaData);

    trust.SetAnchorCertificates(new[] { x });
    trust.SetAnchorCertificatesOnly(false);
    completionHandler(NSUrlSessionAuthChallengeDisposition.PerformDefaultHandling, challenge.ProposedCredential);
}
else
{
    completionHandler(NSUrlSessionAuthChallengeDisposition.CancelAuthenticationChallenge, null);
}

In order to take advantage of the updated NSUrlSessionHandler we need to modify Setup.cs:

Mvx.IoCProvider.LazyConstructAndRegisterSingleton<HttpMessageHandler, IServiceOptions>(options =>
{
     var handler = new Xamarin.SSLPinning.iOS.NSUrlSessionHandler
     {
         UntrustedCertificate = NSData.FromFile("kestrel.der")
     };
     return handler;
});

And of course we need to make sure we include the kestrel.der file in the Resources folder, with the Build Action set to BundleResource.

image

Running this up, we get the response back from the service.

image

The big difference with this option is that nothing outside of the application needs to be modified. All the setting up of the trust is done within the application, making it much easier to deploy to any device without having to worry about configuring the device.

Summary of iOS Certificates

Note that we didn’t strictly do certificate pinning – we just allowed the application to connect to a self-signed endpoint. To carry out certificate pinning you can make changes to the DidReceiveChallenge method and determine whether the certificate should be trusted. The ModernHttpClient does have an implementation of a callback that the application can register for in order to determine whether the certificate, and thus the endpoint, should be trusted.
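As a rough sketch of that idea (not taken from the SSLPinning sample), the block inside DidReceiveChallenge could instead compare the server’s leaf certificate byte-for-byte against the bundled kestrel.der and reject anything else. It assumes the same sessionHandler, challenge and completionHandler variables shown earlier, plus using System.Linq.

// Sketch only: pin to the exact certificate bundled with the app (kestrel.der).
// Assumes this replaces the if/else block shown earlier inside DidReceiveChallenge.
var trust = challenge.ProtectionSpace.ServerSecTrust;
var pinnedData = sessionHandler.UntrustedCertificate;

// Leaf certificate presented by the server, as raw DER bytes
var serverCertData = trust[0].DerData.ToArray();
var isPinned = pinnedData != null &&
               serverCertData.SequenceEqual(pinnedData.ToArray());

if (isPinned)
{
    // Certificate matches the pinned copy - accept the connection
    completionHandler(NSUrlSessionAuthChallengeDisposition.UseCredential,
        NSUrlCredential.FromTrust(trust));
}
else
{
    // Anything else is rejected
    completionHandler(NSUrlSessionAuthChallengeDisposition.CancelAuthenticationChallenge, null);
}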

Working with Self Signed Certificates (Certificate Pinning) in Windows (UWP) Application with Xamarin.Forms

I’ve been doing a bit of a progression of posts talking about building and debugging ASP.NET Core services over https and http/2, coupled with using platform specific handlers to improve the way the HttpClient works on each platform. The following links provide a bit of background on what we’ve covered so far.

Accessing ASP.NET Core API hosted on Kestrel over Https from iOS Simulator, Android Emulator and UWP Applications.
Publishing ASP.NET Core 3 Web API to Azure App Service with Http/2
Xamarin and the HttpClient For iOS, Android and Windows

In this post we’re going to pick up from the end of the previous post to discuss using self-signed certificates in a Windows (ie UWP) application. Previously we managed to get the ASP.NET Core API hosting setup in such a way that the services were exposed using the IP address of the host computer, meaning that it can be accessed from an app running on an iOS simulator, the Android emulator, or even a UWP app running locally on the computer. As we’ll see there’s still a bit of work to be done within the app on each platform to allow the app to connect to the API.

Before we go on, it’s worth noting that the technique we’re going to use in this post is sometimes referred to as certificate pinning, which amounts to verifying that the response to a service call has come across a secure channel that uses a certificate issued by a certificate authority that the app is expecting, or trusts. There are a variety of reasons for using this technique but the main one is to help eliminate man-in-the-middle attacks by preventing some third party from impersonating the service responding to the requests from an app. One of the other common reasons to use this technique is to permit non-secure, or self-signed, certificates – as you may recall we used a self-signed certificate in the previous post to secure the service, so we need a mechanism for each platform to permit the use of self-signed certificates and treat the responses from such services as trusted. This will be done over a three part series of posts, starting with a Universal Windows Platform (UWP) application in this post.

To get started, let’s take a quick look at what happens if we simply run up the UWP application we had previously setup to use the WinHttpHandler. The only change I’m going to make to the UWP application initially is to change the BaseUrl for the service to https://192.168.1.107 (ie the IP address of the development machine) – note that it’s a https endpoint. Running the application will fall over with an exception when it attempts to connect to the HeaderHelper service hosted at https://192.168.1.107/api/values.

image

The extracted error message is as follows:

System.Net.Http.HttpRequestException
   HResult=0x80072F8F
   Message=An error occurred while sending the request.
   Source=System.Private.CoreLib
Inner Exception 1:
WinHttpException: Error 12175 calling WINHTTP_CALLBACK_STATUS_REQUEST_ERROR, ‘A security error occurred’.

Now if you search for this error information, you’re likely to see a bunch of documents talking about the 0x80072F8F error code, as it seems to come up in relation to Windows activation issues. However, if you google the 12175 error (ie the inner exception) you’ll see a number of articles (eg http://pygmysoftware.com/how-to-fix-windows-system-error-12175-solved/) that point to there being an SSL related error. In this case it’s because we’re accessing a service that uses a certificate that isn’t trusted and can’t be validated.

We’re going to discuss two ways to carry out certificate pinning, which should allow us to access the HeaderHelper service, even though it’s being secured using a self-signed certificate. In the previous post where we setup the ASP.NET Core service to use a new certificate when hosting on Kestrel, we generated a .PFX certificate that included both the public and private components using mkcert. In both of the methods described here, you’ll need the public key component, which is easy to grab using openssl thanks to this post. For example:

openssl pkcs12 -in kestrel.pfx -nokeys -out kestrel.pem

Look Dad, No Code

The first way to configure the UWP application to connect to the service with a self-signed certificate is to add the public key for the certificate into the UWP application and declare the certificate in the Package.appxmanifest.

– Open the Package Manifest designer by double-clicking the Package.appxmanifest

– Once opened, select the Declarations tab, and then from the Available Declarations, select Certificates and click Add.
image

– Click the Add New button at the bottom of the Properties section
image

– Set the Store name to TrustedPeople and click the … button to select the public key file generated earlier

image

If you’re interested as to what has been changed when you selected the public key in the manifest editor:

– The public key file (in this case kestrel.pem) was added to the root of the UWP project with Build Action set to Content so that the pem file gets deployed with the application

– The Package.appxmanifest file was updated to include an Extensions section, specifically a Certificate element that defines both the store and the certificate file name.

<Extensions>
   <Extension Category="windows.certificates">
     <Certificates>
       <Certificate StoreName="TrustedPeople" Content="kestrel.pem"/>
     </Certificates>
   </Extension>
</Extensions>

And that’s it – you can successfully run the application and all calls to the service secured using the generated self-signed certificate will be successful.

With Code Comes Great Responsibility

The second way to prevent man-in-the-middle style attacks is to do some validation in code of the certificate returned as part of the initial call to the services.

If you read some of the documentation/blogs/posts online they sometimes reference handling the ServerCertificateValidationCallback on the ServicePointManager class. For example the following code will simply accept all validation requests, thereby accepting all response data, on the assumption that the caller is in some way validating the response.

ServicePointManager.ServerCertificateValidationCallback += (sender, cert, chain, sslPolicyErrors) => true;

Note: The ServerCertificateValidationCallback event on the ServicePointManager will only be invoked if you use the default managed handler, which as we saw in my previous post is not recommended. I would discourage the use of this method for handling certificate validation challenges.

So, if ServicePointManager isn’t the correct place to intercept the request, what is?

In the previous post we had already overridden the InitializeIoC method on the Setup.cs class, so it makes sense to make these changes there.

– A new method, CertificateValidationCallback, has been set to handle the ServerCertificateValidationCallback on the WinHttpHandler (not to be confused with the ServicePointManager callback).

protected override void InitializeIoC()
{
    base.InitializeIoC();

    Mvx.IoCProvider.LazyConstructAndRegisterSingleton<HttpMessageHandler, IServiceOptions>(options =>
    {
        return new WinHttpHandler()
        {
            ServerCertificateValidationCallback = CertificateValidationCallback,
        };
    });
}

private bool CertificateValidationCallback(HttpRequestMessage arg1, X509Certificate2 arg2, X509Chain arg3, SslPolicyErrors arg4)
{
    return true;
}

– Of course, simply returning true to all certificate validation challenges isn’t very secure, and it’s highly recommended that your certificate checking is much more comprehensive – a rough sketch of what that might look like follows.
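For example (a sketch only – the thumbprint value is a placeholder for the thumbprint of the kestrel certificate generated earlier), the callback could fall back to standard validation and only accept the one self-signed certificate you expect:

// Placeholder: the thumbprint of the self-signed certificate you want to trust
private const string PinnedThumbprint = "REPLACE_WITH_YOUR_CERTIFICATE_THUMBPRINT";

private bool CertificateValidationCallback(HttpRequestMessage request, X509Certificate2 certificate, X509Chain chain, SslPolicyErrors sslPolicyErrors)
{
    // Anything that passes standard validation is fine
    if (sslPolicyErrors == SslPolicyErrors.None)
    {
        return true;
    }

    // Otherwise only accept the specific certificate that has been pinned
    return string.Equals(certificate?.Thumbprint, PinnedThumbprint, StringComparison.OrdinalIgnoreCase);
}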

And that’s it; you can now ignore certificates that are self-signed, or that aren’t signed by a trusted certificate authority. Whilst the methods presented in this post are for UWP, they are equally applicable for a UWP application that’s been written using Xamarin.Forms.

Accessing ASP.NET Core API hosted on Kestrel over Https from iOS Simulator, Android Emulator and UWP Applications.

This post is a stepping stone to get local debugging working for a Http/2 service over Https from a Xamarin.Forms application. In my post on publishing to Azure I covered the fact that the underlying service receives a Http/1.1 connection, despite applications establishing a http/2 connection. This made it difficult to build out applications that use technologies such as gRPC, which rely on the http/2 protocol. To make it possible to develop both the mobile app and the services locally, we need to setup the ASP.NET Core debugging to allow the applications (ie each of the supported platforms) to connect.

This post assumes that the ASP.NET Core application is being hosted locally using Kestrel, mainly because of the limitations around http/2 (here and here). By default, when you create an ASP.NET Core application it is setup with multiple launch configurations, allowing you to switch between IIS Express, Kestrel and if you select the Docker option when creating your project, you’ll see an option to launch using Docker (as shown in the following image showing the launchSettings.json for the HeaderHelper project).

image

To switch between the different launch configurations you just need to select the right configuration from the run dropdown in Visual Studio – in this case I’ve selected the HeaderHelper option, which as you can see from the above launch configurations uses the “Project” command name that correlates to hosting using Kestrel (I know, not super obvious, right!).

image

When we run the ASP.NET Core application using the default launch configuration on Kestrel, what we see is that a command window is shown (since Kestrel is basically a console application) and then a browser window is subsequently launched. As you’d expect the out of the box experience is all good – we can see it’s launched the https endpoint and there’s the lock icon to indicate it’s trusted.

image

It’s also interesting to note that the service is returning Http/2 when according to this document (see screenshot below) the default is Http/1.1.

image

Well, it looks like the documentation hasn’t been updated in line with the latest code. If you take a look at the AspNetCore repository on GitHub you can see that between the stable v2.2.4 and the v3.0.0-preview4 releases there has been a change to the default value.

image

Coming back to our Kestrel hosted ASP.NET Core application, we can see that the endpoint host is localhost, which aligns with what’s in the applicationUrl property in the configuration in the launchSettings.json file. Unfortunately, localhost isn’t great when it comes to working with mobile applications as localhost doesn’t always resolve to the development machine. For example if you’re working with a real iOS or Android device, they’re most likely going to be on the same WiFi network but localhost won’t resolve to machine running the ASP.NET Core application. Similarly if you’re developing on a Windows PC and using a remote Mac to do the build and run the simulator, localhost again won’t resolve to the correct machine.

To solve this, we need to change the Kestrel configuration to expose the service in such a way that it can be accessed via the IP address of the machine where Kestrel will be running. Note that there are plenty of services such as ngrok, portmap.io and Forward which are great and easy to setup for non-secure services. However, once you get into trying to extend the configuration to support Https or Http/2 you end up needing to pay to use their premium service. These services are great if you want to extend beyond the bounds of your firewall but are overkill if all you want to do is expose your service for development purposes.

A much simpler alternative is to:

– Change Kestrel to bind to all IP addresses for the host machine

– Add a firewall rule to allow in-bound connections on the ports required for the application

I’ll elaborate on these in more detail – and I’m going to do them in reverse order because the firewall rule is required in order to verify the Kestrel configuration is working when binding to the IP address.

Adding a Firewall Rule for Ports 5001 and 5000 (on Windows)

On Windows, it’s relatively straightforward to add a firewall rule that will allow inbound connections on specific ports. In this case we’re interested in adding a rule that works for ports 5000 and 5001, which are the two ports used in the applicationUrl property of launchSettings.json. Here’s the step-by-step:

– Press the Start key, type “Windows Defender” and click on the “Windows Defender Firewall with Advanced Security” option

– Click on “Inbound rules” in the left panel

– Click on “New rule” in the right (Actions) panel to launch the New Inbound Rule Wizard

– When prompted for the type of rule, select “Port” and click Next

– Make sure the “Specific local ports” option is selected and enter “5000-5001” (or “5000,5001”) in the text box.

– Click Next, accepting the defaults on the remaining pages of the New Inbound Rule Wizard, through to the final page where you’ll need to give the rule a name before hitting Finish.

Once you’ve created the Inbound rule, any requests made on these ports will be allowed through to whatever service is bound to those ports on your computer. You should disable this rule when you’re not making use of these ports for debugging.

Binding Kestrel to All IP Addresses

This can be done simply by changing the launchSettings.json file to replace localhost with 0.0.0.0:

image

When you rebuild (you may need to force a rebuild as sometimes the change to launchSettings.json isn’t picked up by Visual Studio) and attempt to run the application you’ll see an error page – this is because 0.0.0.0 isn’t actually a real IP address, it’s just the address used in the launchSettings to configure Kestrel to bind to all addresses.

image

If you change the address to use localhost instead of 0.0.0.0 you’ll again see that the api result is returned successfully. However, if you now use the actual IP address of the computer (in this case 192.168.1.107) you’ll see a certificate warning. Clicking Advanced, you can proceed to the site and see the result, but the “Not secure” warning in the address bar will remain.

image

The fact that there’s a security error is going to cause a lot of issues if we don’t resolve it because none of the application platforms (ie iOS, Android, UWP) work well with Https when the certificate can’t be verified. Even if you use certificate pinning (to be covered in a future post) you’ll find it hard to configure the different platforms to work with certificates that don’t match the domain of the service.

If we take a look at the certificate being used, we can see that the Subject Alternative Name only matches with localhost.

image

Luckily this problem can be fixed by changing the certificate that is used by your ASP.NET Core application. If you’re planning on exposing your ASP.NET Core endpoint directly to the internet I would recommend getting a certificate from a well known CA. The following process can be used for setting up your service for development purposes:

If you know what you’re doing you can download the latest openssl and proceed to create your own certificates. However, this is fairly involved and a much simpler way is to leverage the mkcert tool that is available at https://github.com/FiloSottile/mkcert. The steps are as follows:

– Download the latest binaries for mkcert (you might want to rename the executable from say mkcert-v1.3.0-windows-amd64.exe to mkcert.exe for convenience)

– Launch a command prompt running as Administrator

– Run “mkcert -install”. If you get an error such as “failed to execute keytool…..”  you probably didn’t read the previous step and opened a regular command prompt. You need to be running as Administrator

image

A successful install should look like:

image

The install process creates a certificate and trusts it on the local computer as a trusted certificate authority, meaning it can be used to generate other certificates.

– Run mkcert to create a certificate that you can use in your ASP.NET Core application

mkcert -pkcs12 -p12-file kestrel.pfx 192.168.1.107 localhost 127.0.0.1 ::1

image

– Copy the newly created kestrel.pfx into the ASP.NET Core project and set the Build Action to Content to make sure it gets deployed with your application.

image

– Remove the applicationUrl property from the Kestrel configuration in launchSettings.json

image

– Update the CreateHostBuilder method in program.cs to setup the Kestrel configuration, specifically setting up both ports 5001 and 5000 to listen on Https and Http respectively. For port 5001 the kestrel.pfx certificate is used (note that we haven’t changed the default password here, but we would recommend doing so if you’re going to use this in production)

public static IHostBuilder CreateHostBuilder(string[] args) =>
    Host.CreateDefaultBuilder(args)
        .ConfigureWebHostDefaults(webBuilder =>
        {
            webBuilder
                .ConfigureKestrel(options =>
                {
                    options.ListenAnyIP(5001, listenOptions =>
                    {
                        listenOptions.UseHttps("kestrel.pfx", "changeit");
                    });
                    options.ListenAnyIP(5000);
                })
                .UseStartup<Startup>();
        });

Now when we run the ASP.NET Core application on Kestrel we can successfully navigate to the endpoint using the machine’s IP address.

image

Inspecting the https certificate you can see that the Subject Alternative Names include 192.168.1.107 (ie the machine’s IP address) and that the Certification path ends in the mkcert certificate that has been added to the trusted certificate authorities on this computer.

imageimage

Now that we’ve configured Kestrel and ASP.NET Core to play nice, what we need to do is to configure our mobile applications to connect to this service, which we’ll do in the next post.

Xamarin and the HttpClient For iOS, Android and Windows

In an earlier post that talked about using dependency injection and registering interfaces for working with Refit across both Prism and MvvmCross I had code that registered an instance of the CustomHttpMessageHandler class which internally used a HttpClientHandler for its InnerHandler. For developers who have spent a bit of time optimising their iOS, Android or Windows application, you’ll have noted that using the HttpClientHandler is generally not deemed to be best practice.  As I’m a big fan of trying to demonstrate best practices, I figured I’d expand on this a little into a post talking about the HttpClient and some of the options you have.

Firstly, a couple of bits of side reading:

– Docs on the HttpClient stack

– Mono blog post talking about the HttpWebRequest / HttpClient

– Jon’s post on Http Performance

What you should gather from these articles is that Microsoft is doing their best to set you up for success but not wanting to take any documentation for granted, let’s see what happens when we create a brand new Xamarin.Forms project and spin up an instance of the HttpClient. When creating the project I just picked the Blank Xamarin.Forms template and made sure that all three platforms were included. The code for creating the HttpClient just uses the zero-parameter constructor:

var client = new HttpClient();

Let’s run each platform and see what the HttpClient gives us (and at this point I haven’t updated any NuGet packages, framework versions or anything. This is just what VS2019 gives me when I create a new XF project).

UWP

image

Here we get the managed HttpClientHandler, rather than the newer (and arguably better) WinHttpHandler. Actually I didn’t find a definitive guide on which is better for UWP, although this stackoverflow post does seem to imply the WinHttpHandler would be the preferred choice, particularly if you want to leverage Http/2.

Android

image

Android is using the AndroidClientHandler which is what should give us the most up to date http experience.

iOS

image

iOS is using the NSUrlSessionHandler which is what should give us the most up to date http experience.

This all seems good (albeit that you might want to use the WinHttpHandler on UWP), so a lot of developers might never run into any issues. If you did want to adjust which handler is used on iOS and Android (again assuming you’re just using the HttpClient with the default constructor) you can do so via the properties dialog for the corresponding platform:

image

However, where things come unstuck is if you want to customise some of the http behaviour. In my previous post I demonstrated setting the compression flag but it could have equally been adding an additional header or changing the credentials that are sent as part of each request. In this case, it’s easy enough to use the overload of the HttpClient constructor that takes a HttpMessageHandler and use the managed HttpClientHandler implementation (as I demonstrated). As you’d have seen from the linked articles above, this isn’t ideal as the managed implementation doesn’t leverage the platform specific optimisations.
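For example, the managed route looks something like this (a sketch only, using the HttpClientHandler rather than a platform handler):

// Customise the managed handler (enabling gzip/deflate as an example) and
// pass it to the HttpClient constructor overload that takes a HttpMessageHandler.
var handler = new HttpClientHandler
{
    AutomaticDecompression = DecompressionMethods.GZip | DecompressionMethods.Deflate
};
var client = new HttpClient(handler);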

The better approach is for my application to register the platform specific handler, which in MvvmCross can be done via the Setup class (which is created by default when using MvxScaffolding):

UWP

public class Setup : MvxFormsWindowsSetup<Core.App, UI.App>
{
    protected override void InitializeIoC()
    {
        base.InitializeIoC();

        Mvx.IoCProvider.LazyConstructAndRegisterSingleton<HttpMessageHandler, IServiceOptions>(options =>
        {
            return new WinHttpHandler()
            {
                AutomaticDecompression = options.Compression
            };
        });
    }
}

Android

public class Setup : MvxFormsAndroidSetup<Core.App, UI.App>
{
    protected override void InitializeIoC()
    {
        base.InitializeIoC();

        Mvx.IoCProvider.LazyConstructAndRegisterSingleton<HttpMessageHandler, IServiceOptions>(options =>
        {
            return new AndroidClientHandler
            {
                AutomaticDecompression = options.Compression
            };
        });
    }
}

iOS

public class Setup : MvxFormsIosSetup<Core.App, UI.App>
{
    protected override void InitializeIoC()
    {
        base.InitializeIoC();

        Mvx.IoCProvider.LazyConstructAndRegisterSingleton<HttpMessageHandler, IServiceOptions>(options =>
        {
            var nsoptions = NSUrlSessionConfiguration.DefaultSessionConfiguration;
            if (options.Compression == System.Net.DecompressionMethods.None)
            {
                nsoptions.HttpAdditionalHeaders = new NSDictionary("Accept-Encoding", "identity", new object[] { });
            }
            var handler = new NSUrlSessionHandler(nsoptions);
            return handler;
        });
    }
}

Note: for iOS the NSUrlSessionHandler enables compression by default, so the code here illustrates how you could disable compression if you wanted to, by sending the identity Accept-Encoding header.

In this post I’ve shown you how you can register each of the native platform handlers to optimise the requests made when using the HttpClient. This post should be read in conjunction with my earlier post that registered the other classes necessary to create the HttpClient based on the registered handler. The only other change is that the HttpService constructor should accept an HttpMessageHandler instead of an ICustomHttpMessageHandler.

public class HttpService : IHttpService
{
    public HttpService(HttpMessageHandler httpMessageHandler, IServiceOptions options)
    {
        // The constructor now accepts the registered HttpMessageHandler directly
        HttpClient = new HttpClient(httpMessageHandler)
        {
            BaseAddress = new Uri(options.BaseUrl)
        };
    }

    public HttpClient HttpClient { get; }
}

Update: It’s worth noting that the WinHttpHandler used in the UWP example isn’t part of the core framework. Instead it is accessible via the System.Net.Http.WinHttpHandler NuGet package. Visual Studio provides a handy way to find and install this package – selecting the WinHttpHandler reference (where there is a build error) and looking at the intellisense options, there is an option to Install the System.Net.Http.WinHttpHandler package.

image

Publishing ASP.NET Core 3 Web API to Azure App Service with Http/2

In my previous post I was testing a new ASP.NET Core 3 Web API that I’d created that simply returns header and http information about the request. Having got everything working locally I decided that I should push it into an Azure App Service to make it accessible from anywhere (this seemed to be easier than attempting to connect to my locally running service from a Xamarin.Forms application). Here’s the process:

Right-click on the ASP.NET Core project and select Publish.

image

In this case we’re going to select App Service (ie a Windows host) and Create New, followed by the Publish button. Next we need to give the new App Service a name and specify both a Resource Group and an App Service Plan – in this case I’m going to create all of these as part of the publishing process

image

Hitting Create will firstly create the necessary Azure resources and then it will proceed with publishing the ASP.NET Core project into the App Service. Unfortunately, once this process has finished you’ll see that the launched url doesn’t load correctly:

image

And secondly, when you return to Visual Studio you’ll see a warning prompt indicating that ASP.NET Core 3 isn’t supported in Azure App Service right now.

image

Luckily the Microsoft documentation has you covered. If you go to the main documentation on publishing to Azure App Service there is a link to deploying preview versions of ASP.NET Core applications. This document covers two different ways to fix this issue – you can either install the preview site extensions for ASP.NET Core 3, or you can simply change your deployment to be a self-contained application. In this case we’re going to go with deploying a self-contained application, since this reduces any external dependencies, which seems sensible to me.

After returning to Visual Studio and dismissing the above version warning, you’ll see the Publish properties page with the default publish configuration (you can get back to this page by right-clicking your ASP.NET Core project and selecting Publish in the future).

image

We’re going to click the pencil icon alongside any of the summary properties to launch the Publish dialog, and change the Deployment Mode to Self-Contained and the Target Runtime to win-x86. You may be tempted to select win-x64, but only do this if the Platform setting on your App Service is set to 64 Bit, otherwise your service won’t start and you’ll see a 503 service error.

image

Click Save and then the Publish button to publish the application using the updated publishing properties. Note that if you’re on a network that has a slow uplink (eg ADSL) this might take a while, so you might consider jumping on a fast network (eg 4G mobile) to do the upload (and yes, this does make Australia sound like an under-developed nation when it comes to access to the internet – sigh!).

Without any further messing around, the ASP.NET Core application launches correctly:

image

Hmmm, but wait, shouldn’t it be reporting HTTP/2? After all, that’s what the browser was reporting when I ran the same service on Kestrel. There’s a couple of pieces to this answer, but before we get to them I want to remove any element of confusion as to what’s going on here by switching across to using curl – this way we have both control over what protocol we’re requesting and detailed logging on what protocol is being used (you’ll see why this is important in a minute). Executing the following:

curl https://headerhelper.azurewebsites.net/api/values -v

image

As you can see from the image, the response was indeed done over Http/1.1, which is consistent with the Http protocol listed by the service. Ok, so let’s try requesting Http/2

curl https://headerhelper.azurewebsites.net/api/values --http2 -v

image

This call is successful but again returns Http/1.1 – this is because curl is attempting to request an upgrade to http/2 but the service isn’t willing/able to upgrade the connection.

curl https://headerhelper.azurewebsites.net/api/values --http2-prior-knowledge -v

image

This call fails because curl is forcing the use of Http/2 when in fact the service isn’t able to communicate using Http/2. So, how do we fix this? Well the good news is that Azure App Service has a simple configuration setting that can be used to enable Http/2. Here I’m just setting the HTTP version in the Configuration page for the Azure App Service.

image

This can also be set via the resource explorer, as covered by a number of other people (eg this post). After making your change, don’t forget to Save, and then give the service 30-60 seconds to restart – if you attempt to request the service immediately you’ll still get Http/1.1 responses.

After the change has been applied, here’s what we see when we use the same curl commands as above:

curl https://headerhelper.azurewebsites.net/api/values --http2 -v

curl https://headerhelper.azurewebsites.net/api/values --http2-prior-knowledge -v

image

Note that it doesn’t matter whether we attempt to negotiate the http/2 upgrade (--http2 flag) or force the point (--http2-prior-knowledge), in both cases the connection reports HTTP/2. However, what’s not cool is that the Http protocol returned by the service is HTTP/1.1 – this is what is seen by the ASP.NET Core Web API.

What we’re seeing here is that Azure is terminating the Http/2 connection and then communicating to the underlying ASP.NET Core application using Http/1.1. This is consistent with the way that SSL support is done – Azure terminates the SSL connection, meaning that your ASP.NET Core application doesn’t need to worry about fronting a secure service. This is awesome for developers that want to add SSL or HTTP/2 to their existing services – you just enable the option in the configuration page of your App Service. However, the down side is that it makes leveraging some of the underlying capabilities of HTTP/2 impossible – for example, it’s currently impossible to host a GRPC service in an App Service as this relies on HTTP/2 to function.

The question still remains – when I request the service from the browser, what protocol is being used? The response returns HTTP/1.1 because that’s what our ASP.NET Core application sees. However, if we look at the browser debugging tools, we can see that the response is indeed being handled over a HTTP/2 connection. Note that this isn’t exposed in the UI of the debugging tools but if you save the request you can see the full details:

image

Opening the HAR file in VS Code:

image

And there you have it – deploying an ASP.NET Core 3 application to Azure App Service and exposing it using HTTP/2.

Testing ASP.NET Core Web API on Kestrel with Fiddler Composer Fails

It’s been one of those days when you set out to do something so simple and yet you get distracted by having to fix something that should just work. I’ll set the scene – I wanted to generate a simple ASP.NET Core Web API that would return the HTTP protocol and headers of a particular request. All up I think the code for this took me about 30 seconds to write, as follows (this was in a new ASP.NET Core 3 project created using the Api template in Visual Studio 2019):

[HttpGet]
public ActionResult<IEnumerable<string>> Get()
{
    var headers = (from h in Request.Headers
                   select h.Key + " - " + h.Value).ToList();
    headers.Add("Http - " + string.Join(",", Request.Protocol));
    return headers;
}

When I ran this in Visual Studio it launched the browser and did indeed return the headers and HTTP protocol version. At this point I was a bit surprised as it returned Http/2 even though I had done nothing either in the browser or the service to indicate that I wanted Http/2.

image

Realising that this was something that the browser was negotiating I thought I’d see what result I got when I called the service from Fiddler where I could control what Http version was being requested. As you can see from the image what I got back was a 502 response:

[Fiddler] The connection to 'localhost' failed.
System.Security.SecurityException Failed to negotiate HTTPS connection with server.
fiddler.network.https> HTTPS handshake to localhost (for #1) failed. System.IO.IOException Unable to read data from the transport connection: An existing connection was forcibly closed by the remote host.

image

This was frustrating because it should have just worked. Furthermore, there was no exception raised within my ASP.NET Core project. I was running my project on Kestrel, which was also exposing a non-https endpoint, http://localhost:5000/api/values. However, the Api template adheres to best practice and comes with the line "app.UseHttpsRedirection()" in Startup.cs, which caused the request from Fiddler to be redirected to https, which of course then failed as before. If I removed the redirection, the request again failed with the 502 exception.
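
For context, this is roughly the shape of the Configure method that the Api template generates – the exact contents may vary between ASP.NET Core 3 previews, and the commented-out line is the redirection change described above:

public void Configure(IApplicationBuilder app, IWebHostEnvironment env)
{
    if (env.IsDevelopment())
    {
        app.UseDeveloperExceptionPage();
    }

    // Removing (or commenting out) this line stops the plain http request from
    // Fiddler being bounced across to the https endpoint.
    // app.UseHttpsRedirection();

    app.UseRouting();
    app.UseAuthorization();
    app.UseEndpoints(endpoints =>
    {
        endpoints.MapControllers();
    });
}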

Luckily I've been in this situation before and realised that whilst there was no exception being raised in my code, there was most likely an exception being thrown internally as part of the ASP.NET Core middleware. To investigate this further I firstly made sure that all Common Language Runtime Exceptions would trigger a break in Visual Studio (you need to run the application in order to see this window by default, or you can open it from the Debug / Windows / Exception Settings menu item).

image

By itself this isn't sufficient; you also need to uncheck the "Enable Just My Code" checkbox under the Tools / Options menu item.

image

Invoking the service from Fiddler now generates the following exception:

System.Security.Authentication.AuthenticationException: ‘Authentication failed, see inner exception.’
InnerException: Win32Exception: The client and server cannot communicate, because they do not possess a common algorithm.

image

After a bit of investigation I realised that the combination of this exception and the 502 response returned to Fiddler was pointing to a mismatch between the protocols being requested and those supported. Out of the box, Fiddler only supports a very limited set of protocols for secure connections, shown on the HTTPS tab in the Options dialog.

image

Clicking on the list of protocols allows you to edit them, in this case to include TLS 1.1 and 1.2.

image

After applying this change I was able to execute the requests from Fiddler (in this case I've left the https redirection off) and see that the HTTP protocol matches the HTTP/1.1 of the request.

image

The truly annoying thing after all this effort is that Fiddler doesn't appear to actually support HTTP/2, despite there being a dropdown on the Compose tab for it. Using the HTTP/2.0 option causes exceptions to be raised within the ASP.NET Core application. Furthermore, this seems to be consistent with what happens if you attempt to intercept requests coming from Chrome (they get reverted to HTTP/1.1) and with this post.

What does work is using curl from the command line. However, you'll find that the version you have installed may not support HTTP/2 requests. If this is the case, you should download the latest version from https://curl.haxx.se/download.html

At this point it’s also worth having a read through the ASP.NET Core 3 information on Kestrel hosting, specifically the part that talks about http/2 support. In your appsettings.json you can adjust whether you want Http1, Http1AndHttp2, or just Http2 support.

image 
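
As a rough sketch, the relevant section of appsettings.json looks something like the following (shown here with just Http2 – swap in Http1 or Http1AndHttp2 as required):

{
  "Kestrel": {
    "EndpointDefaults": {
      "Protocols": "Http2"
    }
  }
}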

Depending on what protocols you choose to support, you’ll find that different CURL commands will work. Here are some examples:

Protocols = Http2 in the appsettings.json file

curl http://localhost:5000/api/values --http2 -v -k

This attempts to connect using HTTP/1.1 with the Connection: Upgrade, HTTP2-Settings headers, but this fails because the connection is plain HTTP (the upgrade isn't a supported scenario on Kestrel).

curl https://localhost:5001/api/values --http2 -v -k

As part of the HTTPS negotiation this also upgrades from an HTTP/1.1 connection to HTTP/2. The request succeeds over HTTP/2.

curl http://localhost:5000/api/values --http2-prior-knowledge -v -k

This forces an HTTP/2 connection, but still unsecured.

Note that the -v option for curl shows verbose information, whilst -k is required when connecting to Kestrel on the local machine since the default developer certificate isn't trusted.

Lazy Dependencies and Interfacing Refit with MvvmCross and Prism

Refit is one of those libraries that makes your life as a developer that much easier when you have to write code to work with services. By way of an example let's look at the GET List Users method from the sample REST service available at https://reqres.in/

image

Running the response json through JsonToCSharp we get a class model similar to:

public class User
{
    public int id { get; set; }
    public string first_name { get; set; }
    public string last_name { get; set; }
    public string avatar { get; set; }
}

public class UserList
{
    public int page { get; set; }
    public int per_page { get; set; }
    public int total { get; set; }
    public int total_pages { get; set; }
    public List<User> data { get; set; }
}

Next we define the interface for our service (Note that the Get attribute defines the path, including the page parameter that’s passed into the RetrieveUsers method):

public interface IUserService
{
     [Get("/api/users?page={page}")]
     Task<UserList> RetrieveUsers(int page);
}

Of course, we need to add a reference to the Refit NuGet package to our application. Lastly we need to retrieve an instance of the UserService, which is where the magic comes in – behind the scenes, Refit generates the appropriate implementation for the IUserService so that all you need to do is retrieve it using code similar to:

var userService = RestService.For<IUserService>("https://reqres.in/");

This is all well and good but when it comes to writing applications using Prism or MvvmCross, what we really want to be able to do is have the IUserService injected into the constructor of our view model.

Injecting Refit with MvvmCross

We'll start with MvvmCross (I've created a brand new project using MvxScaffolding to get me started) and I've modified the HomeViewModel to add IUserService as a parameter to the constructor. The service is then used in the ViewAppeared method to retrieve the first page of users.

public class HomeViewModel : BaseViewModel
{
    private IUserService UserService { get; }

    public HomeViewModel(IUserService userService)
    {
        UserService = userService;
    }

    public override async void ViewAppeared()
    {
        base.ViewAppeared();

        var users = await UserService.RetrieveUsers(1);
        Debug.WriteLine(users.data.Count);
    }
}

Now the trick is that we need to register the instance of the IUserService with MvvmCross, which we can do at the end of the Initialize method of App.cs in the Core project. We don't want all of the services to be created at start-up, so instead we register the type using the LazyConstructAndRegisterSingleton extension method. For example:

public override void Initialize()
{
     …
     Mvx.IoCProvider.LazyConstructAndRegisterSingleton<IUserService>
         (() => RestService.For<IUserService>("https://reqres.in/"));
}

There are a couple of issues here:

– String literal for the host url

– No ability to override the HttpClient used by the IUserService implementation

Let's see how we can resolve this by registering some other instances that we can use as part of creating the IUserService. One of the other options for the For method is that we can supply an HttpClient that already has a BaseAddress set. We can also set an HttpMessageHandler on the HttpClient instance that will allow us to customise the behaviour of the HttpClient. Unfortunately the only interface that the HttpClient implements is IDisposable. To get around this we wrap the HttpClient instance in a service which returns the instance as a property:

public interface IHttpService
{
     HttpClient HttpClient { get; }
}


public class HttpService : IHttpService
{
     public HttpService(ICustomHttpMessageHandler customHttpMessageHandler, IServiceOptions options)
     {
         HttpClient = new HttpClient(customHttpMessageHandler as HttpMessageHandler)
         {
             BaseAddress = new Uri(options.BaseUrl)
         };
     }
     public HttpClient HttpClient { get; }
}

Note that in this example the HttpService implementation relies on an ICustomHttpMessageHandler and IServiceOptions. These are here to illustrate the chaining of dependencies that can all be lazy loaded.

public interface ICustomHttpMessageHandler
{
}
public class CustomHttpMessageHandler : DelegatingHandler, ICustomHttpMessageHandler
{
     public CustomHttpMessageHandler(IServiceOptions options)
     {
         InnerHandler = new HttpClientHandler()
         {
             AutomaticDecompression = options.Compression
         };
     }
}
public interface IServiceOptions
{
     string BaseUrl { get; }
     DecompressionMethods Compression { get; }
}
public class ServiceOptions : IServiceOptions
{
     public string BaseUrl { get; set; }
     public DecompressionMethods Compression { get; set; }
}

The final registration code in the Initialize method is:

public override void Initialize()
{
     …
     Mvx.IoCProvider.LazyConstructAndRegisterSingleton<IServiceOptions>(() => new ServiceOptions()
     {
         BaseUrl = "https://reqres.in",
         Compression = DecompressionMethods.Deflate | DecompressionMethods.GZip
     });
     Mvx.IoCProvider.LazyConstructAndRegisterSingleton<ICustomHttpMessageHandler, CustomHttpMessageHandler>();
     Mvx.IoCProvider.LazyConstructAndRegisterSingleton<IHttpService, HttpService>();
     Mvx.IoCProvider.LazyConstructAndRegisterSingleton<IUserService, IHttpService>
         (service => RestService.For<IUserService>(service.HttpClient));
}

Injecting Refit with Prism

I'm not going to cover generating the classes and IUserService interface. Instead we're going to look at how we can register and then make use of the IUserService within Prism. One of the limitations of Prism is that there is no way to register a singleton for lazy creation using a function callback. This means we'd need to register an actual implementation of the IUserService, which would mean creating the service instance on app startup instead of when the service is first used. Luckily there is a workaround that makes use of the Lazy<T> class. Again, Lazy<T> doesn't implement an interface, so we'll inherit from Lazy<T> and implement a new interface, ILazyDependency<T>.

public interface ILazyDependency<T>
{
     T Value { get; }
}
public class LazyDependency<T> : Lazy<T>, ILazyDependency<T>
{
     public LazyDependency(Func<T> valueFactory) : base(valueFactory)
     {
     }
}

And now here’s the registration code (added to the RegisterTypes method in App.xaml.cs):

protected override void RegisterTypes(IContainerRegistry containerRegistry)
{
     …
     containerRegistry.RegisterInstance<IServiceOptions>(new ServiceOptions()
     {
         BaseUrl = "https://reqres.in",
         Compression = DecompressionMethods.Deflate | DecompressionMethods.GZip
     });
     containerRegistry.RegisterSingleton<ICustomHttpMessageHandler, CustomHttpMessageHandler>();
     containerRegistry.RegisterSingleton<IHttpService, HttpService>();
     containerRegistry.RegisterInstance<ILazyDependency<IUserService>>(
         new LazyDependency<IUserService>(() =>
         RestService.For<IUserService>(
             PrismApplicationBase.Current.Container.Resolve<IHttpService>().HttpClient)
         ));
}

This code looks very similar to the registration code when using MvvmCross. However, the difference is that instead of registering IUserService, it registers ILazyDependency<IUserService>. This is important to note because we need to remember to request the correct interface instance in our view model:

public class MainPageViewModel : ViewModelBase
{
     private IUserService UserService { get; }
     public MainPageViewModel(INavigationService navigationService, ILazyDependency<IUserService> userService)
         : base(navigationService)
     {
         UserService = userService.Value;
         Title = "Main Page";
     }
     public override async void OnNavigatedTo(INavigationParameters parameters)
     {
         base.OnNavigatedTo(parameters);


        var users = await UserService.RetrieveUsers(0);
         Debug.WriteLine(users.data.Count);
     }
}

And there you have it, we’ve abstracted the Refit implementation away from our view models, making them ready for testing.
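
To illustrate that last point, here's a minimal sketch of what a test for the Prism MainPageViewModel could look like. This isn't part of the projects above – it assumes Moq and xUnit have been added to a separate test project – but it shows that no Refit or HttpClient plumbing is needed once the view model only depends on the interfaces:

using System.Collections.Generic;
using Moq;
using Prism.Navigation;
using Xunit;
// plus using directives for the app's ViewModels and Services namespaces

public class MainPageViewModelTests
{
    [Fact]
    public void RetrieveUsersIsCalledWhenNavigatedTo()
    {
        // Mock the Refit-generated service so no network call is made
        var userService = new Mock<IUserService>();
        userService.Setup(s => s.RetrieveUsers(It.IsAny<int>()))
            .ReturnsAsync(new UserList { data = new List<User> { new User { id = 1 } } });

        // Wrap the mock in the lazy dependency abstraction used at registration time
        var lazyUserService = new Mock<ILazyDependency<IUserService>>();
        lazyUserService.Setup(l => l.Value).Returns(userService.Object);

        var viewModel = new MainPageViewModel(
            new Mock<INavigationService>().Object, lazyUserService.Object);

        viewModel.OnNavigatedTo(new Mock<INavigationParameters>().Object);

        // The view model should have requested the first page of users
        userService.Verify(s => s.RetrieveUsers(0), Times.Once);
    }
}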

The following code was added to the MvvmCross project to simplify the registration of the IUserService:

public static class MvxIoCContainerExtensions
{
     private static Func<TInterface> CreateResolver<TInterface, TParameter1>(
         this IMvxIoCProvider ioc,
             Func<TParameter1, TInterface> typedConstructor)
             where TInterface : class
             where TParameter1 : class
     {
         return () =>
         {
             ioc.TryResolve(typeof(TParameter1), out var parameter1);
             return typedConstructor((TParameter1)parameter1);
         };
     }


    public static void LazyConstructAndRegisterSingleton<TInterface, TParameter1>(this IMvxIoCProvider ioc, Func<TParameter1, TInterface> constructor)
         where TInterface : class
         where TParameter1 : class
     {
         var resolver = ioc.CreateResolver(constructor);
         ioc.RegisterSingleton(resolver);
     }
}

Note: This code has already been merged into MvvmCross but at time of writing isn’t in the stable release.

Deploying ASP.NET Core 3 to Linux Azure App Service with Docker

In my earlier post I covered creating and debugging an ASP.NET Core service using Docker Desktop. I'm going to build on that and look at how you then push the service into an Azure App Service. Normally I'd simply use the publish option that would allow me to push the service directly into an Azure App Service – this would run the service in much the same way as it runs locally if I was debugging on IIS Express. However, since I'm debugging via a Docker container I figured it'd be great to push to Azure in a way that it continues to run in a container. Actually the reason I explored this in the first place was that I've been experimenting with GRPC and currently this doesn't seem to be supported on a regular Azure App Service. I figured if I could run my code in a container there would be fewer restrictions, so I would be able to get GRPC to work (this is still a work in progress).

What I did notice in Visual Studio is that when I right-clicked on my ASP.NET Core project and selected Publish, one of the options I saw was to Create new App Service for Containers.

image

Clicking Publish started the wizard to allow me to create the appropriate resources in Azure.

image

Clicking Create will trigger the creation of the selected Azure resources and then proceed to publish the application.

Note: My initial attempt to publish failed with an error:

“The system cannot find the file specified. In the default daemon configuration on Windows, the docker client must be run elevated to connect. This error may also indicate that the docker daemon is not running.”

Turns out I didn't have Docker Desktop running (I have so many apps like Slack, Teams, Skype etc that run in the background that I force quit most of them each day to try to retain some semblance of a reasonable battery life on my laptop).

Once I realised that I needed to start Docker Desktop, the publish process kicked off and I saw the Docker deployment console appear with quite a detailed breakdown of the upload status – seriously, why can't Visual Studio's build progress window be this useful? I really need to hand it to the Docker team as their uploading was super resilient. I started off uploading over our standard wifi connection, which is based on an ADSL connection, so minimal upload bandwidth. I got impatient so switched mid-upload across to my mobile hotspot – after a second or two of delay, the upload retried and continued without missing a beat.

image

Once publishing has completed, the Azure App Service should be all set up and ready to go with your code already published. You can use the Site URL in the publish information pane to launch the service for testing. Since my application was a Web API, I've appended /api/values so that I get a valid response from the Get request.

image
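
You can also hit the endpoint from the command line with curl – the hostname here is a placeholder for whatever your App Service ends up being called:

curl https://<your-app-service>.azurewebsites.net/api/values -v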

One of the things that continues to amaze me about Visual Studio is the ability to create and publish new projects to Azure. Of course, for production apps you wouldn't follow this process at all, but it does make spinning up end-to-end prototype applications a walk in the park.

Shell in v4 of Xamarin.Forms and Visual Studio 2019

Back in late 2018 I did a post on getting started with Shell where I did a “File-New-Project” with Xamarin.Forms Shell. In this post I’m going to do a quick update to that post looking at creating a new Shell application with Visual Studio 2019, and then upgrading to the preview of Xamarin.Forms Shell v4.

As all great projects start, let’s get going with the Create a new project dialog. Search for Xamarin and select the Mobile App (Xamarin.Forms) template.

image

Give some basic project information

image

Select Shell as the application template.

Note: The Windows (UWP) option has come back (it was removed in the initial release of Visual Studio 2019) when creating Xamarin.Forms applications. However, since Shell isn't supported by UWP at the moment, the Windows (UWP) option is currently disabled.

image

And there you have it: a new Xamarin.Forms application that leverages Shell, which you can build and run.

image

It looks a little like this, with bottom tabs and an ADD button in the navigation bar.

image

But let’s see what v4 is going to give us. Select “Include prerelease” and update to the latest packages.

image

Xamarin.Forms Shell v4

One addition that is more of a cosmetic improvement is the naming of ShellItem and ShellSection – I think the initial intent was that these should be somewhat abstracted from the actual UI implementation. However, as Shell has matured, the reality is that a ShellItem maps to an item that appears in the flyout and a ShellSection maps to a tab.

Wow, hold on, what are these things ShellItem, ShellSection and ShellContent? If you haven't been following what the Xamarin.Forms team have been working on then Shell might come as a bit of a surprise. However, as nearly every app developer will admit, one of the most painfully tedious parts of building an application is creating and linking all the pages of the application so that the user can navigate between them. The cognitive load of working out how to do master-details or tabs, even in Xamarin.Forms, makes it hard for developers to get started. What Shell aims to achieve is to provide a declarative way for you to define how your application is structured.

Essentially Shell represents a hierarchy of navigation elements:

– Shell – this represents the application as a whole

– ShellItem – these are the first-level pages of the application. Currently, if there are multiple ShellItems defined, they'll automatically appear in a flyout.

– ShellSection – a page can be broken into sections, which essentially map to bottom tabs. If a ShellItem only has one ShellSection, no tabs will show.

– ShellContent – this is the actual page content that will be displayed within the bottom tabs. If a ShellSection has multiple ShellContent elements, tabs will appear at the top of the page, giving you a tab-sandwich display.

In v4, to make it easier for developers to clearly see what is going on, additional classes, FlyoutItem and Tab, were added that sub-class ShellItem and ShellSection respectively. The following example layouts use the new element names – if you're still on v3.6 of Xamarin.Forms you will need to stick with ShellItem and ShellSection.

Some examples:

Single Page Application

image

Notes:

– For a single ShellContent there's no need to include a Tab element; simply nest it directly under the FlyoutItem.

– The FlyoutBehavior attribute can be used to hide/show the flyout on different pages in the application, or (as in this case) across the whole application. Use Shell.FlyoutBehavior on individual FlyoutItem elements to hide the flyout on those pages.

<Shell … FlyoutBehavior="Disabled">
    <FlyoutItem … >
        <ShellContent … />
    </FlyoutItem>
</Shell>

Two Page Application With Flyout

image

<Shell …>
    <FlyoutItem Title="Home" … >
        <ShellContent … />
    </FlyoutItem>
    <FlyoutItem Title="Single Page" … >
        <ShellContent … />
    </FlyoutItem>
</Shell>

Looking at the different combination of FlyoutItem, Tab and ShellContent we can get different page behaviours:

Bottom Tabs

image

<FlyoutItem Title="Bottom Tabs" … >
    <Tab Title="Home" >
        <ShellContent … />
    </Tab>
    <Tab Title="Activity" … >
        <ShellContent … />
    </Tab>
</FlyoutItem>

Top Tabs

image

<FlyoutItem Title="Top Tabs" … >
    <Tab Title="Activity" … >
        <ShellContent Title="Shared" … />
        <ShellContent Title="Notifications" … />
    </Tab>
</FlyoutItem>

Tab Sandwich

image

<FlyoutItem Title="Tab Sandwich" … >
    <Tab Title="Activity" … >
        <ShellContent Title="Shared" … />
        <ShellContent Title="Notifications" … />
    </Tab>
    <Tab Title="Updates" … >
        <ShellContent Title="Updates" … />
        <ShellContent Title="Home" … />
    </Tab>
</FlyoutItem>

As you will have briefly seen, it's possible to rapidly stand up the basics of an application using a combination of flyouts and tabs to structure your application. In this post we've referenced the preview of the next version of Xamarin.Forms Shell, so you can expect that some of the features, particularly around navigation, are subject to change in the coming months.

Debugging ASP.NET Core with Visual Studio and Docker Desktop

With Visual Studio 2019 hot off the press I’ve been experimenting with a few of the new project templates and the improvements that have been made in Visual Studio. In this post I’m going to cover how to solve a particularly annoying problem I encountered when attempting to run and debug an ASP.NET Core 3 application from within Visual Studio, hosted within a Docker image. I’ll walk through the whole process of creating the new project and the issue I ran into when first attempting to debug the application.

When you launch Visual Studio 2019, or go to create a new project, you’ll see the Create a new project dialog. We’re going to select the ASP.NET Core Web Application template.

image

Next we need to provide the standard project information such as name and location.

image

The next stage is to provide more information about how the template should be configured. Here we're selecting the API template from the left of the screen, and checking both the https and Enable Docker Support options.

Note: At this point if you haven't already downloaded and installed Docker Desktop, do it now. It's half a GB or so to download, so not small, and may take a while depending on your network bandwidth.

image

After creating the template, you’ll see that there are a number of options available to us in order to run the application. We’re going to proceed with the Docker option.

image

If you haven’t already, make sure you have launched Docker Desktop, otherwise you’ll see the following warning in Visual Studio when you attempt to run the application.

image

Unless you've previously set up Docker Desktop you'll most likely see the following error. Essentially you need to grant Docker Desktop access to a drive in order to create images etc.

image

Right-click the Docker icon in the tray and select Settings

image

Under the Shared Drives tab, check the local drives you want to make available to Docker Desktop.

image

When you click Apply you'll be prompted to authenticate. It will detect the credentials of the current user, which for me is an Azure Active Directory user.

image

Important: Unfortunately, after providing my password and clicking OK, Docker Desktop decides that it will uncheck the drive that I had selected. This seems to be a common issue, raised by a couple of different people online. Anyhow, the following steps demonstrate how to set up a different account and use it to allow Docker Desktop to access the drive. Whilst a bit hacky, this does seem to be the only workaround for this issue.

To set up a new account, launch Settings, click Other users and then click the + button under Other users.

image

When prompted to enter an email or phone number, instead click the "I don't have this person's sign-in information" option.

image

Next, click the “Add a user without a Microsoft account” option

image

When prompted, enter a username, password and some security questions. Next you need to change this user to be an administrator, so expand out the account under Other users and click Change account type.

image

Change Account type to Administrator

image

Return now to Docker Desktop and enter the new account as part of setting up the shared drive. You shouldn’t see any further issues within the Docker Desktop application.

image

Attempting to run the application from within Visual Studio again reveals an error, this time complaining it doesn't have authority to create or adjust folders (including creating files).

image

Locate the folder indicated in the error message, right-click on the folder and select Properties. From the Security tab, click Edit.

image

Add the local account you just created and make sure it’s assigned all permissions.

image

You may need to repeat this process for 2 or 3 folders that Docker Desktop requires access to, and in some cases assigning permissions can take a minute or two. Once done, your application will be launched from within a Docker image, with the Visual Studio debugger attached.

ViewModel to ViewModel Navigation in a Xamarin.Forms Application with Prism and MvvmCross

I’m a big fan of the separation that the Mvvm pattern gives developers in that the user interface is encapsulated in the view (Page, UserControl etc) and that the business logic resides in the ViewModel/Model. When structuring the solution for an application I will go so far as to separate out my ViewModels into a separate project from the views, even with Xamarin.Forms where the views can be defined in a .NET Standard library.

One of the abstractions that this lends itself to is what is referred to as ViewModel to ViewModel navigation – rather than the ViewModel explicitly navigating to a page, or bubbling an event up to the corresponding view to get the view to navigate to the next page, ViewModel to ViewModel navigation allows the ViewModel to call a method such as Navigate(newViewModel), where the newViewModel parameter is either the type of the ViewModel to navigate to, or in some frameworks an actual instance of the new ViewModel.

MvvmCross

Let’s see this in action with MvvmCross first – I’m going to start here because ViewModel to ViewModel navigation is the default navigation pattern in MvvmCross. I’ll start with a new project, created using the MvxScaffolding I covered in my previous post, using MvvmCross in a Xamarin.Forms application. The single view template already comes with a page, HomePage, with corresponding ViewModel, HomeViewModel. To demonstrate navigation I’m going to add a second page and a second ViewModel. Firstly, I’ll add a new class, SecondViewModel, which will inherit from BaseViewModel (a class generated by MvxScaffolding which inherits from MvxViewModel that’s part of MvvmCross).

public class SecondViewModel : BaseViewModel
{
}

Next, I’ll add a new ContentPage called SecondPage (note the convention here that there is a pairing between the page and the ViewModel ie [PageName]Page maps to [PageName]ViewModel)

image

MvvmCross supports automatic registration of pages and ViewModels but it does require that the page inherits from the Mvx base class, MvxContentPage. I just need to adjust the root XAML element from

<ContentPage …

to

<views:MvxContentPage x:TypeArguments="viewModels:SecondViewModel" …

The inclusion of the TypeArguments means that the generic overload of MvxContentPage is used, providing a helpful ViewModel property by which to access the strongly typed ViewModel that is databound to the page.

Now that we have the second page, we just need to be able to navigate from the HomePage. I’ll add a Button to the HomePage so that the user can drive the navigation:

<Button Text="Next" Clicked="NextClicked" />

With method NextClicked as the event handler (here I’m just using a regular event handler but in most cases this would be data bound to a command within the HomeViewModel):

private async void NextClicked(object sender, EventArgs e)
{
     await ViewModel.NextStep();
}

And of course we need to add the NextStep method to the HomeViewModel that will do the navigation. The HomeViewModel also needs access to the IMvxNavigationService in order to invoke the Navigate method – this is done by adding the dependency to the HomeViewModel constructor.

public class HomeViewModel : BaseViewModel
{
     private readonly IMvxNavigationService navigationService;


    public HomeViewModel(IMvxNavigationService navService)
     {
         navigationService = navService;
     }
     public async Task NextStep()
     {
         await navigationService.Navigate<SecondViewModel>();
    }
}

As you can see from this example the HomeViewModel only needs to know about the SecondViewModel, rather than the explicit SecondPage view. This makes it much easier to test the ViewModel as you can provide a mock IMvxNavigationService and verify that the Navigate method is invoked.

Prism

Now let's switch over to Prism; I've used the Prism Template Pack to create a new project. To add a second page I'll add a SecondPageViewModel, which in the case of Prism inherits from ViewModelBase and requires the appropriate constructor that provides access to the INavigationService. Note that the naming convention with Prism is slightly different from MvvmCross in that the ViewModel name is [PageName]PageViewModel (i.e. both the page and the viewmodel have the Page suffix after the PageName, e.g. SecondPage and SecondPageViewModel).

public class SecondPageViewModel : ViewModelBase
{
     public SecondPageViewModel(INavigationService navigationService) : base(navigationService)
     {
     }
}

I’ll add a new ContentPage called SecondPage but unlike MvvmCross I don’t need to alter the inheritance of this page. Instead what I do need to do is register the page so that it can be navigated to. This is done in the App.xaml.cs where there is already a RegisterTypes method – note the additional line to register SecondPage.

protected override void RegisterTypes(IContainerRegistry containerRegistry)
{
     containerRegistry.RegisterForNavigation<NavigationPage>();
     containerRegistry.RegisterForNavigation<MainPage, MainPageViewModel>();
     containerRegistry.RegisterForNavigation<SecondPage, SecondPageViewModel>();
}

Similar to the MvvmCross example, I'll add a button to the MainPage (the first page of the Prism application created by the template) with code behind to call the NextStep method on the MainPageViewModel.

private async void NextClicked(object sender, EventArgs e)
{
     await (BindingContext as MainPageViewModel).NextStep();
}

Note that because the MainPage just inherits from the Xamarin.Forms ContentPage there's no property to expose the data bound viewmodel. Hence the casting of the BindingContext, which you'd of course wrap with null checking and error handling in a real-world application.

public async Task NextStep()
{
     await NavigationService.NavigateAsync("SecondPage");
}

The NextStep method invokes the NavigateAsync method using a string literal for the SecondPage – I'm really not a big fan of this since it a) uses a string literal and b) requires that the ViewModel knows about the view that's being navigated to. So let's adjust this slightly by changing the way that pages and ViewModels are registered. The RegisterForNavigation method accepts a parameter that allows you to override the navigation path, meaning we can set it to be the name of the ViewModel instead of the name of the page.

protected override void RegisterTypes(IContainerRegistry containerRegistry)
{
     containerRegistry.RegisterForNavigation<NavigationPage>();
     containerRegistry.RegisterForNavigation<MainPage, MainPageViewModel>(nameof(MainPageViewModel));
     containerRegistry.RegisterForNavigation<SecondPage, SecondPageViewModel>(nameof(SecondPageViewModel));
}

The navigation methods would then look like:

public async Task NextStep()
{
     await NavigationService.NavigateAsync(nameof(SecondPageViewModel));
}

But I think we can improve this further still by defining a couple of extension methods:

public static class PrismHelpers
{
     public static void RegisterForViewModelNavigation<TView, TViewModel>(this IContainerRegistry containerRegistry)
         where TView : Page
         where TViewModel : class
     {
         containerRegistry.RegisterForNavigation<TView, TViewModel>(typeof(TViewModel).Name);
     }


    public static async Task<INavigationResult> NavigateAsync<TViewModel>(this INavigationService navigationService)
         where TViewModel : class
     {
         return await navigationService.NavigateAsync(typeof(TViewModel).Name);
     }
}

Using these extension methods we can update the registration code:

protected override void RegisterTypes(IContainerRegistry containerRegistry)
{
     containerRegistry.RegisterForNavigation<NavigationPage>();
     containerRegistry.RegisterForViewModelNavigation<MainPage, MainPageViewModel>();
     containerRegistry.RegisterForViewModelNavigation<SecondPage, SecondPageViewModel>();
}

And then the navigation code:

public async Task NextStep()
{
     await NavigationService.NavigateAsync<SecondPageViewModel>();
}

The upshot of these changes is that there’s almost no difference between the MvvmCross method of navigation and what can be done with a little tweaking with Prism.

Scaffolding Your Next MvvmCross Xamarin.Forms Project

One of the things I liked about the getting started experience with Prism was that there was a Visual Studio extension that made creating a new project super simple. Whilst I know that MvvmCross doesn’t provide something like that out of the box, I decided to take a look at some of the project/solution templates that the community have created. The Getting Started page on the MvvmCross website does maintain a list of MvvmCross templates but I must confess that some of these are a little dated. I just went through and opened each of the links, and there were only two that seemed to be recent, Mvx Toolkit and MvxScaffolding. I downloaded both extensions and just happened to try out MvxScaffolding first. Here’s a quick summary of creating a new Xamarin.Forms application using MvxScaffolding

Search for Mvx to find the scaffolding templates

image

Basic project details

image

Now we’re into the MvxScaffolding custom dialog – wow, look at how nice this is.

image

I went with the Single Item template – it’s a tad confusing that you have to click on the grey circle to pick each option, rather than clicking the whole card. Next up, pick the platform and which test projects you want generated.

image

I added UWP support, and asked for all the test projects

image

A quick summary before the projects are created

image

The final solution

image

And of course, the running application.

image

This was a bit mind blowing to be honest – the level of detail in this extension was awesome and I was able to generate the runnable application in under a minute. I like the way the projects are separated and that it can generate all the test projects.

Resolving Dependencies In Platform Pages, Renderers, Effects and Elements with Xamarin.Forms and Prism

We've been using Prism for a number of our Xamarin.Forms projects and for the most part we rely on the services being injected into our view models, but a scenario came up recently where we wanted to access one of our services from within the platform implementation of a renderer. We were using DryIoC and needed to come up with a mechanism to resolve dependencies that had been registered with DryIoC via Prism. I recalled seeing in Brian's What's New in Prism 7.1 post that support had been added for the Xamarin.Forms Dependency Resolver and was wondering whether we could use that to allow us to resolve dependencies by calling Resolve on the static DependencyService that Xamarin.Forms exposes. In this post I'm going to walk through creating a simple Prism application, registering a service and then retrieving it from the code behind of a UWP page (but the same process applies for any renderer, effect, page or element).

To get started with a new Prism application I’m going to make use of the Prism Template Pack that installs as a Visual Studio Extension and includes a variety of project and item templates. In Visual Studio 2019 when you launch the application you can select Create a New Project and be given the option to search for project templates.

image 

Once you’ve selected the Prism Bank App (Xamarin.Forms) you’ll then be prompted to specify the name and location of the project

image

The final step is a Prism specific dialog that allows you to specify which platforms and what IoC container you want to use

image

In this case we're going with DryIoC but I think the technique will work equally well if you select AutoFac or Unity. After hitting Create Project our solution is set up with four projects: one for each target platform and a project that contains both our views and view models.

image

My preference is to separate views and viewmodels further but for the time being we'll leave the project structure as it is. The next step is to create and register the service that we want to retrieve from our platform-specific code.

public interface IFancyService
{
     string WelcomeText { get; }
}


public class MyFancyService : IFancyService
{
     public string WelcomeText => "Hello Dependency World!";
}

The MyFancyService needs to be registered with the DI container, which we can do by adding a line to the RegisterTypes method in the App.xaml.cs

protected override void RegisterTypes(IContainerRegistry containerRegistry)
{
     containerRegistry.RegisterForNavigation<NavigationPage>();
     containerRegistry.RegisterForNavigation<MainPage, MainPageViewModel>();


    containerRegistry.Register<IFancyService, MyFancyService>();
}

Let’s switch across to the code behind of the MainPage in the UWP application – note that this page is different from the MainPage that’s in the Xamarin.Forms project which is the page that’s displayed on all platforms. The MainPage in the UWP project is used to host the Xamarin.Forms application when it’s run on Windows. I’m going to add the OnNavigatedTo override method and in it I’m going to attempt to resolve the IFancyService interface using the Xamarin.Forms DependencyService.

protected override void OnNavigatedTo(NavigationEventArgs e)
{
     base.OnNavigatedTo(e);


    var fancy = DependencyService.Resolve<IFancyService>();
}

Running this code we’ll see that the variable fancy is null, meaning that the Resolve method wasn’t able to find any registered implementation of the IFancyService.

image

To fix this, we need to tell Prism to register as a Dependency Resolver, which will mean that the Xamarin.Forms DependencyService will use the same DI container when resolving instances. This is done by using the overloaded constructor of the PrismApplication class. In the App.xaml.cs of our application, change the call to base to include a second parameter which is set to true.

public App(IPlatformInitializer initializer) : base(initializer, setFormsDependencyResolver: true)
{
}

Running the application now, the fancy variable is set to an instance of MyFancyService.

image

And there you have it – an easy way to use the Xamarin.Forms DependencyService in conjunction with Prism and DryIoC.
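
Since the original motivation was platform code such as renderers, here's a hypothetical UWP renderer doing the same resolve. The renderer itself isn't part of the sample above (the FirstPrismApp namespaces and the idea of overwriting a Label's text are purely for illustration), but the DependencyService.Resolve call is identical:

using FirstPrismApp; // wherever IFancyService lives in your solution
using FirstPrismApp.UWP.Renderers;
using Xamarin.Forms;
using Xamarin.Forms.Platform.UWP;

[assembly: ExportRenderer(typeof(Label), typeof(FancyLabelRenderer))]

namespace FirstPrismApp.UWP.Renderers
{
    public class FancyLabelRenderer : LabelRenderer
    {
        protected override void OnElementChanged(ElementChangedEventArgs<Label> e)
        {
            base.OnElementChanged(e);

            // Resolves via the Prism/DryIoC container because Prism has been
            // registered as the Xamarin.Forms dependency resolver.
            var fancy = DependencyService.Resolve<IFancyService>();
            if (e.NewElement != null && fancy != null)
            {
                e.NewElement.Text = fancy.WelcomeText;
            }
        }
    }
}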

Deploying Uno Wasm using Blob Storage

Earlier today I posted about deploying Uno on Wasm to an Azure App Service (to which the Uno team replied on Twitter with an updated web.config). I was thinking a bit more about how I would deploy a real Uno Wasm app and I realised that of course, there's no server-side logic, so I could just host it from Blob Storage.

I ran up the Azure Storage Explorer and created a new container within an existing Blob Storage account

image

As I’m going to be serving up content directly from Blob Storage I need to make sure that I enable public read access on the individual blobs (the default is that there’s no public read access on the container or blobs). Right-click on the container and select Set Public Access Level

image

Set access to Public read access for blobs only

image

Now you can simply copy the Uno Wasm application into the container by dragging the files from File Explorer into the right pane of the Azure Storage Explorer. I found that the best folder to copy is the folder that Visual Studio uses to deploy files when you do a Publish. For my project this is found at FirstUnoProject.Wasm\obj\Release\netstandard2.0\PubTmp\Out\FirstUnoProject.Wasm\dist\bin\Release\netstandard2.0\dist.

And that’s it – now I can run my Uno Wasm application by clicking the Copy URL from the tool bar and then switching to my browser and launching the corresponding URL.

image

The Uno Wasm application runs without any further configuration required.

image