Exploring Artificial Intelligence (AI) Services from Cloud Vendors

Currently the market is buzzing with talk of artificial intelligence (AI) and machine learning (ML), along with the power of the cloud and edge computing. Talking to people across different industries, there’s genuine excitement about the change that is about to take place. However, what businesses struggle with is how to take the leap and get started. Most organisations, whilst they may have been capturing and managing data for quite some time, don’t have the luxury of a data scientist on staff. Luckily there is a range of services from different cloud providers that are, in effect, commoditising artificial intelligence.

In a previous post I covered working with Microsoft’s Custom Vision service to do number plate recognition, which is one of many services offered by Microsoft under the banner of their Cognitive Services. In fact, Custom Vision is one of a number of vision-related services:

– Computer Vision

– Video Indexer

– Custom Vision

– Face

– Content Moderator

But Microsoft isn’t alone in providing vision and image services. Google have a set of AI services, which include:

– Cloud Vision API

– Cloud Video Intelligence

And of course AWS have similar services, such as their Amazon Rekognition service.

Each of these services has its strengths and weaknesses, but the question really starts with the problem that needs to be solved. Does it make sense for a business to investigate these services? Are there opportunities to make use of them?

MVX=1F: TipCalc with Xamarin.Forms (MVX+1 days of MvvmCross)

In this post I’m going to extend TipCalc to include Xamarin.Forms targets, similar to what we did in the post MVX=0F: A first MvvmCross Application (MVX+1 days of MvvmCross).

Adding Xamarin.Forms

Note: The following instructions can be applied to any project by simply replacing TipCalc with the name of your project

  1. Add a New Project based on the Mobile App (Xamarin.Forms) template
  2. In the New Cross Platform App dialog, select Blank App, check the Platforms you want, select .NET Standard and click OK
  3. Upgrade the Xamarin.Forms NuGet to latest for all four Forms projects
  4. Add MvvmCross NuGet Package to all Forms projects (Forms, Forms.iOS, Forms.Android and Forms.UWP)
  5. Add MvvmCross.Forms NuGet Package to all Forms projects (Forms, Forms.iOS, Forms.Android and Forms.UWP)

Update the TipCalc.Forms project

  1. Remove all code in App class except for constructor with a call to InitializeComponent
  2. Create Views folder
  3. Move MainPage into Views folder and rename to FirstView
  4. Adjust FirstView.xaml and FirstView.xaml.cs to change class name to FirstView and to make it inherit from MvxContentPage

Update the TipCalc.Forms.Uwp project

  1. Update Microsoft.NETCore.UniversalWindowsPlatform
  2. Add reference to TipCalc.Core
  3. Change MainPage to inherit from MvxFormsWindowsPage
  4. Remove all code other than the InitializeComponent method call in the constructor of MainPage
  5. Add ProxyMvxApplication
    public abstract class ProxyMvxApplication : MvxWindowsApplication<MvxFormsWindowsSetup<Core.App, TipCalc.Forms.App>, Core.App, TipCalc.Forms.App, MainPage>
  6. Change App.xaml and App.xaml.cs to inherit from ProxyMvxApplication
  7. Remove all code other than the constructor, with a single call to InitializeComponent, in App.xaml.cs
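Putting steps 5–7 together, the resulting App.xaml.cs ends up looking something like the following sketch (assuming the ProxyMvxApplication defined in step 5; exact project names may differ):

```csharp
// App.xaml.cs after steps 5-7: all MvvmCross startup is handled by the
// ProxyMvxApplication base class, so only the constructor remains.
sealed partial class App : ProxyMvxApplication
{
    public App()
    {
        InitializeComponent();
    }
}
```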

Update the TipCalc.Forms.Android project

Note (1): If you run into the following error, you may need to rename your project. In this case we renamed it to Forms.Droid (as well as the folder the project resides in):
1>C:\Program Files (x86)\Microsoft Visual Studio\Preview\Enterprise\MSBuild\Xamarin\Android\Xamarin.Android.Common.targets(2088,3): error MSB4018: System.IO.PathTooLongException: The specified path, file name, or both are too long. The fully qualified file name must be less than 260 characters, and the directory name must be less than 248 characters.

Note (2): If you’re using the preview build of Visual Studio, you may run into an error: “error XA4210: You need to add a reference to Mono.Android.Export.dll when you use ExportAttribute or ExportFieldAttribute.” If you do, you just need to Add Reference to Mono.Android.Export (search in the Add Reference dialog).

  1. Change Forms.Android project to target latest Android SDK
  2. Upgrade Xamarin.Android.Support.* to latest for the Forms.Android project
  3. Add reference to TipCalc.Core
  4. Change MainActivity inheritance, remove code except for a constructor:
    public class MainActivity : MvxFormsAppCompatActivity<MvxFormsAndroidSetup<Core.App, App>, Core.App, App>
    {
    }

Update the TipCalc.Forms.iOS project

  1. Add reference to TipCalc.Core
  2. Change the inheritance of AppDelegate:
    public partial class AppDelegate : MvxFormsApplicationDelegate<MvxFormsIosSetup<Core.App, TipCalc.Forms.App>, Core.App, TipCalc.Forms.App>

Adding TipCalc Xamarin.Forms Layout

Update the TipCalc.Forms project by updating the FirstView.xaml with the following XAML.

<?xml version="1.0" encoding="utf-8" ?>
<views:MvxContentPage
    xmlns="http://xamarin.com/schemas/2014/forms"
    xmlns:x="http://schemas.microsoft.com/winfx/2009/xaml"
    xmlns:views="clr-namespace:MvvmCross.Forms.Views;assembly=MvvmCross.Forms"
    x:Class="TipCalc.Forms.Views.FirstView">

  <StackLayout>
    <Label Text="SubTotal"/>
    <Editor Text="{Binding SubTotal, Mode=TwoWay}"/>
    <Label Text="How generous?"/>
    <Slider Value="{Binding Generosity, Mode=TwoWay}"
            Minimum="0"
            Maximum="100"/>
    <Label Text="Tip:"/>
    <Label Text="{Binding Tip}"/>
    <Label Text="Total:"/>
    <Label Text="{Binding Total}"/>
  </StackLayout>

</views:MvxContentPage>
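
For reference, the bindings in this layout (SubTotal, Generosity, Tip and Total) assume a ViewModel shaped roughly like the following. This is a sketch only – the actual ViewModel lives in TipCalc.Core, and the class name and recalculation logic here are assumptions:

```csharp
// Hypothetical sketch of the ViewModel behind FirstView; the property
// names match the bindings in the XAML above.
public class TipViewModel : MvxViewModel
{
    private double subTotal;
    public double SubTotal
    {
        get => subTotal;
        set { SetProperty(ref subTotal, value); Recalculate(); }
    }

    private double generosity = 10;
    public double Generosity
    {
        get => generosity;
        set { SetProperty(ref generosity, value); Recalculate(); }
    }

    private double tip;
    public double Tip
    {
        get => tip;
        private set => SetProperty(ref tip, value);
    }

    private double total;
    public double Total
    {
        get => total;
        private set => SetProperty(ref total, value);
    }

    // Recompute the dependent properties whenever an input changes.
    private void Recalculate()
    {
        Tip = SubTotal * Generosity / 100;
        Total = SubTotal + Tip;
    }
}
```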

That’s it – there’s nothing more to do in order to add Xamarin.Forms targets (iOS, Android and UWP) to the TipCalc.

Building a Number Plate Identification Service in 5 Minutes with Microsoft’s Custom Vision Service

I woke up this morning pondering how hard it would be to use Microsoft’s Custom Vision service to build a service that could extract number plates from an image. In this post I’m just going to cover the object detection phase of this work, which identifies where on an image a number plate exists. A subsequent phase would be to use the OCR service that’s part of the Computer Vision services to extract the text of the number plate itself.

I decided to see how far I could get using the out of the box offering at CustomVision.AI. After signing in (you’ll need an account linked to Azure) you’re presented with the option to create a new project:

Clicking New Project, I needed to fill in some basic information about my project and the type of project I’m building. In this case we’re going to use object detection in order to identify number plates within an image.

After hitting Create Project I’m dropped into the project dashboard which essentially covers three areas: Training, Performance and Predictions. Rather than this being a strict sequence, the idea is that you’ll go back and forth between these areas gradually refining and improving the model. As we don’t already have a model, we need to start by adding some images to train with.

As I don’t have an archive of photos with number plates, I decided to grab a selection of images from Google. You’ll notice that I included “with car” in my image search – we’ll talk about why this is important in a minute.

I downloaded around 30 of these images (you’ll need at least 15 to train the model, but the more images the better). Clicking on Add images gives me the ability to upload the images I downloaded from Google image search.

The images I uploaded appeared as “untagged” – essentially I hadn’t yet identified what we’re looking for in each photo. To proceed, I needed to go through each image and select and tag any areas of interest.

Rather than selecting each individual image, if you hit Select All and then click on the first image, you can step through each image in turn.

If you hover over the image, you’ll see some suggested areas appear with a dotted outline.

You can either click a suggested area, or simply click-and-drag to define your own area.

In my first attempt I assumed that I should be marking just the area that includes the text, because the registration number is what I want as the eventual output. However, this didn’t seem to give very accurate results. What the service is great at is identifying objects, and rather than defining areas that show a number plate, I was just getting it to recognise text, any text. In my second attempt I defined regions that bounded the whole number plate, which gave me much better results.

After going through all of the images and tagging them, all the images should appear as tagged and you should see a summary of the number of images for each tag.

Now to hit the Train button (the green button with cogs in the top right corner). Once training is done you can see some key metrics on the quality of this iteration of your model. In general terms, the higher the percentage the better; and the more training images you provide and tag, the better the model will get.

After you’ve run Train the first time, you actually have a model that you can use. From the Predictions tab you can see information about the endpoint that’s available for your app to call in order to invoke the service.

What’s really great is that you can click on Quick Test and then supply an image to see how your service performs.

In this case the service identified one area that it thinks is a number plate with a probability of 92.4%. The next step would be to pass the number plate through an OCR service in order to extract the registration number.
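
Calling the service from code amounts to POSTing an image to the prediction endpoint shown on the Predictions tab, passing your prediction key in a header. The sketch below uses placeholders for the endpoint URL and key – copy the real values from your own project’s Predictions tab:

```csharp
using System.IO;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Threading.Tasks;

// Minimal sketch of calling the Custom Vision prediction endpoint.
// Endpoint and PredictionKey are placeholders, not real values.
public static class NumberPlateDetector
{
    private const string Endpoint = "<prediction-url-from-predictions-tab>";
    private const string PredictionKey = "<your-prediction-key>";

    public static async Task<string> DetectAsync(string imagePath)
    {
        using (var client = new HttpClient())
        {
            client.DefaultRequestHeaders.Add("Prediction-Key", PredictionKey);

            var content = new ByteArrayContent(File.ReadAllBytes(imagePath));
            content.Headers.ContentType = new MediaTypeHeaderValue("application/octet-stream");

            // The response JSON contains a list of predictions, each with
            // a tag, a probability and the bounding box of the region.
            var response = await client.PostAsync(Endpoint, content);
            return await response.Content.ReadAsStringAsync();
        }
    }
}
```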

All of this was set up, trained and made available as a service in under 5 minutes (excluding the time spent re-tagging the training images to include the whole number plate, rather than just the text).

Using IntelliTest in a .NET Standard Library

Today I was reviewing some of the test cases we have for BuildIt.General – these are a bit dated now, as they were created with MSTest using IntelliTest. Recently we’ve updated the BuildIt libraries to target .NET Standard and to support multi-targeting. Unfortunately, whilst the test cases continue to run, I was unable to run IntelliTest in order to add test cases for some of the new features we’ve added recently. As this Stack Overflow question confirms, IntelliTest is only supported on .NET Framework class libraries. What’s worse is that IntelliTest is only supported on old style csproj projects.

In order to get IntelliTest to work for BuildIt.General I created a new .NET Framework class library, BuildIt.General.FullFramework.csproj, which I placed into the same folder as BuildIt.General.Tests (putting it into the BuildIt.General folder causes all manner of weirdness due to different csproj formats not playing nicely together).

For each file in BuildIt.General that I wanted to use IntelliTest to generate test cases I added the file as a link to the BuildIt.General.FullFramework project. IntelliTest can be run by right-clicking within the method to be tested, selecting IntelliTest –> Run IntelliTest.

The IntelliTest output is shown in the IntelliTest Exploration Results window, from which each test can be saved.

More information on IntelliTest can be found on the docs website.

Calculator App in 100 Lines

A bit of a challenge kicked off over the last week, which I think started after Don Syme tweeted about a Calculator sample app that had been done in 95 lines of code.

The GitHub repo is at https://github.com/nosami/Elmish.Calculator and it was based on a Xamarin sample: https://github.com/xamarin/mobile-samples/tree/master/LivePlayer/BasicCalculator

As there was a bit of discussion as to the pros and cons of different mobile platforms, some of the community set out to see how many lines of code it would take them to build the same sample. Thomas tweeted about doing it in Flutter, which originally came in at 93 lines of code (I question the choice of colours and definitely prefer the colours in the original sample).

Update: In my original post I didn’t link to the correct info. Thomas’ calculator came in at 90 lines of code and sticks with the original colour scheme.

Github: https://github.com/escamoteur/flutter_calculator

An alternative colour scheme was also proposed, which came in at 93 lines of code.

Github repo: https://github.com/fmatosqg/flutter_calculator

I tweeted in jest that a XAML app wouldn’t get out of bed for less than 100 lines of code, and sure enough, when I did a very quick attempt it came in at approximately 70 lines of XAML and the same again for codebehind, so ~140 lines all up. I think with a bit of optimising I could get it down to, say, 120 lines, but the reality is that XAML is verbose and there is a cost associated with splitting the code between XAML and codebehind. Of course, if I’d actually applied an MVVM pattern it’d probably jump up to, say, 150 lines of code.

One way I could optimise this to get a result closer to either the Elmish or Flutter examples would be to do all my layout in code. I mentioned in my post on using declarative C# that following Vincent’s example of using extension methods you can easily do your layout in C# and avoid any of the overhead of splitting out XAML, C# and ViewModel code.

The question I ask is what we lose by defining layout in code. As Vincent points out in his example, the answer can be very little, but my concern is that whilst defining layout in code works for seasoned developers, how will it go with more junior developers?

For those who have been building apps for long enough, you’ll remember how we despised building Windows Forms applications because all the layout was done in code – sure, there was a design experience, but before long you were writing a lot of logic to manipulate the UI, which resulted in malformed code that was a pain to debug and maintain. Is this where we’re going to end up with all these code-first approaches to defining layout? (And this question isn’t about which platform is better; it’s about declarative versus coded UI.)

By way of example, stop and take a look at the code Thomas put together in his Flutter example – very efficient but imagine it on a much more complex application. You can quite easily see how it’s going to become hard to understand/follow for a developer who has to maintain it.

Redux and the State of My XAML Application (part 3)

This is the final part in this series looking at using Redux in a XAML application. Previous parts:

– Part 1 – https://nicksnettravels.builttoroam.com/post/2018/05/15/Redux-and-the-State-of-My-XAML-Application-(part-1).aspx

– Part 2 – https://nicksnettravels.builttoroam.com/post/2018/05/16/Redux-and-the-State-of-My-XAML-Application-(part-2).aspx

At the end of Part 2 I left you wondering what magic I’m using to prevent the entire list being refreshed as items were being added/removed from the Family. In this post we’ll look at what’s going on behind the scenes.

Let’s start with the bolded text from the XAML in Part 2 – I’ve included it here so you don’t need to go back and look:

<Page.Resources>
    <converters:ImmutableDataConverter x:Key="ImmutableDataConverter"/>
</Page.Resources>
<Grid DataContext="{Binding Converter={StaticResource ImmutableDataConverter}}">

So the secret ingredient is the ImmutableDataConverter which takes the current DataContext (which will be an instance of the MainViewModel) and returns an object that will become the DataContext for the rest of the page. The question is, what is this object and what does it do?

If you recall, the issue we saw when we didn’t use the ImmutableDataConverter is that when the Data property on the MainViewModel changes (ie raises the PropertyChanged event), every data bound element on the page is refreshed. What we want is for only the elements on the page where data has changed to be updated. To do this, we need to step through the Data object and only raise PropertyChanged for the parts that have actually changed. Based on this description, the ImmutableDataConverter has to have the smarts to a) prevent PropertyChanged causing the entire UI to refresh and b) iterate over every object in the Data object graph and, where appropriate, raise the PropertyChanged event.

Behind the scenes the ImmutableDataConverter is super simple – all it does is create an instance of the ImmutableDataWrapper<T> class. It uses a small piece of reflection to determine what the generic parameter should be based on the implementation of the IHasImmutableData interface on the MainViewModel.

The ImmutableDataWrapper<T> exposes a single property Data, of type T (and it’s no coincidence that this is the same as the IHasImmutableData<T> interface which also exposes a property Data, of type T – thereby making it simple to add the ImmutableDataConverter without changing any other lines of XAML). It also listens to the PropertyChanged event on the source entity, which in this case is the MainViewModel. Now instead of the PropertyChanged event on the MainViewModel being picked up by the MainPage, it is instead picked up by the ImmutableDataWrapper and used to invoke the ChangeData method where all the work happens.

The ChangeData method is used to compare the old Data object with the new Data object (ie the value that is set on the MainViewModel when the PropertyChanged event is triggered). It does this by using reflection to step through each property on the Data object:

– Properties that are of value type, or string, are updated on the old Data object if the value on the new Data object is different – the PropertyChanged event is raised for just that property.

– For properties that return a non-value type (or string) reflection is used to interrogate the nested entity and work out which properties need to be updated.

– For ObservableCollections some basic list change detection is used to trigger add/remove events on the collection on the old Data object – we can probably improve the logic here to be more efficient but for the moment it does the job.
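
In rough terms, the heart of ChangeData looks something like the following sketch. This is illustrative only: UpdateNestedEntity stands in for the recursive and collection handling described above, and the real implementation routes all of this through TypeHelper rather than raw reflection:

```csharp
// Illustrative sketch of the property-by-property diff inside ChangeData.
private void ChangeData(T oldData, T newData)
{
    foreach (var prop in typeof(T).GetProperties())
    {
        var oldValue = prop.GetValue(oldData);
        var newValue = prop.GetValue(newData);

        if (prop.PropertyType.IsValueType || prop.PropertyType == typeof(string))
        {
            if (!Equals(oldValue, newValue))
            {
                // Update the old (still data bound) entity and raise
                // PropertyChanged for just this one property.
                prop.SetValue(oldData, newValue);
                (oldData as IRaisePropertyChanged)?.RaisePropertyChanged(prop.Name);
            }
        }
        else
        {
            // Nested entities are interrogated recursively; ObservableCollections
            // get basic add/remove detection rather than wholesale replacement.
            UpdateNestedEntity(oldValue, newValue);
        }
    }
}
```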

As you can imagine there’s quite a bit of reflection that has to go on each time the Data object changes. Assuming that the Data object could change quite often, we don’t want to be doing that reflection every time, which is where the TypeHelper class comes in. The TypeHelper class has smarts for assisting with both checking whether an entity has changed, and updating entities. Based on the type of entity, it caches the methods that are used for comparing and updating entities. You can check out the TypeHelper class if you want to see more of the details.

So lastly, let’s look at the requirements for the ViewModel and your Data entity:

– ViewModel needs to implement IHasImmutableData

– Data entity (and any nested entities) needs to implement INotifyPropertyChanged but also IRaisePropertyChanged – this is required so that the ChangeData method can raise the PropertyChanged on behalf of a data entity

– Properties on the Data entity (and any nested entities) should not raise PropertyChanged – otherwise there will be multiple PropertyChanged events raised

– Any collections within the Data entity hierarchy should use ObservableCollection<T>
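
Putting those requirements together, a conforming Data entity looks something like this sketch (the interface is shown inline here for completeness; in practice it comes from BuildIt.General):

```csharp
using System.Collections.ObjectModel;
using System.ComponentModel;

// Allows ChangeData to raise PropertyChanged on the entity's behalf.
public interface IRaisePropertyChanged : INotifyPropertyChanged
{
    void RaisePropertyChanged(string propertyName);
}

// Sketch of a Data entity that satisfies the requirements above.
public class Person : INotifyPropertyChanged, IRaisePropertyChanged
{
    // Plain auto-properties: the setters deliberately do NOT raise
    // PropertyChanged, because ChangeData raises it for us.
    public string Name { get; set; }

    // Collections use ObservableCollection<T> so adds/removes can be detected.
    public ObservableCollection<Person> Family { get; set; }
        = new ObservableCollection<Person>();

    public event PropertyChangedEventHandler PropertyChanged;

    public void RaisePropertyChanged(string propertyName)
        => PropertyChanged?.Invoke(this, new PropertyChangedEventArgs(propertyName));
}
```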

A couple of final pointers:

– Currently this is only available for UWP – I need to implement the appropriate converter for Xamarin.Forms (and I guess WPF if anyone cares?)

– Currently this is not thread safe – make sure you update the Data property on the ViewModel on the UI thread.

Redux and the State of My XAML Application (part 2)

In part 1 I talked a bit about the challenge that XAML-based applications face when trying to use a pattern such as Redux. In this post I’m going to jump in and use Redux.NET to demonstrate the issue, and then show how we can make a small adaptation to the XAML to fix it.

We’ll start with the basics – our application state is a Person entity, with a Name property and a Family property. The Family property is an ObservableCollection of Person:

public class Person : NotifyBase
{
    public string Name { get; set; }

    public ObservableCollection<Person> Family { get; set; } = new ObservableCollection<Person>();
}

In this case NotifyBase comes from the BuildIt.General library and implements INotifyPropertyChanged. It also implements IRaisePropertyChanged which exposes a RaisePropertyChanged method which can be called in order to raise a PropertyChanged event on the object – we’ll come to why this is important later.
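
For clarity, IRaisePropertyChanged amounts to little more than the following shape (a sketch, not the exact BuildIt.General source):

```csharp
using System.ComponentModel;

// Exposes the ability to raise PropertyChanged from outside the entity,
// which is what makes the external change-detection in later posts possible.
public interface IRaisePropertyChanged : INotifyPropertyChanged
{
    void RaisePropertyChanged(string propertyName);
}
```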

Implementing the Redux pattern starts with the Store, and in this case I’m just going to expose this as a static property off the App class. In reality you’d probably register this with your IoC container and have it injected into your ViewModel but to keep things simple I’m just creating it as a static property.

sealed partial class App : Application
{
    public static IStore<Person> PersonStore { get; private set; } = new Store<Person>(reducer: PersonReducer.Execute, initialState: new Person { Name = "Fred" });

The Store of course requires a Reducer, which in this case will be the PersonReducer class

public static class PersonReducer
{
    private static Random rnd = new Random();

    public static Person Execute(Person state, IAction action)
    {
        if (action is AddAction addAction)
        {
            var newPerson = new Person { Name = addAction.NameOfNewFamilyMember };

            return new Person
            {
                Name = state.Name,
                Family = state.Family.DeepClone().AddItem(newPerson)
            };
        }

        if (action is RemoveAction)
        {
            // Guard against an empty Family (avoids a divide-by-zero below)
            if (state.Family.Count == 0) return state;

            var idxToRemove = rnd.Next(0, 1000) % state.Family.Count;
            return new Person
            {
                Name = state.Name,
                Family = state.Family.DeepClone().RemoveItemAt(idxToRemove)
            };
        }

        return state;
    }
}

As you can see from the code, the PersonReducer handles two actions: AddAction and RemoveAction. We’ll create these as classes:

public class AddAction : IAction
{
    public string NameOfNewFamilyMember { get; set; }
}

public class RemoveAction : IAction { }

The other thing to note about the PersonReducer is that both actions return entirely new Person entities. It also makes use of a couple of helper methods:

public static class ReduxHelpers
{
    public static ObservableCollection<T> DeepClone<T>(this ObservableCollection<T> source) where T : new()
    {
        var collection = new ObservableCollection<T>();
        var helper = TypeHelper.RetrieveHelperForType(typeof(T));
        foreach (var item in source)
        {
            var newItem = new T();
            helper.DeepUpdater(newItem, item);
            collection.Add(newItem);
        }
        return collection;
    }

    public static ObservableCollection<T> AddItem<T>(this ObservableCollection<T> source, T itemToAdd)
    {
        source.Add(itemToAdd);
        return source;
    }

    public static ObservableCollection<T> RemoveItemAt<T>(this ObservableCollection<T> source, int index)
    {
        if (index < 0 || index >= source.Count) return source;
        source.RemoveAt(index);
        return source;
    }
}

Note: These extension methods will be added to BuildIt.General in the coming days and they rely on other types/methods (such as the TypeHelper class) that are already part of the BuildIt.General library.

With the Store and Reducer defined, we can define our MainViewModel

public class MainViewModel : NotifyBase, IHasImmutableData<Person>
{
    private Person data;
    public Person Data
    {
        get => data;
        set => SetProperty(ref data, value);
    }

    public MainViewModel()
    {
        App.PersonStore.Subscribe(newData => Data = newData);
    }
}

As this code shows, when the state in the Store changes we just update the Data property on the MainViewModel; this will in turn raise the PropertyChanged event, causing the UI to be re-bound. Let’s take a look at the XAML for MainPage.xaml:

<Page
    x:Class="reduxsample.MainPage"
    xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
    xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
    xmlns:d="http://schemas.microsoft.com/expression/blend/2008"
    xmlns:mc="http://schemas.openxmlformats.org/markup-compatibility/2006"
    xmlns:converters="..."
    mc:Ignorable="d"
    Background="{ThemeResource ApplicationPageBackgroundThemeBrush}">
    <Page.Resources>
        <converters:ImmutableDataConverter x:Key="ImmutableDataConverter"/>
    </Page.Resources>

    <Grid DataContext="{Binding Converter={StaticResource ImmutableDataConverter}}">
        <Grid DataContext="{Binding Data}">
            <Grid.RowDefinitions>
                <RowDefinition Height="Auto"/>
                <RowDefinition />
            </Grid.RowDefinitions>
            <StackPanel Grid.Row="0">
                <TextBlock Text="Name"/>
                <TextBlock Text="{Binding Name}"/>
                <TextBlock Text="Family member to add"/>
                <TextBox x:Name="NewFamilyMemberName"/>
                <Button Content="Add" Click="AddFamilyClick"/>
                <Button Content="Remove" Click="RemoveFamilyClick"/>
            </StackPanel>
            <ListView Grid.Row="1" ItemsSource="{Binding Family}">
                <ListView.ItemTemplate>
                    <DataTemplate>
                        <Border
                            BorderBrush="Azure"
                            BorderThickness="0,0,0,1">
                            <TextBlock Text="{Binding Name}"/>
                        </Border>
                    </DataTemplate>
                </ListView.ItemTemplate>
            </ListView>
        </Grid>
    </Grid>
</Page>

This is all stock XAML with the exception of the bold text – the ImmutableDataConverter resource and the outer Grid’s DataContext binding (which we’ll come to in a minute). First, let’s add the methods for adding and removing family members:

public sealed partial class MainPage : Page
{
    public MainPage()
    {
        this.InitializeComponent();

        DataContext = new MainViewModel();
    }

    private void AddFamilyClick(object sender, RoutedEventArgs e)
    {
        App.PersonStore.Dispatch(new AddAction { NameOfNewFamilyMember = NewFamilyMemberName.Text });
    }

    private void RemoveFamilyClick(object sender, RoutedEventArgs e)
    {
        App.PersonStore.Dispatch(new RemoveAction());
    }
}

Ok, now let’s go back to the XAML and for a minute imagine that the bold text isn’t there. When we run the app and click the Add or Remove button, the Data property on the MainViewModel gets repopulated – the implication for the UI is that the ListView, which binds to the Family property, will refresh entirely. This causes a flicker, and drops any selected state and/or scroll position on the list – generally a really bad user experience.

With the bold text left in the XAML, when we run the app and click the Add or Remove button, only the properties that have changed are refreshed – there is no flicker on the ListView and any selection/scroll position is maintained. So what’s going on? Can you work out why IRaisePropertyChanged is important? More to come in my next post.

Redux and the State of My XAML Application (part 1)

Recently there has been quite a bit of noise about new tools and technologies (eg Flutter) and how they’re going to reshape how we build mobile applications. Various developers have blown a lot of smoke into the ecosystem as they’ve thrown in the towel with one technology and jumped headlong into the unknown. In this post I wanted to explore one of the key architectural differences that has developers up in arms about.

Let’s firstly step back a few, or more, years to the dark ages of building Windows Forms (WinForms) applications, where everything was done via imperative code. Visual Studio provided some basic separation of designer-generated code vs developer code, but other than that everything was done manually. Whilst WinForms did have a form of data binding, it was so hard to get it to work well that most apps ended up resorting to writing logic to set properties on the UI elements directly.

A little further along the timeline we see the introduction of XAML (WPF, Silverlight etc), where data binding was a first class citizen. Enter the age of the MVVM pattern, which was widely adopted as it offered a great separation of concerns between the view (ie the XAML) and the logic of the app. Personally, I’ve never seen MVVM as much more than just the use of data binding. Recently I’ve heard all sorts of reasons why developers thought MVVM was being used, such as allowing the reuse of ViewModels and/or Models across different apps – I’m not sure where this concept came from, but the reality is that it never happens. I think MVVM is still just about making it easier to test the logic of the application without having to spin up the UI.

Data binding works well, allowing the UI to be declaratively defined (either in XAML or code), but it doesn’t prescribe how an application should be architected behind the scenes. There are some frameworks, such as MvvmCross, that help with a lot of the boilerplate (app start up, DI framework, navigation etc), but that’s where the guidance ends. For simple applications this isn’t an issue, and for a lot of applications complexity stays quite low, which means that keeping business logic, and state, on a page by page basis isn’t a problem. However, over time applications grow and complexity increases. Facebook identified this as the complexity of their website grew and they needed a more effective way to manage state. At this point I’m going to skip ahead to Redux (there’s more background at redux.js.org), which aims to solve the issue of state management within an application using a mono-directional flow to ensure consistency of state. I’m also not going to proclaim to be a guru on Redux; I just want to point to how it supports a new wave of React-style development. The essential concept is that app state is immutable and that any change results in a new state.

If you take a look at the way that Flutter builds their UI, the layout is made up of a sequence of widgets generated each time the build method is invoked. As the state of the application changes, a call to setState will trigger the build method to be run, yielding a completely new set of widgets that will be painted in the next pass to the screen. It’s pretty obvious that if the app state is being regenerated on each change (ie Redux pattern), this plays nicely with the setState/build flow that’s core to Flutter.

So, the question is – if we want to take advantage of Redux, do we have to abandon ship and start building Flutter apps? Well, if you want to give up on all the years of experience you have, the mature ecosystem, and all the platforms that Flutter doesn’t support, sure, why not – but I do feel that, in the absence of other reasons, this is a bit like throwing the baby out with the bathwater.

To rephrase the question – in a XAML application, how do we take advantage of Redux? Well, the good news is that half the work is already done – Redux.NET. However, I would caution you not to follow the rather simplistic examples given on the project website, which essentially do away with data binding – if you’re going to do that, just go build your app using a different technology. Instead, we need to think a bit more about how we can marry the concept of immutable state with data binding.

The naïve approach is to expose the state of the application as a property and then, every time the state changes, update the property with the new value. For example, the following ViewModel exposes a Person object that represents the current state of this simple application:

public class MainViewModel : INotifyPropertyChanged
{
    private Person data;

    public Person Data
    {
        get => data;
        set => SetProperty(ref data, value);
    }

    public event PropertyChangedEventHandler PropertyChanged;

    // Raise PropertyChanged only if the value actually changed
    private void SetProperty<T>(ref T field, T value, [CallerMemberName] string name = null)
    {
        if (Equals(field, value)) return;
        field = value;
        PropertyChanged?.Invoke(this, new PropertyChangedEventArgs(name));
    }
}

This approach will work and as the Data property is updated with new Person entities, the data bound UI will update accordingly. However, if the Person object is moderately complex, with nested data bound properties, when you update the Data property there will be some nasty UI artefacts – this is because triggering PropertyChanged on the Data property will force every data binding that starts with Data to be re-evaluated. Imagine that the Person entity has a property Family, which is a list of Person entities, and that property is data bound to a ListView. If the Data property changes, the entire ListView will be repopulated, losing any selection or scroll position, not to mention other visual artefacts such as a flicker as it redraws. Clearly this isn’t what we want to happen.
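To picture this, here’s a hypothetical shape for the Person entity (my sketch, not code from the original app) with the nested Family property described above:

```csharp
using System.Collections.Generic;

// Hypothetical immutable Person state with a nested Family collection.
// If the ViewModel's Data property is replaced wholesale, every binding
// path starting with Data (including Data.Family) is re-evaluated, so a
// ListView bound to Data.Family is rebuilt even when Family is unchanged.
public class Person
{
    public Person(string name, IReadOnlyList<Person> family)
    {
        Name = name;
        Family = family ?? new List<Person>();
    }

    public string Name { get; }
    public IReadOnlyList<Person> Family { get; }
}
```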

This leads us to the question of how change is managed within our application. Let’s go back over our history lesson:

– With WinForms we were required to do everything. For every change within our application we needed to evaluate whether something on the UI had to change, and then explicitly set the appropriate property on the control.

– With XAML based applications we updated properties that were data bound. We still needed to work out what changed, because we didn’t want to raise the PropertyChanged event more than was absolutely necessary.

– With React style applications we no longer need to track what’s changed, we just use the latest state to build the UI.

The latter sounds great, except the reality is that there is still some change tracking going on; we just don’t need to code for it. Let’s look at an example – say we have a list of items on the screen and the user has scrolled half way down the list. If the UI were simply rebuilt, that scroll position would be lost. The same applies to text entered into a text field, and so on.

Perhaps what we need is to abstract away the work of determining what’s changed and raise PropertyChanged only for the properties that have actually changed – if we can do that, then the Data property can be updated as often as needed without worrying about weird UI artefacts.
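One possible shape for that abstraction (my own sketch, not an existing library API) is a long-lived bindable wrapper that stays alive across state changes and copies across only the values that differ:

```csharp
using System.ComponentModel;

// A long-lived bindable wrapper: ApplyState copies values from each new
// immutable Person state, raising PropertyChanged only for properties
// whose values actually changed.
public class PersonViewModel : INotifyPropertyChanged
{
    private string name;

    public string Name
    {
        get => name;
        private set
        {
            if (Equals(name, value)) return;  // unchanged: no event raised
            name = value;
            PropertyChanged?.Invoke(this, new PropertyChangedEventArgs(nameof(Name)));
        }
    }

    public event PropertyChangedEventHandler PropertyChanged;

    // Called each time a new immutable state is produced
    public void ApplyState(Person newState)
    {
        Name = newState.Name;
        // ...repeat per property; diff Family item-by-item so a bound
        // ListView keeps its scroll position and selection
    }
}
```

Because the wrapper instance the UI is bound to never changes, only the bindings for genuinely changed properties re-evaluate, avoiding the wholesale rebinding described above.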

Part 2 will show an example of how changing state can be handled in a XAML application, along with using Redux to keep our state immutable.

BuildIt Libraries using Continuous Delivery


Following my previous post I’ve been wondering how hard it would be to set up continuous delivery for the BuildIt libraries (a small collection of libraries that help with things like state management, plus some nice additions to Xamarin Forms). We already have a build process and releases configured for each library, so I figured it can’t be that difficult. I’ve been tracking what the team over at ReactiveUI are doing (see https://reactiveui.net/blog/2018/05/moving-towards-vsts-and-continuous-deployment) and, as I mentioned previously, I think their model can work well, assuming there are enough automated tests to validate quality. In the case of the BuildIt libraries, we have some tests, but not enough that I would consider the libraries fully tested, nor enough that I would be comfortable relying on tests alone to ensure quality.

With this in mind, I’ve made some changes to the process for BuildIt:

– We now have two main branches:

master – This tracks what has been released to NuGet as a stable release. All changes have to be PR’d into this branch and PRs can only be created by designated individuals. PRs also have to be approved and the VSTS build has to pass

develop – This is the default branch, and tracks what’s released to NuGet as a beta release (ie x.x.x-beta). All changes have to be PR’d into this branch and PRs can be created by anyone. PRs have to be approved and the VSTS build has to pass

– The VSTS build is set up for continuous integration on both the master and develop branches

– Releases are set up in VSTS, pushing to NuGet (and MyGet for alpha builds):

Alpha – Build artefacts are packaged and deployed to MyGet as a pre-release. This is set up as continuous delivery from all branches

Beta – Build artefacts are packaged and deployed to NuGet as a beta release. This is set up as continuous delivery, but has a condition that limits it to builds from the develop branch

Stable – Build artefacts are packaged and deployed to NuGet as a stable release. This is set up as continuous delivery, but has a condition that limits it to builds from the master branch

The important thing for me was that anyone can submit code to create a feature or fix a bug and raise a PR on develop. The only thing in the way of a new testable package being released is an approval on the PR. Limiting who can create PRs to master adds a little bit of friction and allows for a bit more quality control when releasing stable builds.

Having spent a bit of this morning configuring this, I was amazed that I could effectively complete the whole process of releasing beta and stable versions of the libraries from my phone (of course, I had already committed the code changes to GitHub from my desktop).

Continuous Delivery for OSS Projects


Over the last couple of years the Microsoft/.NET developer community has had to suffer through a substantial amount of crap thanks to some rather reckless decision making by Microsoft. There’s no single culprit, but hands down the single point of failure for the Microsoft community right now is Visual Studio, which in turn is suffering due to the continual mismanagement of NuGet. In recent builds of Visual Studio things are gradually improving, but updating package references for a moderately complex project is so bad it almost gives me anxiety – you can lose days at a time stuck in NuGet hell before you work out how to get a project back to a stable state. So what does this have to do with CD for OSS projects? Well, let me explain…

Last week we had the great privilege of having Geoff Huntley hang out in the Built to Roam offices, and we were sharing stories about maintaining OSS projects. One of the topics he’s passionate about is shifting ReactiveUI to a continuous delivery model. When I heard this my first comment was “you mean to beta packages, right?” His response was that, no, CD all the way through to release packages. What this would mean is that once a PR has been merged, a build is kicked off and a new release package pushed out to NuGet. This isn’t a new concept, and it’s one that I’ve heard being used in practice quite successfully by some businesses (eg Domain.com.au talk about it on their engineering blog), but applying it to an OSS project was new and immediately interesting to me.

Before we go into what needs to happen to make this work, let’s look at why you’d want to do CD for an OSS project. After the MvvmCross v6 release I raised an issue discussing release schedule and versioning, and one of the points I made is that in order to boost community contributions we wanted a known release schedule, particularly for bug fixes. Currently the goal is that if you submit a bug fix via a PR, it should be available in the next patch release, which should go out at the beginning of each month. But what if you only had to wait for your bug fix to be approved and merged – imagine if that then triggered a build and release of a new NuGet package that you could pull into your application. My belief is that this would significantly increase the willingness of the community to contribute and build a more collaborative ecosystem.

Here’s the kicker – what needs to happen in order to do CD? The answer is basically the same as for any release process – you need to ensure the release meets your quality bar (however you choose to measure that). Currently for MvvmCross we have a minimal set of unit tests that are run as part of a build. Beyond that, we rely on the maintainers having a good sense of the stability of the framework – this in itself is pretty concerning, but unfortunately all too common. The difficulty with a project such as MvvmCross is the sheer matrix of different platforms it supports (eg iOS, Android, UWP, Xamarin Forms (iOS, Android, UWP…), WPF, macOS…) and having to write unit and integration tests for all of them, and then being able to run each of the tests on real devices. There are solutions out there, such as App Center Test, which allows tests to be run on real devices, but what do we do for platforms such as Tizen which aren’t in the list of supported test devices?

So back to my introductory comments – let’s assume that we can solve the CD quality bar issue and that we’re pushing out new packages each time a PR is approved. Now let’s also assume that every package author is set up to do the same. What does that mean for the application I’m building – am I going to be suffering a heart attack every week from the continual need to upgrade package references?

One suggestion is to let Visual Studio do the heavy lifting for you – set your PackageReference entries to use Version=&quot;*&quot; for all packages. This will use the latest stable package and will upgrade as and when new versions are published. Of course, there will be scenarios where you may need to intervene and pin specific package versions (eg where there are incompatibilities between different package versions).
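For example, a wildcard PackageReference in the csproj looks like the following (the pinned package and version here are purely illustrative):

```xml
<!-- Floating version: restore pulls the latest stable release -->
<ItemGroup>
  <PackageReference Include="MvvmCross" Version="*" />
  <!-- Pinned version, for when a specific release is required -->
  <PackageReference Include="Newtonsoft.Json" Version="11.0.2" />
</ItemGroup>
```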

To go along with this suggestion is a massive word of caution – whilst in theory this should work well, as we move to a model where more packages release more frequently, Visual Studio will need to keep pace. I don’t believe the current NuGet infrastructure within Visual Studio could handle CD for all package references (and that’s assuming NuGet itself could handle it!). Let’s hope that Microsoft is onto this already and has some massive improvements in the pipeline.

Update 6th May: Unfortunately it appears that I was premature in suggesting that we can set the Version in PackageReference to *, specifically for cross platform development. Whilst it does appear to work for .NET Standard libraries, it does not work for legacy project types such as Xamarin Android, Xamarin iOS and UWP. Until these project types are updated to handle * versioning, you’re going to have to continue fixing specific version references.

MVX+1 Update


MvvmCross v6.0.1 was recently released, and I’ve just updated both FirstDemo and TipCalc to reference v6.0.1 of MvvmCross.

One of the changes that I did make to all projects is how packages are referenced. By default in Visual Studio, when you reference a NuGet package it will pin the specific version. However, by editing the csproj you can set the version to *, which means Visual Studio will draw in the latest stable version of your referenced libraries. This is particularly convenient if you’re not in the habit of remembering to upgrade packages frequently. The downside is that you may discover one day that your app stops working, or behaves differently, thanks to a new package version being used by your application. More on this in a future post, once I’ve collected my current thinking regarding continuous deployment and the impact this would have on app development.