Nick's .NET Travels

Continually looking for the yellow brick road so I can catch me a wizard....

Communication and Synchronization using Background Tasks with Windows and Windows Phone 8.1

In an earlier post I covered creating a background task for Windows platform applications that would allow for synchronization in the background, triggered by a change in Internet availability. However, one thing I didn’t cover is what happens when the application is in use: not only might the user be interacting with the data (loading and writing new records), they might even have forced a synchronization within the application. For this reason it may be necessary to prevent background synchronization whilst the foreground application is being used.

If you haven’t worked with Windows/Windows Phone 8.1 before, the OnActivated method in the Application class might be confusing – it’s not the same as the Activated/Deactivated pair from Windows Phone 7.x/8.0, which occurred reliably when the application went into or came out of the background. For Windows/Windows Phone 8.1 you need to look at the Window VisibilityChanged event to detect when the Window goes into the background (note that there is an issue with this on Windows 10 where it isn’t triggered when you switch to another application, as all applications are windowed by default).

protected override void OnWindowCreated(WindowCreatedEventArgs args)
{
    args.Window.VisibilityChanged += WindowVisibilityChanged;
    base.OnWindowCreated(args);
}

In the event handler I’m going to take ownership of a named Mutex when the Window is visible, and release it when the Window is no longer visible:

private static Mutex backgroundMutex = new Mutex(false, "BackgroundSync");
private async void WindowVisibilityChanged(object sender, Windows.UI.Core.VisibilityChangedEventArgs e)
{
    if (e.Visible)
    {
        backgroundMutex.WaitOne();

        await ReregisterTasks();
    }
    else
    {
        backgroundMutex.ReleaseMutex();
    }
}

You’ll notice that after taking ownership of the Mutex I then reregister my background task:

private async Task ReregisterTasks()
{
    try
    {
        BackgroundTaskManager.UnregisterBackgroundTasks(typeof(SyncTask).FullName, true);
        BackgroundTaskRegistration taskReg =
            await BackgroundTaskManager.RegisterBackgroundTask(typeof(SyncTask).FullName,
                "Background Sync Task",
                new SystemTrigger(SystemTriggerType.InternetAvailable, false),
                null);
    }
    catch (Exception ex)
    {
        // Swallow exceptions to ensure the app doesn't crash if background task registration fails
        Debug.WriteLine(ex.Message);
    }
}

The reason for this is that we want to cancel any running background task (the “true” parameter passed into the UnregisterBackgroundTasks call).

In the background task I have a Mutex with the same name (Named Mutexes are shared across processes so a great way to communicate between foreground/background tasks). At the beginning of the Run method I attempt to acquire the Mutex: if this succeeds I know my foreground application isn’t visible; if this fails (which it will do immediately since I specified a wait time of 0) it will simply return from the task as we don’t want to sync whilst the foreground application is visible. If the background task is going to run, I immediately release the Mutex which will ensure that if the foreground application is made visible, or launched, it won’t be blocked waiting to acquire the Mutex.

private static Mutex foregroundMutex = new Mutex(false, "BackgroundSync");
public async void Run(IBackgroundTaskInstance taskInstance)
{
    try
    {
        if (!foregroundMutex.WaitOne(0)) return;
        foregroundMutex.ReleaseMutex();
        // do the rest of background task

It’s important within your background task to override and handle the OnCanceled method so that if the background task is executing when it is unregistered, the task can be cancelled gracefully.
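A minimal sketch of that cancellation wiring follows. Note this assumes a plain IBackgroundTask, where cancellation arrives via the IBackgroundTaskInstance.Canceled event rather than a true override (a helper base class may expose an OnCanceled method instead); SynchronizeAsync is a hypothetical stand-in for your actual sync logic:

```csharp
private CancellationTokenSource syncCancellation;

public async void Run(IBackgroundTaskInstance taskInstance)
{
    // The deferral keeps the task alive while the async sync work runs
    var deferral = taskInstance.GetDeferral();
    syncCancellation = new CancellationTokenSource();
    taskInstance.Canceled += OnCanceled;
    try
    {
        // Foreground app visible (it holds the mutex) - bail out immediately
        if (!foregroundMutex.WaitOne(0)) return;
        foregroundMutex.ReleaseMutex();

        // Hypothetical sync method - pass the token so work can stop promptly
        await SynchronizeAsync(syncCancellation.Token);
    }
    catch (OperationCanceledException)
    {
        // Cancelled mid-sync (eg the task was unregistered) - exit gracefully
    }
    finally
    {
        deferral.Complete();
    }
}

private void OnCanceled(IBackgroundTaskInstance sender, BackgroundTaskCancellationReason reason)
{
    // Signal the running sync to stop as soon as possible
    if (syncCancellation != null)
    {
        syncCancellation.Cancel();
    }
}
```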

Custom Domains for Azure Mobile Services

When packaging a cloud-based solution, one of the tasks is to change the configuration of the services so that they have an application-specific domain. In the case of Azure Websites this feature has been available for quite a while in the form of custom domains; however, it was only recently that this capability was added to Azure Mobile Services. This enables me to change the Mobile Service URL from https://realestateinspector.azure-mobile.net to https://realestate.builttoroam.com. This capability is only available to Mobile Services running in Standard mode, which can be quite a costly commitment if custom domains are the only reason to upgrade.

Here’s a quick run through of setting up a custom domain. Note that this doesn’t include setting up SSL for your custom domain, which is highly recommended. There is more information here that includes using wildcard SSL certificates, which might be useful if you are packaging multiple services (eg Mobile Service and a Website) off the same base domain.

The first thing to do is to set up a CNAME record (alternatively you can set up an A record using these instructions) – this needs to be done with the name server that hosts the DNS records for your domain.


If you simply try to browse to the new URL you’ll see quite a useful 404 message listing possible causes. The first cause listed is exactly the scenario I now face – I have to configure the Mobile Service to know about the custom domain.


Currently there is no UI in the Azure portal for managing custom domains for Mobile Services, unlike Azure Websites where it can all be configured in the portal. Instead, I need to use the Azure CLI. Before doing this, make sure you are using v0.8.15 or higher (v0.8.15 is current at the time of writing). Note that I ran into some issues upgrading the Azure CLI – docs online suggest using npm (eg npm update azure-cli, or npm update azure-cli -g, depending on whether you installed the azure-cli globally or not). However, I found that this wasn’t working – the output suggested it had updated to 0.8.15, but when I queried azure -v I saw an earlier version. It turns out that I’d installed the azure-cli via the Web Platform Installer – in this case you either need to uninstall the azure-cli via the platform installer, or simply install the new version via the platform installer (which is what I did).

Adding a custom domain is then relatively straightforward: azure mobile domain add <mobileservicename> <customdomain> eg


Now when you browse to the new URL you see the typical Mobile Service status homepage.
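If the new URL doesn’t resolve straight away, you can confirm from the command line that the CNAME record has propagated (using nslookup; the host names here are the ones from my example):

```shell
# Query the CNAME record for the custom domain
nslookup -type=CNAME realestate.builttoroam.com

# The answer should point at the *.azure-mobile.net host, along the lines of:
#   realestate.builttoroam.com  canonical name = realestateinspector.azure-mobile.net
```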


When I run my client applications I need to update the Mobile Service client URL to point to the new address. I can then see in Fiddler that the traffic is indeed going to the new custom domain.


Database Migrations with Package Manager Console and Azure Mobile Services

I was caught out recently after I published an incorrect database migration to my cloud-based Azure Mobile Service (I had created a second controller based on the RealEstateProperty entity instead of the PropertyType entity). The upshot was that I only noticed this when all the entities I was synchronizing down from the cloud came back with null for most of their properties. Initially I thought my issue was with the migration I had performed on the database, so I decided to roll back to a previous version. My most recent migration was "201502260615263_Added proeprty type entity" and I wanted to roll back to the previous migration, "201501061158375_AddedInspections". To do this you can simply call the update-database method in the Package Manager Console:

update-database -TargetMigration "201501061158375_AddedInspections"

However, I wanted to run this against the database for the Mobile Service running in the cloud. To do this I need to add the -ConnectionString and -ConnectionProviderName parameters. The latter is easy as it is always the static value "System.Data.SqlClient", but the former requires two steps:

- In the Azure Management Portal go to the SQL Databases tab and then select the database that corresponds to the Mobile Service. With the database selected, click "Manage" from the toolbar – this will prompt you to add a firewall rule allowing access from your computer (this only happens the first time, or again if your IP address changes). You need to add this firewall rule as Visual Studio will be attaching directly to the database to run the code-first migration.


- From the Dashboard pane of the SQL Server database, select Connection Strings from the links on the right, and copy the contents of the ADO.NET connection string.


Now I can add the connection string to the update-database method:

update-database -TargetMigration "201501061158375_AddedInspections" -ConnectionString "Server=tcp:p7zzqmjcmf.database.windows.net,1433;Database=realestateinspector;User ID={my username};Password={your_password_here};Trusted_Connection=False;Encrypt=True;Connection Timeout=30;" -ConnectionProviderName "System.Data.SqlClient"

I checked that this had removed the PropertyType table (which was part of the migration I had just reversed), removed the old migration file "201502260615263_Added proeprty type entity.cs", and then regenerated the migration by calling add-migration again:

add-migration 'Added proeprty type entity'

Given that the Mobile Service itself hadn’t changed at that point, I figured I’d simply call update-database without the TargetMigration parameter but with the ConnectionString that points to my actual Mobile Service. This seemed to go ok, but when I ran my Mobile Service and attempted to synchronize my PropertyType entities I hit an exception, and discovered the root of my issue: I had two controllers both referencing the RealEstateProperty entity. I fixed that and republished my Mobile Service. Now synchronization worked, but mainly because there were no entities in the PropertyType table in the database. I then attempted to add a PropertyType using direct access (rather than synchronizing entities) in the MobileServiceClient (using GetTable instead of GetSyncTable) – this caused a strange exception, as it seemed to require that the CreatedAt property be set. I’ve never had to do this on previous inserts, so I sensed something was wrong. Using the Visual Studio 2015 CTP I connected directly to the SQL Server database and, sure enough, on my PropertyType table there were no triggers for insert/update. Usually this is where the CreatedAt column is updated.

So, feeling a little puzzled, I decided to undo my migration on the Mobile Service database once more. This time, instead of attempting to change any of the migration scripts, all I did was republish my Mobile Service. Now when I attempted to add a PropertyType it worked, no problems. Checking with Visual Studio 2015, the trigger on the PropertyType table had been successfully created. I’m not sure exactly what happens when the Mobile Service runs, but it seems to do more than just apply the code-first migrations. Updating the cloud database via the Package Manager Console appears to skip the step where Mobile Services adds the appropriate triggers, and thus should be avoided.

Multiple Bootstrapper in WebApiConfig for Mobile Service

In my “wisdom” I decided to rename the primary assembly for my Mobile Service (ie just changing the assembly name in the Properties pane for the Mobile Service).


This all worked nicely when running locally, but when I published to Azure I started seeing the following error in the log, and of course my service wouldn’t run:

Error: More than one static class with name 'WebApiConfig' was found as bootstrapper in assemblies: RealEstateInspector.Services, realestateinspectorService. Please provide only one class or use the 'IBootstrapper' attribute to define a unique bootstrapper.

It turns out that when I was publishing I didn’t have the "Remove additional files at destination" box checked in the Publish Web dialog. This meant that my old Mobile Service assembly (ie with the old name) was still floating around. As reflection is used over the assemblies in the bin folder to locate the bootstrapper, it was picking up the same class in both assemblies – hence the issue.


Checking the “Remove additional files at destination” box ensures only those files that are currently in your Mobile Service project are deployed.

Azure Active Directory Graph API and Azure Mobile Service

Last month, in an earlier post, I talked about using the Azure Active Directory Graph API client library in my Azure Mobile Service. Whilst everything I wrote about does indeed work when published to the cloud, it raises a number of errors that are visible in the log, and the status of the service ends up as Critical – which is definitely something I don’t want. The error looks something like the following:

Error: Found conflicts between different versions of the same dependent assembly 'System.Spatial': 5.6.2.0, 5.6.3.0. Please change your project to use version '5.6.2.0' which is the one currently supported by the hosting environment.

Essentially the issue is that the Graph API references newer versions of some of the data libraries (System.Spatial, Microsoft.Data.OData, Microsoft.Data.Edm and Microsoft.Data.Services.Client, to be exact). What’s unfortunate is that even using an assembly binding redirect in the web.config file to point to the newer versions of these libraries, which are deployed with the service, the errors still appear in the log. As there don’t seem to be any compatibility issues between the Graph API and the slightly older versions (ie 5.6.2.0), I even tried downgrading the other libraries (you can use the -Force parameter in the Package Manager Console to remove NuGet packages even if others depend on them, so I removed the new versions and added the old versions back in) but of course Visual Studio then fails its validation checks during compilation.
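For reference, the binding redirect looks something like this in web.config (version numbers taken from the error above; the public key token is the one used by the Microsoft data libraries – and, as noted, this silences the compile-time conflict but the hosting environment still logs the error):

```xml
<runtime>
  <assemblyBinding xmlns="urn:schemas-microsoft-com:asm.v1">
    <dependentAssembly>
      <assemblyIdentity name="System.Spatial"
                        publicKeyToken="31bf3856ad364e35" culture="neutral" />
      <bindingRedirect oldVersion="0.0.0.0-5.6.3.0" newVersion="5.6.3.0" />
    </dependentAssembly>
    <!-- Repeat for Microsoft.Data.OData, Microsoft.Data.Edm and
         Microsoft.Data.Services.Client with the same version range -->
  </assemblyBinding>
</runtime>
```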

The upshot is that you have to either:

- Wait for the Mobile Services team to upgrade their backend to support the new versions of these libraries. Personally I don’t understand why this causes an error in the logs and forces the service to Critical, particularly since my service actually appears to be operating fine!

- Downgrade the Graph API library back to the most recent v1 release – this references the older versions of those libraries so has no issues. Unfortunately it doesn’t contain the well-factored ActiveDirectoryClient class, making it harder to query AAD.

Migrating Data Between Blob Storage Accounts in Azure

Over the last couple of posts I’ve been talking about working with different configurations, and in my previous post I noted that one of the things we had to do was migrate some data that had been entered into the Test environment into the Production environment (again, I stress that I’m not recommending this, but occasionally you have to bend the process a little). One of the challenges we encountered was that we not only had to migrate the database, which was easy using the database copy capability in the Azure portal, we also needed to migrate the related blob storage data from one account to another. Here’s some quick code that makes use of the Azure Storage client library (the WindowsAzure.Storage package via NuGet; more information here).

Firstly in the app.config we have two connection strings:

<connectionStrings>
    <add name="BlobMigrator.Properties.Settings.SourceStorage"
      connectionString="DefaultEndpointsProtocol=https;AccountName=sourceaccount;AccountKey=YYYYYYYYYYY" />
    <add name="BlobMigrator.Properties.Settings.TargetStorage"
      connectionString="DefaultEndpointsProtocol=https;AccountName=targetaccount;AccountKey=XXXXXXXXXXXXXXX" />

</connectionStrings>

Next, some straightforward code to iterate through the containers in one storage account and copy their content across to the target account:

var source = CloudStorageAccount.Parse(Settings.Default.SourceStorage);
var target = CloudStorageAccount.Parse(Settings.Default.TargetStorage);

var sourceClient = source.CreateCloudBlobClient();
var targetClient = target.CreateCloudBlobClient();

var containers = sourceClient.ListContainers("searchprefix").ToArray();
Debug.WriteLine("Source containers: " + containers.Length);
var idx = 0;
foreach (var cnt in containers)
{
    var tcnt = targetClient.GetContainerReference(cnt.Name);
    await tcnt.CreateIfNotExistsAsync();

    var sblobs = cnt.ListBlobs();
    foreach (var sblob in sblobs)
    {
        var b = await sourceClient.GetBlobReferenceFromServerAsync(sblob.Uri);
        var tb = tcnt.GetBlockBlobReference(b.Name);
        // StartCopyFromBlobAsync returns the copy ID; the copy runs server side,
        // and for a cross-account copy the source blob must be readable by the
        // target (eg public access or a SAS URL)
        var copyId = await tb.StartCopyFromBlobAsync(b.Uri);
        Debug.WriteLine(copyId);
    }
    idx++;
    Debug.WriteLine("Migrated {0} of {1} - {2}", idx, containers.Length, cnt.Name);
}

In this case it’s limiting the containers that are copied to those that start with the prefix “searchprefix” but this is optional if you want to copy all containers.
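As an aside, much the same migration can be done without code using Microsoft’s AzCopy command line tool. The switches below are my recollection of the AzCopy syntax current at the time (check AzCopy’s help for your version); the account names and keys are placeholders matching the app.config above:

```shell
AzCopy /Source:https://sourceaccount.blob.core.windows.net/mycontainer ^
       /Dest:https://targetaccount.blob.core.windows.net/mycontainer ^
       /SourceKey:YYYYYYYYYYY /DestKey:XXXXXXXXXXXXXXX /S
```

The /S switch recurses through the container; unlike the code above, though, you need to invoke it once per container.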

Client Configurations for Different Mobile Service Environments

In my previous post I talked about setting up different instances of the backend cloud services. The next thing is to control which environment a given build of the client applications will point to. I used to do this with build configurations (eg defining compilation symbols like DEBUG and TEST) to toggle which constants are compiled into the application. This is a little painful if you want to actually debug against production (ie run the application pointing at production data, but in debug mode so you can step through code). It also meant that my configuration information quickly became distributed all over the place in my Constants file. Here is a simple Configuration class that I’m now using as an alternative. Note that I still use compilation constants as the default way of specifying which configuration is used; however, it can easily be overridden to allow for debugging against Test or Production.

public class Configuration
{
    static Configuration()
    {
#if DEBUG
#if DEBUGLOCAL
        Current = Configurations[ConfigurationType.LocalDevelopment];
#else
        Current = Configurations[ConfigurationType.Development];
#endif
#elif TEST
        Current = Configurations[ConfigurationType.Test];
#else
        Current = Configurations[ConfigurationType.Production];
#endif

#pragma warning disable 162 // This is to allow for easy override of configuration values to debug issues
        if (false)
        {
            Debug.WriteLine("-----------------WARNING - Default Configuration Values Overridden ------------------");
            Current = Configurations[ConfigurationType.Production];
        }
#pragma warning restore 162
    }

    public static Configuration Current { get; set; }

    public string ADTenant { get; set; }

    public string ADAuthority
    {
        get { return ClientConstants.ADAuthorityRoot + ADTenant; }
    }

    public string ADNativeClientApplicationClientId { get; set; }

    public string ADRedirectUri { get; set; }

    public string MobileServiceRootUri { get; set; }
    public string MobileServiceAppIdUri { get; set; }

    public string MobileServiceApiKey { get; set; }

    public enum ConfigurationType
    {
        LocalDevelopment,
        Development,
        Test,
        Production
    }

    public static IDictionary<ConfigurationType, Configuration> Configurations
    {
        get { return configurations; }
    }

    private static readonly IDictionary<ConfigurationType, Configuration> configurations
        = new Dictionary<ConfigurationType, Configuration>
        {
            {
                ConfigurationType.LocalDevelopment, new Configuration
                {
                    ADTenant = "realestateinspector.onmicrosoft.com",
                    ADNativeClientApplicationClientId = "a5a10ee9-zzzzzzz-4bde-997f-3f1c323fefa5",
                    ADRedirectUri = "http://builttoroam.com",
                    MobileServiceRootUri = "http://localhost:51539/",
                    MobileServiceAppIdUri = "https://realestateinspectordev.azure-mobile.net/login/aad",
                    MobileServiceApiKey = "wpxaI---------------------------EBcg12"
                }
            },
            {
                ConfigurationType.Development, new Configuration
                {
                    ADTenant = "realestateinspector.onmicrosoft.com",
                    ADNativeClientApplicationClientId = "a5a10ee9-zzzzzzz-4bde-997f-3f1c323fefa5",
                    ADRedirectUri = "http://builttoroam.com",
                    MobileServiceRootUri = "https://realestateinspectordev.azure-mobile.net/",
                    MobileServiceAppIdUri = "https://realestateinspectordev.azure-mobile.net/login/aad",
                    MobileServiceApiKey = "wpxaI---------------------------EBcg12"
                }
            },
            {
                ConfigurationType.Test, new Configuration
                {
                    ADTenant = "realestateinspector.onmicrosoft.com",
                    ADNativeClientApplicationClientId = "a5a10ee9-tttt-4bde-997f-3f1c323fefa5",
                    ADRedirectUri = "http://builttoroam.com",
                    MobileServiceRootUri = "https://realestateinspectortest.azure-mobile.net/",
                    MobileServiceAppIdUri = "https://realestateinspectortest.azure-mobile.net/login/aad",
                    MobileServiceApiKey = "wpxaI---------------------------EBcg12"
                }
            },
            {
                ConfigurationType.Production, new Configuration
                {
                    ADTenant = "realestateinspector.onmicrosoft.com",
                    ADNativeClientApplicationClientId = "a5a10ee9-wwww-4bde-997f-3f1c323fefa5",
                    ADRedirectUri = "http://builttoroam.com",
                    MobileServiceRootUri = "https://realestateinspector.azure-mobile.net/",
                    MobileServiceAppIdUri = "https://realestateinspector.azure-mobile.net/login/aad",
                    MobileServiceApiKey = "wpxaI---------------------------EBcg12"
                }
            }
        };
}

Adding a new configuration is easy:
- define a new enumeration value (eg LocalDevelopment)
- create a new entry in the Configurations dictionary
- (optional) test by changing "if (false)" to "if (true)" and changing the specified configuration
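Consuming the configuration is then a one-liner wherever a value is needed. A sketch of what that might look like (constructor signatures per the Mobile Services client SDK and ADAL of the time; the variable names are just illustrative):

```csharp
// All environment-specific values come from the single Configuration entry,
// so no #if blocks are needed at the call site
var mobileService = new MobileServiceClient(
    Configuration.Current.MobileServiceRootUri,
    Configuration.Current.MobileServiceApiKey);

// Likewise for authenticating against the matching AAD tenant
var authContext = new AuthenticationContext(Configuration.Current.ADAuthority);
```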

Different Cloud Environments for Development, Testing and Production

One of the aspects of developing applications with a cloud backend that gets overlooked initially is how to separate development from test and production versions of the application. For web applications ASP.NET solved this by supporting transformations in the web.config file based on build configuration (eg web.Debug.config and web.Release.config). However, this is harder with client applications, which don’t have config files and don’t understand configuration transformations. The other issue with transformations is that they’re only applied during the publishing process, rather than whenever you change the build configuration in Visual Studio.
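For comparison, an ASP.NET transform is just an overlay file named after the build configuration. A hypothetical web.Release.config that swaps a connection string at publish time might look like this (the connection string name and server are placeholders):

```xml
<configuration xmlns:xdt="http://schemas.microsoft.com/XML-Document-Transform">
  <connectionStrings>
    <!-- Replace the matching connection string when publishing Release builds -->
    <add name="DefaultConnection"
         connectionString="Server=prod-server;Database=prod;"
         xdt:Transform="SetAttributes" xdt:Locator="Match(name)" />
  </connectionStrings>
</configuration>
```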

I’ll come back to how I’ve chosen to handle different application configurations in a later post. In this post I want to discuss how we’ve handled having multiple environments for our Mobile Service backend, including how we split the environments between our development team and the client’s site.

Our strategy was to have three environments: Development, Testing and Production. Development was housed within the Built to Roam development Azure subscription, which the development team has access to. For the most part anyone within the development team could deploy to this environment at any stage – of course there was some self-management involved to minimize breaking changes. As an aside, as I’ve pointed out in a previous post, it is possible to set up Mobile Services to run locally, even if you enable Azure Active Directory authentication. The Development environment was also based on an Azure Active Directory (AAD) tenant created explicitly for the development of that project – that way accounts could be added and removed without affecting any other AAD.

Test and Production were both created in the customer’s Azure subscription. This was to minimize differences between these environments. These environments also connected to the customer’s AAD, which meant that testing could be carried out with real user accounts, since their AAD was synchronized with their internal AD. In a case where writing back to AAD is supported you may want to consider having Test point to a separate AAD tenant, but for our purposes AAD was read-only, so there was no issue in using the same AAD tenant for both Test and Production.

For each of these we created a separate Mobile Service, named according to the environment, with production being the exception as we decided to drop the “production” suffix. Taking the RealEstateInspector example our services would be called:

Development – RealEstateInspectorDev
Testing – RealEstateInspectorTest
Production – RealEstateInspector

Note that we shortened both Development and Testing to just Dev and Test for simplicity.

We also created corresponding storage accounts, with names that matched the names of the Mobile Services.

We also created corresponding applications in the appropriate Azure Active Directory, again with names that matched the corresponding environment. We didn’t use the same applications for Testing and Production to ensure we could configure them separately if required.

One issue we faced is that during the first iteration of development as the system was undergoing final testing in the Testing environment some real data was entered into the system. This meant that rather than simply deploying to Production we actually needed to migrate data from Testing to Production (definitely not something I would recommend as best practice). To do this was actually relatively simple using the ability in Azure to copy a SQL database and then within the Mobile Service change the database that it points to. We also had to migrate content from one storage account to another for which we couldn’t find a simple out of the box tool to use. However, this was actually much simpler than we thought and I’ll come back to this in a future post.

Adding Logging to Client Applications using MetroLog not NLog

I wanted to add some logging to my set of applications and was somewhat disappointed to discover the complete lack of PCL support in NLog. After a quick search to find out what others are using, I came across MetroLog, which seems to be very similar in a lot of regards to NLog (in fact they claim the API surface is very similar indeed). I went to install the NuGet package…


and noticed that my Core library (a PCL) wasn’t in the list. Clearly my PCL profile doesn’t match one of the supported profiles, which is a bit painful. However, it does support all my client application types, so I was happy at least to use MetroLog.


I did have a couple of issues installing the NuGet package – namely, Visual Studio decided to crash midway through installing the packages. This meant I had to manually install Microsoft.Bcl.Compression, which is a dependency for the Windows Phone 8.0 project, and then uninstall and reinstall the support for the Windows and Desktop projects.

After all this I was building successfully again, so it was time to think about how to structure the logging support. Clearly with logging I want it to be as simple as possible and yet accessible virtually anywhere within the application. I want to define a service that is available within my Core library, in a similar way to my IDataService and ISyncService implementations. I also wanted the log output to be written to a SQLite database file for ease of access (there are plenty of third-party tools capable of viewing SQLite db files), but rather than use the SqliteTarget that comes with MetroLog I felt I had to write my own (as you do). Luckily this whole process is relatively simple.

I start by creating an ILogWriterService interface, which provides the low-level API for writing a LogEntry to a local Mobile Service SQLite table (I’m using the same mechanism that is used to cache my synchronized data, except that, for the time being at least, the log data won’t be synchronized anywhere).

public interface ILogWriterService
{
    IMobileServiceSyncTable<LogEntry> LogTable { get; }

    Task Initialize();
}

public class LogEntry : BaseEntityData
{
    public DateTime LogDateTime { get; set; }
    public string Entry { get; set; }
}

public class LogWriterService : ILogWriterService
{
    private readonly MobileServiceClient mobileService = new MobileServiceClient(Constants.MobileServiceRootUri);

    private IMobileServiceClient MobileService
    {
        get { return mobileService; }
    }

    public IMobileServiceSyncTable<LogEntry> LogTable
    {
        get { return MobileService.GetSyncTable<LogEntry>(); }
    }

    public async Task Initialize()
    {
        var data = new MobileServiceSQLiteStore("log.db");
        data.DefineTable<LogEntry>();

        await MobileService.SyncContext.InitializeAsync(data, new MobileServiceSyncHandler());
    }
}

Next I define the high level service interface, ILogService:

public interface ILogService
{
    void Debug(string message);

    void Exception(string message, Exception ex);
}

So far, all of these classes have been in the Core library. However, the implementation of the ILogService has to be in the Shared.Client project as it needs to be used by all the client projects.

public class LogService : ILogService
{
    public ILogWriterService Writer { get; set; }

    public ILogger Logger { get; set; }

    public LogService(ILogWriterService writer)
    {
        Writer = writer;
        var target = new MobileServiceTarget(Writer);

        LogManagerFactory.DefaultConfiguration.AddTarget(LogLevel.Debug, target);

        Logger = LogManagerFactory.DefaultLogManager.GetLogger("Default");
    }

    public void Debug(string message)
    {
        Logger.Debug(message);
    }

    public void Exception(string message, Exception ex)
    {
        Logger.Error(message, ex);
    }
}

As you can see the implementation sets up the MetroLog logger but uses a custom MobileServiceTarget as the destination. This is implemented as follows:

public class MobileServiceTarget : Target
{
    public ILogWriterService Writer { get; set; }
    public MobileServiceTarget(ILogWriterService writer)
        : base(new SingleLineLayout())
    {
        Writer = writer;
    }

    private bool InitCompleted { get; set; }
    protected override async Task<LogWriteOperation> WriteAsyncCore(LogWriteContext context, LogEventInfo entry)
    {
        try
        {
            if (!InitCompleted)
            {
                await Writer.Initialize();
                InitCompleted = true;
            }
            var log = new LogEntry { LogDateTime = DateTime.Now, Entry = entry.ToJson() };
            await Writer.LogTable.InsertAsync(log);
            return new LogWriteOperation(this, entry, true);
        }
        catch (Exception)
        {
            return new LogWriteOperation(this, entry, false);
        }
    }
}

I of course need to register the implementations with Autofac:

builder.RegisterType<LogWriterService>().As<ILogWriterService>();
builder.RegisterType<LogService>().As<ILogService>();

And the last thing is a static helper class that makes logging for the two core scenarios really easy:

public static class LogHelper
{
    public static void Log<TEntity>(TEntity entity, [CallerMemberName] string caller = null)
    {
        var json = JsonConvert.SerializeObject(entity);
        Log(typeof(TEntity).Name + ": " + json, caller);
    }

    public static void Log(string message = null, [CallerMemberName] string caller = null)
    {
        try
        {
            InternalWriteLog("[" + caller + "] " + message);
        }
        catch (Exception ex)
        {
            Debug.WriteLine(ex.Message);
        }
    }

    public static void Log(this Exception ex, string message = null, [CallerMemberName] string caller = null)
    {
        try
        {
            Debug.WriteLine("Exception ({0}): {1}", caller, ex.Message);
            InternalWriteException(caller + ": " + message, ex);
        }
        catch (Exception ext)
        {
            Debug.WriteLine(ext.Message);
        }
    }

    private static ILogService logService;

    private static ILogService LogService
    {
        get
        {
            if (logService == null)
            {
                logService = ServiceLocator.Current.GetInstance<ILogService>();

            }
            return logService;
        }
    }

    private static void InternalWriteLog(string message)
    {
        try
        {

            LogService.Debug(message);
        }
        catch (Exception ext)
        {
            Debug.WriteLine(ext.Message);
        }
    }

    private static void InternalWriteException(string message, Exception ex)
    {
        try
        {
            LogService.Exception(message, ex);
        }
        catch (Exception ext)
        {
            Debug.WriteLine(ext.Message);
        }
    }
}

The first scenario is a simple string output eg:

LogHelper.Log("Startup complete");

The second is logging the output of an Exception:

try
{
   …
}
catch (Exception ex)
{
    ex.Log();
}

Note that the Exception logging also does a Debug.WriteLine which is convenient during development to pick up any issues in the Output window.

Creating Design Time Data in Blend for Shared XAML Pages

In a previous post I created a second page for my Universal (Windows/Windows Phone) applications which was placed in the Shared project. Unfortunately Blend doesn’t support design time data for XAML pages that are in shared projects. However, there is a trick to get the design time data wired up and displaying for these pages.

I’ll start by opening the MainPage for the Windows application, which is in the Windows project as it’s not shared with the Windows Phone application. From the Data pane I can go ahead and create a sample data set. There are different ways to partition your sample data – I prefer to have a sample data set per page; in this case the data is for SecondPage, which is in the shared project.

image

In this case my design time data is just going to be made up of a single complex entity, CurrentProperty.

image

Next I am going to remove the SecondPageDataSource that was created on the MainPage and add it instead to the Application.Resources section in the App.xaml. Note that I needed to add the d, mc and SampleData namespaces.

xmlns:d="http://schemas.microsoft.com/expression/blend/2008"
    xmlns:mc="http://schemas.openxmlformats.org/markup-compatibility/2006"
    xmlns:SampleData="using:Blend.SampleData.SecondPageDataSource"
    mc:Ignorable="d">
    <Application.Resources>
        <SampleData:SecondPageDataSource x:Key="SecondPageDataSource" d:IsDataSource="True"/>

Now when I open up the SecondPage I’ll see that there is a SecondPageDataSource under the Data pane.

image

One thing to be aware of is that this will only exist when designing the Windows application. Unfortunately you’ll need to create a different sample data set for use when designing the page for Windows Phone.

Blend Designer Error Due to Service Locator

I was just about to get started using Blend to layout a page and noticed that there was an error in the Results pane in Blend, stating that the ServiceLocatorProvider must be set.

image

I was pretty certain that this was something I was doing during startup but clearly that code isn’t being run correctly at design time. Turns out I don’t really care at design time since I’m going to predominantly use design time data. This means that in the ViewModelLocator constructor I can simply exit if it’s being invoked at design time. Unfortunately the usual design mode property that you can query to determine if the code is being run at design time doesn’t exist in the view model locator scope. Luckily an alternative is to simply query if the service locator provider has been set:

public ViewModelLocator()
{
    if (!ServiceLocator.IsLocationProviderSet) return;
    DataService = ServiceLocator.Current.GetInstance<IDataService>();
    SyncService = ServiceLocator.Current.GetInstance<ISyncService>();
    NavigateService = ServiceLocator.Current.GetInstance<INavigateService>();
}

Synchronizing in a Background Task

Now that we have implementations for IDataService and ISyncService, I can update the background task for the Windows platform applications to perform synchronization in the background. To begin with I need to reference the Autofac libraries (including the Microsoft common service locator and extensions libraries) by adding them to the RealEstateInspector.Background project via the NuGet package manager.

image

 

The next thing is to update the Run method of the background task so that it looks for a current access token and uses it to initialise the data service.

public async void Run(IBackgroundTaskInstance taskInstance)
{
    try
    {
        // Take the deferral so the task isn't terminated while awaiting async work
        deferral = taskInstance.GetDeferral();

        var cost = BackgroundWorkCost.CurrentBackgroundWorkCost;

        var authContext = new AuthenticationContext(Constants.ADAuthority);
        if (authContext.TokenCache.ReadItems().Count() > 0)
        {
            authContext = new AuthenticationContext(authContext.TokenCache.ReadItems().First().Authority);
        }

        var authResult =
            await
                authContext.AcquireTokenSilentAsync(Constants.MobileServiceAppIdUri,
                Constants.ADNativeClientApplicationClientId);
        if (authResult != null && !string.IsNullOrWhiteSpace(authResult.AccessToken))
        {
            var dataService = ServiceLocator.Current.GetInstance<IDataService>();
            var syncService = ServiceLocator.Current.GetInstance<ISyncService>();

            await dataService.Initialize(authResult.AccessToken);

            if (cost == BackgroundWorkCostValue.High)
            {
                await syncService.ForceUpload();
            }
            else
            {
                await syncService.Synchronise(true);
            }
        }
    }
    catch (Exception ex)
    {
        Debug.WriteLine(ex.Message);
    }
    finally
    {
        if (deferral != null)
        {
            deferral.Complete();
        }
    }
}

Depending on the cost of the background task I either want the task to force an upload of pending updates (if high cost), or do a full synchronisation.

Navigation in the WPF Application Between View Models

In my previous post I showed adding a INavigateService to facilitate navigation between view models. This included an implementation of the service for Universal applications. For WPF the implementation looks very similar:

public class WPFNavigationService : CoreNavigateService<Page>
{
    protected override void NavigateToView(Type viewType)
    {
        (App.Current.MainWindow.Content as Frame).Navigate(new Uri("/Pages/" + viewType.Name + ".xaml", UriKind.RelativeOrAbsolute));
    }
}

Note that this assumes that pages are in the Pages folder of the project.

The other change that is required is that the WPF application needs to have a Frame which can be used to navigate between pages. So, the MainWindow now looks like

<Window x:Class="RealEstateInspector.Desktop.MainWindow"
        xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
        xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
        Title="MainWindow" Height="350" Width="525">
    <Frame Source="/Pages/MainPage.xaml" />
</Window>

All the content that was in the MainWindow is now in MainPage. And the type registration at application startup is getting more complex:

public void ApplicationStartup()
{
    CoreApplication.Startup(builder =>
    {

        builder.RegisterType<SignalRFactory>().As<ISignalR>();
#if NETFX_CORE
        builder.RegisterType<UniversalUIContext>().As<IUIContext>();
        builder.RegisterType<WindowsPlatformNavigationService>().SingleInstance().As<INavigateService>();

#elif DESKTOP
        builder.RegisterType<WPFNavigationService>().SingleInstance().As<INavigateService>();
#endif
    });

#if NETFX_CORE
    var navService = ServiceLocator.Current.GetInstance<INavigateService>() as WindowsPlatformNavigationService;
#elif DESKTOP
    var navService = ServiceLocator.Current.GetInstance<INavigateService>() as WPFNavigationService;
#endif
#if NETFX_CORE || DESKTOP
    navService.Register<MainViewModel, MainPage>();
    navService.Register<SecondViewModel, SecondPage>();
#endif
}

This creates a navigation bar which allows the user to navigate back between pages:

image

Navigation Service for Cross Platform Page/View Navigation from View Model

I already have a solution for allowing the view model to jump back onto the UI thread using the UIContext. However, I currently don’t have a mechanism that will allow one view model to initiate navigation to a new page/view. What I want is the ability for one view model to request navigation by specifying the destination view model. Of course, this needs to work cross platform. Let’s take a look at the basics of how this could work – there are essentially two strategies that most MVVM-style navigation frameworks use: the first is by convention, where the lookup for the destination page/view is based on the name of the target view model; the second is by defining a mapping between view models and the corresponding page/view. In this case I’m going to go with the latter. In the Core library I define a couple of interfaces and an abstract implementation:

public interface INavigateService
{
    void Navigate<TViewModel>() where TViewModel : IDataViewModel;
}

public interface INativeNavigateService<TView> : INavigateService
    where TView : class,new()
{
    void Register<TViewModel, TViewType>() where TViewType : TView;
}

public abstract class CoreNavigateService<TView> : INativeNavigateService<TView> where TView : class, new()
{
    private readonly IDictionary<Type, Type> viewDictionary = new Dictionary<Type, Type>();

    protected Type ViewType<TViewModel>()
    {
        Type viewType = null;
        viewDictionary.TryGetValue(typeof(TViewModel), out viewType);
        return viewType;
    }

    public void Register<TViewModel, TViewType>() where TViewType : TView
    {
        viewDictionary[typeof(TViewModel)] = typeof(TViewType);
    }

    public void Navigate<TViewModel>() where TViewModel : IDataViewModel
    {
        var navType = ViewType<TViewModel>();
        NavigateToView(navType);
    }

    protected abstract void NavigateToView(Type viewType);
}

Next, in the client projects, I define a class that inherits from CoreNavigateService and implements the NavigateToView method. Here is the Windows Platform implementation:

public class WindowsPlatformNavigationService : CoreNavigateService<Page>
{
    protected override void NavigateToView(Type viewType)
    {
        (Window.Current.Content as Frame).Navigate(viewType);
    }
}

The ApplicationStartup method now looks like:

public void ApplicationStartup()
{
    CoreApplication.Startup(builder =>
    {

        builder.RegisterType<SignalRFactory>().As<ISignalR>();
#if NETFX_CORE
        builder.RegisterType<UniversalUIContext>().As<IUIContext>();

        builder.RegisterType<WindowsPlatformNavigationService>().SingleInstance().As<INavigateService>();
#endif
    });

    var navService = ServiceLocator.Current.GetInstance<INavigateService>() as WindowsPlatformNavigationService;

#if NETFX_CORE
    navService.Register<MainViewModel,MainPage>();
    navService.Register<SecondViewModel,SecondPage>();
#endif
}

The IDataViewModel, BaseViewModel and ViewModelLocator all need to be extended to include an INavigateService property called NavigateService. Now, from within a view model, navigation can be invoked by calling NavigateService.Navigate<SecondViewModel>().
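A minimal sketch of those extensions (the wiring here is illustrative – adapt it to your own locator) could look like:

```csharp
// Sketch: BaseViewModel exposes the navigation service alongside the
// existing data and sync services.
public class BaseViewModel : INotifyPropertyChanged
{
    public IDataService DataService { get; set; }
    public ISyncService SyncService { get; set; }
    public INavigateService NavigateService { get; set; }

    public event PropertyChangedEventHandler PropertyChanged;
}

// Sketch: inside ViewModelLocator.CreateViewModel<T>(), assign the service
// when the view model instance is created:
// baseVM.NavigateService = ServiceLocator.Current.GetInstance<INavigateService>();
```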

Refactoring ViewModelLocator with Autofac

After reviewing the way that I was constructing the IDataService and ISyncService implementations I figured that I wasn’t really leveraging Autofac very well. I realised that I could refactor the ViewModelLocator to at least look up the current implementation, for example:

public ViewModelLocator()
{
    DataService= ServiceLocator.Current.GetInstance<IDataService>();
    SyncService = ServiceLocator.Current.GetInstance<ISyncService>();
}

Of course for this to work I have to register the types, which can be done in the ApplicationCore class since the implementations of both interfaces are located in the Core library:

public class ApplicationCore
{
    public void Startup(Action<ContainerBuilder> dependencyBuilder)
    {
        var builder = new ContainerBuilder();

        builder.RegisterType<DataService>().SingleInstance().As<IDataService>();
        builder.RegisterType<SyncService>().SingleInstance().As<ISyncService>();

        dependencyBuilder(builder);

Note however, that whilst this is a bit of an improvement, the ViewModelLocator is still an example of a service locator, which is an anti-pattern (http://blog.ploeh.dk/2010/02/03/ServiceLocatorisanAnti-Pattern/). I’m yet to find a workable improvement that will still allow me to construct the ViewModelLocator in XAML.

Integration Synchronization Wrapper and Restructuring Application Services

So far all the Mobile Service operations, including holding the instance of the MobileServiceClient, have been handled by the MainViewModel. Clearly, as the application grows, this is not a viable solution, so we need some application services which can be used to hold the reference to the MobileServiceClient and to facilitate application logic such as data access and synchronisation. To this end I’m going to create two services, IDataService and ISyncService, with their corresponding implementations as follows:

public interface IDataService
{
    IMobileServiceClient MobileService { get; }

    Task Initialize(string aadAccessToken);
}

public class DataService: IDataService
{
    private readonly MobileServiceClient mobileService = new MobileServiceClient(
        Constants.MobileServiceRootUri,
        "wpxaIplpeXtkn------QEBcg12",
        new MobileServiceHttpHandler()
        );

    public IMobileServiceClient MobileService
    {
        get { return mobileService; }
    }

    public async Task Initialize(string aadAccessToken)
    {
        var jobj = new JObject();
        jobj["access_token"] = aadAccessToken;
        var access = await MobileService.LoginAsync(MobileServiceAuthenticationProvider.WindowsAzureActiveDirectory, jobj);
        Debug.WriteLine(access != null);
        var data = new MobileServiceSQLiteStore("inspections.db");
        data.DefineTable<RealEstateProperty>();
        data.DefineTable<Inspection>();

        await MobileService.SyncContext.InitializeAsync(data, new CustomMobileServiceSyncHandler());

    }
}

The IDataService implementation holds the reference to the IMobileServiceClient. This will need to be initialized by passing in the Azure Active Directory access token, but thereafter the MobileService accessor can be used to access data directly through the IMobileServiceClient instance.

public interface ISyncService
{
    event EventHandler<DualParameterEventArgs<double, string>> Progress;
    Task Synchronise(bool waitForCompletion);
    Task ForceUpload();
}

public class SyncService: ISyncService
{
    [Flags]
    private enum SyncStages
    {
        None = 0,
        UploadChanges = 1,
        PullProperties = 2,
        PullInspections = 4,
        All = UploadChanges | PullProperties | PullInspections
    }

    public event EventHandler<DualParameterEventArgs<double,string>> Progress;

    public IDataService DataService { get; set; }

    private ISynchronizationContext<SyncStages> SynchronizationManager { get; set; }

    public SyncService(IDataService dataService)
    {
        DataService = dataService;
        SynchronizationManager = new SynchronizationContext<SyncStages>();
        SynchronizationManager.DefineSynchronizationStep(SyncStages.UploadChanges, UploadPendingLocalChanges);
        SynchronizationManager.DefineSynchronizationStep(SyncStages.PullProperties, DownloadChangesToRealEstateProperties);
        SynchronizationManager.DefineSynchronizationStep(SyncStages.PullInspections, DownloadChangesToInspections);
        SynchronizationManager.SynchronizationChanged += SynchronizationManager_SynchronizationProgressChanged;
    }

    public async Task Synchronise(bool waitForCompletion)
    {
        await SynchronizationManager.Synchronize(SyncStages.All, waitForSynchronizationToComplete: waitForCompletion);
    }

    public async Task ForceUpload()
    {
        await SynchronizationManager.Synchronize(SyncStages.UploadChanges, true, true);
    }

    private void SynchronizationManager_SynchronizationProgressChanged(object sender, SynchronizationEventArgs<SyncStages> e)
    {
        var message = e.ToString();
        if (Progress != null)
        {
            Progress(this,new object[]{ e.PercentageComplete, message});
        }
    }

    private async Task<bool> UploadPendingLocalChanges(ISynchronizationStage<SyncStages> stage)
    {
        await DataService.MobileService.SyncContext.PushAsync(stage.CancellationToken);
        return true;
    }
    private async Task<bool> DownloadChangesToRealEstateProperties(ISynchronizationStage<SyncStages> stage)
    {
        await DataService.MobileService.PullLatestAsync<RealEstateProperty>(stage.CancellationToken);
        return true;
    }
    private async Task<bool> DownloadChangesToInspections(ISynchronizationStage<SyncStages> stage)
    {
        await DataService.MobileService.PullLatestAsync<Inspection>(stage.CancellationToken);
        return true;
    }
}

The ISyncService defines the actual synchronization steps. Rather than simply exposing a generic Synchronize method that accepts a SyncStages parameter to determine which steps are synchronized, the ISyncService exposes high level methods for performing a full synchronization (Synchronise) and for just uploading pending changes (ForceUpload). Note that the former has a parameter indicating whether the method should wait for synchronization to complete before returning, whereas the latter will always wait for the upload part of the synchronization to complete.

To make these services available to the view models of the application the BaseViewModel has been updated to include properties for both services:

public class BaseViewModel : INotifyPropertyChanged
{
    public IDataService DataService { get; set; }
    public ISyncService SyncService { get; set; }

And of course the ViewModelLocator is updated to create instances of these services and assign them to the view model when they’re created:

public class ViewModelLocator
{
    public IDataService DataService { get; set; }
    public ISyncService SyncService { get; set; }

    public ViewModelLocator()
    {
        DataService=new DataService();
        SyncService=new SyncService(DataService);
    }

    public MainViewModel Main
    {
        get { return CreateViewModel<MainViewModel>(); }
    }

    private readonly Dictionary<Type, object> viewModels = new Dictionary<Type, object>();

    private T CreateViewModel<T>() where T:new()
    {
        var type = typeof (T);
        object existing;
        if (!viewModels.TryGetValue(type, out existing))
        {
            existing = new T();
            viewModels[type] = existing;
        }

        var baseVM = existing as BaseViewModel;
        if (baseVM != null)
        {
            baseVM.DataService = DataService;
            baseVM.SyncService = SyncService;
        }

        return (T)existing;
    }
}

Complex Synchronization Wrapper

One of the more complex tasks in building offline-enabled (aka occasionally connected/disconnected) software is how you handle synchronization. Most synchronization frameworks typically handle synchronization of one form of data. For example Mobile Services allow for synchronization of individual tables based on a query. However, for most application this isn’t sufficient – they may require database synchronization in addition to uploading of new images taken on the device and downloading any associated documents for offline viewing. This means you need a synchronization layer that can co-ordinate synchronization of different data types/formats/processes.

There may be times within the application where you don’t want to perform a full synchronization. For example, if the user creates a new record, the application should attempt to push this new record to the server immediately, but it may not want to do a full synchronization until the user hits the sync button. This means you need a mechanism where you can partition the synchronization layer and only trigger synchronization of individual parts as required.

Here’s an example of what the synchronization wrapper might look like in action:

[Flags]
public enum SyncStages
{
    None=0,
    Stage1=1,
    Stage2=2,
    Stage3=4,
    All = Stage1 | Stage2 | Stage3
}

public class TestSynchronization
{
    public SynchronizationContext<SyncStages> SynchronizationManager { get; set; }

    public Action<string> Progress { get; set; }

    public TestSynchronization()
    {
        SynchronizationManager = new SynchronizationContext<SyncStages>();
        SynchronizationManager.DefineSynchronizationStep(SyncStages.Stage1, Step1);
        SynchronizationManager.DefineSynchronizationStep(SyncStages.Stage2, Step2);
        SynchronizationManager.DefineSynchronizationStep(SyncStages.Stage3, Step3);
        SynchronizationManager.SynchronizationChanged += SynchronizationManager_SynchronizationProgressChanged;
    }

    private void SynchronizationManager_SynchronizationProgressChanged(object sender, SynchronizationEventArgs<SyncStages> e)
    {
        var message = e.ToString();
        Progress(message);
    }

    private const int Step1Stages = 5;

    public async Task<bool> Step1(ISynchronizationStage<SyncStages> step)
    {
        step.RegisterSubStagesCount(Step1Stages);
        for (int i = 0; i < Step1Stages; i++)
        {
            step.StartSubStage();
            await Task.Delay(1000, step.CancellationToken);
            step.EndSubStage();
            if (step.CancellationToken.IsCancellationRequested) return false;
        }
        return true;
    }

    public async Task<bool> Step2(ISynchronizationStage<SyncStages> step)
    {
        await Task.Delay(2*1000, step.CancellationToken);
        return true;
    }

    public async Task<bool> Step3(ISynchronizationStage<SyncStages> step)
    {
        step.RegisterSubStages(Step3Stages.S3Stage1, Step3Stages.S3Stage2, Step3Stages.S3Stage3);

        await step.RunSubStage(Step3Stages.S3Stage1, Step3Sub);

        await step.RunSubStage(Step3Stages.S3Stage2, Step3Sub);

        await step.RunSubStage(Step3Stages.S3Stage3, Step3Sub);

        return true;
    }

    private enum Step3Stages
    {
        S3Stage1,
        S3Stage2,
        S3Stage3
    }

    private async Task<bool> Step3Sub(ISynchronizationStage<Step3Stages> step)
    {
        step.Progress(0.2);
        await Task.Delay(2000);
        step.Progress(0.7);
        await Task.Delay(2000);
        step.Progress(1.0);
        return true;
    }

}

Triggering and/or cancelling synchronization can then be done using the following:

await model.SynchronizationManager.Synchronize(SyncStages.Stage1 | SyncStages.Stage2, waitForSynchronizationToComplete: true);
await model.SynchronizationManager.Synchronize(SyncStages.All,
                cancelExistingSynchronization:true,
                waitForSynchronizationToComplete: true);
await model.SynchronizationManager.Cancel(true);

The first line triggers stages 1 and 2 to be synchronized – it won’t cancel any existing synchronization process and will only return once it has completed the synchronization process; The second line triggers all stages to be synchronized and will cancel any existing synchronization process; The third line will cancel any existing synchronization and only return once they’ve been cancelled.

I’ve attached a first pass at an implementation of such a sync framework. Note that the actual sync logic is in the steps shown above, it’s the framework for scheduling them and reporting progress which is being shown in the sample.

SynchronizationWrapper.zip (48.4KB)

Adding a Background Task to the Windows Platform Applications

At some point you’re likely to want to run code in the background – this might be to update live tiles, or to do periodic synchronization of data. In this post I’ll add a background task to the Windows platform applications. I’ll start by adding a new Windows Runtime Component to the solution.

image

As a Windows Runtime Component there are some additional restrictions on the definition of types and the way methods are exposed. However, I can still add a reference from the background project to the Core library of the application. As the Core library encapsulates all the logic for the application, this should be sufficient to perform background operations such as synchronizing data or updating tile contents.

Next I want to add a reference from the Universal applications (Windows and Windows Phone) to the background task project. Once done, it’s time to create the actual task that will be invoked in the background. I’ll replace the default Class1.cs with SyncTask.cs with the following templated structure:

public sealed class SyncTask : IBackgroundTask
{
    private BackgroundTaskCancellationReason cancelReason = BackgroundTaskCancellationReason.Abort;
    private volatile bool cancelRequested = false;
    private BackgroundTaskDeferral deferral = null;
    //
    // The Run method is the entry point of a background task.
    //
    public async void Run(IBackgroundTaskInstance taskInstance)
    {
        try
        {
            Debug.WriteLine("Background " + taskInstance.Task.Name + " Starting...");

            //
            // Get the deferral object from the task instance, and take a reference to the taskInstance;
            //
            deferral = taskInstance.GetDeferral();
            //
            // Associate a cancellation handler with the background task.
            //
            taskInstance.Canceled += OnCanceled;

            //
            // Query BackgroundWorkCost
            // Guidance: If BackgroundWorkCost is high, then perform only the minimum amount
            // of work in the background task and return immediately.
            var cost = BackgroundWorkCost.CurrentBackgroundWorkCost;

            if (cost == BackgroundWorkCostValue.High)
            {
                // Only push changes
            }
            else
            {
                // Do full sync
            }
        }
        catch (Exception ex)
        {
            Debug.WriteLine(ex.Message);
        }
        finally
        {
            if (deferral != null)
            {
                deferral.Complete();
            }
        }
    }

    //
    // Handles background task cancellation.
    //
    private void OnCanceled(IBackgroundTaskInstance sender, BackgroundTaskCancellationReason reason)
    {
        //
        // Indicate that the background task is canceled.
        //
        cancelRequested = true;
        cancelReason = reason;
        Debug.WriteLine("Background " + sender.Task.Name + " Cancel Requested...");
    }
}

I also need to define this task as a background task in the Declarations section of the project manifest files. In this case we’re going to be triggering the background task based on a system event.

image
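For reference, the corresponding entry that ends up in the Package.appxmanifest looks roughly like this (the entry point namespace is an assumption based on the project name):

```xml
<Extensions>
  <Extension Category="windows.backgroundTasks"
             EntryPoint="RealEstateInspector.Background.SyncTask">
    <BackgroundTasks>
      <!-- systemEvent covers system triggers such as InternetAvailable -->
      <Task Type="systemEvent" />
    </BackgroundTasks>
  </Extension>
</Extensions>
```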

Lastly, the background task needs to be registered. For this I’m using this BackgroundTaskManager class which wraps the registration process for tasks.

public class BackgroundTaskManager
{
    /// <summary>
    /// Register a background task with the specified taskEntryPoint, name, trigger,
    /// and condition (optional).
    /// </summary>
    /// <param name="taskEntryPoint">Task entry point for the background task.</param>
    /// <param name="name">A name for the background task.</param>
    /// <param name="trigger">The trigger for the background task.</param>
    /// <param name="condition">An optional conditional event that must be true for the task to fire.</param>
    public static async Task<BackgroundTaskRegistration> RegisterBackgroundTask(String taskEntryPoint, String name, IBackgroundTrigger trigger, IBackgroundCondition condition)
    {
        BackgroundExecutionManager.RemoveAccess();
        var hasAccess = await BackgroundExecutionManager.RequestAccessAsync();
        if (hasAccess == BackgroundAccessStatus.Denied) return null;

        var builder = new BackgroundTaskBuilder();
        builder.Name = name;
        builder.TaskEntryPoint = taskEntryPoint;
        builder.SetTrigger(trigger);
        BackgroundTaskRegistration task = builder.Register();
        Debug.WriteLine(task);

        return task;
    }

    /// <summary>
    /// Unregister background tasks with specified name.
    /// </summary>
    /// <param name="name">Name of the background task to unregister.</param>
    /// <param name="cancelRunningTask">Flag that cancels or let task finish job</param>
    public static void UnregisterBackgroundTasks(string name, bool cancelRunningTask)
    {
        //
        // Loop through all background tasks and unregister any with SampleBackgroundTaskName or
        // SampleBackgroundTaskWithConditionName.
        //
        foreach (var cur in BackgroundTaskRegistration.AllTasks)
        {
            if (cur.Value.Name == name)
            {
                cur.Value.Unregister(cancelRunningTask);
            }
        }
    }
}

The actual registration is done when the application starts. In this case I’m going to cheat a little and include it in the OnNavigatedTo for the MainPage:

protected async override void OnNavigatedTo(NavigationEventArgs e)
{
    base.OnNavigatedTo(e);

    try
    {
        BackgroundTaskManager.UnregisterBackgroundTasks(typeof(SyncTask).FullName, true);
        BackgroundTaskRegistration taskReg =
            await BackgroundTaskManager.RegisterBackgroundTask(typeof(SyncTask).FullName,
                "Background Sync Task",
                new SystemTrigger(SystemTriggerType.InternetAvailable, false),
                null);
    }
    catch (Exception ex)
    {
        // Swallow this to ensure app doesn't crash in the case of back ground tasks not registering
        Debug.WriteLine(ex.Message);
    }
}

Notice that this task is registering interest in the InternetAvailable trigger, allowing the background task to be invoked whenever connectivity changes. Note that this process works for both Windows and Windows Phone Universal projects.

Update:

What I forgot to include here is that if you want tasks to run in the background on the Windows platform you need to make them lock screen enabled. To do this set the “Lock screen notifications” to either Badge or Badge and Tile Text.

image
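In manifest XML terms this corresponds to something like the following (a sketch only – the display values and badge logo path are placeholders, and the element names assume the Windows 8.1 m2 manifest schema):

```xml
<m2:VisualElements DisplayName="RealEstateInspector"
                   Description="Real Estate Inspector"
                   ForegroundText="light"
                   BackgroundColor="#464646">
  <!-- Notification can be "badge" or "badgeAndTileText" -->
  <m2:LockScreen Notification="badge" BadgeLogo="Assets\BadgeLogo.png" />
</m2:VisualElements>
```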

This will cause a couple of red circles indicating issues with the configuration of the application. The first is that you need to specify either location, timer, control channel or push notifications in the Declarations tab – I’ve chosen to check the push notification checkbox.

[image: Declarations tab with the push notification checkbox checked]

The other requirement is a badge logo – I’ve only specified one of the three options. I’d recommend providing all three if you’re serious about delivering a high-quality application.

[image: Visual Assets tab with a badge logo specified]
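For reference, since screenshots don't reproduce well here, the relevant parts of Package.appxmanifest end up looking roughly like the fragment below (Windows 8.1 schema; the logo path and entry point shown are illustrative and will differ in your project):

```xml
<!-- Inside the <Application> element of Package.appxmanifest -->
<m2:VisualElements DisplayName="RealEstateInspector" ... >
  <!-- Lock screen notifications set to Badge, with the required badge logo -->
  <m2:LockScreen Notification="badge" BadgeLogo="Assets\BadgeLogo.png" />
</m2:VisualElements>
<Extensions>
  <!-- The background task declaration. Note that EntryPoint must exactly
       match the namespace and class name of the task implementation -->
  <Extension Category="windows.backgroundTasks"
             EntryPoint="RealEstateInspector.Tasks.SyncTask">
    <BackgroundTasks>
      <Task Type="pushNotification" />
      <Task Type="systemEvent" />
    </BackgroundTasks>
  </Extension>
</Extensions>
```

Note that a mismatch between EntryPoint and the actual task class is exactly the kind of typo described in Update 2 below.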

Update 2:

When it actually came to running this I must confess I ran into an issue with a typo. Back in the second screenshot an observant reader would have noticed the spelling mistake in the RealEstateInspector namespace. When I attempted to register the task I saw something similar to the following:

[image: Class not registered exception raised during task registration]

Full exception details:

System.Exception occurred
  HResult=-2147221164
  Message=Class not registered (Exception from HRESULT: 0x80040154 (REGDB_E_CLASSNOTREG))

Fixing up the Client For Writing to Azure Blob Storage with Shared Access Signature

In my previous post I updated the service logic for retrieving the Shared Access Signature (SAS) so that it returns the full Url of the blob container, including the SAS. For this to work I also need to update the client logic. The client actually gets simpler, as I can construct a new CloudBlockBlob by amending the Url to include the name of the blob to be written to.

private async void CaptureClick(object sender, RoutedEventArgs e)
{
    var picker = new MediaPicker();
    var sas = string.Empty;
    using (var media = await picker.PickPhotoAsync())
    using (var strm = media.GetStream())
    {
        sas = await CurrentViewModel.RetrieveSharedAccessSignature();

        // Append the image file name to the Path (this will
        // retain the SAS as it's in the query string)
        var builder = new UriBuilder(sas);
        builder.Path += "/testimage" + Path.GetExtension(media.Path);
        var imageUri = builder.Uri;

        // Upload the new image as a BLOB from the stream.
        var blob = new CloudBlockBlob(imageUri);
        await blob.UploadFromStreamAsync(strm.AsInputStream());
    }
}

But we can actually do even better. What we get back is a Url, including the SAS, for the blob container, so we can use the Azure Storage library to create a CloudBlobContainer and then acquire the blob reference from there – this does the work of combining the Urls for us.

private async void CaptureClick(object sender, RoutedEventArgs e)
{
    var picker = new MediaPicker();
    var sas = string.Empty;
    using (var media = await picker.PickPhotoAsync())
    using (var strm = media.GetStream())
    {
        sas = await CurrentViewModel.RetrieveSharedAccessSignature();
        var container = new CloudBlobContainer(new Uri(sas));
        var blobFromContainer = container.GetBlockBlobReference("testimage" + Path.GetExtension(media.Path));
        await blobFromContainer.UploadFromStreamAsync(strm.AsInputStream());
    }
}

Modifying the GET Request for the SharedAccessSignature Controller

In the previous post I noted that the code was pretty messy, particularly the client code with its collection of hardcoded literals. To fix this I’m going to encapsulate the full blob storage URL in the server code, meaning the client doesn’t need to know the Url of blob storage – this will make it easier to administer in the future as things change.

It turns out that in order to make this change all I needed to do is to return the full blob container url (including the SAS) instead of just the SAS.

// Return the container Uri with the SAS appended as the query string
var ub = new UriBuilder(container.Uri.OriginalString)
{
    Query = container.GetSharedAccessSignature(sasPolicy).TrimStart('?')
};
sas = ub.Uri.OriginalString;
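Pieced together with the container and policy creation covered in the previous post, the controller's Get method ends up along these lines (the connection string name, container name and expiry window here are illustrative, not prescriptive):

```csharp
public class SharedAccessSignatureController : ApiController
{
    public async Task<string> Get()
    {
        var account = CloudStorageAccount.Parse(
            ConfigurationManager.ConnectionStrings["BlobStorage"].ConnectionString);
        var client = account.CreateCloudBlobClient();
        var container = client.GetContainerReference("inspectionimages");
        await container.CreateIfNotExistsAsync();

        // Grant write access to the container for a short window
        var sasPolicy = new SharedAccessBlobPolicy
        {
            Permissions = SharedAccessBlobPermissions.Write,
            SharedAccessExpiryTime = DateTime.UtcNow.AddMinutes(15)
        };

        // Return the container Uri with the SAS appended as the query string
        var ub = new UriBuilder(container.Uri.OriginalString)
        {
            Query = container.GetSharedAccessSignature(sasPolicy).TrimStart('?')
        };
        return ub.Uri.OriginalString;
    }
}
```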

The client code of course needs to be updated to handle the full Uri being passed back. Note that we didn’t include the name of the blob when creating the Uri – that's something the client should do. Since the SAS grants access to the whole container, the client doesn’t have to request a new SAS for each blob, only for each container it wants to write to.
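This means a single container SAS can be reused across multiple uploads. As a rough sketch (filesToUpload is a hypothetical collection of StorageFile instances):

```csharp
// One SAS covers the whole container, so request it once...
var sas = await CurrentViewModel.RetrieveSharedAccessSignature();
var container = new CloudBlobContainer(new Uri(sas));

// ...and reuse it for every blob written to that container
foreach (var file in filesToUpload)
{
    var blob = container.GetBlockBlobReference(file.Name);
    using (var strm = await file.OpenStreamForReadAsync())
    {
        await blob.UploadFromStreamAsync(strm.AsInputStream());
    }
}
```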