Nick's .NET Travels

Continually looking for the yellow brick road so I can catch me a wizard....

Navigation Service for Cross Platform Page/View Navigation from View Model

I already have a solution for allowing the view model to jump back onto the UI thread using the UIContext. However, I currently don’t have a mechanism that will allow one view model to initiate navigation to a new page/view. What I want is the ability for one view model to request navigation by specifying the destination view model. Of course, this needs to work cross platform. Let’s take a look at the basics of how this could work. There are essentially two strategies that most MVVM-style navigation frameworks use: the first is by convention, where the lookup for the destination page/view is based on the name of the target view model; the second is by defining a mapping between view models and the corresponding page/view. In this case I’m going to go with the latter. In the Core library I define a couple of interfaces and an abstract implementation:

public interface INavigateService
{
    void Navigate<TViewModel>() where TViewModel : IDataViewModel;
}

public interface INativeNavigateService<TView> : INavigateService
    where TView : class,new()
{
    void Register<TViewModel, TViewType>() where TViewType : TView;
}

public abstract class CoreNavigateService<TView> : INativeNavigateService<TView> where TView : class, new()
{
    private readonly IDictionary<Type, Type> viewDictionary = new Dictionary<Type, Type>();

    protected Type ViewType<TViewModel>()
    {
        Type viewType = null;
        viewDictionary.TryGetValue(typeof(TViewModel), out viewType);
        return viewType;
    }

    public void Register<TViewModel, TViewType>() where TViewType : TView
    {
        // Register the concrete view type (not the TView base type) against the view model
        viewDictionary[typeof(TViewModel)] = typeof(TViewType);
    }

    public void Navigate<TViewModel>() where TViewModel : IDataViewModel
    {
        var navType = ViewType<TViewModel>();
        NavigateToView(navType);
    }

    protected abstract void NavigateToView(Type viewType);
}

Next, in the client projects, I define a class that inherits from CoreNavigateService and implements the NavigateToView method. Here is the Windows Platform implementation:

public class WindowsPlatformNavigationService : CoreNavigateService<Page>
{
    protected override void NavigateToView(Type viewType)
    {
        (Window.Current.Content as Frame).Navigate(viewType);
    }
}

The ApplicationStartup method now looks like:

public void ApplicationStartup()
{
    CoreApplication.Startup(builder =>
    {

        builder.RegisterType<SignalRFactory>().As<ISignalR>();
#if NETFX_CORE
        builder.RegisterType<UniversalUIContext>().As<IUIContext>();

        builder.RegisterType<WindowsPlatformNavigationService>().SingleInstance().As<INavigateService>();
#endif
    });

#if NETFX_CORE
    // WindowsPlatformNavigationService only exists on this platform, so the cast
    // needs to live inside the conditional block too
    var navService = ServiceLocator.Current.GetInstance<INavigateService>() as WindowsPlatformNavigationService;
    navService.Register<MainViewModel, MainPage>();
    navService.Register<SecondViewModel, SecondPage>();
#endif
}

The IDataViewModel interface, BaseViewModel and ViewModelLocator all need to be extended to include an INavigateService property called NavigateService. Now, from within a view model, navigation can be invoked by calling NavigateService.Navigate<SecondViewModel>().
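To see the mapping strategy working in isolation, here’s a minimal console sketch. Note that SimpleNavigateService and the types around it are simplified, hypothetical stand-ins for the interfaces above (there’s no actual navigation here, just the view-model-to-view lookup):

```csharp
using System;
using System.Collections.Generic;

// Hypothetical, simplified stand-ins for the project types above
public interface IDataViewModel { }
public class SecondViewModel : IDataViewModel { }
public class SecondPage { }

public class SimpleNavigateService
{
    private readonly Dictionary<Type, Type> viewDictionary = new Dictionary<Type, Type>();

    // Map a view model type to its corresponding view type
    public void Register<TViewModel, TView>() where TViewModel : IDataViewModel
    {
        viewDictionary[typeof(TViewModel)] = typeof(TView);
    }

    // Look up the view type registered for a view model (null if not registered)
    public Type Resolve<TViewModel>() where TViewModel : IDataViewModel
    {
        Type viewType;
        viewDictionary.TryGetValue(typeof(TViewModel), out viewType);
        return viewType;
    }
}

public class Program
{
    public static void Main()
    {
        var nav = new SimpleNavigateService();
        nav.Register<SecondViewModel, SecondPage>();
        Console.WriteLine(nav.Resolve<SecondViewModel>()); // SecondPage
    }
}
```

The real CoreNavigateService then only differs in that the resolved type is handed to a platform-specific NavigateToView implementation.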

Refactoring ViewModelLocator with Autofac

After reviewing the way that I was constructing the IDataService and ISyncService implementations I figured that I wasn’t really leveraging Autofac very well. I realised that I could refactor the ViewModelLocator to at least look up the current implementation, for example:

public ViewModelLocator()
{
    DataService= ServiceLocator.Current.GetInstance<IDataService>();
    SyncService = ServiceLocator.Current.GetInstance<ISyncService>();
}

Of course, for this to work I have to register the types, which can be done in the ApplicationCore class since the implementations of both interfaces are located in the Core library:

public class ApplicationCore
{
    public void Startup(Action<ContainerBuilder> dependencyBuilder)
    {
        var builder = new ContainerBuilder();

        builder.RegisterType<DataService>().SingleInstance().As<IDataService>();
        builder.RegisterType<SyncService>().SingleInstance().As<ISyncService>();

        dependencyBuilder(builder);

        // Build the container and expose it via the common service locator
        // so that ServiceLocator.Current.GetInstance<T>() resolves from Autofac
        var container = builder.Build();
        ServiceLocator.SetLocatorProvider(() => new AutofacServiceLocator(container));
    }
}
Note however, that whilst this is a bit of an improvement, the ViewModelLocator is still an example of a service locator, which is an anti-pattern (http://blog.ploeh.dk/2010/02/03/ServiceLocatorisanAnti-Pattern/). I’m yet to find a workable improvement that will still allow me to construct the ViewModelLocator in XAML.

Integration Synchronization Wrapper and Restructuring Application Services

So far all the Mobile Service operations, including holding the instance of the MobileServiceClient, have been handled by the MainViewModel. Clearly, as the application grows, this is not a viable solution, so we need some application services which can be used to hold the reference to the MobileServiceClient and to facilitate application logic such as data access and synchronisation. To this end I’m going to create two services, IDataService and ISyncService, with their corresponding implementations as follows:

public interface IDataService
{
    IMobileServiceClient MobileService { get; }

    Task Initialize(string aadAccessToken);
}

public class DataService: IDataService
{
    private readonly MobileServiceClient mobileService = new MobileServiceClient(
        Constants.MobileServiceRootUri,
        "wpxaIplpeXtkn------QEBcg12",
        new MobileServiceHttpHandler()
        );

    public IMobileServiceClient MobileService
    {
        get { return mobileService; }
    }

    public async Task Initialize(string aadAccessToken)
    {
        var jobj = new JObject();
        jobj["access_token"] = aadAccessToken;
        var access = await MobileService.LoginAsync(MobileServiceAuthenticationProvider.WindowsAzureActiveDirectory, jobj);
        Debug.WriteLine(access != null);
        var data = new MobileServiceSQLiteStore("inspections.db");
        data.DefineTable<RealEstateProperty>();
        data.DefineTable<Inspection>();

        await MobileService.SyncContext.InitializeAsync(data, new CustomMobileServiceSyncHandler());

    }
}

The IDataService implementation holds the reference to the IMobileServiceClient. This will need to be initialized by passing in the Azure Active Directory access token, but thereafter the MobileService accessor can be used to access data directly through the IMobileServiceClient instance.

public interface ISyncService
{
    event EventHandler<DualParameterEventArgs<double, string>> Progress;
    Task Synchronise(bool waitForCompletion);
    Task ForceUpload();
}

public class SyncService: ISyncService
{
    [Flags]
    private enum SyncStages
    {
        None = 0,
        UploadChanges = 1,
        PullProperties = 2,
        PullInspections = 4,
        All = UploadChanges | PullProperties | PullInspections
    }

    public event EventHandler<DualParameterEventArgs<double,string>> Progress;

    public IDataService DataService { get; set; }

    private ISynchronizationContext<SyncStages> SynchronizationManager { get; set; }

    public SyncService(IDataService dataService)
    {
        DataService = dataService;
        SynchronizationManager = new SynchronizationContext<SyncStages>();
        SynchronizationManager.DefineSynchronizationStep(SyncStages.UploadChanges, UploadPendingLocalChanges);
        SynchronizationManager.DefineSynchronizationStep(SyncStages.PullProperties, DownloadChangesToRealEstateProperties);
        SynchronizationManager.DefineSynchronizationStep(SyncStages.PullInspections, DownloadChangesToInspections);
        SynchronizationManager.SynchronizationChanged += SynchronizationManager_SynchronizationProgressChanged;
    }

    public async Task Synchronise(bool waitForCompletion)
    {
        await SynchronizationManager.Synchronize(SyncStages.All, waitForSynchronizationToComplete: waitForCompletion);
    }

    public async Task ForceUpload()
    {
        await SynchronizationManager.Synchronize(SyncStages.UploadChanges, true, true);
    }

    private void SynchronizationManager_SynchronizationProgressChanged(object sender, SynchronizationEventArgs<SyncStages> e)
    {
        var message = e.ToString();
        if (Progress != null)
        {
            Progress(this,new object[]{ e.PercentageComplete, message});
        }
    }

    private async Task<bool> UploadPendingLocalChanges(ISynchronizationStage<SyncStages> stage)
    {
        await DataService.MobileService.SyncContext.PushAsync(stage.CancellationToken);
        return true;
    }
    private async Task<bool> DownloadChangesToRealEstateProperties(ISynchronizationStage<SyncStages> stage)
    {
        await DataService.MobileService.PullLatestAsync<RealEstateProperty>(stage.CancellationToken);
        return true;
    }
    private async Task<bool> DownloadChangesToInspections(ISynchronizationStage<SyncStages> stage)
    {
        await DataService.MobileService.PullLatestAsync<Inspection>(stage.CancellationToken);
        return true;
    }
}

The ISyncService defines the actual synchronization steps. Rather than simply exposing a generic Synchronize method that accepts a SyncStages parameter to determine which steps are synchronized, the ISyncService exposes high level methods for performing a full synchronization (Synchronise) and for just uploading pending changes (ForceUpload). Note that the former has a parameter indicating whether the method should wait for synchronization to complete before returning, whereas the latter will always wait for the upload part of the synchronization to complete.
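The Progress event follows the standard .NET event pattern, so a consumer can subscribe to surface feedback in the UI. Here’s a minimal self-contained sketch; FakeSyncService and SyncProgressEventArgs are hypothetical stand-ins for SyncService and DualParameterEventArgs<double, string> above:

```csharp
using System;

// Hypothetical stand-in for DualParameterEventArgs<double, string>
public class SyncProgressEventArgs : EventArgs
{
    public double PercentageComplete { get; set; }
    public string Message { get; set; }
}

public class FakeSyncService
{
    public event EventHandler<SyncProgressEventArgs> Progress;

    // Raises Progress the same way SyncService forwards synchronization events
    public void ReportProgress(double percentage, string message)
    {
        var handler = Progress;
        if (handler != null)
        {
            handler(this, new SyncProgressEventArgs { PercentageComplete = percentage, Message = message });
        }
    }
}

public class Program
{
    public static void Main()
    {
        var sync = new FakeSyncService();
        sync.Progress += (s, e) => Console.WriteLine("{0:P0} - {1}", e.PercentageComplete, e.Message);
        sync.ReportProgress(0.5, "Uploading changes"); // prints a culture-formatted "50 %" style message
    }
}
```

In the real application the handler would typically marshal back to the UI thread (via the UIContext mentioned earlier) before updating any bound properties.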

To make these services available to the view models of the application the BaseViewModel has been updated to include properties for both services:

public class BaseViewModel : INotifyPropertyChanged
{
    public IDataService DataService { get; set; }
    public ISyncService SyncService { get; set; }

And of course the ViewModelLocator is updated to create instances of these services and assign them to the view model when they’re created:

public class ViewModelLocator
{
    public IDataService DataService { get; set; }
    public ISyncService SyncService { get; set; }

    public ViewModelLocator()
    {
        DataService=new DataService();
        SyncService=new SyncService(DataService);
    }

    public MainViewModel Main
    {
        get { return CreateViewModel<MainViewModel>(); }
    }

    private readonly Dictionary<Type, object> viewModels = new Dictionary<Type, object>();

    private T CreateViewModel<T>() where T:new()
    {
        var type = typeof (T);
        object existing;
        if (!viewModels.TryGetValue(type, out existing))
        {
            existing = new T();
            viewModels[type] = existing;
        }

        var baseVM = existing as BaseViewModel;
        if (baseVM != null)
        {
            baseVM.DataService = DataService;
            baseVM.SyncService = SyncService;
        }

        return (T)existing;
    }
}

Complex Synchronization Wrapper

One of the more complex tasks in building offline-enabled (aka occasionally connected/disconnected) software is how you handle synchronization. Most synchronization frameworks typically handle synchronization of one form of data. For example, Mobile Services allows for synchronization of individual tables based on a query. However, for most applications this isn’t sufficient – they may require database synchronization in addition to uploading new images taken on the device and downloading any associated documents for offline viewing. This means you need a synchronization layer that can coordinate synchronization of different data types/formats/processes.

There may be times within the application where you don’t want to perform a full synchronization. For example, if the user creates a new record, the application should attempt to push this new record to the server immediately, but it may not want to do a full synchronization until the user hits the sync button. This means you need a mechanism where you can partition the synchronization layer and only trigger synchronization of individual parts as required.

Here’s an example of what the synchronization wrapper might look like in action:

[Flags]
public enum SyncStages
{
    None=0,
    Stage1=1,
    Stage2=2,
    Stage3=4,
    All = Stage1 | Stage2 | Stage3
}

public class TestSynchronization
{
    public SynchronizationContext<SyncStages> SynchronizationManager { get; set; }

    public Action<string> Progress { get; set; }

    public TestSynchronization()
    {
        SynchronizationManager = new SynchronizationContext<SyncStages>();
        SynchronizationManager.DefineSynchronizationStep(SyncStages.Stage1, Step1);
        SynchronizationManager.DefineSynchronizationStep(SyncStages.Stage2, Step2);
        SynchronizationManager.DefineSynchronizationStep(SyncStages.Stage3, Step3);
        SynchronizationManager.SynchronizationChanged += SynchronizationManager_SynchronizationProgressChanged;
    }

    private void SynchronizationManager_SynchronizationProgressChanged(object sender, SynchronizationEventArgs<SyncStages> e)
    {
        var message = e.ToString();
        Progress(message);
    }

    private const int Step1Stages = 5;

    public async Task<bool> Step1(ISynchronizationStage<SyncStages> step)
    {
        step.RegisterSubStagesCount(Step1Stages);
        for (int i = 0; i < Step1Stages; i++)
        {
            step.StartSubStage();
            await Task.Delay(1000, step.CancellationToken);
            step.EndSubStage();
            if (step.CancellationToken.IsCancellationRequested) return false;
        }
        return true;
    }

    public async Task<bool> Step2(ISynchronizationStage<SyncStages> step)
    {
        await Task.Delay(2*1000, step.CancellationToken);
        return true;
    }

    public async Task<bool> Step3(ISynchronizationStage<SyncStages> step)
    {
        step.RegisterSubStages(Step3Stages.S3Stage1, Step3Stages.S3Stage2, Step3Stages.S3Stage3);

        await step.RunSubStage(Step3Stages.S3Stage1, Step3Sub);

        await step.RunSubStage(Step3Stages.S3Stage2, Step3Sub);

        await step.RunSubStage(Step3Stages.S3Stage3, Step3Sub);

        return true;
    }

    private enum Step3Stages
    {
        S3Stage1,
        S3Stage2,
        S3Stage3
    }

    private async Task<bool> Step3Sub(ISynchronizationStage<Step3Stages> step)
    {
        step.Progress(0.2);
        await Task.Delay(2000);
        step.Progress(0.7);
        await Task.Delay(2000);
        step.Progress(1.0);
        return true;
    }

}

Triggering and/or cancelling synchronization can then be done using the following:

await model.SynchronizationManager.Synchronize(SyncStages.Stage1 | SyncStages.Stage2, waitForSynchronizationToComplete: true);
await model.SynchronizationManager.Synchronize(SyncStages.All,
                cancelExistingSynchronization:true,
                waitForSynchronizationToComplete: true);
await model.SynchronizationManager.Cancel(true);

The first line triggers stages 1 and 2 to be synchronized; it won’t cancel any existing synchronization process and will only return once the synchronization has completed. The second line triggers all stages to be synchronized and will cancel any existing synchronization process. The third line cancels any existing synchronization and only returns once it has been cancelled.
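The partitioning mechanism behind those calls is just the [Flags] enum: a bitwise-OR of stages selects which steps the framework runs. This can be seen in a self-contained sketch, independent of the synchronization framework itself:

```csharp
using System;

[Flags]
public enum SyncStages
{
    None = 0,
    Stage1 = 1,
    Stage2 = 2,
    Stage3 = 4,
    All = Stage1 | Stage2 | Stage3
}

public class Program
{
    public static void Main()
    {
        // Request a partial synchronization, as in the first call above
        var requested = SyncStages.Stage1 | SyncStages.Stage2;

        // The framework effectively tests each defined step against the requested mask
        foreach (var stage in new[] { SyncStages.Stage1, SyncStages.Stage2, SyncStages.Stage3 })
        {
            if ((requested & stage) == stage)
            {
                Console.WriteLine("Running " + stage);
            }
        }
        // Output:
        // Running Stage1
        // Running Stage2
    }
}
```

This is why each stage value must be a distinct power of two: composite values like All remain simple unions of the individual bits.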

I’ve attached a first pass at an implementation of such a sync framework. Note that the actual sync logic is in the steps shown above; it’s the framework for scheduling them and reporting progress that the sample demonstrates.

SynchronizationWrapper.zip (48.4KB)

Adding a Background Task to the Windows Platform Applications

At some point you’re likely to want to run code in the background – this might be to update live tiles, or to do periodic synchronization of data. In this post I’ll add a background task to the Windows platform applications. I’ll start by adding a new Windows Runtime Component to the solution.

image

As a Windows Runtime component there are some additional restrictions on the definition of types and the way methods are exposed. However, I can still add a reference from the background project to the Core library of the application. As the Core library encapsulates all the logic for the application, this should be sufficient to perform background operations such as synchronizing data or updating tile contents.

Next I want to add a reference from the Universal applications (Windows and Windows Phone) to the background task project. Once done, it’s time to create the actual task that will be invoked in the background. I’ll replace the default Class1.cs with SyncTask.cs, which has the following templated structure:

public sealed class SyncTask : IBackgroundTask
{
    private BackgroundTaskCancellationReason cancelReason = BackgroundTaskCancellationReason.Abort;
    private volatile bool cancelRequested = false;
    private BackgroundTaskDeferral deferral = null;
    //
    // The Run method is the entry point of a background task.
    //
    public async void Run(IBackgroundTaskInstance taskInstance)
    {
        try
        {
            Debug.WriteLine("Background " + taskInstance.Task.Name + " Starting...");

            //
            // Get the deferral object from the task instance, and take a reference to the taskInstance;
            //
            deferral = taskInstance.GetDeferral();
            //
            // Associate a cancellation handler with the background task.
            //
            taskInstance.Canceled += OnCanceled;

            //
            // Query BackgroundWorkCost
            // Guidance: If BackgroundWorkCost is high, then perform only the minimum amount
            // of work in the background task and return immediately.
            var cost = BackgroundWorkCost.CurrentBackgroundWorkCost;

            if (cost == BackgroundWorkCostValue.High)
            {
                // Only push changes
            }
            else
            {
                // Do full sync
            }
        }
        catch (Exception ex)
        {
            Debug.WriteLine(ex.Message);
        }
        finally
        {
            if (deferral != null)
            {
                deferral.Complete();
            }
        }
    }

    //
    // Handles background task cancellation.
    //
    private void OnCanceled(IBackgroundTaskInstance sender, BackgroundTaskCancellationReason reason)
    {
        //
        // Indicate that the background task is canceled.
        //
        cancelRequested = true;
        cancelReason = reason;
        Debug.WriteLine("Background " + sender.Task.Name + " Cancel Requested...");
    }
}

I also need to define this task as a background task in the Declarations section of the project manifest files. In this case we’re going to be triggering the background task based on a system event.

image

Lastly, the background task needs to be registered. For this I’m using this BackgroundTaskManager class which wraps the registration process for tasks.

public class BackgroundTaskManager
{
    /// <summary>
    /// Register a background task with the specified taskEntryPoint, name, trigger,
    /// and condition (optional).
    /// </summary>
    /// <param name="taskEntryPoint">Task entry point for the background task.</param>
    /// <param name="name">A name for the background task.</param>
    /// <param name="trigger">The trigger for the background task.</param>
    /// <param name="condition">An optional conditional event that must be true for the task to fire.</param>
    public static async Task<BackgroundTaskRegistration> RegisterBackgroundTask(String taskEntryPoint, String name, IBackgroundTrigger trigger, IBackgroundCondition condition)
    {
        BackgroundExecutionManager.RemoveAccess();
        var hasAccess = await BackgroundExecutionManager.RequestAccessAsync();
        if (hasAccess == BackgroundAccessStatus.Denied) return null;

        var builder = new BackgroundTaskBuilder();
        builder.Name = name;
        builder.TaskEntryPoint = taskEntryPoint;
        builder.SetTrigger(trigger);
        BackgroundTaskRegistration task = builder.Register();
        Debug.WriteLine(task);

        return task;
    }

    /// <summary>
    /// Unregister background tasks with specified name.
    /// </summary>
    /// <param name="name">Name of the background task to unregister.</param>
    /// <param name="cancelRunningTask">Flag that cancels or let task finish job</param>
    public static void UnregisterBackgroundTasks(string name, bool cancelRunningTask)
    {
        //
        // Loop through all background tasks and unregister any with SampleBackgroundTaskName or
        // SampleBackgroundTaskWithConditionName.
        //
        foreach (var cur in BackgroundTaskRegistration.AllTasks)
        {
            if (cur.Value.Name == name)
            {
                cur.Value.Unregister(cancelRunningTask);
            }
        }
    }
}

The actual registration is done when the application starts. In this case I’m going to cheat a little and include it in the OnNavigatedTo for the MainPage:

protected async override void OnNavigatedTo(NavigationEventArgs e)
{
    base.OnNavigatedTo(e);

    try
    {
        BackgroundTaskManager.UnregisterBackgroundTasks("Background Sync Task", true);
        BackgroundTaskRegistration taskReg =
            await BackgroundTaskManager.RegisterBackgroundTask(typeof(SyncTask).FullName,
                "Background Sync Task",
                new SystemTrigger(SystemTriggerType.InternetAvailable, false),
                null);
    }
    catch (Exception ex)
    {
        // Swallow this to ensure app doesn't crash in the case of back ground tasks not registering
        Debug.WriteLine(ex.Message);
    }
}

Notice that this task is registering interest in the InternetAvailable trigger, allowing the background task to be invoked whenever connectivity changes. Note that this process works for both Windows and Windows Phone Universal projects.

Update:

What I forgot to include here is that if you want tasks to run in the background on the Windows platform you need to make them lock screen enabled. To do this set the “Lock screen notifications” to either Badge or Badge and Tile Text.

image

This will cause a couple of red circles indicating issues with the configuration of the application. The first is that you need to specify either location, timer, control channel or push notifications in the Declarations tab – I’ve chosen to check the push notification checkbox.

image

The other requirement is a badge logo – I’ve only specified one of the three options. I’d recommend providing all three if you are serious about delivering a high quality application.

image

Update 2:

When it actually came to running this I must confess I ran into an issue with a typo. Back on the second screenshot, an observant reader would have noticed the spelling mistake in the namespace RealEstateInspector. When I attempted to register the task I saw something similar to the following:

image

Full exception details:

System.Exception occurred
  HResult=-2147221164
  Message=Class not registered (Exception from HRESULT: 0x80040154 (REGDB_E_CLASSNOTREG))

Fixing up the Client For Writing to Azure Blob Storage with Shared Access Signature

In my previous post I updated the service logic for retrieving the Shared Access Signature (SAS) to return the full Url of the blob container, including the SAS. In order for this to work I also need to update the client logic. This gets much simpler, as I can simply construct a new CloudBlockBlob by amending the Url to include the name of the blob to be written to.

private async void CaptureClick(object sender, RoutedEventArgs e)
{
    var picker = new MediaPicker();
    var sas = string.Empty;
    using (var media = await picker.PickPhotoAsync())
    using (var strm = media.GetStream())
    {
        sas = await CurrentViewModel.RetrieveSharedAccessSignature();

        // Append the image file name to the Path (this will
        // retain the SAS as it's in the query string
        var builder = new UriBuilder(sas);
        builder.Path += "/testimage" + Path.GetExtension(media.Path);
        var imageUri = builder.Uri;

        // Upload the new image as a BLOB from the stream.
        var blob = new CloudBlockBlob(imageUri);
        await blob.UploadFromStreamAsync(strm.AsInputStream());
    }
}

But we can actually do even better. What we get back is a Url, including the SAS, for the blob container, so we can use the Azure Storage library to create a CloudBlobContainer and then acquire the blob reference from there – this does the work of combining the urls for us.

private async void CaptureClick(object sender, RoutedEventArgs e)
{
    var picker = new MediaPicker();
    var sas = string.Empty;
    using (var media = await picker.PickPhotoAsync())
    using (var strm = media.GetStream())
    {
        sas = await CurrentViewModel.RetrieveSharedAccessSignature();
        var container = new CloudBlobContainer(new Uri(sas));
        var blobFromContainer = container.GetBlockBlobReference("testimage" + Path.GetExtension(media.Path));
        await blobFromContainer.UploadFromStreamAsync(strm.AsInputStream());
    }
}

Modifying the GET Request for the SharedAccessSignature Controller

In the previous post I noted that the code was pretty messy, particularly for the client code with a bunch of hardcoded literals. To fix this I’m going to encapsulate the full URL for blob storage into the server code, meaning that the client shouldn’t have to know the url of blob storage – this will make it easy to administer this in the future as things change.

It turns out that in order to make this change all I needed to do is to return the full blob container url (including the SAS) instead of just the SAS.

var ub = new UriBuilder(container.Uri.OriginalString)
{
    Query = container.GetSharedAccessSignature(sasPolicy).TrimStart('?')
};
sas =  ub.Uri.OriginalString;

The client code of course needs to be updated to handle the full Uri being passed back. Note that we didn’t include the name of the blob as part of creating the Uri; this is something the client should do. Since the SAS grants access to the whole container, the client doesn’t have to request a new SAS for each blob, only for each container it wants to write to.
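The Url mechanics of both sides can be seen with plain UriBuilder calls. In this sketch the account name and SAS query string are made-up placeholder values, not real credentials:

```csharp
using System;

public class Program
{
    public static void Main()
    {
        // Server side: attach the SAS (minus its leading '?') to the container Url
        var ub = new UriBuilder("https://myaccount.blob.core.windows.net/test")
        {
            Query = "sv=2014-02-14&sig=abc"
        };
        var containerUrl = ub.Uri.ToString();
        Console.WriteLine(containerUrl);
        // https://myaccount.blob.core.windows.net/test?sv=2014-02-14&sig=abc

        // Client side: append the blob name to the path; the SAS survives
        // because it lives in the query string, not the path
        var builder = new UriBuilder(containerUrl);
        builder.Path += "/testimage.jpg";
        Console.WriteLine(builder.Uri);
        // https://myaccount.blob.core.windows.net/test/testimage.jpg?sv=2014-02-14&sig=abc
    }
}
```

This is also why the server code above calls TrimStart('?') on the generated SAS: the UriBuilder.Query setter adds its own leading question mark.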

Saving Image to Blob Storage Using Shared Access Signature

In this post I’m going to bring together a couple of my previous posts that discuss retrieving and saving images, and retrieving a Shared Access Signature from a controller, which will allow me to write to a particular container within Blob Storage. To complete the implementation I’ll use the Windows Azure Storage library from NuGet – it only installs for Windows platforms as there’s currently no PCL or Xamarin support for this library.

image

As the Windows Azure Storage library is currently platform specific, I’ll need to wrap it in a simple interface that makes it easy for me to write data to Blob Storage – I’ll come back to that. For the time being I’m just going to retrieve the SAS and use it along with the storage library to upload an image. So I’ll start by invoking the sharedaccesssignature controller using the GET verb, as I want to ensure the container is created if it doesn’t already exist. This will return a SAS which I can use in the upload process.

public async Task<string> RetrieveSharedAccessSignature()
{
    var sas = await MobileService.InvokeApiAsync<string>("sharedaccesssignature", HttpMethod.Get,
        new Dictionary<string, string> { { "id", "test" } });
    return sas;
}

Next I want to capture an image, in this case by picking a photo, and upload it to a specified blob.

private async void CaptureClick(object sender, RoutedEventArgs e)
{
    var picker = new MediaPicker();
    var sas = string.Empty;
    using (var media = await picker.PickPhotoAsync())
    using (var strm = media.GetStream())
    {
        sas = await CurrentViewModel.RetrieveSharedAccessSignature();

        Debug.WriteLine(sas);

        // Get the URI generated that contains the SAS
        // and extract the storage credentials.
        var cred = new StorageCredentials(sas);
        var imageUri = new Uri("https://realestateinspector.blob.core.windows.net/test/testimage.png");

        // Instantiate a Blob store container based on the info in the returned item.
        var container = new CloudBlobContainer(
            new Uri(string.Format("https://{0}/{1}", imageUri.Host, "test")), cred);

        // Upload the new image as a BLOB from the stream.
        var blobFromSASCredential = container.GetBlockBlobReference("testimage.png");
        await blobFromSASCredential.UploadFromStreamAsync(strm.AsInputStream());
    }

}

Clearly this code isn’t well factored, but it’s here as a quick example of how you can use a SAS to upload content to blob storage.

Simplifying Shared Access Signature Generation with the Mobile Services ResourceBroker

In my post Storing the Big Stuff in Blob Storage I showed you how to manually create a shared access signature. The Azure Mobile Services team have done a really nice job of making this even easier with the ResourceBroker NuGet package. Getting started documentation is available via GitHub (https://github.com/Azure/azure-mobile-services-resourcebroker) and of course the package, which I’ve added to my Mobile Service project, is available via NuGet.

image

The changes I needed to make to my SharedAccessSignature controller are:

- Change my Web.config to include the ResourceBrokerStorageConnectionString appSetting – the documentation talks about adding this via the Configure panel of the mobile service portal, but you’ll need to add it to web.config for debugging. Also, the format of this string should be similar to the following:

<add key="ResourceBrokerStorageConnectionString"
     value="DefaultEndpointsProtocol=https;AccountName=realestateinspector;AccountKey=LxWu0q2UvQ7ddxXvIP3UfV4ozDkLpgaSkUxkK8--------------------------BYHTpTrAGaHjLoynH+61ng==" />

- Change the base class of the controller to ResourcesControllerBase (I needed to add an import statement to the top of the file too)

- Add routing information to the WebApiConfig.cs file (as per the documentation on GitHub)

// Create a custom route mapping the resource type into the URI.    
var resourcesRoute = config.Routes.CreateRoute(
     routeTemplate: "api/resources/{type}",
     defaults: new { controller = "resources" },
     constraints: null);

// Insert the ResourcesController route at the top of the collection to avoid conflicting with predefined routes.
config.Routes.Insert(0, "Resources", resourcesRoute);

- Initially I removed the contents of my controller, but then I realised that there are limitations on the ResourcesControllerBase (e.g. the blob container must exist, and I needed to specify an actual blob, not just a container, for access), so I kept my code and modified it to work with the new connection string.

public class SharedAccessSignatureController :  ResourcesControllerBase
{
    public async Task<string> Get(string id)
    {
        var sas = string.Empty;

        if (!string.IsNullOrEmpty(id))
        {
            // Try to get the Azure storage account token from app settings. 
            string storageAccountConnectionString;

            if (Services.Settings.TryGetValue("ResourceBrokerStorageConnectionString", out storageAccountConnectionString) )
            {
                // Set the URI for the Blob Storage service.
                var account = CloudStorageAccount.Parse(storageAccountConnectionString);
                // Create the BLOB service client.
                var blobClient = new CloudBlobClient(account.BlobStorageUri,account.Credentials);

                // Create a container, if it doesn't already exist.
                var container = blobClient.GetContainerReference(id);
                await container.CreateIfNotExistsAsync();

                // Create a shared access permission policy.
                var containerPermissions = new BlobContainerPermissions();

                // Enable anonymous read access to BLOBs.
                containerPermissions.PublicAccess = BlobContainerPublicAccessType.Blob;
                container.SetPermissions(containerPermissions);

                // Define a policy that gives write access to the container for 1h
                var sasPolicy = new SharedAccessBlobPolicy()
                {
                    SharedAccessStartTime = DateTime.UtcNow,
                    SharedAccessExpiryTime = DateTime.UtcNow.AddMinutes(59).AddSeconds(59),
                    Permissions = SharedAccessBlobPermissions.Write | SharedAccessBlobPermissions.Read
                };

                sas = container.GetSharedAccessSignature(sasPolicy);
            }
        }

        return sas;
    }
}

- To get my code to work I also had to amend the route as follows, adding the id as an optional parameter:

var resourcesRoute = config.Routes.CreateRoute(
        routeTemplate: "api/sharedaccesssignature/{type}/{id}",
        defaults: new { controller = "sharedaccesssignature", id = RouteParameter.Optional },
        constraints: null);

Calling the controller from Fiddler can be done in two ways:

GET: http://localhost:51539/api/sharedaccesssignature/blob/test

POST: http://localhost:51539/api/sharedaccesssignature/blob
Content-Type: application/json
{
    "name": "myblob",
    "container": "test",
    "permissions": "w",
    "expiry": "2015-12-01T07:34:42Z"
}
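For completeness, here's a hedged sketch of how a client application might request the signature through the Mobile Services client rather than Fiddler. The container name ("test") and the assumption that MobileService is the app's configured MobileServiceClient are illustrative:

```csharp
// Illustrative only: request a SAS for the "test" container via the
// resource broker route (api/sharedaccesssignature/blob/{id}).
// InvokeApiAsync appends the api name to /api/, so the full route segments
// can be passed as part of the name.
var sas = await MobileService.InvokeApiAsync<string>(
    "sharedaccesssignature/blob/test",
    HttpMethod.Get,
    null);
```

The returned string is the SAS token that can then be appended to the container URI when writing blobs.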

Don’t forget that if you set the AuthorizationLevel to Anonymous during debugging, make sure you change it back to User or Application before publishing.

Adding Xamarin Components via Visual Studio

Sometimes you’ll want to use components from the Xamarin Component Store. This can be done directly in Visual Studio: right-click the Components node within the iOS or Android project and select Get More Components.


Once added you’ll see the component listed in Solution Explorer under the Components node. However, Components, like NuGet packages, are located at the solution level, which makes them easy to reference from other projects.


In this case the Xamarin.Mobile component can be referenced by our Universal, Windows Phone 8.0, iOS and Android projects. This will make performing operations such as selecting or capturing photos much easier. However, be aware that there are some APIs that aren’t supported across all platforms. For example whilst you can reference the component from the WP8.1 application, MediaPicker fails as it’s not supported.

Cross Platform File Storage with PCLStorage NuGet Package

Before I can get onto using the SharedAccessSignature to upload content to blob storage I first need to generate and save content. This relies on platform specific implementations for capturing images from the camera and for accessing the file system. I’ll start with the latter, using a NuGet package called PCLStorage which has implementations for all the platforms I’m interested in. From NuGet I locate the PCLStorage package and install it into the client applications.


In my MainViewModel I’ve added the following WriteFile and ReadFile methods that for the time being will write and read a simple text file:

private string fileText;

public string FileText
{
    get { return fileText; }
    set
    {
        if (FileText == value) return;
        fileText = value;
        OnPropertyChanged();
    }
}

public async Task WriteFile()
{
    var file =
        await
            FileSystem.Current.LocalStorage.CreateFileAsync("test.txt", CreationCollisionOption.ReplaceExisting);
    using (var stream = await file.OpenAsync(FileAccess.ReadAndWrite))
    using (var writer = new StreamWriter(stream))
    {
        await writer.WriteAsync(FileText);
    }
}
public async Task ReadFile()
{
    var file =
        await
            FileSystem.Current.LocalStorage.GetFileAsync("test.txt");
    using (var stream = await file.OpenAsync(FileAccess.Read))
    using (var reader= new StreamReader(stream))
    {
        FileText = await reader.ReadToEndAsync();
    }
}

The FileText property can be databound to a TextBox in the XAML based projects allowing for user input which will be persisted between application instances. The actual location of the file is platform specific, for example on WPF this file is located at C:\Users\Nick\AppData\Local\RealEstateInspector\RealEstateInspector.Desktop\1.0.0.0\test.txt which is local data specific to both user and application. PCLStorage also defines a roaming storage option which again will be dependent on the platform implementation.
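To illustrate the roaming option, here is a minimal sketch using PCLStorage's RoamingStorage folder instead of LocalStorage. The file name is illustrative; on platforms without a roaming concept the folder simply maps to whatever the PCLStorage implementation provides:

```csharp
// Sketch: write to the roaming folder rather than local storage.
// On Windows this maps to roaming app data, which is synchronised
// across a user's devices; other platforms vary.
public async Task WriteRoamingFile(string text)
{
    var file = await FileSystem.Current.RoamingStorage.CreateFileAsync(
        "settings.txt", CreationCollisionOption.ReplaceExisting);
    using (var stream = await file.OpenAsync(FileAccess.ReadAndWrite))
    using (var writer = new StreamWriter(stream))
    {
        await writer.WriteAsync(text);
    }
}
```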

Storing the Big Stuff in Blob Storage

Often mobile applications need to store and retrieve large object data, typically photos and video. This is where Azure blob storage comes into play. In order to write into blob storage you need an access key. However, you’d never distribute an actual access key to a mobile application, even temporarily – if someone gets hold of the access key they have access to everything stored in your blob storage. Luckily blob storage has the notion of shared access signatures, which you can think of as short term access passes to blob storage. These are typically created using a full access key and as such this operation is done service side.

I’m going to create a dedicated API just for granting shared access signatures to specific containers (which you can think of as single level folders). In this case the containers will be created with public read access on the contents – since the list of blobs per container will be protected and clients will still need a shared access signature in order to write to containers, this should be ample security for a large proportion of application scenarios.

I’ll start off by creating a new Storage area within the Azure management portal.


Once created you’ll need to record the storage account name (realestateinspector) and the storage account access key. Add these values into the appSettings section of the web.config file for the Azure Mobile Service

<appSettings>

  <add key="STORAGE_ACCOUNT_NAME"
       value="realestateinspector" />
  <add key="STORAGE_ACCOUNT_ACCESS_KEY"
       value="LxWu0q2UvQ7ddxXvIP3UfV4ozDkLpgaSkUx------------------------------------33WBYHTpTrAGaHjLoynH+61ng==" />

I’ll create a new controller in my Azure Mobile Service based on the Custom Controller item template


The bulk of this api controller sits within a single GET operation:

[AuthorizeLevel(AuthorizationLevel.User)]
public class SharedAccessSignatureController : ApiController
{
    public ApiServices Services { get; set; }

    public async Task<string> Get(string containerToAccess)
    {
        var sas = string.Empty;

        if (!string.IsNullOrEmpty(containerToAccess))
        {
            // Try to get the Azure storage account token from app settings. 
            string storageAccountName;
            string storageAccountKey;

            if (Services.Settings.TryGetValue("STORAGE_ACCOUNT_NAME", out storageAccountName) &&
                Services.Settings.TryGetValue("STORAGE_ACCOUNT_ACCESS_KEY", out storageAccountKey))
            {
                // Set the URI for the Blob Storage service.
                var blobEndpoint = new Uri(string.Format("https://{0}.blob.core.windows.net", storageAccountName));

                // Create the BLOB service client.
                var blobClient = new CloudBlobClient(blobEndpoint, new StorageCredentials(storageAccountName, storageAccountKey));

                // Create a container, if it doesn't already exist.
                var container = blobClient.GetContainerReference(containerToAccess);
                await container.CreateIfNotExistsAsync();

                // Create a shared access permission policy.
                var containerPermissions = new BlobContainerPermissions();

                // Enable anonymous read access to BLOBs.
                containerPermissions.PublicAccess = BlobContainerPublicAccessType.Blob;
                container.SetPermissions(containerPermissions);

                // Define a policy that gives write access to the container for 1h
                var sasPolicy = new SharedAccessBlobPolicy()
                {
                    SharedAccessStartTime = DateTime.UtcNow,
                    SharedAccessExpiryTime = DateTime.UtcNow.AddMinutes(59).AddSeconds(59),
                    Permissions = SharedAccessBlobPermissions.Write | SharedAccessBlobPermissions.Read
                };

                sas = container.GetSharedAccessSignature(sasPolicy);
            }
        }

        return sas;
    }
}

If you change the AuthorizationLevel to Anonymous you can run up this controller and use Fiddler to generate the shared access signature by invoking a GET on eg http://localhost:51539/api/SharedAccessSignature/test, where test is the name of the container we’re requesting access to. If you want to check that the container has been created and the appropriate security set, you can use CloudBerry Explorer for Azure Blob Storage.


After entering credentials you can immediately see the folders in your blob storage which will in this case have the container “test” which was created when I made the request to the SharedAccessSignature service.


You can also use Fiddler to prepare and launch a query – don’t forget to switch the AuthorizationLevel back to User before deploying your services otherwise anyone will be able to access content from your blob storage.
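To show why the SAS is all the client needs, here is a hedged sketch of writing a blob using only the signature returned by the controller. The account and container names come from this post; the sas variable and blob name are illustrative, and exact method names may vary slightly between versions of the Azure Storage client library:

```csharp
// Sketch: the client holds only the SAS (eg "?sv=...&sig=..."), never the
// account key. Appending the SAS to the container URI authorises the write.
var container = new CloudBlobContainer(
    new Uri("https://realestateinspector.blob.core.windows.net/test" + sas));
var blob = container.GetBlockBlobReference("myblob.txt");
await blob.UploadTextAsync("hello blob storage");
```

When the signature expires (after the hour defined in the policy) the client simply requests a new one from the service.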

ViewModel Refactoring, INotifyPropertyChanged and Running on UI Context with Dispatcher

In a previous post I showed how to use SignalR as one option for providing feedback to the client application during a long running service operation. However, I didn’t display this on the UI because that would have made the blog post long and complicated as I tried to explain propagating a change from a non-UI thread back onto the UI thread so it could be displayed within the app. In this post I’m going to do just that – what’s interesting is that each platform handles this scenario slightly differently, with some caring about thread affinity whilst others do not.

Firstly, I’ll update the layout to include a TextBlock that’s going to display the progress:

<TextBlock Text="{Binding Progress}" />

In this case databinding to the Progress property on the MainViewModel:

private string progress;
public string Progress
{
    get { return progress; }
    set
    {
        if (Progress == value) return;
        progress = value;
        OnPropertyChanged();
    }
}

Note that as this property changes it calls the OnPropertyChanged method which will be used to raise the PropertyChanged event specified in the INotifyPropertyChanged interface – this is what the XAML data binding framework uses to detect when bound properties are changed. In this case I’m going to implement this interface in BaseViewModel, and have MainViewModel inherit from BaseViewModel, rather than implement it in every view model.

public class MainViewModel:BaseViewModel  { … }

public class BaseViewModel:INotifyPropertyChanged
{
    public event PropertyChangedEventHandler PropertyChanged;

    protected virtual void OnPropertyChanged([CallerMemberName] string propertyName = null)
    {
        PropertyChangedEventHandler handler = PropertyChanged;
        if (handler != null) handler(this, new PropertyChangedEventArgs(propertyName));

    }
}

Now, let’s modify the delegate that gets invoked when we receive an update from the service. Previously we just had Debug.WriteLine(msg); let’s change that to Progress = msg. When you run this in your Universal Windows application – BANG – an exception due to an attempt to update the UI (ie the Text attribute on the TextBlock) from a non-UI thread. Interestingly doing the same thing in the WPF application doesn’t throw the same exception. In order to update the Progress property, we first need to jump back onto the UI thread, which would be fine if we were working directly with the Page/View. However, we don’t have any notion of a Dispatcher, UI threads etc from within the MainViewModel. This sounds like another scenario for a dependency injected implementation, so here goes:

My BaseViewModel is extended to include a UIContext object:

private readonly UIContext context = new UIContext();

public UIContext UIContext
{
    get { return context; }
}

The UIContext object wraps and abstracts away the loading of an implementation of IUIContext:

public interface IUIContext
{
    Task RunOnUIThreadAsync(Func<Task> action);
}

public class UIContext
{
    private IUIContext runContext;
    private IUIContext RunContext
    {
        get
        {
            if (runContext == null)
            {
                runContext = ServiceLocator.Current.GetInstance<IUIContext>();

            }
            return runContext;
        }
    }

    public async Task RunAsync(Action action)
    {
        await RunAsync(async () => action());
    }

    public async Task RunAsync(Func<Task> action)
    {
        var context = RunContext;
        await context.RunOnUIThreadAsync(action);
    }
}

Of course, we now need an implementation of IUIContext for our Universal Windows application:

public class UniversalUIContext : IUIContext
{
    public async Task RunOnUIThreadAsync(Func<Task> action)
    {
        await CoreApplication.MainView.CoreWindow.Dispatcher.RunAsync(CoreDispatcherPriority.Normal,async  () => await action());
    }
}
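Although WPF didn't throw in this scenario, an equivalent implementation can be registered there too for consistency. This is a minimal sketch (not from the original post) using the WPF application Dispatcher; note that awaiting InvokeAsync with an async lambda awaits the dispatch of the delegate rather than the completion of the inner task, which is adequate for simple property updates:

```csharp
// Hypothetical WPF counterpart to UniversalUIContext: marshal the action
// onto the UI thread via Application.Current.Dispatcher.
public class WpfUIContext : IUIContext
{
    public async Task RunOnUIThreadAsync(Func<Task> action)
    {
        await Application.Current.Dispatcher.InvokeAsync(async () => await action());
    }
}
```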

And this needs to be registered with Autofac in the ClientApplicationCore – note the use of the NETFX_CORE compilation attribute.

CoreApplication.Startup(builder =>
{
    builder.RegisterType<SignalRFactory>().As<ISignalR>();
#if NETFX_CORE
    builder.RegisterType<UniversalUIContext>().As<IUIContext>();
#endif
});

And finally I need to go back to my initial progress statement and update it from msg => Progress = msg, to the following:

async msg =>
    await UIContext.RunAsync(() =>
    {
        Progress = msg;
    })

This issue isn’t specific to Universal applications – you should always make sure you’re on the UI thread when making UI changes (such as updating data bound properties).

A Simple ViewModelLocator for Spawning ViewModels for XAML Based Applications

There are numerous frameworks out there that provide mechanisms for instantiating view models. Long ago, when I first started building XAML based applications and became familiar with MVVM, I stepped through a number of different ways of creating and wiring up view models. In this post I’m going to show a very basic implementation of a locator to instantiate view models. Into the Core library I will add a ViewModelLocator class which exposes a property, Main, that will return a new instance of the MainViewModel.

public class ViewModelLocator
{
    public MainViewModel Main
    {
        get { return CreateViewModel<MainViewModel>(); }
    }

    private T CreateViewModel<T>() where T:new()
    {
        return new T();
    }
}

I’m going to want a single instance of this class to be created and kept around for the duration of my application’s lifecycle. One option would be to instantiate it within my ApplicationCore class I introduced previously. However, I actually want the instance of the ViewModelLocator to be accessible via XAML, so for this reason it makes more sense to instantiate it as a XAML resource. In the WPF and Universal (Win/WP8.1) applications I can simply add this to the Application.Resources element of the app.xaml file.

<Application.Resources>
        <core:ViewModelLocator x:Key="Locator" />
</Application.Resources>

In the MainWindow (WPF) and MainPage (Universal) I can now specify the DataContext in the opening element, removing the need to create the MainViewModel in code in the codebehind file. Eg

<Window x:Class="RealEstateInspector.Desktop.MainWindow"
        xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
        xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
        DataContext="{Binding Main, Source={StaticResource Locator}}" >

The current implementation of Xamarin.Forms doesn’t seem to support creating Application resources in XAML. However, by tweaking the App constructor, I can add an instance of the ViewModelLocator:

public App()
{
    Resources = new ResourceDictionary();
    Resources.Add("Locator", new ViewModelLocator());

    MainPage =new MainPage();
}

The syntax in the MainPage of the XForms application is similar except BindingContext replaces DataContext:

BindingContext="{Binding Main, Source={StaticResource Locator}}"

There seems to be a quirk in XForms at the moment in that the Binding expression calls to the Main property repeatedly – since it gets a different instance of the MainViewModel back each time, it ends up in an endless loop trying to get a consistent value. To prevent this, we can add a view model dictionary to the ViewModelLocator and change the behaviour to always return the same instance of the view model.

private readonly Dictionary<Type, object> viewModels = new Dictionary<Type, object>();

private T CreateViewModel<T>() where T:new()
{
    var type = typeof (T);
    object existing;
    if (!viewModels.TryGetValue(type, out existing))
    {
        existing = new T();
        viewModels[type] = existing;
    }
    return (T)existing;
}

And there you have it – view model location across all applications.

Refactoring SignalR Feedback for Cross Platform with AutoFac and CommonServiceLocator for Simple Dependency Injection

At this point I’m starting to think about “real application” problems such as how services/components/view models are located and how navigation between view models is going to work. For anyone following this series of posts I currently have Universal (Windows/Windows Phone), Xamarin Forms (iOS (broken), Android and Windows Phone 8.0) and WPF clients. So far I’ve been covering how to handle accessing data (Mobile Services) and authentication (Azure Active Directory) and to a lesser degree data binding (XAML, Data binding and View Models) but I haven’t looked at how the client applications are going to be architected. There are some core issues which need to be solved in a general way so that it will work across all client applications, such as how view models are located (created and supplied to corresponding view) and how navigation can be triggered within a view model that will correlate to a page/view change in the UI layer. This issue I’ll leave to another day as it generally involves either picking an existing framework (MvvmCross, Mvvmlight etc) or rolling your own – and I’m still undecided on what to do.

What I do know is that in order to refactor my SignalR implementation, which is currently done within the WPF Window codebehind, so that it will work on all platforms I’m going to need to define a set of interfaces in the Core library (PCL) which is implemented by each platform. I should be able to reuse the same code on all platforms for the implementation so the code can go into the Shared.Client project – it just needs to reference the platform specific implementation of SignalR.

Before we get onto the refactoring, I’m going to add a couple of NuGet packages which will give us an IoC that we can call on to resolve dependencies. I was initially going to jump forward and include MvvmLight but I went looking for the navigation support and was a little disappointed – I’ll come back and do a full evaluation when I get to dealing with navigation but for now I’ve opted to go for Autofac. However, just in case I want to change IoC provider at a later stage I’ve also opted to include the CommonServiceLocator so as to provide a bit of an abstraction from any given library when it comes to resolving dependencies.

I’ve added the following NuGet packages to the Core and all client projects

Autofac

CommonServiceLocator

Autofac Extras: Microsoft Common Service Locator

Both Autofac and the CommonServiceLocator require logic to be run on application startup. There is both platform independent and platform specific code to be run. For this reason I’ve added the ClientApplicationCore class to the Shared.Client project and the ApplicationCore class to the Core library, with the following implementation:

public class ClientApplicationCore
{
    public static ClientApplicationCore Default { get; set; }

    static ClientApplicationCore()
    {
        Default=new ClientApplicationCore();
    }

    public ClientApplicationCore()
    {
        CoreApplication=new ApplicationCore();
    }

    public ApplicationCore CoreApplication { get; set; }

    public void ApplicationStartup()
    {
        CoreApplication.Startup(builder => builder.RegisterType<SignalRFactory>().As<ISignalR>());
    }
}

 

public class ApplicationCore
{
    public void Startup(Action<ContainerBuilder> dependencyBuilder)
    {
        var builder = new ContainerBuilder();
        dependencyBuilder(builder);
        // Perform registrations and build the container.
        var container = builder.Build();

        // Set the service locator to an AutofacServiceLocator.
        var csl = new AutofacServiceLocator(container);
        ServiceLocator.SetLocatorProvider(() => csl);
    }
}
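Each client then just needs to invoke the shared startup logic early in its lifecycle, which builds the Autofac container and wires up the CommonServiceLocator. A minimal sketch, assuming it is called from each platform's App constructor (or equivalent entry point):

```csharp
// Sketch: kick off the shared startup from the platform specific App class.
public App()
{
    // Builds the container and registers shared dependencies such as ISignalR.
    ClientApplicationCore.Default.ApplicationStartup();

    // ... platform specific initialisation continues here
}
```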

You’ll notice that at this stage all I’m registering is SignalRFactory as an implementation of the ISignalR interface. This interface is one of two that I’ve defined within the Core library to assist with the refactored SignalR support:

public interface ISignalR
{
    Task<ICommunicationHub> Connect<THub>(IMobileServiceClient mobileService);
}

public interface ICommunicationHub:IDisposable
{
    string ConnectionId { get; }

    IDisposable Register<TMessageType>(string eventName, Action<TMessageType> handler);
}

The implementation is placed in the Shared.Client project so that it is reused across all client platforms:

internal class SignalRFactory : ISignalR
{
    public async Task<ICommunicationHub> Connect<THub>(IMobileServiceClient mobileService)
    {
        var hubConnection = new HubConnection(mobileService.ApplicationUri.AbsoluteUri);

        hubConnection.Headers["x-zumo-application"] = mobileService.ApplicationKey;

        IHubProxy proxy = hubConnection.CreateHubProxy(typeof(THub).Name);
        await hubConnection.Start();

        return new CommunicationHub { Connection = hubConnection, Hub = proxy };
    }
}

internal class CommunicationHub : ICommunicationHub
{
    public string ConnectionId { get { return Connection.ConnectionId; } }
    public HubConnection Connection { get; set; }
    public IHubProxy Hub { get; set; }
    public IDisposable Register<TMessageType>(string eventName, Action<TMessageType> handler)
    {
        return Hub.On(eventName, handler);
    }

    public void Dispose()
    {
        if (Connection != null)
        {
            Connection.Dispose();
            Connection = null;
            Hub = null;
        }

    }
}

The implementation of the GenerateReport method on the MainViewModel can now be refactored to look up the ISignalR implementation:

public async void GenerateReport()
{
    string message;
    try
    {
        var signalr = ServiceLocator.Current.GetInstance<ISignalR>();
        var hub= await signalr.Connect<LongRunningFeedbackHub>(MobileService);
        hub.Register<string>("Progress", msg => Debug.WriteLine(msg));

        var result = await MobileService.InvokeApiAsync<string>("Reporter", HttpMethod.Get, new Dictionary<string, string>{{"id", hub.ConnectionId}});
        message = result;
    }
    catch (MobileServiceInvalidOperationException ex)
    {
        message = ex.Message;
    }
    Debug.WriteLine(message);
}

And that’s it – this should work without modification on each of the client platforms….

Long Running Azure Mobile Service Call With Feedback using SignalR

I’ve been thinking a little more about alternative mechanisms for sending feedback to the client applications when a long running mobile service call is executing. In my previous post on Long Running Custom API Calls in Azure Mobile Service I discussed returning immediately to the client application whilst continuing to process a long running task. Unfortunately this means there is no way to provide feedback. In this post I’m going to add in SignalR to provide that real time communications link.


I’ll define a class that inherits from Hub and exposes a Services property – this will be populated automatically by the Mobile Service.

public class LongRunningFeedbackHub : Hub
{
    public ApiServices Services { get; set; }
}

Next I’ll amend the Get service method to a) take a parameter (which is the connection id of signalr on the client) and b) provide feedback during the long running task via the hub object. Note the use of the dynamic Progress method which will correlate to the client side proxy method that it subscribes to.

public async Task<string> Get(string id)
{
    var host = new ReportHost();
    host.DoWork(async (cancel) =>
    {
        try
        {
            var hub = Services.GetRealtime<LongRunningFeedbackHub>();

            var max = 5;
            for (int i = 0; i < max; i++)
            {
                await Task.Delay(TimeSpan.FromSeconds(5), cancel);

                hub.Clients.
                    Client(id)
                    .Progress(string.Format("{0}% complete", 100 * i / max));
            }
        }
        catch (Exception ex)
        {
            // Don't bubble the exception - do something sensible here!
            Debug.WriteLine(ex.Message);
        }
    });
    Services.Log.Info("Hello from custom controller!");
    return "Hello";
}

I added the initialization logic for the SignalR to the Register method of the WebApiConfig.cs file, as well as providing an explicit route to support the id parameter

config.Routes.MapHttpRoute(
    name: "DefaultApi",
    routeTemplate: "api/{controller}/{action}/{id}",
    defaults: new { action = RouteParameter.Optional, id = RouteParameter.Optional }
);

SignalRExtensionConfig.Initialize();

On the client side I add the SignalR .NET Client library from NuGet


And then add a ConnectToSignalR method to establish the hub connection and return a connectionId, which will then be passed into the view model via the GenerateReport method.

private async void GenerateReportClick(object sender, RoutedEventArgs e)
{
    var connectionId=await  ConnectToSignalR();
    CurrentViewModel.GenerateReport(connectionId);
}

private async Task<string> ConnectToSignalR()
{
    var hubConnection = new HubConnection(MainViewModel.MobileService.ApplicationUri.AbsoluteUri);
    //if (user != null)
    //{
    //    hubConnection.Headers["x-zumo-auth"] = user.MobileServiceAuthenticationToken;
    //}
    //else
    //{
    hubConnection.Headers["x-zumo-application"] = MainViewModel.MobileService.ApplicationKey;
    //}
    IHubProxy proxy = hubConnection.CreateHubProxy("LongRunningFeedbackHub");
    await hubConnection.Start();

    //string result = await proxy.Invoke<string>("Send", "Hello World!");
    //var invokeDialog = new MessageDialog(result);
    //await invokeDialog.ShowAsync();

    proxy.On<string>("Progress",
        msg => Debug.WriteLine(msg));

    return hubConnection.ConnectionId;
}

The only change to the GenerateReport method is for it to accept an id parameter and for this parameter to be passed into the custom api

var result = await MobileService.InvokeApiAsync<string>("Reporter", HttpMethod.Get, new Dictionary<string, string>{{"id",connectionId}});

When this is run and the GenerateReport method is invoked, the current percentage complete is passed back to the client and appears in the handler for the Progress message.

Long Running Custom API Calls in Azure Mobile Service

As I pointed out in my previous post, a common scenario for custom apis in a Mobile Service is to hand off tasks that aren’t easily done on a mobile device, or are better suited to being done server side (eg report creation). Quite often these tasks can take longer than the default timeout of most service requests (typically 60 seconds), which means the mobile app ends up raising an exception – not very useful. A better approach is to queue the work somehow and then periodically check on its progress. In this post I’m just going to demonstrate one way to allow the service to respond immediately, whilst continuing to carry out the task in the background. Note that the ReportHost is based on the JobHost class described in Phil Haack’s post on the dangers of background tasks.

public async Task<string> Get()
{
    var host = new ReportHost();
    host.DoWork(async (cancel) =>
    {
        try
        {
            await Task.Delay(TimeSpan.FromMinutes(2), cancel);
        }
        catch (Exception ex)
        {
            // Don't bubble the exception - do something sensible here!
            Debug.WriteLine(ex.Message);
        }
    });
    Services.Log.Info("Hello from custom controller!");
    return "Hello";
}

And the ReportHost class

public class ReportHost : IRegisteredObject
{
    private readonly ManualResetEvent reportLock = new ManualResetEvent(false);
    private readonly CancellationTokenSource cancellation=new CancellationTokenSource();

    public ReportHost()
    {
        HostingEnvironment.RegisterObject(this);
    }

    public void Stop(bool immediate)
    {
        cancellation.Cancel();
        reportLock.WaitOne();
        HostingEnvironment.UnregisterObject(this);
    }

    public void DoWork(Func<CancellationToken,Task> work)
    {
        Task.Run(async () =>
        {
            await work(cancellation.Token);
            reportLock.Set();
        });
    }
}

I haven’t shown any client side code for the time being because it remains the same (although it won’t time out now!). The next step is to provide some way that the client can check on the progress of the work item.

Invoking a Custom API in Azure Mobile Service

The full scenario is that we have a task that needs to be performed by the Mobile Service that might take a while to complete. The first step is to define a custom api which will invoke the task (alternatively you could hijack a table controller to launch the task as part of one of the CRUD actions) and to have this called from the client applications. However, this alone is not sufficient for long running tasks as the call to the service may timeout before the task completes. I’ll come back to that in a future post but for now, let’s look at creating a custom api.

The first step is to add a new controller based on the Microsoft Azure Mobile Services Custom Controller template.


I’ll give the new controller a name


For the time being the only change I’ll make is to include the AuthorizeLevel and AuthorizeInspector attributes to enforce the security policies required for accessing our services:

[AuthorizeLevel(AuthorizationLevel.User)]
[AuthorizeInspector]
public class ReporterController : ApiController
{
    public ApiServices Services { get; set; }

    // GET api/Reporter
    public async Task<string> Get()
    {
        Services.Log.Info("Hello from custom controller!");
        return "Hello";
    }

}

Invoking this from the client can easily be done from within the MainViewModel:

public async void GenerateReport()
{
    string message;
    try
    {
        var result = await MobileService.InvokeApiAsync<string>("Reporter", HttpMethod.Get, null);
        message = result;
    }
    catch (MobileServiceInvalidOperationException ex)
    {
        message = ex.Message;
    }
    Debug.WriteLine(message);
}

Easily done – a simple API that we can invoke within our Mobile Service to do work. Note that in this case it’s a Get request with no parameters and a simple string return type. We can adjust this to be a Post, accept parameters and return a complex object by adjusting both the controller method definition (ie change Get to Post, or even just add a Post method) and the InvokeApiAsync call.
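As a sketch of those adjustments (the ReportRequest and ReportResponse types here are hypothetical, not from the original project), the controller might look like:

```csharp
public class ReportRequest
{
    public string PropertyId { get; set; }
}

public class ReportResponse
{
    public string Status { get; set; }
}

[AuthorizeLevel(AuthorizationLevel.User)]
[AuthorizeInspector]
public class ReporterController : ApiController
{
    public ApiServices Services { get; set; }

    // POST api/Reporter – accepts a JSON body and returns a complex object
    public ReportResponse Post(ReportRequest request)
    {
        Services.Log.Info("Report requested for " + request.PropertyId);
        return new ReportResponse { Status = "Queued" };
    }
}
```

On the client, the two-type generic overload of InvokeApiAsync serialises the body and deserialises the response (this overload defaults to a Post):

```csharp
var response = await MobileService.InvokeApiAsync<ReportRequest, ReportResponse>(
    "Reporter", new ReportRequest { PropertyId = "1234" });
Debug.WriteLine(response.Status);
```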

Handling MobileServiceConflictException with Azure Mobile Service with Offline Sync

Whenever you do offline sync, there is a risk of conflicts emerging between client and server updates. Of course the data architecture should be designed to minimise this (eg guid-based primary keys) but this won’t always eliminate the issue. One example of where this might happen is if the client gets cut off midway through pushing changes to the server. For example, I might insert a new record on the client and push the changes, but the network connection is terminated before the changes have been confirmed back to the client. The server now has the new record but the client thinks it still needs to send the record – when it does, a conflict arises where both server and client have records with the same primary key. Let’s replicate this and then look at solving it.

I’ll create an AddProperty method in my MainViewModel:

public async Task AddProperty()
{
    var table=MobileService.GetSyncTable<RealEstateProperty>();
    var prop = new RealEstateProperty
    {
        Address = "New Random Property"
    };
    await table.InsertAsync(prop);
    await MobileService.SyncContext.PushAsync();
}

Run this, and insert a breakpoint after the InsertAsync but before the PushAsync. At this point inspect the prop object and retrieve the Id. Next, using either SQL Server Management Studio or Visual Studio 2015, connect to the SQL Server instance and run the following query (replacing the Id with the one retrieved in the previous step).

insert into realestateinspector.RealEstateProperties (Id,Address,Deleted) Select '0a1f8994-4a4b-4548-921a-4da0186b3f6c','Not created on client',0

Now, if I let the PushAsync continue it will fail, causing an exception to be raised.


There are a couple of places where this can be handled. The first is where the call to PushAsync is made – this isn’t great, as pushing to the remote service won’t necessarily happen at this point. For example you might insert a record but not push immediately; in this case, when you next issue a pull request the push will be done prior to doing the pull. A better way to handle it is to supply a custom MobileServiceSyncHandler as part of the initialization of the sync context:

 

await MobileService.SyncContext.InitializeAsync(data, new CustomMobileServiceSyncHandler());

The sync handler could look like the following (this is very basic and simply drops any conflicting local changes):

public class CustomMobileServiceSyncHandler : MobileServiceSyncHandler
{
    public async override Task<JObject> ExecuteTableOperationAsync(IMobileServiceTableOperation operation)
    {
        try
        {
            return await base.ExecuteTableOperationAsync(operation);
        }
        catch (MobileServiceConflictException cex)
        {
            Debug.WriteLine(cex.Message);
            throw;
        }
    }

    public override async Task OnPushCompleteAsync(MobileServicePushCompletionResult result)
    {
        foreach (var error in result.Errors)
        {
            if (error.Status == HttpStatusCode.Conflict)
            {
                // Cancel the pending local operation and adopt the server's copy.
                await error.CancelAndUpdateItemAsync(error.Result);
                error.Handled = true;
            }
        }
        await base.OnPushCompleteAsync(result);
    }
}
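If adopting the server's copy isn't the right policy for your app, the error object exposes other resolutions. For instance (a sketch based on the offline sync SDK's MobileServiceTableOperationError API), the conflicting local record could be discarded entirely:

```csharp
public override async Task OnPushCompleteAsync(MobileServicePushCompletionResult result)
{
    foreach (var error in result.Errors)
    {
        if (error.Status == HttpStatusCode.Conflict)
        {
            // Cancel the pending operation and delete the local copy of the record.
            await error.CancelAndDiscardItemAsync();
            error.Handled = true;
        }
    }
    await base.OnPushCompleteAsync(result);
}
```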

Restricting Access to Azure Mobile Service Based on Azure Active Directory Group

In my previous post I controlled access to the GetAll method on my base controller by determining whether the authenticated (from AAD) user was a member of the Inspectors AAD group. This is actually quite a slow process and not something you really want to do every session. Ideally I’d like this check to be done once when the user authenticates against the mobile service (which happens after they authenticate against AAD) and for the IsInspector claim to be added to the user identity. Unfortunately for the life of me I can’t work out how to force OWIN into accepting an additional claim – I’m sure there’s a way, but I ended up settling for an alternative approach.

My approach actually improves on two aspects over what I was previously doing. The first is that I implement the checking logic as an attribute which can then be applied to the root controller. The second is that by storing a cookie in the response, I can reduce the need to re-query AAD for the group membership. This solution is based on a couple of great blog posts:

http://www.acupofcode.com/2014/04/general-roles-based-access-control-in-the-net-backend/

http://www.acupofcode.com/2014/03/roles-based-access-control-in-mobile-services-and-azure-active-directory/

The AuthorizeInspector attribute looks as follows:

[AttributeUsage(AttributeTargets.Class | AttributeTargets.Method, AllowMultiple = false, Inherited = true)]
public class AuthorizeInspector : AuthorizationFilterAttribute
{
    public static ActiveDirectoryClient RetrieveActiveDirectoryClient(string token)
    {
        var baseServiceUri = new Uri(Microsoft.Azure.ActiveDirectory.GraphClient.Constants.ResourceId);
        var activeDirectoryClient =
            new ActiveDirectoryClient(new Uri(baseServiceUri, Constants.ADTenant),
                async () => token);
        return activeDirectoryClient;
    }

    public async override Task OnAuthorizationAsync(HttpActionContext actionContext, CancellationToken cancellationToken)
    {
        await base.OnAuthorizationAsync(actionContext, cancellationToken);

        var cookie = HttpContext.Current.Request.Cookies["IsInspector"];
        var isInspector = cookie != null ? cookie.Value : null;
        if (isInspector != null)
        {
            if (!(bool.Parse(isInspector)))
            {
                actionContext.Response = new HttpResponseMessage(HttpStatusCode.Forbidden);
            }
            return;
        }

        var controller = actionContext.ControllerContext.Controller as ApiController;
        if (controller == null)
        {
            return;
        }
        var user = controller.User as ServiceUser;

        var aadCreds = (await user.GetIdentitiesAsync()).OfType<AzureActiveDirectoryCredentials>().FirstOrDefault();
        Debug.WriteLine(aadCreds.AccessToken);

        var token = actionContext.Request.Headers.GetValues(Constants.RefreshTokenHeaderKey)
            .FirstOrDefault();

        var auth = new AuthenticationContext(Constants.ADAuthority, false);
        var newToken = await auth.AcquireTokenByRefreshTokenAsync(token,
                Constants.ADNativeClientApplicationClientId, "https://graph.windows.net");

        var client = RetrieveActiveDirectoryClient(newToken.AccessToken);
        var grps = await client.Groups.ExecuteAsync();
        var moreGroups = grps.CurrentPage;

        while (moreGroups != null)
        {
            foreach (var grp in grps.CurrentPage)
            {
                if (grp.DisplayName == "Inspectors")
                {
                    if ((await client.IsMemberOfAsync(grp.ObjectId, aadCreds.ObjectId)) ?? false)
                    {
                        HttpContext.Current.Response.Cookies.Add(new HttpCookie("IsInspector", true.ToString()));

                        return;
                    }
                }
            }
            if (grps.MorePagesAvailable)
            {
                grps = await grps.GetNextPageAsync();
                moreGroups = grps.CurrentPage;
            }
            else
            {
                grps = null;
                moreGroups = null;
            }
        }
        HttpContext.Current.Response.Cookies.Add(new HttpCookie("IsInspector", false.ToString()));
    }
}

As you can see this follows roughly the same logic for querying AAD group membership. However, this time I’m adding a cookie based on whether the user is an Inspector or not. This attribute can now be applied to the RealEstateBaseTableController.

[AuthorizeInspector]
public class RealEstateBaseTableController<TEntity> : TableController<TEntity>
    where TEntity : class, ITableData
{

One thing to be aware of is that this cookie will persist even if the user logs out. As such, we need some way of associating the cookie with the current user session. It may be that an additional cookie is used to associate the access token with the IsInspector cookie. For example:

public override async Task OnAuthorizationAsync(HttpActionContext actionContext,
    CancellationToken cancellationToken)
{
    await base.OnAuthorizationAsync(actionContext, cancellationToken);

    var controller = actionContext.ControllerContext.Controller as ApiController;
    if (controller == null)
    {
        return;
    }
    var user = controller.User as ServiceUser;

    var aadCreds = (await user.GetIdentitiesAsync()).OfType<AzureActiveDirectoryCredentials>().FirstOrDefault();
    Debug.WriteLine(aadCreds.AccessToken);

    var cookie = HttpContext.Current.Request.Cookies["IsInspector"];
    var isInspector = cookie != null ? cookie.Value : null;
    var accessTokenCookie = HttpContext.Current.Request.Cookies["IsInspectorAccessToken"];
    var access_token = accessTokenCookie != null ? accessTokenCookie.Value : null;
    if (isInspector != null && access_token == aadCreds.AccessToken)
    {
        if (!(bool.Parse(isInspector)))
        {
            actionContext.Response = new HttpResponseMessage(HttpStatusCode.Forbidden);
        }
        return;
    }

    var token = actionContext.Request.Headers.GetValues(Constants.RefreshTokenHeaderKey)
        .FirstOrDefault();

    var auth = new AuthenticationContext(Constants.ADAuthority, false);
    var newToken = await auth.AcquireTokenByRefreshTokenAsync(token,
        Constants.ADNativeClientApplicationClientId, "https://graph.windows.net");

    var client = RetrieveActiveDirectoryClient(newToken.AccessToken);
    var grps = await client.Groups.ExecuteAsync();
    var moreGroups = grps.CurrentPage;

    try
    {
        while (moreGroups != null)
        {
            foreach (var grp in grps.CurrentPage)
            {
                if (grp.DisplayName == "Inspectors")
                {
                    if ((await client.IsMemberOfAsync(grp.ObjectId, aadCreds.ObjectId)) ?? false)
                    {
                        HttpContext.Current.Response.Cookies.Add(new HttpCookie("IsInspector", true.ToString()));

                        return;
                    }
                }
            }
            if (grps.MorePagesAvailable)
            {
                grps = await grps.GetNextPageAsync();
                moreGroups = grps.CurrentPage;
            }
            else
            {
                grps = null;
                moreGroups = null;
            }
        }
        HttpContext.Current.Response.Cookies.Add(new HttpCookie("IsInspector", false.ToString()));
    }
    finally
    {
        HttpContext.Current.Response.Cookies.Add(new HttpCookie("IsInspectorAccessToken", aadCreds.AccessToken));

    }
}