Nick's .NET Travels

Continually looking for the yellow brick road so I can catch me a wizard....

Call out to the ADAL team! – Authenticate Using External Browser

In my post, Authorizing Access to Resources using Azure Active Directory, I talk about authenticating using the built-in browser on the device, rather than authenticating via a webview, which is all too common. Unfortunately, despite this being fully supported by Azure Active Directory, the team responsible for ADAL haven't, as far as I can tell, provided support for using an external browser to authenticate.

I was super impressed when I downloaded the Facebook app on Windows recently and saw that it supports "Log in with Browser".


In my opinion, this not only represents a more secure form of authentication (since I can validate the website I’m signing into), it is also a better experience, since I’m already logged into Facebook in the browser anyhow.

I definitely encourage developers to consider using the external browser, rather than supporting SDKs and libraries that use the in-app browser.

UseWindowsAzureActiveDirectoryBearerAuthentication vs UseJwtBearerAuthentication for Authorization with Azure Active Directory for an ASP.NET Web API

In my previous post, Securing a Web API using Azure Active Directory and OWIN, I covered how to authorize requests against Azure Active Directory using the UseWindowsAzureActiveDirectoryBearerAuthentication extension method in the OWIN startup class. This extension method has been designed specifically for Azure Active Directory but if you think about it, the authorization token is just a JWT, so in theory you could take a much more generic approach to authorizing access by validating the JWT yourself. This can be done using the UseJwtBearerAuthentication extension method.

There are a couple of steps to using the UseJwtBearerAuthentication extension method. Firstly, in order to validate the signature of the JWT, we’re going to need the public certificate that matches the key identifier contained in the JWT. In my post on Verifying Azure Active Directory JWT Tokens I cover how to examine the JWT using https://jwt.io in order to retrieve the kid, retrieve the openid configuration, locate the jwks uri, retrieve the keys and save out the key as a certificate. In the post I used the certificate (ie wrapping the raw key in ---BEGIN---, ---END--- markers) to validate the JWT; in this case I’ve copied the contents into a text file which I’ve named azure.cer and added it to the root of my web project (making sure the build action is set to Content so it is deployed with the website).
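For reference, the contents of azure.cer end up being just the base64 encoded key data wrapped in the standard certificate markers, something along these lines (key data elided):

-----BEGIN CERTIFICATE-----
MIIC...(base64 key data from the x5c value in the jwks document)...
-----END CERTIFICATE-----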

The next thing to do is to remove the UseWindowsAzureActiveDirectoryBearerAuthentication extension method, replacing it with the following code.

// Namespaces needed: System.Configuration, System.Security.Cryptography.X509Certificates,
// System.Web.Hosting and Microsoft.Owin.Security.Jwt
var fileName = HostingEnvironment.MapPath("~/") + "azure.cer";
var cert = new X509Certificate2(fileName);
app.UseJwtBearerAuthentication(new JwtBearerAuthenticationOptions
{
    // Audience and issuer are read from web.config (see below)
    AllowedAudiences = new[] {ConfigurationManager.AppSettings["ida:Audience"]},
    IssuerSecurityTokenProviders = new IIssuerSecurityTokenProvider[]
    {
        new X509CertificateSecurityTokenProvider(ConfigurationManager.AppSettings["ida:IssuerName"], cert)
    }
});

This code uses the azure.cer certificate file combined with the Audience and IssuerName which I’ve added to the web.config.

<add key="ida:Audience" value="a07aa09e-21b9-4e86-b269-a18903b5fe54" />
<add key="ida:IssuerName" value="https://sts.windows.net/55cc17b5-7d2a-418e-86a6-277c54462485/" />

The Audience is the application id (aka client id) of the Azure application registration. The IssuerName needs to match what appears in the JWT. Opening one of the tokens in https://jwt.io, it's the iss value that you want to use as the IssuerName.
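For example, the decoded payload of a token issued for this application would include claims along these lines (values match the configuration above; other claims omitted):

{
  "aud": "a07aa09e-21b9-4e86-b269-a18903b5fe54",
  "iss": "https://sts.windows.net/55cc17b5-7d2a-418e-86a6-277c54462485/",
  ...
}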

Now you can run the project and see that again the requests are validated to ensure they’re correctly signed.

Securing a Web API using Azure Active Directory and OWIN

In this post we’re going to look at how to use Azure Active Directory to secure a web api built using ASP.NET (full framework – we’ll come back to .NET Core in a future post). To get started I’m going to create a very vanilla web project using Visual Studio 2017. At this point VS2017 is still in RC and so you’ll get slightly different behaviour than what you’ll get using the Visual Studio 2015 templates. In actual fact the VS2015 templates seem to provide more in the way of out of the box support for OWIN. I ran into issues recently when I hadn’t realised what VS2015 was adding for me behind the scenes, so in this post I’ll endeavour not to assume anything or skip any steps along the way.


After creating the project, the first thing I always do is run it and make sure the project has been correctly created from the template. In the case of a web application, I also take note of the startup url, in this case http://localhost:39063/. However, at this point I also realised that I should do the rest of this post following some semblance of best practice and do everything over SSL. Luckily, recent enhancements to IIS Express make it simple to configure and support SSL with minimal fuss. In fact, all you need to do is select the web project node and press F4 (note, going to Properties in the shortcut menu brings up the main project properties pane, which is not what you're after) to bring up the Properties window. At the bottom of the list of properties are the SSL Enabled and SSL URL properties, the latter being https://localhost:44331/. Take note of this url as we'll need it in a minute.


To set up the Web API to authorize requests, I'm going to create a new application registration in Azure Active Directory. This time I need to select Web app / API from the Application Type dropdown. I'll give it a Name (that will be shown in the Azure portal and when signing in to use this resource) and I'll enter the SSL address as the Sign-on URL. This URL will also be listed as one of the redirect URIs used during the sign in process. During debugging you can opt to do this over HTTP but I would discourage this as it's no longer required.


After creating the application, take note of the Application Id of the newly created application. This is often referred to as the client id and will be used when authenticating a user for access to the web api.

Application Id (aka Client Id): a07aa09e-21b9-4e86-b269-a18903b5fe54

We're done for the moment with Azure Active Directory, so let's turn to the web application we recently created. The authorization process for in-bound requests involves extracting the Authorization header and processing the bearer token to determine if the calling party should have access to the services. In order to do this for tokens issued by Azure AD I'll add references to both the Microsoft.Owin.Security.ActiveDirectory and Microsoft.Owin.Host.SystemWeb packages.


Note: Adding these references takes a while! Make sure they’re completely finished before attempting to continue.

Depending on the project template, you may, or may not, already have a Startup.cs file in your project. If you don't, add a new item based on the OWIN Startup class template.


The code for this class should be kept relatively simple:

using Microsoft.Owin;
using Owin;

[assembly: OwinStartup(typeof(SampleWebApp.Startup))]
namespace SampleWebApp
{
    public partial class Startup
    {
        public void Configuration(IAppBuilder app)
        {
            ConfigureAuth(app);
        }
    }
}

Additionally, you’ll want to add another partial class file Startup.Auth.cs in the App_Start folder.

using Owin;

namespace SampleWebApp
{
    public partial class Startup
    {
        public void ConfigureAuth(IAppBuilder app)
        {
        }
    }
}

And now we get to adding the middleware that will be used to process the authorization header:

// Requires the System.Configuration and Microsoft.Owin.Security.ActiveDirectory namespaces
public void ConfigureAuth(IAppBuilder app)
{
    app.UseWindowsAzureActiveDirectoryBearerAuthentication(
        new WindowsAzureActiveDirectoryBearerAuthenticationOptions
        {
            // Tenant and audience are read from web.config (see below)
            Tenant = ConfigurationManager.AppSettings["ida:Tenant"],
            TokenValidationParameters = new System.IdentityModel.Tokens.TokenValidationParameters
            {
                ValidAudience = ConfigurationManager.AppSettings["ida:Audience"]
            }
        });
}

This uses the configuration manager to extract the Tenant and Audience settings from the web.config (and subsequently the Azure portal settings when you publish to the cloud):

<add key="ida:Audience" value="a07aa09e-21b9-4e86-b269-a18903b5fe54" />
<add key="ida:Tenant" value="nicksdemodir.onmicrosoft.com" />

The tenant is the Id, or in this case, the domain of the tenant where the application is registered. The Audience is the application Id of the application registered in Azure AD.

Warning: If you run the application now and get an error relating to a missing type, you may have to revert the Microsoft.Owin.Security.ActiveDirectory to the most recent v4 package. At the time of writing this post there seems to be an incompatibility between v5 and Owin.

Reference to type 'TokenValidationParameters' claims it is defined in 'System.IdentityModel.Tokens.Jwt', but it could not be found

Ok, we're ready to try making requests. I'm going to use Fiddler but you can use any other tool that's able to generate and send HTTP requests. The first attempt will be a GET request to https://localhost:44331/api/values which is one of the default controllers that was created from the project template. Depending on what your project template included, the ValuesController may, or may not, have the Authorize attribute applied to it. If, like me, you didn't have the Authorize attribute applied to the ValuesController, you should get a valid response back to your HTTP request. In this case, you're going to want to add security to the ValuesController by adding the Authorize attribute:

[Authorize]
public class ValuesController : ApiController
{

Now, try making the request again – you should now get a 401 Unauthorized error. The body of the response should say:

{"Message":"Authorization has been denied for this request."}

Clearly this is the case since we didn’t send the Authorization header. This time, let’s add an Authorization header, with the word “Bearer” and a token consisting of random text:

Authorization: Bearer abcdefg
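In Fiddler's Composer the full request looks something like this:

GET https://localhost:44331/api/values HTTP/1.1
Authorization: Bearer abcdefg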

This should generate the same response. However, let's start to look into this further. To get more diagnostic information, add the following to the web.config file for the project:

<system.diagnostics>
  <switches>
    <add name="Microsoft.Owin" value="Verbose" />
  </switches>
</system.diagnostics>

Now when you make the request you should see more diagnostic information in the Output window in Visual Studio:

Microsoft.Owin.Security.OAuth.OAuthBearerAuthenticationMiddleware Error: 0 : Authentication failed
System.ArgumentException: IDX10708: 'System.IdentityModel.Tokens.JwtSecurityTokenHandler' cannot read this string: 'abcdefg'.
The string needs to be in compact JSON format, which is of the form: '<Base64UrlEncodedHeader>.<Base64UrlEncodedPayload>.<OPTIONAL, Base64UrlEncodedSignature>'.

As we should have predicted, the token we passed in isn't a valid JWT – it's not even valid JSON. Let's fix this by generating an actual access token for this Web API. In a previous post I manually walked through the process of authenticating, retrieving an authorization code and then exchanging it for an access token. I'll do the same here.

First I’m going to launch an authorization url in the browser and sign in using credentials from the nicksdemodir.onmicrosoft.com tenant:

https://login.microsoftonline.com/nicksdemodir.onmicrosoft.com/oauth2/authorize?client_id=a07aa09e-21b9-4e86-b269-a18903b5fe54&response_type=code&redirect_uri=https://localhost:44331/

The authorization url is made up of various components:

nicksdemodir.onmicrosoft.com – This is the domain name of the tenant where the web application is registered with Azure AD. You can also use the tenant Id (guid format)

a07aa09e-21b9-4e86-b269-a18903b5fe54 – This is the application id of the application registration in Azure AD

code – This indicates that the response should be an authorization code

https://localhost:44331/  - This is the uri that the browser will be redirected back to, passing with it the code in the query string.

Make sure you have the web application running, otherwise the redirect uri won't resolve and it may be hard to extract the code from the query string (depending on the browser). After signing in, you'll be redirected back to your web application with a URL similar to (the code has been shortened for brevity):

https://localhost:44331/?code=zvrs_zz0…….05B_ggAA&session_state=ef2986b8-75bd-484a-b9b9-68f0e46ab569

The next thing to do is to prepare a POST request in your http tool of choice with the following:

URL: https://login.microsoftonline.com/nicksdemodir.onmicrosoft.com/oauth2/token

Body: grant_type=authorization_code&client_id=a07aa09e-21b9-4e86-b269-a18903b5fe54&client_secret=c06kP0Q9ENGpZGbiZTqB1QQaZUWNe190mCittRMr&redirect_uri=https://localhost:44331/&code=zvrs_zz0…….05B_ggAA&resource=a07aa09e-21b9-4e86-b269-a18903b5fe54

The Body parameters are broken down as:

a07aa09e-21b9-4e86-b269-a18903b5fe54 – This is the application id of the application registration in Azure AD. It’s required as both the client_id and the resource, since we’re using the access token to access the web application itself.

c06kP0Q9ENGpZGbiZTqB1QQaZUWNe190mCittRMr – This is a private key (aka client secret) issued by the Azure AD application to ensure the security of token requests. I’ll come back to this in a second and show how to create one for your application.

https://localhost:44331/ – The redirect uri for the application – required here to verify the calling party as it has to align with what’s in Azure AD

zvrs_zz0…….05B_ggAA – This is the authorization code returned in the previous step

To generate the client secret in Azure AD, simply click on the Keys tab within the details of the application registration. You can then create a new key by entering a description. The description is only seen by you, so give it a name that's meaningful to you. Note that once you save the new key, you will only be shown the value of the key once. Once you leave the page, the value of the key will never be shown again.


The key created in Azure AD should be used as the client secret when doing the authorization code to access token exchange.
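If you'd rather script this exchange than compose the POST by hand, here's a minimal HttpClient sketch of the same request (the ExchangeCodeForTokenAsync method name is mine; the endpoint and parameters are exactly those listed above):

using System.Collections.Generic;
using System.Net.Http;
using System.Threading.Tasks;

public static async Task<string> ExchangeCodeForTokenAsync(string code, string clientSecret)
{
    using (var client = new HttpClient())
    {
        // Same parameters as the POST body described above
        var body = new FormUrlEncodedContent(new Dictionary<string, string>
        {
            {"grant_type", "authorization_code"},
            {"client_id", "a07aa09e-21b9-4e86-b269-a18903b5fe54"},
            {"client_secret", clientSecret},
            {"redirect_uri", "https://localhost:44331/"},
            {"code", code},
            {"resource", "a07aa09e-21b9-4e86-b269-a18903b5fe54"}
        });
        var response = await client.PostAsync(
            "https://login.microsoftonline.com/nicksdemodir.onmicrosoft.com/oauth2/token", body);
        // The JSON response includes the access_token (and refresh_token) values
        return await response.Content.ReadAsStringAsync();
    }
}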

The response to this POST should return JSON which includes an access_token value. Add the access token to the authorization header:

Authorization: Bearer G1ZsPGjPF6qJO8Sd5HctnKqNk_8KDc-………Lpy9P8sDWdECziihaPWyseug9hgD119keoZuh4B

This should give you a 200 response with data. And there you have it – you’ve successfully secured your web api so that it requires the user to be authenticated using Azure Active Directory.

Improving the Azure Active Directory Sign-on Experience

I was talking to a customer the other day and had to log into the Azure portal. Normally when I launch the portal I'm already signed in and I'm not prompted, but for whatever reason this time I was prompted to authenticate. Doing this in front of the customer led to three interesting discussions:

- Use of two factor authentication to secure sign in
- Separate global administrator account for primary organisation tenant
- Company branding for Azure AD sign in

Firstly, the use of two factor authentication (TFA) is a must for anyone who is using the Azure portal – if you are an administrator of your organisation, please make sure you enforce this requirement for anyone accessing your tenant/directory/subscription. This applies to staff, contractors, guests etc who might be using your Azure portal or the Office 365 portal. In fact, in this day and age, I would be enforcing two factor authentication for all employees – note that Outlook and Skype for Business are still stuck in the dark-ages and don't support TFA sign in. For these you'll need to generate an application password (go to https://myapps.microsoft.com, click on your profile image in the top right corner and select "Profile", click through to "Additional security verification", click on the "app passwords" tab and then click "Create" to generate an app password).

Ok, next is the use of a separate global administrator account – this is in part tied to the previous point about using TFA. If you’re a global administrator of your tenant and you enable TFA, you won’t be able to generate app passwords. This is essentially forcing you down the path of best practice, which is to have a separate account which is the global administrator for your tenant. If other people in your organisation need administrative permissions, you can do this on a user or role basis within the Azure portal – our preference is to assign permissions to a resource group but there is enough fidelity within the portal to control access at the level you desire.

The other thing we’ve also enforced is that we do not host any Azure resources in our primary tenant (ie in our case builttoroam.com). Given the importance of Office365 based services we felt it important that we isolate off any resources we create in Azure to make sure they’re completely independent of our primary tenant. The only exception to this is if we are building internal LOB applications (ie only apps for Built to Roam use) – for these we include the app registrations within the builttoroam.com tenant so that we can restrict sign in and at the same time deliver a great sign in experience for our employees. For example we’re using Facebook Workplace (https://workplace.fb.com/) – we configured this within the builttoroam.com tenant in Azure AD to allow for a SSO experience.

Now, onto the point of this post – the last thing that came out of signing into the portal in front of the customer was that they were startled when our company branding appeared during sign in. To illustrate, when you first land on the portal sign in page you see the standard, unbranded sign in page.


After entering my email address, the sign in page changes to incorporate the Built to Roam branding.


This not only improves the perception (for internal and external users), it also gives everyone a sense of confidence that they’re signing into a legitimate Built to Roam service.

In order to set this up, you need to navigate to the Active Directory node in the Azure portal and click on the Company branding tab. If you're using Office 365 you should already have access to this tab. However, if you're not, you may need to sign up for Active Directory Premium – you can get started using the Premium trial.


Once you've opened the Company branding tab (if you have just activated the trial, you may need to wait a few minutes and/or sign out and back in again in order for the Company branding tab to open), you can click on the link to "Configure company branding now".


There are a number of options and images that you can configure.


After saving the changes, if you attempt to sign in, you’ll notice the new images/colours etc appear. In this case, you can see that the welcome text at the bottom of the sign in page has been changed to what I entered in the company branding tab. Unfortunately because I didn’t set the sign in page image, the default is used, so you can’t see the red (#FF0000) background I set – you can see glimpses of it if you resize the page. This can be fixed by simply uploading a transparent image.


The ability to customise the sign in experience is just one way to improve the experience for your staff and customers.

Useful OAuth, OpenID Connect, Azure Active Directory and Google Authentication Links

Over the past couple of weeks I've been assisting with the development work of an enterprise system that uses both Azure Active Directory (Azure AD) and Google to authenticate users. It's a cross platform solution which means we need code that works across both authentication platforms, and the three mobile platforms. Unfortunately this is easier said than done – the Azure AD team have done a reasonable job with the ADAL library but it's not like we can repurpose that library for authenticating against Google. This is a tad annoying since Azure AD and Google both use OAuth and OpenID Connect, so you'd expect there to be a good library that would work across both.

In trying to find a workable solution I came across a number of links that I want to bookmark here for future reference:

OAuth 2

Home - https://oauth.net/2/

The OAuth home page is a good starting point if you want to get more links and information about OAuth (1 and 2) but I actually found its main use for me was to point at the OAuth 2.0 Framework RFC.

OAuth 2.0 Framework RFC - https://tools.ietf.org/html/rfc6749

You can think of the OAuth 2.0 Framework RFC as being the specification for OAuth 2.0. There are some extensions and other standards that relate to OAuth 2.0 but this is a must read if you want to understand what OAuth 2.0 is all about. You may need to refer back to this when reading other blogs/tutorials as it can help clarify what each of the roles and responsibilities are in the process.

Simple overview of OAuth 2 - https://aaronparecki.com/2012/07/29/2/oauth2-simplified

This overview provides a quick summary of the various flows for OAuth 2.0. However, I disagree with the use of the implicit workflow for mobile applications. Whilst mobile applications are not “trusted,” which would normally imply the use of the implicit workflow, the reality is that the implicit workflow can’t issue refresh tokens. This means that unless you want your users to have to log in each time they use your mobile application, you need to use the Authorization Code workflow (the client secret shouldn’t be required when requesting access tokens for mobile apps – this depends on which authentication provider you’re using).

 

OpenID Connect

Home - http://openid.net/connect/

The OpenID Connect home page is again a good starting point as it links to the many different parts of the OpenID Connect standard. OpenID Connect builds on top of OAuth 2.0 in order to provide a mechanism for users to be authenticated as well as authorized for resource access. In addition to the creation of access tokens, OpenID Connect defines an id_token which can be issued in absence of any resource that is just used to identify the user that has authenticated.

OpenID Connect Core 1.0 - http://openid.net/specs/openid-connect-core-1_0.html

This is the core specification of OpenID Connect. Similar to the specification for OAuth, this is worth both a read and to be used as a reference when working with OpenID Connect implementations.

OpenID Connect Session Management 1.0 - http://openid.net/specs/openid-connect-session-1_0.html

Whilst still in draft, this standard covers how implementers are supposed to handle log out scenarios, which is useful as your application can't simply delete its access tokens when a user opts to log out. Ideally when a user logs out, you'd want to make sure both cached tokens are cleared, along with invalidating any access or refresh tokens.

 

Google

OAuth 2.0 Overview - https://developers.google.com/identity/protocols/OAuth2

OpenID Connect - https://developers.google.com/identity/protocols/OpenIDConnect

Google’s documentation isn’t too bad but does require you to read all of the pages as the OAuth and OpenID Connect implementation details seem to be scattered across the pages. The assumption is that for any given type of application you can simply read the one page – unfortunately, if you want to get an understanding of the Google implementation, you really need to read everything. Authenticating/authorizing with Google is significantly simpler than with Azure AD as there is no notion of linking your application registration with specific permissions to other applications registered with Azure AD. This is a significant limitation of using Google sign in, as you can really only use it to authenticate and then use the token to access various Google APIs.

 

Azure Active Directory

Azure AD Developer’s Guide - https://docs.microsoft.com/en-au/azure/active-directory/develop/active-directory-developers-guide

Authentication Scenarios for Azure AD - https://docs.microsoft.com/en-au/azure/active-directory/develop/active-directory-authentication-scenarios

Azure AD is a much longer read, and it's very easy to get lost in the world of application configuration and settings. My recommendation is to start with something simple, and then grow from that. For example, start by authenticating a user to sign into your mobile app, then extend it so that you can use the access token to connect to a Web API, then retrieve information from other Microsoft services within the Web API, and then perhaps make it all multi-tenanted (that's one for another post!).

Microsoft Partner, Office 365 and Visual Studio Online

This is a somewhat off-topic post which only applies to Microsoft Partners who happen to use Office 365 and Visual Studio Online. As a Microsoft Silver Partner we have a number of licenses which we’re able to use internally. This includes a number of MSDN subscriptions which can be assigned to staff and Office 365 seats.

Over the last couple of weeks we have migrated across to Office 365 (this was actually done over a 24 hour period as it was relatively simple to move from one hosted exchange account to Office 365). One of the awesome things about Office 365 is that users belong to the Azure AD tenant (ie we have builttoroam.com set up as a tenant and all Office 365 users with an email address of [email protected] belong to that tenant).

We’ve been using Azure for development for a long time but as Azure AD is still in its infancy we were originally just adding staff in based on their Microsoft Account. We also did the same for Visual Studio Online. All in all this was quite messy and felt very disconnected. Well today we took the plunge and connected Visual Studio Online to our Azure AD account. To make sure the transition went smoothly we have temporarily added the Microsoft Account for each staff member to our Azure AD tenant – this will mean that in the short term they can continue to access VSO using their Microsoft Account. Going forward we would expect that everyone will start to use their workplace account (ie their [email protected] Azure AD account) to sign into both the Azure portal and Visual Studio Online.

Here's where the issue is. Currently in Visual Studio 2013 you can only sign in using a Microsoft Account (this is fixed in Visual Studio 2015 where you can both sign in with a workplace account, and you can have multiple accounts signed in). The upshot is that the staff will have to sign into VS2013 with their Microsoft Account, and then connect to VSO using their workplace account – this seems simple enough, right? Wrong! Well, partially right. Firstly, we still need to assign the MSDN subscriptions to the staff. After being assigned a license, the staff simply go to MSDN and activate the subscription. This gives them a license to use Visual Studio 2013 which is automatically picked up when they sign in using their Microsoft Account.

The issue is that if they then go to Visual Studio 2015 and sign in with their workplace account, they won’t have access to the MSDN subscription (since it’s associated with their workplace account). The solution is to go to their MSDN subscription details (signing into msdn.microsoft.com with their Microsoft Account) and under the Visual Studio Online section, click the Link workplace account option and enter their workplace email address. After doing this, Visual Studio 2015 will pick up the linked MSDN subscription and license, as well as picking up the Visual Studio Online projects in the Team Explorer window.

The upshot is that after doing all this, staff should be able to sign into Visual Studio Online, Office 365, Azure Portal and Visual Studio 2015 using their workplace account. It’s only Visual Studio 2013 where they will have to sign in with their Microsoft Account, and then connect to VSO using their workplace account.

Fix Login Failures to Azure Mobile Services with Enhanced Users Features for Javascript Backends

One of the applications that we've been working on, which leverages the older javascript backend version of Azure Mobile Services, was seeing some cases where users couldn't log in. For a while we put it down to incidental issues with our application but we then detected a pattern where it was related to Microsoft Accounts that belonged to one of the domains associated with outlook.com (eg live.com, outlook.com, outlook.com.au, hotmail.com). I don't know exactly which domains it does affect, nor whether it was all email addresses on those domains. However, what I do know is that in order to fix the issue we had to follow the instructions that Carlos had posted a while ago on the Enhanced Users Feature. We hadn't done this previously as I wasn't confident that it wouldn't break our production application but it turns out the only change we needed to make, as per his instructions, was to use the asynchronous getIdentities method (which is also an update we were supposed to do a while ago!).

Source Code for Real Estate Inspector Sample on GitHub

I've got around to publishing the current source code pieces to GitHub. It's currently very hotch-potch as I've been focussing on demonstrating/fleshing out a lot of the concepts for my blog posts. Over the coming weeks I'll be extending out the actual functionality and will periodically update the code on GitHub. For now, it's a good place to pick through the various code I've been talking about over the last couple of months.

Using a Refresh Token to Renew an Expired Access Token for Azure Active Directory

Currently my application attempts to acquire the access token silently which equates to looking to see if there is a current (ie not expired) token in the token cache. However, tokens don’t live for very long, so it’s quite likely that a token won’t be found. This unfortunately leads to a poor user experience as the user will quite often be prompted to sign in. There is an alternative, which is to use the refresh token, returned as part of initially acquiring the access token, to silently request a new access token. This of course is on the assumption that the refresh token hasn’t expired.

Here is a quick summary, as at the time of writing, of the different tokens and their expiry rules (a good explanation here):

  • Azure AD access tokens expire in 1 hour (see the expires_on attribute that is returned when acquiring an access token).
  • Refresh tokens expire in 14 days (see the refresh_token_expires_in attribute that is returned when acquiring an access token).
  • Access tokens can be refreshed using the refresh token for a maximum of 90 days from the date the access token was originally acquired by prompting the user.

The authentication logic can be amended to retrieve the list of cached tokens, attempt to acquire a token silently, followed by an attempt to acquire a token via the refresh token. Failing that, the user would be prompted to sign in.

var authContext = new AuthenticationContext(Configuration.Current.ADAuthority);

// Look in the token cache for a token matching this client id and resource
var tokens = authContext.Tokens();
var existing = (from t in tokens
                where t.ClientId == Configuration.Current.ADNativeClientApplicationClientId &&
                      t.Resource == Configuration.Current.MobileServiceAppIdUri
                select t).FirstOrDefault();
if (existing != null)
{
    // First attempt: acquire the token silently (ie from the cache, no prompt)
    try
    {
        var res = await authContext.AcquireTokenSilentAsync(
            Configuration.Current.MobileServiceAppIdUri,
            Configuration.Current.ADNativeClientApplicationClientId);
        if (res != null && !string.IsNullOrWhiteSpace(res.AccessToken))
        {
            return res.AccessToken;
        }
    }
    catch (Exception saex)
    {
        Debug.WriteLine(saex);
    }

    // Second attempt: use the refresh token to silently acquire a new access token
    try
    {
        var res = await
            authContext.AcquireTokenByRefreshTokenAsync(existing.RefreshToken,
                Configuration.Current.ADNativeClientApplicationClientId);
        if (res != null && !string.IsNullOrWhiteSpace(res.AccessToken))
        {
            return res.AccessToken;
        }
    }
    catch (Exception saex)
    {
        Debug.WriteLine(saex);
    }
}

Azure Active Directory with Mobile Services without Prompting Every Time the Application Starts

Currently, every time the application is run the user is prompted to sign into Azure Active Directory, and then the AD issued token is then used to login to Azure Mobile Service. Not only is this a pain for the user (for example if they’ve only just been in the application, to have to sign in again feels somewhat unnecessary), it also adds latency on startup as well as preventing the application from running when offline. In the next couple of posts I’ll look at a couple of techniques to consider in order to improve this sign on experience.

Firstly, it’s worth noting that there was an update posted for the Azure Active Directory Authentication library (ADAL) on NuGet – it’s still prerelease but worth updating to if you’re using v3 of the library. More info on NuGet, here.

One of the nice things about ADAL is that it provides a cache for tokens. In addition to being able to query what tokens are in the cache (for example in order to then login to the Mobile Service) it also wraps the check to determine if a token is still valid. To do this, I can call AcquireTokenSilentAsync to authenticate silently, ie without prompting the user. If a valid access token is found in the token cache it will be returned. In the case that no valid token is found, an exception is raised and I then need to invoke AcquireTokenAsync as I did previously.

var authContext = new AuthenticationContext(Configuration.Current.ADAuthority);
try
{
    var res = await authContext.AcquireTokenSilentAsync(
        Configuration.Current.MobileServiceAppIdUri,
        Configuration.Current.ADNativeClientApplicationClientId);
    if (res != null && !string.IsNullOrWhiteSpace(res.AccessToken))
    {
        return res.AccessToken;
    }
}
catch (Exception saex)
{
    Debug.WriteLine(saex);
}
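If the silent call fails, the fallback is the interactive AcquireTokenAsync call from the earlier post, something along these lines (the ADRedirectUri configuration property here is an assumption on my part, and the PlatformParameters constructor varies between platforms in ADAL v3):

try
{
    // Fallback: prompt the user to sign in (sketch only - adjust the
    // PlatformParameters for your target platform)
    var res = await authContext.AcquireTokenAsync(
        Configuration.Current.MobileServiceAppIdUri,
        Configuration.Current.ADNativeClientApplicationClientId,
        new Uri(Configuration.Current.ADRedirectUri),
        new PlatformParameters(PromptBehavior.Auto));
    return res.AccessToken;
}
catch (Exception ex)
{
    Debug.WriteLine(ex);
}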

As Windows Phone 8.0 isn't supported yet by v3 of ADAL, I also need to update my custom implementation of the AuthenticationContext. Firstly, to add a static list of previously acquired tokens:

private static readonly List<AuthenticationResult> Tokens = new List<AuthenticationResult>();  

Next, I need to update my AuthenticationResult class to decode more than just the access and refresh tokens:

public class AuthenticationResult
{
    private static DateTime epoch = new DateTime(1970, 1, 1, 0, 0, 0, DateTimeKind.Utc).ToLocalTime();

    public string ClientId { get; set; }

    [JsonProperty("access_token")]
    public string AccessToken { get; set; }

    [JsonProperty("refresh_token")]
    public string RefreshToken { get; set; }

    [JsonProperty("resource")]
    public string Resource { get; set; }

    [JsonProperty("expires_on")]
    public long ExpiresOnSeconds { get; set; }

    public DateTime ExpiresOn
    {
        get { return epoch.AddSeconds(ExpiresOnSeconds); }
    }

    private long refreshTokenExpiresInSeconds;
    [JsonProperty("refresh_token_expires_in")]
    public long RefreshTokenExpiresInSeconds
    {
        get { return refreshTokenExpiresInSeconds; }
        set
        {
            refreshTokenExpiresInSeconds = value;
            RefreshTokensExpiresOn = DateTime.Now.AddSeconds(refreshTokenExpiresInSeconds);
        }
    }

    public DateTime RefreshTokensExpiresOn { get; private set; }
}

At the end of the AcquireTokenAsync method I need to set the ClientId property on the AuthenticationResult (so I know which AAD client id it was returned for as this isn’t returned in the response) and add the result to the Tokens list:

var result = JsonConvert.DeserializeObject<AuthenticationResult>(await data.Content.ReadAsStringAsync());
result.ClientId = ClientId;
Tokens.Add(result);
return result;

Finally I need to implement the AcquireTokenSilentAsync method – although it doesn't require async/Task, I've kept the method signature consistent with ADAL to avoid conditional code when calling the method.

public async Task<AuthenticationResult> AcquireTokenSilentAsync(
    string resource,
    string clientId)
{
    // Return a cached token for this client id and resource, provided it hasn't expired
    var result = (from t in Tokens
                  where t.ClientId == clientId &&
                        t.Resource == resource &&
                        t.ExpiresOn > DateTime.Now
                  select t).FirstOrDefault();

    return result;
}

Note that this implementation doesn’t persist the access token beyond the current session. However, it will avoid the need to reauthenticate if the user does happen to do something that would otherwise require authentication.

Custom Domains for Azure Mobile Services

As part of packaging a cloud based solution, one of the tasks is to change the configuration of the services so that they have an application specific domain. In the case of Azure websites this feature has been available for quite a while in the form of custom domains. However, it was only recently that this capability was added to Azure Mobile Services. This enables me to change the Mobile Service url from https://realestateinspector.azure-mobile.net to https://realestate.builttoroam.com. This capability is only available to Mobile Services running in Standard mode, which can be quite a costly commitment if custom domains are the only reason to upgrade.

Here’s a quick run through of setting up a custom domain. Note that this doesn’t include setting up SSL for your custom domain, which is highly recommended. There is more information here that includes using wildcard SSL certificates, which might be useful if you are packaging multiple services (eg Mobile Service and a Website) off the same base domain.

The first thing to do is to setup a CName record (alternatively you can setup an A record using these instructions) – this needs to be done with the name service that hosts the DNS records for your domain.
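Using the names from this post as an example, the record would look something like the following (exact syntax varies between DNS providers):

realestate.builttoroam.com    CNAME    realestateinspector.azure-mobile.net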


If you simply try to browse to the new URL you’ll see quite a useful 404 message. The first option is exactly the scenario I now face – I have to configure the Mobile Service to know about the custom domain.


Currently there is no UI in the Azure portal for managing custom domains for Mobile Services, unlike for Azure Websites where it can all be configured in the portal. Instead, I need to use the Azure CLI. Before doing this, make sure you are using v0.8.15 or higher (v0.8.15 is current at time of writing). Note that I ran into some issues upgrading the Azure CLI – docs online suggest using npm (eg npm update azure-cli, or npm update azure-cli -g depending on whether you installed the azure-cli globally or not). However, I found that this wasn't working – the output suggested it had updated to 0.8.15 but when I queried azure -v I saw an earlier version. Turns out that I'd installed the azure-cli via the Web Platform Installer – in this case you either need to uninstall the azure-cli via the platform installer, or simply install the new version via the platform installer (which is what I did).
Adding a custom domain is then relatively straightforward: azure mobile domain add <mobileservicename> <customdomain>.
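For example, using the custom domain from earlier in this post (and assuming the Mobile Service is named realestateinspector):

azure mobile domain add realestateinspector realestate.builttoroam.com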


Now when you browse to the new url you see the typical Mobile Service status homepage.


When I run my client applications I need to update the Mobile Service Client URL to point to the new url. I can then see in Fiddler that the traffic is indeed going to the new custom domain.


Database Migrations with Package Manager Console and Azure Mobile Services

I was caught out recently after I published an incorrect database migration to my cloud based Azure Mobile Service (I created a second controller based on the RealEstateProperty entity instead of the PropertyType entity). The upshot is that I only noticed this when the entities I was synchronizing down from the cloud came back with null for most of their properties. Initially I thought my issue was with the migration I had performed on the database, so I thought I'd roll back to a previous version. My most recent migration was "201502260615263_Added proeprty type entity" and I wanted to roll it back to the previous migration, "201501061158375_AddedInspections". To do this you can simply call the update-database method in the Package Manager Console:

update-database -TargetMigration "201501061158375_AddedInspections"

However, I wanted to invoke this only against the database for the Mobile Service running in the cloud. To do this I need to add the -ConnectionString and -ConnectionProviderName attributes. The latter is easy as it needs to be the static value "System.Data.SqlClient" but the former requires two steps:

- In the Azure Management Portal go to the SQL Databases tab and then select the database that correlates to the Mobile Service. With the database selected, click "Manage" from the toolbar – this will prompt you to add a firewall rule allowing access from your computer (this only happens the first time, or again if your ip address changes). You need to add this firewall rule as Visual Studio will be attaching directly to the database to run the code-first migration on the database.


- From the Dashboard pane of the SQL Server database, select Connection Strings from the right link menu, and copy the contents of the ADO.NET connection string.


Now I can add the connection string to the update-database method:

update-database -TargetMigration "201501061158375_AddedInspections" -ConnectionString "Server=tcp:p7zzqmjcmf.database.windows.net,1433;Database=realestateinspector;User ID={my username};Password={your_password_here};Trusted_Connection=False;Encrypt=True;Connection Timeout=30;" -ConnectionProviderName "System.Data.SqlClient"

I checked that this had removed the PropertyType table (which was part of the migration I just reversed) and then removed the old migration file, “201502260615263_Added proeprty type entity.cs”, and then regenerated the new migration by calling add-migration again:

add-migration 'Added proeprty type entity'

Given that the Mobile Service itself hadn’t changed at that point I figured that I’d simply call update-database without the TargetMigration parameter but with the ConnectionString that points to my actual Mobile Service. This seemed to go ok but then when I ran my Mobile Service and attempted to synchronize my PropertyType entities – this caused an exception because I’d discovered the root of my issue, which was I had two controllers both referencing the RealEstateProperty entity. I fixed that and republished my Mobile Service. Now synchronization worked, but mainly because there were no entities in the PropertyType table in the database, so I then attempted to add a PropertyType using the direct access (rather than synchronizing entities) in the MobileServiceClient (using GetTable instead of GetSyncTable) – this caused some weird exception as it seemed to require that the CreatedAt property be set. I’ve never had to do this on previous inserts, so I sensed something was wrong. Using the Visual Studio 2015 CTP I connected directly to the SQL Server database and sure enough on my PropertyType table there were no triggers for insert/update. Usually this is where the CreatedAt column is updated.

So, feeling a little puzzled I decided to undo my migration on my Mobile Service database once more. But this time, instead of attempting to change any of the migration scripts, all I did was republish my Mobile Service. Now when I attempted to add a PropertyType it worked, no problems. Checking with Visual Studio 2015, the trigger on the PropertyType table had been successfully created. At this point I’m not sure what exactly happens when the Mobile Service runs but it seems to do more than just applying the code-first migrations. It definitely seems to me that updating the cloud database using the package manager console seemed to skip the validation step that Mobile Services does in order to add the appropriate triggers, and thus should be avoided.

Multiple Bootstrapper in WebApiConfig for Mobile Service

In my “wisdom” I decided to rename the primary assembly for my Mobile Service (ie just changing the assembly name in the Properties pane for the Mobile Service).


This all worked nicely when running locally but when I published to Azure I started seeing the following error in the Log, and of course my service wouldn’t run…

Error: More than one static class with name 'WebApiConfig' was found as bootstrapper in assemblies: RealEstateInspector.Services, realestateinspectorService. Please provide only one class or use the 'IBootstrapper' attribute to define a unique bootstrapper.

Turns out that when I was publishing I didn’t have the “Remove additional files at destination” box checked in the Publish Web dialog. This meant that my old Mobile Service assembly (ie with the old name) was still floating around. As reflection is used over assemblies in the bin folder to locate the bootstrapper, it was picking up the same class in both assemblies…. hence the issue.


Checking the “Remove additional files at destination” box ensures only those files that are currently in your Mobile Service project are deployed.

Azure Active Directory Graph API and Azure Mobile Service

Last month in an earlier post I talked about using the Azure Active Directory Graph API Client library in my Azure Mobile Service. Whilst everything I wrote about does indeed work when published to the cloud, it does raise a number of errors that are visible in the Log and the status of the service ends up as Critical – which is definitely something I don't want. The error looks something like the following:

Error: Found conflicts between different versions of the same dependent assembly 'System.Spatial': 5.6.2.0, 5.6.3.0. Please change your project to use version '5.6.2.0' which is the one currently supported by the hosting environment.

Essentially the issue is that the Graph API references a newer version of some of the data libraries (System.Spatial, Microsoft.Data.OData, Microsoft.Data.Edm and Microsoft.Data.Services.Client to be exact). What's unfortunate is that even using the runtime redirect in the web.config file to point to the newer versions of these libraries, which are deployed with the service, the errors still appear in the log. As there essentially don't seem to be any compatibility issues between the Graph API and the slightly older versions (ie 5.6.2.0) I even tried downgrading the other libraries (you can use the -Force flag in the package management console to remove NuGet packages even if others are dependent on them, so I removed the new versions and added the old version back in) but of course Visual Studio then fails its validation checks during compilation.
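For reference, a binding redirect of the kind I'm describing looks like this in web.config (shown for System.Spatial; the other three assemblies need equivalent entries, and you should verify the public key token against the actual assembly):

<runtime>
  <assemblyBinding xmlns="urn:schemas-microsoft-com:asm.v1">
    <dependentAssembly>
      <assemblyIdentity name="System.Spatial" publicKeyToken="31bf3856ad364e35" culture="neutral" />
      <bindingRedirect oldVersion="0.0.0.0-5.6.3.0" newVersion="5.6.3.0" />
    </dependentAssembly>
  </assemblyBinding>
</runtime>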

The upshot is that you have to either:

- Wait for the Mobile Services team to upgrade their backend to support the new versions of these libraries…..personally I don’t understand why this causes an error in the logs and forces the service to critical, particularly since my service actually appears to be operating fine!

- Downgrade the Graph API Library back to the most recent v1 library – this references an older version of those libraries so has no issues. Unfortunately it doesn't contain the well factored ActiveDirectoryClient class, making it harder to query AAD.

Migrating Data Between Blob Storage Accounts in Azure

Over the last couple of posts I've been talking about working with different configurations and in my previous post I noted that one of the things we had to do was to migrate some data that had been entered into the Test environment across to the Production environment (again I stress that I'm not recommending it but occasionally you have to bend the process a little). One of the challenges we encountered was that we not only had to migrate the database, which was easy using the database copy capability in the Azure portal, we also needed to migrate the related blob storage data from one account into another. Here's some quick code that makes use of the Azure Storage client library (WindowsAzure.Storage package via NuGet and more information here).

Firstly in the app.config we have two connection strings:

<connectionStrings>
    <add name="BlobMigrator.Properties.Settings.SourceStorage"
      connectionString="DefaultEndpointsProtocol=https;AccountName=sourceaccount;AccountKey=YYYYYYYYYYY" />
    <add name="BlobMigrator.Properties.Settings.TargetStorage"
      connectionString="DefaultEndpointsProtocol=https;AccountName=targetaccount;AccountKey=XXXXXXXXXXXXXXX" />
</connectionStrings>

Next, some straightforward code to iterate through containers in one storage account and copy content across to the target account:

// Namespaces needed: Microsoft.WindowsAzure.Storage, Microsoft.WindowsAzure.Storage.Blob,
// System.Diagnostics and System.Linq (the connection strings above are exposed via Settings)
var source = CloudStorageAccount.Parse(Settings.Default.SourceStorage);
var target = CloudStorageAccount.Parse(Settings.Default.TargetStorage);

var sourceClient = source.CreateCloudBlobClient();
var targetClient = target.CreateCloudBlobClient();

var containers = sourceClient.ListContainers("searchprefix").ToArray();
Debug.WriteLine("Source containers: " + containers.Length);
var idx = 0;
foreach (var cnt in containers)
{
    // Make sure the container exists in the target account
    var tcnt = targetClient.GetContainerReference(cnt.Name);
    await tcnt.CreateIfNotExistsAsync();

    // Copy each blob across using the service-side copy operation
    var sblobs = cnt.ListBlobs();
    foreach (var sblob in sblobs)
    {
        var b = await sourceClient.GetBlobReferenceFromServerAsync(sblob.Uri);
        var tb = tcnt.GetBlockBlobReference(b.Name);
        var ok = await tb.StartCopyFromBlobAsync(b.Uri);
        Debug.WriteLine(ok);
    }
    idx++;
    Debug.WriteLine("Migrated {0} of {1} - {2}", idx, containers.Length, cnt.Name);
}

In this case it’s limiting the containers that are copied to those that start with the prefix “searchprefix” but this is optional if you want to copy all containers.

Different Cloud Environments for Development, Testing and Production

One of the aspects of developing applications that have a cloud backend that gets overlooked initially is how to separate development from test and production versions of the application. For web applications ASP.NET solved this by supporting transformations in the web.config file based on build configuration (eg web.Debug.config and web.Release.config). However, this issue is harder with client applications that don’t have config files and don’t understand configuration transformations. The other issue with transformations is that they’re only applied during the publishing process, rather than simply when you change the build configuration in Visual Studio.
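For example, a web.Release.config transform that swaps in a production setting looks something like this (the key name here is just illustrative):

<configuration xmlns:xdt="http://schemas.microsoft.com/XML-Document-Transform">
  <appSettings>
    <add key="ida:Tenant" value="production.onmicrosoft.com"
         xdt:Transform="SetAttributes" xdt:Locator="Match(key)" />
  </appSettings>
</configuration>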

I'll come back to talk about how I've chosen to handle different application configurations in a later post. In this post I want to discuss how we've handled having multiple environments for our Mobile Service backend; this includes how we decided to split these environments between our development team's subscription and the client's.

Our strategy was to have three environments: Development, Testing and Production. Development was housed within the Built to Roam development Azure subscription which the development team have access to. For the most part anyone within the development team could deploy to this environment at any stage – of course there was some self management involved to minimize breaking changes. As an aside, as I’ve pointed out in a previous post, it is possible to set up Mobile Services to run locally, even if you enable Azure Active Directory authentication. The Development environment was also based on an Azure Active Directory (AAD) tenant explicitly created for the development of that project – that way accounts could be added/removed without affecting any other AAD.

Test and Production were both created in the customer's Azure subscription. This was to minimize differences between these environments. These environments also connected to the customer's AAD which meant that testing could be carried out with real user accounts since their AAD was synchronized with their internal AD. In a case where writing is supported back to AAD you may want to consider having Test pointing to a separate AAD instance but for our purposes AAD was read only so there was no issue in using the same AAD tenant for both Test and Production.

For each of these we created a separate Mobile Service, named according to the environment, with production being the exception as we decided to drop the “production” suffix. Taking the RealEstateInspector example our services would be called:

Development – RealEstateInspectorDev
Testing – RealEstateInspectorTest
Production – RealEstateInspector

Note that we shortened both Development and Testing to just Dev and Test for simplicity.

We also created corresponding storage accounts, with names that matched the names of the mobile services.

We also created corresponding applications in the appropriate Azure Active Directory, again with names that matched the corresponding environment. We didn’t use the same applications for Testing and Production to ensure we could configure them separately if required.

One issue we faced is that during the first iteration of development as the system was undergoing final testing in the Testing environment some real data was entered into the system. This meant that rather than simply deploying to Production we actually needed to migrate data from Testing to Production (definitely not something I would recommend as best practice). To do this was actually relatively simple using the ability in Azure to copy a SQL database and then within the Mobile Service change the database that it points to. We also had to migrate content from one storage account to another for which we couldn’t find a simple out of the box tool to use. However, this was actually much simpler than we thought and I’ll come back to this in a future post.

Integration Synchronization Wrapper and Restructuring Application Services

So far all the Mobile Service operations, including holding the instance of the MobileServiceClient, have been handled by the MainViewModel. Clearly as the application grows this is not a viable solution, so we need some application services which can be used to hold the reference to the MobileServiceClient and to facilitate application logic such as data access and synchronisation. To this end I'm going to create two services, IDataService and ISyncService, with their corresponding implementations as follows:

public interface IDataService
{
    IMobileServiceClient MobileService { get; }

    Task Initialize(string aadAccessToken);
}

public class DataService: IDataService
{
    private readonly MobileServiceClient mobileService = new MobileServiceClient(
        Constants.MobileServiceRootUri,
        "wpxaIplpeXtkn------QEBcg12",
        new MobileServiceHttpHandler()
        );

    public IMobileServiceClient MobileService
    {
        get { return mobileService; }
    }

    public async Task Initialize(string aadAccessToken)
    {
        var jobj = new JObject();
        jobj["access_token"] = aadAccessToken;
        var access = await MobileService.LoginAsync(MobileServiceAuthenticationProvider.WindowsAzureActiveDirectory, jobj);
        Debug.WriteLine(access != null);
        var data = new MobileServiceSQLiteStore("inspections.db");
        data.DefineTable<RealEstateProperty>();
        data.DefineTable<Inspection>();

        await MobileService.SyncContext.InitializeAsync(data, new CustomMobileServiceSyncHandler());

    }
}

The IDataService implementation holds the reference to the IMobileServiceClient. This will need to be initialized by passing in the Azure Active Directory access token but thereafter the MobileService accessor can be used to access data directly through the IMobileServiceClient instance.
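As a rough sketch of how this is consumed (acquiring the AAD access token via ADAL is covered in earlier posts):

// During sign in: hand the AAD access token to the data service
await DataService.Initialize(aadAccessToken);

// Thereafter, tables can be accessed via the shared client
var inspections = DataService.MobileService.GetSyncTable<Inspection>();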

public interface ISyncService
{
    event EventHandler<DualParameterEventArgs<double, string>> Progress;
    Task Synchronise(bool waitForCompletion);
    Task ForceUpload();
}

public class SyncService: ISyncService
{
    [Flags]
    private enum SyncStages
    {
        None = 0,
        UploadChanges = 1,
        PullProperties = 2,
        PullInspections = 4,
        All = UploadChanges | PullProperties | PullInspections
    }

    public event EventHandler<DualParameterEventArgs<double,string>> Progress;

    public IDataService DataService { get; set; }

    private ISynchronizationContext<SyncStages> SynchronizationManager { get; set; }

    public SyncService(IDataService dataService)
    {
        DataService = dataService;
        SynchronizationManager = new SynchronizationContext<SyncStages>();
        SynchronizationManager.DefineSynchronizationStep(SyncStages.UploadChanges, UploadPendingLocalChanges);
        SynchronizationManager.DefineSynchronizationStep(SyncStages.PullProperties, DownloadChangesToRealEstateProperties);
        SynchronizationManager.DefineSynchronizationStep(SyncStages.PullInspections, DownloadChangesToInspections);
        SynchronizationManager.SynchronizationChanged += SynchronizationManager_SynchronizationProgressChanged;
    }

    public async Task Synchronise(bool waitForCompletion)
    {
        await SynchronizationManager.Synchronize(SyncStages.All, waitForSynchronizationToComplete: waitForCompletion);
    }

    public async Task ForceUpload()
    {
        await SynchronizationManager.Synchronize(SyncStages.UploadChanges, true, true);
    }

    private void SynchronizationManager_SynchronizationProgressChanged(object sender, SynchronizationEventArgs<SyncStages> e)
    {
        var message = e.ToString();
        if (Progress != null)
        {
            Progress(this, new object[] { e.PercentageComplete, message });
        }
    }

    private async Task<bool> UploadPendingLocalChanges(ISynchronizationStage<SyncStages> stage)
    {
        await DataService.MobileService.SyncContext.PushAsync(stage.CancellationToken);
        return true;
    }

    private async Task<bool> DownloadChangesToRealEstateProperties(ISynchronizationStage<SyncStages> stage)
    {
        await DataService.MobileService.PullLatestAsync<RealEstateProperty>(stage.CancellationToken);
        return true;
    }

    private async Task<bool> DownloadChangesToInspections(ISynchronizationStage<SyncStages> stage)
    {
        await DataService.MobileService.PullLatestAsync<Inspection>(stage.CancellationToken);
        return true;
    }
}

The ISyncService defines the actual synchronization steps. Rather than simply exposing a generic Synchronize method that accepts a SyncStages parameter to determine which steps are synchronized, the ISyncService exposes high-level methods for performing a full synchronization (Synchronise) and for uploading pending changes (ForceUpload). Note that the former has a parameter indicating whether the method should wait for synchronization to complete before returning, whereas the latter will always wait for the upload part of the synchronization to complete.
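
As a rough usage sketch, a view model could hook the Progress event and kick off a full synchronization as follows (note that the Parameter1/Parameter2 property names on DualParameterEventArgs are my assumption here – adjust to match the actual event args type):

// Illustrative only – the event args property names are assumed
SyncService.Progress += (s, args) =>
{
    Debug.WriteLine("{0}% - {1}", args.Parameter1, args.Parameter2);
};

// Returns once the synchronization has been started;
// pass true to block until it completes
await SyncService.Synchronise(waitForCompletion: false);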

To make these services available to the view models of the application the BaseViewModel has been updated to include properties for both services:

public class BaseViewModel : INotifyPropertyChanged
{
    public IDataService DataService { get; set; }
    public ISyncService SyncService { get; set; }

And of course the ViewModelLocator is updated to create instances of these services and assign them to the view model when they’re created:

public class ViewModelLocator
{
    public IDataService DataService { get; set; }
    public ISyncService SyncService { get; set; }

    public ViewModelLocator()
    {
        DataService = new DataService();
        SyncService = new SyncService(DataService);
    }

    public MainViewModel Main
    {
        get { return CreateViewModel<MainViewModel>(); }
    }

    private readonly Dictionary<Type, object> viewModels = new Dictionary<Type, object>();

    private T CreateViewModel<T>() where T : new()
    {
        var type = typeof(T);
        object existing;
        if (!viewModels.TryGetValue(type, out existing))
        {
            existing = new T();
            viewModels[type] = existing;
        }

        var baseVM = existing as BaseViewModel;
        if (baseVM != null)
        {
            baseVM.DataService = DataService;
            baseVM.SyncService = SyncService;
        }

        return (T)existing;
    }
}
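
Nothing about the locator changes beyond this, but note that CreateViewModel caches instances by type, so repeated resolutions return the same view model. A quick sketch of that behaviour (resolving the locator in code here purely for illustration, rather than as an application resource):

var locator = new ViewModelLocator();
var first = locator.Main;
var second = locator.Main;

// CreateViewModel caches by type, so both references are the same instance
Debug.Assert(ReferenceEquals(first, second));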

Fixing up the Client For Writing to Azure Blob Storage with Shared Access Signature

In my previous post I updated the service logic for retrieving the Shared Access Signature (SAS) so that it returns the full Url of the blob container, including the SAS. For this to work I also need to update the client logic, which actually gets simpler: I can construct a new CloudBlockBlob by amending the Url to include the name of the blob to be written to.

private async void CaptureClick(object sender, RoutedEventArgs e)
{
    var picker = new MediaPicker();
    var sas = string.Empty;
    using (var media = await picker.PickPhotoAsync())
    using (var strm = media.GetStream())
    {
        sas = await CurrentViewModel.RetrieveSharedAccessSignature();

        // Append the image file name to the Path (this will
        // retain the SAS as it's in the query string)
        var builder = new UriBuilder(sas);
        builder.Path += "/testimage" + Path.GetExtension(media.Path);
        var imageUri = builder.Uri;

        // Upload the new image as a BLOB from the stream.
        var blob = new CloudBlockBlob(imageUri);
        await blob.UploadFromStreamAsync(strm.AsInputStream());
    }
}

But we can actually do even better… what we get back is a Url for the blob container, including the SAS, so we can use the Azure Storage library to create a CloudBlobContainer and acquire the blob reference from there – this does the work of combining the Urls for us.

private async void CaptureClick(object sender, RoutedEventArgs e)
{
    var picker = new MediaPicker();
    var sas = string.Empty;
    using (var media = await picker.PickPhotoAsync())
    using (var strm = media.GetStream())
    {
        sas = await CurrentViewModel.RetrieveSharedAccessSignature();
        var container = new CloudBlobContainer(new Uri(sas));
        var blobFromContainer = container.GetBlockBlobReference("testimage" + Path.GetExtension(media.Path));
        await blobFromContainer.UploadFromStreamAsync(strm.AsInputStream());
    }
}

Modifying the GET Request for the SharedAccessSignature Controller

In the previous post I noted that the code was pretty messy, particularly the client code with its collection of hardcoded literals. To fix this I’m going to encapsulate the full Url for blob storage in the server code, meaning the client shouldn’t need to know the Url of blob storage at all – this will make it easier to administer in the future as things change.

It turns out that to make this change all I needed to do was return the full blob container Url (including the SAS) instead of just the SAS.

var ub = new UriBuilder(container.Uri.OriginalString)
{
    Query = container.GetSharedAccessSignature(sasPolicy).TrimStart('?')
};
sas = ub.Uri.OriginalString;

The client code of course needs to be updated to handle the full Uri being passed back – note that we didn’t include the name of the blob as part of creating the Uri. That’s something the client should do: since the SAS grants access to the whole container, the client doesn’t need to request a new SAS for each blob, only for each container it wants to write to.
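
To illustrate the point, a container-level SAS can be retrieved once and then reused for any number of blobs within that container. Something like the following sketch (the file names are illustrative, and the per-image stream handling from the earlier code is elided):

// Sketch only: one SAS request, multiple blob uploads
var sas = await CurrentViewModel.RetrieveSharedAccessSignature();
var container = new CloudBlobContainer(new Uri(sas));

foreach (var fileName in new[] { "image1.jpg", "image2.jpg" })
{
    var blob = container.GetBlockBlobReference(fileName);
    // acquire a stream for each image and upload as before, eg
    // await blob.UploadFromStreamAsync(strm.AsInputStream());
}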

Saving Image to Blob Storage Using Shared Access Signature

In this post I’m going to bring together a couple of my previous posts that discuss retrieving and saving images, and retrieving a Shared Access Signature from a controller, which will allow me to write to a particular container within Blob Storage. To complete the implementation I’ll use the Windows Azure Storage library from NuGet – it only installs for Windows platforms, as there’s currently no PCL or Xamarin support for this library.

image

As the Windows Azure Storage library is currently platform specific, I’ll need to wrap it in a simple interface that makes it easy for me to write data to Blob Storage – I’ll come back to that. For the time being I’m just going to retrieve the SAS and use it, along with the storage library, to upload an image. I’ll start by invoking the sharedaccesssignature controller using the GET verb, as I want to ensure the container is created if it doesn’t already exist. This will return a SAS which I can use in the upload process.

public async Task<string> RetrieveSharedAccessSignature()
{
    var sas = await MobileService.InvokeApiAsync<string>("sharedaccesssignature", HttpMethod.Get,
        new Dictionary<string, string> { { "id", "test" } });
    return sas;
}

Next I want to capture an image, in this case by picking a photo, and upload it to a specified blob.

private async void CaptureClick(object sender, RoutedEventArgs e)
{
    var picker = new MediaPicker();
    var sas = string.Empty;
    using (var media = await picker.PickPhotoAsync())
    using (var strm = media.GetStream())
    {
        sas = await CurrentViewModel.RetrieveSharedAccessSignature();

        Debug.WriteLine(sas);

        // Get the URI generated that contains the SAS
        // and extract the storage credentials.
        var cred = new StorageCredentials(sas);
        var imageUri = new Uri("https://realestateinspector.blob.core.windows.net/test/testimage.png");

        // Instantiate a Blob store container based on the info in the returned item.
        var container = new CloudBlobContainer(
            new Uri(string.Format("https://{0}/{1}", imageUri.Host, "test")), cred);

        // Upload the new image as a BLOB from the stream.
        var blobFromSASCredential = container.GetBlockBlobReference("testimage.png");
        await blobFromSASCredential.UploadFromStreamAsync(strm.AsInputStream());
    }
}

Clearly this code isn’t well factored, but it’s here as a quick example of how you can use a SAS to upload content to blob storage.
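
As mentioned earlier, the platform-specific storage library will eventually be hidden behind a simple interface. As a teaser, here’s a minimal sketch of what that wrapper might look like – the interface and member names are my own for illustration, not the final implementation, and it assumes the supplied delegate returns a full container Url complete with SAS:

// Hypothetical wrapper – names and shape are illustrative only
public interface IBlobStorageService
{
    Task UploadAsync(string blobName, Stream content);
}

public class BlobStorageService : IBlobStorageService
{
    private readonly Func<Task<string>> retrieveSas;

    public BlobStorageService(Func<Task<string>> retrieveSas)
    {
        this.retrieveSas = retrieveSas;
    }

    public async Task UploadAsync(string blobName, Stream content)
    {
        // Assumes the delegate returns the container Url including the SAS
        var container = new CloudBlobContainer(new Uri(await retrieveSas()));
        var blob = container.GetBlockBlobReference(blobName);
        await blob.UploadFromStreamAsync(content.AsInputStream());
    }
}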