I smell some Rote Code

Martin Fowler highlights some of the common misconceptions around pair programming.  In particular his last point reinforces my comments about refactoring duplicated code (ie that duplicated code normally indicates code that is in the wrong place).  I feel the important concept here is that you should always think about where you write your code (which of course is easier if you have two people thinking about it) and you should always consider whether there is a way to reduce the amount of code you are writing.  This might be through code generation or reusing code written elsewhere.  Done well, this process will result in more code being pushed back into an application framework and, subsequently, less code being written in the long run.

Joel on Dynamic Languages – The Microsoft Developer Show

I forgot to mention that the next show is now live at http://msdev.thepodcastnetwork.com where I interview Joel Pobar, formerly of the CLR team at Microsoft.  Although the show notes are not available yet, Joel did a great job of outlining the main points of the interview on his blog.  Stay tuned, as there should be more on the show later this week.

When refactoring goes wrong

Paul Stovell makes some interesting points regarding refactoring and the separation of UI and business logic.  I agree with Paul that refactoring can be overdone, but I would add a word of caution about not refactoring duplicated code.  IMHO code that is duplicated is not only a prime candidate for refactoring, it nearly always points to an area of the system that needs to be redesigned.  For example, say you have duplicate code between two user controls.  Not only is this code duplicated, it is also likely to be in the wrong place.  By moving this code (most likely business logic) out into a separate class you not only promote reuse, you reduce the amount of duplicated code, improve maintainability and get good separation of UI from business logic.
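
To make that concrete, here is a minimal sketch (all names are hypothetical) of what the end result of such a refactoring might look like:

// Hypothetical example: a pricing rule that was copy-pasted into two user
// controls now lives in one business class that both controls call.
public static class DiscountCalculator
{
    // The business logic lives in one place, independent of any UI technology.
    public static decimal Apply(decimal orderTotal, bool isPreferredCustomer)
    {
        decimal rate = isPreferredCustomer ? 0.10m : 0.05m;
        return orderTotal - (orderTotal * rate);
    }
}

// In each user control the duplicated calculation is replaced by a single call:
//   totalLabel.Text = DiscountCalculator.Apply(total, preferred).ToString("C");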

SQL Server Everywhere renamed to SQL Server Compact Edition

Steve Lasker reports that due to customer feedback Microsoft has renamed SQL Server Everywhere to SQL Server Compact Edition.  Although this is a bit more of a mouthful, and IMHO a weak name, it does have the advantage that the SQLServerCE namespace is once again relevant.

October CTP for Orcas available for download

I’m not sure that releasing something on the second-last day of the month really qualifies it as an October CTP.  Anyhow, the October CTP for the next version of Visual Studio, Orcas, is available for download here.  As Rob Caron indicates, you should drop past the feedback site and suggest any changes you think should be made or report bugs you have found.

WCF for Windows Mobile

With the next wave of .NET technologies almost upon us we need to spare a thought for what kind of support we are going to see in Windows Mobile.  With the Compact Framework we have come to expect a subset of the full functionality available in the .NET Framework.  Already we have heard that the second release of Windows Presentation Foundation Everywhere (WPF/e) will support mobile devices, but what about the Windows Communication Foundation?  Look no further: Roman Batoukov has the scoop here!

Changing the ACS model

This morning I was listening to one of the older shows on ARCast entitled IASA with Paul Preiss, which talks about the International Association for Software Architects.  As Paul is keen to point out, this association is FREE to join and yet seeks to represent architects around the globe, building a community through which all members can be involved and learn.  This is the style of professional association that is really leading the way, as it recognises that in order to distinguish the good architects from the bad it needs to have them all contributing.  Unlike the Australian Computer Society, which has strict (and yet difficult to define) requirements for entry, the IASA encourages anyone interested in architecture to join.

Operationally IASA is supported via vendor sponsorship.  However, they have a unique mechanism for putting this sponsorship to good use.  Instead of the vendor just supplying money to IASA, the vendor must be actively involved with the association.  Paul explains this in more detail in the ARCast show.

I think that the ACS has to rationalise its operations, revamp its marketing and drastically change the way that it looks at its membership.  I’ve already made comments to the effect that the cost of membership is too high ($0 is the ideal of course) but I suspect that the professional membership requirements are just too exclusive.  The whole idea of separating membership from certification is that you can have one without the other.  Why can’t someone with an interest in computing be a member of the ACS?  The distinction should really be around whether the ACS recognises that individual as having and maintaining a high standard of expertise/knowledge (the whole purpose of the CP Program).  Perhaps it appears that I have changed my tune; well, you might be right.  I have been doing some thinking and having some lengthy discussions with various colleagues over the benefits of being an ACS member, and I have become somewhat disillusioned with the bureaucracy that is involved with the organisation.

Some thoughts on the ACS

Following a number of posts by fellow MVPs Rob Farley (here, here and here) and Mitch Denny (a number of posts on the ACS and professionalism) I thought that I should add my 2 cents’ worth.  I will start by briefly recapping some of the discussions that have occurred to date.  If you are familiar with these, please don’t stop reading, just jump over this section.

Previous thoughts

– “Digital natives won’t do school. But they still want to learn” –  I couldn’t agree more with this comment.  Towards the end of my university degree I was soooo bored that I rarely attended lectures.  The process of attending lectures IMHO is basically a waste of time, as quite often the lecturer is just reciting what the textbook already says.  I much prefer tutorial-style learning that involves group collaboration and active discussion on a topic.

– “Need to reinvent school” – As a follow on from the previous point, if we still want the Digital natives to learn, we need to reinvent the learning process.  For example we are already seeing computers being used in classrooms.  How does this impact the way that a teacher communicates with the students?  Are they prepared for this?

– Certifications (and perhaps a uni degree) are a way of distinguishing yourself.  Like it or not, when someone is looking over a CV, the more credentials (relevant of course) you have the more likely you are to make it into the “interview” pile.  This does not necessarily equate to getting the job, as I think there is much more to a good employee than being able to study and pass exams.

– Professionalism – holy grail or a waste of time?  I’ve been a long time supporter of the ACS but recently I have taken a step back and am re-evaluating whether professionalism in the IT sector is ever going to work or provide value.  I think the ACS should spend more time/money on building resources for members than focussing on this holy grail of professionalism. 

– Tertiary education – not for everyone, but those who don’t have it typically devalue it.  The whole concept of a university degree is that it is designed to encourage thinking.  In fact degrees are really a precursor to going into research.  In the IT space, a large proportion of people should NOT get a university degree as they are really looking for a vocational education – ie how to get a job in IT.  There are other forms of Tertiary education that are much better suited to this than university.  This in no way should exclude them from ACS membership!

– ACS needs to grow to have more voice with government but also to provide more benefits to members. Cyclic argument since the only way to grow is to demonstrate returns to members. 

– Aging membership – recently this has been reversed with the YIT program, but it is still a major issue, both from a public perspective and internally among the decision makers.

– New technology to support learning – most universities do this poorly.  In fact the institutions that best embrace technology are those supporting remote learning.

Some new thoughts

The Computer Professional Program, formerly the certification program, is one of the activities that the PD Board is involved with.  Reading the propaganda on the website I’m immediately hit with the following questions:
– Who is on the Advisory Committee?
– Who is on the Academic Board? – note that some of these positions are vacant!
– Who are the mentors/tutors and what are their backgrounds?
– What technology is being used to support Group Forums/Cohorts?  In fact, how is the material for the course presented – ring binder, Word document, PowerPoint…?
– Why would I study through the CP Program rather than a post graduate degree from a reputable university or another industry group?  This is a significant point and one that the ACS continues to fail on.

ACS marketing is still the worst I have ever seen, primarily because it is so out of touch with the IT industry and the mechanisms for communicating with the Digital Natives.  For example:
– The website still doesn’t support IE7 properly (haven’t tried Firefox)
– No RSS feed support on the website – it is no wonder that people struggle to find or attend meetings
– No ACS member blogs – what is the ACS up to, and why can’t I see what interesting things other members are working on?  Even a public list of which members (such as myself, Rob and Mitch) write their own blogs would be a start.

As you can tell from this post, my time here in New Zealand has in no way improved my opinion of the Australian Computer Society.  In fact I would go so far as to say that there is a lot of work that needs to be done, but as with all volunteer organisations there are not enough hands to get everything done.  I feel that a change of priorities is needed and that this needs to come from the Digital Natives.

Scoble hits the nail on the head w.r.t. Zune

Unfortunately Scoble hits the nail on the head with his critical review of the Zune versus the iPod.  As I mentioned in my previous post the Zune device doesn’t support WiFi and as such also doesn’t have a podcast client.  All in all, there is no “killer feature” in this product and while it will give Apple a run for its money, it won’t wipe out or even come close to replacing the iPod.  It will of course give us Microsoft Landers an MS device to play with (although why you wouldn’t buy a Windows Mobile device that has phone and PDA capabilities as well I will never understand, being a Geek at heart).

Community Server meets Google Site Map

One of the issues I have been facing over the previous couple of weeks is trying to elevate my blog’s search ratings in Google.  While Windows Live Search seems perfectly happy to index my blog, Google seems unable to return anything from it.  I went trawling on the Community Server forums and discovered that Dan Bartlet has been hard at work and has upgraded his GoogleSiteMap addin to support Community Server v2+.  This comes complete with simple instructions: upload the binary files, modify a config file or two, and hey presto, you have a Google Sitemap.  Whether this helps my rating remains to be seen, but at least the Webmaster tools Google provides allow you to see how your site is indexed.

Domain Specific Search – Search .NET

Last week Dan Appleman launched Search .NET, a custom search engine that is specifically for .NET related material.  I am honoured to have my blog in the list of searchable content.  If you have any resources that you feel should be included, let Dan know and I’m sure he will review suggestions and where applicable add them.

Podcast: The Microsoft Developer Show

I have been meaning to return to the Daily Developers for a while now and I finally did.  The first thing I added was additional information on how to Start a Podcast.  While I don’t claim to be an expert, what I did add was information on how The Microsoft Developer Show is produced.  In summary, the tools I use are:

  • Outlook – Communication and co-ordination of recording sessions (and of course with the team at The Podcast Network to co-ordinate uploading each show)
  • Skype – Most shows involve chatting with someone who is not based in Wellington, NZ (where I currently am), so Skype provides a cheap way to have an hour long chat.
  • Skylook – This is a fantastic product that integrates Outlook and Skype.  Not only does it add recording functionality within Skype, it also adds answering machine and reminder capabilities to Outlook (more on this in a bit).
  • Audacity – Once the recording has been made, Audacity is used to tidy up the rough bits (umms, errs, repeats, mistakes, NDA material etc).  It is also used to mix in music for the beginning and end to give the show some element of professionalism.
  • The Podcast Network – Of course, there isn’t much point in recording a show if you have nowhere to put it.  The team at TPN do a great job w.r.t. support and assistance with getting started.

WM5 vs UMPC

Over at Dr Neil’s Notes, Neil and Hugo started what I think should be a multi-part discussion on the differences and similarities between WM5 and UMPC devices, both from a consumer and a developer perspective.  My only frustration with this podcast was that it didn’t stay on topic, which makes me think that a second attempt at this topic is in order.  What they did cover was whether there is a future for the Windows Mobile OS – this is an interesting point as we are seeing devices not only decrease in size, weight etc, but also increase in hardware functionality (ie longer battery life, more storage…).  As Neil points out, Windows Mobile (essentially Windows CE) has a different set of priorities than Windows Vista.  While support for occasionally connected devices has really improved in Windows Vista, the operating system is still much heavier than Windows CE, which effectively cuts battery life, in some cases by as much as half.

The show also covers the new Zune device, and future versions of the iPod, where WiFi will be built into the device.  Neil makes the point that, much to our dismay, the built-in WiFi will be locked down so that you can only connect to other Zune devices.  Currently I have to open iTunes (or equivalent), let it download the latest shows for the podcasts I have subscribed to and then sync it with my iPod.  Although I have become used to doing this (I tend to kick it off before going to bed each night) I would much prefer for the device to be able to automatically download the podcasts when it is connected to a WiFi network.  Does anyone know of a petition that we can all sign to convince Microsoft of the error of their ways?

Inking with the Wacom Graphire 4 Tablet

Earlier this week I was one of a few New Zealanders to be given a Wacom Graphire 4 Tablet to review.  As I made the trade-off last year to go with a high-end laptop instead of a Tablet PC, the Wacom tablet provides a pluggable (USB) solution for inking.  Last week I upgraded my laptop to run Vista RC2 and I am itching to experiment with the new inking capabilities.  Anyhow, if you are interested in what the Wacom tablet looks like, here are the photos (thanks to my i-mate K-Jam) from the “unwrapping”:

Some pictures of the Sept Orcas CTP

Following my previous post on the device improvements in the September Orcas CTP I was asked to post some images.  Anyway, here they are:

Device Security Manager

Smart Device Testing

New Smart Device Project

One of the new features I didn’t include in my previous post was that there is going to be a new wizard for creating smart device projects.  In the new project dialog, if you select the root language node (ie Visual Basic or C#) you will notice that there is a Device Application (which is the old wizard) and a new Smart Device Project.  The new wizard looks like… and has been designed to make it easy to select the project type you want to create.

SQL Server Everywhere – how big is that data?

As I mentioned in my previous post regarding SQL/e, we have encountered a number of issues during our development phase.  One of the issues we encountered early on was that, for a number of different reasons, a reinitialisation may be required for a merge subscription.  When a SQL/e database is set up as a subscriber to a merge subscription the initial dataset (usually a subset of the full database, partitioned according to device/user/other information) is replicated – we refer to this as a full sync.  From then on each sync only uploads/downloads changes that have been made to ensure the database is up to date – we refer to this as a partial sync.  Occasionally a sync was taking much longer than expected.  It turns out that it was essentially reinitialising the subscription, forcing a full sync.

Reinitialisation can result from a number of reasons, such as:

  • SQL/e subscription is reinitialised (directly) by the client
  • Subscriptions are reinitialised (directly) on the server
  • Changes are made to the database schema that require all subscriptions to be reinitialised
  • A subscription has been inactive for too long – effectively a timeout

After spending quite a bit of time reading documentation there appears to be no way to predict (from the client) whether a sync will be full or partial, prior to it starting.  Take the example of a user who is travelling and is connecting using their cellular network (expensive connectivity in most locations).  Before committing to a large download it would be great for the user to be notified and presented with an option to cancel the sync.  This is also relevant from a timing perspective.  The user may just want to access some data, but instead they have to wait 10 minutes for a full sync to complete.

Giving up on the documentation, we went hunting for what information resides on the server regarding each subscription.  It turns out that for every subscription the server tracks information about when the subscription was last sync’d, the schema version it has, and much more.  By querying this information it is possible to determine whether the next sync to be carried out will be a full or partial sync.  For example we could run a query like the following (-1 indicates a reinitialisation is required):

select schemaversion
from dbo.sysmergesubscriptions
where pubid = '4875EFFF-BA73-4660-B7A0-781CAC97384E' -- Publication GUID
and subid = '9990AD5A-6A39-4234-A0B4-26BDB29B4C21' -- Subscription GUID

Now that we can query the database on the server side, we need a way for the client to access this information.  This is done via a webservice which runs with enough privileges to query this information on the server.  The only remaining trick is to work out what the publication and subscription GUIDs are.  On the SQL/e database, once the subscription has been set up, there is a __sysMergeSubscription table that can be queried for this information.

In summary the process is quite simple.  Prior to syncing we query the local database for the publication/subscription ID (if these are not there then it has to be a full sync).  This information is sent to the webservice, which queries the subscription information on the server.  If a full sync is required we prompt the user to confirm that they are willing to proceed with the sync.
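
For what it’s worth, here is a rough sketch of that pre-sync check.  The web service contract and the column names on the local __sysMergeSubscription table are assumptions, so treat it as an outline rather than a drop-in implementation:

using System;
using System.Data.SqlServerCe;

// Hypothetical contract for the server-side web service that runs the
// schemaversion query above against dbo.sysmergesubscriptions.
public interface ISyncInfoService
{
    int GetSchemaVersion(Guid publicationId, Guid subscriptionId);
}

public static class SyncPreCheck
{
    // Returns true if the next sync will be a full (re)initialisation.
    public static bool RequiresFullSync(string localConnectionString, ISyncInfoService service)
    {
        Guid pubId, subId;
        using (SqlCeConnection conn = new SqlCeConnection(localConnectionString))
        using (SqlCeCommand cmd = conn.CreateCommand())
        {
            conn.Open();
            // Column names are assumptions - inspect __sysMergeSubscription on
            // your own database to confirm them.
            cmd.CommandText = "SELECT PubID, SubID FROM __sysMergeSubscription";
            using (SqlCeDataReader reader = cmd.ExecuteReader())
            {
                if (!reader.Read())
                    return true;    // no subscription yet, so it has to be a full sync
                pubId = reader.GetGuid(0);
                subId = reader.GetGuid(1);
            }
        }

        // -1 on the server means the subscription will be reinitialised (full sync).
        return service.GetSchemaVersion(pubId, subId) == -1;
    }
}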

Hope this information helps others with this issue, as we had a tough time trying to locate a good information source on how all these bits and pieces work (or more specifically all the things which can – and inevitably will – go wrong with merge replication).  While I would definitely still recommend using replication to push data out to an occasionally connected client, it is not something you can just assume will work.

SQL Code Camp – NZ style

The details of the SQL Code Camp have recently been posted on the NZ .NET user group website.  Despite only being here for a short 6-month contract, I seem to be classified as being from New Zealand.  Anyhow, I’m going to be discussing SQL Server Everywhere, so if you are in NZ on the weekend of 25/26 November then you should definitely attend the code camp at Porirua!

Vista UAC

Warner across at GottaBeMobile makes an interesting point that User Account Control in Vista can be, at times, quite frustrating to say the least.  That is for all us power users.  Unfortunately, given the current state of most applications – where due to poor application design they have had to use some OS hack – UAC is going to be annoying for everyone.  Developers need to build applications that can work in the normal user context, only requiring elevated privileges (perhaps) during installation.  IMHO Vista’s UAC is no longer Microsoft’s problem; it is up to every developer out there to fix their application to require fewer privileges, or at least declare what privileges it does need (most applications have no idea).
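
For most applications the usual way to declare this is via an application manifest.  The snippet below is a minimal sketch that asks Vista to run the application with the user’s normal token (no elevation prompt); use requireAdministrator only for installers and admin tools:

<?xml version="1.0" encoding="utf-8"?>
<assembly xmlns="urn:schemas-microsoft-com:asm.v1" manifestVersion="1.0">
  <trustInfo xmlns="urn:schemas-microsoft-com:asm.v3">
    <security>
      <requestedPrivileges>
        <!-- asInvoker: run with the user's standard token, no UAC prompt -->
        <requestedExecutionLevel level="asInvoker" uiAccess="false" />
      </requestedPrivileges>
    </security>
  </trustInfo>
</assembly>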

SQL Server Everywhere Security

As we approach the final stages of our third sprint here at Intilecta, I took the opportunity this evening to look back at some of the issues we have faced using SQL Server Everywhere for replicating and caching offline data.  Following any of the numerous online samples it is easy to get basic synchronisation working.  However, when the architectural issues associated with an enterprise application kick in, there are a number of scenarios that SQL Server Everywhere doesn’t really cater for.

One of these issues is how security is managed.  From the local database point of view there are some things that you can do to protect the local database:

  • Have a strong password on the database (as we are creating the local database as an offline store we randomly create this password – the user should never have to key in a password)
  • Enable encryption on the database when it is created (again this is enabled as part of the connection string you specify when creating the local database – see the sketch below this list)
  • Put the database in a user specific directory (ie under “documents and settings” (or “Users” in the case of Vista) – this will ensure you have filesystem protection from other users accessing the raw data file, unless of course they are a local admin)
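
A minimal sketch of the first two points, assuming SQL Server Everywhere’s SqlCeEngine and its standard connection string keywords (the file name and password generation here are illustrative only):

using System;
using System.Data.SqlServerCe;   // SQL Server Everywhere / Compact Edition

public static class LocalStore
{
    // Creates the local database with a random password and encryption enabled.
    public static string Create(string fileName)
    {
        string password = Guid.NewGuid().ToString("N");   // the user never sees this
        string connectionString = string.Format(
            "Data Source={0};Password={1};Encrypt Database=True", fileName, password);

        using (SqlCeEngine engine = new SqlCeEngine(connectionString))
        {
            engine.CreateDatabase();
        }
        return password;   // protect and persist this (see the DPAPI sketch below)
    }
}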

OK, but I’m writing my application in .NET, which means that any password I embed can easily be decoded from the assembly.  This is where you can use the managed wrapper for the Windows DPAPI (see the ProtectedData class in the .NET Framework v2) to encrypt/decrypt the password, placing the resulting value in the registry.  Again, the registry key you select should be specific to the current user.
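
Here is a rough sketch of that approach; the registry path and value name are arbitrary examples:

using System;
using System.Security.Cryptography;   // ProtectedData (DPAPI wrapper), .NET 2.0+
using System.Text;
using Microsoft.Win32;

public static class PasswordStore
{
    // Example location only - use a key specific to your own application.
    private const string KeyPath = @"Software\MyCompany\MyApp";

    public static void Save(string password)
    {
        // Encrypt with the current user's DPAPI key so other users cannot decrypt it.
        byte[] encrypted = ProtectedData.Protect(
            Encoding.UTF8.GetBytes(password), null, DataProtectionScope.CurrentUser);

        using (RegistryKey key = Registry.CurrentUser.CreateSubKey(KeyPath))
        {
            key.SetValue("DbPassword", encrypted, RegistryValueKind.Binary);
        }
    }

    public static string Load()
    {
        // Assumes Save has already been called for the current user.
        using (RegistryKey key = Registry.CurrentUser.OpenSubKey(KeyPath))
        {
            byte[] encrypted = (byte[])key.GetValue("DbPassword");
            byte[] clear = ProtectedData.Unprotect(
                encrypted, null, DataProtectionScope.CurrentUser);
            return Encoding.UTF8.GetString(clear);
        }
    }
}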

Now that we have secured the local database, how do we connect to the server to synchronise data down to it?  In simple terms there are three main parties involved in the synchronisation process (well, there are actually more, but these are the main security concerns).  At the back end you have the database itself, which supports SQL Server or Windows Authentication.  You then have IIS, as SQL Server Everywhere only supports synchronising using a Merge Publication accessed via a virtual directory, which supports Anonymous, Windows and Basic Authentication.  Lastly you have the local database, which we have just covered.

Where this gets difficult is when you consider that it is essentially IIS that is pulling data from the database.  Whichever user IIS is running as (or impersonating) will be the user that accesses the database.  In order to configure a subscription you start by defining what type of security you are going to use to authenticate against IIS.  You do this by either providing a username and password, or not (in which case anonymous access is used).  This will determine which authentication mechanism is used against IIS, with the following interesting cases:

  • If you have anonymous access enabled on the virtual directory, this will always be the preferred authentication method
  • If you have Windows Authentication enabled the user that you are running the application as will be used to authenticate against IIS – is this what you want?

Once you have authenticated with IIS it is time to determine what user will be used to access the database.  Again, when you set up the subscription you can either specify a username/password combination, or not (in which case pass-through authentication is used).  If you don’t provide a username, the user that IIS is running as will be used to authenticate against SQL Server.  This user will either be the impersonated account (if using anonymous access), the application account (if using Windows Authentication), or a specified account (in the case of Basic Authentication).
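
To tie the two hops together, here is a sketch of how these settings map onto the SqlCeReplication object used to set up and run the sync; the server names, agent URL and credentials are all placeholders:

using System.Data.SqlServerCe;

public static class SyncHelper
{
    public static void Synchronize(string localConnectionString, string iisLogin, string iisPassword)
    {
        SqlCeReplication repl = new SqlCeReplication();
        try
        {
            // Hop 1: authentication against IIS (the replication virtual directory).
            repl.InternetUrl = "https://sync.example.com/sync/sqlcesa30.dll";   // placeholder
            repl.InternetLogin = iisLogin;        // leave these out for anonymous access
            repl.InternetPassword = iisPassword;

            // Hop 2: authentication against SQL Server itself; omitting these means
            // the account IIS runs as (or impersonates) is the one that hits the database.
            repl.PublisherSecurityMode = SecurityType.DBAuthentication;
            repl.PublisherLogin = "sync_user";
            repl.PublisherPassword = "sync_password";

            repl.Publisher = "SERVER01";
            repl.PublisherDatabase = "EnterpriseDB";
            repl.Publication = "ClientPublication";

            repl.Subscriber = "ClientDevice";
            repl.SubscriberConnectionString = localConnectionString;

            // On the very first sync you would also call
            // repl.AddSubscription(AddOption.ExistingDatabase) before synchronising.
            repl.Synchronize();
        }
        finally
        {
            repl.Dispose();
        }
    }
}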

You will note here that we have been including Windows Authentication in this discussion, despite the fact that the documentation clearly states that it is not supported.  The reason for this is that it half works.  In fact it fully works, with the exception that IIS and SQL Server have to reside on the same machine.  As soon as you put them on different boxes you can only use Basic Authentication with IIS.  The main reason for this (I’m guessing) is that SQL Everywhere is based on all the work done to build SQL CE, and since Windows Mobile never supported Windows Authentication we can assume it was never in the original product spec.

The upshot of all this discussion is that only Basic Authentication is supported (well, anonymous is too, but that is not an ideal solution for securing your data), which means that you need to store a username/password with your application.  Alternatively the user could provide a username/password – this option is clearly not great as they have to enter it every time you want to sync.

Workaround – The workaround that we came up with makes a compromise in both directions with the net effect of improving usability of the product.  Only as a last resort could we afford to prompt the user even once to enter a username/password combination.  The application user has already authenticated with the machine, so we should be able to use this information to authenticate against IIS.  Well we can, in so far as we can authenticate against a webservice.  This webservice will provide us with the necessary username/password combination that we will use to authenticate against IIS. 
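
A sketch of that credential-fetching call is below; the URL and the response format are assumptions specific to our (hypothetical) service, which sits behind Windows Authentication and SSL:

using System.IO;
using System.Net;

public static class CredentialProvider
{
    // Returns the username/password that will be handed to the replication
    // object for Basic Authentication against IIS.
    public static NetworkCredential GetSyncCredentials()
    {
        // Placeholder URL - an SSL-protected endpoint requiring Windows Authentication.
        WebRequest request = WebRequest.Create("https://sync.example.com/SyncCredentials.ashx");

        // Authenticate as the logged-on Windows user; no prompting required.
        request.Credentials = CredentialCache.DefaultCredentials;

        using (WebResponse response = request.GetResponse())
        using (StreamReader reader = new StreamReader(response.GetResponseStream()))
        {
            // Assume the service returns "username,password" as plain text inside
            // the SSL channel; nothing is cached locally.
            string[] parts = reader.ReadToEnd().Trim().Split(',');
            return new NetworkCredential(parts[0], parts[1]);
        }
    }
}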

The compromise here is that it is not great that a username/password combination is being passed across the wire.  However, the data is being protected since the user has to authenticate using Windows Authentication and the channel itself is encrypted using SSL.  Lastly, the username/password is NEVER cached locally so it is actually harder for anyone to steal the username/password.  Oh, the other plus is that if we ever want to change the username/password, we can.

The end result is that we have a system that authenticates using Basic Authentication, but without either the poor usability of prompting for a username/password or the security issues of hardcoding one into the application.

I hope you find this useful and if you have any comments on how you have addressed similar issues I would love to hear from you.

Zero (known & unacceptable) defects

One of the arguments that seems to dominate the world of agile development is around the concept of zero defect software.  While I agree with the points that Dr Neil raises about zero defect software being in part a mindset – if you think you write bugs, you will write bugs – I disagree that software can remain in a zero defect state.  In fact, as the title suggests, I think that there need to be two adjustments to the term.  Firstly, the concept of zero defects is almost impossible, unless you use formal methods, or equivalent, to prove your software is “correct” (which of course overlooks the discussion around what “correct” is).  Hence the “known” augmentation – you can preserve the zero defect state, so long as you know what the defects are.

The second addition is the word “unacceptable”.  Take the example where you are building a product that integrates services from a number of vendors, each of which provides their logo to be displayed in the application alongside their functionality.  During the development phase one of the vendors changes their marketing campaign, including their logo.  Of course, you now have a defect in your software in that the logo is wrong.  This isn’t a feature or enhancement, the logo is wrong (although I’m sure that some will argue this point).  Given that you have a number of outstanding features that need to be completed before you can ship the current version of the software, do you spend the time to fix this issue?  Or do you mark this defect as “acceptable” and move it to the next release cycle?

Of course the danger of moving things to the next release cycle is that the defects never get fixed.  If you are going to mark defects as “acceptable” you need a process whereby these defects are continuously re-evaluated each development cycle, or even during the cycle.