Since starting at Intilecta almost six months ago I often get asked what it is we do. Without giving too much away (check out Intilecta's website for an overview), this post should provide some background on where we would like to be positioned in the market. The following image illustrates an information triangle, where the information at the base is used by the largest number of people within an organisation. This is typically operational information used on a day-to-day basis to run the organisation. As you go up the triangle, fewer and fewer people need access to this type of information. Perhaps "type of information" is the wrong phrase, as it is really about how the information is accessed; nearly all of it comes from the underlying data that the organisation records about customers, sales and so on.
The triangle has been broken into two halves. The top half can be thought of as traditional BI (ie Business Intelligence): most BI vendors support ad-hoc querying and analytical services that allow you to do modelling and pattern and trend identification. In fact this seems to be the direction a lot of these vendors are travelling in (ie up the triangle, building more sophisticated analytical services to improve the patterns/trends that are identified). Most BI vendors also play in the alerts space, which falls below the line into what we like to think of as Behavioural Intelligence (or the new BI). The reason for this name is that it is the alerts and the operational information that encourage behavioural change within an organisation (eg instead of waiting for an existing customer to place a new order, you can monitor their consumption and be proactive by calling them when you think they are about ready to order). Most BI vendors fail to provide operational information that is tailored to an organisation in an interface that is readily available to the user; in fact a lot rely on the organisation deploying client tools that simply provide an ad-hoc query interface. Intilecta is positioning itself as a leader in Behavioural Intelligence: architecting designware based on interface guidance from leading designers (Stephen Few, Alan Cooper and others), integrating with existing business productivity tools (for example Outlook) and deploying it over an occasionally connected infrastructure so it is available when and where the user needs it.
This post is more of a permanent pointer so that I don't have to trawl the web every time I want to find this information. One of the common tasks when building an installer for a desktop application is adding your own Custom Actions. These can be hooked up to the installer to execute at various points such as Before/After Install, Before/After Rollback and Before/After Uninstall. Each action can be written in managed code in a similar way to a typical event handler. The process goes a bit like this (it assumes that you have already created your setup project and added the primary output of your application):
- Add an Installer item to your application (from the New Item dialog select the Installer template)
- If the Installer item is not currently open, double-click it in the Solution Explorer to open the designer
- From the Properties grid select the Event view, followed by the event that you want to wire up (eg AfterInstall) - Double-clicking the drop-down box will create an empty event handler!
- Add your logic into the event handler that is created (this is your custom action)
- Right-click the setup project and select Custom Actions from the View menu
- Right-click the action (ie Install, Commit, Rollback or Uninstall) that you want to wire up (should correspond to the event you wrote the event handler for earlier) and select Add Custom Action (if you right-click the Custom Actions node it will wire up events for all four actions)
- From within the Application Folder select the Primary Output from your application (if it is not in this folder, or your installer is in a different class library you might need to add that using the Add File, Add Output or Add Assembly buttons).
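Putting the steps above together, the resulting Installer item looks roughly like this sketch (the class and handler names are mine; the designer generates the handler when you double-click the event):

```
Imports System.ComponentModel
Imports System.Configuration.Install

' Minimal sketch of the Installer item described above. The
' RunInstaller attribute is what lets the setup project discover
' and invoke this class during installation.
<RunInstaller(True)> _
Public Class MyInstaller
    Inherits Installer

    ' Created by double-clicking AfterInstall in the Event view
    Private Sub MyInstaller_AfterInstall(ByVal sender As Object, _
            ByVal e As InstallEventArgs) Handles MyBase.AfterInstall
        ' Your custom action logic goes here
    End Sub
End Class
```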
These are the basic steps for getting your Custom Actions to execute as part of the installation process. Where this becomes a little tricky is where you want to prompt the user for some information during installation and then use this information as part of the custom action. For example, perhaps we are installing a service that needs to run as a particular user, we might want to configure this as part of the installation process.
- The first step is to create the user interface - Right-click the setup project and select User Interface
- Depending on where you want the dialog to appear, right-click the appropriate node and select Add Dialog
- Select one of the pre-built dialogs (for example Textboxes (A))
- From the Properties grid configure the appropriate fields (eg set Edit1Label to "Specify the user to run the service as:" and Edit1Property to "SERVICEUSER") - this sets up the field so that the user's input is assigned to that property
- The next step is to pass this value into the Custom Action. To do this we need to specify the CustomActionData property so that this value is passed into the Installer. On the Custom Action view select the Installer and add the property information into the CustomActionData property using a /name=value sequence (eg "/name1=value1 /name2=value2"). In this case something like "/username=[SERVICEUSER]"
- Within the Installer this property can be accessed via the Context property - Me.Context.Parameters("username")
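As a sketch, the handler inside the Installer subclass ends up looking something like this (the parameter name "username" matches the CustomActionData example above; the method name is illustrative):

```
' "username" matches the name used in the CustomActionData
' property (/username=[SERVICEUSER])
Private Sub MyInstaller_AfterInstall(ByVal sender As Object, _
        ByVal e As InstallEventArgs) Handles MyBase.AfterInstall
    Dim serviceUser As String = Me.Context.Parameters("username")
    ' Use serviceUser to configure the service account here
End Sub
```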
The last thing that you might want to do is make this property configurable for when the installer is run from the command line in quiet mode. To do this you still need to pass the command line argument through to the Installer. For example I could run the installer using the following:
msiexec SERVICEUSER="Fred" /i MyApplication.msi /quiet /passive
There is one catch, which is that you need to set Edit1Value to [SERVICEUSER] back on the User Interface view of the installer. With that you are all done (well, excluding the myriad of other properties you can tweak on your installer!)
Oh, it must be the time for writing posts on building installers because I just noticed that Dave Glover has posted on building Vista compatible installers for Windows Mobile applications.
I got into work this morning to discover that Find had stopped working in Visual Studio. Instead of returning the correct matches it returns a particularly unhelpful message: "Find was stopped in progress." I recall coming across this issue in VS2003 but thought it had been fixed in VS2005. A quick trawl returned a couple of options; the one that fixed my problem was pressing Ctrl+Scroll Lock. I don't know exactly what this key combination is supposed to do (and to be honest I haven't bothered to look it up) but it does seem to cancel an executing Find. For some reason VS occasionally gets stuck thinking that it should cancel every Find; pressing this key combination resets that flag, allowing you to resume searching.
I have joined fellow MVP Rob Farley in learning how to integrate wHooiz correctly (well at least the way it was designed) into Community Server. It turns out that there are a couple of little tricks (thanks Clarke for the pointers!):
1) Modify your Community Server configuration file (communityserver.config - in the root folder of your CS installation) to allow scripts. Note that the markup and html tags should already exist in the configuration file:
<script src = "true" type = "true" language = "true" />
2) Adding your profile to the About section of your blog. In Community Server: Log in, Go to My Settings, Under the Settings tab select About My Blog, Change the About My Blog text box to HTML mode and add the following snippet:
Here the 31 indicates your wHooiz profile Id
3) Adding your buddies list to the News section of your blog. In Community Server: Log in, Go to My Settings, Under the Settings tab select Title Description and News, Add the following snippet to the News text box:
(To get the correct snippet go to the Options page under your wHooiz profile)
You should be all set up now, until more features become available....
After a few more hours trying to isolate the issue with deploying via ClickOnce we have a solution, so I'm happy to announce that Sync Tester should be ok to run (note that I have changed this back to just the runnable version, removing the install option for the moment). One of the limitations of SQL Mobile was that it constrained the path length that could be used in the connection string. Clearly, to support ClickOnce deployment this limitation had to be lifted, since by default ClickOnce apps end up in a folder with a longish path. Unfortunately in RC1 the fix doesn't seem to work properly when the database is used with merge replication. To get around this, Sync Tester uses a temporary file name (eg My.Computer.FileSystem.GetTempFileName - with ".sdf" appended so that it is the correct file type).
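The workaround looks roughly like this (a sketch; the rest of the connection string will depend on your publication setup):

```
' GetTempFileName returns a short path under the user's temp folder,
' sidestepping the long ClickOnce application folder path
Dim tempFile As String = My.Computer.FileSystem.GetTempFileName()
Dim dbFile As String = tempFile & ".sdf"
Dim connectionString As String = "Data Source=" & dbFile
```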
The other feature that made it into this version (basically because I got tired of typing each of the values in every time I ran the application) is that it remembers the parameters you enter between sessions. This is easily done by wiring the textboxes up, using the designer, to application settings (user scoped). Note this would have been much harder had it not been for the application framework, which automatically loads (in both VB.NET and C#) and saves (VB.NET only) application settings on application startup/shutdown.
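For the curious, the designer binding is roughly equivalent to doing this by hand (the setting name Publication and the textbox txtPublication are hypothetical):

```
' Load the saved value into a textbox at startup
txtPublication.Text = My.Settings.Publication

' Persist the current value; the VB.NET application framework
' calls My.Settings.Save() for you automatically on shutdown
My.Settings.Publication = txtPublication.Text
My.Settings.Save()
```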
In my previous post I indicated that I was having some issues with the Sync Tester tool being ClickOnce deployed. I've spent a few hours trying to isolate the problem, initially fearing that my code was faulty, or that there were some deployment dependencies I had missed. This wasn't the case; it appears to be a fault related to the ClickOnce deployment itself. Once I have installed the application on the local machine (the download site has changed to allow local installation now), if I copy the contents of the application folder to say c:\temp\SyncTester and run the application it works 100% and can continuously synchronise against any publication. However if I try to run the exact same application from the ClickOnce installation folder (typically something like c:\documents and settings\<user name>\Local Settings\Apps\2.0\......) it fails with an "Internal error: Invalid reconciler parameter" message. This is really easy to replicate: just go to the download page, install and run the application and try to sync against any publication (this will fail). Copy the files to a new location and it will work! Go figure???
One of my readers, Alice Marshall, has pointed me to an interesting article that talks about SOA. I would highly recommend checking out this post as well as what Dr Craig Miller has to say.
Champion of occasionally connected (aka smart client) data, Steve Lasker, has posted on how to configure SQL Server CE to run as a database for your ASP.NET website. Essentially it is as simple as adding the following line of code:
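From memory, the line sets a flag on the AppDomain before any SQL Server CE connections are opened (eg in Global.asax) - check Steve's post for the definitive version:

```
' Flag name as I recall it from Steve's post - verify against the
' original before relying on it
AppDomain.CurrentDomain.SetData("SQLServerCompactEditionUnderWebHosting", True)
```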
As Steve points out, the scenario this was originally added for was building sdf files (ie SQL Server CE database files) on the server before shipping them out to the device. Merge replication is great for synchronising changes, but unfortunately it is not particularly good for the initial download of data to the device. I recently synchronised (from scratch) a 20MB data file to my K-Jam and it took over an hour (requiring continuous connectivity the whole time). The actual download of data took only a fraction of that time; a significant amount of time appears to be taken applying the data to the database. As such it is preferable, for the initial setup, to build this file on the server and just download it to the device.
In my previous post I started to put together some tools for working with and debugging merge replication. Unfortunately I'm still having issues with the ClickOnce deployment of the SyncTester app. The second app to be put together is going to be a webservice that can be used to prebuild a SQL Server CE database with specific publications. The scenario goes a bit like this: your application starts up, determines that it doesn't have a database, and makes a call to the webservice (passing through the details of the publications that it wants the database to be subscribed to); the webservice communicates with the server database and builds the sdf file, then passes back the download location for the sdf file (or you could actually pass back the sdf from the webservice); the sdf file is downloaded and the application continues to load. More info on this app when I get time to build it ;-)
Yesterday we were busy trying to work out why our synchronisation was breaking when we changed networks. To do this I wanted to abstract the database synchronisation away from both our application and SQL Management Studio (since you can't really afford to have that installed on every client machine you are going to work with), so I built a very rudimentary app that would set up a database file, set up the sync and then allow us to repetitively sync against the server. While it was syncing we could then disconnect/reconnect network connections to see what effect it had.
Last night I decided that this app would be a great tool for the community to have access to, as it could be easily run on any client machine to test whether you can sync to a particular publication. So, last night I invested a few hours in tidying it up (just a little) and publishing it (thanks to ClickOnce). Anyhow, you can take the early version of this app for a test drive here. Let me know what you think, what features you would like and whether you think this tool is useful. Also, let me know whether this is something you would like the option to install (rather than just running online, as is the current configuration).
(Although this app relies on SQL Server CE RC1 it doesn't install it on the local machine as it uses dll deployment. Let me know if the application doesn't work for whatever reason!)
Update: Unfortunately the ClickOnce deployed version doesn't seem to be working properly. I should be able to fix this today and will post an update when it is ready for further testing.
Over the weekend I was asked a couple of times what podcasts I listen to and how I find the time. Well, the short answer is that I only find 30 minutes (approx) on the way to and from work. I catch the train, which usually entails a 10 minute walk to the train station, 5 mins waiting for the train, 10 mins on the train and then a further 5 mins walk to the office. As such I usually end up listening to about a podcast (occasionally 2) a day. I typically choose from the following list:
<Self Promotion>The Microsoft Developer Show - A (mostly) weekly show that features developers around the world working with Microsoft technologies</Self Promotion>
Arcast - A great architecture show hosted by Ron Jacobs out of Microsoft (speaking of which he will be visiting NZ next week!)
Dr Neil's Notes - Neil Roodyn's weekly synopsis of what is happening around the traps
UberTablet - Mr Tablet PC himself, Hugo occasionally finds the time to talk it up with people doing things in the Tablet/UMPC space
OnTheRun - All the technology you can get your hands on and more
dotNetRocks - One of the longest running shows and IMHO goes for too long and tends to be drawn out. Still a great listen but you need a good hour of time to kill
HanselMinutes - I've only just started listening to this show and it's been great. A wide cross section of topics!
SQLDownUnder - Greg Low talks about everything data and not just with people down under
The other point of interest that came out of SQL Code Camp followed Adam's insightful presentation on Reporting Services best practices (which would be more appropriately labelled "Adam's rules to better reporting"). On the way back to the city I asked Adam who he turned to for information on user interface design. In typical Adam fashion his answer was himself. One of the concepts behind the product we have built at Intilecta is that there are two distinct parts, the core delivery platform (otherwise known as the content delivery platform or the CDP for short) and the content itself. Unlike most ISVs who build a product then worry about how it looks, a large proportion of our development resources (and testing) have gone into building intuitive, effective and minimal interfaces. To this end we have a number of reference books that we consult for opinions. We also try to read a number of design oriented blogs and other design critiques. A couple of the names that are worth a read:
If you know of anyone else who has a great eye for user interface design, please feel free to add a comment!
P.S. did you know that there is a new member of the DPE team in Australia, Shane Morris, who is a User eXperience Bloke!
Although VB.NET has in the past missed out on some fairly crucial language features such as anonymous methods and iterators (both absent from VB2005, which shipped with .NET FX 2.0), it seems that it is going to edge C# out in the race for the most obscure operators. Mitch posted about the C# implementation of the "because" or "justification" operator. The VB.NET equivalent looks something like:
1 + 1 = 2 because 2 - 1 = 1
1 + 1 = 3 because true
1 + 1 = 3 justbecause
Although the implementation is a little more wordy (literally) they extended this concept by adding the "why" operator. For example
why 1 + 1 = 2
The real question is what the why operator returns and where you could use it. In the simplest terms, the why operator allows the framework to expose any previously defined truths. For example, in Mitch's case where he defined the truth 1 + 1 = 3, you could write the query why 3 + 3 = 9, which would return a proof that includes the truths (1 + 1 = 3, 1 x 3 = 3) as well as an execution plan in the form of a proof tree indicating the order in which these truths are applied.
You would think this would be something that C# could easily add into their syntax, allowing you to write:
¿ (1 + 1 == 2)
But unfortunately I have it on good authority that C# will not get this feature as they deem it to be too productive and is a security hole as it allows the developer to query the logic of the .NET Framework developers at Microsoft.
Ok, I have to admit I'm a big fan of everything Office 2007, with one MAJOR exception. The synchronisation built into Outlook 2007 sux and is considerably worse than Outlook 2003 - don't ask me how it can be worse, but it is. I used to synchronise using RPC over HTTP so that I could sync regardless of what network I was on. For some reason this now seems to hang. To get around this I connect to the VPN and sync over the LAN. This was acceptable until today. I've been sitting at SQL Code Camp (NZ) on the CafeNET wireless network (plug for the kind sponsors of this event) and despite being connected to the VPN I still can't sync my mail. I can however access the OWA web client at reasonable speed - go figure. Microsoft, you MUST learn how to build a product that can synchronise; after all, it's just not that hard.
Update: After thinking about this post again last night I realised that I should probably include some footage of exactly what's going on and why I think that the information provided to the end user in Outlook is pathetic and occasionally just plain wrong! Here is a rough sequence of images (thanks to SnagIt) that shows just how bad Outlook is. Remember I am connected using a VPN connection and can not only access my email using OWA, but can also access the fileshare on the same server that Exchange is running on (we are using Small Business Server).
Just Connected: This seems to be quite sensible, Outlook detects that I've connected to a new network so it goes exploring to see if it can communicate with the exchange server
Sending Completed: Progress indicator states that "2 of 3 Tasks have completed" - ummm, I only see 2 tasks in the list and only 1 of them states that it is complete.
Not Really Completed: Hang on, the progress indicator states that Sending was completed, yet I still have mail waiting to be sent (I actually pressed send yesterday morning and have been waiting over 24 hours for this to go - I'm glad this wasn't an urgent email).
Now there's an Error: Oh, so now Outlook has realised that yes there were indeed only 2 tasks and that neither could be complete because the server is unavailable (remember at this stage I can still access everything on this server)
Still no connection: Now Outlook has completely given up and stated that it is just disconnected (read: I can't access the server, not my problem, fix your connection and I'll try again but in the meantime it's not my problem). Again remember I can access the server (and it's not even that slow)!!!
Update 2: Turns out that half my problems were related to my Small Business Server (which is of course running my exchange instance). This server has been extremely reliable but decided over the weekend to "require a reboot". Of course there was nothing in the event logs to indicate this and AFAIK there was no way of my working out what was going wrong. I was online talking with SBS guru Wayne Small when I decided to logon using remote desktop, check for (and install) any updates and then reboot the server. Not sure whether it was an update or the reboot but the server seems to be ok again and I once again have email (on either RPC over HTTP or VPN connection). The moral of this story is still that the error information is not good enough.
One trick I did learn from Wayne, that perhaps was in the Outlook 101 lesson I must have missed, was that if you hold down Ctrl and click on the Outlook icon in the taskbar there are a couple of additional options in the pop-up menu. The one that helped me out a little was the "Connection Status..." item. Great tip if you are having connectivity/sync issues with Outlook. Thanks Wayne!
Over the last couple of days I have rebuilt my laptop with the RTM build of Vista. As expected there have been a few hurdles, the biggest of which was getting replication to work. With SQL Server Management Studio I could easily create the publication, but setting it up as a web virtual directory (so that it can be subscribed to by SQL Server CE) was another matter. After installing Vista I went into Add/Remove programs and selected the IIS windows component. Unfortunately this doesn't include the IIS 6 compatibility API - which most applications designed for IIS 6 and earlier use. To get the virtual directory set up you need to install the compatibility API by checking the option selected in the following image (you also need to be running SSMS as administrator to communicate using these APIs)
So you have probably heard about Live gadgets, and if you have been playing with Vista you will have seen the SideBar gadgets, but what you may not have seen are SideShow gadgets. What's a SideShow? Well, it is the small display on the outside (typically) of a laptop that can show things like the time or information about the song that is currently playing. It is very similar to the reduced display that some mobile phones have on the outside. The great news is that it is going to be possible to build gadgets for the SideShow. Without stealing his thunder, I suggest you check out Daniel Moth's blog and the SideShow blog for more information. If you are interested, you should also keep tabs on the new .NET Micro Framework.