google chrome software updates make everything else feel broken


I am growing more and more annoyed at the Apples and the Adobes of the world, who are constantly interrupting my work to tell me that there are updates waiting for me to install. Why do I have to manage this? Yes, I know that I can go in there and tweak the settings so that I don't get nagged... but why should I even have to do that? I would need to do it across every user account on every machine I use on a regular basis! This is noise, and it isn't at all necessary for me to have to think about it.

I believe this is part of the convenience of applications delivered in the cloud (sorry to throw that term out there). It's the kind of convenience that has me accepting fewer features in order to get it.

There are companies out there who understand this and are working hard to create a better user experience. Google, for example, went to great lengths to be able to update their entire browser in under 100 KB of download, just to make the experience more efficient, user friendly and safer. Not only that, but Chrome updates happen automatically, again to help with security. It's very close to the experience I get by logging into Gmail, which is always up to date. Now I can already hear rattles of complaint about "control" over your own machine, but I have to ignore this or I'll get completely derailed. I'll say in short that I think that control is an illusion, and the alternatives are worse.

If you have not already read this post by the Chrome developers, I highly recommend it.
http://blog.chromium.org/2009/07/smaller-is-faster-and-safer-too.html

And then you have Apple, who has gone from pushing hard to avoid operating system restarts unless absolutely necessary, to requiring reboots for QuickTime, Safari and sometimes even iTunes (especially on Windows). You can argue all you want that it's because of kernel extensions or who knows what else - but at the end of the day it doesn't matter why. This is bad design and leads to a diminished user experience.


It seems like more and more often I am prompted with a software update dialog that looks like this one. Three out of four of the updates require a reboot! Are you serious? And chances are good that I will continue to ignore this dialog, because closing all of my browser sessions, ending the long-running HandBrake encode, stopping the download I have going, and turning off the TV program I'm watching all amount to a whole lot of trouble for something I don't think I should even have to think about.

Seriously, what is so special about Safari that it requires me to download 36.2 MB and restart my machine for a point release?? Maybe this didn't seem so offensive before Chrome, but now it feels dated and clunky.

I really should pick on Adobe here for the horror that is the Adobe Updater, but honestly I really do expect more from Apple. Adobe is too easy of a target.

Please fix this, Apple - you can do better. Where is the Steve Jobs who wanted to save lives by reducing boot times??

build it (so it's easy) and they will come (make the right decision)

One of the biggest lessons I think I've learned over the past few years is that you have to be very careful with what you make easy to do in a software system.


When you are working within a preexisting system, it is very hard to work effectively outside the bounds of that system. Whether you are limited by time constraints, peer pressure, political decisions or just pure technical inertia, those early/legacy decisions in a system will have far-reaching impacts on the decisions of those who follow.

I'll give you an example. On a product I worked on for years, the decision to use an object relational mapper (ORM) in the early stages was based on a desire to eliminate boilerplate code, reduce the learning curve for new developers and generally push the development of new entities down to the entire team rather than specializing that role in one person. All in all the reasoning was sound, but failing to see some of the psychological aspects at play had serious consequences for the future of the system.
  1. Developers stop thinking about database impacts because they never really SEE the database.
  2. The object model and the schema become inextricably tied. *
  3. Access to the DAL ends up being given to junior developers who may not have otherwise dealt with it yet. **
  4. Things that could reasonably be built OUTSIDE the ORM end up dropped in without consideration because developers are following the template.
* This can be avoided by having data contracts that are specific to the DAL
** There is nothing inherently wrong with ORM, and if your DAL is properly abstracted so that the ORM isn't propagated throughout the stack then this too isn't necessarily a problem. 

I add those two caveats because I really don't have an issue with ORM; in fact I think that, used properly, it makes way more sense than wasting days of development doing repetitive and simple CRUD work.
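To make the first caveat concrete, here's a minimal sketch (all type and method names are hypothetical, not from our actual system) of a DAL-specific data contract: the ORM-mapped entity stays internal to the data access assembly, and the rest of the stack only ever sees a plain class.

// Hypothetical ORM-mapped entity; internal to the data access assembly.
internal class CustomerEntity
{
 public int Id { get; set; }
 public string Name { get; set; }
}

// Plain contract handed to the business layer; no mapper types leak out.
public class CustomerContract
{
 public int Id { get; set; }
 public string Name { get; set; }
}

public class CustomerRepository
{
 // The mapper/session stays hidden in here; callers never see it.
 public CustomerContract GetCustomer(int id)
 {
  CustomerEntity entity = LoadFromMapper(id);
  return new CustomerContract { Id = entity.Id, Name = entity.Name };
 }

 private CustomerEntity LoadFromMapper(int id)
 {
  // placeholder for the actual ORM call (session.Get, context.Find, etc.)
  return new CustomerEntity { Id = id, Name = "example" };
 }
}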

However, the deeper issues that arose for us still came back to convenience. It was convenient to expose the friendly querying methods of the domain objects that mapped to our tables directly to the business logic assemblies. It was convenient to let junior developers write code that accessed those objects as if retrieval and persistence were magically O(1) operations. Of course, in reality we discovered embarrassingly late that we had more than a few object graphs being loaded on traversal, leading to a separate mapper-triggered SELECT for each object and its children. This is the kind of thing that only becomes apparent when you test with large datasets, get off of your local machine and see some real latency.
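For what it's worth, the pattern we tripped over is the classic one where traversal quietly issues a query per object. A simplified, hypothetical sketch (not our actual domain model) of how an innocent-looking loop behaves when the mapper is lazily loading:

using System.Collections.Generic;

// Hypothetical domain objects as a mapper might expose them.
public class Order
{
 public int Id { get; set; }
 // With lazy loading, touching this fires a SELECT for this order's lines.
 public IList<OrderLine> Lines { get; set; }
}

public class OrderLine
{
 public decimal Amount { get; set; }
}

public class ReportBuilder
{
 // Reads like an in-memory loop, but with a lazily mapped graph it costs
 // 1 query for the orders plus N more, one per order, as Lines is touched.
 public decimal TotalFor(IEnumerable<Order> orders)
 {
  decimal total = 0;
  foreach (Order order in orders)
  {
   foreach (OrderLine line in order.Lines) // hidden SELECT per order
   {
    total += line.Amount;
   }
  }
  return total;
 }
}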

And yes, in this case, I think you could argue that QA dropped the ball here. But as a professional software developer you really never want to see issues like this get that far.

I've picked one example, but there are many, many others in the system I'm referring to, including but not limited to a proliferation of configuration options and files, heavy conceptual reuse of classes and functionality that are only tangentially related to each other, and an increasing reliance on a "simple" mechanism to do work outside the main code base.

Ultimately, this post has less to do with ORM and proper abstraction and more to do with understanding how your current (and future) developers will react to those decisions. I think conscious attention has to be paid to how a human will game your system. You need to come up with penalties for doing dumb things where possible, and make the path of least resistance lead to the right ones. There are entire books dedicated to framework and platform development that encompass some of these ideas, but in my opinion they apply at every level. (except maybe the one-man shop?)



way to go, LRO

LRO does it again, water on the moon! That's so cool! NASA is important, people; we're laying the foundation for future generations here.

http://www.space.com/scienceastronomy/090923-moon-water-discovery.html

And of course finding water is not the same as finding lakes, but imagine the potential for fuel sources and/or human sustenance. Water is damn heavy, and not something we can easily take with us.

Pretty cool stuff.

hanselman tools 2009!!

Saw this on reddit tonight: Hanselman has updated his legendary tools list for 2009. So what was going to be an evening of actual coding is slowly turning into an evening of trying out cool new tools that have made his list. (I'm writing this blog post in Windows Live Writer after seeing it in the list)

But what’s an hour or two of my time compared to the time that must go into compiling this list? Totally worth checking out, especially if you’ve never had a look before.

It’s nice, my mac envy always takes a slight dip in moments like these.

LRO sends us some underwhelming evidence!

I remain a huge fan of projects like LRO, and personally still believe that the disbelievers are crackpots but I also have to admit to being a little underwhelmed by the photos listed here on NASA's site for the Lunar Reconnaissance Orbiter.

http://www.nasa.gov/mission_pages/LRO/multimedia/lroimages/apollosites.html

Still, I'm excited "we're" (go NASA) going back, and if anything this just really highlights for me how damn big our "little" moon is. It's easy to forget that the satellite taking those photos is still 50 kilometers away from the surface, so it's somewhat understandable that we're not getting the close-ups I'd love to see.

There is a pretty cute video of the LRO launch party here (check the side bar) that is worth checking out. The highlight for me was seeing the communications in action; the laser ranging support for the data coming from LRO looks like a large robot with big green blinking eyes.



Congratulations and good luck LRO team, if you can now just convince Sarah then your mission can surely be called a great success.

lessons learned from online gambling - predicting scalability

I work with someone who spent a few years working for an online poker company that shall remain nameless. This company was responsible for a poker platform that supported their own branded poker offering and also acted as an engine for other companies who would layer on their own branding. My colleague played an important role in taking their fairly well built existing system from thousands of users to tens of thousands of users, and in the process exposed a large handful of very deep bugs, some of which were core design issues.

Looking at this site http://www.pokerlistings.com/texas-holdem and seeing just this small sample of some of the top poker sites is a bit insane. We're talking close to 100,000 concurrent players at peak hours JUST from the 16 top "texas hold-em" sites listed there. Who are these people? (I'm totally gonna waste some money online one of these days, by the way) I'm sure this is just the tip of the iceberg too.


It's a great domain for learning about critical systems, in my opinion: real time, high concurrency, real money, third-party integrations for credits and tournaments, and the vast reporting that goes on over all the data being generated.

My colleague's experience in dealing with real-time load and breaking the barriers of scalability is truly fascinating and a good source of learning for me. I realize there are bigger puzzles out there, but it's not every day you have direct access to that kind of experience where you work. In any case, one such lesson that I am in the process of trying to apply is a more forward-looking approach to load modeling. That is, rather than simply designing and testing for scalability, actually drilling down into the theoretical limits of what you are building in an attempt to predict failure.

This prediction can mean a lot of different things, of course, on a spectrum from a vague statement about being IO bound all the way to much more complicated models of actual transactions and usage that let you extrapolate much richer information about the weak points in the system. In at least one case, my boss has taken this to the point where the model of load was expressed as differential equations prior to any code being written at all. Despite my agile leanings I have to say I'm extremely impressed by that. Definitely something I'd throw on my resume. So I'm simultaneously excited and intimidated at the prospect of delving into the relatively new platform we're building in the hopes of producing something similar. I definitely see the value, at least in the first few iterations, of highlighting weak points and patterns of usage. How far I can go from there will be a big question mark.

For now I'll be starting at http://www.tpc.org/information/benchmarks.asp and then moving on to as exhaustive a list as I can of the riskiest elements of our system. From there I'll need to prioritize and find the dimensions that will impact our ability to scale. I expect with each there will be natural next steps for removing the barrier (caching, distributing load, eliminating work, etc.) and I hope to be able to put a cost next to each of those.
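To give a sense of what I mean by putting a ceiling next to a dimension, even a crude back-of-envelope model is a starting point. Every number below is invented purely for illustration:

using System;

class CapacityEstimate
{
 static void Main()
 {
  // Invented figures; replace with measured values.
  double avgDbTimePerRequestMs = 12;   // average DB time spent per user request
  double dbConnections = 40;           // connection pool size
  double requestsPerUserPerMinute = 6; // observed usage pattern

  // Theoretical ceiling if the database is the bottleneck.
  double maxRequestsPerSecond = dbConnections * (1000 / avgDbTimePerRequestMs);
  double maxConcurrentUsers = maxRequestsPerSecond / (requestsPerUserPerMinute / 60);

  Console.WriteLine("Ceiling: {0:n0} requests/sec, roughly {1:n0} concurrent users",
   maxRequestsPerSecond, maxConcurrentUsers);
 }
}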

Simple!

sometimes it's helpful to think about what NOT to do

Came across this list of "anti-patterns" on Wikipedia tonight. I'm tempted to just copy and paste the contents here, but that would make me feel dirty.

Definitely a good list though and something worth reminding ourselves of every once in a while when thinking about the systems we build.
http://en.wikipedia.org/wiki/Anti-pattern

the rise and fall of myspace (and twitter)

This is a great post on how MySpace rose and fell, and how the same thing applies to Twitter (and I'd imagine Facebook as well). Some really good thoughts. Getting popular before you have your mission can forever trap you in that identity vacuum where popularity is everything.

http://codybrown.name/2009/08/06/myspace-is-to-facebook-as-twitter-is-to-______/

A good read, and the level of blogging I'd like to work towards (more pictures!).

manager schedule vs maker schedule

Popular comp-sci essayist and lisp hacker extraordinaire Paul Graham recently posted this article on the difference between a manager's schedule and a maker's schedule. This is really in line with my own views on the issue, and really sums up a big problem we have where I work with meetings being scheduled on the makers' time and the impact that has. We've had tons of discussions around the cost of mental context switching, but even that's an understatement of the problem...

Great work, it's always helpful hearing echoes of these types of thoughts beyond my own everyday sphere. Who knows, maybe I can use it to add some weight to my arguments.

http://paulgraham.com/makersschedule.html

performance tuning to an insane level

Ok, so I have to admit that I've been one to disregard figures around performance when arguing with co-workers over the merits of managed code vs C/C++. I've even used the argument that statically typed languages like Java and C# offer more hints to the compiler, allowing for optimizations not possible in unmanaged code. I still have a fairly pragmatic view of the spectrum of cost to deliver (skill set/maintainability) vs performance gains... but regardless of all that... wow, this article completely humbled and inspired me.

I don't know shit.

http://stellar.mit.edu/S/course/6/fa08/6.197/courseMaterial/topics/topic2/lectureNotes/Intro_and_MxM/Intro_and_MxM.pdf
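For context, the lecture works through tuning matrix multiplication. A tiny sketch of just one of the ideas it covers, the effect of loop ordering on memory access; the actual speedup will vary by machine and runtime:

// Naive i-j-k order: the inner loop walks b down a column, which is cache-unfriendly.
static void MultiplyNaive(double[,] a, double[,] b, double[,] c, int n)
{
 for (int i = 0; i < n; i++)
  for (int j = 0; j < n; j++)
   for (int k = 0; k < n; k++)
    c[i, j] += a[i, k] * b[k, j];
}

// Same arithmetic in i-k-j order: both a and b are now read row-wise,
// which is typically noticeably faster for large n.
static void MultiplyReordered(double[,] a, double[,] b, double[,] c, int n)
{
 for (int i = 0; i < n; i++)
  for (int k = 0; k < n; k++)
  {
   double aik = a[i, k];
   for (int j = 0; j < n; j++)
    c[i, j] += aik * b[k, j];
  }
}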

taking a step back/up/sideways (thebrain)

Around 2003-2004 I had a mild obsession with organizing my life into digital form, creating as many mappings as I could from my everyday existence into some kind of digital representation. This of course included ideas and thoughts, writings and paintings, music and movies, friends and bits of information about those friends, etc. This mass of data has gone from one disjointed medium to another (.txt files in folders, emails, blog posts, napkins) without ever really achieving any cohesiveness or real improvement in my ability to actually synthesize or act on all that information.

At that time, the closest tool I found that could come close to mapping ideas and thoughts in a way that made sense to me was "TheBrain", a piece of software intended for visualizing information and, as importantly, the links between that information. As a user, you add "thoughts" to your brain, which become nodes in a large graph of ideas, thoughts, attributes, URLs etc. You can then very quickly build child, parent and sibling links between those nodes to add more context and information. Those links can then be categorized to add even more context to the relationship between ideas, which is often very important information. Unlike a lot of "mind mapping" tools I've used, though, TheBrain does not force you into a tree structure. Your thoughts can have multiple parents and siblings, or "jumps" to thoughts anywhere else in your brain, whether they are directly related or not. Hyperlinks, imagine that! I think for the actual exercise of brainstorming and free-flow thought this is critical. It also mimics how I visualize my own thoughts working. I've since tried personal wikis, which give you a lot of the same flexibility but lack the very fast keyboard entry for connections and nodes, and force you to think at the level "inside" the node by editing the page, whereas TheBrain allows you to think and work at the level of the topology, which is really effective.

I stopped using the software basically because of the overhead of trying to keep that futile mapping exercise up to date. Imagine if every idea and thought you had needed to be compulsively catalogued by hand into a program in order to keep the overall picture intact. It just doesn't work, at least not for me and not for long, and it detracts from my overall goals. I found that every time I opened my brain there was just too much catch-up work to do in order to get things synced up and in order.

The other thing that has changed for me since 2004 is search. Between Google and Spotlight on my Mac, I've basically stopped categorizing information the way I used to. It becomes less and less necessary to build nested categories of programs or emails or documents. Search exposes the entire hierarchy at a glance, in many cases keeping intact the context that would have driven the categories. There is still value in the information in that structure, but in a far less visible way.

Well, I've resurrected TheBrain and am using it in a new way that actually seems to be working for me. For starters, I don't keep any notes or real content in it; that's one of the clumsiest facets of the tool and almost seems like an afterthought. It really is all about the topology, which for me is fine since the bulk of what I need is often in dedicated stores (Subversion, SharePoint, Team Foundation Server). I've found any attempt to use the clumsy palettes and data entry forms to just be too cumbersome to bother with.

Focusing on the nodes and connections, and learning all the keyboard shortcuts, has enabled me to use TheBrain in a way that is a lot like how I sometimes use notepad when I need to brainstorm, except that rather than dashes, asterisks and plus-signs I'm using jumps, parents and children of small snippets of text. It's really a cool feeling being able to navigate this very large graph of interrelated concerns and ideas with very little effort. Wikis and other hypertextual forms have served me well too, but never with so little impediment.

For high-level thinking and free-form brainstorming this tool is the best I've used. And provided I keep it to just the free-flowing jumping and navigating, it's incredibly useful.

Simple Extensibility in .NET

I've used this approach a few times when I essentially need a really simple plugin / provider model within my applications, so I thought I'd jot down the relevant details here for posterity, using an old project for adding post-commit hooks to Subversion.

Consider this a somewhat simplistic approach, not suitable for production code without a bit more plumbing. If you are going all out and need true add-ins for your .NET based product, I recommend checking out the Managed Add-in Framework; very robust stuff and not that hard to implement. In a lot of cases though, the isolation, discoverability, communication pipelines etc. are a bit overkill. The example I'll show is a Subversion hook that allows for very simple addition of new .NET "actions" to execute on post-commit. In this case the "add-ins" are only written in house, and editing a config file to hook them up is completely acceptable, etc.

The solution

Subversion.Contracts : This project is the bridge between our dispatcher and the plugins that will do the work.
Subversion.Plugins : Any of the actions we wish to take post-commit are added here, but they could just as easily be distributed across as many assemblies and projects as necessary, as long as they reference the contracts.
Subversion.Dispatcher : This is the console application that actually receives the arguments from Subversion and translates them into our contracts, then executes the appropriate actions. (note: no references to the plugins project)


The Contract

The contracts are relatively simple, but whatever you put in them, this is the interface that the "plugin" will need to implement. In our case this is IPostCommitHandler:

using System;

namespace Subversion.Contracts
{
 public interface IPostCommitHandler
 {
  void ExecuteCommand(PostCommitArgs a);
 }
}

Pretty simple; essentially just a "do whatever you want" method that receives the arguments from Subversion wrapped up in a simple class. See the attached zip if you want the guts of the Subversion-specific stuff.


The Plugin

using System;
using Subversion.Contracts;

namespace Subversion.Plugins
{
 public class ExecuteForAllCommits : IPostCommitHandler
 {
  #region IPostCommitHandler Members

  public void ExecuteCommand(PostCommitArgs a)
  {
   SendEmailNotification.SendEmail(a.Argument, a.Revision);
  }

  #endregion
 }
}


Again, very simple. In this case we're passing off the execution to a static class that, again, is not shown, but what gets executed isn't all that important here... simply fill in what you need.

The Dispatcher (Plugin Host)

using System;
using System.Collections;
using System.Text.RegularExpressions;
using System.Configuration;
using Subversion.Contracts;

namespace Subversion.Dispatcher
{
 /// <summary>
 /// Summary description for PostCommit.
 /// </summary>
 class PostCommit
 {

  private static string subversionPath = ConfigurationSettings.AppSettings["SubversionPath"];

  static void Main(string[] args)
  {
   SubversionRevision rev = ParseRevision(args);
   ArrayList commands = DispatchGlobalCommands(rev);
   DispatchNamedCommands(rev, commands);
  }

  private static SubversionRevision ParseRevision(string[] args)
  {
   SubversionRevision rev;
   if (args.Length == 2)
   {
    rev = new SubversionRevision(subversionPath, args[0], args[1]);
   }
   else
   {
    rev = new SubversionRevision(subversionPath, string.Empty, string.Empty);
   }
   return rev;
  }

  private static void DispatchNamedCommands(SubversionRevision rev, ArrayList commands)
  {
   string[] commitLines = rev.CommitLog.Split(Environment.NewLine[0]);
   // Handle Named Commands
   string registeredCommands = String.Join("|", (string[])commands.ToArray(typeof(string)));
   Regex CommandSearch = new Regex(@"(" + registeredCommands + @")\s*:\s*(.+)?", RegexOptions.IgnoreCase);
   foreach (string line in commitLines)
   {
    string lowerline = line.ToLower();
    for (Match Matches = CommandSearch.Match(lowerline); Matches.Success; Matches = Matches.NextMatch())
    {
     string handlerString = ConfigurationSettings.AppSettings["command:" + Matches.Groups[1].ToString()];
     DispatchCommand(handlerString, Matches.Groups[2].ToString(), rev);
    }
   }
  }

  private static ArrayList DispatchGlobalCommands(SubversionRevision rev)
  {
   // Handle global commands 
   ArrayList commands = new ArrayList();
   for (int i = 0; i < ConfigurationSettings.AppSettings.Count; i++)
   {
    string key = ConfigurationSettings.AppSettings.GetKey(i);
    string val = ConfigurationSettings.AppSettings.Get(i);
    string[] cmdParts = key.Split(':');
    if (cmdParts.Length == 2 && cmdParts[0] == "command")
    {
     if (cmdParts[1].StartsWith("*"))
     {
      DispatchCommand(val, cmdParts[1].Substring(cmdParts[1].IndexOf(",") + 1), rev);
     }
     else
     {
      commands.Add(cmdParts[1]);
     }
    }
   }
   return commands;
  }


  /// <summary>
  /// Call the appropriate method for the command name given with the argument given;
  /// no processing of the argument happens here.
  /// </summary>
  private static void DispatchCommand(string handlerString, string argument, SubversionRevision rev)
  {
   // We don't want properly configured commands to stop working because of errors so trap
   // everything here...
   try
   {
    if (handlerString != null && handlerString.Length > 0)
    {
     string[] typeAndAssembly = handlerString.Split(',');
     if (typeAndAssembly.Length == 2)
     {
      System.Reflection.Assembly a = System.Reflection.Assembly.Load(typeAndAssembly[1]);
      System.Type t = a.GetType(typeAndAssembly[0], true);
      object handler = System.Activator.CreateInstance(t);
      if (handler is IPostCommitHandler)
      {
       ((IPostCommitHandler)handler).ExecuteCommand(new PostCommitArgs(argument,rev));
      }
     }
    }
   }
   catch (Exception) 
   { //TODO: log errors 
   }
  } 
 }
}


There is some plumbing in this class that isn't directly related to this post, but I've left it all in anyway. Subversion will run this command every time a check-in is made, and the process ends and starts over again each time. This allows for some pretty simple handling of loaded assemblies and whatnot; if you have a longer-running process or are dealing with some scale, be cautious. ;-)

The Main function has two jobs: parse and create the revision, then read the application configuration file and start issuing commands for the received revision. Commands come in two flavors: those defined in config to be executed always (global commands) and those that are interpreted from the Subversion commit log itself, parsed out and executed with arguments from the revision log.

Here are some example commands defined in the config
<!-- Commands -->
<add key="command:*,chris" value="Subversion.Plugins.ExecuteForAllCommits,Subversion.Plugins" />
<add key="command:*,check-ins" value="Subversion.Plugins.ExecuteForAllCommits,Subversion.Plugins" />
<add key="command:bug" value="Subversion.Plugins.UpdateBugTracker,Subversion.Plugins" />
<add key="command:cc" value="Subversion.Plugins.SendEmailNotification,Subversion.Plugins" />


  • In the key, "command:[name]" signifies a command arriving in a revision: somewhere in the revision log we'll see the command name followed by a colon, and anything following the colon is then passed to the plugin as an argument. If the name is an asterisk then we simply execute for all revisions, with an optional argument (the part after the comma) being passed to the plugin. (so the first example emails chris for all revisions, and the second emails an account named check-ins)
  • The value portion is what directs the program where to look for the appropriate plugin and class to execute. I copied the format I found in a web.config file, which is the class name followed by the assembly name, separated by a comma.
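To make that concrete, a commit log message like this hypothetical one (bug number and address are made up) would trigger UpdateBugTracker with "1234 fixed the null reference" and SendEmailNotification with the address, in addition to the two global commands that run on every commit:

Fix crash when the revision list is empty.

bug: 1234 fixed the null reference
cc: someone@example.com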

In retrospect if I were doing something similar again I'd probably create a better structured format rather than relying on all this string parsing... but old code is what it is in this case.

Finally we call DispatchCommand for each parsed-out command, which is the last piece of this old code that I'm attempting to document here for reuse. DispatchCommand will read the class name and assembly name, load the assembly, and attempt to instantiate the named class/type in order to call it through our IPostCommitHandler interface.

There are a few ways to do this, and for this project I'm simply calling "System.Reflection.Assembly.Load", which relies on the fact that my plugins are located in my bin directory. I've also done this using a "plugin store", which is a fancy way of saying I had a dynamic path configured that I could read my assemblies from. In that case you can use LoadFile or LoadFrom; LoadFrom will load dependencies automatically, while LoadFile loads just the assembly and will potentially load duplicate copies (see the documentation). In order to get the dlls in place for this project we simply add a post-build event like so...

copy $(TargetDir)*.* $(SolutionDir)\Subversion.Dispatcher\$(OutDir)
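As an aside, if you go the "plugin store" route mentioned above, the change inside DispatchCommand is small. A rough sketch; the "PluginDirectory" config key is made up for the example:

// Hypothetical alternative to Assembly.Load: read a plugin directory from config
// and load the assembly file from there. LoadFrom will also resolve its dependencies.
string pluginDir = ConfigurationSettings.AppSettings["PluginDirectory"];
string assemblyPath = System.IO.Path.Combine(pluginDir, typeAndAssembly[1] + ".dll");
System.Reflection.Assembly a = System.Reflection.Assembly.LoadFrom(assemblyPath);
System.Type t = a.GetType(typeAndAssembly[0], true);
object handler = System.Activator.CreateInstance(t);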

If, after instantiating the named type from the loaded assembly, we actually have an IPostCommitHandler, then make the call! Done.

System.Reflection.Assembly a = System.Reflection.Assembly.Load(typeAndAssembly[1]);
System.Type t = a.GetType(typeAndAssembly[0], true);
object handler = System.Activator.CreateInstance(t);
if (handler is IPostCommitHandler)
{
 ((IPostCommitHandler)handler).ExecuteCommand(new PostCommitArgs(argument,rev));
}

So that's that. You can download the code here; it should basically work as is if you are looking for a shortcut to extending Subversion with .NET. I was relatively lazy about getting this posted, so if you got this far, can use the code, and have problems with it, leave a comment and I'll try to help if I can.

Transcendent Man!

I read The Singularity Is Near last year and really enjoyed it, despite a few misgivings about Kurzweil's ego and some dubious use of statistics. One of the things I found myself really intrigued by was Kurzweil himself, and this movie looks like a fun look at the man and his ideas.



Do I believe him? Part of me wants to, definitely. The ultimate end-game of the singularity is fascinating and wondrous, but I actually found some of the more intermediate steps in his projections to be more fascinating. Maybe that's just a factor of what I can relate to. One example was the idea that nanotechnology will lead us to self-assembling products built from base materials and an instruction set transmitted as information. So much of my life is already so information focused that the idea of being able to go 100% information based, and the implications for how society is structured... it's mind-numbingly cool. Try to imagine how much energy, time and effort we put into moving goods around this planet and how incredible it would be for all of that to end.

Anyway, looking forward to renting this one when it becomes available.

From test spy to Verify() with Moq

Moq is now my favorite unit testing framework for .NET, and a great poster child for the power of the lambda expression support added to C#. If you are not doing unit tests or Test Driven Development you should be, and if you already are and have not checked out Moq, you should.

My tests previous to Moq were using NMock, a very handy tool that looks like a lot of other mock frameworks. In order to set up a mock call you would write something similar to this:

[Simple NMock example]
Mockery mocks = new Mockery();
IWidgetAdapter mockAdapter = mocks.NewMock<IWidgetAdapter>();

IList<Widget> mockWidgets = new List<Widget>();
Widget mockWidget = new Widget();
mockWidget.Name = "Mock Widget";
mockWidgets.Add(mockWidget);

Stub.On(mockAdapter).Method("LoadWidgets").WithNoArguments().Will(Return.Value(mockWidgets));
WidgetManager widgetManager = new WidgetManager(mockAdapter);

The ugliest thing in the expression above for me was the literal string that describes the method name that will be called. All of a sudden my fancy refactoring tools don't quite reach all of my code and things become brittle. Sure you say, but I run these tests all the time! So it is caught right away anyway right? Yeah, but who wants to be searching and replacing these values after every refactor? Just does not feel right.

Here's the Moq equivalent:

[simple Moq Example]
IList<Widget> mockWidgets = new List<Widget>();
Widget mockWidget = new Widget();
mockWidget.Name = "Mock Widget";
mockWidgets.Add(mockWidget);

Mock<IWidgetAdapter> mockAdapter = new Mock<IWidgetAdapter>();
// LoadWidgets takes no arguments here, matching the WithNoArguments setup above
mockAdapter.Setup(cmd => cmd.LoadWidgets()).Returns(mockWidgets);
WidgetManager widgetManager = new WidgetManager(mockAdapter.Object);

See that the "LoadWidgets" string disappears, and refactoring code now properly refactors the tests right along with it; very, very handy. Some find the need to add .Object when referencing the underlying mocked type annoying (on the call to WidgetManager), but personally I find this a small price to pay.

When I first started using Moq a few weeks ago I didn't go much beyond that example, which speaks well of Moq: it is VERY easy to get started without much effort, and the more advanced features really don't get in the way of the simple ones.

For a while I was able to do a lot of the testing I had in place by asserting on values I either had access to or that were being returned to me. In those cases where the values I needed were being returned to someone else (say a service, for example) I was in the habit of building stub classes (a Test Spy in this case) to handle the outgoing data.

So using the generic service as an example, and wanting to observe and assert that I am sending the correct requests to that service, my previous code would have looked something like this:

[Test spy example]
public class AuthenticationSpy : IAuthenticationService
{
 #region Test Helpers
 public IList<RequestContext> ReceivedRequestContexts = new List<RequestContext>();
 public AuthenticationResponse ExpectedResponse { get; set; }
 #endregion

 public AuthenticationResponse AuthenticateUser(AuthenticationRequest request)
 {
  return ExpectedResponse;
 }

 public AuthenticationResponse RenewAuthenticationTicket(RequestContext context)
 {
  this.ReceivedRequestContexts.Add(context);
  return ExpectedResponse;
 }
}

[TestMethod]
public void RenewExpiredTicketTest()
{
 // note: the generic type arguments were lost in the original post;
 // the service interface and argument types below are inferred from the surrounding code
 AuthenticationSpy _authenticationMock = new AuthenticationSpy();
 Mock<IRespondingService> _respondingMock = new Mock<IRespondingService>();

 Mock<IServiceProvider> mockServices = new Mock<IServiceProvider>();
 mockServices.Setup(cmd => cmd.GetAuthenticationService(It.IsAny<string>())).Returns(_authenticationMock);
 mockServices.Setup(cmd => cmd.GetRespondingService(It.IsAny<string>())).Returns(_respondingMock.Object);

 ServiceWrapper.Current.ServiceProvider = mockServices.Object;

 _authenticationMock.ExpectedResponse = GetGoodAuthenticationResponse(DateTime.UtcNow.Add(ServiceWrapper.Current.TimerSleepTimeSpan.Subtract(TimeSpan.FromMinutes(1))));

 // initialize will call authenticate() in the service wrapper
 ServiceWrapper.Current.Initialize("testing", "Password1", "http://auth", "http://resp");

 // now setup and call any method to trigger a renew of our now expired authentication ticket
 SetupCreateResponse(Guid.NewGuid());
 SurveyController.StartSurvey(new StartSurveyArgs());

 // confirm renew was actually called
 Assert.IsTrue(_authenticationMock.ReceivedRequestContexts.Count == 1);
}


This works, and in some cases the control given to you by your test spy can be really helpful, but if I can avoid it I will, every time. More classes and more code means more maintenance, even if it is in the test code. So I finally read the docs on the Verify() method on Moq mocks, and it is awesome. ;-) Here's the same code handled with Moq properly, without the need for a whole new class imitating the authentication service.

[using Verify example]
[TestMethod]
public void RenewExpiredTicketTest()
{
 // as above, the generic type arguments were lost in the original post and are inferred here
 Mock<IAuthenticationService> _authenticationMock = new Mock<IAuthenticationService>();
 Mock<IRespondingService> _respondingMock = new Mock<IRespondingService>();

 Mock<IServiceProvider> mockServices = new Mock<IServiceProvider>();
 mockServices.Setup(cmd => cmd.GetAuthenticationService(It.IsAny<string>())).Returns(_authenticationMock.Object);
 mockServices.Setup(cmd => cmd.GetRespondingService(It.IsAny<string>())).Returns(_respondingMock.Object);

 ServiceWrapper.Current.ServiceProvider = mockServices.Object;

 _authenticationMock.Setup(cmd => cmd.AuthenticateUser(It.IsAny<AuthenticationRequest>()))
  .Returns(GetGoodAuthenticationResponse(DateTime.UtcNow.Add(ServiceWrapper.Current.TimerSleepTimeSpan.Subtract(TimeSpan.FromMinutes(1)))));

 // initialize will call authenticate() in the service wrapper
 ServiceWrapper.Current.Initialize("testing", "Password1", "http://auth", "http://resp");

 // now setup and call any method to trigger a renew of our now expired authentication ticket
 SetupCreateResponse(Guid.NewGuid());
 SurveyController.StartSurvey(new StartSurveyArgs());

 // confirm renew was actually called
 _authenticationMock.Verify(cmd => cmd.RenewAuthenticationTicket(It.IsAny<RequestContext>()), Times.AtLeastOnce());
}

Check out Part 2 of the four-part series "Beginning Mocking with Moq 3", which gives a short description of how Verify works.

Not bad, eh? Again the power of the lambda expression jumps out at you: full IntelliSense and compiler support for describing exactly what you expect that method to receive. The "It" class allows for no description at all ("It.IsAny<T>()") or a very precise one, as above. The "Times" check also allows you to narrow down exactly how many calls you expect. Significant savings in code and maintenance, and it's actually using the testing framework as intended (imagine that)! My only slight annoyance so far is in having to keep count of the number of times a method has been called in order to check that the last piece of code actually resulted in a call, and not some code way earlier.
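For reference, the Times helpers give you quite a bit of precision on that last point. A few variations, reusing the same mock from the example above:

// must never have been called
_authenticationMock.Verify(cmd => cmd.RenewAuthenticationTicket(It.IsAny<RequestContext>()), Times.Never());

// must have been called exactly once
_authenticationMock.Verify(cmd => cmd.RenewAuthenticationTicket(It.IsAny<RequestContext>()), Times.Once());

// must have been called exactly twice
_authenticationMock.Verify(cmd => cmd.RenewAuthenticationTicket(It.IsAny<RequestContext>()), Times.Exactly(2));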

silverlight 3 - after the high

I failed to convince my manager at work that sending me and a few members of my team to MIX was a worthwhile expense in this economy. So instead I spent a couple of days this sprint with http://live.visitmix.com/ on one screen and Visual Studio on the other. I have to say, Microsoft did an amazing job with MIX in terms of getting me excited and having me "tuned in". If you are at all interested in web development on the Microsoft stack and haven't checked out the keynote, I'd recommend it. I really enjoyed Buxton's presentation and Guthrie was amusing.

So now that it's been a week, and "the Gu" and all those dancing flashy lights are no longer influencing my opinion... I'm STILL excited about Silverlight 3. Sadly the development tools can't be run in parallel with Silverlight 2, and we're near the end of our sprint so we can't afford the risk. Which is really too bad, because one of the things our current application leverages is the WCF duplex polling module, a lovely little COMET-like implementation for server push. The version of duplex polling that made it into the Silverlight 2 toolkit was a little more bare than your typical Microsoft module, and while it works pretty well, it leaves a lot of plumbing code in the hands of the programmer, specifically a lot of asynchronous channel handling code that is a bit of a pain to deal with (though a bit educational too). Anyway, this is one of the areas Microsoft is improving in Silverlight 3, and one of the things I'm excited about.

Right next to the simpler duplex polling usage for me is the introduction of binary serialization for web services (including duplex!). Compared to Flex and its myriad of tools and options for using AMF, Silverlight was really behind the ball on this one. When we eventually decided to build our tool in Silverlight as opposed to Flex, we basically committed ourselves to rolling our own binary serialization. I'm very happy we're not going to have to follow through on that. Read more from the web services team:

http://blogs.msdn.com/silverlightws/archive/2009/03/20/what-s-new-with-web-services-in-silverlight-3-beta.aspx
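For the curious, my understanding is that switching a Silverlight-accessible WCF service over to the binary encoding is mostly a server-side binding change. A sketch of the usual customBinding shape; the service and contract names are placeholders:

<!-- hypothetical server-side config: binary message encoding over HTTP -->
<bindings>
 <customBinding>
  <binding name="binaryHttpBinding">
   <binaryMessageEncoding />
   <httpTransport />
  </binding>
 </customBinding>
</bindings>
<services>
 <service name="MyApp.SurveyService">
  <endpoint address="" binding="customBinding"
            bindingConfiguration="binaryHttpBinding"
            contract="MyApp.ISurveyService" />
 </service>
</services>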


Another great addition in the realm of things-that-were-annoying-but-possible-and-already-in-Flex is the new navigation URI support within Silverlight 3. Check out Tim Heuer's typically great post on all the Silverlight changes here. (link specifically to the nav)


Lastly, rounding out my list of really exciting enhancements to SL3 are the network monitoring API, which gives developers events to subscribe to in order to detect when the network is and isn't present, and assembly caching, which is huge: it allows Silverlight to cache assemblies like the toolkit so that once a user has downloaded them they don't have to download them again until a new version is required. This in turn makes XAPs smaller, which is always a good thing.
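The network API itself is about as simple as it sounds; roughly this, going from my reading of the beta bits (treat the exact shape as approximate):

using System.Net.NetworkInformation;

// check availability on demand
bool online = NetworkInterface.GetIsNetworkAvailable();

// or react when the network state changes
NetworkChange.NetworkAddressChanged += (s, e) =>
{
 bool nowOnline = NetworkInterface.GetIsNetworkAvailable();
 // e.g. flip the UI into an offline mode, queue outgoing calls, etc.
};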


So to summarize, I think the top five features from the slew of enhancements that I'm looking forward to are : 

  1. Binary Serialization
  2. Duplex  polling enhancements
  3. Network detection API
  4. Assembly Caching
  5. Navigation and Deep Linking support

My perspective on Silverlight is very biased toward the needs of our application, of course. And our application will live and die on the network, with performance being a top concern in everything we do. Controls are nice, but we can buy those from vendors like Telerik; animation and media are cool for demos but likely won't do much for us in the short term. The out-of-browser story is huge, but again, with a SaaS app that relies on the network we don't envision a whole lot of offline work happening in the early versions of our app.


Honorable mentions go to GPU acceleration (performance), the SaveFileDialog (control) and Expression Blend 3. I don't use Blend much myself, but the current version is a huge pain for our team. Maybe more on that in a separate post.

decision making : flip a coin then check your gut

I once heard an interesting anecdote about how to make a difficult decision between two paths. When you find yourself spinning, alternating between one choice and then the other, it can be helpful to simply assign each choice "heads" or "tails" and flip a coin. When you reveal what side the coin landed on pay attention to your emotional reaction... are you relieved or are you disappointed?  Try it sometime, it really can work.

I recently spent about three weeks doing an in-depth analysis of Adobe Flex vs Microsoft Silverlight for an enterprise application, and I really feel like I ultimately decided via the coin flip method (without actually flipping the coin). Our company is about to embark on a new product aimed at the enterprise that will require levels of functionality and control that Ajax alone cannot provide. We are essentially looking to take a workflow that has been heavily dominated by Word and Outlook and drag it into the future with real-time collaborative tools in the spirit of Google Docs.

I ended up choosing Silverlight, despite the potential risk adoption may pose. At the end of the day we believe our target market will be willing to accept the Silverlight install process, and that the underlying engine (.NET) provides far more robustness for building the kind of application we're looking to build. Honestly this is a whole other post, but the nail in the coffin for Flex ended up being the lack of threading support for developers. On nearly every other level the two were neck and neck, with very subjective "wins" for either, and Flash being the clear winner when it comes to adoption.

What's interesting though is that my first choice was Flex. After weeks of agonizing I decided we needed to build this thing in Flex, working around the lack of threading where necessary and going with the safe route of next to zero adoption barriers. It only took a weekend after making that decision to flip-flop. I was supposed to be making the call as if this were my company on the line, and with a clear vision of the unknowable future... at the end of the day though, taking the safe and compromised route just didn't feel right. I could see the complexity of our application snowballing in the future, I could see the legacy of the Flash runtime catching up with us, I could see a competitor choosing to build their offering in Silverlight and spanking us in the next year. Making the decision from a technical standpoint, the only winner was Silverlight; if the business deemed the adoption risk too great then fine, we could do Flex. I was prepared for either.

My proverbial flip of the coin was that commitment to Flex: three weeks of opinions and research and testimonials and flame wars all gelled together once I had actually made the call. It was only then that my gut told me what I needed to know, and I have not looked back from Silverlight since.


Flex data services limitations (FlexBuilder generated wsdl code sucks)

The post linked below saved me a ton of time. It's a bit embarrassing, in my mind, for Adobe to ship something this buggy. I was seriously running into these issues within an hour of trying to connect Flex to our .NET SOAP based services.
"MyMethod can’t return an object of with the type name MyMethodResult."
You're fracking kidding me right?  Wow. (and there are more along these lines)

http://lukesh.wordpress.com/2008/11/24/very-important-limitations-of-flex-data-services/

After fighting with the above and other bugs, I was rewriting a lot of the generated code from FlexBuilder and it was just pointless. And sure, generated code isn't the greatest thing to rely on anyway, but give me a break. In the end I used the WebORB presentation server to handle the communication to our .NET code, as well as for generation of the initial proxy classes for the client, and I have to say it was an excellent experience compared to the crap built into FlexBuilder.

QWERTY Myth and the entrenchment of Flash

This is a great article about the myth that the best technology doesn't necessarily win. Granted, sometimes the best technology does not win, but there is a persistent and pervasive sense that the populace often chooses the "VHS" over the far superior alternative. The article addresses the VHS vs Beta debate directly, as well as the supposed victory of QWERTY over Dvorak. To encourage you to read the original I won't reveal the clever arguments made.

http://www.reason.com/news/show/29944.html   (Read Me!)

I'm posting this because there seems to be a real sense of fait accompli when it comes to the Flash vs Silverlight debate. Critical mass has already been achieved, so why would content producers or development shops choose to target any platform other than the Flash runtime when users have clearly already made their choice? How could Beta possibly make a resurgence against an already entrenched VHS? It would take an entire round of evolution before DVD would come along and supplant the status quo. There are a couple of reasons why this article has relevance for Silverlight, and why the VHS / Beta argument doesn't hold water.
  1. Flash vs Silverlight is about a producer investment in technology, NOT a consumer investment. Machines are powerful enough, and installations simple enough, that the relative cost of owning both technologies is nothing like owning two pieces of hardware.
    1. If there is a competitive advantage for a producer to be gained via a specific technology, they will use it. Any differentiator in a competitive field like software has a high potential of making a return. This is a very different decision process than it is for consumers.
    2. Consumers don't really care or even know which technology is driving their rich content. They care that it "just works" (like Flash based video in comparison to WMP or QuickTime) and that the functionality they desire is there. Without a right-click most users won't even realize which is which behind the curtain once they have both installed.
    3. "Owning" everyone (high adoption) is really not that big a deal when your competition can also have 100% adoption at the same time. This is not like choosing a computer or an operating system. Only Microsoft can prevent themselves from achieving their penetration goals.
  2. Better technology does win. I'm not saying that Silverlight is necessarily the better technology right now; Flash appears to maintain an edge in some specific rendering speeds, and their designer tools are clearly better... but Silverlight has the benefit of coming at this with second-mover advantage. They didn't start from scratch; they built out a proven technology (.NET) into new ground by largely copying and improving on the entrenched technology. (it sure looks copied from my perspective, but that's a different post) The .NET runtime, threading, compiled/managed code and the lack of legacy in Silverlight will all combine to produce demonstrations of browser based technology that will be extremely difficult and expensive to reproduce on the Flash runtime.
  3. Silverlight does not have to "kill" Flash to win, it only needs to join Flash in the 90% adoption numbers to be a great success.
I like both technologies by the way; I'm just entertained by some of the almost religious statements from those on the Flash side, which sound a lot like any attempt to improve on or even add to the status quo is a total waste of time. (or somehow an affront to their own efforts)

Silverlight controls

Silverlight 2 may not have the control set that Flex developers are used to seeing out of the box, but there are a significant number of control vendors stepping up to the plate to fill the void. It seems as though Microsoft's strategy has been to get the Silverlight 2 runtime out as quickly as possible (and as lean as possible), always knowing that this type of extension to the framework would exist.

I do still hope to see Microsoft push a little further on controls that are downloaded once and only once with the framework itself, thereby making our applications leaner - but it's a pretty serious tradeoff until the runtime has the kind of penetration that Flash enjoys.

Anyway, here's a nice post from Tim Heuer that does a good round up of where to find those missing controls.

http://timheuer.com/blog/archive/2009/01/28/comprehensive-list-of-silverlight-controls.aspx