1 TB drive won't format using Disk Utility

I just bought a 1TB external HD, the "Maxtor OneTouch 4 Plus", this weekend on sale at London Drugs. A bit of an impulse purchase, but I've been digitizing all of our DVDs lately into iTunes and had completely run out of space...

The drive has a bunch of automated backup features I'll never use, so I skipped all the software and went to use the drive directly from Mac OS X. The first step here is to convert the format of the drive from its default of NTFS to something the Mac can natively use. My impulse was to simply "format" and continue, but unfortunately every time I tried the format, Disk Utility would abort with nothing useful showing in the console logs. I went through this a few times with different file system settings and nothing worked. (See this ArsTechnica link for how to choose your filesystem.)

Odd.

I then tried to partition the drive, and two or more partitions would all of a sudden work. Again odd. I didn't want more than one partition in this case, but reverting back to one partition would cause the same problem all over again. I resorted to Google at this point and came across this very useful, although somewhat poorly formatted, post on the Seagate forums. (It seems to be a generic problem with Disk Utility.)

http://forums.seagate.com/stx/board/message?board.id=freeagent&thread.id=1062

The gist of the problem is that the default partition table format needs to be changed to GUID. You can apparently only achieve this by partitioning (in this case to two partitions) so you can change the setting for partition table format, and then partitioning again back to one partition with the new partition table format intact. Annoying, but easily worked around once you stumble on the right answer.

Note that this issue is actually new to 10.5.* and you can also solve the problem by formatting from an older version of Mac OS X (from a boot CD, for example).

FlexBuilder 3 First Impressions

Where we're coming from


So at the beginning of the year I was tasked with evaluating a number of technologies for RIA development for the next evolution of my company's product. Up to this point we had been relying extensively on ASP.NET forms with a traditional post-back model that was responsible for a lot of wasted time and bandwidth. We've leveraged a lot of Ajax in the past few years, starting with simple fixes like trees and list-based controls that use load-on-demand and going all the way up to full-fledged single-page applications that consume services exclusively.

This has worked, but the cost is overwhelming for a development team of our size and makeup. We hire smart generalists for the most part, favoring developers with C++/Java/C# backgrounds. Some of our developers have acquired deeper skills on the client side, but where possible we attempt to leverage control vendors like Telerik and ComponentArt as much as possible. They do an excellent job of hiding some of the complexity involved in cross-browser web interfaces, but you will inevitably have to "hit the metal" and get your hands dirty. Relying on third parties also removes a lot of the control needed to do things the way you need them done. Regardless, despite being a huge fan of the http://docs.google.com suite of tools, I have witnessed far too much ugliness in our organization with supporting multiple browsers (including having to support IE 6) and pushing the limits of complicated UI in the browser. As the size of the DOM increases and the size of our data sets increases, we see wild variance in client performance with respect to things like drag and drop. I know it can be done, I know we are not at the limit yet, but seriously this is not pragmatic for our software, our market and our developers. I am a big fan of the view that JavaScript is becoming the assembly of the web; those who do this shit well, do it well by lifting themselves out of the muck with good abstractions like GWT.

One thing I think I should add here in defense of Ajax, though: UI design plays a really important role in the effectiveness of the DHTML approach, and honestly I believe part of our problem has been designing a far richer interface than we could afford in the technology we were leveraging at the time. Take a close look at Google's lack of decoration, images etc. These things certainly matter.

Next steps... evaluation

Anyway, I'm getting off topic as usual. At the beginning of 2008 my feature matrix analysis narrowed our options from about a dozen technologies (including XUL, ActiveX, Applets, JavaFX, Silverlight, ClickOnce, Ajax, Flex) down to three: Silverlight, Flex or Ajax. At the time of my evaluation Flex was at version 2, JavaFX was vapourware and Silverlight 2 was in beta. Given that we are a .NET shop and already have the C# programmers, the Silverlight option was looking like it was going to cleanly win out over Flex. Ajax was honestly only at the table still because we needed to justify our position and show we had clearly evaluated all our options. Flex was seen as less desirable due to being based on ECMAScript and having to retool and retrain.

For the most part we've seen these as two relatively equivalent technologies with different stories for the developers. There are important differences between how code is delivered and executed in Flex vs. Silverlight, but at a high level we believe we can technically deliver our application in either technology very effectively. We prefer to keep working in C#, but the limited penetration of Silverlight is a serious risk for an application delivered in a SaaS model. That single fact has transformed the whole exercise into largely a business decision. I don't doubt Microsoft will be able to push their offering significantly, but I would not bet money on where they will be in one year. (Windows Media Player STILL doesn't equal Flash in penetration.)

Tool support however remains something that is extremely important to developers, and is one of those things Microsoft often trots out in arguing the superiority of their platform. We swallowed that line pretty easily at first; knowing that under the hood all the code written for Flex is just a variation of ECMAScript (JavaScript) was enough to scare us off. How can you achieve the refactorability and tool support provided by current and future versions of Visual Studio with a loosely typed language like ActionScript?

Trying it out


This week I downloaded FlexBuilder 3 after one of our senior executives set up a call for us with Adobe evangelists to get more details on why to go with Flex. Again, the motivation for this comes back to penetration and wanting to ensure we are making the right decision for what will become a million-plus-dollar initiative to re-engineer. I wanted to get some hands-on time with the latest version of FlexBuilder (3), which had come out since our initial research.

I was immediately surprised by the leaps Flex had taken since I last really dove in. I'll admit there was some bias here, though, as I am also a huge fan of Eclipse, so the fact that FlexBuilder is built on Eclipse is in my mind a huge win. (Not new, btw.)

The effort in actually building an application that connected to our existing .NET web services was embarrassingly trivial. FlexBuilder has a simple tool for generating and managing proxy classes to represent your web services. So after literally pasting a URL into a wizard I had code for talking to our .NET SOAP-based web services. (It seemed to only support SOAP 1.0, not 1.2.) I then got started with the Form Designer and had a simple application talking to our backend in under an hour, even counting the little things that tripped me up, like where to add my event handlers, which wasn't immediately apparent. (Too reliant on double-clicking controls apparently ;-) Hint: <mx:Script> tags and DOM-style event callouts.)
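For flavour, here's roughly what that wiring looks like in MXML; the WSDL URL and the GetStatus operation are invented for illustration, not our actual service:

<?xml version="1.0" encoding="utf-8"?>
<mx:Application xmlns:mx="http://www.adobe.com/2006/mxml" layout="vertical">
    <mx:Script>
        <![CDATA[
            import mx.rpc.events.ResultEvent;
            // event handler referenced from the WebService tag below
            private function onResult(event:ResultEvent):void {
                statusLabel.text = String(event.result);
            }
        ]]>
    </mx:Script>

    <!-- hypothetical SOAP endpoint -->
    <mx:WebService id="backend"
        wsdl="http://localhost/OurService.asmx?WSDL"
        result="onResult(event)"/>

    <mx:Button label="Call service" click="backend.GetStatus.send()"/>
    <mx:Label id="statusLabel"/>
</mx:Application>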

The concept of states in Flex, and the ease with which I was able to create a number of them in the designer and bind them to a dropdown for switching between them, was pretty eye-opening. A state in Flex is defined by the differences between your main UI (or just another state) and the state you wish to be in. The IDE allows you to visually manage these states and then visually modify each one to represent application states. I don't have an early sense of whether this actually scales for complex applications, but at first glance it's very cool. (Think hierarchical state machine.) Couple this with the data binding model and you have some very effective UI management tools at your disposal. Maybe this only looks cool coming from our antiquated ASP.NET approaches, but this stuff is exciting. (Silverlight/WPF have the same capability, maybe even a little more advanced, but with more overhead in my opinion.) Having your model drive all changes is so much more manageable, scalable... and just correct than having explicit assignments in page PreRender methods that set visibility based on the state of that model. Barf.
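In MXML a state boils down to something like this (a minimal sketch with invented names, just to show the shape of it):

<?xml version="1.0" encoding="utf-8"?>
<mx:Application xmlns:mx="http://www.adobe.com/2006/mxml" layout="vertical">
    <mx:states>
        <!-- "details" is defined purely as a diff against the base state -->
        <mx:State name="details">
            <mx:AddChild relativeTo="{mainPanel}">
                <mx:Label text="Only visible in the details state"/>
            </mx:AddChild>
            <mx:SetProperty target="{mainPanel}" name="title" value="Details"/>
        </mx:State>
    </mx:states>

    <!-- a dropdown bound to currentState switches between states -->
    <mx:ComboBox id="stateChooser" dataProvider="{['', 'details']}"
        change="currentState = String(stateChooser.selectedItem)"/>

    <mx:Panel id="mainPanel" title="Base state" width="300" height="150"/>
</mx:Application>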

The control toolkit out of the box with Flex is also extremely impressive. Check out this post for a list of all the FlexBuilder 3 controls included out of the box. For now at least this control set will mean being significantly more productive in the early stages of development than if we were either having to roll our own or rely on third party vendors. And of course you can roll your own in both Silverlight and Flex, and each can be just about anything imaginable.

So I'm sold, at least sold on the fact that Flex deserves considerably more attention than we had previously given it. I've bought the "Flex 3 Cookbook" and "Adobe Flex 3 Training From the Source", and I intend to spend at least some of this Christmas holiday catching up on just what's possible with that silly little Flash technology.


FlexBuilder 3 Controls

Controls included with FlexBuilder 3 out of the box are listed below... check out some third party components here.

Notes will be updated as I actually get a chance to put some of these to use.

FlexBuilder 3 Controls
Control Name Notes
AdvancedDataGrid  *Professional version only*
+ multi column sorting
+ grouping
+ tree view
+ printing support
-- Still no paging support out of the box
AlertControl  not sure how this gets grouped with these other controls as it is not an explicit control but a static method "show()" which can be used from wherever. 
Button  check out the FlexLib CanvasButton for a more flexible option
CheckBox  
ColorPicker  
ComboBox  
DataGrid  See this article for some hints on implementing paging
DateChooser  
DateField  
HSlider  
HorizontalList  
Image  
Label  
List  very fast and flexible, lots of issues with scrolling but workable
LinkButton 
NumericStepper  
OLAPDataGrid  *Professional version only*
PopUpButton  
PopUpMenuButton  
ProgressBar  
RadioButton  
RadioButtonGroup  
Repeater  SLOOOW when binding to many objects, look at List based controls instead
RichTextEditor  Limited html formatting, seems to work ok
SWFLoader  
Text  
TextArea  
TextInput  
TileList  see List; very fast control but takes a bit more work to get smooth
Tree  
VSlider  
VideoDisplay  FLV based video player with simple cue and playback control 
FlexBuilder 3 Chart Controls
AreaChart  
BarChart  
BubbleChart  
CandleStickChart  
ColumnChart  
HLOCChart  
Legend  
LineChart  
PieChart  
PlotChart  
FlexBuilder 3 Navigation Controls
Accordion  Great control, but needed to go custom almost immediately ( see FlexLib for custom header) 
ButtonBar  
LinkBar  
Menu  
MenuBar  
TabBar  
TabNavigator  
ToggleButtonBar  
ViewStack  
FlexBuilder 3 Layout Controls
ApplicationControlBar  
Canvas  
ControlBar  
Form  
FormHeading  
Grid  
HBox  
HDividedBox  
HRule  
ModuleLoader  not really a layout control? ModuleLoader allows you to load components of the application on demand, lowering initial download size and improving encapsulation
Panel  
Scrollbar
Spacer
Tile  
TitleWindow  
VBox  
VDividedBox  
VRule  

visual c++ lesson 0.0.0.0.1 precompiled headers

I come from a background of managed memory and interpreted languages. I'm a big proponent of pragmatic approaches to problems and as little re-inventing of the wheel as humanly possible. I don't think the world needs another text editor, and I personally don't feel the need to write my own version of the stack I rely on for application development. (.NET Framework and IIS)

This however gives me less credibility with all those "real" programmers out there. The ones who read assembly for fun and don't believe in memory management or virtual runtimes/machines.  I consistently find myself in battles with control freaks who argue that building an application on top of an application server like Tomcat or IIS is dangerous and excessive when it's so much simpler to just write your own daemon and connection handling.

Regardless. It is difficult to argue without having credible experience with the alternatives. Not only that, but a number of my dream jobs require extensive C/C++ knowledge (Google) and many important FOSS projects require the same. So I am finally diving in and (re-)learning some C++ with an initial task of writing an XML to ASN.1 converter. (don't ask why)

I'm doing this in Visual Studio 2008 with Visual C++, which as I'm learning has its own learning curve associated with it. First is the question of ATL, MFC, Win32 or just Blank. Visual Studio doesn't give you a whole lot of background on why you might choose one or the other, but some simple Wikipedia reading spelled out that for my project I wanted the simplest option. So I went ahead with Win32 Console as it seemed to have the least overhead and the easiest start.

(Why would anyone use the C++ CLR option?)

From here I moved to some very simple HelloWorld love with some file IO. I've taken a course that used C for half of the assignments so I am not completely new to this, but I definitely needed a reminder. Here again I was presented with another option: stdio or iostream? More Wikipedia love; apparently stdio is the old C way of doing things and iostream is the new object-oriented way of doing things. There seems to be a lot of contention still about which to use when, but for my purposes the stream approach seemed more appropriate.

And include how?

#include "iostream"
//OR
#include <iostream>
//OR
#include <iostream.h>

Well, Visual Studio doesn't allow the last one so that's easy. Either of the first two works, though, because the double-quotes option will check an implementation-defined location for the file before falling back to the same behavior provided by the angle brackets. Ok, one more down.

Don't forget that in Visual C++ adding the above include does not implicitly import the associated namespace. So in order to actually use cin or cout you need to either prefix every instance with std:: (std::cin, std::cout) or add the statement "using namespace std;" in order to use those identifiers normally.
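Putting those pieces together, a minimal stream-based hello world with a little file IO looks something like this:

#include <iostream>
#include <fstream>
#include <string>

int main()
{
    using namespace std;           // or prefix everything with std::

    ofstream out("hello.txt");     // write a line with the stream operators
    out << "Hello, world" << endl;
    out.close();

    ifstream in("hello.txt");      // read it back and echo to stdout
    string line;
    while (getline(in, line))
        cout << line << endl;

    return 0;
}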

The next couple hours were learning how to create classes, use namespaces and some simple iterative build up of some basic classes to represent the document I needed to import. Still barely outside the realm of a hello world really.

Now, for this little project I am actually more interested in the ASN part of the problem than I am in the XML parsing, so I looked for a parser. I came across the autumn project, which references a parser written by Dr. Ir. Frank Vanden Berghen; it was appealing as it is relatively small, portable and self-contained. You can find the files here.

Now for the fun part. When I attempted to compile, my newly added class was throwing a dozen or so errors that didn't exactly make sense to me. I was prepared for some pretty ugly work in trying to port this thing from GCC to the Microsoft compiler, so I didn't really question these errors. To begin with it was mostly types that were missing. So I would search for the definition of a type and find it in a header file that I was sure was being included. At this point I naively began to move code around in an effort to understand the error. Moving one struct from the header to the cpp file seemed to resolve one error and cause a few more. This seemed to validate my guess that the header was somehow not being included. I had no expectation that this should just work out of the box of course, so perhaps some of this code was just wrong. I started to chase down the parts of the code that were dependent on flags such as #ifdef WIN32. (I really like how Visual Studio grays out the code that will not be included based on those conditionals, very nice.)

This went on for maybe an hour before I was convinced that this had to be easier. Looking more closely at the build output rather than the error log (which should always be done much sooner than this) revealed this warning :


1>c:\documents and settings\c\my documents\visual studio 2008\projects\xmltoasn\xmltoasn\xmlparser.cpp(82) : warning C4627: '#include <Windows.h>': skipped when looking for precompiled header use
1>        Add directive to 'stdafx.h' or rebuild precompiled header


This is one of those steps where I know I should have asked more questions when I first started my project. The project by default included my "main" file, but it also included these two stdafx files (header and cpp) which I briefly looked at but didn't dig into. The comment at the top of stdafx.h shows this :


// stdafx.h : include file for standard system include files,
// or project specific include files that are used frequently, but
// are changed infrequently

Which, if you don't know what pre-compiled headers are, may not make a ton of sense. And it sounds like this is optional in any case. Well, it isn't, at least not if you have the build options on your project set to use precompiled headers, which by default I did. Simply adding "#include <windows.h>" to the stdafx.h file resolved all the problems. So in fact the xmlParser module WAS portable, and I just didn't have a clue.
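For reference, the fix boils down to something like this in stdafx.h (the stdio/tchar lines are roughly what the project wizard generates for a Win32 console project; the windows.h include is the addition):

// stdafx.h : include file for standard system include files,
// or project specific include files that are used frequently, but
// are changed infrequently
#pragma once

#include <stdio.h>
#include <tchar.h>

#include <windows.h>   // the header the compiler was skipping in xmlParser.cpp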

The other way to solve this problem is to actually change the precompiled headers setting for your project to not use precompiled headers at all.



So this was a bit frustrating, but all in all a good first foray into this shit, and I'm at least a few steps closer to having a program that actually does something.

ajax for mac lovers

Another Ajax Framework (or rather, an Application Framework):
http://cappuccino.org/
http://cappuccino.org/learn/

 Demo app built using it:
http://280slides.com/
 
And a teaser for all those interface builder lovers out there : 
I came across this in Reader this morning and was totally blown away by how well the 280 Slides application worked. It's really impressive... on non-IE browsers. I sent the link to a co-worker to check out and he basically dismissed it as too slow and unresponsive, "typical web app". It took a minute or two to realize the browser was the problem, at which point I launched the app in IE 8 Beta (IE 8!) and it performed terribly. It looked terrible, it was jerky and generally just a big letdown after the speed of Chrome.

Whatever the technical reasons are, it sucks. Let's hope the new generation of JavaScript engines (FF 3.1 and Chrome) is able to push Microsoft into stepping up. I'm excited about Flex and Silverlight and JavaFX, but really I want to believe we can keep pushing the browser without the plugin.


C++ linking

This is a post for myself, to basically bookmark the excellent work of someone else. My post is contributing practically nothing (maybe adding some context/weight for his article) but here it is anyway. ;-)

http://blog.copton.net/articles/linker/index.html

Despite not being an active user of C++ I really enjoyed this post. I actually feel a little smarter and better informed for having read it. Despite the mess in the C++ tool chain being described, this kind of reading actually makes me feel more inclined to dig into this stuff, not less. Anyway, filing this one away as something to potentially come back to.

Lunar Reconnaissance Orbiter

A good friend of mine, Sarah, believes that the moon landings were a hoax. Despite being a huge science geek, a fan of NASA and a member of the Planetary Society, she subscribes to the idea that man has not in fact walked on the moon, and that the entire thing was a lie perpetrated in an effort to win the political war with the USSR. Or something along those lines.

Somehow this deeply disturbs me. Anything coming from Sarah carries considerable weight, so I can't just discard her opinion. How could it be that the same person avidly following the Mars rover mission also believes that we couldn't land someone on the moon (or rather, didn't)? I can say that I have not watched/read/been brainwashed by the same materials that she has, but I'm expecting to be convinced to do so after this post.

Skepticism is vitally important to science, no doubt. I think scientific thinkers must challenge anything that doesn't make sense to them. I myself am open to hearing the counter arguments, and I even spent a half hour or so reading the respective wikipedia articles on the matter...

http://en.wikipedia.org/wiki/Apollo_Moon_Landing_hoax_accusations
http://en.wikipedia.org/wiki/Independent_evidence_for_Apollo_Moon_landings

I also think that while conspiracy theories are fun (X-Files, we miss you) they are also wrong 99.99% of the time. In a time when science is considered elitist and unnecessary in one of the most important political, economic and scientific centers on earth, and when NASA continues to face diminishing budgets and smaller mandates, it seems terribly unproductive to undermine the efforts being made by real engineers and scientists by giving credence to crackpot theories about hoaxes.

Consider how difficult it is to keep a secret. Imagine the incentive to those cynical people out there who wish to undermine the real achievements of science. Remember the movie Contact and Occam's Razor? Do we make hundreds of assumptions about the ability of hundreds of people in the government, at NASA and elsewhere to all keep a secret about the landings? That the telemetry mirrors left behind were placed by unmanned missions (and that we had robotics capable of that at the time)? That the long documentation trail left behind from all the steps leading up to Apollo 11 was somehow part of it? That the FIVE additional moon landings (after the public had already lost interest) were also faked just to add weight to the first faked landing? Or can we assume that no such plot exists and that NASA's account is roughly accurate? I realize you can flip this and play the other side, but you can read the articles for the details rather than me iterating through the arguments that have already been made on both sides.

I have to admit, I want to believe. I want to believe that we really did achieve the mantle of a moon landing. That we as a culture were able to step outside of our regular bullshit to come together and accomplish something truly spectacular for mankind.

LRO - finally time to shut up the crackpots

In any case, the only real reason I started this post, besides to provoke you, Sarah, was so that I could mention the upcoming Lunar Reconnaissance Orbiter, a mission I will really be looking forward to. It looks like we're going to get a lot more familiar with our friend the moon. This can only be a good thing as privatized space exploration steps up and produces more tourism and public interest in things beyond our humble planet. The moon may seem a bit provincial at this point, but if you were to seriously consider visiting it (when you make your millions on the internet) do you not get totally stoked? It seems like the next logical jumping point for our more grandiose visions. LRO is launching in early 2009 from Cape Canaveral, and the mission will include (from Wikipedia):


      • Characterization of deep space radiation in Lunar orbit
      • High-resolution mapping (max 0.5 m) to assist in the selection and characterization of future landing sites
And onboard instrumentation will most importantly include :
LROC — The Lunar Reconnaissance Orbiter Camera (LROC) has been designed to address the measurement requirements of landing site certification and polar illumination.[11] LROC comprises a pair of narrow-angle cameras (NAC) and a single wide-angle camera (WAC). LROC will fly several times over the historic Apollo lunar landing sites; with the camera's high resolution, the lunar rovers and Lunar Module descent stages and their respective shadows will be clearly visible. It is expected that this photography will boost public acknowledgement of the validity of the landings, and discredit the Apollo conspiracy theories.[12]

It will be nice to put this to rest. Long live Elvis.

What you are reading is consuming energy

Consumption is one of those things that is on my mind a lot. Both economically, as I aim to live debt-free and with as little "stuff" as really needed, and in terms of other forms of energy. Buy local, buy less packaging, drive less, eat less! It goes on and on.

One of the really interesting things with looking at Google App Engine is the metering and logging of your application. GAE has limits on how much disk, CPU and bandwidth your application can consume before you have to pay for those resources.

 
(some stats on a very infrequently used blogquotes app)


This model of utility computing is something that has been kicked around and toyed with for literally four or five decades. In the beginning it was envisioned that computing would be just like the electrical grid, and you would pay for computing resources in much the same way as you pay for power now. This was back when no one ever believed that a household could use or would need a computer that took up an entire room.

That of course all changed, and we drifted to the current state where everyone has their own (or three). Are we now drifting back to a utility model with the usefulness of having your data live in the cloud? I know I certainly care less and less about the machine I happen to be using when accessing my data. If the data is in the cloud and therefore accessible anywhere you go, and more importantly from any device you choose (iPhone!), then it naturally just makes sense to perform operations on that data within the cloud as well. Why bring it all down to the client to compute values? Why own multiple computers and have idle processors and half-empty disks? (Wish I had that problem actually.) I think it's a bit early to sound the death knell for the personal computer, far from it, but it certainly gets you thinking.

In terms of energy consumption these trends all have a significant impact on the overall picture. There is still an incredible amount of power on the client machine going largely untapped as clients get thinner (though of course that is reversing now too), while on the server side that effort is duplicated as the same poorly crafted functions do the work millions of times over for every page view.

There was a blog post in Jan 2007 that talked about how much energy would be saved if Google switched from white to black. This post evolved into a full-on article on the topic, and a website (Blackle) with a counter for energy saved.

This is all very interesting to me, but to get to my point... Using GAE and looking at very precise measurements of the resources my code and application are using was an incredible moment of perspective for me. Here I am, looking at a direct correlation between the algorithm I choose and a measurable amount of resources consumed by that decision. Amazing, really. This is just profiling in the aggregate, but it feels profound. Somehow being in the utility computing frame of mind and looking at my "bill" I am compelled to rethink every aspect of my design to find ways to use fewer resources. This can only be a good thing.

software fundamentals are exciting?

I came across a nice list of fundamental axioms of development on reddit this morning that made me a little pumped. Pumped because I'm in the middle of a big transition at work that in a lot of respects has me starting over with a new team and a new mandate.

I'll be focusing on solutions, custom work and a view towards short term revenue vs long term research and development for products. Given the economic climate, it's a shift I can understand and on a personal level one I'm looking forward to. I am saddened of course to be leaving the product I've spent the last four years working on, but at the end of the day software is software and this is going to be a big challenge for me.  

Here's the list (http://www2.computer.org/portal/web/buildyourcareer/fa035); highlights for me:
EF1. Efficiency is more often a matter of good design than of good coding. So, if a project requires efficiency, efficiency must be considered early in the life cycle.

Q4. Trying to improve one quality attribute often degrades another. For example, attempts to improve efficiency often degrade modifiability.

T1. Most software tool and technique improvements account for about a 5- to 30-percent increase in productivity and quality. But at one time or another, most of these improvements have been claimed by someone to have "order of magnitude" (factor of 10) benefits. Hype is the plague on the house of software.

T1 I believe, after having fallen for the tools pitch more than a few times. At the same time, though, I think one of the differences behind "great programmers are 30 times more efficient than mediocre programmers" comes down to mastery of the tool set. Watch a proficient developer fly through their code and it's easy to see. On the other hand I've seen excellent "users" who fly through a terrible design and become constrained by EF1.

Anyway, for me this is reminiscent of the Pragmatic Programmer list, which, as obvious as a lot of it is, really made me focus on the core of my craft. See Jeff Atwood's site for a quick reference if you have not seen this list before:

http://www.codinghorror.com/blog/files/Pragmatic%20Quick%20Reference.htm

While I'm at this, I've accumulated some Martin Fowler wisdom around estimates and scoping that I've been meaning to post about. Working in custom solutions will mean writing a lot of proposals and giving fixed-cost estimates, which is going to be a new game for me...

Martin Fowler: Estimates
http://www.martinfowler.com/bliki/ThrownEstimate.html   <-- Technical debt, casting quick estimates
http://martinfowler.com/bliki/XpVelocity.html    <- Nebulous Units of Time

and on dealing with fixed scope....
http://martinfowler.com/bliki/ScopeLimbering.html  <-- dragging clients towards a more agile process
http://martinfowler.com/bliki/FixedPrice.html
http://martinfowler.com/bliki/FixedScopeMirage.html

frequent beachballs in mac os x caused by bad fonts

I've been testing the latest nightly builds of Firefox 3.1 over the past few days and, while generally impressed with the performance improvements in JavaScript, was quite disappointed that it was causing my iMac to go into frequent hangs where I would see the spinning beachball of death for many seconds before I could continue working again. It became bad enough that I finally had to ditch my Firefox testing efforts.

Much to my chagrin the problem continued well after I had stopped using good ole Minefield. I began to explore running processes via Activity Monitor and just generally clean up my machine. But after uninstalling a bunch of apps and services I wasn't running anymore, I was still experiencing the same problem.

At this point I was worried, because the CPU was not the issue; iTunes would continue to play with no problem (and even respond to the hotkeys on my keyboard for switching tracks); it was just the UI that was freezing, and generally as I was opening files. Disk issue? Memory? Maybe even something where the network was introducing some latency? After some googling and consternation over my potentially failing disk I finally did what I should have done in the first place and started to dig into the system logs via Console.

Sure enough one group of messages stood out right away...

06/09/08 9:21:29 PM com.apple.ATSServer[14462] ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
06/09/08 9:21:29 PM com.apple.ATSServer[14462] 2008.09.06 21:21:29.63
06/09/08 9:21:29 PM com.apple.ATSServer[14462] ATSServer got a fatal error (status: -4) while processing a message (id: 20) from pid=14309.
06/09/08 9:21:29 PM com.apple.launchd[386] (com.apple.ATSServer) Throttling respawn: Will start in 10 seconds
06/09/08 9:21:39 PM com.apple.ATSServer[14465] ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
06/09/08 9:21:39 PM com.apple.ATSServer[14465] 2008.09.06 21:21:39.90
06/09/08 9:21:39 PM com.apple.ATSServer[14465] ATSServer got a fatal error (status: -4) while processing a message (id: 20) from pid=14309.
06/09/08 9:21:39 PM com.apple.launchd[386] (com.apple.ATSServer) Throttling respawn: Will start in 10 seconds
06/09/08 9:21:41 PM quicklookd[14463] [QL ERROR] 'Creating thumbnail' timed out for '<QLThumbnailRequest /Library/Fonts/LiberationSans-Regular.ttf>'
06/09/08 9:21:50 PM Console[14309] Failure with ATSFontGetUnicodeCharacterCoverage(). Disabling font fallback optimization for characters not renderable.
06/09/08 9:21:51 PM com.apple.ATSServer[14468] ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

So the ATSServer (Apple Type Services) was choking, and it looks like the beachballs I was seeing were related to the throttling of the respawn of the server. So I'd be staring at a beachball for up to 10 seconds while the rest of the OS hummed along fine. A quick Google search revealed the most common reason for this kind of failure is a corrupt font. At this point I shot around in my chair and asked my wife, pleadingly and only somewhat accusatorily, whether she had by any remote chance installed any fonts lately..... Yes. So I gave her control of my screen so she could clean up what had been added, and like magic the beachballs ended. No reboot or anything required, ATSServer has not crashed since and I am writing this in Minefield 3.1b1pre with no problems! (Sorry to blame you, Firefox.)

Sadly I do not have the patience to go through the exercise of finding which fonts specifically caused the problems. I really don't have much use for the extra fonts so I'm just as happy to have them all gone. Still hopefully this helps someone.

Now if I could just clean up all these damn mds errors that keep cropping up ...
mds[34]: (Error) Import: importer:0x84b600 Importer start failed for 501 (kr:268435459 (ipc/send) invalid destination port)

From Chrome with Love

A million bloggers are all posting on the same topic, so why shouldn't I join in? The Google Chrome team has got to be enjoying themselves right now. I read the comic yesterday and really enjoyed it. Seeing a company I can't help but admire sit down and rethink the browser in so thorough a manner is inspiring. Even just the QA involved is pretty damn impressive.

I've been using the Chrome browser for a day now and have to say I'm pretty happy with it. I can already feel a need for some of the Firefox extensions that I rely on so heavily, but at the same time I feel more productive and less distracted in this browser than I do in Firefox. It's FAST, really fast on my machine at work. I can't wait until this is available for my Mac.

I really enjoyed this article from John Siracusa, it sums up nicely what I find so inspiring about this and points out the real motivation for Google to create yet another browser.

http://arstechnica.com/staff/fatbits.ars/2008/09/02/straight-out-of-compton

(I'm almost regretting my recent commitment to developing an RIA in silverlight! Where oh where are my first class developer tools for browser based development.... )

Regression Ratios

Regression is a nasty issue. Ongoing regression from bug fixes can be a pretty clear indicator that there are some serious problems with your code base, your process, your team or all of the above. As an example, consider the effect of moving from 1 in 5 of all bug fixes causing an unrelated issue to crop up down to 1 in 10. Given an imaginary scenario where 1000 bugs are found (a medium-sized project) and a team is closing 20 of those bugs a day, then in an extremely simple model the 1-in-5 ratio means 200 regression bugs, or ten working days of extra fixing: we have just added two weeks to our timeline simply from regression issues, where the 1-in-10 ratio would have cost us one. (Handy chart from Google Spreadsheets below.)
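The arithmetic behind that chart is trivial; here's a throwaway sketch of the simple (non-compounding) model, using the numbers from the scenario above:

# simple regression-cost model: no compounding of regressions-on-regressions,
# which in reality makes things slightly worse
bugs_found = 1000      # bugs logged for the release
fix_rate = 20          # bugs the team closes per day

for ratio in (5, 10):  # 1 in 5 vs 1 in 10 fixes causing a regression
    regression_bugs = bugs_found / ratio
    extra_days = regression_bugs / fix_rate
    print("1 in %d: %d regression bugs, ~%d extra working days"
          % (ratio, regression_bugs, extra_days))

# 1 in 5:  200 regression bugs, ~10 extra working days (two weeks)
# 1 in 10: 100 regression bugs, ~5 extra working days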



Of course it's much more insidious than that in the real world. For one, regression bugs only pop out at you the minute they are introduced if you're lucky (i.e. they are obvious). More likely is that about half of them will show themselves and the other half will start cropping up near the release date during regression passes. If you alter the model above to reflect that, you'll see little bulges near the end of the line that start to make timelines feel very elusive and untrustworthy. And given that no QA process is perfect, you have to believe that whatever is causing the bugs you are seeing is also causing bugs that are yet to be found.

The importance of quality can never be overstated, and poor quality can have a really dramatic impact on a team as well as your customers. Here you are, slaving over your own code, crafting a well thought out and tested solution to a problem, only to have someone else come along and break your code with their fix. Here you are, proudly about to bring a feature out of the QA phase and into the hands of customers, and all these elements that you had already seen work are failing. How quickly your trust in your fellow developer begins to wane.

Here though is a problem that I have come to see as truly systemic. Yes there are horrid developers, and yes there are some difficult technologies, and yes some level of regression is inevitable... but levels of regression like those that I've seen can usually be traced back to process, to your design and architecture, to your requirements and most importantly to your culture.

In terms of process there are some stock answers to bringing this number down, like TDD and constant refactoring. I believe in both of these, but in my experience they can be difficult to infuse into a culture. Without full buy-in and a cultural shift in developers these are just more TLAs to throw on the steaming pile of buzzwords that will help you magically improve.

Having just come to the end of a release of our product in the past few weeks I am looking forward to whittling away at our own regression ratios.

Changes we're making :

  • No more sharing of the bug load. Every developer owns their module/feature/code from design through to maintenance. This includes assigning ownership of old modules whose owners may no longer be around. If you write bad code, you will fix more bugs. If you fix more bugs, you will have less time to spend on new bug-ridden features. This will mean our actual bug trends won't be the pretty descent we've worked so hard on achieving, but the overall effect of a self-correcting system should help. 
  • More atomic checkins. Checkins that span branch points, or handling merges of bugs can be extremely error prone with subversion and our current branching strategy. We hope to address this somewhat by eliminating bug fixes that span multiple revisions... Ideally this will involve :
    • Patch files attached to the bugtracker for code reviews to avoid multiple checkins when reviews spot errors
    • When changes still need to be made to a bug because something is caught in QA, we also revert the original checkin and check in again with all changes at once in the new revision. 
    • Bugs will be closed diligently and new bugs opened rather than morphing the bug over time as new side effects or slightly related bugs are spotted. 
  • More automated testing. This is last on the list not because it's least important, but because we don't expect as quick an impact as with the other two steps. A large number of the issues we run into remain UI issues. And while WatiN does do wonders for us (a typical test is sketched just after this list), it's painstakingly slow to get the number of tests to where they should be (and keep them up to date). 
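For anyone who hasn't seen WatiN, a test is roughly this shape; the page, field names and credentials here are invented for illustration, not our actual suite:

using NUnit.Framework;
using WatiN.Core;

[TestFixture]
public class SmokeTests
{
    [Test]
    public void CanLogInAndSeeDashboard()
    {
        // drives a real IE instance against a hypothetical login page
        using (var browser = new IE("http://localhost/ourapp/login.aspx"))
        {
            browser.TextField(Find.ByName("username")).TypeText("qa_user");
            browser.TextField(Find.ByName("password")).TypeText("secret");
            browser.Button(Find.ByValue("Log in")).Click();

            Assert.IsTrue(browser.ContainsText("Dashboard"));
        }
    }
}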
Lastly, just for interest, here's the graph of how we've actually done. The black shading is high-severity bugs. The first mountain was the previous release, where we amalgamated all previous releases into this one (migration of existing clients); we had some pretty serious regression in that release after Christmas. The second range is the most recent release, which is looking amazing in comparison, though at this resolution you are missing some definite climbs amongst that fall. Definite progress anyway.... 


Documenting architecture

One of my side projects at work right now is documenting the architecture of a product that has already been built but will be going through a re-architecting, with a focus on a more robust schema and on applying some of what we've learned in discovering exactly how our product is being used and the ways in which our users want to extend the platform. SaaS and SOA are two good buzzwords we'll be throwing around a lot, although to be honest we've been in the SaaS model for years now, just not following all of the best practices. (For examples, check out LitwareHR.)

So despite documentation being at the heart of the architect's role, I find it extremely difficult to find good documentation on how to approach a task like this. I have Craig Larman's book Applying UML and Patterns, which I've enjoyed, but I still find myself grappling with where to even begin sometimes.

These articles on IBM developerWorks have been good reads for this and I'd recommend giving them a read if you are facing similar challenges.

Part 1
http://www.ibm.com/developerworks/library/ar-archdoc1/index.html?S_TACT=105AGX20&S_CMP=EDU

Part 2
http://www.ibm.com/developerworks/library/ar-archdoc2/index.html?S_TACT=105AGX20&S_CMP=EDU

Part 3
http://www.ibm.com/developerworks/library/ar-archdoc3/index.html?S_TACT=105AGX20&S_CMP=EDU

I'm assuming there will be more of these which I'm looking forward to.

Networks, Assignment 1

I just finished my first assignment in a beginning networking course I'm taking and I am so far pretty impressed with how interesting this stuff is. I have a working knowledge of networking that includes a decent understanding of the application layer, high-level knowledge of the transport layer and basically just awareness of the link layer. It's pretty rare in my position as a developer that I need to answer questions about the link layer. (Thank you, my friends in IT.)

Some of the questions are actually kind of fun in that they had me visualizing data flowing through networks in ways I had not before. For example, given a link between two hosts X km apart, with a transmission rate of R and a propagation delay of N....

      2.4.d What is the width (in meters) of a bit in the link? Is it longer than a football field?

Kind of useless, but super fascinating at the same time, imagining the physical manifestation of all this work I do day in and day out. Pulling these bits from all over the world is so effortless, so fast and so transparent that it's easy to forget the actual resources behind it.
The football field question actually relates to a pretty interesting concept called the bandwidth-delay product, which refers to the amount of data that exists "on the wire" or "on the air" at any given moment: data that has been sent but not yet acknowledged. It's helpful in determining minimum buffer sizes for receivers and transmitters over a given link.

http://en.wikipedia.org/wiki/Bandwidth-delay_product
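Just to make the idea concrete, here's the arithmetic with made-up numbers (the assignment's actual values for X, R and N aren't reproduced here):

# width of a bit and bandwidth-delay product, with invented example numbers
propagation_speed = 2.5e8     # m/s, a typical figure for copper or fibre
link_length = 20_000          # metres between the two hosts (X)
rate = 10e6                   # transmission rate in bits/s (R)

bit_width = propagation_speed / rate           # metres of link occupied by one bit
prop_delay = link_length / propagation_speed   # propagation delay in seconds (N)
bdp = rate * prop_delay                        # bits in flight at any instant

print("one bit spans ~%.0f m" % bit_width)          # 25 m -- shorter than a football field
print("bandwidth-delay product: %.0f bits" % bdp)   # 800 bits on the wire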

Another element to this first assignment was to set up Apache as a proxy server on your local machine, which was a bit surprising. I assumed at first that the assignment meant Squid, but no, apparently Apache itself can be configured to be a proxy server for a number of protocols, including both FTP and HTTP traffic. There are numerous articles out there on using it as a personal ad blocker, or as a caching server.

For reference, this is what I had to do to httpd.conf to make it work:

# load the proxy and disk cache modules (not enabled by default)
LoadModule disk_cache_module modules/mod_disk_cache.so
LoadModule proxy_module modules/mod_proxy.so
LoadModule proxy_http_module modules/mod_proxy_http.so
LoadModule proxy_connect_module modules/mod_proxy_connect.so

# turn on forward proxying, but only allow clients from the local subnet
<IfModule mod_proxy.c>
ProxyRequests On
<Proxy *>
Order deny,allow
Deny from all
Allow from 10.0.1.2/255.255.255.0
</Proxy>
</IfModule>

# cache proxied responses on disk
<IfModule mod_disk_cache.c>
CacheRoot "c:\apachecache\"
CacheDirLevels 5
CacheDirLength 3
</IfModule>
Interestingly, if you get that configuration wrong, you actually get a big "It works!" page shown in your browser for any page you try to visit. Go figure. My mistake at that point was just not having uncommented the right modules, so Apache was serving the "It works!" page rather than attempting to proxy my request. 

blogquotes prototype is working

I'm supposed to be studying for a challenge exam I'm writing this week in "Advanced" Operating Systems. Instead I spent a good chunk of the day today working on blogquotes in between watching/playing with my daughter.

I can justify the time spent because I did all of this work exclusively from a bash shell using vi, to refresh myself on some of the content for the course. In this session I was able to use wget, grep, awk, vi, a shell script and some file permissions. I'm not so sure that will get me through the exam, but it was fun and I was finally able to put some time towards my random quote include for the blog.

You can see the quotes being pulled in now in the top right-hand corner of this page. So far only my wife and I are using it, as this is very much a proof of concept. It works basically from end to end, but without a lot of the features that are going to be necessary as this grows: paging, searching, caching, tags, some UI polish, and some testing. It's very gratifying though to reach this first phase and actually get something working. Adding features will probably happen a lot quicker now that I actually have a user. ;-)

I also took the opportunity to try out some of Yahoo's client-side APIs in YUI. Currently I'm using XHR, Layout and DataTable. I was amazed at how quick it was to basically "assemble" my application. Google's App Engine makes the CRUD a total cake-walk, and Yahoo's user interface library has no dependencies on server-side code but works seamlessly with a JSON-backed RPC scheme in Python. It's a whole new world! Now, as long as my application doesn't get popular and I don't have to start paying for resources. ;-)

Some interesting snags while working on this latest revision:
  • Randomly selecting an entity in GQL
  • Django Utils simplejson can't serialize Google's db.Model classes, so I had to proxy my model class to a simpler structure that I serialize to the client via JSON and XHR (a rough sketch follows after this list)
  • No unique IDs for Google user accounts; all you have is the email address, which isn't exactly something people are going to appreciate me passing around on URLs for the random inclusion widget. The solution was simple for what I needed: a user preference entity keyed on Google's User db type and storing a UUID as the publishingKey. That ID now becomes my unique ID, which won't change even if you change your Google account name
  • Google has a very cool AJAX Library SDK for sharing hosting/serving up of all the most popular frameworks like dojo and jquery
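Here's roughly what that model-to-dict proxying looks like; the Quote model and field names are invented for illustration, not the actual blogquotes code:

from django.utils import simplejson
from google.appengine.ext import db

class Quote(db.Model):
    text = db.StringProperty()
    author = db.StringProperty()

def quote_to_dict(quote):
    # db.Model instances aren't JSON-serializable, so copy the fields
    # we care about into a plain dict first
    return {'text': quote.text, 'author': quote.author}

quotes = Quote.all().fetch(10)
payload = simplejson.dumps([quote_to_dict(q) for q in quotes])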
I think I'll use Google Code to start some documentation around bugs and features for this tool rather than just the blog, so look for those details elsewhere.

code samples in blogger are a pain

I can't say I enjoy writing in the Blogger post interface; in fact it's pretty frustrating. For a while there I was using Google Docs to write posts (which I loved), then I would just publish to my blog. That actually worked great until the actual publishing step, which doesn't allow you to control the title very effectively and totally messed up my RSS feed even if I did fix the title. Then I tried ScribeFire, which again was really promising, but it has a cramped UI and again the publishing process was really clunky for my workflow. (Things remain drafts for me for weeks at a time.)

Anyway, I'm looking at my last post and those code samples are embarrassingly poorly formatted. Not only that, but if you check the source, the Blogger editor is introducing tons of HTML space entities, which drives me nuts considering I'm using whitespace:pre on my blockquotes anyway.

I'm really inclined to just use the tools I have when it comes to this site, primarily so that I focus on writing and not tinkering. Since moving my website from a hosted environment to Blogger I have actually started to focus again on my writing and my projects rather than tinkering with a wheel that's been built a thousand times (photo gallery scripts, PHP and Perl CGI trickery for mundane templating, etc.). So while I will probably end up spending time on this at some point, I really just want to find something that "just works" for showing code in blog posts. More to come I'm sure.

Microsoft's add-in framework and the need for diligence

We've recently put Microsoft's managed add-in framework (part of .NET 3.5) to very effective use building a plug-in system for a large ASP.NET application at work. Essentially the framework allows other developers (and our own team, for out-of-stream releases) to develop new functionality for our platform that runs the entire life-cycle for a given widget. In our case, for this particular widget, we're talking about plugins being responsible for up to 4 ASP.NET controls in different contexts (for example data collection and reporting as two separate controls) as well as a script injection point where plug-ins are able to extend the scriptability of our platform. 
Going with the framework gave us a few things we didn't have with our original design for the add-ins. 
  1. Tools to help enforce the pattern
  2. An extra layer of versioning over the somewhat naive approach we started with
  3. Built-in discovery, provisioning, and a communication pipeline for serializing types and calls across the contracts that make up the interface between host and plugin (a rough sketch of the host-side discovery and activation follows after this list)
  4. And last but not least support from Microsoft. This is somewhat more minor than the points above, but it helps legitimize our design when we are following the best practices laid out by Microsoft and used by others in similar situations. The documentation and training available also make getting other developers up to speed on the framework that much easier.
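For context, the host side of that discovery/activation story looks roughly like this; the pipeline path is made up, and the IExtension host view matches the sample contract shown later in this post:

using System;
using System.AddIn.Hosting;
using SimpleExtensionContracts.HostViews;

class ExtensionHost
{
    static void Main()
    {
        // root folder containing the AddIns, AddInViews, Contracts,
        // AddInSideAdapters and HostSideAdapters directories
        string pipelineRoot = @"C:\MyApp\Pipeline";   // hypothetical path

        AddInStore.Update(pipelineRoot);              // rebuild the pipeline caches
        var tokens = AddInStore.FindAddIns(typeof(IExtension), pipelineRoot);

        foreach (AddInToken token in tokens)
        {
            // each addin is activated in its own isolated AppDomain
            IExtension extension = token.Activate<IExtension>(AddInSecurityLevel.Internet);
            Console.WriteLine(extension.MenuText);
        }
    }
}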
There have been numerous challenges in using the framework, but perhaps the most surprising of all for me was the human element and how simple it became over the life of the project to break the pattern by coupling components across or outside the pipeline.

Examples :
  1. Referencing an assembly from both the addin and the host that shared code that should have been passed across the pipeline. 
  2. Bypassing the pipeline completely by calling web services from the addin code (client side or server side calling code) 
  3. Conditional code in the host making decisions based on the type of the addin
  4. Loose coupling based on common knowledge (that shouldn't be common) 
These all basically come down to a breach of contract or an absence of contract for various operations that we needed addins to handle. On some level all of these things can be excused and safely done without compromising the framework if they are done right. It's a slippery slope though and requires a commitment to not be lazy to avoid the temptation to sidestep the pipeline.

In the case of #1 above the shared assembly started off very benign: essentially some shared utility code for handling URLs and some common resource tasks. Why rewrite when that code already existed in the main project? Break it off from the project so that it has no dependencies, then drop it in. Except that slowly the terrible pain of building contracts, views and adapters for every little interface or interface change drives you towards shortcuts. "Oh, I'll just put this code here to test and then fix it later." Even worse are those cases where you've chosen the path of least resistance in dealing with a bug resulting from unexpected behavior with serialization across the pipeline. It only took a few weeks of not being completely on top of this before I discovered our project was littered with types that were being shared directly between host and addin. Any change meant a recompilation of both projects, completely defeating the purpose.

#2 is a legitimate need in our scenario, and we've found ourselves needing to create proxy services that wrap our own services just to protect against the inevitable change that will follow. Given that third party developers may be writing code for the platform, we have to make an effort to protect against change in all of our interfaces, web service or otherwise. In retrospect I think it would have made more sense to strictly enforce a team division so that no one writing addin code was also writing host code. This probably would have gone a long way to preventing these types of problems.

#3 and #4 are a little more insidious and harder to spot without strict code review. #3 for us isn't technically breaking anything in terms of the interface or future versioning, but it adds cruft and generally points to a missing method or property on the interface. The last thing you need as the host is to have case statements littered throughout your code looking for specific addins. #4 took many forms, and in some cases it's fine. An OK example might be sharing enums, provided they are defined in the contracts; slightly worse, but still workable, is something like a shared utility class. A not-OK example for me was code like this:  extension.GetSetting("Menu_Text");  which in this case has two errors. One, "GetSetting" shouldn't really exist, because how an addin chooses to configure itself should be invisible to the host. Two, this code depends on the addin having a value defined in its config file for the key "Menu_Text". This is next to impossible to enforce and can of course easily break.

Replacing this with extension.MenuText should be trivial, and a no-brainer. When we started using the framework back in December we were rolling the supporting code by hand. To give you a sense of what this entails, this is how you would define an extension whose only job is to return MenuText as in the code above:

IExtensionContract.cs
using System.AddIn.Pipeline;
using System.AddIn.Contract;

namespace SimpleExtensionContracts
{
 [AddInContract]
 public interface ExtensionContract : IContract
 {
  string MenuText { get; set; }
 }
}

IExtension.cs
namespace SimpleExtensionContracts.AddInViews
{
    
    [System.AddIn.Pipeline.AddInBaseAttribute()]
    public interface IExtension
    {
        string MenuText
        {
            get;
            set;
        }
    }
}

IExtension.cs
namespace SimpleExtensionContracts.HostViews
{
    
    public interface IExtension
    {
        string MenuText
        {
            get;
            set;
        }
    }
}

IExtensionContractToViewHostAdapter.cs
namespace SimpleExtensionContracts.HostSideAdapters
{
    
    [System.AddIn.Pipeline.HostAdapterAttribute()]
    public class IExtensionContractToViewHostAdapter : SimpleExtensionContracts.HostViews.IExtension
    {
        private SimpleExtensionContracts.ExtensionContract _contract;
        private System.AddIn.Pipeline.ContractHandle _handle;
        static IExtensionContractToViewHostAdapter()
        {
        }
        public IExtensionContractToViewHostAdapter(SimpleExtensionContracts.ExtensionContract contract)
        {
            _contract = contract;
            _handle = new System.AddIn.Pipeline.ContractHandle(contract);
        }
        public string MenuText
        {
            get
            {
                return _contract.MenuText;
            }
            set
            {
                _contract.MenuText = value;
            }
        }
        internal SimpleExtensionContracts.ExtensionContract GetSourceContract()
        {
            return _contract;
        }
    }
}

IExtensionHostAdapter.cs
namespace SimpleExtensionContracts.HostSideAdapters
{
    
    public class IExtensionHostAdapter
    {
        internal static SimpleExtensionContracts.HostViews.IExtension ContractToViewAdapter(SimpleExtensionContracts.ExtensionContract contract)
        {
            if (((System.Runtime.Remoting.RemotingServices.IsObjectOutOfAppDomain(contract) != true) 
                        && contract.GetType().Equals(typeof(IExtensionViewToContractHostAdapter))))
            {
                return ((IExtensionViewToContractHostAdapter)(contract)).GetSourceView();
            }
            else
            {
                return new IExtensionContractToViewHostAdapter(contract);
            }
        }
        internal static SimpleExtensionContracts.ExtensionContract ViewToContractAdapter(SimpleExtensionContracts.HostViews.IExtension view)
        {
            if (view.GetType().Equals(typeof(IExtensionContractToViewHostAdapter)))
            {
                return ((IExtensionContractToViewHostAdapter)(view)).GetSourceContract();
            }
            else
            {
                return new IExtensionViewToContractHostAdapter(view);
            }
        }
    }
}

IExtensionViewToContractHostAdapter.cs
namespace SimpleExtensionContracts.HostSideAdapters
{
    
    public class IExtensionViewToContractHostAdapter : System.AddIn.Pipeline.ContractBase, SimpleExtensionContracts.ExtensionContract
    {
        private SimpleExtensionContracts.HostViews.IExtension _view;
        public IExtensionViewToContractHostAdapter(SimpleExtensionContracts.HostViews.IExtension view)
        {
            _view = view;
        }
        public string MenuText
        {
            get
            {
                return _view.MenuText;
            }
            set
            {
                _view.MenuText = value;
            }
        }
        internal SimpleExtensionContracts.HostViews.IExtension GetSourceView()
        {
            return _view;
        }
    }
}

IExtensionAddInAdapter.cs
namespace SimpleExtensionContracts.AddInSideAdapters
{
    
    public class IExtensionAddInAdapter
    {
        internal static SimpleExtensionContracts.AddInViews.IExtension ContractToViewAdapter(SimpleExtensionContracts.ExtensionContract contract)
        {
            if (((System.Runtime.Remoting.RemotingServices.IsObjectOutOfAppDomain(contract) != true) 
                        && contract.GetType().Equals(typeof(IExtensionViewToContractAddInAdapter))))
            {
                return ((IExtensionViewToContractAddInAdapter)(contract)).GetSourceView();
            }
            else
            {
                return new IExtensionContractToViewAddInAdapter(contract);
            }
        }
        internal static SimpleExtensionContracts.ExtensionContract ViewToContractAdapter(SimpleExtensionContracts.AddInViews.IExtension view)
        {
            if (view.GetType().Equals(typeof(IExtensionContractToViewAddInAdapter)))
            {
                return ((IExtensionContractToViewAddInAdapter)(view)).GetSourceContract();
            }
            else
            {
                return new IExtensionViewToContractAddInAdapter(view);
            }
        }
    }
}

IExtensionContractToViewAddInAdapter.cs
namespace SimpleExtensionContracts.AddInSideAdapters
{
    
    public class IExtensionContractToViewAddInAdapter : SimpleExtensionContracts.AddInViews.IExtension
    {
        private SimpleExtensionContracts.ExtensionContract _contract;
        private System.AddIn.Pipeline.ContractHandle _handle;
        static IExtensionContractToViewAddInAdapter()
        {
        }
        public IExtensionContractToViewAddInAdapter(SimpleExtensionContracts.ExtensionContract contract)
        {
            _contract = contract;
            _handle = new System.AddIn.Pipeline.ContractHandle(contract);
        }
        public string MenuText
        {
            get
            {
                return _contract.MenuText;
            }
            set
            {
                _contract.MenuText = value;
            }
        }
        internal SimpleExtensionContracts.ExtensionContract GetSourceContract()
        {
            return _contract;
        }
    }
}

IExtensionViewToContractAddInAdapter.cs
namespace SimpleExtensionContracts.AddInSideAdapters
{
    
    [System.AddIn.Pipeline.AddInAdapterAttribute()]
    public class IExtensionViewToContractAddInAdapter : System.AddIn.Pipeline.ContractBase, SimpleExtensionContracts.ExtensionContract
    {
        private SimpleExtensionContracts.AddInViews.IExtension _view;
        public IExtensionViewToContractAddInAdapter(SimpleExtensionContracts.AddInViews.IExtension view)
        {
            _view = view;
        }
        public string MenuText
        {
            get
            {
                return _view.MenuText;
            }
            set
            {
                _view.MenuText = value;
            }
        }
        internal SimpleExtensionContracts.AddInViews.IExtension GetSourceView()
        {
            return _view;
        }
    }
}

Yeah, seriously. One interface and one string accessor requires nine classes/interfaces and over 200 lines of code (which could obviously be trimmed a bit with formatting, etc.). It's also possible to share the views between the add-in and the host, but then you lose part of the framework's more compelling robustness. If you are interested in where these classes come into play and how the add-in framework actually works, check out this link for a good description.
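
Even if you don't follow that link, here's roughly what the host side looks like at runtime once a pipeline like the one above is deployed. This is only a sketch using the System.AddIn.Hosting APIs; the pipeline root path, the security level and the console output are assumptions for illustration, not how our host actually does it.

using System;
using System.AddIn.Hosting;
using SimpleExtensionContracts.HostViews;

class HostSample
{
    static void Main()
    {
        // Folder containing the AddIns, AddInViews, AddInSideAdapters,
        // HostSideAdapters and Contracts output folders. The path is just
        // a placeholder for this sketch.
        string pipelineRoot = @"C:\MyApp\Pipeline";

        // Rebuild the add-in store caches, then find add-ins that can be
        // adapted to the host's view of IExtension.
        AddInStore.Update(pipelineRoot);
        var tokens = AddInStore.FindAddIns(typeof(IExtension), pipelineRoot);

        foreach (AddInToken token in tokens)
        {
            // Activation spins up the whole adapter chain shown above in its
            // own AppDomain; the host only ever sees its own IExtension view.
            IExtension extension = token.Activate<IExtension>(AddInSecurityLevel.Internet);
            Console.WriteLine(extension.MenuText);
        }
    }
}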
Anyway, I can sympathize with the developers wanting to speed up the process a bit, but the answer is not to bypass the pipeline. The answer is code generation! Thankfully, by the time we realized our mistake Microsoft had released a CTP of their pipeline generator, a nifty little Visual Studio add-in that picks up the output of the Contracts project and uses reflection to find all of the contracts and generate the necessary projects and files for the pipeline. It literally saved us tons of hours and made the add-in framework actually usable. Of course the code generation only works until we version one side or the other, but by that point we should have solidified those interfaces considerably, so it will matter a lot less.
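
Once the pipeline exists, generated or hand-rolled, the only piece you still write by hand on the other side is the add-in itself. A hypothetical implementation (the class name, namespace and menu text are made up for illustration) looks something like this; note that how it produces MenuText (config file, resource, hard-coded) is now entirely the add-in's own business, which is exactly what the GetSetting call was leaking into the host:

using System.AddIn;
using SimpleExtensionContracts.AddInViews;

namespace SimpleExtension
{
    // The AddIn attribute marks this class as an add-in entry point so the
    // host's discovery (AddInStore) can find it; the class simply implements
    // the add-in-side view from the pipeline above.
    [AddIn("Simple extension", Version = "1.0.0.0")]
    public class MenuExtension : IExtension
    {
        private string _menuText = "Open Simple Extension";

        public string MenuText
        {
            get { return _menuText; }
            set { _menuText = value; }
        }
    }
}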

Anyway, long story short: the add-in framework is great, but it's really important for the entire team to understand the goal and to be diligent in ensuring that all that extra framework code isn't simply wasted by reintroducing dependencies.

Bill C61 notes

So I have a great personal distrust of, and disgust with, the way copyright law has continually been degraded and abused by large corporations over the past 30 or 40 years. (Thanks, Mickey!) I cringe at the idea of the RIAA suing people to protect their broken business model, and I laugh my ass off when bands like Metallica and Kiss make total asses of themselves while artists that are still relevant embrace new ways of engaging their fans. Anyway, that's the context for this post. I am legitimately interested in seeing how Canada will follow other countries in protecting artists, content producers and consumers.

As much as I abhor the record industry, I do think the law should reflect the reality of the new digital landscape. Content creators need to be protected, and consumers should get their money's worth when buying or consuming copyrighted material.

Anyway, I watched these videos about the new Bill C61, and as painful as it is to listen to Jim Prentice repeat the same meaningless quip over and over in response to the questions, I did find them interesting. I've been really concerned about Canada following in the footsteps of the DMCA, and I imagine that under the covers this is largely similar, particularly where "digital locks" are concerned, but so far I haven't heard anything really terrible.


http://www.digital-copyright.ca/node/4761  (videos)

Some highlights from the videos
  • Need for international cooperation; if it lets me watch more movies online for cheaper than going to the video store, then yeah, let's go. How long has that been possible in the US?
  • "and of course new technologies such as mp3 players and memory sticks" I just included that because it struck me as funny.
  • Time shifting and format shifting are preserved
  • BUT "digital locks" chosen by businesses (i.e. to stop format shifting) will be legally enforceable and will effectively override those time-shifting and format-shifting rights
  • Having personally already rented videos on iTunes, I can see some benefit to these locks and some of the new models they enable (death to Blockbuster)
  • On the other hand, owning a bunch of iTunes tracks I can't use on other devices makes me hate the locks. But ultimately it's my decision what to buy, so I can't complain too much. It's important to let the market decide some of these issues, I think. Although how much of a market is it really when there are only a handful of really big labels and studios producing all the content?
  • No liability for ISPs is great, I think
  • New limit on liability for "personal use" of copyrighted material of $500 PER INFRINGED WORK (they kept playing up the $500 limit as a good thing for consumers, but it sounds like downloading four movies is still a $2,000 hit). We all know how quickly that would completely sink most households. In Prentice's example this goes from five videos at $20,000 each = $100,000 down to $500 total... he didn't seem to really have that part down, and I'm still not sure if it's $500 per work or per incident or what.
  • Not that it matters much, since this is largely unenforceable; the law will let companies invest more confidently in delivery mechanisms that rely on locks and will give a clearer understanding of rights, but none of it will help anyone actually enforce those rights. (Unless the companies do it themselves, à la the RIAA lawsuits.)
  • This whole bill was rammed through right before the summer break with little consultation, which seems ugly
  • When time-shifting you can't store those recordings as a library; again very vague, and it would seemingly limit PVR software quite a bit
  • You can't import devices into Canada that enable bypassing locks (vague, and could encompass a lot of devices)
  • Teenagers seem to be the "they" in all of these videos, which makes the questioners and the lawmakers seem out of touch. I understand teenagers are big offenders, but they are far from the only ones.
  • What are the "whitewood" treaties Mr. Prentice brings up? I'm assuming I misheard him, because I didn't find anything in my initial Googling of it.

The business network video is the best of the bunch; if you just want a summary, scroll down and watch that one.

One of the things I was most surprised to hear was Mr. Sookman saying that time shifting and format shifting are currently NOT legal. Really? I had always been under the impression that this was actually legal in Canada.

Check out this website for more information on Bill C61:
http://www.digital-copyright.ca/