And of course finding water is not the same as finding lakes, but imagine the potential for fuel sources and/or human sustenance. Water is damn heavy, and not something we can easily take with us.
Pretty cool stuff.
Saw this on reddit tonight: Hanselman has updated his legendary tools list for 2009. So what was going to be an evening of actual coding is slowly turning into an evening of trying out cool new tools that have made his list. (I'm writing this blog post in Windows Live Writer after seeing it in the list.)
But what’s an hour or two of my time compared to the time that must go into compiling this list? Totally worth checking out, especially if you’ve never had a look before.
It's nice; my Mac envy always takes a slight dip in moments like these.
Still, I'm excited "we're" (go NASA) going back, and if anything this just really highlights for me how damn big our "little" moon is. It's easy to forget that the satellite taking those photos is still 50 kilometers(!) away from the surface, so it's somewhat understandable that we're not getting the close-ups I'd love to see.
There is a pretty cute video of the LRO launch party here (check the sidebar) that is worth watching. The highlight for me was seeing the communications in action; the laser ranging support for the data coming from LRO looks like a large robot with big green blinking eyes.
Congratulations and good luck, LRO team; if you can now just convince Sarah, then your mission can surely be called a great success.
It's a great domain for learning critical systems, in my opinion: real time, high concurrency, real money, third-party integrations for credits and tournaments, and the vast reporting that goes on for all the data being generated.
My colleague's experience in dealing with real-time load and breaking the barriers of scalability is truly fascinating and a good source of learning for me. I realize there are bigger puzzles out there, but it's not every day you have direct access to that experience where you work. In any case, one such lesson that I am in the process of trying to apply is a more forward-looking approach to load modeling. That is, rather than simply designing and testing for scalability, to actually drill down into the theoretical limit of what you are building in an attempt to predict failure.
This prediction can mean a lot of different things, of course, ranging from something as vague as a statement about being IO bound to much more complicated models of actual transactions and usage that let you extrapolate much richer information about the weak points in the system. In at least one case, my boss took this to the point where the model of load was expressed as differential equations before any code was written at all. Despite my agile leanings I have to say I'm extremely impressed by that. Definitely something I'd throw on my resume. So I'm simultaneously excited and intimidated at the prospect of delving into the relatively new platform we're building in the hopes of producing something similar. I definitely see the value of at least the first few iterations in highlighting weak points and usage patterns. How far I can go from there will be a big question mark.
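To make that concrete for myself, here's a toy sketch in Python of the kind of prediction I mean. It's entirely my own illustration, not the model my boss actually built: it treats a single service as a simple M/M/1 queue, and the 500 transactions/second capacity figure is made up.

```python
# Toy capacity model: treat one service as an M/M/1 queue and watch
# response time diverge as load approaches the theoretical limit.
# All numbers here are placeholders, not measurements of any real system.

def mm1_response_time(arrival_rate, service_rate):
    """Average time in system for an M/M/1 queue: 1 / (mu - lambda).
    It blows up as arrival_rate approaches service_rate, which is the
    theoretical limit we're trying to predict."""
    if arrival_rate >= service_rate:
        return float("inf")  # past saturation the queue grows without bound
    return 1.0 / (service_rate - arrival_rate)

# Made-up capacity: a node that can handle 500 transactions per second.
service_rate = 500.0
for load in (100, 250, 400, 450, 490, 499):
    t = mm1_response_time(load, service_rate)
    print(f"{load:>3} tx/s -> avg response {t * 1000:.1f} ms")
```

Obviously the real platform won't be one well-behaved queue, but even a model this naive tells you roughly where the knee of the curve sits and which numbers are worth measuring properly before you trust any extrapolation.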
For now I'll be starting at http://www.tpc.org/information/benchmarks.asp and then moving on to as exhaustive a list as I can of the riskiest elements of our system. From there I'll need to prioritize and find the dimensions that will impact our ability to scale. I expect with each there will be natural next steps for removing the barrier (caching, distributing load, eliminating work, etc.), and I hope to be able to put a cost next to each of those.
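Purely for illustration, here's the shape I imagine that list taking. Every element, ceiling, and cost below is a placeholder I invented to show the structure, to be replaced once I've actually measured our system.

```python
# Hypothetical risk list: each entry names a system element, the dimension
# I suspect limits it, a rough guessed ceiling, and the likely mitigation
# with a ballpark cost. All values are invented placeholders.
risks = [
    {"element": "wallet writes", "dimension": "disk IO",
     "ceiling_tps": 800, "mitigation": "eliminate work (batch settlements)", "cost_days": 10},
    {"element": "tournament state", "dimension": "lock contention",
     "ceiling_tps": 1500, "mitigation": "distribute load (shard by tournament)", "cost_days": 20},
    {"element": "reporting queries", "dimension": "CPU",
     "ceiling_tps": 300, "mitigation": "caching / read replicas", "cost_days": 5},
]

# Prioritize by the lowest ceiling: the element that saturates first is the
# one worth modeling in detail.
for r in sorted(risks, key=lambda r: r["ceiling_tps"]):
    print(f"{r['element']}: ~{r['ceiling_tps']} tps ({r['dimension']}) "
          f"-> {r['mitigation']}, ~{r['cost_days']} days")
```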
Definitely a good list though and something worth reminding ourselves of every once in a while when thinking about the systems we build.