Are Exceptions Always Errors?
It would be easy enough to assume so, but surprisingly, that’s not always the case. So the following quote from this post:
"If there's an exception, it should be assumed that something is terribly wrong; otherwise, it wouldn't be called an exception."isn't true in all cases. In chapter 18 of Applied Microsoft .NET Framework Programming (page 402), Jeffrey Richter writes the following:
"Another common misconception is that an 'exception' identifies an 'error'."He goes on to use a number of different examples where an thrown exception is not because of an error. Before reading Richter, I certainly believed that exceptions were errors–and implemented application logging on the basis of that belief. The exception that showed me this didn’t always apply was ThreadAbortException. This exception gets thrown if you call Response.Redirect(url). The redirect happens just fine, but an exception is still thrown. The reason? When that overload of Response.Redirect is called, execution of the page where it’s called is stopped immediately by default. This violates the assumption that a page will execute fully, but is not an error. Calling Response.Redirect(url,false) prevents ThreadAbortException from being thrown, but it also means you have to write your logic slightly differently.“An exception is the violation of a programmatic interface’s implicit assumptions."
The other place I’d differ with the original author (Billy McCafferty) is in his description of “swallow and forget”, which is:
} catch (Exception ex) { AuditLogger.LogError(ex); }
The fact that it’s logged means there’s somewhere to look to find out what exception was thrown. I would define “swallow and forget” this way:
} catch (Exception ex) {
    // nothing logged, nothing rethrown; the exception simply vanishes
}
Of course, if you actually catch the generic Exception type, FxCop flags it as a rule violation (“do not catch general exception types”). I’m sure McCafferty was only using it as an example.
SourceForge to the Rescue
I’d been hunting around for a while trying to find a tool to automatically convert some .resx files into Excel, so the translation company we’re using for one of our applications would have something convenient to work with. It wasn’t until today that I found RESX2WORD. It’s actually two utilities: one executable to convert .resx files into Word documents, and another to do the reverse.
The Word document produced by the resx2word executable includes a paragraph of instructions to the translator and automatically duplicates the lines that need translating.
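This isn’t RESX2WORD’s actual code, but here’s a small sketch of the kind of work such a tool automates: enumerating the name/value pairs in a .resx file that a translator needs to see (the file name is made up):

```csharp
using System;
using System.Collections;
using System.Resources; // ResXResourceReader ships in System.Windows.Forms.dll

class ResxDump
{
    static void Main()
    {
        // Walk every resource entry in the file and print its name and value.
        using (ResXResourceReader reader = new ResXResourceReader("Strings.resx"))
        {
            foreach (DictionaryEntry entry in reader)
            {
                Console.WriteLine("{0}\t{1}", entry.Key, entry.Value);
            }
        }
    }
}
```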
Google Webmaster Tools
I just started playing with Google Webmaster Tools yesterday. I was very interested to find out where this blog has been showing up in search results. According to the “top search queries” stats, the queries for which my site appeared most often were “ndbunit” and “failed mergers”. Considering that I only wrote one post about NDbUnit, and one about the Daimler-Chrysler split, I found that surprising.
Webmaster Tools includes a lot more statistics that look as if they’d be very informative. I’ll explore them later, along with the sitemap functionality.
Aftermath: The Failure of Virtual Case File
The FBI awarded Lockheed-Martin (my former employer) the lead role in implementing Sentinel, a follow-up effort to the failed VCF project, in March 2006. Nearly two years later, it will be interesting to see what lessons (if any) the FBI learned.
Lessons Learned: The Failure of Virtual Case File
I came across this article about the failure of the Virtual Case File project about a week ago. I read things like this in the hope of learning from the mistakes of others (instead of having to make them myself). What follows are some of the conclusions I drew from reading the article, and how they might apply to other projects.
Have the Right People in the Right Roles
The author of the article (Harry Goldstein) calls the appointment of Larry Depew to manage the VCF project “an auspicious start”. Since Depew had no IT project management experience, putting him in charge of a project this large, with such high stakes, struck me as a mistake. The error was compounded by having him play the role of customer advocate as well. To play the role of project manager effectively, you can’t be on a particular side. Building consensus that serves the needs of all stakeholders as well as possible simply couldn’t happen with one person playing both roles.
Balance Ambition and Resources
The FBI wanted the VCF to be a one-stop shop for all things investigative, but they lacked both the infrastructure and the people needed to make that a realistic goal. A better approach would have prioritized the most challenging of the individual existing systems to replace (or the one with the greatest potential to boost the productivity of FBI agents), and focused the effort there. The terrorist attacks of 9/11/2001 exposed how far behind the FBI was technologically, added a ton of political pressure to hit a home run with the new system, and probably created unrealistically high expectations as well.
Enterprise Architecture is Vital
This part of Goldstein’s article provided an excellent definition of enterprise architecture, which I’ve included in full below:
This blueprint describes at a high level an organization's mission and operations, how it organizes and uses technology to accomplish its tasks, and how the IT system is structured and designed to achieve those objectives. Besides describing how an organization operates currently, the enterprise architecture also states how it wants to operate in the future, and includes a road map--a transition plan--for getting there.

Unfortunately, the FBI didn't have an enterprise architecture. This meant there was nothing guiding the decisions on what hardware and software to buy.
Delivering Earlier Means Dropping Features
When you combine ambition beyond available resources with shorter deadlines, disaster is a virtual certainty. When SAIC agreed to deliver six months earlier than initially agreed, that should have been contingent on dropping certain features. Instead, they tried to deliver everything by having eight teams work in parallel. This meant integration of the individual components would have to be nearly flawless, a dubious proposition at best.
Projects Fail in the Requirements Phase
When a project fails, execution is usually blamed. The truth is that failed projects fail much earlier than that: in requirements. Requirements failures can take many forms, including:
- No written requirements
- Constantly changing requirements
- Requirements that specify "how" instead of "what"
In addition, it appears that there wasn’t a requirements traceability matrix among the planning documents. The VCF as delivered in December 2003 (and rejected by the FBI) did things that there weren’t requirements for. Building what wasn’t specified certainly wasted money and man-hours that could have been better spent. I also inferred from the article that comprehensive test scenarios weren’t created until after the completed system had been delivered. That could have (and should have) happened much earlier than it did.
Buy or Borrow Before You Build
Particularly in the face of deadline pressure, it is vital that development teams buy existing components (or use open source) and integrate them wherever practical instead of building everything from scratch. While we may believe that the problem we’re trying to solve is so unique that no software exists to address it, the truth is that viable software solutions exist for subsets of many of the problems we face. SAIC building an “e-mail-like system” when the FBI was already using GroupWise for e-mail was a failure in two respects: they missed an opportunity to leverage existing functionality, and from an opportunity cost perspective, the time that team spent re-inventing the wheel couldn’t be spent on other functionality that actually needed to be custom built.
Prototype for Usability Before You Build
Teams that build successful web applications come up with usability prototypes before code gets written. At previous employers (marchFIRST and Lockheed-Martin in particular), once “comps” of key pages in the site were done, usability testing would take place to make sure the system would be intuitive for users. Particularly in e-commerce, if users can’t understand your site, they’ll go somewhere else to buy what they want. I attribute much of Amazon’s success to just how easy they make it to buy things.
In the case of the VCF, the system was 25% complete before the FBI decided they wanted “bread crumbs”. A usability prototype would have caught this much earlier. What really surprises me is that the functionality was left out of the design in the first place; I can’t think of any website, whether one I’ve built or one I’ve used, that didn’t have bread crumbs. It seemed like a gigantic oversight to me.
.NET Developers Search
The latest podcast of Hanselminutes mentioned a custom search engine focused on .NET topics. It uses Google Custom Search, and at least for a search term like MSMQ, the searchdotnet.com results look noticeably different from regular Google results. The creator of the site, Dan Appleman, has authored a number of books on Microsoft technologies (primarily .NET and VB). He seems to be going for a “quality over quantity” approach with the sites he includes as sources, which makes sense for this sort of niche search engine.
Lightroom: Day 1
If you love iPhoto, I warn you: stop reading now. Once you read even a little about what Adobe Lightroom can do, you’ll want to try it. Once you’ve tried Lightroom, you simply won’t be content going back to iPhoto. I’m only one day into the 30-day trial of Lightroom, and I’m already done with iPhoto. I haven’t even tried Apple’s Aperture yet. If you’re still reading, it’s already too late; I can’t be held responsible for the money you will almost certainly spend.
Metadata Browser
After importing around 200 photos into Lightroom, this was the first feature I played with. It lets you filter which pictures you see by any one of a number of variables, including lens (if you use more than one), aperture, shutter speed, and ISO speed rating. Two clicks let me see every photo I shot at a shutter speed of 1/500th of a second. Two more clicks, and I could see everything I shot with my 50mm f/1.4, or everything I took with my zoom lens.
Quick Develop
This feature enables you to apply changes to crop ratio, white balance, and tone (including exposure) across multiple photos. So when I needed to change the white balance for a group of my shots, it was as simple as selecting the group and changing the white balance from “As Shot” to “Flash”. The same is true of underexposed shots. My friend Sandro pointed out four photos that were underexposed. I simply selected the four he pointed out and pressed the +1/3-stop button until they were bright enough for my taste. In retrospect, a single click of the +1-stop button would have been even faster.
Develop
This is the step in Lightroom’s workflow where you make more detailed changes to individual photos. Each change you make to a photo shows up in a “History” widget to the left, so you can roll back individual changes with ease. I only cropped photos here, but I could have changed any number of things about them.
Web
While this feature isn’t so much about the photos themselves as it is about how you can share them, this part of the workflow is where Lightroom really shines. Generating this page took a few clicks and a couple of slider moves. On top of that, I didn’t even have to use another application to upload it to the web; I did it directly from Lightroom. There are quite a few different page templates to choose from.
The features I’ve described so far barely scratch the surface of what Lightroom can do. What impresses me most is not just the number of things it does that iPhoto can’t (or does badly, cough iWeb cough), but how much less time it takes to handle hundreds of photos by comparison.
URL aliasing
After dealing with a few of the gigantic URLs to SharePoint documents in e-mail, a custom version of TinyURL seems like a good idea. It looks like this guy thought so too. A bit later, Mike Marusin did as well. I do wonder whether the latest version of SharePoint provides that functionality out of the box, though.
Outlook Lookout
While I wait for Google to fix their desktop search bug, I’m using Lookout (version 1.2.0.1924) to search Outlook. I was under the distinct impression that Lookout got killed after Microsoft bought the vendor (Lookout Software), especially when I read this post on Channel 9.
It looks like the helpdesk people at work squirreled away an old installation of it, because Googling for it brought back nothing but Windows Desktop Search links.
Google Desktop Bug
It looks like Google Desktop 5.5 has a bug that prevents users from opening forwarded e-mail attachments. There are more details in this Google Groups thread.
A brief note on version control, labeling, and deployments
One thing I didn’t realize about CruiseControl.NET until recently was that it automatically labels your builds. It uses a configurable labeller; by default, the label is simply a build number that increments with each build, though other labelling schemes can be plugged in.
We still need to get our continuous integration setup working again, but in the interim, manual labeling of releases is still helpful.
Exposing InnerException
This week, an application I work on started logging an exception that provided no help at all in debugging the problem. My usual practice of running the app in debug mode with production values in the config file failed to reproduce the error too. After a couple of days of checking a bunch of different areas of code (and still not solving the problem), Bob (a consultant from Intervention Technologies) gave me some code to get at all the InnerException values for a given Exception. Download the function from here.
An Exception’s StackTrace can be pretty large. Since we log to a database, I was worried about overrunning the column width. I also wasn’t keen on the idea of looking at so large a text block if an Exception nested four or five additional ones. So instead of implementing the code above as-is, I changed it to log at each level. Doing it this way adds a log entry for each InnerException. Because the log viewer I implemented displays the entries in reverse-chronological order, the root cause of a really gnarly exception displays at the top. The changes I made to global.asax looked like this.
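For illustration, here’s a minimal sketch of the per-level approach in global.asax (AuditLogger is a hypothetical stand-in for our actual wrapper):

```csharp
// Sketch: walk the InnerException chain and write one log entry per level.
// AuditLogger is a made-up name standing in for our Log4Net wrapper.
protected void Application_Error(object sender, EventArgs e)
{
    Exception ex = Server.GetLastError();
    while (ex != null)
    {
        AuditLogger.LogError(ex); // one entry per nesting level
        ex = ex.InnerException;   // keep walking down to the root cause
    }
}
```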
This work revealed that the app had been complaining about not being able to reach the SMTP server to send e-mail (which it needs to send users their passwords when they register or recover lost passwords).
Once we’d established that the change was working properly, it was time to refactor the code to make the functionality more broadly available. To accomplish this, I updated our Log4Net wrapper like this.
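The shape of that change was roughly the following (again a sketch; the class and method names are assumptions, not our actual API):

```csharp
using System;
using log4net;

// Hypothetical wrapper shape: any caller can now log an exception and all
// of its InnerExceptions with a single call, not just global.asax.
public static class AuditLogger
{
    private static readonly ILog Log = LogManager.GetLogger(typeof(AuditLogger));

    public static void LogError(Exception ex)
    {
        while (ex != null)
        {
            Log.Error(ex.Message, ex); // one entry per nesting level
            ex = ex.InnerException;
        }
    }
}
```

With that in place, catch blocks elsewhere in the application get the same per-level logging for free.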
App_Code: Best In (Very) Small Doses
When I first started developing solutions on version 2.0 of the .NET Framework, I saw examples that had some logic in the App_Code folder. For things like base pages, I thought App_Code was perfect, and that’s how I use it today. When I started to see applications put their entire middle tiers in App_Code, however, I thought that was a bad idea. Beyond not being able to unit test your components (and the consequences associated with a lack of unit testing, coverage, etc.), it just seemed … wrong.
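For contrast, the kind of thing I do keep in App_Code is small, site-wide plumbing like this hypothetical base page (the class name and title prefix are made up):

```csharp
using System;
using System.Web.UI;

// Hypothetical base page kept in App_Code: thin, site-wide plumbing only,
// not business logic that belongs in a separate, testable class library.
public class BasePage : Page
{
    protected override void OnLoad(EventArgs e)
    {
        base.OnLoad(e);
        Title = "My Site - " + Title; // example of behavior shared by all pages
    }
}
```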
Fortunately, it turns out there are additional reasons beyond testability to minimize the use of App_Code.
First earthquake
I was at the San Jose Airport waiting on a flight to Los Angeles when this earthquake hit. It didn’t register with me that an earthquake was happening until a few seconds went by and I saw the walls and ceiling moving more and more. Everyone in the terminal (including me) dashed toward the nearest doorway or arch they could find. It was quite a scary experience, and one I hope never to repeat.
Chocolate Sunday at Cacao Anasa
I spent a few hours this afternoon making chocolate at the Cacao Anasa kitchen. My friend Peter invited me to the third of these events since I’m in town for a conference. If it’s possible to have more fun in a kitchen, I’m not sure how. We made (and ate) truffles, cookies, chocolate bars, chocolate soup, and a bunch of chocolate-flavored drinks (spicy hot chocolate, and an assortment of stronger beverages).
Now that I’ve had freshly-made chocolate from my own hands, I’m sure I’ll be a lot more picky about what I buy. The CEO/owner of Cacao Anasa, Anthony Ferguson, gave us a great education on the making of chocolate (and some of the health benefits). Before today, I would never have known that chocolate is actually tempered, not unlike steel. His biography is even more impressive than the great chocolate.
Defending debuggers (sort of)
I came across this post about debuggers today. I found it a lot more nuanced than the Giles Bowkett post on the same topic. The part I found most useful was where the author used the issue of problems in production to advocate for two practices I’m a big fan of: test-driven development and effective logging.
I’m responsible for an app that wasn’t developed using TDD and had no logging at all when I first inherited it. When there were problems in production, we endured all manner of suffering to determine the causes and fix them. Once we added some unit tests and implemented logging in key locations (global.asax and catch blocks primarily), the number of issues we had dropped off significantly. And when there were issues, the log helped us diagnose and resolve problems far more quickly.
The other benefit of effective logging is better customer service. Once I made the log contents available to a business analyst, she could see in real time what was happening with the application and provide help to users more quickly too.
Whether you add it on after the fact or design it in from the beginning, logging is a must-have for today’s applications.
What tests are really for
Buried deep in this Giles Bowkett post is the following gem:
"Tests are absolutely not for checking to see if things went wrong. They are for articulating what code should do, and proving that code does it."While it comes in the midst of an anti-debugger post (and an extended explanation of why comments on the post are closed), it is an excellent and concise statement of the purpose of unit tests. It explains perhaps better than anything else I've read the main reason unit tests (or specifications, as the author would call them) should be written before the code.
Quick fix for "Failed to enable constraints" error
If you use strongly-typed datasets in .NET, you’ve encountered the dreaded “Failed to enable constraints …” message. I most recently encountered it this morning, while unit testing some new code. There were far fewer search results for the phrase than I expected, so I’ll add my experience to the lot.
The XSD I’m working with has one table and one stored procedure with four parameters. A call to this stored procedure (via a method in a business logic class) returns a one-column, one-row result set. My code threw a ConstraintException each time the result set value was zero (0). To eliminate the problem, I changed the value of the AllowDBNull attribute of each column in the XSD table from False to True (where it wasn’t True already). When I ran the unit tests again, they passed.
I’ll have to research this further at some point, but I suspect part of the reason ConstraintException was thrown in my case was a mismatch between the stored procedure’s result set columns and the table definition of the associated table adapter.
In any case, setting AllowDBNull to True is one way to eliminate that pesky error.
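If that doesn’t do it, a diagnostic approach worth knowing (sketched here with made-up names; ds and adapter stand in for your typed DataSet and TableAdapter) is to relax constraint enforcement and ask the DataSet which rows actually failed:

```csharp
// Sketch: find out which constraint failed instead of guessing.
ds.EnforceConstraints = false;       // Fill now succeeds instead of throwing
adapter.Fill(ds.MyTable);

foreach (System.Data.DataRow row in ds.MyTable.GetErrors())
{
    Console.WriteLine(row.RowError); // names the offending column/constraint
}

ds.EnforceConstraints = true;        // re-throws if problems remain
```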
Multiple meanings of test-driven development
This Roy Osherove post surprised me because I hadn’t been aware of so many different interpretations of the idea before. Because I’ve always believed in at least some design up front, my understanding of TDD was the first of four mentioned:
"Test Driven Development: the idea of writing your code in a test first manner. You may already have an existing design in place."The fourth interpretation of TDD applies pretty well to what I do now whenever I inherit an application. Writing unit tests of the middle tier(s) of the application (if none exist) has proven to be very helpful whenever I've done this. Adding new features becomes easier, and regression testing becomes much easier.
Reactions to inherited code
This entertaining Phil Haack post on inherited code definitely tells the truth when it says:
"Here’s the dirty little secret about being a software developer. No matter how good the code you write is, it’s crap to another developer."Until I read this post, I hadn't even thought about the thousands of lines of code I've left behind at various employers and how my successors regarded them. I remember getting very positive feedback from a colleague who wrote the .NET replacement for my VB6/COM+/SQL Server 2000 implementation of our online recruitment tool, but that's it. My usual reaction to code I've inherited has rarely been empathetic. Phil's right in trying to be more understanding. And to anyone who has inherited my code in the past, I hope I didn't make your job too hard.