Red Gate Buys Reflector

I just came across the news this morning.  I used Reflector a lot when I was first learning .NET.  Lately, I’ve been using it with the Graph and DependencyStructureMatrix plug-ins to figure out where applications are too tightly coupled.  I’m glad it’s staying free to users.


An Alternative to NUnitForms

I first heard about Project White from someone at the Agile 2008 conference last week.  I haven’t had a chance to play with it yet, but I’m very curious to see how it compares to NUnitForms.  Since it comes from Thoughtworks, I think it’s going to be good.  If it makes testing of modal forms and dialogs easier, I’m already sold.

If anyone out there has already been using Project White successfully, it would be great to hear from you.


Writing the contents of a string to a text file

I should have assumed this existed, based on Ed Poore’s comment on this post, but here’s the actual method:

System.IO.File.WriteAllText
There are multiple overloads for it, but the most basic one is: File.WriteAllText(filePath, contentString).
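
Here’s a minimal usage sketch (the path and the string below are just placeholders):

using System.IO;

class WriteAllTextExample
{
    static void Main()
    {
        string contentString = "Whatever text you need to persist";
        string filePath = @"C:\temp\output.txt";   // placeholder path

        // Creates the file if it doesn't exist, overwrites it if it does,
        // writes the string, and closes the file for you.
        File.WriteAllText(filePath, contentString);
    }
}

There’s also an overload that takes a System.Text.Encoding if the default UTF-8 output isn’t what you need.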

NDbUnit Revisited

I first wrote about NDbUnit back in 2006.  Unfortunately, it doesn’t appear much new has happened with the project since then.  The current version (1.2) is still a great help for unit testing when you need to put a database into a known state.  If an application you’re testing uses strongly-typed datasets as its data access layer (DAL), integrating NDbUnit into your existing unit tests is even easier, because NDbUnit works from the same XSD files.

I’ve revised my previous sample project using Visual Studio 2008.  You can download the source as a zip file.  Make sure you have NUnit (I used version 2.4.7) and NDbUnit installed before you attempt to run the sample tests.
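
For anyone who hasn’t seen NDbUnit before, the test setup looks roughly like the sketch below.  I’m writing this from memory of the 1.2 API, and the connection string and file names are placeholders, so check everything against the actual download.

using NDbUnit.Core;
using NDbUnit.Core.SqlClient;
using NUnit.Framework;

[TestFixture]
public class CustomerTests
{
    private INDbUnitTest _database;

    [SetUp]
    public void PutDatabaseIntoKnownState()
    {
        // Placeholder connection string and file paths.
        _database = new SqlDbUnitTest(
            "Data Source=(local);Initial Catalog=MyAppTest;Integrated Security=True");
        _database.ReadXmlSchema(@"..\..\MyDataSet.xsd");   // the same XSD the typed-dataset DAL uses
        _database.ReadXml(@"..\..\SeedData.xml");          // the rows each test expects to find

        // Clear the affected tables and insert the seed data before every test.
        _database.PerformDbOperation(DbOperationFlag.CleanInsertIdentity);
    }
}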


A couple of "old-school" CS principles

Robert Martin is a guy our CEO and architects really respect when it comes to software design and development.  Somehow, I’d managed to never hear of him before this year, so I’ve started reading his stuff.  Here are a couple of his older columns that may prove quite useful if you find yourself building APIs in your work:

The first one explains a lot of the issues I've seen with applications in previous jobs.  In any number of applications, a simple change would have a ripple effect that touched a lot more than just one line of code.  Following the open-closed principle more strictly would have saved me many headaches.
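
For reference, here’s the textbook shape example of the open-closed principle in C# terms (my own sketch, not code taken from Martin’s columns):

using System;
using System.Collections.Generic;

// Open for extension: add new shapes by implementing IShape.
// Closed for modification: AreaCalculator never changes when you do.
public interface IShape
{
    double Area();
}

public class Circle : IShape
{
    public double Radius { get; set; }
    public double Area() { return Math.PI * Radius * Radius; }
}

public class Square : IShape
{
    public double Side { get; set; }
    public double Area() { return Side * Side; }
}

public static class AreaCalculator
{
    // No switch on shape types here, so adding a Triangle touches nothing in this class.
    public static double TotalArea(IEnumerable<IShape> shapes)
    {
        double total = 0;
        foreach (IShape shape in shapes)
        {
            total += shape.Area();
        }
        return total;
    }
}

Compare that with a version of TotalArea built around a switch on a shape-type enum: every new shape means reopening (and risking) code that already worked.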

Strongly-typed DataSets in Subversion

Strongly-typed datasets are the default option for creating a data access layer (DAL) with the various .NET versions of Visual Studio.  From the XSD file that defines a strongly-typed dataset, Visual Studio generates [XSD].Designer.cs and [XSD].xss files.  They’re regenerated every time you change the XSD, even if you only change the layout.  This becomes a problem when you’re working in a team and need to merge changes.  If your Subversion repository is configured to version the generated files, they’ll be marked as conflicting when you update.

These are the steps I’ve taken to merge changes in the situation above:

  1. Delete [XSD].Designer.cs and [XSD].xss.
  2. Resolve conflicts in the XSD file (and mark them as resolved).  Saving the merged XSD in Visual Studio will then generate new versions of [XSD].Designer.cs and [XSD].xss.
  3. When resolving conflicts in the files generated in step 2, use the whole file that was just generated.
This will be much easier than trying to resolve conflicts in generated files.
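
If you work from the command line rather than TortoiseSVN, the same sequence looks roughly like this (the file names are hypothetical):

rem Step 1: throw away the generated files (they'll be regenerated from the XSD)
del MyDataSet.Designer.cs MyDataSet.xss

rem Step 2: fix the conflict markers in MyDataSet.xsd by hand, then mark it resolved
svn resolved MyDataSet.xsd
rem Saving the XSD in Visual Studio regenerates MyDataSet.Designer.cs and MyDataSet.xss

rem Step 3: keep the freshly generated files as the resolved versions
svn resolved MyDataSet.Designer.cs MyDataSet.xss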

Changing Primary Keys from "int" to "uniqueidentifier"

I’m in the process of doing this for a project that uses Microsoft SQL Server.  One of the “gotchas” I came across was that once you’ve switched from “int” to “uniqueidentifier”, @@IDENTITY and SCOPE_IDENTITY() references won’t work, since those only return values for identity columns.  The second response in this thread pointed me in the right direction.  You have to call NEWID() in the context where you need it (and save the value) in order to be able to refer to it later.
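
As a sketch (the table, column, and procedure names are made up), the stored-procedure side ends up looking something like this:

-- Hypothetical table and procedure, just to show the pattern.
CREATE PROCEDURE dbo.InsertOrder
    @CustomerName nvarchar(100)
AS
BEGIN
    -- Generate the key yourself and hang on to it; @@IDENTITY and
    -- SCOPE_IDENTITY() only apply to identity columns, which a
    -- uniqueidentifier primary key no longer is.
    DECLARE @OrderId uniqueidentifier;
    SET @OrderId = NEWID();

    INSERT INTO dbo.Orders (OrderId, CustomerName)
    VALUES (@OrderId, @CustomerName);

    -- The saved value is still available for inserting child rows
    -- or returning to the calling code.
    SELECT @OrderId AS NewOrderId;
END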


Refactoring

Last night, I went to a presentation on refactoring by Jonathan Cogley.  My notes are below:

refactor - improve the design of existing code incrementally

Code must:

  • do the job
  • be maintainable/extensible
  • communicate its intent
Code that doesn't accomplish all of the above is broken.

Refactorings

  • rename
  • extract method (see the sketch after this list)
  • inline method
  • introduce explaining variable
  • move method
  • inline temp
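To make a couple of those concrete, here’s a tiny made-up example that applies “introduce explaining variable” and “extract method” together:

public class Order
{
    public int Quantity { get; set; }
    public decimal UnitPrice { get; set; }
}

public class PricingService
{
    // Before:
    //   return order.Quantity * order.UnitPrice > 1000m
    //       ? order.Quantity * order.UnitPrice * 0.95m
    //       : order.Quantity * order.UnitPrice;

    // After "introduce explaining variable" and "extract method":
    public decimal FinalPrice(Order order)
    {
        decimal basePrice = order.Quantity * order.UnitPrice;
        bool qualifiesForVolumeDiscount = basePrice > 1000m;   // explaining variable
        return qualifiesForVolumeDiscount ? ApplyVolumeDiscount(basePrice) : basePrice;
    }

    // Extracted method: the discount rule now has a name of its own.
    private decimal ApplyVolumeDiscount(decimal basePrice)
    {
        return basePrice * 0.95m;
    }
}
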
technical debt - anything that needs to be done to the code that gets put off until a later date

I found a much better definition for technical debt.  It makes a nice argument in favor of refactoring (though not as good as it would be with some way to quantify and measure it).

code smell - indication that something could be wrong with the code

Code Smells

  • duplicated code
  • long methods
  • large classes
  • too many private/protected methods
  • empty catch clauses (FxCop flags these by default)
  • too many static methods
  • variables with large scope
  • poorly-named variables
  • comments
  • switch statements
  • unnecessary complexity
Even though comments in code tend to get out of date, I'm not sure I'd call them a code smell.  Wikipedia has another definition of code smell, along with a link to a taxonomy of code smells.

When to refactor:

  • before a change
  • after all current tests are green
Sometimes, refactoring is necessary to understand code.

reduce scope - bring variable closer to where it’s used

Be sure your unit tests don’t re-implement what the tested code is doing.

Eliminate double assignments (a = b = 0) for clarity.

Each method should have only one operation/concept.

If you must use code comments, they should explain the “why”.  The code should be clear enough to explain the “what”.

Favor virtual instance methods where possible in your code.

Avoid using the debugger.  Write unit tests instead.

Performance improvements tend to make code harder to understand.  Don’t use refactoring to address application performance.

Recommended reading:

Refactoring to Patterns


Null Coalescing Operator

I didn’t know about this C# 2.0 operator (??) until ReSharper suggested it as a replacement for a particular use of the conditional operator (?:) that I’d added to some code recently.  I already prefer C# to VB.NET because of its terse syntax and stricter compiler, so this discovery tipped the scales just that much more.

The most recent blog post Google coughed up for this operator is this one, from Aaron Zupancic.  Aaron links to another post that demonstrates its use for ViewState.
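
For anyone who hasn’t run into it, here’s the kind of substitution ReSharper suggested (the variable names are made up):

class NullCoalescingDemo
{
    static void Main()
    {
        string preferredName = null;
        string userName = null;

        // Conditional operator (?:) version:
        string displayName = (userName != null) ? userName : "Anonymous";

        // Null coalescing operator (??) version of the same thing:
        string displayNameTerse = userName ?? "Anonymous";

        // It also chains, which is where it gets really handy:
        string effectiveName = preferredName ?? userName ?? "Anonymous";

        System.Console.WriteLine("{0} | {1} | {2}", displayName, displayNameTerse, effectiveName);
    }
}

Functionally the first two are identical; ?? just reads better once you start chaining fallbacks.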


TeamCity 3.0

Now there’s a freeware version of TeamCity that supports up to 20 users and build configurations.  We were looking at setting up CruiseControl.NET again for continuous integration at work, but this will be much easier.


The trouble with using strongly-typed datasets

Apparently, if your database-driven website is under heavy concurrent user load, the Adapter.Fill method in the .NET Framework (called by the code Visual Studio generates from the XSD) begins to fail because it doesn’t close connections properly.

The next time I need a data access layer for anything of substance, strongly-typed datasets are off the list.


Are Exceptions Always Errors?

It would be easy enough to assume so, but surprisingly, that’s not always the case. That means the following quote from this post:

"If there's an exception, it should be assumed that something is terribly wrong; otherwise, it wouldn't be called an exception."
isn't true in all cases. In chapter 18 of Applied Microsoft .NET Framework Programming (page 402), Jeffrey Richter writes the following:
"Another common misconception is that an 'exception' identifies an 'error'."

"An exception is the violation of a programmatic interface’s implicit assumptions."

He goes on to give a number of examples where a thrown exception is not the result of an error. Before reading Richter, I certainly believed that exceptions were errors, and implemented application logging on the basis of that belief. The exception that showed me this didn’t always apply was ThreadAbortException. This exception gets thrown if you call Response.Redirect(url). The redirect happens just fine, but an exception is still thrown. The reason? When that overload of Response.Redirect is called, execution of the page where it’s called is stopped immediately by default. This violates the assumption that a page will execute fully, but it is not an error. Calling Response.Redirect(url, false) prevents ThreadAbortException from being thrown, but it also means you have to write your logic slightly differently.
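
In code, the difference between the two overloads looks like this (the page and URL are placeholders, just to illustrate the behavior described above):

using System.Web.UI;

public partial class RegistrationPage : Page   // hypothetical page
{
    protected void RedirectDefault()
    {
        // Ends page execution immediately by throwing ThreadAbortException.
        // The redirect works, but "log every exception" code will record it.
        Response.Redirect("~/Default.aspx");
    }

    protected void RedirectWithoutAbort()
    {
        // endResponse: false suppresses the ThreadAbortException, but the rest
        // of the page lifecycle still runs, so exit the method yourself and make
        // sure later code doesn't assume the redirect already took effect.
        Response.Redirect("~/Default.aspx", false);
        return;
    }
}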

The other place I’d differ with the original author (Billy McCafferty) is in his description of “swallow and forget”, which is:

} catch (Exception ex) { AuditLogger.LogError(ex); }

The fact that it’s logged means there’s somewhere to look to find out what exception was thrown.  I would define “swallow and forget” this way:

} catch (Exception ex) {
    // nothing happens here: the exception is swallowed and leaves no trace
}

Of course, if you actually catch the generic Exception type, FxCop would flag that as a rule violation.  I’m sure McCafferty was only using it as an example.


SourceForge to the Rescue

I’d been hunting around for a while trying to find a tool to automatically convert some .resx files into Excel so the translation company we’re using for one of our applications would have something convenient to work with.  It wasn’t until today that I found RESX2WORD.  It’s actually two utilities: one executable to convert .resx files into Word documents, and another to do the reverse.

The Word document produced by the resx2word executable includes a paragraph of instructions for the translator and automatically duplicates the lines that need translating.


Lessons Learned: The Failure of Virtual Case File

I came across this article about the failure of the Virtual Case File project about a week ago. I read things like this in the hope of learning from the mistakes of others (instead of having to make them myself). What follows are some of the conclusions I drew from reading the article, and how they might apply to other projects.

Have the Right People in the Right Roles

The author of the article (Harry Goldstein) calls the appointment of Larry Depew to manage the VCF project “an auspicious start”. Since Depew had no IT project management experience, putting him in charge of a project so large with such high stakes struck me as a mistake. This error was compounded by having him play the role of customer advocate as well. To play the role of project manager effectively, you can’t be on a particular side. Building consensus that serves the needs of all stakeholders as well as possible simply couldn’t happen with one person playing both roles.

Balance Ambition and Resources

The FBI wanted the VCF to be a one-stop shop for all things investigative. But they lacked both the necessary infrastructure and the people to make this a realistic goal. A better approach would have prioritized the most challenging of the individual existing systems to replace (or the one with the greatest potential to boost productivity of FBI agents), and focused the efforts there. The terrorist attacks of 9/11/2001 exposed how far behind the FBI was from a technology perspective, added a ton of political pressure to hit a home run with the new system, and probably created unrealistically high expectations as well.

Enterprise Architecture is Vital

This part of Goldstein’s article provided an excellent definition of enterprise architecture, which I’ve included in full below:

This blueprint describes at a high level an organization's mission and operations, how it organizes and uses technology to accomplish its tasks, and how the IT system is structured and designed to achieve those objectives. Besides describing how an organization operates currently, the enterprise architecture also states how it wants to operate in the future, and includes a road map--a transition plan--for getting there.
Unfortunately, the FBI didn't have an enterprise architecture. This meant there was nothing guiding the decisions on what hardware and software to buy.

Delivering Earlier Means Dropping Features

When you combine ambition beyond available resources with shorter deadlines, disaster is a virtual certainty. When SAIC agreed to deliver six months earlier than initially agreed, that agreement should have been contingent on dropping certain features. Instead, they tried to deliver everything by having eight teams work in parallel. This meant integration of the individual components would have to be nearly flawless, which was a dubious proposition at best.

Projects Fail in the Requirements Phase

When a project fails, execution is usually blamed. The truth is that failed projects fail much earlier than that–in requirements. Requirements failures can take many forms, including:

  • No written requirements
  • Constantly changing requirements
  • Requirements that specify "how" instead of "what"
The last two items describe the VCF's requirements failure. The 800+ page document described web pages, form button captions, and logos instead of what the system needed to do.

In addition, it appears that there wasn’t a requirements traceability matrix as part of the planning documents.  The VCF as delivered in December 2003 (and rejected by the FBI) did things there weren’t requirements for.  Building what wasn’t specified certainly wasted money and man-hours that could have been better spent.  I also inferred from the article that comprehensive test scenarios weren’t created until after the completed system had been delivered.  That could have (and should have) happened earlier than it did.

Buy or Borrow Before You Build

Particularly in the face of deadline pressure, it is vital that development teams buy existing components (or use open source) and integrate them wherever practical instead of building everything from scratch.  While we may believe that the problem we’re trying to solve is so unique that no software exists to address it, the truth is that viable software solutions exist for subsets of many of the problems we face.  SAIC building “an e-mail-like system” when the FBI was already using GroupWise for e-mail was a failure in two respects.  From an opportunity-cost perspective, the time this team spent re-inventing the wheel couldn’t be spent working on other functionality that actually needed to be custom built.  And they missed an opportunity to leverage existing functionality.

Prototype for Usability Before You Build

Teams that build successful web applications come up with usability prototypes before code gets written.  At previous employers (marchFIRST and Lockheed-Martin in particular), after “comps” of key pages in the site were done, usability testing would take place to make sure that using the system would be intuitive for the user.  Particularly in e-commerce, if a user can’t understand your site, they’ll go somewhere else to buy what they want.  I attribute much of Amazon’s success to just how easy they make it to buy things.

In the case of the VCF, the system was 25% complete before the FBI decided they wanted “bread crumbs”.  A usability prototype would have caught this.  What really surprises me is that this functionality was left out of the design in the first place.  I can’t think of any website, whether it’s one I’ve built or one I’ve used, that didn’t have bread crumbs.  That seemed like a gigantic oversight to me.


A brief note on version control, labeling, and deployments

One thing I didn’t realize about CruiseControl.NET until recently was that it automatically labels your builds.  It uses a ... naming scheme.  The way this helps with deployments is that you can always figure out what you’ve deployed to production in the past, as long as you only deploy labeled builds.

We still need to get our continuous integration setup working again, but in the interim, manual labeling of releases is still helpful.


Exposing InnerException

This week, an application I work on started logging an exception that provided no help at all in debugging the problem. My usual practice of running the app in debug mode with production values in the config file also failed to reproduce the error. After a couple of days of checking a bunch of different areas of code (and still not solving the problem), Bob (a consultant from Intervention Technologies) gave me some code to get at all the InnerException values for a given Exception. Download the function from here.

StackTrace can be pretty large. Since we log to a database, I was worried about overrunning the column width. I also wasn’t keen on the idea of looking at so large a text block if an Exception was nesting four or five additional ones. So instead of implementing the code above, I changed the code to log at each level. Doing it this way adds a log entry for each InnerException. Because the log viewer I implemented displays the entries in reverse-chronological order, the root cause of a really gnarly exception displays at the top. The changes I made to global.asax looked like this.
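
I can’t reproduce the exact code here, but the idea boils down to something like the sketch below; the logging delegate stands in for our log4net wrapper.

using System;

public static class ExceptionLogging
{
    // Walks the InnerException chain and writes one log entry per level,
    // so the root cause ends up as its own (smaller) entry.
    public static void LogWithInnerExceptions(Exception ex, Action<string> logError)
    {
        Exception current = ex;
        while (current != null)
        {
            logError(current.GetType().FullName + ": " + current.Message
                     + Environment.NewLine + current.StackTrace);
            current = current.InnerException;
        }
    }
}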

The result of this work revealed that the app had been complaining about not being able to reach the SMTP server to send e-mail (which it needs to send users their passwords when they register or recover lost passwords).

Once we’d established that the change was working properly, it was time to refactor the code to make the functionality more broadly available. To accomplish this, I updated our Log4Net wrapper like this.


App_Code: Best In (Very) Small Doses

When I first started developing solutions on version 2.0 of the .NET Framework, I saw examples that had some logic in the App_Code folder. For things like base pages, I thought App_Code was perfect, and that’s how I use it today. When I started to see applications put their entire middle tiers in App_Code, however, I thought that was a bad idea. Beyond not being able to unit test your components (and the consequences associated with a lack of unit testing, coverage, etc.), it just seemed … wrong.

Fortunately, there are additional reasons to minimize the use of App_Code:


Defending debuggers (sort of)

I came across this post about debuggers today. I found it a lot more nuanced than the Giles Bowkett post on the same topic. The part of the post I found the most useful was when he used the issue of problems in production to advocate for two practices I’m a big fan of: test-driven development and effective logging.

I’m responsible for an app that wasn’t developed using TDD and had no logging at all when I first inherited it. When there were problems in production, we endured all manner of suffering to determine the causes and fix them. Once we added some unit tests and implemented logging in key locations (global.asax and catch blocks primarily), the number of issues we had dropped off significantly. And when there were issues, the log helped us diagnose and resolve problems far more quickly.

The other benefit of effective logging is to customer service. Once I made the log contents available to a business analyst, she could see in real time what was happening with the application and provide help to users more quickly too.

Whether you add it on after the fact or design it in from the beginning, logging is a must-have for today’s applications.


What tests are really for

Buried deep in this Giles Bowkett post is the following gem:

"Tests are absolutely not for checking to see if things went wrong. They are for articulating what code should do, and proving that code does it."
While it comes in the midst of an anti-debugger post (and an extended explanation of why comments on the post are closed), it is an excellent and concise statement of the purpose of unit tests.  It explains perhaps better than anything else I've read the main reason unit tests (or specifications, as the author would call them) should be written before the code.

Quick fix for "Failed to enable constraints" error

If you use strongly-typed datasets in .NET, you’ve encountered the dreaded “Failed to enable constraints …” message.  I most recently encountered it this morning, while unit testing some new code.  There were far fewer search results for the phrase than I expected, so I’ll add my experience to the lot.

The XSD I’m working with has one table and one stored procedure with four parameters.  A call to this stored procedure (via a method in a business logic class) returns a one-column, one-row result set.  My code threw a ConstraintException each time the result set value was zero (0).  To eliminate this problem, I changed the value of the AllowDBNull attribute of each column in the XSD table from False to True (if the value wasn’t True already).  When I ran the unit tests again, they were successful.

I’ll have to research this further at some point, but I think part of the reason for ConstraintException being thrown in my case was the difference between the stored procedure’s result set columns and the table definition of the associated table adapter.
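
One general-purpose debugging trick that helps here (my own habit, not something from a reference): when the Fill throws, dump each table’s row errors, which usually name the offending column and constraint.

using System;
using System.Data;

public static class DataSetDebugging
{
    // After a Fill throws "Failed to enable constraints", the offending rows
    // carry a RowError describing which constraint they violated.
    public static void DumpConstraintViolations(DataSet ds)
    {
        foreach (DataTable table in ds.Tables)
        {
            if (!table.HasErrors) continue;

            foreach (DataRow row in table.GetErrors())
            {
                Console.WriteLine("{0}: {1}", table.TableName, row.RowError);
            }
        }
    }
}

Calling something like that from the catch block around the Fill should point straight at the column that needs AllowDBNull changed.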

In any case, setting AllowDBNull to True is one way to eliminate that pesky error.