A visit to Iowa City

Last weekend, I visited my cousin Kevin at the University of Iowa to sit in on his Ph.D. defense.  For the past five years, he's been working in pharmaceutical chemistry, figuring out how to create vaccines that can be delivered directly to human genes.  I'm no chemist, so the bulk of his talk was way over my head, but it was very cool to see his command of the material and how well he presented it.  When he came back from the private portion of his defense, we knew he'd succeeded.

After a celebratory lunch, Kevin took his brother Richard, sister Michelle, and me to a firing range to shoot.  By firing range, I don’t mean some shiny building with paper targets on motorized tracks.  We drove about an hour from Iowa City to a fenced-in area outdoors with some metal stands and a big pit.  You bring your own guns, ammo, and targets.  When other people are around, you have to signal them that you’re going to put targets out so they stop shooting.  We turned our fire on some empty steel solvent containers with four different weapons: a Ruger pistol (.22 LR ammunition), a Ruger rifle, a Springfield 1911 (.45 ammunition), and an M1 Garand (7.62mm rounds).  After spending a couple hours shooting, I will never look at Hollywood shoot-em-ups the same way again.  Movies seem totally fake compared with the noise and recoil of large-caliber weapons.  We had fun, and we turned out to be half-decent shots (for rookies).


StackOverflow Dev Days DC

In this case, DC = Falls Church, VA.  I went to the State Theatre to attend this conference.  Considering the cost ($99/person), the conference turned out to be a great value.  I wrote up a conference report to share with my bosses and co-workers and it’s included below.  It has footnotes because I typed it up in Word 2007 and pasted it in here with very little editing (because after all this writing, I’m feeling a bit lazy).

Summary

The main reasons the creators of stackoverflow.com put on this conference include the following:

  1. Bring together developers that are part of the Stack Overflow community[1]
  2. Teach developers something outside their immediate field[2]
  3. Accomplish 1 & 2 at low cost[3]
A fourth reason I would add is to pitch FogBugz 7.  It's the primary product offering of Fog Creek Software, so it would have been odd for a conference it supports not to do at least a little advertising.  Spolsky also attempted to divide the venue by area for networking around certain topics, but I'm not sure how successful that was.

The conference succeeded in its main objectives.  At $99 per person, this conference was a bargain.  Given the diversity of topics and caliber of speakers, the price could have been higher and the venue would still have sold out.  Of the seven official topics presented (there was an eighth on Agile after the conference ended), only the ASP.NET MVC talk used technology that I had hands-on production experience with.  I was disappointed not to see a presentation on Android, but that was the only thing obviously missing from the day.

Keynote: Joel Spolsky

If I were to boil down Joel Spolsky's keynote to a single phrase, it would be this:

“Death to the dialog box!”

Spolsky’s talk argued persuasively that software often forces users to make decisions about things they don’t care about, or don’t know enough about to answer correctly.  Among his examples were modal dialog boxes for products like QuickBooks and Internet Explorer, and the Windows Vista Start button.  He talked about the other extreme (overly simple applications) as well, using the “build less” design philosophy of the 37signals team as an example.[4] Equating those kinds of applications with Notepad was a reach (and clearly played for laughs), but described the limitations of the alternative to complex applications pretty well.  The examples did a good job of setting up the choice between simplicity and power.

He cited an experiment in the selection of jam from The Paradox of Choice: Why More Is Less[5] to show the potential drawbacks of too many choices.  The experiment found that a display with fewer choices produced an order of magnitude more jam sales, which put a monetary value on the design decision to limit choices.

Predictably, his examples of the right kind of design were products from Apple.  It takes a lot more effort to put a Nokia E71 in vibrate mode than it does an iPhone.  Spolsky pointed to the iPod's lack of buttons for Stop and Power as examples of addition by subtraction.  The best example he chose was actually Amazon's 1-Click ordering.  In addition to offering the most reasoned defense I've heard yet of Amazon winning that patent, he explained how it works for multiple purchases.

A few other takeaways from Spolsky's keynote that I've tried to capture as close to verbatim as possible are:

  • The computer shouldn’t set the agenda.
  • Bad features interrupt users.
  • Don’t give users choices they don’t care about.
iPhone Development: Dan Pilone

This talk successfully combined technical depth on iPhone development with information about business models for actually selling an app once it's complete.  Pilone discussed which design patterns to use (MVC, DataSource, Delegate) as well as what paid applications are selling for in the App Store (the highest-grossing ones sell for between $4.99 and $9.99).

One of the most useful parts of the talk was about the approval process.  He gave his own experience of getting applications through the submission process, including one that was rejected and the reasons why.  According to him, two weeks is the average time it takes Apple to accept or reject an application.  It's even possible for upgrades of a previously-accepted app to be rejected.

Pilone did a good job of making it clear that quality is what sells applications.  He used the Convert[6] application (from taptaptap) as an example.  It's just one of over 80 unit converter applications in the App Store, but it's beating the competition handily.  OmniFocus was his second example.

Revenue Models

  • Ad-supported (very difficult to make money with these)
  • Paid
  • In-app upgrades[7]
Dan Pilone is the co-author of Head First iPhone Development[8], which will be available for sale on October 29.

His recommendation for selling apps in the App Store is to release a paid version first, then an ad-supported version.  This advice seemed counterintuitive to me, but I suspect he suggested it because there’s no data on the in-app upgrades revenue model.  I see in-app upgrades as Apple’s most explicit support for the “freemium”[9] business model yet.

ASP.NET MVC: Scott Allen

This talk was basically a demo of a preview version of ASP.NET MVC 2.  Allen wrote code for his demonstration on the fly (with the sort of mistakes that can happen using this approach), so the example was pretty basic.  The takeaways I thought were useful for getting started with the technology were:

  • External projects that add features to ASP.NET MVC:
    • MVCContrib
    • MVC T4
  • You can combine standard WebForms and MVC in the same solution—particularly useful if you're trying to migrate an application from WebForms to ASP.NET MVC.  Allen mentioned the blogging platform Subtext[10] as an example of one application attempting this kind of migration.
This talk left a lot to be desired.  StackOverflow is the most robust example of what can be built with ASP.NET MVC.  A peek inside even a small part of actual StackOverflow source using ASP.NET MVC would have made a far more compelling presentation.
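
For anyone who, like me, hadn't touched the framework before the talk, here's a minimal sketch of my own (not from Allen's demo) showing the controller/action/view shape that everything in ASP.NET MVC hangs off of; the controller name and the data are made up:

```csharp
using System.Collections.Generic;
using System.Web.Mvc;

// Hypothetical controller for illustration only.
public class ProductsController : Controller
{
    // GET /Products/
    // Routing sends the request to this action; View() renders
    // Views/Products/Index.aspx with the list as its model.
    public ActionResult Index()
    {
        var products = new List<string> { "Widget", "Gadget" };
        return View(products);
    }
}
```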

FogBugz and Kiln

Even though this was strictly a product pitch, I've included it in the report because of how they implement a couple of ideas: plug-in architecture and tagging.

Plug-in architectures as an idea aren’t new—what was different was the motivation Joel Spolsky described for it.  One of his intentions was to make it easier for people to extend the functionality of FogBugz in ways he didn’t want.  He isn’t a fan of custom fields, so instead of building them into the core product, they’re implemented as a plug-in.  He demonstrated Balsamiq integration (via plug-in) as well, so the architecture does enable extension in ways he likes as well.

Tagging isn't a new idea either—what I found very interesting is how FogBugz applies tags.  Spolsky pitched them as a substitute for custom workflow.  His idea (as I understood it) was that bugs could be tagged with any items or statuses outside the standard workflow.  There wasn't much more detail than this, but I think the idea is definitely worth exploring further.

Python: Bruce Eckel

His talk was supposed to be about Python, but Bruce Eckel covered a lot more than that.  The most important takeaways of his talk were these:

  1. In language design, maintaining backward compatibility can cripple a language.
  2. The best programming languages change for the better by not being afraid of breaking backward compatibility.
  3. “Threads are unsustainable.”
Eckel's talk gave the audience a history of programming languages, as well as a hierarchy of "language goodness".  For the main languages created after C, the primary motivation for creating each one was to fix the problems of its predecessor: C++ was an attempt to fix the problems of C, while Java was an attempt to fix C++.  His assertion about the primary motivation behind Ruby was this (I'm paraphrasing):
Ruby intends to combine the best of Smalltalk and the best of Perl.
He made his point about the problems of backward compatibility by comparing an attempt to add closures to Java to language changes made by Ruby and Python.  An article titled “Seeking the Joy in Java” goes into greater detail on the Java side of things.[11] In the case of Java, the desire to maintain backward compatibility often prevents changes to a language which could fix things that are poorly implemented.  The authors of Python and Ruby aren’t afraid to break backward compatibility to make improvements, which makes them better languages than Java (in Eckel’s view).

Here’s Eckel’s hierarchy of programming languages:

  1. Python (his favorite)
  2. Ruby, Scala, Groovy (languages he also likes)
  3. Java
  4. C++
  5. C
Eckel also mentioned Grails as a framework he likes.

Another one of his pronouncements that stood out was a hope that Java would become the COBOL of the 21st century.

Eckel’s argument regarding the difficulty of writing good multithreaded code is one I’ve heard before.  He pointed to Python as a language with libraries for handling both the single processor task-switching and multi-processor parallel execution models of concurrency.

Google App Engine: Jonathan Blocksom

Jonathan Blocksom gave a great overview of Google App Engine.  He's a member of Google's Public Sector Project Team (not the App Engine team), and I think that helped him present the information from more of an audience perspective.  He did a nice job of describing the architecture and the advantages of using it.  Some of the applications running on Google App Engine include:

Blocksom also discussed some of the limitations of App Engine:
  • 30-second time limit on asynchronous tasks
  • No full text search
jQuery: Richard D. Worth

This may not have been the best talk for those already familiar with jQuery, but for me (someone unfamiliar with jQuery), it was close to perfect.  The presenter did an excellent job of showing its advantages over regular ECMAScript.  He used a clever trick to minimize the amount of typing during his demos by using slides with only the changed lines highlighted.  The "find things then do stuff" model he explained made it very easy to grasp what he was doing as he increased the complexity of his examples.[12]

Wrap-up

After the conference ended, a “metaStackOverflow” question was added to collect reviews of the conference from its attendees.[13] The top answer (as of October 28, 2009) also includes links to slides for three of the talks, which I’ve included here:
[1] [blog.stackoverflow.com/2009/05/s...](http://blog.stackoverflow.com/2009/05/stack-overflow-developer-days-conference/)

[2] Ibid

[3] Ibid

[4] gettingreal.37signals.com/toc.php

[5] www.amazon.com/gp/produc…

[6] taptaptap.com

[7] This revenue model is brand-new—Apple only began to support this within the past week or so.

[8] www.amazon.com/Head-Firs…

[9] en.wikipedia.org/wiki/Free…

[10] www.subtextproject.com

[11] www.artima.com/weblogs/v…

[12] Worth used http://jsbin.com/ for some of the more complex demos.  It’s a very good tool I hadn’t seen before.

[13] meta.stackoverflow.com/questions…


A .NET Client for REST Interface to Virtuoso

For my current project, I've been doing a lot of work related to the Semantic Web.  This has meant figuring out how to write SPARQL queries in order to retrieve data we can use for testing our application.  After figuring out how to do this manually (we used this SPARQL endpoint provided by OpenLink Software), it was time to automate the process.  The Virtuoso product has a REST service interface, but the only example I found for interacting with it used curl.  Fortunately, some googling revealed a really nice resource on the Yahoo Developer Network with some great examples.

I’ve put together a small solution in Visual Studio 2008 with a console application (VirtuosoPost) which executes queries against http://dbpedia.org/fct/ and returns the query results as XML.  It’s definitely not fancy, but it works.  There’s plenty of room for improvement, and I’ll make updates available here.  The solution does include all the source, so any of you out there who are interested in taking the code in an entirely different direction are welcome to do so.
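
For anyone curious about the core of the approach, here's a rough sketch of the idea (not the actual VirtuosoPost source).  It posts a SPARQL query to DBpedia's public Virtuoso SPARQL endpoint rather than the faceted browser service, and the query, class name, and format parameter are just illustrative:

```csharp
using System;
using System.Collections.Specialized;
using System.Net;
using System.Text;

class SparqlQuerySketch
{
    static void Main()
    {
        // A throwaway query; any SELECT against DBpedia would do here.
        var values = new NameValueCollection
        {
            { "query", "SELECT ?p ?o WHERE { <http://dbpedia.org/resource/Bank> ?p ?o } LIMIT 10" },
            // Virtuoso accepts a format parameter for SPARQL XML results.
            { "format", "application/sparql-results+xml" }
        };

        using (var client = new WebClient())
        {
            // POST the form-encoded query and dump the XML response.
            byte[] response = client.UploadValues("http://dbpedia.org/sparql", values);
            Console.WriteLine(Encoding.UTF8.GetString(response));
        }
    }
}
```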


Adventures in SPARQL

If this blog post seems different from usual, it's because I'm actually using it to get tech support via Twitter for an issue I'm having.  One of my tasks for my current project has me generating data for use in a test database.  DBpedia is the data source, and I've been running SPARQL queries against their Virtuoso endpoint to retrieve RDF/XML-formatted data.  For some reason, though, the resulting data doesn't validate.

For example, the following query:

```sparql
PREFIX owl: <http://www.w3.org/2002/07/owl#>
PREFIX xsd: <http://www.w3.org/2001/XMLSchema#>
PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
PREFIX rdf: <http://www.w3.org/1999/02/22-rdf-syntax-ns#>
PREFIX foaf: <http://xmlns.com/foaf/0.1/>
PREFIX dc: <http://purl.org/dc/elements/1.1/>
PREFIX : <http://dbpedia.org/resource/>
PREFIX dbpedia2: <http://dbpedia.org/property/>
PREFIX dbpedia: <http://dbpedia.org/>
PREFIX skos: <http://www.w3.org/2004/02/skos/core#>

SELECT ?property ?hasValue ?isValueOf
WHERE {
  { <http://dbpedia.org/resource/Bank> ?property ?hasValue
    FILTER (LANG(?hasValue) = 'en') . }
  UNION
  { ?isValueOf ?property <http://dbpedia.org/resource/Bank> }
}
```
generates the RDF/XML output here.  If I try to parse the file with an RDF validator (like this one, for example), validation fails.  Removing the attributes from the output takes care of the validation issues, but what I'm not sure of is why the node ids are added in the first place.

Adding File Headers Made Easy

One of the things on my plate at work is a macro for adding a file header and footer to all the source files in a Visual Studio solution.  The macro I put together from my own implementation, various web sources, and a colleague at work accomplished the goal at one point, but inconsistently.  So I'd been exploring other avenues for getting this done when Scott Garland told me about the File Header Text feature of ReSharper.  You simply put in the text you want to appear at the top of your source files, add a new Code Cleanup profile, check the Use File Header Text option, then run the new profile on your solution.

The result: if the filename ends in “.cs”, ReSharper will add the value in File Header Text as a comment to the top of the file. It’s even clever enough not to add duplicate text if a file already contains it in its header. So if you need to add copyright notices or any other text to the top of your source code files, and you use ReSharper, you’ve already got what you need.


Snow Leopard: Days 1-2

Thanks to a pre-order from Amazon on August 3, a copy of Snow Leopard arrived on my doorstep August 28.  The install was uneventful, typical of Mac OS X installs.  I put in the DVD, clicked through a few dialog boxes, and went to run a couple of errands.  When I got back, I logged in as usual.

So far, I haven’t noticed many differences between Leopard and Snow Leopard.  The few of note:

  • Hard drive space.  Before installing Snow Leopard, I had around 14GB of free space.  After installing Snow Leopard (and the latest version of Xcode), I have 27GB of free space.  That's quite a bit more reclaimed space than the 7GB Apple advertised.
  • Printing.  I have an HP LaserJet 1022.  I had to re-install it after upgrading to Snow Leopard and use an Apple driver.  It still works just fine.
  • Battery Status.  Apple added information on battery health.  Since my MacBook Pro is closing in on 3 years old, the "Service Battery" message is most likely correct.  Apple Support already has a thread about it.  Another thing I've noticed which may also be new to Snow Leopard is that I'm getting battery life percentages for my Bluetooth keyboard and mouse as well.
  • Character/Keyboard Viewer.  A new widget in the upper-right of the screen.  I haven't found any particular use for it yet.
  • Mail.  When I first started it, the app prompted me for some sort of upgrade.  Once it was done, the notes from my iPhone showed up under a Reminders item.
  • Quicken.  I'm still using Quicken 2007 for Mac, so I saw a little prompt about Rosetta when I first launched it.  What I really need to do is get out of Quicken 2007 into something else, but that's a subject for another post.
I can't say I've noticed any speed differences one way or the other so far, but it's only been a couple of days.

Random SQL Tricks (Part 2)

In my previous random SQL tricks post, I discussed how to generate random alphanumeric strings of any length.  A slight variation on that idea that also proved useful in generating test data is the following stored procedure (which generates a varchar consisting entirely of numbers):

```sql
CREATE PROCEDURE [dbo].[SpGenerateRandomNumberString]
    @randomString varchar(15) OUTPUT
AS
BEGIN
    SET NOCOUNT ON
    DECLARE @counter tinyint
    DECLARE @nextChar char(1)
    SET @counter = 1
    SET @randomString = ''

    WHILE @counter <= 15
    BEGIN
        SELECT @nextChar = CHAR(48 + CONVERT(INT, (57-48+1)*RAND()))

        SELECT @randomString = @randomString + @nextChar
        SET @counter = @counter + 1
    END
END
GO
```

The range in the select for @nextChar maps to ASCII values for the digits 0-9.  Unlike the stored procedure from my first post, there’s no if statement to determine whether or not the random value retrieved is allowed because the ASCII range for digits is contiguous.  The needs of my application restricted the length of this numeric string to 15 characters.  For more general use, the first refactoring would probably add string length as a second parameter, so the numeric string could be a variable length.


A long overdue upgrade

I’m finally running the latest version of WordPress (I’ve been way behind on upgrading).  I’d be curious to hear from those of you who visit regularly what you think of the new look.  I’d considered a couple others:

I ultimately chose this one for the random post feature that appears in the upper-right of the page.

Build Server Debugging

Early in June, I posted about inheriting a continuous integration setup from a former colleague.  Since then, I've replaced CruiseControl.NET and MSTest with TeamCity and NUnit 2.5.1, and added FxCop, NCover, and documentation generation (with Doxygen).  This system had been running pretty smoothly, with the exception of an occasional build failure due to a SQL execution error.  Initially, I thought the problem was due to the build restoring a database for use by some of our integration tests.  But when replacing the restore command with a script for database creation didn't fix the problem, I had to look deeper.

A look at the error logs for SQL Server Express 2005 revealed a number of messages that looked like:

SQL Server has encountered <x> occurrence(s) of cachestore flush ...
Most of what I found in my initial searches indicated that these could be ignored.  But a bit more googling brought me to this thread of an MSDN SQL Server database forum.  The answer by Tom Huleatt that recommended turning off the Auto-Close property seemed the most appropriate.  After checking in database script changes that included the following:
ALTER DATABASE <database name> SET AUTO_CLOSE OFF

GO

none of the builds have failed due to SQL execution errors.  We’ll see if these results continue.


Random SQL Tricks (Part 1)

One of my most recent tasks at work has been generating test data for integration tests of a new application.  We don't have the version of Visual Studio that does this for you, and rather than write an app to do it, I spent the past week hunting for examples that just used Transact-SQL.  The initial post I found the most useful is this one, in which the author provides five different ways of generating random numbers.  I use his third method quite often, as you'll see in this post (and any others I write on this topic).

One of our needs for random test data was alphanumeric strings of varying lengths.  Because having text mattered more than what the text said, it didn't have to resemble actual names (or anything recognizable).  The first example I found of a T-SQL stored procedure for generating a random string was in this blog post by XSQL Software.  The script does generate random strings, but they include non-alphanumeric characters.  To get the sort of random strings I wanted, I took the random number generation method from the first post and the stored procedure mentioned earlier and adapted them into this:

```sql
CREATE PROCEDURE [dbo].[SpGenerateRandomString]
    @sLength tinyint = 10,
    @randomString varchar(50) OUTPUT
AS
BEGIN
    SET NOCOUNT ON
    DECLARE @counter tinyint
    DECLARE @nextChar char(1)
    SET @counter = 1
    SET @randomString = ''

    WHILE @counter <= @sLength
    BEGIN
        SELECT @nextChar = CHAR(48 + CONVERT(INT, (122-48+1)*RAND()))

        IF ASCII(@nextChar) NOT IN (58,59,60,61,62,63,64,91,92,93,94,95,96)
        BEGIN
            SELECT @randomString = @randomString + @nextChar
            SET @counter = @counter + 1
        END
    END
END
```

The range in the select for @nextChar is the set of ASCII table values that map to digits, upper-case letters, and lower-case letters (among other things).  The “if” branch values in the set are those ASCII table values that map to punctuation, brackets, and other non-alphanumeric characters.  Only alphanumeric characters are added to @randomString as a result.  Having a stored procedure like this one available makes it much easier to generate test data, especially since it can be called from other stored procedures.


Introducing Doxygen

Last Wednesday evening, I gave a presentation on Doxygen at RockNUG.  I didn't bother with slides in order to give as much time as possible to actually demonstrating how the tool worked, so this blog post will fill in some of those gaps.

Doxygen is just one of a number of tools that generate documentation from the comments in source code.  In addition to C# “triple-slash” comments, Doxygen can generate documentation from Java, Python, and PHP source.  In addition to HTML, Doxygen can provide documentation in XML, LaTeX, and RTF formats.
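
For reference, here's a small made-up example of the kind of C# triple-slash comment block that Doxygen will pick up and turn into documentation:

```csharp
/// <summary>
/// Calculates the sales tax for an order total.
/// </summary>
/// <param name="subtotal">The pre-tax order total.</param>
/// <param name="rate">The tax rate, e.g. 0.06 for 6%.</param>
/// <returns>The tax owed.</returns>
public static decimal CalculateTax(decimal subtotal, decimal rate)
{
    return subtotal * rate;
}
```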

Getting up to speed quickly is pretty easy with Doxywizard.  It will walk you through configuring all the necessary values in either wizard or expert mode, and the configuration file it generates documents the purpose and effect of each setting thoroughly.  One thing I will note that may not be readily apparent is that you can run Doxygen against multiple directories of source code to get a single set of documentation.  It just requires that the value of the INPUT property contain all of those directories (instead of a single one).


Converting MSTest Assemblies to NUnit

If you wanted to convert existing test assemblies for a Visual Studio solution from using MSTest to NUnit, how would you do it?  This post will provide one answer to that question.

I started by changing the type of the test assembly.  To do this, I opened the test project file with a text editor, then used this link to find the MSTest GUID in the file and remove it (the GUID will be inside a ProjectTypeGuids XML tag).  This should ensure that Visual Studio and/or any third-party test runners can identify it correctly.  Once I saved that change, the remaining steps were:

  • replace references to Microsoft.VisualStudio.QualityTools.UnitTestFramework with nunit.framework
  • change unit test attributes from MSTest to NUnit (you may find a side-by-side comparison helpful)
  • delete any code specific to MSTest (this includes TestContext, DeploymentItem, AssemblyInitialize, AssemblyCleanup, etc)
After the above steps, NUnit ran the tests without any further modifications.  All of my calls to Assert worked the same way in NUnit that they did in MSTest.
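
To make the attribute changes concrete, here's a trivial test class of my own (not one of the converted assemblies) in its NUnit form, with the MSTest equivalents noted in comments:

```csharp
using NUnit.Framework;        // was: using Microsoft.VisualStudio.QualityTools.UnitTesting equivalents

[TestFixture]                 // was: [TestClass]
public class CalculatorTests
{
    [SetUp]                   // was: [TestInitialize]
    public void Setup()
    {
        // per-test setup goes here
    }

    [Test]                    // was: [TestMethod]
    public void Add_ReturnsSum()
    {
        Assert.AreEqual(4, 2 + 2);   // Assert calls stay the same
    }
}
```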

MSBuild Transforms, Batching, Well-Known Metadata and MSTest

Thanks to a comment from Daniel Richardson on my previous MSTest post (and a lot more research, testing, & debugging), I’ve found a more flexible way of calling MSTest from MSBuild.  The main drawback of the solution I blogged about earlier was that new test assemblies added to the solution would not be run in MSBuild unless the Exec call to MSTest.exe was updated to include them.  But thanks to a combination of MSBuild transforms and batching, this is no longer necessary.

First, I needed to create a list of test assemblies.   The solution is structured in a way that makes this relatively simple.  All of our test assemblies live in a “Tests” folder, so there’s a root to start from.  The assemblies all have the suffix “.Test.dll” too.  The following CreateItem task does the rest:

```xml
<CreateItem Include="$(TestDir)\**\bin\$(Configuration)\*.Test.dll"
            AdditionalMetadata="TestContainerPrefix=/testcontainer:">
  <Output TaskParameter="Include" ItemName="TestAssemblies" />
</CreateItem>
```

The task above creates a TestAssemblies element, which contains a semicolon-delimited list of paths to every test assembly for the application.  Since the MSTest command line needs a space between each test assembly passed to it, the TestAssemblies element can’t be used as-is.  Each assembly also requires a “/testcontainer:” prefix.  Both of these issues are addressed by the combined use of transforms, batching, and well-known metadata as shown below:

```xml
<Exec Command="&quot;$(VS90COMNTOOLS)..\IDE\mstest.exe&quot; @(TestAssemblies->'%(TestContainerPrefix)%(FullPath)', ' ') /runconfig:localtestrun.testrunconfig" />
```

Note the use of %(TestContainerPrefix) above.  I defined that metadata element in the CreateItem task.  Because it’s part of each item in TestAssemblies, I can refer to it in the transform.  The %(FullPath) is well-known item metadata.  For each assembly in TestAssemblies, it returns the full path to the file.  As for the semi-colon delimiter that appears by default, the last parameter of the transform (the single-quoted space) replaces it.

The end result is an MSTest call that works no matter how many test assemblies are added, with no further editing of the build script.

Here’s a list of the links that I looked at that helped me find this solution:


Detect .NET Framework Version Programmatically

If you need to determine programmatically which versions of the .NET Framework are available on a machine, you'd ideally use a C++ program (since it has no dependencies on .NET).  But if you can guarantee that .NET 2.0 will be available, there's another option: a library written by Scott Dorman, ported from a C++ program.  I'm using the library for an application launcher that verifies the right version of the .NET Framework is available (among other prerequisites).
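
The gist of the registry-based technique (this is just a rough sketch I put together, not Dorman's library) is to enumerate the subkeys Windows creates under the NDP registry key as each framework version is installed:

```csharp
using System;
using Microsoft.Win32;

class FrameworkVersionSketch
{
    static void Main()
    {
        // Installed 2.0-and-later frameworks show up as subkeys here
        // (e.g. v2.0.50727, v3.0, v3.5).
        using (RegistryKey ndp = Registry.LocalMachine.OpenSubKey(
            @"SOFTWARE\Microsoft\NET Framework Setup\NDP"))
        {
            if (ndp == null)
            {
                Console.WriteLine("No 2.0+ frameworks found.");
                return;
            }

            foreach (string version in ndp.GetSubKeyNames())
            {
                Console.WriteLine(version);
            }
        }
    }
}
```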


Calling MSTest from MSBuild or The Price of Not Buying TFS

When one of my colleagues left for a new opportunity, I inherited the continuous build setup he built for our project.  This has meant spending the past few weeks scrambling to get up to speed on CruiseControl.NET, MSTest and Subversion (among other things).  Because we don’t use TFS, creating a build server required us to install Visual Studio 2008 in order to run unit tests as part of the build, along with a number of other third-party tasks to make MSBuild work more like NAnt.  So the first time a build failed because of tests that had passed locally, I wasn’t looking forward to figuring out precisely which of these pieces triggered the problem.

After reimplementing unit tests a couple of different ways and still getting the same results (tests passing locally and failing on the build server), we eventually discovered that the problem was a bug in Visual Studio 2008 SP1.  Once we installed the hotfix, our unit tests passed on the build server without us having to change them.  This hasn’t been the last issue we’ve had with our “TFS-lite” build server.

Build timeouts have proven to be the latest hassle.  Instead of the tests passing locally and failing on the build server, they actually passed in both places.  But for whatever reason, the test task didn't really complete, and the build timed out.  Increasing the build timeout didn't address the issue either.  Yesterday, thanks to the Microsoft Build Sidekick editor, we narrowed the problem down to the MSTest task in our build file.  The task is the creation of Nati Dobkin, and it made writing the test build target easier (at least until we couldn't get it to work consistently).  So far, I haven't found (or written) an alternative task, but I did find a blog post that pointed the way to our current solution.

The solution:

```xml
<!-- MSTest won't work if the tests weren't built in the Debug configuration -->
<Target Name="Test:MSTest" Condition=" '$(Configuration)' == 'Debug' ">
  <MakeDir Directories="$(TestResultsDir)" />
  <MSBuild.ExtensionPack.FileSystem.Folder TaskAction="RemoveContent" Path="$(TestResultsDir)" />

  <Exec Command="&quot;$(VS90COMNTOOLS)..\IDE\mstest.exe&quot; /testcontainer:$(TestDir)\<test assembly directory>\bin\$(Configuration)\<test assembly>.dll /testcontainer:$(TestDir)\<test assembly directory>\bin\$(Configuration)\<test assembly>.dll /testcontainer:$(TestDir)\<test assembly directory>\bin\$(Configuration)\<test assembly>.dll /runconfig:localtestrun.testrunconfig" />
</Target>
```

TestDir and TestResultsDir are defined in a property group at the beginning of the MSBuild file.  VS90COMNTOOLS is an environment variable created during the install of Visual Studio 2008.  Configuration comes from the solution file.  Actual test assembly directories and names have been replaced  with <test assembly> and <test assembly directory>.  The only drawback to the solution so far is that we’ll have to update our MSBuild file if we add a new test assembly.


CruiseControl.NET, MSBuild and Multicore CPUs

When I was trying to debug a continuous build timeout at work recently, I came across this Scott Hanselman post about parallel builds and builds with multicore CPUs using MSBuild.  While adding /m to the buildArgs tag in my ccnet.config didn’t solve my timeout problem (putting the same unit tests into a different class did), pooling multiple MSBuild processes will certainly help as our builds get bigger.


The unexpected home of IsHexDigit

I was about to write a method that checked to see if a character was a hexadecimal value when it occurred to me that I should google for it.  I was going to name it IsHexDigit, and googling for that revealed this link.  I’m not sure why it’s in the System.Uri class, but it’s less code for me to write.
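
Usage is about as simple as it gets; a quick sketch:

```csharp
using System;

class HexCheckExample
{
    static void Main()
    {
        // Uri.IsHexDigit checks a single character; checking a whole
        // string is just a loop.
        string candidate = "1A3F";
        bool allHex = true;
        foreach (char c in candidate)
        {
            if (!Uri.IsHexDigit(c)) { allHex = false; break; }
        }
        Console.WriteLine("{0} hex? {1}", candidate, allHex);
    }
}
```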


Implementing Mouse Hover in WPF

We’ve spent the past couple of weeks at work giving ourselves a crash course in Windows Presentation Foundation (WPF) and LINQ.  I’m working on a code example that will switch the datatemplate in a list item when the mouse hovers over it.  Unfortunately, WPF has no MouseHover event like Windows Forms does.  The usual googling didn’t cough up a ready-made answer.  Some hacking on one example did reveal a half-answer (not ideal, but at least a start).

First, I set the ToolTip property of the element I used to organize my data (in this case, a StackPanel).  Next, I added a ToolTipOpening event for the StackPanel.  Here's the code for StackPanel_ToolTipOpening:

```csharp
private void StackPanel_ToolTipOpening(object sender, ToolTipEventArgs e)
{
    e.Handled = true;
    ContentPresenter presenter =
        (ContentPresenter)(((Border)((StackPanel)e.Source).Parent).TemplatedParent);
    presenter.ContentTemplate = this.FindResource("Template2") as DataTemplate;
}
```

The result: instead of a tooltip displaying when you hover over a listbox row, the standard datatemplate is replaced with an expanded one that displays more information.  This approach definitely has flaws.  Beyond being a hack, there’s no way to set how long you can hover before the templates switch.

Switching from an expanded datatemplate back to a standard one involved a bit less work.  I added a MouseLeave event to the expanded template.  Here's the code for the event:

```csharp
private void StackPanel_MouseLeave(object sender, MouseEventArgs e)
{
    ContentPresenter presenter =
        (ContentPresenter)(((Border)((StackPanel)e.Source).Parent).TemplatedParent);
    presenter.ContentTemplate = this.FindResource("ScriptLine") as DataTemplate;
}
```

So once the mouse moves out of the listbox item with the expanded template, it switches back to the standard template.  Not an ideal solution, but it works.
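
If I wanted that missing hover delay, one option (an untested sketch on my part, not something from the example I hacked on) would be to skip ToolTipOpening entirely and use MouseEnter/MouseLeave with a DispatcherTimer from System.Windows.Threading; the 500ms interval and "Template2" resource name are placeholders:

```csharp
private DispatcherTimer hoverTimer;
private StackPanel pendingPanel;

private void StackPanel_MouseEnter(object sender, MouseEventArgs e)
{
    pendingPanel = (StackPanel)sender;
    if (hoverTimer == null)
    {
        // Half a second is an arbitrary choice; tune to taste.
        hoverTimer = new DispatcherTimer { Interval = TimeSpan.FromMilliseconds(500) };
        hoverTimer.Tick += delegate
        {
            hoverTimer.Stop();
            ContentPresenter presenter =
                (ContentPresenter)((Border)pendingPanel.Parent).TemplatedParent;
            presenter.ContentTemplate = this.FindResource("Template2") as DataTemplate;
        };
    }
    hoverTimer.Start();
}

private void StackPanel_MouseLeave(object sender, MouseEventArgs e)
{
    hoverTimer.Stop();
    // ...then switch back to the standard template as shown above.
}
```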

This link started me down the path to finding a solution (for reference).


Gotta love this April Fool's Day gag from Google

Here’s the e-mail home page.

They’ve got a little announcement, technical specs, even a blog with annoying, cutesy music.