Adding File Headers Made Easy
One of the things on my plate at work is a macro for adding a file header and footer to all the source files in a Visual Studio solution. The macro I pieced together from my own implementation, various web sources, and a colleague's work accomplished the goal at one time, but inconsistently. So I'd been exploring other avenues for getting this done when Scott Garland told me about the File Header Text feature of ReSharper. You simply enter the text you want to appear at the top of your source files, add a new Code Cleanup profile, check the Use File Header Text option, then run the new profile on your solution.
The result: if the filename ends in “.cs”, ReSharper will add the value in File Header Text as a comment to the top of the file. It’s even clever enough not to add duplicate text if a file already contains it in its header. So if you need to add copyright notices or any other text to the top of your source code files, and you use ReSharper, you’ve already got what you need.
Random SQL Tricks (Part 2)
In my previous random SQL tricks post, I discussed how to generate random alphanumeric strings of any length. A slight variation on that idea that also proved useful in generating test data is the following stored procedure (which generates a varchar consisting entirely of numbers):
CREATE PROCEDURE [dbo].[SpGenerateRandomNumberString]
    @randomString varchar(15) OUTPUT
AS
BEGIN
    SET NOCOUNT ON
    DECLARE @counter tinyint
    DECLARE @nextChar char(1)
    SET @counter = 1
    SET @randomString = ''

    WHILE @counter <= 15
    BEGIN
        SELECT @nextChar = CHAR(48 + CONVERT(INT, (57-48+1)*RAND()))
        SELECT @randomString = @randomString + @nextChar
        SET @counter = @counter + 1
    END
END
GO
The range in the select for @nextChar maps to ASCII values for the digits 0-9. Unlike the stored procedure from my first post, there’s no if statement to determine whether or not the random value retrieved is allowed because the ASCII range for digits is contiguous. The needs of my application restricted the length of this numeric string to 15 characters. For more general use, the first refactoring would probably add string length as a second parameter, so the numeric string could be a variable length.
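For generating similar test strings outside the database, the same loop is easy to sketch in Python (the function name here is my own, not from the original post):

```python
import random

def random_digit_string(length: int = 15) -> str:
    # Build the string one random digit at a time, mirroring the stored procedure's loop.
    return "".join(random.choice("0123456789") for _ in range(length))
```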
Build Server Debugging
Early in June, I posted about inheriting a continuous integration setup from a former colleague. Since then, I’ve replaced CruiseControl.NET and MSTest with TeamCity and NUnit 2.5.1, added FxCop, NCover, and documentation generation (with Doxygen). This system had been running pretty smoothly, with the exception of an occasional build failure due to SQL execution error. Initially, I thought the problem was due to the build restoring a database for use by some of our integration tests. But when replacing the restore command with a script for database creation didn’t fix the problem, I had to look deeper.
A look at the error logs for SQL Server Express 2005 revealed a number of messages that looked like:
SQL Server has encountered <x> occurrence(s) of cachestore flush ...
Most of what I found in my initial searches indicated that these could be ignored. But a bit more googling brought me to this thread of an MSDN SQL Server database forum. The answer by Tom Huleatt that recommended turning off the Auto-Close property seemed the most appropriate. After checking in database script changes that included the following:
ALTER DATABASE <database name> SET AUTO_CLOSE OFF
GO
none of the builds have failed due to SQL execution errors. We'll see if these results continue.
Random SQL Tricks (Part 1)
One of my most recent tasks at work has been generating test data for integration tests of a new application. We don’t have the version of Visual Studio which does it for you, and rather than write an app that did it, I spent the past week hunting for examples that just used Transact-SQL. The initial post that I found the most useful is this one, in which the author provides five different ways of generating random numbers. I use his third method quite often, as you’ll see in this post (and any others I write on this topic).
One of our needs for random test data was alphanumeric strings of varying lengths. Because the content of the text mattered less than the need for text, it didn’t have to resemble actual names (or anything recognizable). The first example I found of a T-SQL stored procedure for generating a random string was in this blog post by XSQL Software. The script does generate random strings, but they include non-alphanumeric characters. To get the sort of random strings I wanted, I took the random number generation method from the first post and the stored procedure mentioned earlier and adapted them to this:
CREATE PROCEDURE [dbo].[SpGenerateRandomString]
    @sLength tinyint = 10,
    @randomString varchar(50) OUTPUT
AS
BEGIN
    SET NOCOUNT ON
    DECLARE @counter tinyint
    DECLARE @nextChar char(1)
    SET @counter = 1
    SET @randomString = ''

    WHILE @counter <= @sLength
    BEGIN
        SELECT @nextChar = CHAR(48 + CONVERT(INT, (122-48+1)*RAND()))
        IF ASCII(@nextChar) NOT IN (58,59,60,61,62,63,64,91,92,93,94,95,96)
        BEGIN
            SELECT @randomString = @randomString + @nextChar
            SET @counter = @counter + 1
        END
    END
END
GO
The range in the select for @nextChar is the set of ASCII table values that map to digits, upper-case letters, and lower-case letters (among other things). The “if” branch values in the set are those ASCII table values that map to punctuation, brackets, and other non-alphanumeric characters. Only alphanumeric characters are added to @randomString as a result. Having a stored procedure like this one available makes it much easier to generate test data, especially since it can be called from other stored procedures.
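The same idea translates directly to other languages. As a rough Python sketch (names are illustrative), drawing only from an alphanumeric pool removes the need for the rejection check entirely:

```python
import random
import string

def random_alnum_string(length: int = 10) -> str:
    # Choosing directly from letters and digits means no generated
    # character ever has to be thrown away.
    return "".join(random.choice(string.ascii_letters + string.digits) for _ in range(length))
```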
Introducing Doxygen
Last Wednesday evening, I gave a presentation on Doxygen at RockNUG. I skipped slides in order to give as much time as possible to demonstrating how the tool works, so this blog post will fill in some of those gaps.
Doxygen is just one of a number of tools that generate documentation from the comments in source code. In addition to C# “triple-slash” comments, Doxygen can generate documentation from Java, Python, and PHP source. In addition to HTML, Doxygen can provide documentation in XML, LaTeX, and RTF formats.
Getting up to speed quickly is pretty easy with Doxywizard. It will walk you through configuring all the necessary values in wizard or expert mode. When you save the configuration file it generates, the purpose and effect of each setting is thoroughly documented. One thing I will note that may not be readily apparent is that you can run Doxygen against multiple directories with source code to get a single set of documentation. It just requires that the value of your input (INPUT) property contain all of those directories (instead of a single one).
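As an illustration (the project name and directories below are made up, not from the talk), a Doxyfile fragment covering multiple source directories at once might look like:

```
# Illustrative Doxyfile fragment: document two source trees in one run.
PROJECT_NAME   = "MyApplication"
INPUT          = src/Core src/Services
RECURSIVE      = YES
GENERATE_HTML  = YES
GENERATE_LATEX = NO
```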
Converting MSTest Assemblies to NUnit
If you wanted to convert existing test assemblies for a Visual Studio solution from using MSTest to NUnit, how would you do it? This post will provide one answer to that question.
I started by changing the type of the test assembly. To do this, I opened the .proj file with a text editor, then used this link to find the MSTest GUID in the file and remove it (the guid will be inside a ProjectTypeGuids XML tag). This should ensure that Visual Studio and/or any third-party test runners can identify it correctly. Once I saved that change, the remaining steps were:
- replace references to Microsoft.VisualStudio.QualityTools.UnitTestFramework with nunit.framework
- change unit test attributes from MSTest to NUnit (you may find a side-by-side comparison helpful)
- delete any code specific to MSTest (this includes TestContext, DeploymentItem, AssemblyInitialize, AssemblyCleanup, etc.)
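As a rough sketch of the attribute-renaming step, a small script along these lines could make a first pass over the source text. The mapping below targets NUnit 2.x and is my own partial list, not an exhaustive or official one:

```python
import re

# Partial, assumed mapping of common MSTest attributes to NUnit 2.x equivalents.
ATTRIBUTE_MAP = {
    "TestClass": "TestFixture",
    "TestMethod": "Test",
    "TestInitialize": "SetUp",
    "TestCleanup": "TearDown",
    "ClassInitialize": "TestFixtureSetUp",
    "ClassCleanup": "TestFixtureTearDown",
}

def convert_attributes(source: str) -> str:
    # Swap the using directive, then rename each attribute occurrence.
    source = source.replace(
        "using Microsoft.VisualStudio.TestTools.UnitTesting;",
        "using NUnit.Framework;",
    )
    for mstest, nunit in ATTRIBUTE_MAP.items():
        source = re.sub(r"\[" + mstest + r"(\(\))?\]", "[" + nunit + "]", source)
    return source
```

A pass like this still leaves the MSTest-specific code mentioned above to remove by hand.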
MSBuild Transforms, Batching, Well-Known Metadata and MSTest
Thanks to a comment from Daniel Richardson on my previous MSTest post (and a lot more research, testing, & debugging), I’ve found a more flexible way of calling MSTest from MSBuild. The main drawback of the solution I blogged about earlier was that new test assemblies added to the solution would not be run in MSBuild unless the Exec call to MSTest.exe was updated to include them. But thanks to a combination of MSBuild transforms and batching, this is no longer necessary.
First, I needed to create a list of test assemblies. The solution is structured in a way that makes this relatively simple. All of our test assemblies live in a “Tests” folder, so there’s a root to start from. The assemblies all have the suffix “.Test.dll” too. The following CreateItem task does the rest:
<CreateItem Include="$(TestDir)\**\bin\$(Configuration)\*.Test.dll"
            AdditionalMetadata="TestContainerPrefix=/testcontainer:">
  <Output TaskParameter="Include" ItemName="TestAssemblies" />
</CreateItem>
The task above creates a TestAssemblies element, which contains a semicolon-delimited list of paths to every test assembly for the application. Since the MSTest command line needs a space between each test assembly passed to it, the TestAssemblies element can’t be used as-is. Each assembly also requires a “/testcontainer:” prefix. Both of these issues are addressed by the combined use of transforms, batching, and well-known metadata as shown below:
<Exec Command="&quot;$(VS90COMNTOOLS)..\IDE\mstest.exe&quot; @(TestAssemblies->'%(TestContainerPrefix)%(FullPath)', ' ') /runconfig:localtestrun.testrunconfig" />
Note the use of %(TestContainerPrefix) above. I defined that metadata element in the CreateItem task. Because it’s part of each item in TestAssemblies, I can refer to it in the transform. The %(FullPath) is well-known item metadata. For each assembly in TestAssemblies, it returns the full path to the file. As for the semi-colon delimiter that appears by default, the last parameter of the transform (the single-quoted space) replaces it.
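The effect of the transform can be sketched in a few lines of Python (purely illustrative, not how MSBuild is implemented):

```python
def transform(items, prefix, separator=" "):
    # Mirrors @(TestAssemblies->'%(TestContainerPrefix)%(FullPath)', ' '):
    # each item gets the prefix, and the default ';' separator becomes a space.
    return separator.join(prefix + item for item in items)

command_args = transform(["A.Test.dll", "B.Test.dll"], "/testcontainer:")
```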
The end result is a MSTest call that works no matter how many test assemblies are added, with no further editing of the build script.
Here’s a list of the links that I looked at that helped me find this solution:
Detect .NET Framework Version Programmatically
If you need to determine programmatically which versions of the .NET Framework are available on a machine, you'd ideally use a C++ program (since it has no dependencies on .NET). But if you can guarantee that .NET 2.0 will be available, there's another option: source code written by Scott Dorman, ported from a C++ program. I'm using the library in an application launcher that verifies the right version of the .NET Framework is available (among other prerequisites).
Calling MSTest from MSBuild or The Price of Not Buying TFS
When one of my colleagues left for a new opportunity, I inherited the continuous build setup he built for our project. This has meant spending the past few weeks scrambling to get up to speed on CruiseControl.NET, MSTest and Subversion (among other things). Because we don’t use TFS, creating a build server required us to install Visual Studio 2008 in order to run unit tests as part of the build, along with a number of other third-party tasks to make MSBuild work more like NAnt. So the first time a build failed because of tests that had passed locally, I wasn’t looking forward to figuring out precisely which of these pieces triggered the problem.
After reimplementing unit tests a couple of different ways and still getting the same results (tests passing locally and failing on the build server), we eventually discovered that the problem was a bug in Visual Studio 2008 SP1. Once we installed the hotfix, our unit tests passed on the build server without us having to change them. This hasn’t been the last issue we’ve had with our “TFS-lite” build server.
Build timeouts have proven to be the latest hassle. Instead of the tests passing locally and failing on the build server, they actually passed in both places. But for whatever reason, the test task didn't really complete and the build timed out. Increasing the build timeout didn't address the issue either. Yesterday, thanks to the Microsoft Build Sidekick editor, we narrowed the problem down to the MSTest task in our build file. The task is the creation of Nati Dobkin, and it made writing the test build target easier (at least until we couldn't get it to work consistently). So far, I haven't found (or written) an alternative task, but I did find a blog post that pointed the way to our current solution.
The solution:
<!-- MSTest won't work if the tests weren't built in the Debug configuration -->
<Target Name="Test:MSTest" Condition=" '$(Configuration)' == 'Debug' ">
  <MakeDir Directories="$(TestResultsDir)" />
  <MSBuild.ExtensionPack.FileSystem.Folder TaskAction="RemoveContent" Path="$(TestResultsDir)" />
  <Exec Command="&quot;$(VS90COMNTOOLS)..\IDE\mstest.exe&quot; /testcontainer:$(TestDir)<test assembly directory>\bin\$(Configuration)\<test assembly>.dll /testcontainer:$(TestDir)<test assembly directory>\bin\$(Configuration)\<test assembly>.dll /testcontainer:$(TestDir)<test assembly directory>\bin\$(Configuration)\<test assembly>.dll /runconfig:localtestrun.testrunconfig" />
</Target>
TestDir and TestResultsDir are defined in a property group at the beginning of the MSBuild file. VS90COMNTOOLS is an environment variable created during the install of Visual Studio 2008. Configuration comes from the solution file. Actual test assembly directories and names have been replaced with <test assembly> and <test assembly directory>. The only drawback to the solution so far is that we’ll have to update our MSBuild file if we add a new test assembly.
The unexpected home of IsHexDigit
I was about to write a method that checked whether a character was a hexadecimal digit when it occurred to me that I should google for it. I was going to name it IsHexDigit, and googling for that revealed this link. I'm not sure why it lives in the System.Uri class, but it's less code for me to write.
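For comparison, the equivalent check is nearly a one-liner in most languages; here's a Python sketch:

```python
import string

def is_hex_digit(ch: str) -> bool:
    # Same check as System.Uri.IsHexDigit: one of 0-9, a-f, or A-F.
    return len(ch) == 1 and ch in string.hexdigits
```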
Implementing Mouse Hover in WPF
We’ve spent the past couple of weeks at work giving ourselves a crash course in Windows Presentation Foundation (WPF) and LINQ. I’m working on a code example that will switch the datatemplate in a list item when the mouse hovers over it. Unfortunately, WPF has no MouseHover event like Windows Forms does. The usual googling didn’t cough up a ready-made answer. Some hacking on one example did reveal a half-answer (not ideal, but at least a start).
First, I set the ToolTip property of the element I used to organize my data (in this case, a StackPanel). Next, I added a ToolTipOpening event for the StackPanel. Here’s the code for StackPanel_ToolTipOpening:
private void StackPanel_ToolTipOpening(object sender, ToolTipEventArgs e)
{
e.Handled = true;
ContentPresenter presenter = (ContentPresenter)(((Border)((StackPanel)e.Source).Parent).TemplatedParent);
presenter.ContentTemplate = this.FindResource("Template2") as DataTemplate;
}
The result: instead of a tooltip displaying when you hover over a listbox row, the standard datatemplate is replaced with an expanded one that displays more information. This approach definitely has flaws. Beyond being a hack, there’s no way to set how long you can hover before the templates switch.
Switching from an expanded datatemplate back to a standard one involved a bit less work. I added a MouseLeave event to the expanded template. Here’s the code for the event:
private void StackPanel_MouseLeave(object sender, MouseEventArgs e)
{
ContentPresenter presenter = (ContentPresenter)(((Border)((StackPanel)e.Source).Parent).TemplatedParent);
presenter.ContentTemplate = this.FindResource("ScriptLine") as DataTemplate;
}
So once the mouse moves out of the listbox item with the expanded template, it switches back to the standard template. Not an ideal solution, but it works.
This link started me down the path to finding a solution (for reference).
Free Test Data
If you find yourself in need of test data (and if you write software for a living, you’ve got that need pretty often), pay a visit to generatedata.com. You have your choice of five different result formats: HTML, Excel, CSV, XML, and SQL. If you’re using it for free, you’re limited to 200 rows of test data. Donate $20 or more and the limit increases to 5000 rows. If you don’t mind fiddling with PHP and MySQL, you can download the generator for free and set it up on your own server.
Comparing XML Strings in Unit Tests
Comparing two XML strings is painful. So of course, my current project required me to come up with a way to do it in .NET. I could only use version 2.0 of the framework, and I didn't want to add more dependencies to a solution that already has plenty (which ruled out XML Diff and Patch). So far, I've come up with the following bit of code:
The validationXml variable contains a string representation of the XML being validated against, which also means I only have to create one instance of XmlDocument. After creating an XPathNavigator on the XmlDocument being compared, an XPathExpression for the subset of the XmlDocument being validated, and an XPathNodeIterator, the comparison can be run.
The “params” keyword makes the last argument optional, so it can contain zero or more names of XML elements to ignore when deciding whether or not to call an Assert. I’m still figuring out how to optimize this, but I think it’s a good start.
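The general idea, independent of .NET, can be sketched in Python (illustrative names only; this is not the XPathNavigator-based implementation described above):

```python
import xml.etree.ElementTree as ET

def xml_equivalent(expected: str, actual: str, ignore: tuple = ()) -> bool:
    """Compare two XML strings structurally, skipping element names in `ignore`."""
    def normalize(elem):
        # Recursively reduce an element to a comparable tuple, dropping ignored tags.
        kids = [normalize(child) for child in elem if child.tag not in ignore]
        return (elem.tag, (elem.text or "").strip(), sorted(elem.attrib.items()), kids)
    return normalize(ET.fromstring(expected)) == normalize(ET.fromstring(actual))
```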
Converting File URIs to Paths
I spent most of this morning looking for a replacement for Application.ExecutablePath, because certain unit tests that depended on this code failed when I used any test runner other than the NUnit GUI. When using test runners like ReSharper's and TestDriven.NET, Application.ExecutablePath returned the executable of the test runner instead of the DLL being tested.
Assembly.GetExecutingAssembly().CodeBase returned a file URI with the DLL I wanted, but subsequent code which used the value threw an exception because it didn’t accept file URIs. This made it necessary to convert the file URI into a regular path. I haven’t found a .NET Framework method that does this yet, but the following code seems to do the trick:
private static string ConvertUriToPath(string fileName)
{
fileName = fileName.Replace("file:///", "");
fileName = fileName.Replace("/", "\\");
return fileName;
}
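The same conversion can be sketched in Python with the standard library (illustrative; the separator in the result is platform-dependent):

```python
from urllib.parse import urlparse
from urllib.request import url2pathname

def uri_to_path(uri: str) -> str:
    # Turn a file:/// URI into a local filesystem path, decoding any
    # percent-escaped characters along the way.
    return url2pathname(urlparse(uri).path)
```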
My Two Cents on Reinventing the Wheel
Yesterday, I came across a spirited defense of reinventing the wheel in a recent post from Jeff Atwood. Dare Obasanjo stands firmly in the “roll your own as last resort” camp. In this particular case, Atwood asserts the following:
[D]eeply understanding HTML sanitization is a critical part of my business.

I'll take Atwood at his word on what's critical to his business (and what isn't), but it seems that there's a middle ground between his position and Obasanjo's. Particularly when there's an open source solution available (SgmlReader in this case, since it's written in C#), adopting and improving it has these benefits:
- Improved understanding of HTML sanitization for the adopter.
- Strengthening of the existing community.
My own experience with reinventing the wheel (in software development terms) has rarely, if ever, been positive. Therefore, I have a lot of sympathy for Obasanjo’s perspective. Because I’ve inherited a lot of software from predecessors at various employers, I’ve seen a lot of less-than-ideal (to put it kindly) custom implementations of validation, encryption, search and logging functionality.
There are probably plenty of reasons that development teams reinvent the wheel in these areas, but one highly likely (and unfortunate) reason seems to be insufficient awareness of the wide variety of high-quality open source solutions available. I don't know whether this is actually more true in internal IT shops than in other environments, but it seems that way. Encryption and logging in particular are two areas where custom code seems like a bad idea for virtually everyone (except those actually in the encryption and logging library businesses). With libraries like log4j, log4net, the Enterprise Library, and Bouncy Castle available, developers can spend their time focusing on what's really important to their application. Code for authentication and authorization seems like one of those areas as well. With so many existing solutions to that problem (like OpenID on the public web and Active Directory in the enterprise), time spent hand-rolling login/password code is time not spent in areas where more innovation is possible (and needed).
When I asked the question of "what should always be third-party" on Stack Overflow, I got some interesting answers. Most agreed that encryption should be third-party except in rare cases, but there was surprisingly little consensus beyond that. Beyond the scarce-resources argument against custom logging (or other areas with widely available open source alternatives), there's a diminishing-returns argument as well. I've only used log4net and the logging in the Enterprise Library, but they're really good frameworks. Even if I had the resources to implement custom logging well, the odds that the result would be a significant improvement over the existing third-party options are slim to none. I'd like to see the quality argument made more often in buy vs. build decisions.
Stack Overflow is Live
Stack Overflow is a great new programmer Q & A site from Jeff Atwood and Joel Spolsky. I (and about 500 other developers) got a 5-week headstart on using it as part of the private beta test. As good as googling for answers to development problems has been, Stack Overflow is a big improvement. I’ve already gotten answers to questions that I was able to use in my own work.
If you write code for a living, I strongly encourage you to check out the site. They support OpenID, so you can use your existing Yahoo! or Blogger (or other OpenID-compliant) identity to register with the site. You can use it anonymously as well.
Now I'm Blogging for Work Too
In addition to the entries I post here, I’ve started blogging for my employer, along with some of my colleagues. If you’re interested in blog posts more specific to agile software development, check out all the posts here. My first post there begins a discussion of what enterprise applications can learn from games.
Granting full permissions to all tables and views in a database
One of my assignments is to write a script that will grant CRUD (create, read, update, delete) permissions to a database role. SQL Server Management Studio does a nice job of generating scripts for adding logins, roles, and adding users to roles, but isn't terribly clever about granting permissions across types of database objects. Some of the difficulty has to do with not having upgraded to SQL Server 2005 yet. Thanks to some helpful people at Stack Overflow and a page I found through a Google search, I was able to put together a script that handles the permission-granting part a bit better.
Step 1 was to develop a query that generated all the commands for granting permissions. Here's the query I got from Stack Overflow that retrieved all the user tables and views:
SELECT * FROM information_schema.tables WHERE OBJECTPROPERTY(OBJECT_ID(table_name), 'IsMSShipped') = 0
This query is especially useful because it filters out system tables and views that can appear if you query the sysobjects table.
Using a cursor to apply permissions to all the tables was something one of my colleagues first suggested. I only found this implementation today, and adapted it for my purposes. The change I made to the code in that implementation is in the select statement. I populated the @tables variable this way:
SELECT 'GRANT SELECT, REFERENCES, INSERT, UPDATE, DELETE ON ' + TABLE_NAME + ' TO ' + @role FROM information_schema.tables WHERE OBJECTPROPERTY(OBJECT_ID(table_name), 'IsMSShipped') = 0
@role is declared earlier in my script as varchar(50).
I still need to grant execute permissions on the stored procedure. I’ll need a different select query to accomplish that.
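The statement-generation pattern is the same regardless of object type or permission list; a Python sketch of that core step (names here are illustrative, not from my script):

```python
def grant_statements(object_names, role, permissions):
    # Build one GRANT per object, matching what the cursor-based script executes.
    perms = ", ".join(permissions)
    return ["GRANT {0} ON {1} TO {2}".format(perms, name, role) for name in object_names]
```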
Reflector Update
When I originally posted about the purchase, Red Gate hadn’t added a product page to their site yet. Today’s blog post from Richard Hundhausen includes it. The product page also links to the free plug-ins available for Reflector.