Reducing Duplication with Interfaces, Generics and Abstracts
The parts of our application (a long-term service and support system for the state of Maryland) that follow the DRY principle best tend to start with a combination of generic interfaces inherited by an abstract class that implements common functionality. The end result: specific implementations that consist solely of a constructor. I was able to accomplish this as well in one of my more recent domain implementations. I’ve created a sample (using fantasy football as a domain) to demonstrate the ideas in a way that may be applied to future designs.
Let’s take the idea of a team roster. A roster consists of players with a wide variety of roles that can be grouped this way:
- QBs
- offensive linemen
- skill position players
- defensive linemen
- linebackers
- defensive backs
- special teams
Regardless of grouping, each player shares a common set of attributes:
- first name
- last name
- team name
- position
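A minimal sketch of the pattern (the type and member names here are my own inventions for this example, not the actual application code): a generic interface describes what every position group can do, an abstract base class implements that behavior once, and the concrete classes consist solely of a constructor.

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

// Common attributes shared by every player, regardless of grouping.
public class Player
{
    public string FirstName { get; set; }
    public string LastName { get; set; }
    public string TeamName { get; set; }
    public string Position { get; set; }
}

// Generic interface: the contract any position group fulfills.
public interface IPositionGroup<T> where T : Player
{
    void Add(T player);
    IEnumerable<T> Players { get; }
}

// Abstract base class implements the common functionality exactly once.
public abstract class PositionGroupBase<T> : IPositionGroup<T> where T : Player
{
    private readonly List<T> _players = new List<T>();
    protected string[] ValidPositions { get; }

    protected PositionGroupBase(params string[] validPositions)
    {
        ValidPositions = validPositions;
    }

    public void Add(T player)
    {
        if (!ValidPositions.Contains(player.Position))
            throw new ArgumentException($"{player.Position} does not belong in this group.");
        _players.Add(player);
    }

    public IEnumerable<T> Players => _players;
}

// Specific implementations reduce to nothing but a constructor.
public class Linebackers : PositionGroupBase<Player>
{
    public Linebackers() : base("MLB", "OLB") { }
}

public class DefensiveBacks : PositionGroupBase<Player>
{
    public DefensiveBacks() : base("CB", "FS", "SS") { }
}
```

Adding a new grouping is then a one-line class: pick the base, pass the valid positions, and the shared behavior comes along for free.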
How can I become a world-class coder in under three years?
I came across this question on Quora today and decided I would answer it. There were enough up-votes by people following the question that I’m re-posting my answer below:
I'm not sure what the term "world-class coder" means to you. But I would actively discourage the notion that there is some point you can reach in the practice of software development (whether it's 3 years or 20 years) where you can look at yourself and say "Achievement unlocked! I am a world-class coder at last." What may give you more satisfaction over time than the question of "where do I rank" in some mythical list of the best coders on Earth is "Am I a better developer now than I was last week? Last month? Last year?" The things that previous commenters have suggested are great ideas for continuous improvement and refinement of your skills in programming. Do as many of those things as you can. Beyond those, I’d suggest the following:
- Be willing to learn from anyone. I've made my living writing software since 1996 and I regularly learn new things about my craft from people a decade or more younger than me.
- Keep track of what you learn--and share it. Whether it's through blogging, Stack Overflow contributions, or something else--write about it. You may not encounter the exact problems you've solved in the future, but they will often be close enough that what you've captured will help you solve them much faster than you would have otherwise. The ability to explain what you know to others is a very valuable and rare one. The process of preparing to give a presentation to others on a topic has often been a good forcing function for me to learn that topic to the level where I can explain it well.
- Learn about subjects beyond programming. The importance of the ability to understand new domains well enough and deeply enough to design and implement useful software for them cannot be overstated. I've delivered software solutions for news organizations, healthcare companies, marketing companies and defense/intelligence contractors in my career so far. Making myself familiar with the sort of terminology they use and the way such companies operate (above and beyond a specific project) definitely results in a better end product. One or more such topics can end up being great fodder for pet projects (which are great vehicles for learning things you aren't already learning in a job).
Binding Redirects, StructureMap and Dependency Version Upgrades
Dealing with the fallout of failing unit tests after a code merge is one of the most frustrating tasks in software development. And as one of a (very) small number of developers on our team who believe in unit testing, it fell to me to determine the cause of multiple instances of StructureMap exception code 207.
As it turned out, the culprit was a tactic I’ve used in the past to work with code that only works with a specific version of an assembly: the binding redirect. When the same person is in charge of upgrading dependencies, this tends not to be an issue, because if they’ve used binding redirects, they know it’s necessary to update them when dependencies are upgraded. In this case, the dependencies were upgraded and the redirects were not. As a result, StructureMap tried to find a specific version of an assembly that was no longer available and threw exception code 207 when it failed.
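For reference, a binding redirect lives in the application's config file and looks something like this (assembly name, token, and version numbers below are invented for illustration). The `newVersion` has to be kept in step with the assembly actually deployed:

```xml
<configuration>
  <runtime>
    <assemblyBinding xmlns="urn:schemas-microsoft-com:asm.v1">
      <dependentAssembly>
        <assemblyIdentity name="SomeDependency" publicKeyToken="abcdef1234567890" culture="neutral" />
        <!-- If SomeDependency is later upgraded past 2.0.0.0 without updating newVersion,
             consumers like StructureMap will look for an assembly that no longer exists. -->
        <bindingRedirect oldVersion="0.0.0.0-2.0.0.0" newVersion="2.0.0.0" />
      </dependentAssembly>
    </assemblyBinding>
  </runtime>
</configuration>
```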
Replicating Folder Structures in New Environments with MSBuild
I recently received the task of modifying an existing MSBuild script to copy configuration files from one location to another while preserving all but the top levels of their original folder structure. Completing this task required a refresher in MSBuild well-known metadata and task batching (among other things), so I’m recounting my process here for future reference.
The config files that needed copying were already collected into an item via a CreateItem task. Since we’re using MSBuild 4.0, though, I replaced it with the simpler ItemGroup. CreateItem has been deprecated for a while, but can still be used. There is a bit of debate over the precise differences between CreateItem and ItemGroup, but for me the bottom line is the same (or superior) functionality with less XML.
Creating a new folder on the fly is easy enough with the MakeDir task. There’s no need to manually check whether the directory you’re trying to create already exists. The task just works.
The trickiest part of this task was figuring out what combination of well-known metadata needed to go in the DestinationFiles attribute of the Copy task to achieve the desired result. The answer ended up looking like this:
<Copy SourceFiles="@(ConfigFiles)" DestinationFiles="$(OutDir)_Config\$(Environment)\%(ConfigFiles.RecursiveDir)%(ConfigFiles.Filename)%(ConfigFiles.Extension)" />
The key bit of metadata is RecursiveDir. Since the ItemGroup that builds the file collection uses the ** wildcard, which covered all of the original folder structure I needed, putting %(RecursiveDir) after the new “root” destination and before the file names gave me the result I wanted. Another reason well-known metadata was vital to the task is that all the files have the same name (Web.config), so the easiest way to differentiate them for the purpose of copying was their location in the folder structure.
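Putting the pieces together, the whole step looks roughly like this (the item name matches the Copy task above, but `$(SourceRoot)` and the target name are stand-ins for the real script's values):

```xml
<Target Name="CopyConfigFiles">
  <!-- ItemGroup replaces the deprecated CreateItem task; the ** wildcard
       is what makes RecursiveDir metadata available later. -->
  <ItemGroup>
    <ConfigFiles Include="$(SourceRoot)\**\Web.config" />
  </ItemGroup>

  <!-- MakeDir is a no-op if the directory already exists. -->
  <MakeDir Directories="$(OutDir)_Config\$(Environment)" />

  <!-- RecursiveDir is the portion of each file's path matched by **,
       which preserves the original folder structure under the new root. -->
  <Copy SourceFiles="@(ConfigFiles)"
        DestinationFiles="$(OutDir)_Config\$(Environment)\%(ConfigFiles.RecursiveDir)%(ConfigFiles.Filename)%(ConfigFiles.Extension)" />
</Target>
```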
In addition to the links above, this book by Sayed Ibrahim Hashimi was very helpful. In a previous job where configuration management was a much larger part of my role, I referred to it (and sedodream.com) on a daily basis.
Fixing MVC Sitemap Errors
When attempting to manually test a .NET MVC application, I got the following exception from Visual Studio:
Looking at the inner exception revealed this message:
"An item with the same key has already been added." The sitemap file for our application is pretty long (over 1,300 lines of XML), but a co-worker pointed me to the potential culprit right away. There was a sitemap node near the end of the file that had empty strings for its controller and action attributes. As far as I can tell, this generates the default URL for the site's home page. Since that URL is already registered, the exception is thrown. Removing the sitemap node resolved our issue. A couple of threads that I checked on Stack Overflow (here and here) provide other possible causes for the error.
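The offending node looked something like this (titles and the surrounding nodes are made up for illustration; only the empty controller/action attributes are the point). A node with empty strings for both attributes resolves to the same default URL as the home page node, producing the duplicate key:

```xml
<!-- The root node already claims the site's default URL... -->
<mvcSiteMapNode title="Home" controller="Home" action="Index">
  <!-- ...many other nodes... -->
  <!-- ...so this node generates the same key and throws on load. -->
  <mvcSiteMapNode title="Broken Node" controller="" action="" />
</mvcSiteMapNode>
```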
Identifying All Bad Mappings with AutoMapper
One of the long-running annoyances we’ve had with our test of AutoMapper configuration validity on my current project is that a test failure only revealed the first mapping that was wrong. I haven’t figured out why this is the case, but I’ve come up with a work-around that displays all the necessary information.
Because the exception thrown if one or more incorrect mappings is found is AutoMapperConfigurationException, my revised test catches that exception in order to print the source type, destination type, and the list of unmapped property names. Re-throwing the exception at the end ensures that the test still reports a failure. The XUnit test which demonstrates this is available as a GitHub gist. If you’re using NUnit or MSTest in your application, minor revisions to this test will give you the same results.
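The shape of the work-around looks something like this. This is a sketch based on the older, static AutoMapper API in use at the time; the `Errors`, `TypeMap`, and `UnmappedPropertyNames` members reflect that era's `AutoMapperConfigurationException` and may differ in newer versions:

```csharp
using System;
using AutoMapper;
using Xunit;

public class MappingConfigurationTests
{
    [Fact]
    public void AllMappingsAreValid()
    {
        try
        {
            Mapper.AssertConfigurationIsValid();
        }
        catch (AutoMapperConfigurationException ex)
        {
            // Report every bad mapping, not just the first one.
            foreach (var error in ex.Errors)
            {
                Console.WriteLine("Source: {0}", error.TypeMap.SourceType);
                Console.WriteLine("Destination: {0}", error.TypeMap.DestinationType);
                Console.WriteLine("Unmapped: {0}",
                    string.Join(", ", error.UnmappedPropertyNames));
            }
            // Re-throw so the test still reports a failure.
            throw;
        }
    }
}
```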
Visual Studio & TFS Behavior Tweaks
One of a few long-running annoyances I’ve had with every version of TFS is one of the default behaviors on check-in. The default is to resolve an open item on check-in, which is virtually never the case the first (or second, or third, etc) time you check in code to resolve a bug or implement new functionality. Fortunately, Edsquared has the solution.
After making this long-overdue change in my development environment, I exported the keys for VS2010 and VS2012 as registration entry files below:
Feel free to use them in your environment.
Freedom From Default Color Themes in Visual Studio 2012
I finally joined the ranks of those who’ve installed Visual Studio 2012 this week. The default Light color scheme is way too bright. The Dark color scheme is better, but the grays aren’t differentiated enough (just like the Microsoft Blend UI). Thankfully, some wonderful soul compiled this blog post, which details the changes necessary to save your eyes from the horrible default themes.
Following steps 1 and 2 will be enough, but you can go even further if you want the Visual Studio 2010 icons back in addition to the color scheme.
The Perils of Renaming in TFS
Apparently, renaming an assembly is a bad idea when TFS is your version control system.
Earlier this week, one of my co-workers renamed an assembly to consolidate some functionality in our application, and even though TFS said the changes were checked in, they weren’t.
I got the latest code the morning after the change, and got nothing but build failures. We’re using the latest version of TFS and it’s very frustrating that something like this still doesn’t work properly.
Ultimately, the solution was found at the bottom of this thread.
The only way I’ve found to avoid this kind of hassle is to create a new assembly, copy your code from the old assembly to the new one, change any references to the old assembly to use the new assembly, then delete the old assembly once you’ve verified the new one is working.
Please Learn to Code (Continued)
A couple days ago, I wrote a post on why Coding Horror is wrong to suggest people shouldn’t learn to code.
Here’s a much better post on the same subject by Jon Galloway (hat tip Scott Hanselman, and his e-mail Newsletter of Wonderful Things).
Please Learn to Code
I came across this post from Jeff Atwood in my Twitter feed this morning. It even sparked a conversation (as much of one as you can have 140 characters at a time) between me and my old co-worker Jon who agreed with Jeff Atwood in far blunter terms: “we need to cleanse the dev pool, not expand and muddy the water”.
While I understand Jon’s sentiment, “cleansing” vs. “expanding” just isn’t going to happen. Computer science as an academic discipline didn’t even exist until the 1950s, so it’s a very long way from having the sort of regulations and licensure of a field like engineering (a term that dates back to the 14th century). Combine that with the decreasing number of computer science graduates our colleges and universities produce each year (much less the elimination of a computer science department), and it’s no surprise that people without formal education in computer science are getting paid to develop software.
While it does sound crazy that Mayor Bloomberg made learning to code his 2012 new year’s resolution, I’m glad someone as high-profile as the mayor of New York is talking about programming. When I was deciding what to study in college (way back in 1992), computer science as a discipline didn’t have a very high profile. While I knew programming was how video games and other software were made, I had to find out about computer science from Bureau of Labor Statistics reports.
Jeff’s “please don’t learn to code” is counterproductive: the exact opposite of what we should be saying. Given a choice between having more people going into investment banking and more people going into software development, I suspect a large majority of us would favor the latter.
I also don’t believe that the objective of learning to code has to be making a living as a software developer in order to be useful. The folks at Software Carpentry are teaching programming to scientists to help them in their research. People who test software should know enough about programming to at least automate the repetitive tasks. If you use a computer with any regularity at all, even a little bit of programming knowledge will enable you to extend the capabilities of the software you use.
We need only look at some of the laws that exist in this country to see the results of a lack of understanding of programming by our judges and legislators. I think that lack of understanding led to software patents (and a ton of time wasted in court instead of spent innovating). The Stop Online Piracy Act and the Protect IP Act are other examples of dangerous laws proposed by legislators that don’t have even the most basic understanding of programming.
As someone who writes software for a living, I prefer customers who understand at least a bit about programming to those who don’t, because that makes it easier to talk about requirements (and get them right). They tend to understand the capabilities of off-the-shelf software a bit better and understand the tradeoffs between it and a custom system. In my career, there have been any number of times where an understanding of programming has helped me find an existing framework or solution that met most of a customer’s requirements, so I and my team were able to focus our work just on what was missing.
From Web Forms to MVC
In the weeks since my last post, I’ve been thrown into the deep end of the pool learning ASP.NET MVC 3 and a number of other associated technologies for a healthcare information management application currently scheduled to deploy this July. Having developed web applications using Web Forms since 2003, I’ve found it to be a pretty significant mental shift in a number of ways.
No Controls
There are none of the controls I’ve become accustomed to using over the years. So in addition to learning the ins-and-outs of MVC 3, I’ve been learning some jQuery as well.
No ViewState
Because there’s no ViewState in MVC, any information you need in more than one view should be available in the URL’s query string or the viewmodel, or be retrievable via some mechanism in your view’s controller. In the application I’m working on, we use Agatha.
More “Pages”
Each CRUD operation gets its own view (and its own viewmodel, depending on the circumstance). This actively encourages separation of concerns in a way that Web Forms definitely does not.
A Controller is a Lot Like Code-Behind
I’ve been reading Dino Esposito’s book on MVC 3, and he suggests thinking of controllers this way fairly early in the book. I’ve found that advice helpful in a couple of ways:
- This makes it quicker to understand where to put some of the code that does the key work of the application.
- It's a warning that you can put far too much logic in your controllers the same way it was possible to put far too much into your code-behind.
More to Come
This barely scratches the surface of my experience with MVC so far. None of the views I’ve implemented has been complex enough yet to benefit from the use of Knockout JS, but future assignments will almost certainly change this. We’re also using AutoMapper to ease data transfer between our domain objects and DTOs. In addition to using StructureMap for dependency injection, we’re using PostSharp to deal with some cross-cutting concerns. Finally, we’re using RavenDB for persistence, so doing things the object database way instead of using SQL Server has required some fundamental changes as well.
Inserting stored procedure results into a table
Working with one of my colleagues earlier today, we found that we needed a way to store the results of a stored procedure execution in a table. He found this helpful blog post that shows exactly how.
One thing we found that the original blog post didn’t mention is that this approach works with table variables as well. A revised example that uses a table variable is available as a gist on GitHub.
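The pattern looks roughly like this (the procedure name comes from the Northwind sample database; the column names are my guesses for illustration). The table variable's columns must match the shape of the stored procedure's result set:

```sql
-- Declare a table variable whose columns match the procedure's result set.
DECLARE @Results TABLE
(
    ProductName NVARCHAR(40),
    UnitPrice   MONEY
);

-- INSERT ... EXEC works with table variables as well as temp tables.
INSERT INTO @Results (ProductName, UnitPrice)
EXEC dbo.[Ten Most Expensive Products];

-- The results are now queryable like any other table.
SELECT * FROM @Results;
```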
Saving Changes is Not Permitted (SQL Server 2008 R2)
We just upgraded our development VMs at work, and I got bitten by one of the more annoying default settings in SQL Server Management Studio again. I imported some data for use in some queries and needed to change one of the column types. But when I tried to save the change, I got the dreaded “Saving changes is not permitted.” error.
Fortunately, this blog post directed me to the setting I needed to change in order for SSMS to do what I wanted.
A Brief Introduction to LINQPad
I presented a brief talk on LINQPad at RockNUG a couple of weeks ago. This post will elaborate on that presentation a bit, since the 30 minutes I had wasn’t nearly enough to do justice to a tool I only half-jokingly called the Swiss Army knife of .NET development.
In addition to enabling developers to write and execute LINQ queries, LINQPad can be used to execute SQL queries as well as compose and run code snippets in C#, VB.NET, and F#. LINQ can query a wide variety of collections and data, including SQL databases and XML. The ability to query XML with LINQ becomes quite powerful when the XML comes from WCF Data Services (an implementation of the OData protocol).
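As a quick illustration of querying XML with LINQ (the XML fragment below is made up for the example), the same query operators you'd use against a database or an in-memory collection work over an XDocument:

```csharp
using System;
using System.Linq;
using System.Xml.Linq;

class Program
{
    static void Main()
    {
        // A made-up XML fragment standing in for a real feed or service response.
        var doc = XDocument.Parse(@"
            <products>
              <product name='Chai' price='18.00' />
              <product name='Chang' price='19.00' />
              <product name='Aniseed Syrup' price='10.00' />
            </products>");

        // The familiar where/orderby/select operators apply directly to the XML.
        var expensive = from p in doc.Descendants("product")
                        let price = (decimal)p.Attribute("price")
                        where price > 15m
                        orderby price descending
                        select (string)p.Attribute("name");

        foreach (var name in expensive)
            Console.WriteLine(name); // Chang, then Chai
    }
}
```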
During my presentation, I queried a local version of the Northwind database, as well as Netflix’s OData endpoint. Stack Overflow publishes its data via an OData endpoint as well. Additional producers of OData services can be found here, or in the Windows Azure Marketplace.
One of the nice features of LINQPad is the number of export options it provides for your query results. The results of any SQL or LINQ query written in LINQPad can be exported to Excel, Word, or HTML. The Excel and Word export capabilities give you the option of preserving the formatting LINQPad provides, or leaving it out. Once you’ve queried a database with LINQ, the results display allows you to toggle between the equivalent fluent LINQ, SQL and MSIL syntax. I demonstrated this feature by executing a SQL query against the Northwind sample database, then cutting and pasting the equivalent syntax to new query windows and running them to show that the query results were the same.
The LINQPad website pitches the tool as a replacement for SQL Server Management Studio. To test this proposition, I demonstrated LINQPad’s ability to execute stored procedures. I used the TenMostExpensiveProducts stored procedure in the Northwind database, and this script to show one way to use LINQPad to run stored procedures that take parameters.
LINQPad’s capabilities as a code snippet runner are further supported by its ability to reference custom assemblies and namespaces. So instead of dealing with all the overhead of Visual Studio just to write a console application, you could simply write them in LINQPad and reference any custom assemblies you needed.
The latest version of LINQPad also has an extensibility model, if you wanted to query databases other than SQL Server (or different querying sources).
One feature I wished I’d had time to delve into further was LINQPad’s ability to query Entity Framework models defined in Visual Studio. There’s a brief description of that capability here.
All of the query files from my presentation are available on Github.
Ninja UI
Since yesterday’s post about my goals for next year, I heard from my friend Faisal about a jQuery plugin he’s been working on called Ninja UI. It’s on GitHub, so I’ll definitely be checking it out as part of learning jQuery next year. Going beyond using open source tools to being a committer on one would be a big step forward for me.
Another Year Gone
It’s annual review time again, which means this year has gone by even more quickly than usual. Filling out my self-assessment was a good reminder of all the work I had a hand in completing. I’m still deciding on goals for 2012, and I’m posting all of them here so I can look back on them over the course of next year and track my progress.
- Learn jQuery. I got a bit of exposure to it this year through a couple of projects that I worked on, and a .NET user group presentation or two, but haven't done the sort of deep dive that would help me improve the look-and-feel of the web applications I build and maintain.
- Learn a functional programming language. I've been thinking about this more recently since some of our work involves the implementation of valuation models in code. I also came across this article in the November Communications of the ACM advocating OCaml. Since I work in a Microsoft shop, picking up something like F# might have a slightly better chance of making it into production code than OCaml or Haskell. Part of my objective in learning a functional programming language is to help me recognize and make better use of functional techniques in a language like C#, which has added more and more support for the functional programming style over the years.
- Give a few technical talks/presentations. This year, I presented on NuGet at my job, and on Reflector at RockNUG. Having to present on a tool or technology to a group has always been a great incentive to do some deep learning of a subject. It's also a chance to exercise some speaking skills (which developers need a lot more than they might think in order to be successful) and to handle a Q & A session. I haven't developed any new presentations yet, but some prospective topics include LINQPad and elmah.
- Take more online training. We have access to Pluralsight .NET training through work. I watched quite a few of their videos over the course of the year. 2012 shouldn't be any different in that respect. I recently came across free webcasts on a variety of topics from DevelopMentor. Since they're downloadable as well as streamable, I'll definitely use my commute to watch some of them.
- Write a compiler. It's been a while since I've cracked open "the dragon book", so I'm probably overdue to exercise my brain in that way. I found that suggestion (and a number of other very useful ones) here.
- Practice. I'd heard of the "code kata" idea before, but hadn't really explored it. Dave Thomas of Pragmatic Programmers has nearly a couple dozen here.
LINQ Aggregate for Comma-Separated Lists of Values
A couple of days ago, while pairing with my colleague Alexei on bug fixes to a new feature, we came across a bit of code that attempted to take an integer array and construct a string with a comma-delimited list of the numbers from it. The existing code didn’t quite work, so we wrote a basic for-each loop and used ReSharper to see what LINQ alternative it might construct. Here’s what ReSharper came up with:
int[] numbers = new[] {1, 5, 8, 26, 35, 42};
var result = numbers.Aggregate("", (current, item) => current + item.ToString() + ",");
Before ReSharper served this up, I wasn’t familiar with the Aggregate operator. When I checked out 101 LINQ Samples for it, the vast majority of the examples used numbers.
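For reference, Aggregate folds the sequence left to right, starting from the seed value. Worth noting: the ReSharper version above leaves a trailing comma, and `string.Join` is the more direct tool for this particular job:

```csharp
using System;
using System.Linq;

class Program
{
    static void Main()
    {
        int[] numbers = { 1, 5, 8, 26, 35, 42 };

        // Aggregate: seed with "", then append each number and a comma in turn.
        var aggregated = numbers.Aggregate("", (current, item) => current + item + ",");
        Console.WriteLine(aggregated); // 1,5,8,26,35,42, (note the trailing comma)

        // string.Join produces the delimited list without the trailing comma.
        var joined = string.Join(",", numbers);
        Console.WriteLine(joined); // 1,5,8,26,35,42
    }
}
```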
AppleScript + RSVP Emails = Wedding Guests Address Book Group
I’ve been using Macs as my primary home computers for about seven years now, but hadn’t developed an interest in using AppleScript until very recently. I’m getting married in about six weeks, and my fiancee and I set up an e-mail address where everyone we invited to the wedding and reception could RSVP. In retrospect, figuring out an Apple Mail rule (or rule + AppleScript) ahead of time would probably have been a better idea, but I didn’t think of it until after I had dozens of RSVPs and no convenient way to respond to the guests en masse with additional wedding information, hotel arrangements, parking, etc. So I thought I’d figure out just enough AppleScript to go through our RSVP e-mail box and build an address book group out of the e-mails we received.
With an assist from someone on stackoverflow.com, I came up with a script that did the job. I’ve made it available as a gist on GitHub.
There are probably a ton of ways to improve this script, but for what I needed, this does the job.
Introducing NuGet
Today at work, I gave a presentation on NuGet. I’ve suggested they consider it as an option to ease management of the open source dependencies of our major application, so it was natural that I present the pros and cons.
NuGet is a system for managing .NET packages. It’s not unlike RubyGems or CPAN (for Ruby and Perl, respectively), and while it has some work to do to be on par with those alternatives, it’s off to a very good start. Today’s presentation focused on just a few of the capabilities of NuGet, and I’ll recap them in this post.
The primary use case for NuGet is the management of open source dependencies in a .NET application. There are a number of key open source libraries that .NET developers like me have been using in projects for years. Upgrades were always a pain because of having to manage their dependencies manually. Many of these tools (NHibernate, NUnit, log4net, and more) are already available as NuGet packages at the NuGet Gallery. I used NHibernate and NUnit in my examples today. Another tool that proved quite useful in my demo was the NuGet Package Explorer. Some of its features include:
- Opening and downloading packages from remote feeds
- Opening local packages to view and change their metadata and contents
- Creating new packages (instead of fiddling with XML manually)
I wrapped up my presentation with two different examples of building NuGet packages without a manually-created .nuspec file as a starting point. The documentation provides examples of how to generate a .nuspec file from an existing DLL, and how to generate a NuGet package from a .csproj or .vbproj file. I published the rules engine (which I found in an answer to a stackoverflow.com question), and a test assembly I created to the NuGet Gallery earlier this evening. If you want to check them out, just search for Arpc.RulesEngine in the NuGet Gallery. I still need to publish the rules engine source as a package and/or push it to a symbol server. Once the enterprise story for NuGet becomes a bit clearer, I hope I have an opportunity to present on that as well.
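For reference, a minimal .nuspec file looks something like this (the package id comes from the post above, but the version, author, description, and file paths here are illustrative, not the actual published package's contents):

```xml
<?xml version="1.0"?>
<package>
  <metadata>
    <id>Arpc.RulesEngine</id>
    <version>1.0.0</version>
    <authors>Example Author</authors>
    <description>A simple rules engine packaged for NuGet.</description>
  </metadata>
  <!-- The files element maps build output into the package's lib folder. -->
  <files>
    <file src="bin\Release\Arpc.RulesEngine.dll" target="lib\net40" />
  </files>
</package>
```

Generating this file from an existing DLL or project (as the documentation describes) saves you from writing the XML by hand.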