Going beyond files with ItemGroup

If you Google for information on the ItemGroup element of MSBuild, most of the top search results will discuss its use in dealing with files.  The ItemGroup element is far more flexible than this, as I figured out today when making changes to an MSBuild script for creating databases.

My goal was to simplify the targets I was using to create local groups and add users to them.  I started with an example on pages 51-52 of Inside the Microsoft Build Engine: Using MSBuild and Team Foundation Build, where the author uses an ItemGroup to create a list of 4 servers.  Each server has custom metadata associated with it.  In his PrintInfo target, he displays the data, overrides an element with new data, and even removes an element.  Because MSBuild supports batching, you can declare a task and attributes once and still execute it multiple times.  Here’s how I leveraged this capability:

  1. I created a target that stored the username of the logged-in user in a property.
  2. I created an ItemGroup.  The metadata for each group element was the name of the local group I wanted to create.
  3. I wrote the Exec commands to execute on each member of the ItemGroup.
The ItemGroup looks something like this:
<ItemGroup>
  <LocalGroup Include="Group1">
    <Name>localgroup1</Name>
  </LocalGroup>
  <LocalGroup Include="Group2">
    <Name>localgroup2</Name>
  </LocalGroup>
  <LocalGroup Include="Group3">
    <Name>localgroup3</Name>
  </LocalGroup>
</ItemGroup>
The Exec commands for deleting a local group if it exists, creating a local group, and adding the logged-in user to it, look like this:
<Exec Command="net localgroup %(LocalGroup.Name) /delete" IgnoreExitCode="true" />
<Exec Command="net localgroup %(LocalGroup.Name) /add" IgnoreExitCode="true" />
<Exec Command="net localgroup %(LocalGroup.Name) $(LoggedInUser) /add" />
The result is that these commands are executed for each member of the ItemGroup.  This implementation makes it a lot easier to add more local groups if necessary, or to make other changes to the target.
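
Putting the pieces together, a minimal sketch of the whole target might look like the following.  The target name and the use of the USERDOMAIN/USERNAME environment variables are illustrative assumptions on my part (the actual script stored the property in its own target):

<Target Name="CreateLocalGroups">
  <!-- Step 1: store the logged-in user in a property; USERDOMAIN and USERNAME are standard Windows environment variables -->
  <PropertyGroup>
    <LoggedInUser>$(USERDOMAIN)\$(USERNAME)</LoggedInUser>
  </PropertyGroup>
  <!-- Steps 2 and 3: thanks to batching, each Exec below runs once per LocalGroup item -->
  <Exec Command="net localgroup %(LocalGroup.Name) /delete" IgnoreExitCode="true" />
  <Exec Command="net localgroup %(LocalGroup.Name) /add" IgnoreExitCode="true" />
  <Exec Command="net localgroup %(LocalGroup.Name) $(LoggedInUser) /add" />
</Target>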

From enthusiast to user

My friend Sandro read this Slate piece yesterday and wrote this blog entry, partly about enthusiasts and users.  I think his concern that today’s computer science students seem to be more users than enthusiasts is very legitimate, since they’re some of the people we’re counting on for the next advances in the field of computing and innovative new products.  The similarity he sees between advances in automobiles and computing is an interesting one.  I agree with him up to a point about commoditization, but I see real benefits to certain things becoming commodities.  He touches only briefly on case mods in the PC space, but commodity hardware (RAM, hard drives, video cards, etc.) has made it a lot easier for the technically-inclined to build entire PCs themselves, instead of buying whatever Dell or HP is selling this week.  Commodity hardware and operating systems are what enable a product like TiVo to exist (along with less-capable imitators).  We have commodity hardware to thank for the Xbox 360, and commodity operating systems to thank for XBMC.  This doesn’t mean that a ton of people will avail themselves of the option to build their own PC, or their own home theater PC, just that the option is definitely out there for those who want to.

I suspect it has always been the case that the vast majority of people would rather use something cool than build it.  As much as those of us in the U.S. love cars, very few of us will be building our own Rally Fighter anytime soon.  I enjoy photography as a hobby, but I haven’t been in a darkroom to develop my own film in years (nor did I ever spend enough time in one to get really good at the process).  At least with computers, there came a point for me where fiddling around inside the guts of a computer to get something working again got to be too much of a hassle.  This could mean I’ve gotten lazy, but I really like it when things just work.  There’s definitely something to be said for having the ability to fix something or hack it to do something beyond its original purpose.  I’ve always admired people like that, and I think they’re responsible for a lot of the great advances we benefit from today.

I think human nature is such that we won’t run out of enthusiasts anytime soon.  As long as there are magazines like MAKE and sites like iFixit, enthusiasts will continue to do things that make users jealous.


An update on SCO

Though I wished them dead years ago, SCO still lives.  With any luck, this latest court ruling will finally finish them off.


Adventures in e-commerce

I’m working on an e-commerce site for the first time in about 10 years.  The site is Trés Spa, a skin care products company in northern California.  Unlike my previous forays into e-commerce, none of the technologies I’m using come from Microsoft.  I’m using the community edition of Magento.  So far, I’ve been able to update the configuration so that USPS shipping options show up in shipping and tax estimates.  I haven’t had to write any code yet, but we’ll see if that changes.  Despite running WordPress for a while, I’ve done very little with PHP.


More on migrating partially-trusted managed assemblies to .NET 4

Some additional searching on changes to code access security revealed a very helpful article on migrating partially-trusted assemblies.  What I posted yesterday about preserving the previous behavior is found a little over halfway through the article, in the Level1 and Level2 section.

One thing this new article makes clear is that SecurityRuleSet.Level1 should only be used as a temporary measure until code can be updated to support the new security model.


Upgrading .NET assemblies from .NET 3.5 to .NET 4.0

Code access security is one area that has changed quite significantly between .NET 3.5 and .NET 4.0.  An assembly that allowed partially-trusted callers under .NET 3.5 will throw exceptions when called after being upgraded to .NET 4.0.  To make such an assembly continue to function after the upgrade, its AssemblyInfo.cs needs to change from this:

[assembly: AllowPartiallyTrustedCallers]
to this:
[assembly: AllowPartiallyTrustedCallers]
[assembly: SecurityRules(SecurityRuleSet.Level1)]
Once this change has been made, the assembly will work under the same code access security rules that applied prior to .NET 4.0.

Why GDP Matters for Schoolkids

Planet Money, one of many podcasts I listen to in Beltway traffic, had a great episode recently attempting to explain why GDP is important.  The reporter contrasts the resources of a school in Kingston, Jamaica with those of a socioeconomically similar school in Barbados.  The difference in what a country can do with a per-capita GDP of approximately $15,000/year (Barbados) versus around $5,600/year (Jamaica) turns out to be quite staggering.  Hearing about teachers paying for school materials out of their own pockets sounded a lot like what I’ve heard and read in news stories and features about inner-city schools here in the U.S.  One part of the piece that I believe has broader application to how foreign direct investment (FDI) is used worldwide came when Jamaica’s minister of education, Andrew Holness, explained why the FDI Jamaica has received hasn’t produced the expected benefits for the country.  It boiled down to not having enough sufficiently-educated people to staff the projects being built, whether bauxite plants or anything else.

A paper by Peter Blair Henry (a Jamaican-born economist) goes into more detail on the comparison between Barbados and Jamaica.  There’s also a podcast of him from last year on the same subject.  I can’t vouch for these latter two links (yet), but the Planet Money episode is worth a listen if you’re at all interested in economics.


When 3rd-party dependencies attack

Lately, I’ve been making significant use of the ExecuteDDL task from the MSBuild Community Tasks project in one of my MSBuild scripts at work.  Today, someone on the development team got the following error when they ran the script:

"Could not load file or assembly 'Microsoft.SqlServer.ConnectionInfo, Version=9.0.242.0, Culture=neutral, PublicKeyToken=89845dcd8080cc91'
It turned out that the ExecuteDDL task has a dependency on a specific version of Microsoft.SqlServer.ConnectionInfo deployed by the installation of SQL Server 2005 Management Tools.  Without those tools on your machine, and without an automatic redirect in .NET to the latest version of the assembly, the error above results.  The fix for it is to add the following XML to the "assemblyBinding" tag in MSBuild.exe.config (in whichever .NET Framework version you're using):
<dependentAssembly>
  <assemblyIdentity name="Microsoft.SqlServer.ConnectionInfo" publicKeyToken="89845dcd8080cc91" culture="neutral" />
  <bindingRedirect oldVersion="0.0.0.0-9.9.9.9" newVersion="10.0.0.0" />
</dependentAssembly>
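
For anyone who hasn't edited MSBuild.exe.config before, here's roughly where that element lives.  The surrounding elements are the stock .NET configuration layout, not anything specific to this fix:

<configuration>
  <runtime>
    <assemblyBinding xmlns="urn:schemas-microsoft-com:asm.v1">
      <dependentAssembly>
        <assemblyIdentity name="Microsoft.SqlServer.ConnectionInfo" publicKeyToken="89845dcd8080cc91" culture="neutral" />
        <bindingRedirect oldVersion="0.0.0.0-9.9.9.9" newVersion="10.0.0.0" />
      </dependentAssembly>
    </assemblyBinding>
  </runtime>
</configuration>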
Thanks to Justin Burtch for finding and fixing this bug.  I hope the MSBuild task gets updated to handle this somehow.

Continuous Integration Enters the Cloud

I came across this blog post in Google Reader and thought I’d share it.  The idea of being able to outsource the care and feeding of a continuous integration system to someone else is a very interesting one.  Having implemented and maintained such systems (which I’ve blogged about in the past), I know it can be a lot of work (though using a product like TeamCity lightens the load considerably compared with CruiseControl.NET).  Stelligent isn’t the first company to come up with the idea of CI in the cloud, but they may be the first using all free/open source tools to implement it.

I’ve read Paul Duvall’s book on continuous integration and highly recommend it to anyone who works with CI systems on a regular basis.  If anyone can make a service like this successful, Mr. Duvall can.


Set-ExecutionPolicy RemoteSigned

When you first get started with PowerShell, don’t forget to run ‘Set-ExecutionPolicy RemoteSigned’ from the PowerShell prompt. If you try to run a script without doing that first, expect to see a message like the following:

File <path to file> cannot be loaded because execution of scripts is disabled on this system.  Please see "get-help about_signing" for more details.
The default execution policy for PowerShell is "Restricted" (commands only, not scripts).  The other execution policy options (in decreasing order of strictness) are:
  • AllSigned
  • RemoteSigned
  • Unrestricted
When I first tripped over this, the resource that helped most was a TechNet article.  Later, I found a blog post that was more specific about the execution policies.
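
As a quick sketch, the whole exercise from an elevated PowerShell prompt looks something like this:

# Show the current policy ("Restricted" on a fresh install)
Get-ExecutionPolicy

# Allow locally-created scripts to run; downloaded scripts must still be signed
Set-ExecutionPolicy RemoteSigned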

Can't launch Cassini outside Visual Studio? This may help ...

I’d been trying to launch the Cassini web server from a PowerShell script for quite a while, but kept getting an error when I tried to display the configured website in a browser.  When I opened up a debugger, it revealed a FileNotFoundException with the following details:

"Could not load file or assembly 'WebDev.WebHost, Version=8.0.0.0, Culture=neutral, PublicKeyToken=...' or one of its dependencies..."
Since WebDev.WebHost.dll was present in the correct .NET Framework directory, the FileNotFoundException was especially puzzling.  Fortunately, one of my colleagues figured out what the issue was: WebDev.WebHost.dll wasn't in the GAC.  Once the file was added to the GAC, I was able to launch Cassini and display the website with no problems.
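
If you hit the same error, a gacutil command along these lines should install the assembly.  gacutil ships with the Windows SDK, and the path below is just where the VS 2005-era WebDev.WebHost.dll typically lives, so adjust both for your machine:

gacutil /i "%WINDIR%\Microsoft.NET\Framework\v2.0.50727\WebDev.WebHost.dll"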

Can Google Find You?

Recruiters use Google.  Whether you’re actively seeking a new job or not, it’s important to use this fact to your advantage.  My friend Sandro gave me this advice years ago, when he told me to put my resume online and make it “googleable”.  For me, the result was contacts from recruiters and companies I might never have heard of otherwise.  In addition to putting your resume online, I would recommend blogging about your job–within reason.  Definitely do not write about company secrets or co-workers.  Putting such things in your blog doesn’t help you.  Instead, write about what you do, problems you’ve solved, even your process of problem-solving.  At the very least, when you encounter similar challenges in the future, you’ll have a reference for how you solved them in the past.  Your blog post about how you fixed a particular issue might be helpful to someone else as well.

There are many options available for putting a resume and/or blog online.  Sandro hosts his, mine, and a few others on a server at his house.  But for those of you who don’t have a buddy to host yours, here are a couple of readily-accessible free options:

There's a ton of advice out there on what makes a great resume, so I won't try to repeat it all here.  You can simply put a version of your one- or two-page Microsoft Word resume on the web, or you can put your entire career up there.  Having your own blog or website means you aren't subject to any restrictions on length that a site like Monster or CareerBuilder might impose.  Consider linking your resume to the websites of previous employers, technologies you've worked with, schools you've attended, and work you've done that showcases your skills (especially if it's web-related).  I don't know if that makes it easier for Google to find you, but it does give recruiters easy access to details about you they might otherwise have to dig for.  Doing what you can to make this process easier for them certainly can't hurt.

Transforming Healthcare through Information Technology

Back on November 20, I attended a seminar at the Reagan Building on how healthcare in the U.S. could be improved through information technology.  As an alumnus of the business school, and someone who’d worked in healthcare IT before, I wanted to learn about a part of the healthcare debate that I hadn’t seen covered much lately.  Dr. Ritu Agarwal gave the talk and answered questions during and after her presentation.

The main problem with healthcare in the U.S. could probably be summed up this way:

Despite spending more on healthcare than any other country in the world, our clinical outcomes are no better than in countries that spend far less.
Even more disturbing, of the 30 countries in the OECD, the U.S. has the highest infant mortality rate.

In the past 10 years, premiums for employer-based health insurance have risen 120%.  Over the same period, inflation grew 44%, while salaries grew only 29%.  So healthcare costs are increasing far faster than inflation (and our ability to pay for it with our salaries).

As far as healthcare IT goes, Dr. Agarwal gave the following reasons for the slow pace of adoption by healthcare providers:

  • inertia
  • it's a public good: patients, not the healthcare providers, get the benefits
  • lack of common standards
Adding to the inertia point is the fact that healthcare in the U.S. has many stakeholders--patients, medical professionals, hospitals, pharmaceutical companies, insurance companies, and more.

Dr. Agarwal pointed to a number of countries with successful implementations of healthcare IT.  They included Canada, Australia, and the United Kingdom.  Australia in particular was singled out as being 5-10 years ahead of the U.S.

One thing I didn’t expect was that the Veterans Administration and the Department of Defense would be held up as models of successful healthcare IT implementation.  One key factor noted by one of the other seminar participants was that the VA and DOD systems were closed: providers, specialists, hospitals, etc. were all part of the government.  This enabled them to enforce standards, in patient records and other areas.  Another point I considered later (which didn’t come up in the Q & A) was that the government model is non-profit as well.

Dr. Agarwal’s proposed solution to improving the current state of IT use in healthcare (as I recall it) was a regional exchange model.  Healthcare providers in a particular region of the U.S. would choose a standard for electronic health records (EHR) and other protocols.  Connections between these regional exchanges would ultimately form a national health information exchange.  Building on existing protocols and technologies (instead of attempting to build a national exchange from scratch) would be the most practical choice.

For more information, check out the slides from the presentation.


Unit testing strong-named assemblies in .NET

It’s been a couple of years since I first learned about the InternalsVisibleTo attribute.  It took until this afternoon to discover a problem with it.  This issue only occurs when you attempt to unit test internal classes of signed assemblies with an unsigned test assembly.  If you attempt to compile a Visual Studio solution in this case, the compiler will return the following complaint (among others):

Strong-name signed assemblies must specify a public key in their InternalsVisibleTo declarations.
Thankfully, this blog post gives a great walk-through of how to get this working.  The instructions in brief (sketched below):
  1. Sign your test assembly.
  2. Extract the public key.
  3. Update your InternalsVisibleTo argument to include the public key.
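
Here's a rough sketch of those steps from a Visual Studio command prompt.  MyTests is a placeholder assembly name, and the public key below is truncated; paste in the full key that sn prints:

rem 1. Generate a key pair, then sign the test assembly with it (via project properties or AssemblyKeyFile)
sn -k MyTests.snk

rem 2. Extract the public key from the key pair and display it
sn -p MyTests.snk MyTests.pub
sn -tp MyTests.pub

Then, in the signed production assembly's AssemblyInfo.cs:

[assembly: InternalsVisibleTo("MyTests, PublicKey=0024000004800000...")]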


A visit to Iowa City

Last weekend, I visited my cousin Kevin at the University of Iowa to sit in on his Ph.D. defense.  For the past five years, he’s been working in pharmaceutical chemistry, figuring out how to create vaccines that can be delivered directly to human genes.  I’m no chemist, so the bulk of his talk was way over my head, but it was very cool to see his command of the material and how well he presented.  When he came back from the private portion of his defense, we knew he’d succeeded.

After a celebratory lunch, Kevin took his brother Richard, sister Michelle, and me to a firing range to shoot.  By firing range, I don’t mean some shiny building with paper targets on motorized tracks.  We drove about an hour from Iowa City to a fenced-in area outdoors with some metal stands and a big pit.  You bring your own guns, ammo, and targets.  When other people are around, you have to signal them that you’re going to put targets out so they stop shooting.  We turned our fire on some empty steel solvent containers with four different weapons: a Ruger pistol (.22 LR ammunition), a Ruger rifle, a Springfield 1911 (.45 ammunition), and an M1 Garand (7.62mm rounds).  After spending a couple hours shooting, I will never look at Hollywood shoot-em-ups the same way again.  Movies seem totally fake compared with the noise and recoil of large-caliber weapons.  We had fun, and we turned out to be half-decent shots (for rookies).


StackOverflow Dev Days DC

In this case, DC = Falls Church, VA.  I went to the State Theatre to attend this conference.  Considering the cost ($99/person), the conference turned out to be a great value.  I wrote up a conference report to share with my bosses and co-workers and it’s included below.  It has footnotes because I typed it up in Word 2007 and pasted it in here with very little editing (because after all this writing, I’m feeling a bit lazy).

Summary

The main reasons the creators of stackoverflow.com organized this conference include the following:

  1. Bring together developers that are part of the Stack Overflow community[1]
  2. Teach developers something outside their immediate field[2]
  3. Accomplish 1 & 2 at low cost[3]
A fourth reason I would add is to pitch FogBugz 7.  It’s the primary product offering of Fog Creek Software, so it would have been odd for a conference it supports to not do at least a little advertising.  Spolsky also attempted to divide the venue by area for networking around certain topics, but I’m not sure how successful that was.

The conference succeeded in its main objectives.  At $99 per person, this conference was a bargain.  Given the diversity of topics and caliber of speakers, the price could have been higher and the venue would still have sold out.  Of the seven official topics presented (there was an eighth on Agile after the conference ended), only the ASP.NET MVC talk used technology that I had hands-on production experience with.  I was disappointed not to see a presentation on Android, but that was the only thing obviously missing from the day.

Keynote: Joel Spolsky

If I were to boil down Joel Spolsky’s keynote to a single phrase, it would be this:

“Death to the dialog box!”

Spolsky’s talk argued persuasively that software often forces users to make decisions about things they don’t care about, or don’t know enough about to answer correctly.  Among his examples were modal dialog boxes for products like QuickBooks and Internet Explorer, and the Windows Vista Start button.  He talked about the other extreme (overly simple applications) as well, using the “build less” design philosophy of the 37signals team as an example.[4] Equating those kinds of applications with Notepad was a reach (and clearly played for laughs), but described the limitations of the alternative to complex applications pretty well.  The examples did a good job of setting up the choice between simplicity and power.

He cited an experiment in the selection of jam from The Paradox of Choice: Why More Is Less[5] to show the potential drawbacks of too many choices.  When the results of this experiment showed that a display with fewer choices resulted in an order of magnitude more sales of jam, it put a monetary value on the design decision to limit choices.

Predictably, his examples of the right kind of design were products from Apple.  It takes a lot more effort to put a Nokia E71 in vibrate mode than it does an iPhone.  Spolsky pointed to the iPod’s lack of buttons for Stop and Power as examples of addition by subtraction.  The best example he chose was actually Amazon’s 1-Click ordering.  In addition to offering the most reasoned defense I’ve heard yet of Amazon winning that patent, he explained how it works for multiple purchases.

A few other takeaways from Spolsky’s keynote that I’ve tried to capture as close to verbatim as possible are:

  • The computer shouldn’t set the agenda.
  • Bad features interrupt users.
  • Don’t give users choices they don’t care about.
iPhone Development: Dan Pilone

This talk successfully combined technical depth on iPhone development with information about business models for actually selling an app once it’s complete.  Pilone discussed which design patterns to use (MVC, DataSource, Delegate) as well as what paid applications are selling for in the App Store (the highest-grossing ones sell for between $4.99 and $9.99).

One of the most useful parts of the talk was about the approval process.  He gave his own experience of getting applications through the submission process, including one that was rejected and the reasons why.  According to him, two weeks is the average time it takes Apple to accept or reject an application.  It’s even possible for upgrades of a previously-accepted app to be rejected.

Pilone did a good job of making it clear that quality is what sells applications.  He used the Convert[6] application (from taptaptap) as an example.  It’s just one of over 80 unit converter applications in the App Store, but it’s beating the competition handily.  OmniFocus was his second example.

Revenue Models

  • Ad-supported (very difficult to make money with these)
  • Paid
  • In-app upgrades[7]
Dan Pilone is the co-author of Head First iPhone Development[8], which will be available for sale on October 29.

His recommendation for selling apps in the App Store is to release a paid version first, then an ad-supported version.  This advice seemed counterintuitive to me, but I suspect he suggested it because there’s no data on the in-app upgrades revenue model.  I see in-app upgrades as Apple’s most explicit support for the “freemium”[9] business model yet.

ASP.NET MVC: Scott Allen

This talk was basically a demo of a preview version of ASP.NET MVC 2.  Allen wrote code for his demonstration on the fly (with the sort of mistakes that can happen using this approach), so the example was pretty basic.  The takeaways I thought were useful for getting started with the technology were:

  • External projects that add features to ASP.NET MVC
    • MVCContrib
    • MVC T4
  • You can combine standard WebForms and MVC in the same solution—particularly useful if you’re trying to migrate an application from ASP.NET to ASP.NET MVC.  Allen mentioned the blogging platform Subtext[10] as an example of one application attempting this kind of migration.
This talk left a lot to be desired.  StackOverflow is the most robust example of what can be built with ASP.NET MVC.  A peek inside even a small part of actual StackOverflow source using ASP.NET MVC would have made a far more compelling presentation.

FogBugz and Kiln

Even though this was strictly a product pitch, I’ve included it in the report because of how they implement a couple of ideas: plug-in architecture and tagging.

Plug-in architectures as an idea aren’t new—what was different was the motivation Joel Spolsky described for it.  One of his intentions was to make it easier for people to extend the functionality of FogBugz in ways he didn’t want.  He isn’t a fan of custom fields, so instead of building them into the core product, they’re implemented as a plug-in.  He demonstrated Balsamiq integration (via plug-in) as well, so the architecture does enable extension in ways he likes as well.

Tagging isn’t a new idea either—what I found very interesting is how they apply them in FogBugz.  Spolsky pitched them as a substitute for custom workflow.  His idea (as I understood it) was that bugs could be tagged with any items or statuses outside the standard workflow.  There wasn’t much more detail than this, but I think the idea definitely is worth exploring further.

Python: Bruce Eckel

His talk was supposed to be about Python, but Bruce Eckel covered a lot more than that.  The most important takeaways of his talk were these:

  1. In language design, maintaining backward compatibility can cripple a language.
  2. The best programming languages change for the better by not being afraid of breaking backward compatibility.
  3. “Threads are unsustainable.”
Eckel’s talk gave the audience a history of programming languages, as well as a hierarchy of “language goodness”.  For the main languages created after C, the primary motivation for each was to fix the problems of its predecessor: C++ was an attempt to fix the problems of C, while Java was an attempt to fix C++.  His assertion about the primary motivation behind Ruby was this (I’m paraphrasing):
Ruby intends to combine the best of Smalltalk and the best of Perl.
He made his point about the problems of backward compatibility by comparing an attempt to add closures to Java to language changes made by Ruby and Python.  An article titled “Seeking the Joy in Java” goes into greater detail on the Java side of things.[11] In the case of Java, the desire to maintain backward compatibility often prevents changes to a language which could fix things that are poorly implemented.  The authors of Python and Ruby aren’t afraid to break backward compatibility to make improvements, which makes them better languages than Java (in Eckel’s view).

Here’s Eckel’s hierarchy of programming languages:

  1. Python (his favorite)
  2. Ruby, Scala, Groovy (languages he also likes)
  3. Java
  4. C++
  5. C
Eckel also mentioned Grails as a framework he likes.

Another one of his pronouncements that stood out was a hope that Java would become the COBOL of the 21st century.

Eckel’s argument regarding the difficulty of writing good multithreaded code is one I’ve heard before.  He pointed to Python as a language with libraries for handling both the single processor task-switching and multi-processor parallel execution models of concurrency.

Google App Engine: Jonathan Blocksom

Jonathan Blocksom gave a great overview of Google App Engine.  He’s a member of Google’s Public Sector Project Team (not the App Engine team), and I think that helped him present the information from more of an audience perspective.  He did a nice job of describing the architecture and the advantages of using it.  Some of the applications running on Google App Engine include:

Blocksom also discussed some of the limitations of App Engine:
  • 30-second time limit on asynchronous tasks
  • No full text search
jQuery: Richard D. Worth

This may not have been the best talk for those already familiar with jQuery, but for me (someone unfamiliar with jQuery), it was close to perfect.  The presenter did an excellent job of showing its advantages over regular ECMAScript.  He used a clever trick to minimize the amount of typing during his demos by using slides with only the changed lines highlighted.  The “find things then do stuff” model he explained made it very easy to grasp what he was doing as he increased the complexity of his examples.[12]

Wrap-up

After the conference ended, a “metaStackOverflow” question was added to collect reviews of the conference from its attendees.[13] The top answer (as of October 28, 2009) also includes links to slides for three of the talks, which I’ve included here:
[1] [blog.stackoverflow.com/2009/05/s...](http://blog.stackoverflow.com/2009/05/stack-overflow-developer-days-conference/)

[2] Ibid

[3] Ibid

[4] gettingreal.37signals.com/toc.php

[5] www.amazon.com/gp/produc…

[6] taptaptap.com

[7] This revenue model is brand-new—Apple only began to support this within the past week or so.

[8] www.amazon.com/Head-Firs…

[9] en.wikipedia.org/wiki/Free…

[10] www.subtextproject.com

[11] www.artima.com/weblogs/v…

[12] Worth used http://jsbin.com/ for some of the more complex demos.  It’s a very good tool I hadn’t seen before.

[13] meta.stackoverflow.com/questions…


A .NET Client for REST Interface to Virtuoso

For my current project, I’ve been doing a lot of work related to the Semantic Web.  This has meant figuring out how to write SPARQL queries in order to retrieve data we can use for testing our application.  After figuring out how to do this manually (we used this SPARQL endpoint provided by OpenLink Software), it was time to automate the process.  The Virtuoso product has a REST service interface, but the only example I found for interacting with it used curl.  Fortunately, some googling revealed a really nice resource on the Yahoo Developer Network with some great examples.

I’ve put together a small solution in Visual Studio 2008 with a console application (VirtuosoPost) which executes queries against http://dbpedia.org/fct/ and returns the query results as XML.  It’s definitely not fancy, but it works.  There’s plenty of room for improvement, and I’ll make updates available here.  The solution does include all the source, so any of you out there who are interested in taking the code in an entirely different direction are welcome to do so.
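
For a flavor of what the core of such a client looks like, here’s a minimal C# sketch of POSTing a SPARQL query.  I’m assuming the conventional Virtuoso endpoint behavior here (a form field named query, XML results requested via the Accept header); the actual VirtuosoPost app targets http://dbpedia.org/fct/ and differs in the details:

using System;
using System.IO;
using System.Net;
using System.Text;

class SparqlPostSketch
{
    static void Main()
    {
        // Assumption: the standard Virtuoso SPARQL endpoint for DBpedia
        string endpoint = "http://dbpedia.org/sparql";
        string query = "SELECT ?p ?o WHERE { <http://dbpedia.org/resource/Bank> ?p ?o } LIMIT 10";

        // POST the query as a form-encoded body, asking for XML results
        HttpWebRequest request = (HttpWebRequest)WebRequest.Create(endpoint);
        request.Method = "POST";
        request.ContentType = "application/x-www-form-urlencoded";
        request.Accept = "application/sparql-results+xml";

        byte[] body = Encoding.UTF8.GetBytes("query=" + Uri.EscapeDataString(query));
        request.ContentLength = body.Length;
        using (Stream requestStream = request.GetRequestStream())
        {
            requestStream.Write(body, 0, body.Length);
        }

        using (WebResponse response = request.GetResponse())
        using (StreamReader reader = new StreamReader(response.GetResponseStream()))
        {
            Console.WriteLine(reader.ReadToEnd()); // raw XML result set
        }
    }
}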


Adventures in SPARQL

If this blog post seems different than usual, it’s because I’m actually using it to get tech support via Twitter for an issue I’m having.  One of my tasks for my current project has me generating data for use in a test database. DBPedia is the data source, and I’ve been running SPARQL queries to retrieve RDF/XML-formatted data against their Virtuoso endpoint.  For some reason though, the resulting data doesn’t validate.

For example, the following query:

PREFIX owl: <http://www.w3.org/2002/07/owl#>
PREFIX xsd: <http://www.w3.org/2001/XMLSchema#>
PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
PREFIX rdf: <http://www.w3.org/1999/02/22-rdf-syntax-ns#>
PREFIX foaf: <http://xmlns.com/foaf/0.1/>
PREFIX dc: <http://purl.org/dc/elements/1.1/>
PREFIX : <http://dbpedia.org/resource/>
PREFIX dbpedia2: <http://dbpedia.org/property/>
PREFIX dbpedia: <http://dbpedia.org/>
PREFIX skos: <http://www.w3.org/2004/02/skos/core#>
SELECT ?property ?hasValue ?isValueOf
WHERE {
  { <http://dbpedia.org/resource/Bank> ?property ?hasValue FILTER (LANG(?hasValue) = 'en') . }
  UNION
  { ?isValueOf ?property <http://dbpedia.org/resource/Bank> }
}
generates the RDF/XML output here.  If I try to parse the file with an RDF validator (like this one, for example), validation fails.  Removing the attributes from the output takes care of the validation issues, but I'm not sure why the node IDs are added in the first place.

Adding File Headers Made Easy

One of the things on my plate at work is a macro for adding a file header and footer to all the source files in a Visual Studio solution. The macro I put together from my own implementation, various web sources, and a colleague’s help accomplished the goal at one time, but inconsistently. So I’d been exploring other avenues for getting this done when Scott Garland told me about the File Header Text feature of ReSharper. You simply put in the text you want to appear at the top of your source file, add a new Code Cleanup profile, check the Use File Header Text option, then run the new profile on your solution.

The result: if the filename ends in “.cs”, ReSharper will add the value in File Header Text as a comment to the top of the file. It’s even clever enough not to add duplicate text if a file already contains it in its header. So if you need to add copyright notices or any other text to the top of your source code files, and you use ReSharper, you’ve already got what you need.