ScrollViewer+ItemsControl vs. ListView
One of my most recent tasks at work was determining the cause of slow performance in one part of an application and coming up with a fix (if possible). We tracked the source of the problem down to a use of ItemsControl inside a ScrollViewer. Because the ItemsControl instance was trying to display hundreds of complex items, it took a noticeably long time to load. This turns out to be a known issue, with a few possible solutions. Simply changing the ItemsPanelTemplate of the ItemsControl instance to contain a VirtualizingStackPanel didn’t fix our performance problem; the outer ScrollViewer measures the panel with unbounded vertical space, so every item container still gets created up front and virtualization never kicks in.
What did resolve our performance issue was replacing the ScrollViewer and ItemsControl combination with a ListView. The list of what we changed includes (a XAML sketch follows the list):
- Giving the ListView the same name as the ItemsControl.
- Giving the ListView the same ItemsSource as the ItemsControl.
- Updating the ItemsPanelTemplate of the ListView to use VirtualizingStackPanel.
- Setting ScrollViewer.HorizontalScrollBarVisibility to "Disabled".
- Binding the Visibility property of the ListView to a converter.
- Updating the ItemContainerStyle with a ListViewItem style that sets HighlightBrushKey and ControlBrushKey to transparent brushes.
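Pulled together, the result looked roughly like this. Treat it as a minimal sketch rather than our production markup; the name, the Items binding, and the visibility converter are stand-ins:

<ListView x:Name="ItemsList"
          ItemsSource="{Binding Items}"
          Visibility="{Binding HasItems, Converter={StaticResource BoolToVisibilityConverter}}"
          ScrollViewer.HorizontalScrollBarVisibility="Disabled">
  <ListView.ItemsPanel>
    <ItemsPanelTemplate>
      <VirtualizingStackPanel />
    </ItemsPanelTemplate>
  </ListView.ItemsPanel>
  <ListView.ItemContainerStyle>
    <Style TargetType="{x:Type ListViewItem}">
      <Style.Resources>
        <!-- Transparent brushes suppress the selection highlight, preserving the ItemsControl look -->
        <SolidColorBrush x:Key="{x:Static SystemColors.HighlightBrushKey}" Color="Transparent" />
        <SolidColorBrush x:Key="{x:Static SystemColors.ControlBrushKey}" Color="Transparent" />
      </Style.Resources>
    </Style>
  </ListView.ItemContainerStyle>
</ListView>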
The changes we made reduced the load time from around 20 seconds down to less than 2 seconds for 400 items.
The tradeoff in moving to a ListView (with VirtualizingStackPanel) from ScrollViewer+ItemsControl is scrolling speed. Scrolling through 400 items does go more slowly, but it’s preferable to waiting as long as we did just to see the data.
Bloatware happens when you aren't the only customer
This article in Ars Technica reminded me of one of the things I never liked about PCs from Dell, HP, or any other major vendor: bloatware. Every PC I’ve had that I didn’t build myself, and every Windows laptop I’ve owned, came with multiple pieces of software that I didn’t want. The least offensive of these apps merely took up hard drive space. The worst made the computer slower and more of a hassle to use. Switching from Windows to Mac at home meant no more bloatware. Moving to an Intel-based Mac extended the no-bloatware experience to Windows VMs. Moving to the iPhone a couple of years ago (and sticking with it by buying an iPhone 4) seems to have spared me from vendor bloatware as well. This is especially important in the mobile space, because today’s smartphones have far less storage space to waste, and less powerful CPUs, than modern PCs.
Bloatware happens on the PC because even though we buy them, we aren’t the only customer. Every company with some anti-virus software to sell, a search engine they want you to use, or some utility they want you to buy wants to be on your PC or laptop. They’re willing to pay to be on your new machine, and PC vendors aren’t going to turn down that money. A similar thing seems to be happening on Android phones now. Here’s the key quote from the story:
"It's different from phone to phone and operator to operator," says Keith Nowak, spokesman for HTC. "But in general, the apps are put there to meet the operator's business and revenue needs." (emphasis mine)The money we pay for our voice and data plans isn't the only money that Verizon, AT&T, Sprint, & T-Mobile want. Some of these carriers have decided that they'll take money from other companies who want to put applications on the smart-phones they sell. This highlights one of the key differences between the way Google approaches the mobile phone market and the way Apple does it.
When it comes to Android, you and I aren’t Google’s customers–not really. The real customers are mobile phone hardware vendors like Motorola and HTC. They need to care how open and customizable Android is because they expect it to help them sell more phones. Making the OS free for the vendors is in Google’s interest because the bulk of their revenue comes from advertising. The more phones there are running Android, the more places their ads will appear. Android’s openness is only of interest to us as users to the extent it allows us to do what we want with our mobile phone.
Unlike Google, Apple is in business to sell us electronics. They expect iOS4 to help them sell more iPhones and iPads. But since you and I are the customers Apple is chasing, no pre-loading of apps from third parties. It doesn’t mean they won’t feature apps that highlight the phone’s capabilities (Apple does plenty of that). Nor does it mean we can’t get apps from AT&T, just that putting them on and taking them off is our choice. There is a tradeoff as far as how long iPhone users wait for features when compared with Android phone users. But I think the iPhone features all work better, and work together to create arguably the best smartphone available.
My First PowerShell Cmdlet
We’ve been using PowerShell to write automated tests of the UI on my current project. One of the tasks I took on today was creating a custom cmdlet to enable us to select radio buttons.
I already had an existing assembly of cmdlets to work with, so I just added a new class (SelectRadioButton) to it. Next, I added references to System.Management.Automation and System.Windows.Automation. With these references in place, I could add this attribute to the class:
[Cmdlet(VerbsCommon.Select, "RadioButton", SupportsShouldProcess = true)]
The attribute determines the actual name of the cmdlet you'll use in scripts (Select-RadioButton). The cmdlet needs an instance of AutomationElement to operate on, so that's defined next:
[Parameter(Position = 0, Mandatory = true, HelpMessage = "Element containing a radio button control")]
[ValidateNotNull]
public AutomationElement Element { get; set;}
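For context, these pieces hang off a class that derives from Cmdlet. The skeleton below is a sketch; the namespace is hypothetical, and deriving from PSCmdlet would work just as well:

using System.Management.Automation;
using System.Windows.Automation;

namespace UIAutomationCmdlets // hypothetical namespace
{
    [Cmdlet(VerbsCommon.Select, "RadioButton", SupportsShouldProcess = true)]
    public class SelectRadioButton : Cmdlet
    {
        // The Element parameter above and the ProcessRecord override below live here.
    }
}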
Finally, I adapted some of the logic for my ProcessRecord override from this article on using UI automation. The end result looks something like this:
protected override void ProcessRecord()
{
    try
    {
        if (Element.Current.ControlType.Equals(ControlType.RadioButton))
        {
            // SelectionItemPattern exposes the Select() operation for radio buttons
            SelectionItemPattern pattern = Element.GetCurrentPattern(SelectionItemPattern.Pattern) as SelectionItemPattern;
            if (pattern != null) pattern.Select();
        }
        else
        {
            // Handle elements that aren't radio buttons (e.g., call WriteError)
        }
    }
    catch (Exception ex)
    {
        // A good spot for logging before rethrowing
        throw;
    }
}
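Using the cmdlet from a test script is then a one-liner (this assumes $radioButton already holds an AutomationElement located earlier in the script):

Select-RadioButton -Element $radioButton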
In non-iPhone 4 news
Apple stealthily revised the Mac mini. Get the full story here, but the part I think is the most interesting is that they designed in a removable panel on the bottom to make it easy to replace the RAM yourself. It shows a rare bit of flexibility from Apple when it comes to their hardware.
As for the rest of the device:
- No more power brick? Nice!
- Tons of ports (including HDMI).
- SD card slot.
- No Blu-Ray? Rats. "Bag of hurt" or no, that would have been nice.
- The price bump from the previous version of the Mac mini seems a bit steep.
When default settings attack
When you first install SQL Server 2008 Express, the TCP/IP protocol is disabled by default. Be sure the protocol is enabled (which requires restarting the service) before you try to run an application that depends on it; otherwise, you could spend hours trying to figure out why your application won’t work. It looks like SQL Server 2008 R2 Developer behaves the same way.
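For reference, enabling it only takes a minute. The service name below assumes the default Express instance (SQLEXPRESS); substitute your own instance name:
- Open SQL Server Configuration Manager.
- Under SQL Server Network Configuration > Protocols for SQLEXPRESS, enable TCP/IP.
- Restart the service, e.g. net stop MSSQL$SQLEXPRESS followed by net start MSSQL$SQLEXPRESS.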
I suggested this a while back to a co-worker who’d been struggling all day to figure out why an application wasn’t working, and it turned out to be the solution.
To Curacao and back
I spent the past 7 days vacationing in Curacao with my girlfriend Ebony and another couple we’re friends with. In this post, I’ll talk about how it went, and how I might have done things differently if I were visiting again.
Why Curacao?
Ebony has wanted to go there for a while, because of the beautiful water, sun, and beaches.
What to wear
Definitely wear light clothing. Average high temperatures in Curacao are mid-to-upper 80s Fahrenheit year-round. Don’t skimp on sunscreen, or you’ll regret it–even if your skin is already relatively dark. My friends aren’t that much lighter than me, and all of them got burnt. Don’t go easy on bug spray either.
Accommodations
We spent two nights at Renaissance Curacao Resort & Casino in Willemstad. If you’re familiar with and/or a fan of Marriott properties, this one has everything you expect. They also have plenty of outlets for appliances and electronics from the U.S., so you won’t need to use converters. The only wi-fi access appeared to be in the lobby, and I never managed to connect with my iPhone. There’s wired internet access from the rooms, so netbook and laptop users will have an alternative to the business center. The private beach they talk about on their web page is man-made, and connects not to the ocean but to a large saltwater pool.
We spent the rest of the time at the Hyatt Regency Curacao Golf Resort, Spa, and Marina. Parts of the property are still under construction, so we got the benefit of a grand opening rate, 4 nights for the price of 3, and free breakfast for the duration of our stay. The service we received from every member of the staff was excellent. Without exception, they were all incredibly courteous and polite, and went out of their way to accommodate our requests. I thought the rooms were nice, but some of the balconies are much better for privacy than others. The tub has a rather high edge, so it’s a bit of a challenge to get into unless you’re tall. Strangely, the shower only has frosted glass on half the length of the tub–and no sliding door.
Food
Bistro Le Clochard was expensive, but the food was excellent. It’s inside Rif Fort, a very short walk from the Renaissance. We discovered that their kitchen accommodates vegetarians quite well. It seems to be a quite popular place, so make reservations ahead of time, or you’ll have to eat elsewhere. The restaurant within the Renaissance is ok.
The Hyatt has three restaurants: Medi, Shor, and Swim. The food at all of them is quite good, though the serving times vary widely (Shor is the slowest, Swim is the fastest). Swim will serve you poolside or at the beach. Their plantain chips and fish tacos were especially good.
How to pay
U.S. currency was accepted everywhere we tried to use it, as were our credit cards. I checked the tourist board website to get information ahead of time.
Activities, Attractions, & Shopping
Of the attractions available in Curacao, we got to the Kura Hulanda Museum and the Rif Fort in Willemstad. At Kura Hulanda, the extra money for a tour guide was well worth it. The museum provides a great history lesson on many cultures, as well as on the slave trade. With more time, I would have visited the Mikve Israel-Emanuel Synagogue and the Maritime Museum as well.
During our time at the Hyatt, we went on a 3-hour cruise with some time for snorkeling. I still regret my lack of underwater camera gear for this, because there were a lot of strange and beautiful fish to see. There was even a small shipwreck close to where we snorkeled that we were able to see. Ocean Encounters handled our tour, and they did an excellent job. If we’d planned further in advance, we could have gone on the 7-hour cruise to and from Klein Curacao. This trip made me wish I knew how to scuba dive. A trip back for the sole purpose of getting PADI-certified would probably be worth it.
At least in Willemstad, there are tons of places to buy jewelry, electronics, clothes, and souvenirs. Prices in downtown were pretty good from what I saw. The street vendors just outside the Rif Fort offered the best prices, and we ended up getting a couple of very nice things in both places.
Getting there (and back)
We flew American Airlines from Reagan National Airport to Hato International via Miami. Our friends flew to Miami from Philadelphia, then to Hato International. During the time I researched flight costs, they ranged from $450/person to well over $600/person. We ended up using frequent flier miles for the DCA-MIA leg of the trip to cut down our out-of-pocket costs. If I had it to do over again, I’d have planned much further in advance.
One thing I noticed (to my annoyance) about flying into and out of Miami is that the gate personnel decided to pick on either Ebony or both of us about the size of our carry-on luggage. To make sure you avoid that kind of harassment, make sure your packed carry-on fits in the stupid little cages they have near the gates. Otherwise, you could end up having to check a bag you weren’t expecting and risk the airline losing it (like American Airlines nearly did with her bag).
The last thing I’ll say about flying to and from Curacao (at least in this post) is to avoid taking the last flight out of Curacao on whatever day you depart. If there’s a problem with that flight (as there was in our case), you’ll be stuck at least one extra day.
Going beyond files with ItemGroup
If you Google for information on the ItemGroup element of MSBuild, most of the top search results will discuss its use in dealing with files. The ItemGroup element is far more flexible than this, as I figured out today when making changes to an MSBuild script for creating databases.
My goal was to simplify the targets I was using to create local groups and add users to them. I started with an example on pages 51-52 of Inside the Microsoft Build Engine: Using MSBuild and Team Foundation Build, where the author uses an ItemGroup to create a list of 4 servers. Each server has custom metadata associated with it. In his PrintInfo target, he displays the data, overrides an element with new data, and even removes an element. Because MSBuild supports batching, you can declare a task once and have it execute once for each item in an ItemGroup. Here’s how I leveraged this capability:
- I created a target that stored the username of the logged-in user in a property.
- I created an ItemGroup. The metadata for each group element was the name of the local group I wanted to create.
- I wrote the Exec commands to execute on each member of the ItemGroup.
<ItemGroup>
  <LocalGroup Include="Group1">
    <Name>localgroup1</Name>
  </LocalGroup>
  <LocalGroup Include="Group2">
    <Name>localgroup2</Name>
  </LocalGroup>
  <LocalGroup Include="Group3">
    <Name>localgroup3</Name>
  </LocalGroup>
</ItemGroup>

The Exec commands for deleting a local group if it exists, creating a local group, and adding the logged-in user to it, look like this:
<Exec Command="net localgroup %(LocalGroup.Name) /delete" IgnoreExitCode="true" /> <Exec Command="net localgroup %(LocalGroup.Name) /add" IgnoreExitCode="true" /> <Exec Command="net localgroup %(LocalGroup.Name) $(LoggedInUser) /add" />The result is that these commands are executed for each member of the ItemGroup. This implementation makes a lot easier to add more local groups if necessary, or make other changes to the target.
From enthusiast to user
My friend Sandro read this Slate piece yesterday and wrote this blog entry in part about enthusiasts and users. I think his concern that today’s computer science students seem to be more users than enthusiasts is very legitimate, since they’re some of the people we’re counting on for the next advances in the field of computing and innovative new products. The similarity he sees between advances in automobiles and computing is an interesting one. I agree with him up to a point about commoditization, but I see real benefits to certain things becoming commodities. He touches only briefly on case mods in the PC space, but commodity hardware (RAM, hard drives, video cards, etc.) has made it a lot easier for the technically-inclined to build entire PCs themselves, instead of buying whatever Dell or HP is selling this week. Commodity hardware and operating systems are what enable a product like TiVo to exist (along with less-capable imitators). We have commodity hardware to thank for the Xbox 360, and commodity operating systems to thank for XBMC. This doesn’t mean that a ton of people will avail themselves of the option to build their own PC, or their own home theater PC, just that the option is definitely out there for those who want to.
I suspect it has always been the case that the vast majority of people would rather use something cool than build it. As much as those of us in the U.S. love cars, very few of us will be building our own Rally Fighter anytime soon. I enjoy photography as a hobby, but I haven’t been in a darkroom to develop my own film in years (nor did I ever spend enough time in one to get really good at the process). At least with computers, there came a point for me where fiddling around inside the guts of a computer to get something working again got to be too much of a hassle. This could mean I’ve gotten lazy, but I really like it when things just work. There’s definitely something to be said for having the ability to fix something or hack it to do something beyond its original purpose. I’ve always admired people like that, and I think they’re responsible for a lot of the great advances we benefit from today.
I think human nature is such that we won’t run out of enthusiasts anytime soon. As long as there are magazines like MAKE and sites like iFixit, enthusiasts will continue to do things that make users jealous.
An update on SCO
Though I wished them dead years ago, SCO still lives. With any luck, this latest court ruling will finally finish them off.
Adventures in e-commerce
I’m working on an e-commerce site for the first time in about 10 years. The site is Trés Spa, a skin care products company in northern California. Unlike my previous forays into e-commerce, none of the technologies I’m using come from Microsoft. I’m using the community edition of Magento. So far, I’ve been able to update the configuration so that USPS shipping options show up in shipping and tax estimates. I haven’t had to write any code yet, but we’ll see if that changes. Despite running WordPress for a while, I’ve done very little with PHP.
More on migrating partially-trusted managed assemblies to .NET 4
Some additional searching on changes to code access security revealed a very helpful article on migrating partially-trusted assemblies. What I posted yesterday about preserving the previous behavior is found a little over halfway through the article, in the Level1 and Level2 section.
One thing this new article makes clear is that SecurityRuleSet.Level1 should be used only as a temporary measure, until code can be updated to support the new security model.
Upgrading .NET assemblies from .NET 3.5 to .NET 4.0
Code access security is one area that has changed quite significantly between .NET 3.5 and .NET 4.0. An assembly that allowed partially-trusted callers under .NET 3.5 will throw exceptions when called under .NET 4.0 once it has been upgraded. To make such assemblies continue to function after being upgraded, AssemblyInfo.cs needs to change from this:
[assembly: AllowPartiallyTrustedCallers]

to this:
[assembly: AllowPartiallyTrustedCallers]
[assembly: SecurityRules(SecurityRuleSet.Level1)]

Once this change has been made, the assembly will work under the same code access security rules that applied prior to .NET 4.0.
When 3rd-party dependencies attack
Lately, I’ve been making significant use of the ExecuteDDL task from the MSBuild Community Tasks project in one of my MSBuild scripts at work. Today, someone on the development team got the following error when they ran the script:
"Could not load file or assembly 'Microsoft.SqlServer.ConnectionInfo, Version=9.0.242.0, Culture=neutral, PublicKeyToken=89845dcd8080cc91'It turned out that the ExecuteDDL task has a dependency on a specific version of Microsoft.SqlServer.ConnectionInfo deployed by the installation of SQL Server 2005 Management Tools. Without those tools on your machine, and without an automatic redirect in .NET to the latest version of the assembly, the error above results. The fix for it is to add the following XML to the "assemblyBinding" tag in MSBuild.exe.config (in whichever .NET Framework version you're using):
<dependentAssembly>
  <assemblyIdentity name="Microsoft.SqlServer.ConnectionInfo" publicKeyToken="89845dcd8080cc91" culture="neutral" />
  <bindingRedirect oldVersion="0.0.0.0-9.9.9.9" newVersion="10.0.0.0" />
</dependentAssembly>

Thanks to Justin Burtch for finding and fixing this bug. I hope the MSBuild task gets updated to handle this somehow.
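For anyone who hasn't edited MSBuild.exe.config before, the dependentAssembly element nests inside the runtime section; only the relevant elements are shown here:

<configuration>
  <runtime>
    <assemblyBinding xmlns="urn:schemas-microsoft-com:asm.v1">
      <dependentAssembly>
        <!-- assemblyIdentity and bindingRedirect from above go here -->
      </dependentAssembly>
    </assemblyBinding>
  </runtime>
</configuration>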
Continuous Integration Enters the Cloud
I came across this blog post in Google Reader and thought I’d share it. The idea of being able to outsource the care and feeding of a continuous integration system to someone else is a very interesting one. Having implemented and maintained such systems (which I’ve blogged about in the past), I know it can be a lot of work (though using a product like TeamCity lightens the load considerably compared with CruiseControl.NET). Stelligent isn’t the first company to come up with the idea of CI in the cloud, but they may be the first using all free/open source tools to implement it.
I’ve read Paul Duvall’s book on continuous integration and highly recommend it to anyone who works with CI systems on a regular basis. If anyone can make a service like this successful, Mr. Duvall can.
Set-ExecutionPolicy RemoteSigned
When you first get started with PowerShell, don’t forget to run ‘Set-ExecutionPolicy RemoteSigned’ from the PowerShell prompt. If you try to run a script without doing that first, expect to see a message like the following:
File <path to file> cannot be loaded because execution of scripts is disabled on this system. Please see "get-help about_signing" for more details.

The default execution policy for PowerShell is "Restricted" (commands only, not scripts). The other execution policy options (in decreasing order of strictness) are:
- AllSigned
- RemoteSigned
- Unrestricted
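Checking the policy before and after the change is a quick sanity test. From a PowerShell prompt (run as Administrator), the round trip looks like this, assuming a fresh install:

PS> Get-ExecutionPolicy
Restricted
PS> Set-ExecutionPolicy RemoteSigned
PS> Get-ExecutionPolicy
RemoteSigned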
Can't launch Cassini outside Visual Studio? This may help ...
I’d been trying to launch the Cassini web server from a PowerShell script for quite a while, but kept getting an error when I tried to display the configured website in a browser. When I opened up a debugger, it revealed a FileNotFoundException with the following details:
"Could not load file or assembly 'WebDev.WebHost, Version=8.0.0.0, Culture=neutral, PublicKeyToken=...' or one of its dependencies..."Since the WebDev.WebHost.dll was present in the correct .NET Framework directory, the FileNotFoundException was especially puzzling. Fortunately, one of my colleagues figured out what the issue was. WebDev.WebHost.dll wasn't in the GAC. Once the file was added to the GAC, I was able to launch Cassini and display the website with no problems.
Can Google Find You?
Recruiters use Google. Whether you’re actively seeking a new job or not, it’s important to use this fact to your advantage. My friend Sandro gave me this advice years ago, when he told me to put my resume online and make it “googleable”. For me, the result was contacts from recruiters and companies I might never have heard of otherwise. In addition to putting your resume online, I would recommend blogging about your job–within reason. Definitely do not write about company secrets or co-workers. Putting such things in your blog doesn’t help you. Instead, write about what you do, problems you’ve solved, even your process of problem-solving. At the very least, when you encounter similar challenges in the future, you’ll have a reference for how you solved them in the past. Your blog post about how you fixed a particular issue might be helpful to someone else as well.
There are many options available for putting a resume and/or blog online. Sandro hosts his, mine, and a few others on a server at his house. But for those of you who don’t have a buddy to host yours, here are a couple of readily-accessible free options:
There's a ton of advice out there on what makes a great resume, so I won't try to repeat it all here. You can simply put a version of your 1- or 2-page Microsoft Word resume on the web, or you can put your entire career up there. Having your own blog or website means you aren't subject to any restrictions on length that a site like Monster or CareerBuilder might impose. Consider linking your resume to the websites of previous employers, technologies you've worked with, schools you've attended, and work you've done that showcases your skills (especially if it's web-related). I don't know if that makes it easier for Google to find you, but it does give recruiters easy access to details about you they might otherwise have to dig for. Doing what you can to make this process easier for them certainly can't hurt.

Transforming Healthcare through Information Technology
Back on November 20, I attended a seminar at the Reagan Building on how healthcare in the U.S. could be improved through information technology. As an alumnus of the business school, and someone who’d worked in healthcare IT before, I wanted to learn about a part of the healthcare debate that I hadn’t seen covered much lately. Dr. Ritu Agarwal gave the talk and answered questions during and after her presentation.
The main problem with healthcare in the U.S. could probably be summed up this way:
Despite spending more on healthcare than any other country in the world, our clinical outcomes are no better than in countries that spend far less.

Even more disturbing, of the 30 countries in the OECD, the U.S. has the highest infant mortality rate.
In the past 10 years, premiums for employer-based health insurance have risen 120%. Over the same period, inflation grew 44%, while salaries grew only 29%. So healthcare costs are increasing far faster than inflation (and our ability to pay for them with our salaries).
As far as healthcare IT goes, Dr. Agarwal gave the following reasons for the slow pace of adoption by healthcare providers:
- inertia
- it's a public good: patients get the benefits, not the healthcare providers
- lack of common standards
Dr. Agarwal pointed to a number of countries with successful implementations of healthcare IT. They included Canada, Australia, and the United Kingdom. Australia in particular was singled out as being 5-10 years ahead of the U.S.
One thing I didn’t expect was that the Veterans Administration and the Department of Defense would be held up as domestic models of successful healthcare IT implementation. One key factor noted by one of the other seminar participants was that the VA and DOD systems were closed: providers, specialists, hospitals, etc. were all part of the government. This enables them to enforce standards, in patient records and other areas. Another point I considered later (which didn’t come up in the Q & A) was that the government model is non-profit as well.
Dr. Agarwal’s proposed solution to improving the current state of IT use in healthcare (as I recall it) was a regional exchange model. Healthcare providers in a particular region of the U.S. would choose a standard for electronic health records (EHR) and other protocols. Connections between these regional exchanges would ultimately form a national health information exchange. Building on existing protocols and technologies (instead of attempting to build a national exchange from scratch) would be the most practical choice.
For more information, check out the slides from the presentation.
Unit testing strong-named assemblies in .NET
It’s been a couple of years since I first learned about the InternalsVisibleTo attribute. It took until this afternoon to discover a problem with it. This issue only occurs when you attempt to unit test internal classes of signed assemblies with an unsigned test assembly. If you attempt to compile a Visual Studio solution in this case, the compiler will return the following complaint (among others):
Strong-name signed assemblies must specify a public key in their InternalsVisibleTo declarations.

Thankfully, this blog post gives a great walk-through of how to get this working. The instructions in brief (a sketch follows the list):
- Sign your test assembly.
- Extract the public key.
- Update your InternalsVisibleTo argument to include the public key.
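Here's roughly what those steps look like in practice. The key file name, test assembly name, and public key below are placeholders rather than values from a real project. Extract and display the public key with the strong-name tool:

sn -p Tests.snk Tests.PublicKey
sn -tp Tests.PublicKey

Then paste the full (untruncated) key into the attribute in the signed assembly's AssemblyInfo.cs:

[assembly: InternalsVisibleTo("MyProduct.Tests, PublicKey=0024000004800000...")]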