Technology
Replicating Folder Structures in New Environments with MSBuild
I recently received the task of modifying an existing MSBuild script to copy configuration files from one location to another while preserving all but the top levels of their original folder structure. Completing this task required a refresher in MSBuild well-known metadata and task batching (among other things), so I’m recounting my process here for future reference.
The config files that needed copying were already collected into an item via a CreateItem task. Since we’re using MSBuild 4.0, though, I replaced it with the simpler ItemGroup. CreateItem has been deprecated for a while, but can still be used. There is some debate over the precise differences between CreateItem and ItemGroup, but for me the bottom line is the same (or better) functionality with less XML.
Creating a new folder on the fly is easy enough with the MakeDir task. There’s no need to check manually whether the directory you’re trying to create already exists. The task just works.
The trickiest part of this task was figuring out what combination of well-known metadata needed to go in the DestinationFiles attribute of the Copy task to achieve the desired result. The answer ended up looking like this:
<Copy SourceFiles="@(ConfigFiles)" DestinationFiles="$(OutDir)_Config\$(Environment)\%(ConfigFiles.RecursiveDir)%(ConfigFiles.Filename)%(ConfigFiles.Extension)" />
The key bit of metadata is RecursiveDir. Since the ItemGroup that builds the file collection uses the ** wildcard, which covered all of the original folder structure I needed, putting %(RecursiveDir) after the new “root” destination and before the file names gave me the result I wanted. Well-known metadata was also vital to the task because all of the files have the same name (Web.config), so the easiest way to differentiate them for copying was their location in the folder structure.
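Putting the pieces together, the whole thing can be sketched roughly like this (the item, property, and target names here are illustrative, not the actual build script):

```xml
<!-- Illustrative sketch; SourceRoot, ConfigFiles, and CopyConfigs are hypothetical names -->
<ItemGroup>
  <!-- The ** wildcard is what populates RecursiveDir with the folder structure below Configs\ -->
  <ConfigFiles Include="$(SourceRoot)\Configs\**\Web.config" />
</ItemGroup>

<Target Name="CopyConfigs">
  <!-- MakeDir is a no-op when the directory already exists -->
  <MakeDir Directories="$(OutDir)_Config\$(Environment)" />
  <Copy SourceFiles="@(ConfigFiles)"
        DestinationFiles="$(OutDir)_Config\$(Environment)\%(ConfigFiles.RecursiveDir)%(ConfigFiles.Filename)%(ConfigFiles.Extension)" />
</Target>
```

Because the Include path ends at Configs\, everything above that folder is the “top levels” that get stripped; RecursiveDir carries only the structure below it.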
In addition to the links above, this book by Sayed Ibrahim Hashimi was very helpful. In a previous job where configuration management was a much larger part of my role, I referred to it (and sedodream.com) on a daily basis.
Fixing MVC Sitemap Errors
When attempting to manually test a .NET MVC application, I got the following exception from Visual Studio:
Looking at the inner exception revealed this message:
An item with the same key has already been added.

The sitemap file for our application is pretty long (over 1300 lines of XML), but a co-worker pointed me to the potential culprit right away. There was a sitemap node near the end of the file that had empty strings for its controller and action attributes. As far as I can tell, this generates the default URL for the site's home page. Since that URL already exists, the exception is thrown. Removing the sitemap node resolved our issue. A couple of threads that I checked on Stack Overflow (here and here) provide other possible causes for the error.
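For illustration, the offending node looked something like this (a reconstruction, not the actual file): with empty controller and action attributes, it resolves to the same default route as the existing home page node, hence the duplicate key.

```xml
<!-- Hypothetical reconstruction of the bad node: empty controller/action
     map to the default route, colliding with the real home page node -->
<mvcSiteMapNode title="Orphaned Node" controller="" action="" />
```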
Visual Studio & TFS Behavior Tweaks
One of a few long-running annoyances I’ve had with every version of TFS is its default check-in behavior: checking in resolves the associated work item, which is virtually never what you want the first (or second, or third) time you check in code to fix a bug or implement new functionality. Fortunately, Edsquared has the solution.
After making this long-overdue change in my development environment, I exported the keys for VS2010 and VS2012 as registration entry files below:
Feel free to use them in your environment.
Tim Cook Should Ignore Ars Technica (Almost) Completely
I came across this article by Jacqui Cheng and thought I’d add my two cents on each of the suggestions.
10. License OS X. Despite the article’s protestations that licensing doesn’t have to be the disaster it was for them in the 90s, this suggestion misses the mark because it misunderstands what kind of company Apple is: a hardware company. Licensing OS X would only send hardware revenue to a company (or companies) other than Apple. There’s no compelling reason for them to give away that money. Licensing the OS won’t get them additional users or revenue, or get them into some new market they might want to enter. This is by far the worst idea on the list.
9. Bring some manufacturing jobs back to the U.S. It’s a nice idea in theory, but in reality there’s no compelling reason for them to do this. Why should they voluntarily raise their costs and reduce their profit margins? Apple is hardly the only company doing business with Foxconn. Dell, HP, Cisco, and Intel are also major customers.
8. Invest in an independent research lab. This has been said better by others, but Apple’s success is due in large part to its narrow focus. People and capital used for such a lab wouldn’t be available to help with the things that Apple is great at. There are other ways that Apple can contribute to the public good without directing a ton of money toward basic research. In my view, the federal government is the right entity to be doing that (but that’s a whole other discussion).
7. More transparency on OS X and Mac plans. Like suggestion 10, the primary focus of this suggestion seems to be Mac Pro users. It’s true that the Mac Pro hasn’t gotten much attention from Apple over the past couple of years. Perhaps the biggest reason is that it doesn’t account for much of their revenue anymore. The one point from their suggestion that I would extrapolate and agree with is that Apple can definitely improve how they treat developers for their platforms. I’ve spent my career writing desktop and web applications on and for various versions of Windows, and Microsoft seems much more “pro-developer” (more information about development tools, free copies of software, training events, etc.). I wouldn’t expect Apple to try to become just like Microsoft in this regard (nor should they), but there are definitely some lessons Apple could learn.
6. Make the Apple TV more than a hobby. This is the first suggestion in the list that I like. I like the Apple TV enough that I own one for each TV in my house and have started buying them as gifts for family.
5. Offer streaming, subscription music. I’m not sure what I think of this suggestion. I avoided subscription music services in favor of buying music for years because I preferred the idea of owning it and being able to listen to it on whatever device I wanted. I like the experience I’ve had with Spotify so far, but I don’t know if I listen to enough music to justify the monthly cost. I’m not sure what Apple could bring to the space that would be better. Whether they do anything with streaming or not, what Apple really needs to do is re-think iTunes. As Apple has offered more and more content, iTunes has become more of a sprawling mess.
4. Inject some steroids into the Mac line. I disagree with this suggestion completely. Apple got it right with their focus on battery life and enough speed. In mobile phones and tablets, seemingly every manufacturer using Android as the OS focused on metrics like processor speed, camera megapixels, and features like full multi-tasking. The result: devices that had to be recharged multiple times over the course of a day. By contrast, the iPhone is plenty fast, but I can go a full day without having to recharge it. Multiple days can go by before I need to recharge the iPad. Apple has correctly avoided competing on specific measures like processor speed and how many megapixels their cameras have. They’re competing (and winning) on the experience of using their products.
3. Diversify the iOS product line. If the rumors are correct, Apple will be offering a smaller version of the iPad soon. The next iPhone will probably have a larger screen as well. But beyond those changes, I don’t think Apple should be in any hurry to diversify in the way Ars Technica suggests. By limiting the differentiation of their iOS-based products to storage size (and cost), Apple has chosen a metric that is both meaningful and easy for the typical consumer to understand. This makes Apple products easier to buy than the alternatives.
2. Make a larger commitment to OS security. I agree with this suggestion as well. Apple’s success in the market has made them a big enough target that virus and malware makers will spend time on them.
1. Cater to power users again. I see this suggestion as a variation on the theme of suggestion 7. I’m sure Apple could do something like this in a way that wouldn’t disrupt their current approach. Whether or not it would net them enough additional customers and revenue to be worthwhile is another discussion.
Introducing AutoPoco
I first learned about AutoPoco from this blog post by Scott Hanselman. But it wasn’t until earlier this spring that I had an opportunity to use it. I’d started a new job at the end of March, and in the process of getting familiar with the code base on my first project, I came across the code they used to generate test data. I used AutoPoco to generate a much larger set of realistic-looking test data than was previously available.
Last week, I gave a presentation to my local .NET user group (RockNUG) on the framework. The slide deck I put together is available here, and there’s some demo code here. The rest of this blog post will speak to the following question (or a rather rough paraphrase of it) that came up during a demo: is it possible to generate instances of a class with an attribute that has a wide range of values, save one?
The demo that prompted this question is in the AddressTest.cs class in the demo code I linked to earlier. In that class, the second test (Generate100InstanceOfAddressWithImpose) gives 50 of the 100 addresses a zip code of 20905 and a state of Maryland. The objective behind the question, then, might be to generate a set of data with every state except one.
After a closer look at the documentation and a review of the AutoPoco source code for generating random states, I came up with an answer. The Generate1000InstancesOfAddressWithNoneInMaryland test not only excludes Maryland from the state property, it uses abbreviations instead of full state names. The implementation of CustomUsStatesSource.Next adds a couple of loops (one if abbreviations are used, one if not) that keep generating random indexes until the resulting state is no longer in the list of states to exclude.
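The exclusion loop boils down to something like the following sketch. The base class, method signature, and state arrays here are approximations of AutoPoco’s actual source, not a verbatim copy:

```csharp
using System;
using AutoPoco.Engine; // DatasourceBase<T> and IGenerationContext, per the AutoPoco source

// Sketch of a parameterized datasource that excludes certain states.
// Member names are illustrative; the state lists are truncated for brevity.
public class CustomUsStatesSource : DatasourceBase<string>
{
    private static readonly string[] StateNames =
        { "Maryland", "Virginia", "Delaware", "Pennsylvania" }; // truncated
    private static readonly string[] StateAbbreviations =
        { "MD", "VA", "DE", "PA" }; // truncated

    private static readonly Random Random = new Random();
    private readonly bool _useAbbreviations;
    private readonly string[] _exclusions;

    public CustomUsStatesSource(bool useAbbreviations, params string[] exclusions)
    {
        _useAbbreviations = useAbbreviations;
        _exclusions = exclusions;
    }

    public override string Next(IGenerationContext context)
    {
        var states = _useAbbreviations ? StateAbbreviations : StateNames;
        string candidate;
        do
        {
            // Keep drawing random indexes until we land on a state
            // that isn't in the exclusion list
            candidate = states[Random.Next(states.Length)];
        } while (Array.IndexOf(_exclusions, candidate) >= 0);
        return candidate;
    }
}
```

The constructor parameters are the point: the same datasource serves the “everything but Maryland” case, the abbreviations case, or any other exclusion set the caller needs.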
The ability to pass parameters to custom datasources in order to control what kind of test data is generated is an incredibly useful feature. In the work I did on my project’s test generator, I used this capability to create a base datasource that generated numeric strings whose length was controlled by parameters. This allowed me to implement new datasources for custom ids in the application by inheriting from the base and specifying those parameters in the constructor.
Because AutoPoco is open source, if your project has specific needs, you can simply fork it and customize as you wish. Another value-add of a framework like this could be realized if you write multiple applications that share data. In such a scenario, test data becomes a corporate resource, with different sets generated and made available according to the scenarios being tested.
Another advantage of AutoPoco for test generation is that its use of plain old CLR objects keeps it independent of specific database technologies. I’m currently using AutoPoco with RavenDB; it will work just as well with the database technology (or ORM) of your choosing–Entity Framework, NHibernate, SQL Server, Oracle, etc.
AutoPoco is available via NuGet, so it’s very easy to add to whatever test assemblies you’ve currently got in your solutions. As long as you have public, no-arg constructors for the CLR objects (since AutoPoco uses reflection to work), you can generate large volumes of realistic-looking test data in virtually no time.
The Perils of Renaming in TFS
Apparently, renaming an assembly is a bad idea when TFS is your version control system.
Earlier this week, one of my co-workers renamed an assembly to consolidate some functionality in our application, and even though TFS said the changes were checked in, they weren’t.
I got the latest code the morning after the change, and got nothing but build failures. We’re using the latest version of TFS and it’s very frustrating that something like this still doesn’t work properly.
Ultimately, the solution was found at the bottom of this thread.
The only way I’ve found to avoid this kind of hassle is to create a new assembly, copy your code from the old assembly to the new one, change any references to the old assembly to use the new assembly, then delete the old assembly once you’ve verified the new one is working.
Please Learn to Code (Continued)
A couple days ago, I wrote a post on why Coding Horror is wrong to suggest people shouldn’t learn to code.
Here’s a much better post on the same subject by Jon Galloway (hat tip Scott Hanselman, and his e-mail Newsletter of Wonderful Things).
Please Learn to Code
I came across this post from Jeff Atwood in my Twitter feed this morning. It even sparked a conversation (as much of one as you can have 140 characters at a time) between me and my old co-worker Jon who agreed with Jeff Atwood in far blunter terms: “we need to cleanse the dev pool, not expand and muddy the water”.
While I understand Jon’s sentiment, “cleansing” vs. “expanding” just isn’t going to happen. Computer science as an academic discipline didn’t even exist until the 1950s, so it’s a very long way from having the sort of regulations and licensure of a field like engineering (a term that dates back to the 14th century). Combine that with the decreasing number of computer science degrees our colleges and universities award each year (much less the occasional elimination of an entire computer science department), and it’s no surprise that people without formal education in computer science are getting paid to develop software.
While it does sound crazy that Mayor Bloomberg made learning to code his 2012 New Year’s resolution, I’m glad someone as high-profile as the mayor of New York is talking about programming. When I was deciding what to study in college (way back in 1992), computer science as a discipline didn’t have a very high profile. While I knew programming was how video games and other software were made, I had to find out about computer science from Bureau of Labor Statistics reports.
Jeff’s “please don’t learn to code” is counterproductive–the exact opposite of what we should be saying. Given a choice between having more people going into investment banking and more people going into software development, I suspect a large majority of us would favor the latter.
I also don’t believe that the objective of learning to code has to be making a living as a software developer in order to be useful. The folks at Software Carpentry are teaching programming to scientists to help them in their research. People who test software should know enough about programming to at least automate the repetitive tasks. If you use a computer with any regularity at all, even a little bit of programming knowledge will enable you to extend the capabilities of the software you use.
We need only look at some of the laws that exist in this country to see the results of a lack of understanding of programming by our judges and legislators. I think that lack of understanding led to software patents (and a ton of time wasted in court instead of spent innovating). The Stop Online Piracy Act and the Protect IP Act are other examples of dangerous laws proposed by legislators that don’t have even the most basic understanding of programming.
As someone who writes software for a living, I prefer customers who understand at least a bit about programming to those who don’t, because that makes it easier to talk about requirements (and get them right). They tend to understand the capabilities of off-the-shelf software a bit better and understand the tradeoffs between it and a custom system. In my career, there have been any number of times where an understanding of programming has helped me find an existing framework or solution that met most of a customer’s requirements, so I and my team were able to focus our work just on what was missing.
Thanks Again StackOverflow!
About a month ago, I wrote a brief post about starting a new job. In it, I tipped my hat to StackOverflow Careers, for connecting me with my new employer. Yesterday, I received a package from FedEx. I was puzzled, since I didn’t recall ordering anything recently. But upon opening it, I discovered a nice StackOverflow-branded portfolio, pen and card with The Joel Test on it. In the pocket was a version of my profile printed on high-quality paper.
I appreciate the gesture, and thank StackOverflow Careers and the StackExchange team not only for the portfolio (which has already replaced my previous one) and for creating a great site for connecting developers with employers that value them, but for the whole collection of Q & A sites that make software development (and many other fields of endeavor) easier to learn.
From Web Forms to MVC
In the weeks since my last post, I’ve been thrown into the deep end of the pool learning ASP.NET MVC 3 and a number of other associated technologies for a healthcare information management application currently scheduled to deploy this July. Having developed web applications using webforms since 2003, I’ve found it to be a pretty significant mental shift in a number of ways.
No Controls
There are none of the controls I’ve become accustomed to using over the years. So in addition to learning the ins-and-outs of MVC 3, I’ve been learning some jQuery as well.
No ViewState
Because there’s no viewstate in MVC, any information you need in more than one view should be carried in the URL’s query string or the viewmodel, or be retrievable via some mechanism in your view’s controller. In the application I’m working on, we use Agatha.
More “Pages”
Each CRUD operation gets its own view (and its own viewmodel, depending on the circumstance). This actively encourages separation of concerns in a way that webforms definitely does not.
A Controller is a Lot Like Code-Behind
I’ve been reading Dino Esposito’s book on MVC 3, and he suggests thinking of controllers this way fairly early in the book. I’ve found that advice helpful in a couple of ways:
- This makes it quicker to understand where to put some of the code that does the key work of the application.
- It's a warning that you can put far too much logic in your controllers the same way it was possible to put far too much into your code-behind.
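To make the analogy concrete, here’s a minimal sketch of a thin controller action (every type name here is hypothetical; only Controller, ActionResult, and View come from MVC itself). The shape is: fetch, map to a viewmodel, hand off to the view. Anything heavier belongs in the layer below, just as a disciplined code-behind delegates rather than doing the work itself.

```csharp
using System;
using System.Web.Mvc; // Controller, ActionResult, View()

// All of these application types are illustrative, not from the real project.
public class Patient
{
    public string Name { get; set; }
    public DateTime DateOfBirth { get; set; }
}

public interface IPatientService
{
    Patient GetById(int id);
}

public class PatientDetailsViewModel
{
    public string Name { get; set; }
    public DateTime DateOfBirth { get; set; }
}

public class PatientController : Controller
{
    private readonly IPatientService _patients;

    // Constructor injection (this is the sort of thing StructureMap wires up)
    public PatientController(IPatientService patients)
    {
        _patients = patients;
    }

    public ActionResult Details(int id)
    {
        // Thin controller: delegate the real work, shape a viewmodel, return the view
        var patient = _patients.GetById(id);
        var model = new PatientDetailsViewModel
        {
            Name = patient.Name,
            DateOfBirth = patient.DateOfBirth
        };
        return View(model);
    }
}
```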
More to Come
This barely scratches the surface of my experience with MVC so far. None of the views I’ve implemented has been complex enough yet to benefit from the use of Knockout JS, but future assignments will almost certainly change this. We’re also using AutoMapper to ease data transfer between our domain objects and DTOs. In addition to using StructureMap for dependency injection, we’re using PostSharp to deal with some cross-cutting concerns. Finally, we’re using RavenDB for persistence, so doing things the object database way instead of using SQL Server has required some fundamental changes as well.
New Gig
Tomorrow will be my first day with FEi Systems. They design, build and maintain healthcare IT systems, including health information exchanges and electronic health record (EHR) systems. I’ll be helping their efforts in those areas as a senior developer. I’m looking forward to being back in the healthcare sector (I worked for another healthcare IT company earlier in my career). I think it can benefit a great deal from the deployment and usage of well-designed IT systems.
I wouldn’t have been aware of this company, or this opportunity, without Stack Overflow Careers. I was a beta tester on the original site (Stack Overflow), and since it launched it’s been a big help to me in solving a number of challenging technical problems on a variety of projects. Both the careers site and the Q&A site are worth your while if you write code for a living.
Retraction
Just came across this story relating to my January 25th blog post on Mike Daisey’s story of his visits to factories in Shenzhen. As it turns out they were indeed stories–large portions of his monologue have been revealed as complete fabrications.
This American Life has fully retracted the story, so I thought I would do the same since I linked to it when it first aired.
Inserting stored procedure results into a table
Working with one of my colleagues earlier today, we found that we needed a way to store the results of a stored procedure execution in a table. He found this helpful blog post that shows exactly how.
One thing we found that the original blog post didn’t mention is that this approach works with table variables as well. A revised example that uses a table variable is available as a gist on GitHub.
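The table-variable version is nearly identical to the temp-table version: declare the variable with columns matching the procedure’s result set, then INSERT ... EXEC into it. The procedure and column names below are made up for illustration:

```sql
-- Hypothetical procedure and columns; the table variable's shape
-- must match the stored procedure's result set exactly
DECLARE @Results TABLE
(
    ProductName NVARCHAR(40),
    UnitPrice   MONEY
);

INSERT INTO @Results (ProductName, UnitPrice)
EXEC dbo.GetMostExpensiveProducts;

SELECT * FROM @Results;
```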
Saving Changes is Not Permitted (SQL Server 2008 R2)
We just upgraded our development VMs at work, and I got bitten by one of the more annoying default settings in SQL Server Management Studio again. I imported some data for use in some queries and needed to change one of the column types. But when I tried to save the change, I got the dreaded “Saving changes is not permitted.” error.
Fortunately, this blog post directed me to the setting I needed to change in order for SSMS to do what I wanted.
A Brief Introduction to LINQPad
I presented a brief talk on LINQPad at RockNUG a couple of weeks ago. This post will elaborate on that presentation a bit, since the 30 minutes I had wasn’t nearly enough to do justice to a tool I only half-jokingly called the Swiss Army knife of .NET development.
In addition to enabling developers to write and execute LINQ queries, LINQPad can be used to execute SQL queries as well as compose and run code snippets in C#, VB.NET, and F#. LINQ can query a wide variety of collections and data, including SQL databases and XML. The ability to query XML with LINQ becomes quite powerful when the XML comes from WCF Data Services (an implementation of the OData protocol).
During my presentation, I queried a local version of the Northwind database, as well as the OData endpoint of Netflix. StackOverflow publishes their data via an OData endpoint as well. Additional producers of OData services can be found here, or in the Windows Azure Marketplace.
One of the nice features of LINQPad is the number of export options it provides for your query results. The results of any SQL or LINQ query written in LINQPad can be exported to Excel, Word, or HTML. The Excel and Word export capabilities give you the option of preserving the formatting LINQPad provides, or leaving it out. Once you’ve queried a database with LINQ, the results display allows you to toggle between the equivalent fluent LINQ, SQL and MSIL syntax. I demonstrated this feature by executing a SQL query against the Northwind sample database, then cutting and pasting the equivalent syntax to new query windows and running them to show that the query results were the same.
The LINQPad website pitches the tool as a replacement for SQL Server Management Studio. To test this proposition, I demonstrated LINQPad’s ability to execute stored procedures. I used the TenMostExpensiveProducts stored procedure in the Northwind database, and this script to show one way to use LINQPad to run stored procedures that take parameters.
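With a database connection selected, LINQPad surfaces stored procedures as methods on the typed context, so running one (with or without parameters) is a one-liner in a “C# Statements” query. The procedure names below follow Northwind, though the exact method names LINQPad generates may differ:

```csharp
// C# Statements in LINQPad, with a Northwind connection selected.
// Stored procedures appear as context methods; Dump() renders the results grid.
TenMostExpensiveProducts().Dump();

// A procedure that takes parameters is called the same way, with arguments:
CustOrderHist("ALFKI").Dump();
```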
LINQPad’s capabilities as a code snippet runner are further supported by its ability to reference custom assemblies and namespaces. So instead of dealing with all the overhead of Visual Studio just to write a console application, you could simply write it in LINQPad and reference any custom assemblies you need.
The latest version of LINQPad also has an extensibility model, in case you want to query databases other than SQL Server (or other data sources entirely).
One feature I wished I’d had time to delve into further was LINQPad’s ability to query Entity Framework models defined in Visual Studio. There’s a brief description of that capability here.
All of the query files from my presentation are available on GitHub.
Mr. Daisey and the Apple Factory
If you haven’t already heard this episode of This American Life, it’s definitely worth your time. I won’t look at any of my “iStuff” the same way again after hearing it. The suicides at Foxconn made the news last year (along with a mass suicide threat earlier this year), but this piece gives a lot of insight into the conditions that could drive people to kill themselves.
I found it difficult to listen to this piece and not feel complicit in how the workers at these plants are treated. I wish I knew how much more per product it would cost to improve working conditions (and hope I’d be a decent enough human being to pay extra).
Ninja UI
Since yesterday’s post about my goals for next year, I heard from my friend Faisal about a jQuery plugin he’s been working on called Ninja UI. It’s on github, so I’ll definitely be checking it out as part of my learning of jQuery for next year. Going beyond using open source tools to being a committer on one would be a big step forward for me.
Another Year Gone
It’s annual review time again, which means this year has gone by even more quickly than usual. Filling out my self-assessment was a good reminder of all the work I had a hand in completing. I’m still deciding on goals for 2012, and I’m posting all of them here so I can look back on them over the course of next year and track my progress.
- Learn jQuery. I got a bit of exposure to it this year through a couple of projects that I worked on, and a .NET user group presentation or two, but haven't done the sort of deep dive that would help me improve the look-and-feel of the web applications I build and maintain.
- Learn a functional programming language. I've been thinking about this more recently since some of our work involves the implementation of valuation models in code. I also came across this article in the November Communications of the ACM advocating OCaml. Since I work in a Microsoft shop, picking up something like F# might have a slightly better chance of making it into production code than OCaml or Haskell. Part of my objective in learning a functional programming language is to help me recognize and make better use of functional techniques in a language like C#, which has added more and more support for the functional programming style over the years.
- Give a few technical talks/presentations. This year, I presented on NuGet at my job, and on Reflector at RockNUG. Having to present on a tool or technology to a group has always been a great incentive to do some deep learning of a subject. It's also a chance to exercise some speaking skills (which developers need a lot more than they might think in order to be successful) and to handle a Q & A session. I haven't developed any new presentations yet, but some prospective topics include: LINQPad, elmah,
- Take more online training. We have access to Pluralsight .NET training through work. I watched quite a few of their videos over the course of the year. 2012 shouldn't be any different in that respect. I recently came across free webcasts on a variety of topics from DevelopMentor. Since they're downloadable as well as streamable, I'll definitely use my commute to watch some of them.
- Write a compiler. It's been awhile since I've cracked open "the dragon book", so I'm probably overdue to exercise my brain in that way. I found that suggestion (and a number of other very useful ones) here.
- Practice. I'd heard of the "code kata" idea before, but hadn't really explored it. Dave Thomas of Pragmatic Programmers has nearly a couple dozen here.
Introducing NuGet
Today at work, I gave a presentation on NuGet. I’ve suggested they consider it as an option to ease management of the open source dependencies of our major application, so it was natural that I present the pros and cons.
NuGet is a system for managing .NET packages. It’s not unlike RubyGems or CPAN (for Ruby and Perl, respectively), and while it has some work to do to be on par with those alternatives, they’re off to a very good start. Today’s presentation focused on just a few of the capabilities of NuGet, and I’ll recap some of them in this post.
The primary use case for NuGet is the management of open source dependencies in a .NET application. There are a number of key open source libraries that .NET developers like me have been using in projects for years. Upgrades were always a pain because of having to manage their dependencies manually. Many of these tools (NHibernate, NUnit, log4net, and more) are already available as NuGet packages at the NuGet Gallery. I used NHibernate and NUnit in my examples today. Another tool that proved quite useful in my demo was the NuGet Package Explorer. Some of its features include:
- Opening and downloading packages from remote feeds
- Opening local packages to view and change their metadata and contents
- Creating new packages (instead of fiddling with XML manually)
I wrapped up my presentation with two different examples of building NuGet packages without a manually-created .nuspec file as a starting point. The documentation provides examples of how to generate a .nuspec file from an existing DLL, and how to generate a NuGet package from a .csproj or .vbproj file. I published the rules engine (which I found in an answer to a stackoverflow.com question) and a test assembly I created to the NuGet Gallery earlier this evening. If you want to check them out, just search for Arpc.RulesEngine in the NuGet Gallery. I still need to publish the rules engine source as a package and/or push it to a symbol server. Once the enterprise story for NuGet becomes a bit clearer, I hope I have an opportunity to present on that as well.
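For reference, both documented approaches boil down to one-line nuget.exe commands (the assembly and project file names here are placeholders):

```shell
# Generate a .nuspec using an existing assembly's metadata
nuget spec -AssemblyPath MyAssembly.dll

# Build a package directly from a project file, passing MSBuild properties
nuget pack MyProject.csproj -Properties Configuration=Release
```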
Bloatware happens when you aren't the only customer
This article in Ars Technica reminded me of one of the things I never liked about PCs you bought from Dell, HP, or any other major vendor: bloatware. Every PC I’ve had that I didn’t build myself, and every Windows laptop I’ve owned, has come with multiple pieces of software that I didn’t want. The least offensive of these apps merely took up hard drive space. The worst made the computer slower and more of a hassle to use. Switching from Windows to Mac at home meant no more bloatware. Moving to an Intel-based Mac extended the no-bloatware experience to Windows VMs. Moving to the iPhone a couple of years ago (and sticking with it by buying an iPhone 4) seems to have spared me from vendor bloatware as well. This is especially important in the mobile space because today’s smartphones have a lot less storage to waste, and less powerful CPUs, than modern PCs.
Bloatware happens on the PC because even though we buy them, we aren’t the only customer. Every company with some anti-virus software to sell, a search engine they want you to use, or some utility they want you to buy wants to be on your PC or laptop. They’re willing to pay to be on your new machine, and PC vendors aren’t going to turn down that money. A similar thing seems to be happening on Android phones now. Here’s the key quote from the story:
"It's different from phone to phone and operator to operator," says Keith Nowak, spokesman for HTC. "But in general, the apps are put there to meet the operator's business and revenue needs." (emphasis mine)The money we pay for our voice and data plans isn't the only money that Verizon, AT&T, Sprint, & T-Mobile want. Some of these carriers have decided that they'll take money from other companies who want to put applications on the smart-phones they sell. This highlights one of the key differences between the way Google approaches the mobile phone market and the way Apple does it.
When it comes to Android, you and I aren’t Google’s customers–not really. The real customers are mobile phone hardware vendors like Motorola and HTC. They need to care how open and customizable Android is because they expect it to help them sell more phones. Making the OS free for the vendors is in Google’s interest because the bulk of their revenue comes from advertising. The more phones there are running Android, the more places their ads will appear. Android’s openness is only of interest to us as users to the extent it allows us to do what we want with our mobile phone.
Unlike Google, Apple is in business to sell us electronics. They expect iOS4 to help them sell more iPhones and iPads. But since you and I are the customers Apple is chasing, there’s no pre-loading of apps from third parties. That doesn’t mean they won’t feature apps that highlight the phone’s capabilities (Apple does plenty of that). Nor does it mean we can’t get apps from AT&T, just that putting them on and taking them off is our choice. There is a tradeoff in how long iPhone users wait for features compared with Android users. But I think the iPhone’s features all work better, and work together to create arguably the best smartphone available.