Imagine that you walk into a company and take a stroll through the software department. All around you, as far as the eye can see, developers toil away, staring into 17-inch CRT monitors. What would you think of that? Would you have to restrain yourself from jogging over to the HR department to enthusiastically apply for a job? Or would you thank your lucky stars that you worked somewhere else? I’m betting the latter.
One of the most penny-wise, pound-foolish strategies that a company can pursue is hiring a team of software developers, whose mean salary may be in the six-figure range, and then saving a few bucks by forcing them to do their work on obsolete hardware. It’s foolish because the productivity lost by these expensive workers far outpaces the cost of updated computers and monitors. Of course, this is pretty commonly known these days. You don’t see or hear about nearly as many companies skimping on second monitors or new machines for software developers.
And yet, it’s still fairly common to make developers use older versions of programming languages and frameworks. Now, this isn’t a completely direct parallel. Companies historically have let hardware age on developers’ desks mainly as a cost-savings strategy, whereas continuing to work on a “stable” version of a language or framework is generally a risk-minimization strategy: why port your code base to v-next when that could introduce bugs and it doesn’t matter to the users? That’s a fair argument, but when you pull back a level of abstraction, minimizing risk is, at its core, still about cost savings. In a company, everything ultimately comes down to revenue minus cost.
So why would I argue that it makes sense to upgrade to v-next, taking the risk and possibly incurring the cost associated therewith? Well, instead of answering that directly, how about I show you? Take a look at the following code that you might find in some kind of .NET-based online dating application.
public class GeographicRegion
{
    private readonly IEnumerable<DatingProfile> _profilesInRegion;

    public GeographicRegion(IEnumerable<DatingProfile> profilesInRegion)
    {
        _profilesInRegion = profilesInRegion;
    }

    public IEnumerable<DatingProfile> FindActiveTwentySomethings()
    {
        return _profilesInRegion.Where(profile =>
            profile.Age >= 20 && profile.Age < 30 && profile.IsActive);
    }
}
Nothing too remarkable. There’s a concept of “Geographic Region” and each region is handed, upon instantiation, a strategy for enumerating profiles found within it. It has a method called FindActiveTwentySomethings() that, unsurprisingly, looks for anyone with an age in the 20s and a setting on the profile indicating that the profile is active. Apart from a potential discussion of which sort of collection type might make the most sense, there’s absolutely nothing remarkable happening here. This code is so simple that it’s almost not worth discussing.
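For a sense of how this gets used, here’s a minimal sketch of some calling code. The post never shows the DatingProfile class itself, so the version below is just a hypothetical stand-in with the two properties the example needs.

using System;
using System.Collections.Generic;

// Hypothetical stand-in for the profile type, which the snippet above assumes but doesn't show.
public class DatingProfile
{
    public int Age { get; set; }
    public bool IsActive { get; set; }
}

public static class Example
{
    public static void Main()
    {
        // Hand the region a collection of profiles to enumerate.
        var profiles = new List<DatingProfile>
        {
            new DatingProfile { Age = 24, IsActive = true },
            new DatingProfile { Age = 35, IsActive = true },
            new DatingProfile { Age = 27, IsActive = false }
        };

        var region = new GeographicRegion(profiles);

        // Only the active 24-year-old comes back.
        foreach (var match in region.FindActiveTwentySomethings())
            Console.WriteLine(match.Age);
    }
}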
But let’s go way, way back in time and look at what someone might have written in the days of Visual Studio 2003.
public class GeographicRegion
{
    private readonly DatingProfile[] _profilesInRegion;

    public GeographicRegion(DatingProfile[] profilesInRegion)
    {
        _profilesInRegion = profilesInRegion;
    }

    public IEnumerable FindActiveTwentySomethings()
    {
        ArrayList matchingProfiles = new ArrayList();

        foreach (DatingProfile profile in _profilesInRegion)
        {
            if (profile.Age >= 20 && profile.Age < 30 && profile.IsActive)
                matchingProfiles.Add(profile);
        }

        return matchingProfiles;
    }
}
The first thing you’ll notice is that the code is longer, but not horribly so. The second thing you’ll probably notice is that the FindActiveTwentySomethings() method is not only longer, but also imperative rather than declarative. The code no longer says, “give me profiles where the age is in the 20s and the profile is active.” Instead it says, “first, declare an array list; next, loop through the items in the profiles and, for each of those items, do the following…” Finally, and perhaps most subtly, the lack of generics (those came with C# 2.0 in 2005) means that the method no longer enforces type safety. ArrayList and IEnumerable here both deal in Object. That doesn’t seem like an immediate problem, since the method only ever adds DatingProfile instances to the list, but if you’re a caller of this method, the signature gives you no such guarantee.
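To see what that costs the caller, here’s a hedged sketch of consuming the 1.1-era method, assuming a region variable holding an instance of the class above: the signature only promises a sequence of Object, so every element has to be cast back by hand.

// Calling the C# 1.1 version: IEnumerable yields Object, so the cast falls to the caller.
foreach (object item in region.FindActiveTwentySomethings())
{
    // This compiles no matter what the collection actually contains; a mistake only
    // shows up at runtime as an InvalidCastException.
    DatingProfile profile = (DatingProfile)item;
    Console.WriteLine(profile.Age);
}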
None of these things seems like a big deal in this small context, but imagine this stark difference writ large across an entire code base, application ecosystem, or software department. The C# 1.1 code shown above is more verbose, harder to read, harder to work with, and more error-prone during maintenance. This means that developers in such a code base spend more time floundering, troubleshooting, squinting to try to understand, and generally wasting time (and money) than their counterparts working in a modern code base.
Over the course of time, language and framework authors, like any software vendors, address shortcomings in their products and add features that make things easier and more efficient for their users. So, every time a new version of a language comes out, you can expect development in that language to trend toward more efficiency. As a company or a department, if you deliberately avoid such updates, it’s no different from letting hardware age on developers’ desks. Your team’s productivity (and morale) will suffer.
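To make that trend concrete, here’s a sketch of the same query method written against a newer compiler than the first snippet assumes. C# 6 introduced expression-bodied members, which shave the ceremony down even further while leaving the behavior unchanged.

// Same behavior as the LINQ version above, expressed as an expression-bodied member (C# 6+).
public IEnumerable<DatingProfile> FindActiveTwentySomethings() =>
    _profilesInRegion.Where(profile =>
        profile.Age >= 20 && profile.Age < 30 && profile.IsActive);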
It’s no longer common for companies to be penny-wise and pound-foolish about hardware. So don’t make the same mistake with your software. Have a plan to stay on the latest bits and keep your developers operating at peak efficiency. You don’t have to adopt everything that comes out right away, but you can’t afford to let it go too long.
Infragistics Ultimate 15.2 is here. Download and see its power in action!