These days, I make part of my living doing what's called "software craftsmanship coaching." Loosely described, this means that I spend time with teams, helping them develop and sustain ways to write cleaner code. It involves introducing teams to things like the SOLID principles, design patterns, DRY code, pair programming, and, of course, automated testing and test-driven development (TDD). I've spent a lot of time contemplating these subjects and their economic value to organizations, even going so far as to create a course for Pluralsight.com on this very topic. And through that contemplation, I've come to realize that TDD is an extraordinarily nuanced practice, both in terms of the advantages it offers and the challenges it presents.
This post is not about TDD as a whole, so what I'd like to do is zoom in on one particular benefit offered by the practice. It's a benefit that tends to be overshadowed by the regression suite that TDD generates and the loosely coupled design that it encourages: a very tight, automated feedback loop. Consider what generally happens when you're working on a web application and you want to evaluate the effects of your most recent changes to the code base. You build the code and then run it, and running it generally means deploying it to some local web server and starting that server. Once the web server and your web application are running, you engage the GUI and navigate to wherever it is that will trigger your code to run. Only at this point do you get feedback about what you've done. TDD short-circuits this process by requiring only a build and the execution of a test suite.
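To make the contrast concrete, here's a minimal sketch in Python (my choice of language purely for illustration; the function and test names are hypothetical). The entire feedback loop is one command, with no server to deploy and no GUI to click through:

```python
# A hypothetical function under test. In a real project it would live
# in its own module, with the tests in a separate test file.
def apply_discount(price: float, percent: float) -> float:
    """Return the price reduced by the given discount percentage."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return price * (1 - percent / 100)


# Running `pytest` against this file yields pass/fail feedback in seconds.
import pytest


def test_apply_discount_reduces_price():
    assert apply_discount(100.0, 25.0) == 75.0


def test_apply_discount_rejects_out_of_range_percent():
    with pytest.raises(ValueError):
        apply_discount(100.0, 150.0)
```

Edit, run the tests, see the results. The whole cycle takes seconds, and that speed is the point.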
Of course, TDD isn't the only way to create a tight feedback loop, but it is a well-recognized one. It's also one that tends to spoil you. After becoming used to TDD, it's hard to go back to waiting through long cycle times between writing code and seeing the results. In fact, the effect tends to compound: you find yourself chasing other means of obtaining fast, automated feedback. It was this exact dynamic that got me hooked on the idea of static code analysis. If I could get quick feedback from unit tests about whether my code worked, why couldn't I get equally quick feedback about whether it was well written?
A Code Quality Feedback Loop?
Now, "well written" inherently invites a great deal of subjectivity, and it's not as though there is any universal agreement, even in a given language, as to what properties of code are ideal. But there are some pretty well established trends that get pretty wide agreement. It is preferable not to write classes and methods that are overly large or complex. It is preferable not to create modules that are too tightly coupled or needlessly interdependent. And, speaking of dependencies, it's better not to create cycles. It's pretty easy to argue that inheritance hierarchies shouldn't be too deep, method parameter rosters shouldn't be too long, and classes shouldn't be too overrun with methods.
But factoring all of these things and more into the mix, it gets hard to keep track of it all. I mean, it's easy enough to be in the middle of some monster 4,000-line method and think, "man, this method is waaay too big," but it's harder to notice when you're adding a few lines to a method that's already marginally too long. After all, it's not necessarily at the forefront of your mind, since you're probably in there chasing some infuriating bug.
Before giving up hope, though, consider things with which you may be more familiar, such as test coverage tools and compiler warnings. You can deliver code with minimal test coverage or even with boatloads of compiler warnings, but there's a nagging pull not to do so. Call it gamification or perfectionism or whatever you like, but it's there, even if you don't always obey it. There's pressure to fix these issues because they're constantly there, in your face. They're part of a pretty tight feedback loop.
So I encourage you to add static analysis tools to your feedback loop. I'm not really talking about the kinds of tools that alert you when you're not following the team's coding standards (go nuts with those if you want). Rather, I'm referring to the kinds of tools that show you things about your code like line count per method, cyclomatic complexity, number of methods in a class, and class cohesion. Set up tools that warn you when these metrics stray from what they generally look like in "clean code."
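To demystify what such tools measure, here's a minimal sketch using Python's standard ast module; the thresholds are invented for illustration, not canonical, and real tools (Pylint and Radon in the Python world, for instance) compute these and many more metrics far more robustly:

```python
import ast
import sys

# Illustrative thresholds -- real tools make these configurable.
MAX_FUNCTION_LINES = 30
MAX_PARAMETERS = 5


def check(path: str) -> None:
    """Print a warning for each function that exceeds a size heuristic."""
    with open(path) as source:
        tree = ast.parse(source.read(), filename=path)
    for node in ast.walk(tree):
        if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef)):
            length = node.end_lineno - node.lineno + 1
            if length > MAX_FUNCTION_LINES:
                print(f"{path}:{node.lineno}: {node.name}() is "
                      f"{length} lines (max {MAX_FUNCTION_LINES})")
            params = len(node.args.args) + len(node.args.kwonlyargs)
            if params > MAX_PARAMETERS:
                print(f"{path}:{node.lineno}: {node.name}() takes "
                      f"{params} parameters (max {MAX_PARAMETERS})")


if __name__ == "__main__":
    # Usage: python check_size.py somefile.py [more files...]
    for path in sys.argv[1:]:
        check(path)
```

Wire something like this, or better, an established tool, into your IDE or a save hook, and the nagging pull becomes constant, just like compiler warnings.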
What you're going to get out of this is not a bullet-proof, "one true way" to do things. Life isn't that simple, and people who tell you it is are selling you a bill of goods. What you're going to get is a growing understanding of the architectural tradeoffs buried within the code you write. The static analysis tool serves the same purpose as the rumble strips on highways, jolting you whenever you venture beyond standard usage. Sure, there might be reasons to veer onto the shoulder in certain odd circumstances, but usually you've just drifted over there through inattentiveness. Well, not anymore you won't.
If you're skeptical, just install such a tool and see what it says about your code, but don't take any action one way or another if you're not comfortable doing so. If you disagree with it, do some research and try to formulate an argument as to why. I'm not advocating that you revisit all of your programming decisions to achieve some number a tool says you should hit. I'm advocating that you make yourself aware of these numbers and the concepts that drive them so that you can have intelligent conversations about them and make informed decisions. And I'm advocating that you do this with a fast feedback loop, safely in the comfort of your own IDE.
The quick feedback here is the best part of all. The static analysis tools are just algorithms being executed. You're not submitting your code to peers for review or putting it on the internet to be blasted by mean-spirited trolls. You're just helping yourself to some automated feedback, with the understanding that you can keep helping yourself to it whenever you want. After enough time with this approach, you'll be prepared for the arguments that actual trolls and critics might offer up. And, hey, you might just learn some things and change some habits in ways that make you happy.