Here at CloudBees, we often sing the praises of continuous integration and its siblings, continuous delivery and continuous deployment. Having a great unit test suite is a key component of enabling these techniques. When you have a comprehensive suite of quality unit tests, covering (at the very least) the most critical paths of your business logic code, the benefits are huge. The boost in developers' trust in the code alone would be enough to make unit testing worth it.
We also can't help but mention the design benefits unit testing can bring to your codebase. Not all code is equally testable, and unit tests are great teachers in the art of writing low-coupled, highly maintainable code. It goes without saying that there are tons of benefits to actually finding bugs before they reach production, where they would be way more expensive to track and fix.
With all that in mind, we must admit: writing tests isn't always a walk in the park. That's especially true when you have an existing application riddled with external dependencies.
How do you proceed in such a scenario? That's what today's post is all about. Let's get to it.
Best Practice #1: Separate Commands From Queries
Have you ever heard of CQS (command-query separation)? This is a software design principle that basically consists of two premises:
Functions (or methods, depending on what language you happen to use) can broadly be divided into, you guessed it, commands and queries.
A function/method should be one of those things, but not both.
Okay, but what are queries and commands? They're pretty much what their names suggest: a command is a type of function that does something. It generates an observable side effect. A query is a function that computes and returns a value but doesn't cause any side effects. In case that's not clear enough, let's see some actual code (with examples in the C# language):
```csharp
public bool Contains(DateTime date)
{
    return date >= this.startDate && date <= this.endDate;
}
```
This is a "contains" method in a hypothetical type to represent a range of dates. (Well, not quite so hypothetical, since I've implemented types like this more than once, but still.) Is this method a command or a query? If you've answered "query," you're right. Now, consider the following code:
```csharp
public void Save(Product product)
{
    var jsonString = serializer.Serialize(product);
    File.WriteAllText(FILE_PATH, jsonString);
}
```
What about the code above? Is that method a command or a query? This is a no-brainer, right? It's definitely a command because it causes side effects. The method will save the content of "jsonString" to the file represented by the path in the FILE_PATH constant. If the file doesn't exist, it's going to be created. On the other hand, if it already exists, it's going to be overwritten.
Okay, so that's the definition of commands and queries. With that out of the way, let's get to the why. Why is this such a big deal?
Here's why: a query method that causes no side effects and depends only on its inputs is what the functional paradigm calls a "pure function." Pure functions can neither cause nor consume side effects; to be useful, they must access only the values they receive as parameters. So, basically, queries are deterministic: called with the same input, they always return the same output. This makes pure functions easy to reason about. They're also cacheable by nature, which can improve performance.
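To illustrate the cacheability point, here's a minimal memoization sketch. The `Memoizer` class and its shape are mine, not from any particular library; the point is that caching is only safe because the wrapped function is pure:

```csharp
using System;
using System.Collections.Generic;

public static class Memoizer
{
    // Wraps a pure function so that repeated calls with the same input
    // return the cached result instead of recomputing it.
    public static Func<TIn, TOut> Memoize<TIn, TOut>(Func<TIn, TOut> pure)
    {
        var cache = new Dictionary<TIn, TOut>();
        return input =>
        {
            if (!cache.TryGetValue(input, out var result))
            {
                result = pure(input);
                cache[input] = result;
            }
            return result;
        };
    }
}

// Usage: the second call never recomputes, because a pure function
// is guaranteed to return the same output for the same input.
// var square = Memoizer.Memoize<int, int>(n => n * n);
// square(4); // computed
// square(4); // served from cache
```

Try that with a command (say, a function that writes a file) and the cache silently swallows the side effect on every call after the first, which is exactly why this trick is reserved for queries.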
But the main selling point of pure functions—at least for me—is that they are intrinsically testable. Since they neither cause nor consume side effects, they can be called from unit tests without the need for fakes or complicated setups. Pure functions—at least, useful ones—also always have return values, which means their tests usually consist of very simple assertions. That, in its turn, can make for tests that are simpler, shorter, and more readable.
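To make that concrete, here's what a test for the earlier "Contains" query might look like. This is a sketch: it assumes a hypothetical `DateRange` class wrapping the method shown above, and it uses NUnit-style assertions to match the test example later in this post:

```csharp
using System;
using NUnit.Framework;

public class DateRange
{
    private readonly DateTime startDate;
    private readonly DateTime endDate;

    public DateRange(DateTime startDate, DateTime endDate)
    {
        this.startDate = startDate;
        this.endDate = endDate;
    }

    // The pure query from before: no side effects,
    // and the output depends only on the inputs.
    public bool Contains(DateTime date)
    {
        return date >= this.startDate && date <= this.endDate;
    }
}

public class DateRangeTests
{
    [Test]
    public void Contains_ChecksWhetherDateFallsInsideRange()
    {
        var range = new DateRange(new DateTime(2018, 1, 1), new DateTime(2018, 1, 31));

        // No fakes, no setup: just call the function and assert on the result.
        Assert.IsTrue(range.Contains(new DateTime(2018, 1, 15)));
        Assert.IsFalse(range.Contains(new DateTime(2018, 2, 1)));
    }
}
```

Notice there's no arrangement beyond constructing the object under test. That's the payoff of purity.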
Best Practice #2: Separate Concerns in Your Code
For our second best practice, let's resort to another example. Consider the following piece of code:
```csharp
public double AverageTemperatureInPeriod(DateTime start, DateTime end)
{
    var repository = new ClimateReadingRepository();
    IEnumerable<ClimateReading> readings = repository.All();
    var filtered = readings.Where(x => x.Created >= start && x.Created <= end);
    var ordered = filtered.OrderBy(x => x.Temperature);
    var withoutLowest = ordered.Skip(1);
    return withoutLowest.Average(x => x.Temperature);
}
```
Now in real life, that would've probably been a two-liner (or a somewhat long one-liner), but I've decided to detail each step. The code should be easy enough to understand:
We start out by creating a new instance of the "ClimateReadingRepository" class.
Then, we use the repository to get all of the climate readings from some persistent storage (which, in 9 out of 10 cases, is a relational database).
After that, we filter the retrieved readings by the specified start and end dates.
We then order the filtered readings by temperature and assign the result to the "ordered" variable.
We skip the first reading (which is effectively the reading with the lowest temperature—pretend that makes sense) and assign the resulting sequence to the "withoutLowest" variable.
Finally, we calculate and return the average of the temperatures.
This seems simple enough, right? But how would you go about unit testing this code? Well, it's not that hard to come up with some scenarios:
There's no reading in the specified period. Should the method return zero?
There's exactly one reading in the period. What should the result be? (We're skipping the first reading after ordering. Is that a bug?)
There are two readings in the period. The result should be the higher of the two temperatures.
There are three readings in the period. The result should be the average of the two greatest temperatures.
And so on. As I've said, not that hard. But here's my point: how would you test this? How to actually write the test code?
```csharp
var sut = new ClimateCalculator();
var expected = // what should go here???
var result = sut.AverageTemperatureInPeriod(new DateTime(2018, 1, 1), new DateTime(2018, 2, 1));
Assert.AreEqual(expected, result);
```
The problem we have is that the method "AverageTemperatureInPeriod," which should concern itself only with the calculation, is tightly coupled with getting the objects to operate on. We have to break those two concerns apart. How can we do that? The easiest approach is to alter the signature of "AverageTemperatureInPeriod," having it receive the necessary climate readings as parameters:
```csharp
public double AverageTemperatureInPeriod(IEnumerable<ClimateReading> readings, DateTime start, DateTime end)
{
    var filtered = readings.Where(x => x.Created >= start && x.Created <= end);
    var ordered = filtered.OrderBy(x => x.Temperature);
    var withoutLowest = ordered.Skip(1);
    return withoutLowest.Average(x => x.Temperature);
}
```
Now the method doesn't worry about how the objects are obtained. It could retrieve them from a database, a CSV file, wherever. From the code's point of view, all that matters is that it's getting a sequence of "ClimateReading" instances on which to operate. With these concerns now separated, it becomes easy to write a test for the method. A true unit test, mind you—one that doesn't perform any IO and is reliable, deterministic, and blazingly fast. Let's see an example of such a test:
```csharp
[Test]
public void AverageTemperatureInPeriod_ReturnsTheAverageIgnoringLowestTemperature()
{
    var readings = new List<ClimateReading>
    {
        new ClimateReading(20, new DateTime(2018, 1, 1)),
        new ClimateReading(22, new DateTime(2018, 1, 15)),
        new ClimateReading(19, new DateTime(2018, 1, 31)),
        new ClimateReading(19, new DateTime(2018, 2, 4))
    };
    var sut = new ClimateCalculator();

    Assert.AreEqual(
        21,
        sut.AverageTemperatureInPeriod(readings, new DateTime(2018, 1, 1), new DateTime(2018, 1, 31)));
}
```
See? By properly separating our concerns, we could very easily write a test for our ClimateCalculator class. The test is short and correctly highlights the reasoning behind the calculation while completely ignoring implementation concerns, such as persistence or the file system.
Best Practice #3: Inject Your Dependencies
This final best practice is the logical culmination of the previous points. In the previous section, we changed the "AverageTemperatureInPeriod" method to make it more easily testable. But think about it for a second: those instances still have to come from somewhere, right? They didn't just pop into existence. Somewhere in the code, there's probably still a line (or more) instantiating the ClimateReadingRepository class. For the sake of argument, let's say this class talks to a SQL Server database. In that situation, any code that depends on ClimateReadingRepository couldn't be unit tested. The answer to this problem is DI (dependency injection). For some reason, this term has gotten somewhat of a bad reputation in certain software circles. In my opinion, that reputation is totally undeserved. In the broadest terms, DI is nothing more, nothing less than passing the dependencies an object needs as parameters, in the form of interfaces. By doing that, you add to your code what author Michael Feathers calls a "seam:"
a place where you can alter behavior in your program without editing in that place.
Placing seams on strategic points in your code enables you to go and replace "problematic" dependencies with harmless in-memory fakes for unit testing purposes. That way, you can test what truly matters without infrastructure concerns and implementation details bogging you down.
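To illustrate, here's one way the climate example could be wired up with DI. The `IClimateReadingRepository` interface and the in-memory fake are hypothetical names I'm using for this sketch; the `ClimateReading` type is the one from the earlier examples:

```csharp
using System.Collections.Generic;
using System.Linq;

public interface IClimateReadingRepository
{
    IEnumerable<ClimateReading> All();
}

// A production implementation would talk to SQL Server. This one is the
// "harmless in-memory fake" that stands in for it during unit tests.
public class InMemoryClimateReadingRepository : IClimateReadingRepository
{
    private readonly List<ClimateReading> readings;

    public InMemoryClimateReadingRepository(IEnumerable<ClimateReading> readings)
    {
        this.readings = readings.ToList();
    }

    public IEnumerable<ClimateReading> All() => this.readings;
}

public class ClimateCalculator
{
    private readonly IClimateReadingRepository repository;

    // The dependency arrives as an interface: that's the seam.
    // The class never decides which implementation it gets.
    public ClimateCalculator(IClimateReadingRepository repository)
    {
        this.repository = repository;
    }
}
```

In a unit test, you construct ClimateCalculator with the in-memory fake; in production, with the SQL Server-backed implementation. Nothing inside the class has to change between the two.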
Remember: Unit Testing Is Worth the Price
Tutorials on unit testing often give the impression that the practice is a piece of cake. Well, on greenfield applications, maybe. But I guarantee you that, as a typical software developer, you won't be working on those most of the time. In the real world, most development happens on brownfield applications: legacy beasts with humongous, untested codebases. And with external dependencies. Lots of them. But don't let that discourage you. Unit testing might be hard in such scenarios, but it's definitely worth the effort. As you've just seen in today's post, not all is lost. By applying some well-known guidelines, you can lighten the burden of bringing unit tests to a legacy application. Time to roll up your sleeves and get to work. Until next time!