Improve Java Code Coverage and Quality with Unit Tests and JaCoCo

And how do we know that the tests we write are worth writing? There are some criteria to consider:

We want to make sure that the best-tested parts of the code are the parts that are most likely to contain bugs.

We want to focus our tests on parts of the application that are critical, the parts where bugs are most likely to lead to a bad outcome for our customers.

We don’t want to write tests that repeatedly cover the same areas of the code while ignoring other parts of the code.

Let’s start by trying to figure out what parts of the code are most likely to contain bugs.

If we had to make a general assumption about where bugs hide in code, we’d look at the code that is the most complex.

But how do we figure out which code is the most complex?

Cyclomatic Complexity

One common heuristic is called cyclomatic complexity.

It’s been around for a long time; Thomas McCabe invented it in 1976.

A simple description of the algorithm goes like this:

Assign one point to account for the start of the method.

Add one point for each conditional construct, such as an if condition.

Add one point for each iterative structure.

Add one point for each case or default block in a switch statement.

Add one point for any additional boolean condition, such as the use of && or ||.

The higher the score, the more complex a method is.
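As a worked example (a hypothetical method, not from the demo application), here is how the scoring rules above add up for a small Java method:

```java
public class ComplexityDemo {
    // Scoring by the rules above:
    //   +1  start of the method
    //   +1  the for loop (iterative structure)
    //   +1  the if condition
    //   +1  the && inside the if
    //   +1  case 0
    //   +1  case 1
    //   +1  default
    //   --------------------------------
    //    7  total cyclomatic complexity
    static int classify(int[] values, boolean strict) {
        int score = 0;
        for (int v : values) {
            if (v > 10 && strict) {
                score += 2;
            }
            switch (v % 3) {
                case 0: score += 1; break;
                case 1: score -= 1; break;
                default: break;
            }
        }
        return score;
    }

    public static void main(String[] args) {
        System.out.println(classify(new int[]{12, 4, 9}, true)); // prints 3
    }
}
```

A score of 7 is under McCabe's suggested ceiling of 10, but you can see how quickly loops, conditionals, and boolean operators push the number up.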

A paper authored by McCabe for the National Institute of Standards and Technology suggested that you should keep the score at 10 or below.

When working with cyclomatic complexity keep in mind that in the end, a person has to declare whether a section of code is critical; any number that’s calculated by any algorithm is just a guide to that decision.

NOTE — You should be aware that some people don’t like using cyclomatic complexity.

Many companies are using SonarQube to provide code quality metrics for their software.

One of the metrics provided by SonarQube is cyclomatic complexity.

However, in my opinion, it comes too late in the process.

SonarQube is usually run on code that’s already been pushed to git.

It can monitor a feature branch, but in this instance you want a quick feedback cycle, one that doesn’t involve a push to git and then waiting for a server to process your branch.

That’s where JaCoCo comes in.

Introducing JaCoCo

JaCoCo is an open source Java software quality tool for measuring code coverage, showing you which lines in your code have been tested by the unit tests you’ve written.

Along with coverage, JaCoCo also reports on the complexity of each method, and tells you how much of the complexity in a method remains untested.

Let’s see how to add JaCoCo support to our calculator service.

All we need to do is add a few lines to the POM file.

Add the JaCoCo plugin entry under project/build/plugins in the POM, along with the supporting configuration under project. Now all you need to do is run the command mvn test jacoco:report.
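The original XML snippets were embedded as images and are missing here; a typical jacoco-maven-plugin entry for project/build/plugins looks like the following (the version number is an assumption — check for the current release):

```xml
<plugin>
  <groupId>org.jacoco</groupId>
  <artifactId>jacoco-maven-plugin</artifactId>
  <version>0.8.4</version>
  <executions>
    <execution>
      <goals>
        <goal>prepare-agent</goal>
      </goals>
    </execution>
  </executions>
</plugin>
```

The prepare-agent goal attaches the JaCoCo agent to the test JVM so that coverage data is recorded while the tests run.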

This runs all of the unit tests in your project and creates an HTML report of the code coverage information.

You can find this report in the target/site/jacoco directory in your project.

If we look at the report, we’ll see that we’re missing quite a bit. That’s a lot of red.

Before we go on, let’s go over the columns in the table so we understand what we’re looking at and what we need to improve.

The Element column gives the packages in the current application.

You can use this column to drill down into the code to see exactly what is covered and what isn’t.

We’ll get to that in a bit, but first we’ll look at the other columns.

Missed Instructions and Cov. — This gives a graphical and percentage measurement of the number of Java bytecode instructions that have been covered in tests.

Red means uncovered, green means covered.

Missed Branches and Cov. — This gives a graphical and percentage measurement of the number of branches that have been covered in tests.

A branch is a decision point in your code and you need to provide (at least) a test for each possible way a decision could go in order to get complete coverage.
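To make that concrete, here is a hypothetical method with a single decision point; complete branch coverage requires at least one test per branch:

```java
public class BranchDemo {
    // One decision point → two branches; complete branch coverage
    // needs at least one test exercising each branch
    static String sign(int x) {
        if (x >= 0) {
            return "non-negative"; // true branch
        }
        return "negative";         // false branch
    }

    public static void main(String[] args) {
        System.out.println(sign(5));  // exercises the true branch
        System.out.println(sign(-5)); // exercises the false branch
    }
}
```

A single test calling sign(5) would leave the false branch red in the JaCoCo report, even though the method itself shows as partially covered.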

Missed and Cxty — Here’s where we find the cyclomatic complexity score for your source code.

At the package level, this is the sum of the scores for all the methods in all of the classes in the package.

At the class level, it’s the sum of scores for all of the methods in the class, and at the method level, it’s the score for the method.

Missed and Lines — This is the number of lines of code and how many lines don’t have complete coverage.

Missed and Methods — This is the number of methods and the number of methods that don’t have complete coverage.

Missed and Classes — This is the number of classes, including inner classes, and the number of classes that don’t have at least some code coverage.

Let’s return to the Element column.

If you click on a package name, you’ll see a similar screen with the classes in a package in the Element column.

Here’s what it looks like if you click the com.demo link. If you click a class name, you’ll see the methods in the class. And finally, if you click on the name of a method, you’ll see the class’s source code, scrolled to the method. The code is colored red, yellow, or green to indicate whether there is no, partial, or complete code coverage for each line.

The class name is highlighted in green to show that the default constructor has been invoked by the empty test’s loading of the Spring Application Context.

The calculator method was also invoked, since its @Bean annotation puts an instance of CalculatorImpl into the Application Context as well.

We see at the package level that we have 0% coverage in the com.calculator package, 37% coverage in the com.controller package, and 58% coverage in com.demo. The only reason we have any coverage at all is that the @SpringBootTest annotation in DemoApplicationTests started up a Spring Application Context, which loaded the constructors and the method annotated with @Bean.

This demonstrates an important point; you can trigger code coverage without any tests, but you shouldn’t.

Calling code from tests without confirming the changes caused by calling the code is not a valid test.

You can trick Sonar and JaCoCo, but code reviewers should verify that code coverage reflects values that are actually validated.


Viewing Unit Test Coverage in JaCoCo

Now we should write some tests.

We can start with the test we already have, DemoApplicationTests.

There’s not much to verify here, but we can make sure that we’re loading up the correct implementations of our business logic.

In our trivial program it’s clear which implementation is being loaded into the Application Context, but with larger programs that include libraries written by others, you might accidentally depend on the wrong implementation of an interface.

With classpath scanning, you also might miss classes or REST endpoints that you thought were being loaded.

Here’s a test to validate that we are instantiating the right things. If we run our test coverage again with mvn test jacoco:report and then drill down to the method level on DemoApplication, we see that nothing’s changed in the coverage report.
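The real test uses @SpringBootTest to start the context and @Autowired to pull in the beans; the essence of the check can be sketched in plain Java like this (the names are assumed from the demo, and the lookup method stands in for the Application Context):

```java
public class WiringCheckDemo {
    interface Calculator { double process(String expr); }

    static class CalculatorImpl implements Calculator {
        public double process(String expr) { return 0.0; } // stub body
    }

    // Stand-in for asking the Application Context for the Calculator bean
    static Calculator lookupCalculatorBean() {
        return new CalculatorImpl();
    }

    public static void main(String[] args) {
        // The assertion the wiring test makes: the context loaded the
        // implementation we intended, not some other Calculator
        Calculator bean = lookupCalculatorBean();
        System.out.println(bean instanceof CalculatorImpl); // prints true
    }
}
```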

We aren’t going to add a test for main because that would launch the application and we don’t want to do that in a unit test.

But now we are actually testing to make sure that our application is loading the correct classes.

Remember, things like cyclomatic complexity and code coverage reports are tools to help people understand the quality of the tests and the code.

In the end, a person has to judge if the tests are valid.

Let’s add a test for our REST endpoint.

If you look at the code, you’ll notice that while there are Spring annotations to mark it as a REST endpoint, to map a method to a URI, and to extract data out of the request, you don’t need Spring or an Application Context to test the business logic for this class.

Let’s write our unit test without using Spring at all. Since this is a unit test, we are only testing the functionality within the class; everything outside of the class can (and should) be replaced with a mock implementation.

We have a simple implementation of Calculator, and then have two tests that cover the two possible paths through the controller method (normal return value and exception).
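The shape of that test can be sketched without any framework at all (the controller and Calculator here are simplified stand-ins for the demo's real classes, not its actual code):

```java
public class ControllerTestDemo {
    interface Calculator { long eval(String expr); }

    // Simplified stand-in for the REST controller: Spring annotations
    // removed, only the logic under test remains
    static class CalcController {
        private final Calculator calc;
        CalcController(Calculator calc) { this.calc = calc; }

        String calculate(String expr) {
            try {
                return String.valueOf(calc.eval(expr)); // normal path
            } catch (IllegalArgumentException e) {
                return "error: " + e.getMessage();      // exception path
            }
        }
    }

    public static void main(String[] args) {
        // Mock implementation covering the normal-return path
        CalcController ok = new CalcController(expr -> 42);
        System.out.println(ok.calculate("6 * 7")); // prints 42

        // Mock implementation covering the exception path
        CalcController bad = new CalcController(expr -> {
            throw new IllegalArgumentException("bad expression");
        });
        System.out.println(bad.calculate("oops")); // prints error: bad expression
    }
}
```

Because the mock Calculator is injected through the constructor, each test controls exactly which path the controller takes.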

If we look at our code coverage now, we see progress! Our controller package now has 100% code coverage.

Now we have to write tests for the business logic.

Clearly, this is where we have the most complex code in the application.

The package has a total complexity of 31, with 21 points of that complexity coming from one single method, process in CalculatorImpl.

This is where we should focus our efforts.

Initial Unit Test Results

There are a few things to note in these tests.

First, there is once again nothing about Spring in these tests.

In general, you should avoid loading a Spring Application Context for your tests, as it slows them down greatly.

Next is the way we work through the test cases.

This business logic returns different outputs for different inputs.

Rather than repeating yourself over and over, use a data-driven test to specify the expected inputs and the expected outputs.

JUnit has built-in parameterized support that is simple to use and outputs different test results for each data entry.

You can learn more about it in the JUnit documentation.
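The data-driven pattern itself is framework-independent; stripped of JUnit, it amounts to a table of inputs and expected outputs driven through the code under test (addAll here is a trivial hypothetical stand-in for the calculator logic):

```java
public class DataDrivenDemo {
    // Trivial hypothetical stand-in for the logic under test:
    // adds up the numbers in an "a + b + c" expression
    static int addAll(String expr) {
        int sum = 0;
        for (String token : expr.split("\\+")) {
            sum += Integer.parseInt(token.trim());
        }
        return sum;
    }

    public static void main(String[] args) {
        // One row per case: input expression and expected output
        Object[][] cases = {
            {"1 + 2", 3},
            {"10 + 20 + 30", 60},
            {"7", 7},
        };
        for (Object[] c : cases) {
            int actual = addAll((String) c[0]);
            if (actual != (Integer) c[1]) {
                throw new AssertionError(c[0] + ": expected " + c[1] + " but was " + actual);
            }
        }
        System.out.println("all cases passed");
    }
}
```

JUnit's parameterized runner does the same thing, but reports each row as its own named test result, which makes failures easier to pinpoint.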

Finally, there are tests for negative cases as well.

We want to make sure we test more than just the “golden path” through the code.

We also need to understand what triggers exceptions, and what exceptions will be triggered.

After adding these tests and seeing them pass, let’s see what our code coverage looks like.

That’s a lot of coverage with only a few test cases, but let’s do better.

Let’s look at the class level, so we can see what method-level coverage looks like. We can drill down to the code to see what parts remain untested in process, and in shouldEvaluate. Even though we are getting to a pretty good level of test coverage, there are lots of branches in the code that aren’t being tested.

It looks like we need to add a few more expressions to our test set to trigger these branches.

With these new tests, we can recheck our code coverage and see our improvements. We again drill down to the code to see what parts remain untested in process, and in shouldEvaluate. That looks pretty good.

The only things that remain untested in process are two default switch conditions that throw exceptions.

(That code is actually currently unreachable, but it’s a good practice to include a default clause in case future changes trigger unexpected situations.)

Fixing the Bugs

So, we’re done, right? Actually, we aren’t.

Despite having tests that pass and nearly 100% code coverage, there are two bugs in this program. Have you seen them? Take a minute to try to find them.

Here are a couple of test cases that expose the bugs:

{"6 / 3", 2, null},
{"1 - 1 * 2", -1, null}

Running these tests produces:

[INFO] Running com.calculator.CalculatorTest
[ERROR] Tests run: 16, Failures: 2, Errors: 0, Skipped: 0, Time elapsed: 0.169 s <<< FAILURE! — in com.calculator.CalculatorTest
[ERROR] testProcess[14: CalculatorTest(6 / 3)=2, throws null](com.calculator.CalculatorTest) Time elapsed: 0.013 s <<< FAILURE!
java.lang.AssertionError: expected:<2.0> but was:<18.0>
at com.calculator.CalculatorTest.testProcess(CalculatorTest.java:60)
[ERROR] testProcess[15: CalculatorTest(1 - 1 * 2)=-1, throws null](com.calculator.CalculatorTest) Time elapsed: 0.001 s <<< FAILURE!
java.lang.AssertionError: expected:<-1.0> but was:<0.0>
at com.calculator.CalculatorTest.testProcess(CalculatorTest.java:60)

If you look on line 137 in CalculatorImpl, in the function shouldEvaluate, there’s a = instead of a - in the if statement’s condition.

Also, on line 106 in CalculatorImpl, in the process function, the code multiplies instead of divides when it evaluates a /.
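The division bug can be reduced to a one-line contrast (a hypothetical reconstruction, not the article's exact CalculatorImpl code), and it reproduces the failed assertion's values exactly:

```java
public class DivideBugDemo {
    // Hypothetical reconstruction of the '/' case before the fix
    static double evalDivisionBuggy(double left, double right) {
        return left * right; // BUG: multiplies where it should divide
    }

    // After the fix
    static double evalDivisionFixed(double left, double right) {
        return left / right;
    }

    public static void main(String[] args) {
        System.out.println(evalDivisionBuggy(6, 3)); // prints 18.0 — the "but was" value
        System.out.println(evalDivisionFixed(6, 3)); // prints 2.0 — the expected value
    }
}
```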

Fixing these problems is easy if you can find them, but in order to find them, you need to supply data that truly represents all of the possible inputs to the code.

This is one of the reasons why it’s best for developers to write their own tests; the developer often has the best idea about what kind of data is going to be passed in.

Code coverage numbers aren’t enough.

Once we fix the bugs, we re-run our tests, they pass, and our code coverage is actually slightly better.

Refactoring

We could stop here, but the process method should really be refactored.

Its complexity score of 21 is far higher than we should have for a single method.

One simple refactoring is removing duplicate code.

The code that applies the operators to the numbers appears three times.

It should be broken into its own method.
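A sketch of what the extracted helper might look like (a hypothetical reconstruction, not the article's exact code):

```java
public class RefactorDemo {
    // Extracted helper: the operator application that previously appeared
    // three times inside process()
    static double applyOperator(char op, double left, double right) {
        switch (op) {
            case '+': return left + right;
            case '-': return left - right;
            case '*': return left * right;
            case '/': return left / right;
            default:  throw new IllegalArgumentException("unknown operator: " + op);
        }
    }

    public static void main(String[] args) {
        System.out.println(applyOperator('-', 1, 2)); // prints -1.0
    }
}
```

Each call site in process then shrinks to a single line, and the operator logic (including its default clause) is tested in one place.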

Now that we have unit tests with good code coverage, it’s easier to have confidence when making these sorts of changes.

Our code now looks like this. And when we re-run our tests and code coverage with mvn test jacoco:report, we see that we’re in the green. That’s a lot better.

We’ve reduced complexity and increased code coverage while ensuring that our program still worked, even after making a change.

Now that we have validated that the functionality works locally, we can have confidence that it is ready for a code review in a pull request.

Testing is something that many developers avoid doing.

But with a few simple tools and some understanding of the process, testing helps you spend less time tracking down bugs and more time solving interesting problems.

Just remember these tips:

JaCoCo can help you get code coverage metrics locally.

Be sure to write tests for complex parts of the codebase.

Code coverage isn’t everything; bugs can still exist in code with 100% coverage.

Refactor complex sections of code to make them less complex.

These opinions are those of the author.

Unless noted otherwise in this post, Capital One is not affiliated with, nor is it endorsed by any of the companies mentioned.

All trademarks and other intellectual property used or displayed are the ownership of their respective owners.

This article is © 2019 Capital One.
