A few things about test-driven development

When I talk with potential customers, I usually do not mention automated tests until my counterpart brings them up. The idea of automated tests is easy to misunderstand. Most of these customers do not have the technical background to see why automated tests matter for their applications. They may think that developing such tests takes additional time they will be charged for (and, in fairness, they are right).

But a conversation can go in any direction, and eventually most customers ask me what the real value of automated tests is. Then I have to explain why writing automated tests is important, answer their concerns, and spell out why a well-tested application is better than a quick hack.

Here are a few thoughts about automated tests. I hope they will be helpful for anyone who wants to build a good and stable application.

Automated testing

As I have said, the first concern usually comes down to the claim that tests take an "extraordinary amount of time". That phrase was used by one of my clients. He was going to test the application himself and thought that would be enough. It would not, though. It never is.

When a client tests his application manually, he tests its current state and nothing else. More than that, he tests only the current state of the latest feature.

Let us assume that a client tested a feature and it worked perfectly. But two months later the developer decided to change, let us say, the routing. Or a property name that was used in several algorithms. The developer did his best to make the corresponding corrections in those algorithms as well. He even used search to make sure the property was renamed everywhere. But he still missed one or two occurrences, or made a typo somewhere deep inside the logic.

Is such a situation possible? For sure. Was the developer in question a bad developer? No, he definitely was not. Such things happen even to the best of us.

How can the client check that, after this small refactoring, all existing features work as well as they did two months ago? If he tests the application manually, he has to verify every single feature once again. This is a huge amount of work, and the client does not have enough time for it, but that is not the biggest problem. The biggest problem is that he no longer remembers exactly how these features are supposed to work. Before running a new round of tests, he has to recall the scenarios he verified two months ago. Thank God if he has written them down somewhere.

So, should a customer keep the verified scenarios on paper? Here is a better solution: such things can be recorded with the help of RSpec (see the sketch after this list). If one does so, he:

  • a) has all scenarios in the same repository,
  • b) does not have to re-check everything manually after every small refactoring, RSpec does it for him, and
  • c) has detailed specifications of the project, ready to be compiled into paper specifications when needed.
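As a small illustration, here is how one of those scenarios might look once it is recorded in RSpec rather than on paper. The Order model and its total method are invented for this sketch and are not taken from a real project:

# spec/models/order_spec.rb
require "rails_helper"

RSpec.describe Order do
  it "sums the prices of its items" do
    order = Order.new
    order.items.build(name: "Book",   price: 10)
    order.items.build(name: "Poster", price: 5)

    expect(order.total).to eq(15)
  end
end

Two months later, after any refactoring, this scenario is re-checked simply by running the test suite.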

Specifications

That last point is important. Ruby on Rails has very nice gems for testing because the authors of Rails treat tests as more than code that checks the application's state. They treat the test code as specifications. This means that tests:

  • a) should be readable almost like paper specifications,
  • b) should be done before the corresponding features are developed and
  • c) should describe the way the application must work instead of describing the way it does work.

Here is an example of RSpec syntax:

describe Order do
  context "with no items" do
    it "behaves one way" do
      # ...
    end
  end

  context "with one item" do
    it "behaves another way" do
      # ...
    end
  end
end

Very readable, isn't it? The same holds for other test libraries. For example, Cucumber syntax looks like this:

Scenario: Create a subtopic page
  Given I am signed in as an admin
  And the following home page exists:
    | name    | path |
    | English | /    |

These scenarios may even be added by the customer if he knows the syntax. If he does not (for example, when RSpec is used), he can add empty specs like:

it "should be Tracking Assessment for Medicare primary payment provider" do
end

The developer will read the text in the spec's name, add code to the body, and then make the feature satisfy the test.
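As a rough sketch of that hand-over (the spec text, model, and methods below are invented for illustration, not taken from a real project), it might look like this:

# The customer leaves a pending spec:
it "marks the order as paid after a successful payment" do
end

# The developer fills in the body, then implements the feature to satisfy it:
it "marks the order as paid after a successful payment" do
  order = Order.create!(status: "new")

  order.register_payment!

  expect(order.status).to eq("paid")
end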

Blind spots and testing before coding

The client mentioned at the beginning of this article was also concerned that an application with automated tests would still have some "blind spots". He was right: there are always gaps in test coverage, even when the developer writes automated tests. But if one tests the application manually, there will be many more blind spots! It is easy to forget something. With tests, however, issues are much easier to resolve even when a gap is discovered.

There is a procedure that helps the development team take care of blind spots. Suppose the customer or the developer finds a bug. They know they have tests and wonder why this bug was not caught by them. What should happen next? The developer should examine the test code and find the gap. Once it is found, everybody knows why the bug appeared. The developer should add a new test (or a few) to cover the blind spot, make sure that the new tests fail (this is the correct behavior, because the bug is still there) and then fix the bug. Once that is done, the new tests should pass.
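For instance (the bug and the names below are hypothetical), if the reported bug is that an empty order shows no total at all, the developer first writes a spec that reproduces it:

# Written first; it must fail while the bug is still in the code.
it "returns a total of zero for an order with no items" do
  order = Order.new

  expect(order.total).to eq(0)
end

Only after watching this spec fail does he fix the code. Once the spec passes, that particular blind spot is covered for good.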

And here we have found a good reason why tests should be written before the corresponding features are developed. Tests are good when they fail. If the developer adds tests after the main code is completed, the tests always pass, the application looks ideal, and he never actually knows whether they are able to catch the errors they are intended for. To make sure that they are, he would have to "break" the feature, verify that the corresponding tests fail, and then put everything back. That is more time consuming than writing the tests before the main code, isn't it?

Checking yourself

I have heard another concern, worded roughly as follows: if both the tests and the features are written by the same engineer, does that not make the whole exercise useless? The developer can repeat the same code and duplicate the same error. An error present in both the logic and the tests becomes invisible.

Well, this is not necessarily true. The tests are usually based on specifications written by the customer, or at least with the customer's help. Thus they do not "repeat" the logic (there is no logic in the application yet, actually); they look like a set of simple rules. That set of rules then has to be implemented as algorithms written in a specific language, and the algorithms will look like "do-while" and "if-else" statements.

Therefore, tests cannot "repeat" the feature code. These two parts of the work look quite different.
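Here is a hedged sketch of that difference; the discount rule and the names are invented for this illustration. The spec states a rule from the customer's specification, while the implementation is where the branching logic lives:

# spec/models/order_spec.rb -- a plain rule, no branching
RSpec.describe Order do
  it "applies a 10% discount to orders over 100" do
    order = Order.new(subtotal: 120)

    expect(order.total).to eq(108)
  end
end

# app/models/order.rb -- the "if-else" logic lives here, not in the spec
class Order
  attr_reader :subtotal

  def initialize(subtotal: 0)
    @subtotal = subtotal
  end

  def total
    subtotal > 100 ? subtotal * 0.9 : subtotal
  end
end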

Potential problems

As a conclusion: automated tests are a very powerful practice that gives the customer control. But there are some potential issues related to tests.

Actually, I would say there is only one issue: what to test? The tests should cover the code well, but they should not be redundant. They should run quickly in the background, letting the developer see immediately how the application reacts after every change to the code. Sometimes a developer even has to drop some tests because they are too slow.
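One common way to keep this feedback loop fast is RSpec's metadata filtering; the sketch below tags slow examples and excludes them from the default run (the example names are invented):

# spec/spec_helper.rb
RSpec.configure do |config|
  # Skip examples tagged :slow unless they are requested explicitly.
  config.filter_run_excluding slow: true
end

# spec/imports/csv_import_spec.rb
RSpec.describe "CSV import" do
  it "imports a large file", slow: true do
    # ...
  end
end

# The slow examples can still be run in a separate pass, e.g. rspec --tag slow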

But this issue does not mean that the test-driven development approach is flawed. It is a powerful approach that helps developers raise the quality of their work, as long as they use it carefully and consciously.

Final Answers

Let us sum up:

1. Tests are necessary to increase the quality of the application. It is much easier to write down working scenarios using dedicated testing libraries than to keep those scenarios in mind.

2. Tests are more than a special kind of code. They are specifications. They explain a lot, they should be very readable, and no developer familiar with tests would agree to work without them.

3. Tests are good when they fail! If a feature is broken while the corresponding tests still pass, it means the tests do not work.

4. Thus, it is much more effective to write tests before developing the features. A developer should first describe the way the application must work, and only then write the feature code and make it satisfy the corresponding tests. This is the most effective way to develop.

I hope every conscientious customer will choose to have a stable application, with all the means in place to keep its behavior correct.
