Test cases are important. But what exactly is a test case?
A test case is a set of conditions or variables under which a tester will determine whether a system under test satisfies requirements or works correctly.
The process of developing test cases can also help find problems in the requirements or design of an application.
Irrespective of the approach you follow (TDD or BDD), unit testing, integration testing, and end-to-end testing are all important.
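To make that concrete, here is what a single test case can look like in code. This is only a minimal sketch, assuming a Jest-style test runner and a hypothetical `validateEmail` function; the input is the set of conditions, and the assertion is the expected outcome.

```typescript
// A hypothetical function under test.
function validateEmail(input: string): boolean {
  return /^[^\s@]+@[^\s@]+\.[^\s@]+$/.test(input);
}

describe('validateEmail', () => {
  // One test case: a condition (the input) plus the expected outcome.
  test('accepts a well-formed address', () => {
    expect(validateEmail('jane@example.com')).toBe(true);
  });

  // Another test case covering the unhappy path.
  test('rejects an address without a domain', () => {
    expect(validateEmail('jane@')).toBe(false);
  });
});
```

Each test names a scenario, sets up the conditions, and checks the result; a suite of these is what provides the safety net described below.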
Recently at Synerzip, one of our projects underwent a major change: approximately 48,000 lines of code were added and approximately 32,000 lines were removed. That amounts to rewriting roughly 40% of our front-end code. Such a huge change can’t realistically be perfect, and no one would dare attempt a change of that size without a considerable number of tests in place. This particular change was important and necessary because most of the features we wanted to introduce depended on it. The project was a migration from Polymer 0.5 to 1.0. Since Polymer itself had changed so much, the test cases were the only thing we could rely on: irrespective of what we changed, the tests told us whether we had broken anything.
In this project, we followed a stringent process. Every change/patch was first run on a “try server”. Once the try run succeeded without any failures, we initiated a two-step review, and we merged the patch to master only when both reviewers were satisfied with the changes.
For example, we had a change that looked perfect (technically: 100%; behaviourally: almost). When we tested the patch on the try server, roughly 10% of the tests failed, which is a relatively small percentage for such an enormous change. Still, we had to fix those issues before we could merge the change into master. Without the tests, we could have merged the changes into master without realizing that the product was no longer intact (somewhere, something would have been broken). That would have resulted in costly, time-consuming repairs to fix hundreds of newly introduced bugs, discovered much later by QA or users. This is an excellent example of test cases, used properly, paying off.
Oftentimes we developers tend to be lax about writing tests, don’t we? As a result, we end up writing the bare minimum of tests or skipping them entirely.
We often assume our code changes will just work. If a review process is involved and the reviewer spots the missing tests, the code (and the product) is saved. Otherwise, the consequences surface over time. Skipping tests is never beneficial.
How often do we begin to write software (without tests), abandon it midway for various reasons, come back to it after a break, and try to take it to completion? But do we still understand the software? Have we jotted down the use cases (good and bad) somewhere? Too often the answer is no, because some of us are so busy implementing the idea that we disregard important aspects, test cases being one of them. How frequently do we want to change a piece of code (written by ourselves or by somebody else) that looks ugly, yet hold back from refactoring it only because we are not sure whether the functionality has enough tests covering it? In my experience, more than 50% of the time developers refrain from modifying such code even when there is test coverage, because coverage only tells you that the functionality is exercised, not that the use cases are.
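To see why coverage alone is not enough, here is a made-up sketch (the `applyDiscount` helper and the numbers are hypothetical, using the same Jest-style runner as above): one test executes every line, so the coverage report shows 100%, yet the use cases that actually worry us remain untested.

```typescript
// A hypothetical helper with more use cases than lines of code.
function applyDiscount(price: number, percent: number): number {
  return price - price * (percent / 100);
}

// This single test touches every line, so line coverage reports 100%.
test('applies a 10% discount', () => {
  expect(applyDiscount(200, 10)).toBe(180);
});

// Yet none of these use cases are exercised, and any of them could
// break silently during a refactor:
//   applyDiscount(200, 0)    -> should the price stay unchanged?
//   applyDiscount(200, 150)  -> should a discount above 100% be rejected?
//   applyDiscount(-50, 10)   -> is a negative price even valid input?
```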
When working in collaboration, we depend on other developers’ changes, and the productivity of the people involved varies. When one person’s code changes, the other has to (re)test their own. For instance, consider a frontend developer and a backend developer who agree on a design and then continue their development independently. For the sake of this example, let us say the design is complete. Both developers have written unit tests and integration tests on their respective ends, and it is time to integrate the code. Typically, the integration is tested end-to-end by the frontend developer. To make the scenario worse, let’s say the feature they developed has around 30 use cases. Manually testing these use cases takes time, and some can easily be missed (unless the developer maintains a checklist). For any issue in one of these use cases, and for any further change in the frontend or backend, the developer has to retest all of them, over and over, until every use case is addressed and the feature is handed over to QA. Such testing is tedious and boring. It can be avoided by writing end-to-end (E2E) tests: after every change, the developer can simply run the E2E suite to ensure that the change does not break the product or the new feature’s use cases.
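A minimal sketch of such an E2E test is below, assuming Playwright as the driver and a hypothetical order-creation feature; the URL, selectors, and texts are placeholders, not the real project’s.

```typescript
import { test, expect } from '@playwright/test';

test('user can create an order and see it in the list', async ({ page }) => {
  // Hypothetical staging URL for the feature under test.
  await page.goto('https://staging.example.com/orders');

  // Drive the UI exactly the way a user (or a manual QA checklist) would.
  await page.click('text=New order');
  await page.fill('#item-name', 'Widget');
  await page.fill('#quantity', '3');
  await page.click('button:has-text("Save")');

  // The assertion passes only if the frontend and the backend agree,
  // so the integration is verified end to end.
  await expect(page.locator('.order-row').first()).toContainText('Widget');
});
```

One such test per use case, and rerunning the whole checklist after a change becomes a single command instead of an afternoon of clicking.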
As developers we must understand: any new code we write should have positive and negative tests validating it. This not only boosts our confidence in the changes we introduce, it also helps the reviewer and the reader understand our code better, and gives both parties the confidence to accept or reject the changes. It becomes easier to understand the developer’s intent and to see which scenarios were considered or missed. Unit tests help other developers understand how to use (and how not to misuse) the functions. End-to-end tests give the product manager an overview of the feature. Moreover, we ourselves can be certain that at any point in the future our code can be refactored or updated, by us or by somebody else, without worrying about breaking the existing use cases. All in all, having test cases is a win/win.
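As a closing illustration, here is a hedged sketch of positive and negative tests for a small, hypothetical helper (again assuming a Jest-style runner):

```typescript
// A hypothetical helper; its tests double as documentation of how to
// use it and how not to misuse it.
function parseQuantity(raw: string): number {
  const value = Number(raw);
  if (!Number.isInteger(value) || value <= 0) {
    throw new RangeError(`invalid quantity: ${raw}`);
  }
  return value;
}

describe('parseQuantity', () => {
  // Positive test: the intended way to call the function.
  test('parses a plain integer string', () => {
    expect(parseQuantity('3')).toBe(3);
  });

  // Negative tests: the misuses, and what a caller should expect.
  test('rejects zero, fractions and non-numeric input', () => {
    expect(() => parseQuantity('0')).toThrow(RangeError);
    expect(() => parseQuantity('2.5')).toThrow(RangeError);
    expect(() => parseQuantity('abc')).toThrow(RangeError);
  });
});
```

A reviewer reading just the test names already knows which scenarios were considered and which were missed, and the next person to refactor this function knows exactly what must keep working.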