The adventure begins...
I started to learn how to create automated tests about a year ago. I didn’t have strong technical skills or an IT education, so even the most obvious things were novelties to me. I was very happy when I started to understand the concept of test automation, and my excitement hit the roof when I successfully executed my first script. My first thought was: I want this in every project I work on.
I immediately proposed implementing automated tests in the projects I was working on, and the idea was accepted by the rest of the team. The first effects were visible after 2-3 weeks, when I ran the automated test cases on a new version of the software: the first defects were caught, and I could report them minutes after deployment. I started to realize what a powerful skill I was developing… until brutal reality hit me hard with its welcome-to-the-real-world hammer.
I didn’t give myself enough time for research at the beginning - time to read more books and articles, watch webinars and tutorials, or discuss difficult concepts with more experienced people. I didn’t give myself enough time for training, for learning best practices, for starting the real work at a higher level. Instead, I went YOLO.
As a result, I ended up with ugly code that was hard to maintain and couldn’t adapt to changes in the tested software. My work became less effective, and I had to spend more time updating my tests than improving them or creating new ones.
I don’t regret that, but when I look at it now with the benefit of hindsight, I know I could have started my adventure with automated testing better. Much, much better. So please take a few minutes to read about my failures and try not to copy them ;)
Think before you test
Before you start writing automated tests for the project you are currently working on, stop for a moment and try to answer this seemingly simple question: ‘Why, how and what do I want to test?’ First of all, you must realize that you won’t be able to test everything. Many non-functional tests, called ‘-ilities’ tests (e.g. usability, maintainability, availability or extensibility), are mostly manual.
There is also another thing: tested software will eventually go live and will be used by different people in various ways. Even if you were able to identify all possible ‘thought patterns’ and describe every interaction between features, trying to automate all of those use cases would be very time-consuming and ineffective.
Instead, you should start with some high-level strategy, e.g.:
- Ask yourself this: ‘Why do I want to test?’ For instance, to have a safety net against regression. Or to run more tests in less time.
- Then: ‘How do I want to test?’ For example: initially I will focus only on the standard use cases. I will add a new case whenever a defect is reported and fixed, so I will know immediately if the issue reappears in the future.
- Finally: ‘What do I want to test?’ For example: I want to test feature X and how it integrates with features Y and Z. I need to verify that the user journey ‘X -> Y -> Z’ works as intended. I won’t check whether the journey ‘X -> Z -> Y -> Z’ works, because it’s not standard user behavior. (But if future manual tests show that this scenario has a defect, I will add it to my automated tests, according to the ‘how’ part of my strategy.)
Use any means you need to plan this effectively. Personally, I love to use pen and paper to design each state of my tests, or to write down a step-by-step scenario in the test management tool or directly in the code, in the test case method (usually as a docstring comment for future reference).
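As a sketch of that last option, here is what a scenario-as-docstring can look like in Python’s unittest. The journey steps and the class name are made-up placeholders, not real project code:

```python
import unittest

class TestUserJourney(unittest.TestCase):
    def test_standard_journey(self):
        """
        Scenario (standard user journey, per the 'what' of the strategy):
        1. Open feature X and perform the default action.
        2. Continue to feature Y and confirm the data from X carried over.
        3. Finish in feature Z and verify the expected end state.
        """
        # The actual test steps would go here.
        pass
```

Six months later, when you no longer remember why the test exists, the docstring tells you exactly what the case was meant to cover.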
Store elements in variables
In my previous post, I encouraged you to store website elements as variables. I strongly recommend storing anything you can as a variable. Let me give you an example I experienced in my first project with automated tests.
I wanted to test validation errors on a form that contained 6 required fields. First, I submitted an empty form (so 6 errors were displayed), then I filled in one field and hit the submit button (5 errors were displayed), and I repeated this until the whole form was correctly sent. After each step, I checked whether the validation errors were visible and had the correct labels. That’s 21 assertions in total. I thought the validation error text wouldn’t change during the whole project (so naive!), so I didn’t store any labels as variables (yes, you may laugh all you want, it’s fully deserved). Naturally, after a couple of weeks, all the error messages were changed. Nothing major - we just had to add a dot at the end of each sentence. But that was enough to fail my tests, so I had to manually edit each occurrence in the code and add this ‘.’ character 21 times.
When another change was requested by the product owner a few weeks later, I was prepared. I wasn’t cursing my stupidity. I didn’t waste my time. I just had to make a quick change in one place. It felt good.
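A minimal sketch of that fix, assuming a hypothetical six-field form (the field names and messages below are made up for illustration): keep every label in one structure, so a copy change like the trailing dot becomes a single edit.

```python
# All validation labels live in one place; when the copy changes,
# only this dictionary needs editing.
REQUIRED_FIELD_ERRORS = {
    "first_name": "First name is required.",
    "last_name": "Last name is required.",
    "email": "Email is required.",
    "phone": "Phone is required.",
    "city": "City is required.",
    "zip_code": "Zip code is required.",
}

def expected_errors(filled_fields):
    """Return the labels we expect for the fields that are still empty."""
    return [message for field, message in REQUIRED_FIELD_ERRORS.items()
            if field not in filled_fields]
```

Each assertion then compares what the page actually shows against `expected_errors(...)`, instead of a hard-coded string repeated 21 times.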
Do not repeat the code
This is somehow related to the previous paragraph, but in a broader sense. In order to explain this, I will once more use my own mistake.
Almost all web applications developed at DeSmart contain some sort of user management, meaning that part of the product is available only after logging in to the service. From the automated testing point of view (using Selenium), in order to successfully complete each test case, I had to go through the login process wherever it was required.
At the end of the project, I had to maintain around 20 test suites that required logging in. My problem was that I hadn’t extracted the login process into a separate method in a separate file (epic facepalm). A couple of days after going live, the product owner redesigned the homepage, and the login modal became available under a different selector. In addition, the login button changed its label. Boom: 40 changes required in my tests. Sweet (not)!
To save time, you should always consider moving reusable parts of the code outside of the test suite, into a separate file. Then you can simply use an import statement to add the external method to your tests.
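A sketch of what that extraction could look like for the login flow, assuming a Selenium-style driver. The module name, URL and selectors are hypothetical; the point is that every suite imports the same helper:

```python
# helpers/auth.py (hypothetical module name)
# The URL and selectors below are made up for illustration.
LOGIN_URL = "https://example.com/login"
EMAIL_SELECTOR = "#email"
PASSWORD_SELECTOR = "#password"
SUBMIT_SELECTOR = "button[type='submit']"

def log_in(driver, email, password):
    """Complete the login flow; if a selector changes, fix it here once."""
    driver.get(LOGIN_URL)
    driver.find_element("css selector", EMAIL_SELECTOR).send_keys(email)
    driver.find_element("css selector", PASSWORD_SELECTOR).send_keys(password)
    driver.find_element("css selector", SUBMIT_SELECTOR).click()
```

In a test suite, `from helpers.auth import log_in` and one call at the start of each case would have reduced my 40 edits to a couple of lines in a single file.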
Write independent tests
Test cases shouldn’t depend on each other. If one test case creates something, another case shouldn’t rely on that item. Let me give you another real-life example.
In 2015, we developed a web service where users can rent venues. In order to test the request for proposal (RFP) lifecycle, I created several cases. The first one created an RFP. The second verified that the RFP was correctly displayed in the venue manager’s account. The third checked that responding to the RFP worked as expected. The fourth logged in to the user account and checked whether a response had been received. The fifth case sent the client’s response. The sixth…
I believe you can already see the problem, right? If one test case fails, all the following cases automatically fail, too. That doesn’t mean there are defects in the next steps - the tests simply cannot be completed, because they refer to objects that were created incorrectly, or weren’t created at all, in the previous step.
You should simply avoid any ‘communication’ between your cases. It should be possible to run each test case separately in any order necessary.
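One common way to achieve that, sketched below with a made-up `create_rfp` helper standing in for the real setup code: every test case builds its own fixture in `setUp` instead of relying on an earlier test’s leftovers.

```python
import unittest

def create_rfp(venue="Sample Venue"):
    """Hypothetical setup helper: create a fresh RFP and return it."""
    return {"venue": venue, "status": "submitted", "responses": []}

class TestRfpDisplay(unittest.TestCase):
    def setUp(self):
        # This case creates its own RFP; it never depends on another test.
        self.rfp = create_rfp()

    def test_rfp_is_listed_for_venue_manager(self):
        self.assertEqual(self.rfp["status"], "submitted")

class TestRfpResponse(unittest.TestCase):
    def setUp(self):
        # Same here: a fresh RFP per case, so the tests run in any order.
        self.rfp = create_rfp()

    def test_manager_response_is_recorded(self):
        self.rfp["responses"].append("We have availability.")
        self.assertEqual(len(self.rfp["responses"]), 1)
```

Setup costs a little duplication, but if the RFP-creation case breaks, the display and response cases still produce meaningful results.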
Create small, simple scenarios
Generally, it’s not a good idea to create huge, extremely complicated end-to-end cases that check every page, every state and every nook. Such scenarios tend to run for several minutes. They are awesome - noted - but only when everything runs without errors. However, when something is not working correctly, one of two things happens:
- the test records a failure and continues the execution - you can’t be 100% sure that the rest of the scenario executed correctly (e.g. assertions could be made on the wrong elements),
- the test throws an error and stops the execution - everything that was supposed to be tested after this point is not tested at all, so you don’t get any results.
If you split bigger scenarios into smaller ones, it will generally extend the total execution time (if you don’t run your tests in parallel), and you will check fewer things within one case. On the other hand, your tests will be less interdependent, and an error raised in one case will not affect the others. Basically, you will receive more complete test results. Also, in my opinion, smaller cases are easier to maintain and adapt better to changes in the tested software.
There will be more
I shared just a few tips on how to create automated tests more effectively, but more will come in the next couple of weeks, so stay tuned!
Maybe you would like to share your experience or describe the best practices you use to ensure a high-quality testing process? I’m looking forward to the discussion, so don’t hesitate to leave a comment.