A tester’s life in IT projects seems easy: you get the information that a given feature is ready for testing and you verify whether it works as intended. If the implementation is correct, then your job is done. If not, you report an issue that describes the defect and then wait until it’s fixed. But I can assure you that there’s more to it. There is real business value in extensive software testing in IT projects. To define that value, we should first answer two questions: why do we test and how do we do it?
Why do we test
Learn and then challenge the product (but mind the context!)
A product usually goes through several phases of the System Development Life Cycle (SDLC) before it can be released to end users. During that time the whole team creates a lot of project-related artifacts: product canvas, backlog, documentation, wireframes, designs, requirements, manuals, source code. All those items try to describe and resolve the problem or meet the need that was the main reason to develop the application in the first place. But in the end, the product must face the real world. And an imaginary one, too.
I believe that the sooner and more often it happens, the better. And the most efficient way to achieve that is regular testing. At DeSmart, we try to understand the needs, goals and frustrations of the potential users, so when we test our software, we try to walk in their shoes. A dedicated tester not only verifies that all requirements are fulfilled, but also checks whether the software works as intended when you interact with it in unexpected ways that were not described anywhere before (hence the earlier reference to the imaginary world). He or she also makes sure that the product is usable at all.
I think that this approach results in many questions, doubts and suggestions. It may force the whole team to revisit an already finished part of the application. It may cause redesign. Heck, it can even cause us to give up on some part of the product. But it’s good, it’s healthy, because it helps us to understand the problem better and deliver a more usable product.
I always like to say that being a tester is not a profession - it’s a state of mind. Unlike developers who should be driven by creativity while analyzing or coding, testers should be focused on destructive thinking (which also requires some kind of creativity). ‘I wonder if something will break when I do this…’ - tester’s mind is conditioned for this type of thinking.
When testers try to explore every nook and cranny of the product by running different sets of scenarios, especially ‘non-standard user behavior’ scenarios, the results may reveal different types of defects that amaze the developers. I think it’s a very good exercise to put the tested product in this type of crossfire of different mindsets.
Don’t trust devs
I’m always willing to listen to what developers have to say about a feature they’ve developed or a defect they’ve fixed. I acknowledge their opinion. I respect their knowledge and experience. But I don’t trust them (if any DeTeam member is reading this - sorry, it’s nothing personal). Why? Because I believe that they don’t fully understand the technology they’re working with (again, guys, nothing personal… darn, I’m not helping myself, am I?).
Nowadays, there are more and more ready-to-use solutions on the market: libraries, plugins, whole frameworks - tools that let you develop a product in the desired environment. It’s cool, and it can speed some processes up significantly. But due to hidden dependencies, it can also backfire.
Let’s imagine the following scenario without proper internal testing: part of the source code is changed - just a minor change that shouldn’t affect any other part of the application. As a result, there’s a conflict in one of the libraries that allows the application to run in the Firefox browser. The developer quickly checks the application in Chrome and, since everything works like a charm, releases the changes. Most probably the problem will be reported after a couple of days, and by then it could be very hard to debug and understand its cause.
Now let’s go the same route, but with internal testing in place (I will jump straight to the part after the developer releases his changes): since the application has changed, it should be checked in different browsers. Chrome: passed. Safari: passed. IE/Edge: passed. Firefox: boom, something’s wrong, raise the alarm! The developer receives feedback very quickly and it’s easy for him to track down the change in the code that caused the error. Thus, it’s much easier to debug the issue and fix it.
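This browser-matrix routine can be sketched in a few lines. Everything below is illustrative - the browser names, the `run_browser_matrix` helper and the simulated Firefox failure are invented for this example, not part of any real project’s API:

```python
# Sketch of the cross-browser check described above: run the same smoke
# check against every supported browser, so a Firefox-only regression is
# caught immediately instead of days later.

BROWSERS = ["chrome", "safari", "edge", "firefox"]

def run_browser_matrix(check, browsers=BROWSERS):
    """Run `check(browser)` for each browser and collect pass/fail results."""
    results = {}
    for browser in browsers:
        try:
            check(browser)
            results[browser] = "passed"
        except AssertionError as exc:
            results[browser] = f"failed: {exc}"
    return results

# Stand-in smoke check simulating the scenario from the text:
# a dependency conflict breaks rendering in Firefox only.
def smoke_check(browser):
    assert browser != "firefox", "page failed to render"

print(run_browser_matrix(smoke_check))
```

In a real setup, `smoke_check` would drive an actual browser (e.g. via a WebDriver), but the shape of the feedback loop is the same: one failing cell in the matrix points straight at the offending browser.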
To sum it up: the tester’s job is to verify different hardware/software configurations and catch all kinds of unexpected behaviors. But the most beautiful side-effect of this task is helping to educate developers about how their own technology works.
‘Internal’ Product Owner
Before we begin developing a product, we run an analysis phase. For example, we invite the clients to our office for a workshop and try to squeeze out as much information as possible at this early stage. The tester should be a part of the analytical team. Not only can he start statically testing any artifacts that have already been created (designs, wireframes, documentation), but he can also start to gather and organize knowledge about the product.
During the development phase, the tester checks every part of the application. Again, he gains a lot of knowledge about the product - both from the technical and the business point of view. As a result, he can be a great support for the rest of the team: an advisor who can explain requirements, clarify concerns or dispel doubts. In the end, communication and information flow become more effective.
Reducing the losses... and maybe costs
If you google ‘cost of fixing bugs’, you will find dozens of charts with an exponential curve showing how much money you will lose if you don’t do proper testing in the early stages of the SDLC. Some of them are exaggerated, some are more reasonable. But I believe that nowadays - given that you can use agile methodologies and continuous integration tools - the actual cost (both in time and money) of fixing an issue in the production environment is comparable to fixing it in earlier phases (well, unless you’re developing some life-critical system or you publish changes only once or twice a year).
I feel that when a defect is found in the production environment, it is not your budget, but the immeasurable aspects of your business that are at stake: your reputation, reliability and end-users’ satisfaction. Well-planned testing in the early stages of development will not eliminate this threat, but it can greatly reduce the risk of end users finding out that there’s something wrong with the application.
High quality is a sum of all the elements (and many more) mentioned above. Please remember that delivering a good product is the goal that should be shared by all the members of the team.
How do we test
‘We test fast. We find all bugs. We test EVERYTHING!’ That’s something you probably would like to hear. Well, you won’t.
The testing process needs some preparation time. A tester cannot just launch the application and start testing without giving a single thought to how he’s going to do it. It requires some design and a well-fitting approach.
The first type of testing, usually applied when a new user story is delivered, is analytical testing. It is based on a systematic examination of the product’s risks or its defined intended behavior. For example, each user story should contain acceptance criteria (AC) - a list of requirements that captures how a given feature should work. It is essential to verify whether every detail from the AC list has been properly implemented. This is the absolute core of functional software testing and must be done very carefully.
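One convenient way to keep AC verification honest is to write each criterion down as an executable check. The feature below (a cart discount rule) and its thresholds are invented purely for illustration; the real criteria always come from the user story:

```python
# Sketch: acceptance criteria turned into executable checks,
# one per criterion, including the boundary value.

def discount(total):
    """Hypothetical AC: 10% off orders of 100 or more, otherwise no discount."""
    return round(total * 0.9, 2) if total >= 100 else total

acceptance_criteria = [
    (50, 50),       # AC1: below the threshold -> no discount
    (100, 90.0),    # AC2: exactly at the threshold -> discount applies
    (250, 225.0),   # AC3: above the threshold -> discount applies
]

for total, expected in acceptance_criteria:
    assert discount(total) == expected, (total, expected)
print("all acceptance criteria verified")
```

The value of this shape is that the AC list and the test list stay in one-to-one correspondence, so a missing or ambiguous criterion (what happens at exactly 100?) surfaces as soon as someone tries to write its check.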
As I mentioned above, AC are defined. But this doesn’t mean that they are defined correctly or that they contain every requirement that should be defined. This is the moment when the tester needs to use his common sense and knowledge about the product; he needs to put the end user’s shoes on and rely on his intuition. In other words, this is when exploratory tests come into play. This is a more informal way of testing, where new test scenarios are constantly developed during the test session. The process is stimulated mainly by the application’s current behavior and can be very dynamic. I believe it is the more efficient way of software testing because it requires little test preparation, critical issues can be found faster, developers get feedback quicker, and it leaves a lot of room for running non-standard or edge-case scenarios.
When the product grows bigger and bigger, maintaining the integrity of all its components might become tricky. In order to catch bugs in areas that have already been developed, we run regression tests regularly. The time needed to complete this kind of testing properly is proportional to the application’s size, so we try to introduce automation. Automated tests are a great way to complement manual testing. They can speed up the testing process significantly, especially with the support of continuous integration tools - after a couple of minutes you may be able to tell whether the recent changes have damaged already existing parts of the application. Of course, creating such tests takes some time. But if they are well designed and maintainable, you will get your return on investment as additional time for more sophisticated exploratory or non-functional tests.
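A minimal sketch of such an automated regression suite, using Python’s standard `unittest` module. The `slugify` function stands in for any already-shipped feature; in a CI pipeline, the runner’s result would decide whether the build passes:

```python
# Sketch: a tiny regression suite a CI job can run on every change.
# The function under test is a placeholder for an "already developed"
# area of the product that must keep working.
import unittest

def slugify(title):
    """Turn a title into a URL slug (illustrative feature under protection)."""
    return title.strip().lower().replace(" ", "-")

class RegressionSuite(unittest.TestCase):
    def test_basic_slug(self):
        self.assertEqual(slugify("Hello World"), "hello-world")

    def test_surrounding_whitespace(self):
        self.assertEqual(slugify("  Hello World  "), "hello-world")

suite = unittest.defaultTestLoader.loadTestsFromTestCase(RegressionSuite)
result = unittest.TextTestRunner(verbosity=0).run(suite)
# A CI job would fail the build whenever result.wasSuccessful() is False.
print("regression suite passed:", result.wasSuccessful())
```

Wired into a continuous integration tool, a suite like this delivers the couple-of-minutes feedback described above: any change that silently breaks an existing area fails the build before a tester ever opens the application.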
Fulfilling requirements is very important, but at DeSmart we also want to make sure that the final product is usable, safe, able to handle increased traffic, and adapted to the different languages or regional conventions of a target market. This is done by running non-functional tests, which don’t necessarily check application features but instead evaluate software characteristics such as usability, security, scalability, or internationalization and localization (i18n & l10n).
Finally, I would like to emphasize that it’s impossible to test a product completely. The diversity of hardware/software configurations and end-user behavior patterns, and the dynamic development of the tools your product depends on - all these aspects of today’s IT environment guarantee that your product has and will have defects; this is inevitable. Fortunately, we’re here to reduce this problem to a minimum.
I hope that my reasoning has convinced you that software testing is more complex than it seems and that it brings true value to the SDLC. A dedicated tester who can effectively mix different test approaches, who remembers the context, and who has broad business and technical knowledge and a critical-thinking mindset can be an invaluable asset to your team.
If you’d like to know more about the testing process at DeSmart, feel free to contact us at firstname.lastname@example.org.