Effective Exploratory Testing (Part 1): Some good practices
Part 1: Effective Exploratory Testing (Part 1): Some Good Practices (this article)
Part 2: Effective Exploratory Testing (Part 2): More Effective With Pair Exploratory Testing
Part 3: Effective Exploratory Testing (Part 3): Explore It, Automate It
Part 4: Effective Exploratory Testing (Part 4): A Practice of Pair Exploratory Testing
Part 5: Effective Exploratory Testing (Part 5): Risk-based Exploratory Testing
Special Series 1: Effective Exploratory Testing: Empower Your Exploratory Testing With Smart Tools (1)
Special Series 2: Effective Exploratory Testing: Empower Your Exploratory Testing With Smart Tools – A Case Study (2)
Context-Driven Testing Framework: Effective Exploratory Testing: Context-Driven Testing Framework
Conventional software testing is shaped by a simple model in which almost every test case is developed beforehand, covering all the possibilities identified from upfront requirements, specifications, designs, and so on. After all those test cases are "baselined", they are executed many times. Testers in this school (conventional testing is called the "Factory" school of software testing) run their tests and look for the same outcome each time. In contrast, context-driven testing is a combination of skills, techniques, and documentation that depends on the specific situation. Testing is a solution to a given problem: it must suit the context of the project, and therefore testing is a human activity that requires a great deal of skill to do well.
Context-driven testing promotes a critical and creative thinking process about the application under test, at a particular time and in a specific context. By nature, performing exploratory testing (an instance of context-driven testing) is rather different from the conventional approach. It requires exploratory testers to build test models and use them, with all their skills, heuristics, and practices, to explore, reason, evaluate, and find hidden things in what they test. With this approach, all tests are designed, executed, and reported in parallel, driven by rapid learning.
In this article, we describe some good practices that benefit practitioners of exploratory testing in particular, and of context-driven testing in general.
Building Test Models
The fact is that our brain is designed to build models of the world. We are inherently capable of modeling and remodeling our surroundings, and our brain naturally recalculates these models over time. For example, in the context of fashion, we quickly imagine how something should look. In testing, models can be categorized into:
1. Formal model – a precise statement of the components to be used and the relationships among them. For example: a tax formula, a software design, a formal specification in text, and so on.
2. Informal model – implies something that lacks precision. For example: patterns from system failures, product risks, and so on.
3. Ad-hoc model – derived before, during, or after testing by testers. For example: a tester sees an opportunity to explore a new aspect of the system that has not been identified before.
Test models, combined with our judgment during exploration, help us execute testing across many contexts and evaluate the feature more accurately.
The question is: how do we build good test models?
1. Use project documents: Typically, we can build our own models from given specifications, software requirements, user stories, designs, development notes, release notes, or even notes from meetings with product owners, key stakeholders, developers, or users. These resources are certainly helpful because they describe the product under test. However, we need to avoid biases when building our test models from them. We usually believe that the product should conform to the developers' output, the requirements, or the designs, so any difference we identify is easily treated as a bug. These biases are dangerous and can produce inaccurate models. To remove them, some good practices are:
a. Reach out and gather as much information as possible about the products/features under test from various sources. Each one can describe different facets or different contexts that help us adjust our models.
b. Find answers to questions like "why does ABC function like this?", "why is it this and not the other?", "do these requirements/designs make sense?", "who/when/where…?"
c. Share your product knowledge and understanding with other testers on your team. They may have different thoughts and different models.
2. Use oracles: An oracle is a heuristic principle or mechanism by which someone might recognize a problem. Oracles are context-dependent. If you have tested other, similar products, you can compare them with the current product/feature; your models are then built from your own experience. You may also search for oracles on the Internet or ask colleagues who have worked on similar products. An oracle is not only a product, feature, or application; it can be anything that helps you picture the problem, for example known security bugs.
You have to train yourself to observe well: to recognize abnormalities and changes in the product, changes between this run and the previous run, changes in outcomes, changes in product structure, and so on.
Prior to test execution, it is very important to identify the various elements of the product: deployment elements, outputs/outcomes, log files, system processes, the file system, inner structure, events triggered by the product, and so on. Any change in these elements may indicate an issue or a risk to the product, so being observant of them is necessary. We can also evaluate the product with the help of tools.
For example, we use Task Manager to monitor how an application consumes PC resources (RAM, disk, CPU). At MeU Solutions, we use One2Explore to track product behaviors together with their status and relevant data. One of the web applications we tested with One2Explore was a PoS (Point of Sale) system: One2Explore captures its inner structure, such as the DOM, the current status of web elements, and request and response times in the active test. All these captures were visualized into a picture that helped us detect changes across test-run iterations (or software builds). If you are interested in this tool, you can find more information on the MeU website.
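The change-detection idea above can be sketched in a few lines of Python. This is not One2Explore's implementation, just a minimal illustration of the technique: reduce each captured element (a DOM node's state, a log entry, a process list) to a stable fingerprint, then diff the fingerprints between two runs. The element names and states below are hypothetical.

```python
import hashlib
import json

def snapshot(elements):
    """Reduce each captured element (e.g. a DOM node's state) to a stable hash."""
    return {name: hashlib.sha256(json.dumps(state, sort_keys=True).encode()).hexdigest()
            for name, state in elements.items()}

def diff_runs(previous, current):
    """Compare two snapshots and report added, removed, and changed elements."""
    added = sorted(set(current) - set(previous))
    removed = sorted(set(previous) - set(current))
    changed = sorted(k for k in set(previous) & set(current)
                     if previous[k] != current[k])
    return {"added": added, "removed": removed, "changed": changed}

# Hypothetical captures from two builds of a PoS checkout page
run1 = snapshot({"total_field": {"visible": True, "value": "0.00"},
                 "pay_button": {"enabled": False}})
run2 = snapshot({"total_field": {"visible": True, "value": "0.00"},
                 "pay_button": {"enabled": True},
                 "discount_banner": {"visible": True}})

print(diff_runs(run1, run2))
# → {'added': ['discount_banner'], 'removed': [], 'changed': ['pay_button']}
```

Every item the diff reports is a prompt for the explorer: is this change intended, or is it a risk worth chartering a session around?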
In addition to changes in the UI or the product's components, data flow is something to pay attention to: for example, "how is data logged?", "when and where is it stored?", "what is being stored?". Knowing these, you are able to point out issues and risks. For example, one application we tested stored its data temporarily in a file, and the data was only written to the database once the transaction completed. If the file is corrupted for any reason, the data will never be stored correctly, even when the transaction is completed.
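The temp-file risk described above can be probed directly. The sketch below is a hypothetical stand-in for that "temp file first, database on commit" flow (the `Transaction` class, table name, and items are all invented for illustration): an exploratory session might deliberately corrupt the temp file before commit, as a disk error would, and observe that the transaction "completes" while the data is silently lost.

```python
import json
import os
import sqlite3
import tempfile

class Transaction:
    """Hypothetical model of an app that buffers records in a temp file."""
    def __init__(self, db):
        self.db = db
        fd, self.temp_path = tempfile.mkstemp(suffix=".json")
        os.close(fd)

    def record(self, item, amount):
        with open(self.temp_path, "a") as f:
            f.write(json.dumps({"item": item, "amount": amount}) + "\n")

    def commit(self):
        rows = []
        try:
            with open(self.temp_path) as f:
                rows = [json.loads(line) for line in f]
        except (OSError, json.JSONDecodeError):
            rows = []  # a corrupted file silently drops the whole transaction
        with self.db:
            self.db.executemany("INSERT INTO sales VALUES (?, ?)",
                                [(r["item"], r["amount"]) for r in rows])
        os.remove(self.temp_path)
        return len(rows)

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE sales (item TEXT, amount REAL)")

tx = Transaction(db)
tx.record("coffee", 3.50)
# Exploratory probe: corrupt the temp file before commit.
with open(tx.temp_path, "w") as f:
    f.write("\x00garbage")
stored = tx.commit()
print(stored, db.execute("SELECT COUNT(*) FROM sales").fetchone()[0])
# → 0 0  (the "completed" transaction persisted nothing)
```

The point is not the code but the habit: once you know where the data flows, you can design small, deliberate disturbances at each hop and check what the product does.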
Keep your test execution long enough and focused
Long: Exploration becomes ineffective when it takes too long. Session-based testing is a method of managing exploratory testing so that sessions are just long enough to be effective. According to Wikipedia, session-based testing is a software test method that aims to combine accountability and exploratory testing to provide rapid defect discovery, creative on-the-fly test design, management control, and metrics reporting.
However, if your session is too short, you will not explore as many things as you need to. So how do you decide the length of each test charter (*) executed in a session? Some practices can help, such as touring (menu touring, data-flow touring, UI touring, and so on) to gather information for your decision. At MeU, we use "session -1" or "session 0" for this purpose. For example, to test a new feature or charter thoroughly, we run a first session (session -1) or a second session (session 0) with that feature/charter, aiming to get an overall picture and better understand a major scenario.
Focus: If you design a charter that is vague, ambiguous, or too high-level, your exploration will not be directed; it is like mining without a map. Conversely, if your charter is too detailed and too specific, it traps you in just a few narrow scenarios. And if your test charter includes many purposes, you can lose direction and track.
A good practice at MeU in designing test charters is "90-4-1". It means 90 minutes (of course you can create a charter of 75' or 120', but the deviation is kept small enough, around -15' and +15') for one objective (a task, a bug exploration, and so on).
(*) Exploratory testing is a skilled and disciplined approach to testing, and one of the skills exploratory testers master is the ability to manage the scope of testing so that the software is tested thoroughly and appropriately. Testers manage the scope of exploratory testing using a concept called a "charter".
Two in a Box
At MeU Solutions, this practice means two testers pair to perform testing in one time-boxed session, at the same time and at the same desk. A characteristic of exploratory testing in a time-boxed session is high attention: testers need to focus on what they are working through in their testing. However, focusing too much on one thing may make them miss others. Having a second tester in a session brings several benefits:
(1). Helping identify what you might miss in an active session.
(2). Bringing more views from a different mind and a different knowledge background.
(3). Taking notes on the key workflows executed in the session; these are useful resources for debriefing later on.