Effective Exploratory Testing: Context-Driven Testing Framework
What is the Context-Driven Testing Framework used in this article? It is not a process that all testers are mandated to follow. It is more like a good practice or a heuristic, used to drive test execution in a context-driven manner. You can call it a framework, a model, a practice, or anything else that fits its purpose.
Early Phase (the early days of a sprint, a release, or a project cycle): Every project starts with needs derived from problems. A product is a solution, and it does not work if it does not solve a problem. This is also one of the seven context-driven testing principles. There are people (called project stakeholders) who have interests in and concerns about the product; they can affect or be affected by it. These people play a primary role in contributing ideas to our testing. They do not tell us what to do or how to do it; they tell us what they are concerned about and what they value. Through many different approaches such as interviews, workshops, observation, and snooping on similar products, we can gain their knowledge and experience and their expectations for what the product must become.
Typically, the input from these stakeholders is documented as requirements. Depending on the project's attributes, the level of documentation varies: some projects, for example health-care ones, come with hundred-page requirement documents, while others are documented in the form of comprehensive user stories. Whatever form they take, these requirements are great artifacts for generating test ideas and building the right test models.
The software engineering process has changed over time, accelerating the transformation from waterfall to RUP, Scrum/Agile, Kanban, and DevOps. However, whichever approach a project takes, the product should fundamentally be developed with the earliest possible involvement of testers. Testers at this point have their own voice: they look into the requirements and talk to developers, product owners, clients, or anyone who affects (or is affected by) the product to understand their stories, then capture and draw up test ideas from the test models built in their minds. At this moment, testers can also help identify "untestable" items. An untestable item is an object constrained by something that makes it hard to deliver the testing that is expected, for example a lack of tools to measure the performance peak of an application.
Development/Testing Phase (when the testing moves into design and execution): While the product is being built, test design and execution take place. This is a process of parallel activities, including exploring and learning the product, test execution, and test reporting. Productive testing in this period comprises the following factors:
- Prioritization: Efficient testing demands that we ask: "In what order should we run our test items to amplify the testing value?", "What matters most to key stakeholders?", "What are our stakeholders' biggest concerns?", "What is the biggest risk our stakeholders take?" See more: Effective Exploratory Testing (Part 5): Risk-based Exploratory Testing.
- Autonomy: Testers take responsibility for their own testing and for deciding what to get involved in. In return, we have to tell stakeholders what our testing covered and how, and share our analysis of the risks of the application and of our testing.
- Thinking: Both lateral and creative thinking are part of the process of learning and testing.
- Learning: We study and model the product, anything related to it, the context of its use, or any other necessary means to build mental models that generate test ideas.
- Quality: Quantity is not the primary concern; we usually don't ask how many test cases were executed or how many bugs were found in this cycle. Instead, we care about the value (quality of work) we contribute to the product by uncovering hidden problems.
Modeling is an essential part of test execution. It represents the thought process that we believe is going on in our minds when we explore and test the product. We build test models from many sources and use them to validate and challenge those sources. A test model helps testers recognize particular aspects of the product under test that could be the subject of their tests. An example of a test model is a formula that helps us imagine how income tax is calculated (a minimal sketch of such a model appears after the list below). Models can take various forms:
- A checklist comprising the functional areas, features, or issues the testing should cover
- Graphics: a directed diagram consisting of nodes and the links that connect them, such as state-transition models, control-flow graphs, flowcharts, and sequence diagrams
- Business/User Story: the concept of user stories has wide acceptance in Agile teams; story summaries or feature titles are usually augmented with acceptance criteria
- Mind map: decomposes the product into smaller parts and helps people visualize the structure of the system under test
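To make the formula example concrete, here is a minimal sketch of a formula-style test model in Python. The tax bands are hypothetical, invented purely for illustration; the point is that an executable model lets us predict expected outcomes and turn each prediction (especially the boundary values) into a test idea.

```python
# A minimal sketch of a formula-style test model.
# The tax bands below are hypothetical, not any real jurisdiction's rules.

def expected_income_tax(income: float) -> float:
    """Model: 0% up to 10,000; 10% from 10,000 to 40,000; 20% above 40,000."""
    if income <= 10_000:
        return 0.0
    if income <= 40_000:
        return (income - 10_000) * 0.10
    return 30_000 * 0.10 + (income - 40_000) * 0.20

# Each (input, expected) pair becomes a test idea; boundaries are included on purpose.
test_ideas = [(amount, expected_income_tax(amount))
              for amount in (0, 10_000, 10_001, 40_000, 40_001, 100_000)]
print(test_ideas)
```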
Unfortunately, in reality, the product may not behave as we expect, because our model sometimes ignores key aspects of the product's behavior and its context. At that point, we have to go back and validate and challenge our model. Building and using test models is essential to conducting exploratory testing effectively – see these links for more practices: Effective Exploratory Testing (Part 1): Some good practices and To Build Your Test Ideas Effectively In Context-Driven Testing. Testers build their own models from information they elicit from sources of knowledge. Reaching out to as many sources as possible is very important to build effective models and amplify product knowledge. Below is a non-exhaustive list of sources:
- Other parties: discussions, interviews, and questioning of stakeholders, clients, users, developers, Subject Matter Experts…
- Documentation (described in the Early Phase): software specs, designs, requirements, user guides, release notes, development notes…
- Experience: what we have gained from similar products, or even from the product under test through our own experiments
- Similar products: what we learn by snooping on other, similar products
Learning and Exploring: this includes the following four key activities:
- Obtaining: take in information about the product from the different sources described above. For instance, when reading, assimilating, and analyzing a requirement document, we can ask the author to classify it in his own way to reveal other angles of the product, or we can ask open questions to get the big picture first. We can also use the "Always/Never" heuristic to question and find problems we might be blind to because of our lack of experience. For example, when testing a banking project, testers would ask stakeholders "what must always (never) happen for a transaction?". An answer might be "No transaction is lost"; we then understand that the product should have a mechanism to monitor and track all transactions and that a corrective response is required. Or in a healthcare project, a tester can ask the PO/PM "what must always happen in the product?", and an answer could be "PHI data must be protected at all times". We then understand that the two things to care about are (1) data loss and (2) data breach, and we must identify an appropriate approach to verify how the data is processed, stored, transferred, encrypted (if needed), and backed up, and how data is classified for restricted access
- Modeling: once the information is acquired, our brain naturally starts modeling, structuring disparate items of data into some kind of order or perspective. We look at the product through our own senses and beliefs about it
- Converging: building and using oracles is a key aspect of exploring and learning the product. "A test oracle is a mechanism to determine whether the software executed correctly for a test. It enables us to predict the outcome of any test. In effect, an oracle tells us what a system does in all circumstances and situations. If an oracle does this, it is good. Our oracle might be derived from the same sources as our models. Whatever our sources of knowledge are, like models, they are fallible." An example of an oracle is evaluating the capability of OpenOffice by comparing it with Ms. Office. Or, when testing "Drag & Drop Picture" in StickyNotes: in Ms. Word we can drag a picture from an Internet browser and drop it into a document, but this doesn't work in StickyNotes, so we turn back to challenge our oracle and raise the concern that StickyNotes differs from Ms. Word in a way that may violate expectations (a minimal sketch of a comparable-product oracle appears after this list). When we are modeling, we can derive examples of the system in operation, suggested by our model, and use these examples to pose 'what if?' challenges to our sources
- Challenging: questions take the form of scenarios that we believe the system must deal with. These confirm our understanding. However, any gaps or inconsistencies between our model and the product's actual behavior are a reason to challenge the sources or to align our model
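As a concrete illustration of the comparable-product oracle described under Converging, here is a minimal sketch in Python. The two functions are hypothetical stand-ins: in practice they would ask the product under test and the reference product the same question (for example, a reported word count or a drag-and-drop result). A disagreement is a prompt to investigate, not an automatic bug, because the oracle itself is fallible.

```python
# A minimal sketch of a comparable-product oracle (in the spirit of checking
# OpenOffice against Ms. Office). Both functions are hypothetical placeholders.

def product_under_test(text: str) -> int:
    # Placeholder for "ask the product under test", e.g. its reported word count.
    return len(text.split())

def reference_product(text: str) -> int:
    # Placeholder for asking the comparable (reference) product the same question.
    return len(text.split())

def oracle_agrees(text: str) -> bool:
    """True if the two products agree; a mismatch is something to investigate,
    since the oracle (the reference product) may itself be wrong."""
    return product_under_test(text) == reference_product(text)

print(oracle_agrees("Drag a picture from the browser and drop it into the note"))
```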
Test Execution and Reporting: this includes designing test experiments, performing them, and reporting what we found.
Design & perform test experiments: use models to generate things for our testing (called test ideas or coverage items). These test ideas can be written in the form of test charters, which are executed within given time-boxes called sessions, typically 60–120 minutes long (a minimal sketch of a charter appears after the list below). Performing experiments means enacting test procedures to discover reliable answers to our questions about the product. During this performance, we observe outputs and judge whether an output matches an expectation, or whether the outcome of the test is anomalous in some way. Any abnormality encountered needs to be interpreted to find its causes or to gather more information for deciding whether further tests are needed. This is a kind of diverging process in our mind that generates a number of accounts of different experiences from what we have done with the system:
- The product behaves exactly as expected (or not) in just this situation, and may or may not behave correctly in others. Do we need more tests?
- The product behaves in a way that is different from our model. Do we need to challenge the source and align the test model?
- The product behaves in ways that testers (or tools) did not see (or recognize). Are the testers (or tools) fallible?
- The system cannot be made to fail in ways that stakeholders are concerned with – perhaps stakeholder concerns are addressed or perhaps our tests are poor
- The product’s behaviors form patterns that give rise to new concerns, new risks, and more tests.
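The charter mentioned above can be captured in many formats; here is a minimal sketch in Python using illustrative field names (mission, areas, time-box). Nothing about this structure is prescribed; it only shows how a charter and its time-boxed session might be recorded.

```python
# A minimal sketch of a test charter and its time-boxed session.
# Field names are illustrative, not a prescribed format.

from dataclasses import dataclass, field
from typing import List

@dataclass
class TestCharter:
    mission: str                      # what to explore and why
    areas: List[str]                  # coverage items / test ideas in scope
    timebox_minutes: int = 90         # sessions typically run 60-120 minutes
    notes: List[str] = field(default_factory=list)  # observations, questions, bugs

charter = TestCharter(
    mission="Explore picture drag & drop in StickyNotes against Ms. Word behaviour",
    areas=["drag from browser", "drag from file system", "drop into an existing note"],
)
charter.notes.append("Drop from browser rejected - differs from the oracle, challenge the model")
print(charter)
```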
Tools can also be applied to capture outputs and compare them with expectations, but keep in mind that a tool is not a sophisticated observer; it is only useful for basic checking activities. One of the useful tools developed by MeU Solutions helps track, trace, and report all testing in pictures and graphs. You can find more info here: [Video] One2Explore – Graphing feature for workflow testing and Effective Exploratory Testing (Special Series): Empower Your Exploratory Testing With Smart Tools (1).
In this AI/machine learning era, tools are becoming smart enough to support testers well. Humans, at least for now, are irreplaceable, but human-like tools (employing AI/ML) can be effective assistants for testers.
Logging and Reporting: logging abnormalities (also known as failures, bugs, defects, errors, issues, …) can follow a formal process that involves many parties (dev, test, user, client, …) and tools (JIRA, Bugzilla, TestLink, …) in a series of activities such as review, change control, re-testing, and regression testing, or it can be an informal one whereby the tester talks to the developer and they agree on a certain point. In addition, all evidence that helps support a statement or conclusion, and anything useful that improves testing performance, should be logged for the debriefing session too. Following is a non-exhaustive list of things to note in an exploratory testing session (an illustrative session note appears after the list).
- How the tests were performed
- What we did and why we did it
- What we saw (videos/screenshots)
- Questions and new ideas
- Our evaluation of the quality of the current charter
- The ratio of time spent on setup versus test execution
- Challenges, blocks, …
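To illustrate the list above, here is a minimal sketch of one session's notes as a Python dictionary. The field names and values are invented for illustration; the setup/execution/investigation split echoes the time-ratio item in the list.

```python
# An illustrative session note covering the items listed above.
# All names and values are hypothetical.

session_note = {
    "charter": "Explore picture drag & drop in StickyNotes",
    "how_performed": "Manual exploration on Windows 11, one 90-minute session",
    "what_we_did_and_why": "Dragged pictures from a browser and the file system; "
                           "stakeholders compare the product with Ms. Word",
    "what_we_saw": ["dragdrop_fail.png", "session_recording.mp4"],
    "questions_new_ideas": ["Should dropping from a browser be supported at all?"],
    "charter_quality": "Too broad; split resize/rotate into its own session",
    "time_ratio": {"setup": 0.2, "test_execution": 0.7, "bug_investigation": 0.1},
    "challenges_blocks": ["No test data for very large images"],
}
print(session_note["time_ratio"])
```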
A debrief is a discussion about a recently completed exploratory testing session between two people:
- The reporter – the person who ran the exploratory testing session
- The reviewer – the person who learns about what happened during the exploratory testing session
During the debrief, the reporter shares information such as what they did and didn’t test, what they learned during testing, what issues they faced and what bugs they raised
Finally, reporting is the process whereby the tester provides meaningful feedback to stakeholders. Usually, it relates to the completion status or coverage of tests to give an indication of progress, but it will also give an indication of completeness or thoroughness. The status of individual tests is of interest, but it is more the patterns that emerge from the interpretations of these tests that inform the decision-making of stakeholders. A good report is meaningful to the stakeholders who directly receive it. For example, it makes much more sense to send developers a report with data and information about tests related to code coverage and control flow than a report containing pass/fail ratios or the number of hours spent on testing.
Below are some more recommended topics in the Effective Exploratory Testing series:
Part 1: Effective Exploratory Testing (Part 1): Some good practices
Part 2: Effective Exploratory Testing (Part 2): More Effective With Pair Exploratory Testing
Part 3: Effective Exploratory Testing (Part 3): Explore It, Automate It
Part 4: Effective Exploratory Testing (Part 4): A Practice of Pair Exploratory Testing
Part 5: Effective Exploratory Testing (Part 5): Risk-based Exploratory Testing
Special Series 1: Effective Exploratory Testing (Special Series): Empower Your Exploratory Testing With Smart Tools (1)
Special Series 2: Effective Exploratory Testing: Empower Your Exploratory Testing With Smart Tools – A Case Study (2)