Effective Exploratory Testing: Machine Learning to Empower Your Exploratory Testing (Part 1)
The term Artificial Intelligence (AI) was coined in 1955 by John McCarthy, but the field has evolved drastically over the past decade. AI can recognize faces and beat the world's best human players at games such as Go and chess. AI software is able to learn from data and experience over time to make predictions. Machine Learning (ML), a subset of AI, is a method of training algorithms so that they learn how to make decisions. ML discovers connections and relationships in data that would be too complicated or nuanced for a human to find. In software quality, AI/ML empowers software testing by moving toward a context where machines take over developing automated tests and executing them.
We can feed AI software large amounts of testing data for a given set of inputs so that it learns to identify patterns and testing logic.
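As a deliberately tiny sketch of this "learn patterns from testing data" idea, the following shows how historical test records can drive predictions. The record fields and values are hypothetical; a real system would use a proper ML library and far more data:

```python
from collections import Counter

# Hypothetical historical test records: (browser, payload_size, outcome).
# A real system would ingest thousands of execution logs.
history = [
    ("chrome", "small", "pass"),
    ("chrome", "large", "fail"),
    ("firefox", "small", "pass"),
    ("firefox", "large", "fail"),
    ("chrome", "large", "fail"),
]

def predict(browser, payload_size):
    """Predict the likely outcome by majority vote over matching records."""
    outcomes = [o for b, p, o in history if b == browser and p == payload_size]
    if not outcomes:
        return "unknown"
    return Counter(outcomes).most_common(1)[0][0]

print(predict("chrome", "large"))  # the learned pattern: large payloads fail
```

Even this crude frequency counting captures a "testing logic" (large payloads tend to fail) that the software was never explicitly told.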
Using AI/ML in software testing is still rather new. With its pattern-recognition capabilities, AI/ML significantly benefits testing in various areas. The following are examples of where AI/ML is used in software testing:
- Faster, more accurate UI checks: UI testing is one of the areas that costs a project a fortune. Testers usually spend much time verifying the application's UI, often with little sense of the end user's perspective. AI/ML software builds its experience from data gathered across other applications and identifies patterns, which it uses to verify that the UI of a particular application appears correctly to users. It makes sure the UI as a whole looks right and that each UI element appears in the right color and shape, at the right position. An added advantage of testing with AI/ML alongside testers is that it easily catches what they would most likely miss.
- Solving the problem of API testing effectively: Because a high rate of changes comes from UI updates, which can consume up to 80% of regression time, API testing appears to be a good solution. It allows testers to maintain stability while the application undergoes change. However, setting it up is not a trivial task for every organization: testers find it hard to know how to test an API, or don't know how the application uses the API. AI/ML is taking the complexity out of API testing. It can be applied alongside manual testing to monitor all traffic generated during manual test execution, identifying API calls along the way. It then uses AI algorithms to discover patterns and analyze the relationships between those calls so it can generate complete API test scenarios.
- Spotting application deviations worth testing: AI/ML tools run silently, crawling the application. They snapshot the application's state at the moment of testing, collecting all data related to its features. They then compare the current state to all the known patterns already learned in order to spot deviations that may need testing, for instance, a page that now produces a CSS error although it did not before.
- Shrinking the effort to maintain automated tests: One of the biggest challenges in test automation is the cost of test maintenance every time the application changes. AI/ML, leveraging its capability to learn changes and the relationships between controls, helps create connections between those changes and the relevant tests. Another approach, used at our company and integrated into Shinobi, is a smart object recognition solution. Its principle is rather simple: Shinobi recognizes controls in the application in much the way a human does. Even when a control's properties change, its shape, position, and surrounding text help Shinobi recognize it for what it is.
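The UI-check idea in the first bullet can be sketched minimally: compare a baseline screenshot to the current one and flag large differences for a human to review. The pixel grids and tolerance below are illustrative stand-ins for real image data:

```python
def diff_ratio(baseline, current):
    """Fraction of pixels that differ between two equally sized grayscale grids."""
    total = len(baseline) * len(baseline[0])
    changed = sum(
        1
        for row_a, row_b in zip(baseline, current)
        for a, b in zip(row_a, row_b)
        if abs(a - b) > 10  # small tolerance absorbs anti-aliasing noise
    )
    return changed / total

baseline = [[200, 200], [200, 200]]
current = [[200, 200], [200, 40]]  # one region rendered very differently

print(diff_ratio(baseline, current))  # → 0.25, worth flagging for review
```

A real tool would work on full screenshots and learn which differences matter; the point here is only the compare-against-learned-baseline shape of the technique.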
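The API bullet describes deriving test scenarios from traffic captured during manual sessions. A minimal, hypothetical sketch of that derivation step (real tools record traffic through a proxy; the endpoints and parameters here are invented):

```python
# Hypothetical captured traffic: (method, endpoint, params) per observed call.
traffic = [
    ("GET", "/api/users", {"page": "1"}),
    ("GET", "/api/users", {"page": "2"}),
    ("POST", "/api/orders", {"item": "42", "qty": "1"}),
    ("GET", "/api/users", {"page": "9"}),
]

def derive_scenarios(calls):
    """Group observed calls per endpoint and collect the parameter names
    seen for each, as raw material for generated API test scenarios."""
    scenarios = {}
    for method, endpoint, params in calls:
        scenarios.setdefault((method, endpoint), set()).update(params)
    return scenarios

for (method, endpoint), params in sorted(derive_scenarios(traffic).items()):
    print(method, endpoint, sorted(params))
```

From here a real generator would also learn value ranges and call ordering to build full scenarios.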
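The Shinobi-style object recognition in the last bullet can be illustrated with a simple weighted-property matcher. The weights, properties, and controls below are assumptions made for this sketch, not Shinobi's actual algorithm:

```python
def match_score(target, candidate):
    """Weighted similarity over several properties, so a single changed
    property (e.g. a renamed id) does not break recognition."""
    score = 0.0
    if target["text"] == candidate["text"]:
        score += 0.5
    if abs(target["x"] - candidate["x"]) < 20 and abs(target["y"] - candidate["y"]) < 20:
        score += 0.3
    if target["shape"] == candidate["shape"]:
        score += 0.2
    return score

def find_control(target, candidates, threshold=0.6):
    """Return the best-matching control, or None if nothing is close enough."""
    best = max(candidates, key=lambda c: match_score(target, c))
    return best if match_score(target, best) >= threshold else None

target = {"text": "Submit", "x": 100, "y": 300, "shape": "button"}
candidates = [
    {"text": "Cancel", "x": 20, "y": 300, "shape": "button"},
    # Moved slightly between builds, but still clearly the same control:
    {"text": "Submit", "x": 108, "y": 305, "shape": "button"},
]
print(find_control(target, candidates)["text"])  # → Submit
```

A brittle locator keyed to a single property would have broken here; combining several weak signals is what keeps recognition stable across changes.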
However, AI/ML has some explaining to do, because it is not easy for organizations to use a system they do not trust, especially in fields such as health care, finance, or law enforcement, where the consequences of a bad recommendation are more substantial than a failure in a game. Similarly, it is hard to trust software that has been tested by an AI/ML system without a human eye on the process.
Totally trusting AI bots to perform testing well, with no humans involved, is not really a good approach. However, using AI to help testers deliver their testing faster and better is rather like giving a tiger wings. In previous decades we have used automation to shrink testing timelines, which has brought significant effectiveness, especially in regression testing. In DevOps, automated testing is obviously a factor that cannot be absent from any success story. Therefore, we can believe that AI/ML will be the most effective weapon for testers in this new era. Still, some people fear the rise of AI/ML; they live in fear of losing their jobs if AI/ML really becomes smart enough to take over all of testing. It is still a long journey to that point, but it makes sense to rethink how we keep learning new things, accelerate the transformation of our testing roles, and adopt new changes.

A difference between machine automation and human testing is that humans learn the application during test execution and have a sense of what is being tested, allowing an appropriate assessment of quality. This is exactly why exploratory testing is revealing its value and becoming more popular in software testing today. By one definition, exploratory testing is a testing approach that emphasizes the engagement of the tester, so without a human involved, exploratory testing is not what it is. However, as mentioned above, the rapid AI/ML revolution is leading software testing to a new vision, and exploratory testing is no exception. We have talked about using AI/ML for UI checks, for API testing, and so on; this is AI taking over part of the tester's work. From another angle, AI/ML can instead play an assistant role throughout exploratory testing. Imagine an exploratory tester performing his tests while an AI bot silently runs in parallel, collecting data from various sources: test data, application data, system and resource consumption data, timing data, and more. All of it feeds analysis by a pre-trained machine learning core. The analysis can alert the tester to scenarios that may have been missed, produce new ideas and recommendations, or report test "coverage" in the AI bot's sense, combined with the actual test execution. Pair testing between a human tester and an AI bot is a great combination of the power of artificial intelligence and human intelligence. From this angle, AI/ML is not a "competitor" of the tester; instead, it is a trusted colleague working alongside.
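A minimal sketch of the "silent bot" idea: watch one metric stream during a session (here, hypothetical response times) and alert the tester when a sample deviates sharply from the baseline seen so far. A real bot would use a trained model over many data sources rather than simple z-scores:

```python
import statistics

class SessionMonitor:
    """Watches one metric stream during an exploratory session and raises
    an alert when a sample deviates strongly from what it has seen so far."""

    def __init__(self, z_threshold=3.0, warmup=5):
        self.samples = []
        self.z_threshold = z_threshold
        self.warmup = warmup  # need a few samples before judging deviation

    def observe(self, value):
        alert = None
        if len(self.samples) >= self.warmup:
            mean = statistics.mean(self.samples)
            stdev = statistics.stdev(self.samples) or 1e-9
            if abs(value - mean) / stdev > self.z_threshold:
                alert = f"response time {value}ms deviates from baseline ~{mean:.0f}ms"
        self.samples.append(value)
        return alert

monitor = SessionMonitor()
timings = [120, 130, 125, 118, 122, 127, 950]  # last step suddenly slow
alerts = [a for t in timings if (a := monitor.observe(t))]
print(alerts)
```

The tester keeps exploring; the bot only speaks up when something it has learned about the session looks wrong.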
However, building an AI/ML testing bot is not as simple as creating a test automation framework. Each kind of testing has its own purpose and a different approach, and exploratory testing is more challenging because it depends entirely on the context at test execution. Both taxonomies of AI/ML, Supervised Learning and Unsupervised Learning, can be used to build an AI testing bot. For example, if you want your AI bot to explore and test an application's UI, it can be a supervised learning machine. Selecting the learning model is also a key driver of success: the wrong model will cause inaccuracy in predicting, classifying, and analyzing outcomes, while demanding too much accuracy will make the AI/ML algorithm so slow that it is not realistic to use. Therefore, selecting what to build and how to build an AI testing bot is far from simple, and it is beyond the scope of this article. In later posts, we will present case studies of building an AI testing bot known as Shinobi at MeU Solutions.
- Supervised Learning: the user provides a set of input variables (X) and a set of output variables (Y), and the algorithm learns the mapping function from input to output, Y = f(X). For instance, handwriting recognition, where the output is a category from A-Z and 0-9.
- Unsupervised Learning: the user provides a set of input variables (X) with no corresponding outcomes. For example, classifying customers based on their respective behaviors.
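The two taxonomies can be illustrated side by side with toy data (the labels, points, and purchase values are all invented for this sketch):

```python
# Supervised: labelled examples (X, Y) let us learn Y = f(X).
# Toy 1-nearest-neighbour classifier over (width, height) features.
labelled = [((1.0, 1.0), "dot"), ((5.0, 1.0), "dash"), ((1.2, 0.9), "dot")]

def classify(x):
    nearest = min(labelled, key=lambda p: (p[0][0] - x[0]) ** 2 + (p[0][1] - x[1]) ** 2)
    return nearest[1]

# Unsupervised: only inputs X, no labels. Minimal 1-D k-means with k=2,
# grouping customers by purchase amount without being told the groups.
def two_means(values, iters=10):
    c1, c2 = min(values), max(values)
    for _ in range(iters):
        g1 = [v for v in values if abs(v - c1) <= abs(v - c2)]
        g2 = [v for v in values if abs(v - c1) > abs(v - c2)]
        c1, c2 = sum(g1) / len(g1), sum(g2) / len(g2)
    return sorted(g1), sorted(g2)

print(classify((1.1, 1.0)))            # → dot (label learned from examples)
print(two_means([1, 2, 2, 50, 55, 60]))  # two behaviour groups, no labels given
```

The supervised half needs answers up front; the unsupervised half discovers structure on its own, which is why both appear in the bot applications listed below.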

Regarding the application of AI to exploratory testing: because almost all testing conducted in an exploratory form involves a process of thinking through testing models, an AI testing bot cannot stray from this principle. It must learn and evaluate the software under test by enabling that thinking process, applying all the heuristics it was pre-trained on. The AI testing bot should not simply produce pass/fail results, which would introduce biases that affect the tester's own testing. Instead, it should, for instance, come up with a set of new test ideas or supply new test models based on what it observes and on analysis of the collected data using the given heuristics. The following are applications of an AI testing bot to exploratory testing that could bring further benefits:
- An AI testing bot of the unsupervised learning type that uses heuristics to generate more test ideas
- An AI testing bot of the unsupervised learning type that recognizes a tester's bias toward something
- An AI testing bot of the supervised learning type that performs UI testing, not stopping at verification of UI properties but evaluating how they appear to users
- An AI testing bot of the unsupervised learning type that identifies deviations between software builds or releases to produce new testing ideas
- An AI testing bot that monitors and recognizes abnormalities happening in the software's background that are hard for testers to notice
- An AI testing bot with the ability to rapidly learn from more senior testers' experience to help junior testers test better
- And more…
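As one concrete illustration of the build-deviation item above, assume per-page metrics captured by the bot on two builds (the pages and numbers are hypothetical):

```python
# Hypothetical per-page response times (ms) the bot captured on two builds.
build_a = {"/login": 120, "/cart": 300, "/search": 210}
build_b = {"/login": 125, "/cart": 310, "/search": 900}  # /search regressed

def deviations(old, new, rel_threshold=0.5):
    """Pages whose metric changed by more than rel_threshold: candidate
    areas for fresh exploratory charters, not automatic pass/fail verdicts."""
    return sorted(
        page
        for page in old
        if page in new and abs(new[page] - old[page]) / old[page] > rel_threshold
    )

print(deviations(build_a, build_b))  # → ['/search']
```

Note the output is a suggestion of where to explore next, in line with the principle above that the bot should feed the tester ideas rather than verdicts.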
In short, it is hard to let machines do all of exploratory testing, and removing the human factor is a bad idea. AI/ML testing bots need humans to make them smarter, and testers need AI/ML to create more powerful tools and to make their testing better.
In the next article, we will discuss a case study of AI/ML in exploratory testing, known as Shinobi.