Case Study - Test Case Design and Execution for Restaurant POS System Testing

CLIENT

The client is a leading multi-channel bill payment service provider in Vietnam.
Their service is available through multiple access points, such as self-service kiosks, POS/clerk-assisted locations, smartphone applications, and other mediums.

CHALLENGES

Our client provides point-of-sale (POS) solutions for the restaurant industry by combining cloud-based technology with mobile and social innovations. Its mission is to help every restaurant in the world succeed by turning transactions into insights.

About the application under test: The CloudPOS app is an intuitive touch-screen point-of-sale solution for restaurants with integrated payment processing and, optionally, online ordering. It allows users to manage inventory, monitor employees, and analyze sales statistics. All data is stored in the cloud and accessible from any device, anywhere in the world. In the app, users will find everything they need to record sales in a store, a cafe, a bar, etc.: maintaining the warehouse, editing goods catalogs, managing different user access rights, configuration, summary reports, and analytics.

They were looking for a testing partner that would fit into their agile, Scrum-based development life cycle. They needed a comprehensive QA strategy along with test case design and execution for new features such as search hierarchy, day-part reports, and multiple menu items, among others.

Tests needed to cover the following areas:

  • Core application features such as editing, reporting, and management.
  • Web services testing.
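Web services testing typically means checking an API's responses against a contract. As an illustration only (the endpoint shape and field names below are hypothetical assumptions, not the client's actual API), a response validator for a sales-report service might look like this:

```python
import json

# Hypothetical contract for a CloudPOS sales-report endpoint.
# Field names and types are illustrative assumptions.
REQUIRED_FIELDS = {"store_id": str, "total_sales": (int, float), "items": list}

def validate_sales_report(payload: str) -> list:
    """Return a list of validation errors for a sales-report JSON body."""
    errors = []
    try:
        data = json.loads(payload)
    except json.JSONDecodeError as exc:
        return ["invalid JSON: %s" % exc]
    for field, expected in REQUIRED_FIELDS.items():
        if field not in data:
            errors.append("missing field: %s" % field)
        elif not isinstance(data[field], expected):
            errors.append("wrong type for %s" % field)
    return errors

# One conforming payload and one broken payload
good = '{"store_id": "S01", "total_sales": 125.5, "items": []}'
bad = '{"store_id": 1}'
print(validate_sales_report(good))  # []
print(validate_sales_report(bad))
```

In practice a check like this would run against live responses in a test harness, with each error logged as a potential defect.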

SOLUTIONS

Based on the client's needs, MeU Solutions formed a dedicated team of a test lead, senior testers, and testers to handle all aspects of the Software Testing Life Cycle.

The Approach

Cloud POS Testing Process

  • Development
    The system that produces the product the testers will test. How do testers receive the product? How testable is it?
  • Requirements
    The criteria for a successful product. What are the product risks? Whose opinion about quality matters?
  • Test Lab
    The system, tools, and materials that allow testers to do their job. Do testers have the right equipment? Is the bug-tracking system in good order?
  • Test team
    The people available to test the product. Do we have the right staff? Are they up to speed on the technology?
  • Mission
    The problems testers must solve to be seen as successful by the client. Find important bugs fast? Produce an accurate assessment of quality?

The context-driven testing methodology was adopted in this project; it calls for continuous and creative evaluation of testing opportunities.

It is a testing approach that conforms to the context of the project rather than applying a fixed best practice. Context-driven testing encourages our testers to select their test deliverables, test techniques, test documentation, and test objectives by studying the particulars of the specific situation. This approach allows a good tester to ask as many questions as possible, to reveal not only parts of the context but also how to act on the information uncovered.

Why do we apply Context-driven testing?

In our client's situation, the developers are not given thorough documentation that explains exactly how to do the work they are supposed to do, and our testers likewise are not provided with complete documents, so they do not know the full requirements. They do, however, have a spec or a basic document such as a BRD (Business Requirements Document) or other reference documents that tells them what needs to be done, and it is assumed they will figure out the best way to accomplish the task. Context-driven testing is well suited to a situation like this.

Techniques used for Context-Driven Testing

  1. Exploratory testing
  2. Grey Box testing

The Process

  1. Learn system logic and functions.
  2. Design test charters and engage developers.
  3. Execute testing and report defects.

To begin with, we learned the system’s business logic and functions by reading the client’s requirements and by exploring and examining their application.

We then designed tests based on the requirements analysis, using the following criteria to determine priority.

  • Critical test charters are the cases that test the main and basic features of the software.
  • High-priority test charters are the set of cases that ensure functionality is stable, intended behaviors and capabilities are working, and important errors and boundaries are tested.
  • Medium-priority test charters test features and functions in more detail. They include boundary cases, negative cases, and configuration cases.
  • Low-priority test charters test minor or small features and functions of the software.
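A priority scheme like the one above can be modeled as a small backlog that is executed in order. The sketch below is illustrative; the charter names are invented examples, not the project's actual charters:

```python
from dataclasses import dataclass
from enum import Enum

class Priority(Enum):
    CRITICAL = 1
    HIGH = 2
    MEDIUM = 3
    LOW = 4

@dataclass
class TestCharter:
    name: str
    priority: Priority

# Hypothetical charters illustrating the four priority levels
backlog = [
    TestCharter("Boundary values for discount field", Priority.MEDIUM),
    TestCharter("Record a basic sale", Priority.CRITICAL),
    TestCharter("Rename a minor settings label", Priority.LOW),
    TestCharter("Verify day-part report totals", Priority.HIGH),
]

# Execute charters in priority order: critical first, low last
for charter in sorted(backlog, key=lambda c: c.priority.value):
    print(charter.priority.name, "-", charter.name)
```

Ordering the backlog this way ensures that core features are exercised before edge cases and cosmetic checks.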

Finally, we executed the test charters, logged and verified the defects.
The whole project was successfully delivered on schedule.

RESULTS

The following results have been achieved in our project:

  • Manual testing identified numerous errors in the application interface.
  • Cross-browser and cross-platform testing uncovered errors specific to particular browsers and platforms.
  • Exploratory testing helped uncover complex logic issues and non-trivial bugs.