QA (eng)

QA terms questionnaire

What is a Test Plan?

A Test Plan is a document describing the scope, approach, resources, and schedule of the intended testing activities. It identifies the test items, the features to be tested, the testing tasks, who will do each task (roles and responsibilities), and any risks and their mitigation.

What is Software Testing Life Cycle (STLC)?

Software testing has its own life cycle. It starts with studying and analyzing the requirements. Here is the software testing life cycle:

  1. Requirement Study
  2. Test Planning
  3. Writing Test Cases
  4. Review the Test Cases
  5. Executing the Test Cases
  6. Bug logging and tracking
  7. Close or Reopen bugs

What is meant by the Build Deployment?

When the build prepared by the Configuration Management Team is sent to the different test environments, this is called build deployment.

What is Test Strategy?

A test strategy is an outline that describes the testing portion of the software development cycle. It is created to inform project managers, testers, and developers about some key issues of the testing process. This includes the testing objective, methods of testing new functions, total time and resources required for the project, and the testing environment.

The test strategy is created based on the development design documents. It is written by the Test Manager or Lead. It includes the introduction, scope, resources and schedule for test activities, acceptance criteria, test environment, test tools, test priorities, test planning, execution of a test pass and the types of tests to be performed.

The following are some of the components that the Test Strategy includes:

  1. Test Levels
  2. Roles and Responsibilities
  3. Environment Requirements
  4. Testing Tools
  5. Risks and Mitigation
  6. Test Schedule
  7. Regression Test Approach
  8. Test Groups
  9. Test Priorities
  10. Test Status Collection and Reporting
  11. Test Records Maintenance
  12. Requirements Traceability Matrix
  13. Test Summary

Are Test Plan and Test Strategy same type of document?

No. A Test Plan is a document that collects and organizes test cases by functional area and/or type of testing, in a form that can be presented to other teams and/or the customer, whereas the Test Strategy is the documented overall approach to testing.


What is Negative Testing?

Testing the system or application using negative (invalid) data is called negative testing. For example, if a password must be at least 8 characters, entering a 6-character password should display an error message.
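
A minimal TestNG sketch of such a negative test; the `validatePassword` helper and its message are hypothetical, standing in for the real application logic:

```java
import static org.testng.Assert.assertEquals;

import org.testng.annotations.Test;

public class PasswordNegativeTest {

    // Hypothetical validator standing in for the real application logic.
    static String validatePassword(String password) {
        return password.length() >= 8 ? "OK" : "Password must be at least 8 characters";
    }

    @Test
    public void rejectsSixCharacterPassword() {
        // Negative data: 6 characters where at least 8 are required.
        String message = validatePassword("abc123");
        assertEquals(message, "Password must be at least 8 characters");
    }
}
```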

What is the difference between Load Testing and Performance Testing?

Load testing checks the users' response time when a number of users run any one scenario of the application at the same time, whereas performance testing checks the users' response time when a number of users run multiple scenarios of the same application at the same time.

SQL

  • SQL stands for Structured Query Language. A database is a collection of logically related data, organized in tabular form, designed to meet the information needs of one or more users.

What is Change Control?

  • Change control is the process by which a proposed change (a change request) to the requirements, code, or environment is recorded, evaluated, approved or rejected, and tracked through to implementation.

What is XML?

XML stands for Extensible Markup Language.

How do you make sure that it is quality software?

There should be no critical defects (0 critical), no high defects (0 high), no medium defects (0 medium), and maybe one low defect.

How would you ensure that you have covered 100% testing?

Testing coverage is defined by the exit criteria (the Test Strategy defines both entry and exit criteria). For example, only two low-severity defects are acceptable. Once the exit criteria are met, the software is considered to be sufficiently tested.

What are all the basic elements in a defect report?

The basic elements in a defect report are: Defect ID, Header, Description, Defect Reported by, Date, Status, Version, Assigned to, Approved by, Module where the defect was found and so on.
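
Purely as an illustration, those elements could be modelled as a simple Java class; the field names below are my own, not a standard:

```java
// Minimal sketch of a defect record holding the elements listed above.
public class DefectRecord {
    String defectId;     // e.g. an identifier from the tracking tool
    String header;       // short summary of the defect
    String description;  // steps to reproduce, actual vs. expected result
    String reportedBy;
    String date;
    String status;       // e.g. New, Open, Fixed, Closed, Reopened
    String version;      // build/version in which the defect was found
    String assignedTo;
    String approvedBy;
    String module;       // module where the defect was found
}
```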

What is the difference between verification and validation?

Verification: Verification is the process of ensuring that the software being built matches the original design. It checks whether you built the product right, as per the design.

Validation: Validation is the process of checking whether the product fits the client's needs. It checks whether you built the right thing, i.e. whether the product is designed properly for its intended use.

What are the types of test cases that you write?

We write test cases for smoke testing, integration testing, functional testing, regression testing, load testing, stress testing, system testing and so on.

How to write Integration test cases?

In my experience, when we do the functional testing, most of the integration testing is covered automatically.

How to write Regression test cases? What are the criteria?

Regression test cases are also based on the requirement documents.

What is Test Harness?

In software testing, a test harness or automated test framework is a collection of software and test data configured to test a program unit by running it under varying conditions and monitoring its behavior and outputs. It has two main parts: the test execution engine and the test script repository.
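
A toy sketch of those two parts; the "scripts", the unit under test and all names are made up for illustration:

```java
import java.util.Map;

public class MiniHarness {

    // "Test script repository": named inputs mapped to expected outputs.
    static final Map<String, Integer> SCRIPTS = Map.of("2+2", 4, "3+5", 8);

    // Program unit under test.
    static int add(String expression) {
        String[] parts = expression.split("\\+");
        return Integer.parseInt(parts[0]) + Integer.parseInt(parts[1]);
    }

    // "Test execution engine": runs every script, monitors the outputs
    // and reports pass/fail.
    public static void main(String[] args) {
        SCRIPTS.forEach((input, expected) -> {
            int actual = add(input);
            System.out.printf("%s -> %d [%s]%n", input, actual,
                    actual == expected ? "PASS" : "FAIL");
        });
    }
}
```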

What are the different metrics/reports that you follow?

There are various reports we normally prepare in QA:
· Test Summary Report – a report listing the total test cases, the executed test cases, the remaining test cases to be executed, the execution date, and pass/fail status.
· Defect Report – a list of defects, normally prepared in a spreadsheet, e.g. defect # CQ12345 (if you log defects in a tool such as Rational ClearQuest).
· Traceability Matrix

What is parallel/audit testing?

Parallel/audit testing is testing where the user reconciles the output of the new system to the output of the current system to verify the new system performs the operations correctly.

What is software testing methodology?

One software testing methodology is the use of a three-step process of…
1. Creating a test strategy;
2. Creating a test plan/design; and
3. Executing tests.
This methodology can be used and molded to your organization's needs. Rob Davis believes that using this methodology is important both during development and during the ongoing maintenance of his customers' applications.

What is the general testing process?

The general testing process is the creation of a test strategy (which sometimes includes the creation of test cases), creation of a test plan/design (which usually includes test cases and test procedures) and the execution of tests.

How do you create a test strategy?

The test strategy is a formal description of how a software product will be tested. A test strategy is developed for all levels of testing, as required. The test team analyzes the requirements, writes the test strategy and reviews the plan with the project team. The test plan may include test cases, conditions, the test environment, a list of related tasks, pass/fail criteria and risk assessment.
Inputs for this process:
· A description of the required hardware and software components, including test tools. This information comes from the test environment, including test tool data.
· A description of the roles and responsibilities of the resources required for the test, and schedule constraints. This information comes from man-hours and schedules.
· Testing methodology. This is based on known standards.
· Functional and technical requirements of the application. This information comes from the requirements, change requests, and technical and functional design documents.
· Requirements that the system cannot meet, e.g. system limitations.
Outputs for this process:
· An approved and signed-off test strategy document and test plan, including test cases.
· Testing issues requiring resolution. Usually this requires additional negotiation at the project management level.

How do you create a test plan/design?

Test scenarios and/or cases are prepared by reviewing functional requirements of the release and preparing logical groups of functions that can be further broken into test procedures. Test procedures define test conditions, data to be used for testing and expected results, including database updates, file outputs, report results. Generally speaking…
Test cases and scenarios are designed to represent both typical and unusual situations that may occur in the application.
Test engineers define unit test requirements and unit test cases. Test engineers also execute unit test cases.
It is the test team that, with assistance of developers and clients, develops test cases and scenarios for integration and system testing.
Test scenarios are executed through the use of test procedures or scripts.
Test procedures or scripts define a series of steps necessary to perform one or more test scenarios.
Test procedures or scripts include the specific data that will be used for testing the process or transaction.
Test procedures or scripts may cover multiple test scenarios.
Test scripts are mapped back to the requirements and traceability matrices are used to ensure each test is within scope.
Test data is captured and baselined prior to testing. This data serves as the foundation for unit and system testing and is used to exercise system functionality in a controlled environment.
Some output data is also baselined for future comparison. Baselined data is used to support future application maintenance via regression testing.
A pretest meeting is held to assess the readiness of the application and the environment and data to be tested. A test readiness document is created to indicate the status of the entrance criteria of the release.
Inputs for this process:
Approved Test Strategy Document.
Test tools, or automated test tools, if applicable.
Previously developed scripts, if applicable.
Test documentation problems uncovered as a result of testing.
A good understanding of software complexity and module path coverage, derived from general and detailed design documents, e.g. software design document, source code and software complexity data.
Outputs for this process:
Approved documents of test scenarios, test cases, test conditions and test data.
Reports of software design issues, given to software developers for correction.

How do you execute tests?

Execution of tests is completed by following the test documents in a methodical manner. As each test procedure is performed, an entry is recorded in a test execution log to note the execution of the procedure and whether or not the test procedure uncovered any defects. Checkpoint meetings are held throughout the execution phase, daily if required, to address and discuss testing issues, status and activities.

The output from the execution of test procedures is known as test results. Test results are evaluated by test engineers to determine whether the expected results have been obtained. All discrepancies/anomalies are logged and discussed with the software team lead, hardware test lead, programmers and software engineers, and documented for further investigation and resolution. Every company has a different process for logging and reporting bugs/defects uncovered during testing.

Pass/fail criteria are used to determine the severity of a problem, and results are recorded in a test summary report. The severity of a problem found during system testing is defined in accordance with the customer's risk assessment and recorded in their selected tracking tool. Proposed fixes are delivered to the testing environment based on the severity of the problem. Fixes are regression tested and flawless fixes are migrated to a new baseline. Following completion of the test, members of the test team prepare a summary report. The summary report is reviewed by the Project Manager, Software QA Manager and/or Test Team Lead.
After a particular level of testing has been certified, it is the responsibility of the Configuration Manager to coordinate the migration of the release software components to the next test level, as documented in the Configuration Management Plan. The software is only migrated to the production environment after the Project Manager’s formal acceptance.

How do you divide the application into different sections to create scripts?

First of all, the application is already divided into different parts when the business analyst writes the requirement documents (or use cases or design documents): he/she writes a separate requirement document for each module, so the scripts can be organized module by module.

What is a ‘Show Stopper’?

A show stopper is a defect or bug that prevents the user (or the tester) from taking any further action. It has no workaround.

QA (eng)

QA terms and questions

What are different types of software testing?

Note: Except for shakeout testing and unit testing, which are done by the Configuration Management Team and the coder/developer respectively, all other testing is done by the QA Engineer (tester).

1) Unit testing: a test to check whether the code works properly as per the requirement. It is done by the developers (not testers).

2) Shakeout testing: This test is basically carried out to check the networking facility, database connectivity and the integration of modules. (It is done by the Configuration Team)

3) Smoke testing: an initial set of tests to check whether the major functionalities are working and whether there are any major breakdowns in the application. It is the preliminary test carried out by the SQA tester.

4) Functional testing: a test to check whether each and every functionality of the application works as per the requirement. It is the major test, where roughly 80% of the testing effort is spent. In this test the test cases are 'executed'.

5) Integration testing: a test to check whether all the modules are combined together and work successfully as specified in the requirements.

6) Regression testing: When a functionality is added to an application, we need to make sure that the newly added functionality does not break the application. To make sure of that, we perform repeated testing, which is called regression testing. We also do regression testing after the developers fix bugs.

7) System testing: testing based on the overall requirements specification, covering all the combined parts of a system. It is a black-box type of testing, performed by the Test Team, and at the start of system testing the complete system is configured in a controlled environment. System testing simulates real-life scenarios in a "simulated real life" test environment and tests all functions of the system that are required in real life. System testing is started upon completion of integration testing; before system testing, all unit and integration test results are reviewed by Software QA to ensure all problems have been resolved.

8) Load testing: a test to check the users' response time when a number of users use any one scenario (a single business process) of the same application at the same time.

9) Stress testing: In this type of testing the application is tested against heavy load such as complex numerical values, large number of inputs, large number of queries etc. which checks for the stress/load the applications can withstand.

10) Performance testing: a test to check the users' response time when a number of users use multiple scenarios (multiple business processes) of the same application at the same time.

11) User acceptance testing: In this type of testing, the software is handed over to the user in order to find out if the software meets the user expectations and works as it is expected to.

12) Black box testing: a test where the tester performs testing without looking into the code. In other words, a testing method where the application under test is viewed as a black box and the internal behavior of the program is completely ignored; testing occurs based upon the external specifications. It is also known as behavioral testing, since only the external behavior of the program is evaluated and analyzed.

13) White box testing: It is a test where a tester looks into the code and performs the testing.

14) Alpha testing: in this type of testing, users are invited to the development center, where they use the application while the developers note every particular input or action carried out by the users. Any abnormal behavior of the system is noted and rectified by the developers.

15) Beta testing: in this type of testing, the software is distributed as a beta version to users, and users test the application at their own sites. As the users explore the software, any exception/defect that occurs is reported to the developers.

16) Acceptance testing: Is black box testing that gives the client/customer/project manager the opportunity to verify the system functionality and usability prior to the system being released to production. The acceptance test is the responsibility of the client/customer or project manager, however, it is conducted with the full support of the project team. The test team also works with the client/customer/project manager to develop the acceptance criteria.

17) Recovery/error testing: Is testing how well a system recovers from crashes, hardware failures, or other catastrophic problems.

18) Security/penetration testing: Is testing how well the system is protected against unauthorized internal or external access, or willful damage. This type of testing usually requires sophisticated testing techniques.

19) Compatibility testing: Is testing how well software performs in a particular hardware, software, operating system, or network environment.

20) Comparison testing:  Is testing that compares software weaknesses and strengths to those of competitors’ products.

21) Incremental testing: After unit testing is completed, the developer performs integration testing, which is the process of verifying the interfaces and the interaction between modules. While integrating, developers use a number of techniques, and one of them is the incremental approach. In incremental integration testing, the developers integrate the modules one by one, using stubs or drivers to uncover defects, as sketched below. In contrast, big bang is another integration testing technique, where all the modules are integrated in one shot.
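
A small sketch of the stub idea; all class names here are hypothetical: an `OrderService` module is integration-tested before the real `PaymentGateway` module is ready, so a stub stands in for it.

```java
import static org.testng.Assert.assertTrue;

import org.testng.annotations.Test;

public class IncrementalIntegrationTest {

    // Interface of a module that has not been integrated yet.
    interface PaymentGateway {
        boolean charge(double amount);
    }

    // Module under test, which depends on the not-yet-integrated module.
    static class OrderService {
        private final PaymentGateway gateway;
        OrderService(PaymentGateway gateway) { this.gateway = gateway; }
        boolean placeOrder(double amount) { return gateway.charge(amount); }
    }

    @Test
    public void orderUsesStubbedPaymentGateway() {
        // A stub replaces the real PaymentGateway until it is integrated.
        PaymentGateway stub = amount -> true;
        OrderService service = new OrderService(stub);
        assertTrue(service.placeOrder(99.0));
    }
}
```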

22) End-to-end testing: End-to-end testing is a technique used to test whether the flow of an application right from start to finish is behaving as expected. The purpose of performing end-to-end testing is to identify system dependencies and to ensure that the data integrity is maintained between various system components and systems. The entire application is tested for critical functionalities such as communicating with the other systems, interfaces, database, network, and other applications.

23) Sanity testing: a software testing technique in which the test team performs some basic tests whenever a new build is received for testing. The terms Smoke Test, Build Verification Test, Basic Acceptance Test and Sanity Test are used interchangeably, although each is used in a slightly different scenario. A sanity test is usually unscripted and helps to identify missing dependent functionalities. It is used to determine whether a section of the application is still working after a minor change. Sanity testing is narrow and deep: it is a narrow regression test that focuses on one or a few areas of functionality.

24) Usability testing: Usability testing is a way to see how easy to use something is by testing it with real users. Users are asked to complete tasks, typically while they are being observed by a researcher, to see where they encounter problems and experience confusion.

25) Install/uninstall testing: Installation testing is performed to verify that the software has been installed with all the necessary components and that the application works as expected. This is very important, as installation is the end user's first interaction with the product. Companies launch a beta version just to ensure a smoother transition to the actual product. Uninstallation testing is performed to verify that all the components of the application are removed during the process. All the files related to the application, along with its folder structure, have to be removed upon successful uninstallation, and after uninstallation the system should be able to go back to a stable state.

26) Exploratory testing, ad-hoc testing: Exploratory testing is a hands-on approach in which testers are involved in minimum planning and maximum test execution. The planning involves the creation of a test charter, a short declaration of the scope of a short (1 to 2 hour) time-boxed test effort, the objectives and possible approaches to be used.

27) Mutation testing: a type of software testing where we mutate (change) certain statements in the source code and check whether the test cases are able to find the errors. It is a type of white box testing which is mainly used for unit testing. The changes in the mutant program are kept extremely small, so they do not affect the overall objective of the program. The goal of mutation testing is to assess the quality of the test cases, which should be robust enough to fail the mutant code, as illustrated below. This method is also called a fault-based testing strategy, as it involves deliberately creating faults in the program.
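
A tiny illustration with made-up names: a test that uses the boundary value 18 passes against the original statement but detects (kills) a mutant in which `>=` was changed to `>`.

```java
import static org.testng.Assert.assertFalse;
import static org.testng.Assert.assertTrue;

import org.testng.annotations.Test;

public class MutationExampleTest {

    // Original statement.
    static boolean isAdult(int age) {
        return age >= 18;
    }

    // Mutant: the mutation operator changed >= to >.
    static boolean isAdultMutant(int age) {
        return age > 18;
    }

    @Test
    public void boundaryTestKillsTheMutant() {
        // The boundary value 18 passes against the original code...
        assertTrue(isAdult(18));
        // ...and exposes the mutant, so the test case is strong enough
        // to kill this mutation.
        assertFalse(isAdultMutant(18));
    }
}
```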

What is Negative Testing?

Testing the system or application using negative (invalid) data is called negative testing. When we test an application with negative values instead of valid ones, the system should not accept them and should give a message that the value is not correct. For example, if a password must be at least 8 characters, entering a 6-character password should display a message.
Another example is, if a user tries to type a letter in a numeric field, the correct behavior in this case would be to display the “Incorrect data type, please enter a number” message. The purpose of negative testing is to detect such situations and prevent applications from crashing. Also, negative testing helps you improve the quality of your application and find its weak points. (source: Jerry Ruban)

What is a Test Plan?

A Test Plan is a document describing the scope, approach, resources, and schedule of the intended testing activities. It identifies the test items, the features to be tested, the testing tasks, who will do each task (roles and responsibilities), and any risks and their mitigation.

A Test Plan includes a heading, revision history, table of contents, introduction, scope, approach, overview, the different types of testing that will be carried out, what software and hardware will be required, issues, risks, assumptions and a sign-off section.

QA (eng)

Test Framework build 1.0

So, my latest and greatest framework build is ready (available on Bitbucket; I could share access, but it contains no core test methods, because they come from my work and I can't share them). This is a link to the overall description of the very first version of it. The framework is available to every QA in my company, and instructions, manuals, key method descriptions and examples are provided, for example:

  • Which method validates that alerts and warnings exist (with the expected message) and how to close them.
  • Which method hides/shows grid columns.
  • Which method checks special characters, too-long values and empty values entered into text fields.
  • Which method adds and validates input for date or number fields, and how.
  • Which method opens the row menu.
  • Which method validates that a value exists in the grid (a rough sketch of this one follows below).
  • And tens of others.
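
For example, the grid-value check from the list above might look roughly like this (the CSS locator and the method name are assumptions, not the actual framework code):

```java
import java.util.List;

import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.WebElement;

public class GridHelpers {

    // Sketch of "validate that a value exists in the grid": scan every
    // grid cell (the locator is an assumption) and return true on a match.
    public static boolean gridContainsValue(WebDriver driver, String expected) {
        List<WebElement> cells = driver.findElements(By.cssSelector(".grid td"));
        return cells.stream().anyMatch(cell -> expected.equals(cell.getText().trim()));
    }
}
```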

I have successfully set up a TeamCity project to run test suites for some of our components, and every QA can create tests and upload them to the test library.


QA (eng)

Test Suite Report Builder Comparison

Here is a short comparison of the most popular ways to build a test report for a TestNG + Java framework (with a comparison table at the end of the post).

Log4J

This is a pretty simple logger which writes the test suite execution output to the Windows console, the Eclipse console and a text file.

There is no screenshot capturing or easy navigation between multiple suites, so this is just a huge text file.


QA (eng)

Test Framework (Java, TestNG, ANT, etc)

Introduction to Automation testing:

Testing is an essential part of the software development process. While testing intermediate versions of the products/projects being developed, the testing team needs to execute a number of test cases. In addition, prior to releasing every new version, it is mandatory that the version passes through a set of "regression" and "smoke" tests. Most of these tests are standard for every new version of a product/project and can therefore be automated in order to save the human resources and time needed to execute them.

Benefits of using automated testing are the following:

  • Reduction of test execution time and of the human resources required
  • Complete control over the tests' results ("actual results" vs "expected results")
  • The possibility to quickly change test preconditions and input data, and to re-run the tests dynamically with multiple sets of data

Automation workflow for the application can be presented as follows:

  • First of all, identify the tasks that the application has to accomplish.
  • Second, create the set of necessary input data.
  • Third, define the expected results, so that one can judge whether the application (the requested feature) works correctly.
  • Fourth, execute the test.
  • Finally, compare the expected results with the actual results and decide whether the test has passed (a minimal sketch of these steps follows below).
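
The same five steps in a minimal Selenium WebDriver + TestNG sketch; the URL, expected title and browser choice are placeholders, not a real test from the framework:

```java
import static org.testng.Assert.assertEquals;

import org.openqa.selenium.WebDriver;
import org.openqa.selenium.chrome.ChromeDriver;
import org.testng.annotations.AfterClass;
import org.testng.annotations.BeforeClass;
import org.testng.annotations.Test;

public class WorkflowExampleTest {

    private WebDriver driver;

    @BeforeClass
    public void setUp() {
        // Steps 1-2: identify the task and prepare the environment / input data.
        driver = new ChromeDriver();
    }

    @Test
    public void pageTitleMatchesExpectation() {
        // Step 3: the expected result is defined up front.
        String expectedTitle = "Example Domain";
        // Step 4: execute the test.
        driver.get("https://example.com");
        // Step 5: compare actual with expected and decide pass/fail.
        assertEquals(driver.getTitle(), expectedTitle);
    }

    @AfterClass
    public void tearDown() {
        driver.quit();
    }
}
```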

Goal:

The goal is to create a flexible and extendable automated testing framework which expands test coverage to as many solutions as possible. The framework must have input and output channels and a library of methods for working with the UI.

Environment Specifications:

  • Selenium WebDriver. Selenium is a suite of tools for cross-platform automated testing of web applications. It runs on many browsers and operating systems and can be controlled by many programming languages and testing frameworks. Selenium WebDriver is a functional automation tool for automating applications; it makes direct calls to the browser using each browser's native support for automation.
  • Eclipse IDE. Eclipse is an integrated development environment (IDE) used in computer programming, and is the most widely used Java IDE. It contains a base workspace and an extensible plug-in system for customizing the environment. Eclipse is written mostly in Java and its primary use is for developing Java applications.
  • Java.
  • TestNG. A testing framework inspired by JUnit and NUnit, with new functionality that makes it more powerful and easier to use than those frameworks. It supports the ReportNG (simple HTML reporting) and XSLT (graphical/pictorial reports) plug-ins to customize or extend the default TestNG reporting style. TestNG also provides the 'IReporter' interface, which can be implemented to generate a customized TestNG report; its 'generateReport()' method is invoked after all suites have completed execution and writes the report into the specified output directory (a minimal reporter sketch appears after this list).
  • Apache Ant is a Java library and command-line tool whose mission is to drive processes described in build files as targets and extension points dependent upon each other. The main known usage of Ant is the build of Java applications.
  • AutoIt, a tool used to handle Windows popups for document uploads and downloads.
  • Apache POI to perform operations with Excel, such as reading, writing and updating spreadsheets.
  • WebDriver, a driver that provides a programming interface for controlling all kinds of possible actions in the browser.
  • Selenium TakesScreenshot to take a screenshot in case of error.
  • Log4j is a reliable, fast and flexible logging framework (APIs) written in Java, which is distributed under the Apache Software License.
  • JDBC. Java Database Connectivity (JDBC) is an application programming interface (API) for the programming language Java, which defines how a client may access a database. It is part of the Java Standard Edition platform, from Oracle Corporation.
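
As a rough illustration of the `IReporter` hook mentioned in the TestNG item above, here is a minimal custom reporter; the plain console summary is my own choice of format:

```java
import java.util.List;

import org.testng.IReporter;
import org.testng.ISuite;
import org.testng.ISuiteResult;
import org.testng.xml.XmlSuite;

public class ConsoleSummaryReporter implements IReporter {

    // Invoked by TestNG after all suites have finished executing.
    @Override
    public void generateReport(List<XmlSuite> xmlSuites, List<ISuite> suites,
                               String outputDirectory) {
        for (ISuite suite : suites) {
            for (ISuiteResult result : suite.getResults().values()) {
                System.out.printf("%s: %d passed, %d failed, %d skipped%n",
                        suite.getName(),
                        result.getTestContext().getPassedTests().size(),
                        result.getTestContext().getFailedTests().size(),
                        result.getTestContext().getSkippedTests().size());
            }
        }
    }
}
```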


QA (eng)

Test Suites for Simple Chat Application (REST API)

Introduction

This is a framework with test suites which allows me to test my Simple Chat Application UI in 5 minutes.

[Screenshot: test app]

I can provide Bitbucket project access upon request, and this is a link to the original chat application.

Short Demo

Architecture

  • TestNG and Java
  • WebDriver is the main test library
  • JDBC to work with the DB (clean up test data)
  • ANT to display test results
  • Test suites and data are stored as XLS files, and their status is updated with every run (a data-provider sketch follows below)
  • Easy to use with Jenkins
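
A rough sketch of how XLS-stored test data might be fed into a TestNG data provider with Apache POI; the file name and column layout are assumptions:

```java
import java.io.File;
import java.util.ArrayList;
import java.util.List;

import org.apache.poi.ss.usermodel.Row;
import org.apache.poi.ss.usermodel.Sheet;
import org.apache.poi.ss.usermodel.Workbook;
import org.apache.poi.ss.usermodel.WorkbookFactory;
import org.testng.annotations.DataProvider;

public class XlsTestData {

    // Reads "message" / "expected result" pairs from the first sheet.
    @DataProvider(name = "chatMessages")
    public static Object[][] chatMessages() throws Exception {
        try (Workbook wb = WorkbookFactory.create(new File("testdata/chat_messages.xls"))) {
            Sheet sheet = wb.getSheetAt(0);
            List<Object[]> rows = new ArrayList<>();
            for (Row row : sheet) {
                if (row.getRowNum() == 0) {
                    continue; // skip the header row
                }
                rows.add(new Object[] {
                        row.getCell(0).getStringCellValue(),
                        row.getCell(1).getStringCellValue()
                });
            }
            return rows.toArray(new Object[0][]);
        }
    }
}
```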

Framework example to begin with

Step-by-step instructions on how to create the framework

QA (eng)

Documents generator

It's no secret that a huge number of different applications share a lot of common features. That's why it would be very useful to collect all these documents in one repository and make it open to everyone. In this repository, analysts would be able to collect their specifications, vote for other specifications, and search for the best of them (according to the previous votes of all users) for reuse.

I see several difficulties in solving this task:

  1. All of my documents are confidential and I have no opportunity to share them. But I have some documentation that was created for my own ideas, and of course I can share that (its only minus is that it was written in my native language, which is not English).
  2. Not all analysts are ready to share their documents, because it is like teaching your competitors.
  3. It can be quite hard to understand what was written in each document, because each company has its own standard (in Russia everyone tries to prove that they work according to RUP, IEEE, K. Wiegers' books, …, but, as I have seen, that is not true).
  4. And the last one: people are too lazy to post something that is, moreover, connected with their job.

I decided that the first step should solve the problem of creating standardized specifications, and the easiest way is to create an application which does not allow describing window/web frames in different ways (that is why I declined the idea of describing fields in Excel or Word documents).

At this point I was inspired by IBM Web Content Manager (WCM). It took about one hour to understand how to create web forms with a number of fields and their properties (such as type, length and so on). I don't know whether it would be correct to add screenshots here, but it will be easy for you to find them anyway. According to the WCM work process, it is enough for the analyst to specify how/where fields should be displayed in the GUI, and their properties. After that, the developer's work is reduced by a factor of 2-3, but this is not the first step for me, and, furthermore:

  1. WCM cannot generate all of the specifications we need for our projects.
  2. It is too expensive while we are working only on non-portal solutions.
  3. It works only with web frames and cannot include tables and so on.

Why I decided to build yet another application on this theme

I'm teaching some students (private courses), and they should understand the following important things, which are easy to show on a working application that can be modified immediately:

  1. How to collect requirements
  2. How to provide impact analysis
  3. How to improve usability and make application user friendly
  4. How to work with non-functional requirements
  5. What attributes each field or table should have
  6. How to prepare a test plan for application

 Domain model

After my initial analysis of the problem, I'll enumerate the primary features of an application that will be able to generate this kind of specification:

  1. Creating GUI description
  2. Adding fields and table parts
  3. Adding checks for input data
  4. Adding formulas for calculated fields
  5. Generating text description

This generator should be available from the web, or the user should be able to install it on his PC without configuring any database. That is why it is appropriate to save all the necessary data in XML files.

Domain model:


In fact, I want a project to contain many frames/GUIs, and I want to be able to reuse one frame in another project. Each frame should consist of fields (with the option of copying a field and its properties to another frame) and tables. Tables should consist of many frames and have their own attributes (for example, whether they are editable or not). And all of the verification procedures should be cross-project entities. A rough sketch of this model follows below.
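
A rough sketch of that domain model as plain Java classes; this is my reading of the description above, not the actual implementation:

```java
import java.util.ArrayList;
import java.util.List;

// Sketch of the entities described above (names are mine).
class Project {
    String name;
    List<Frame> frames = new ArrayList<>();   // frames can be reused in other projects
}

class Frame {
    String title;
    List<Field> fields = new ArrayList<>();   // fields and their properties can be copied to another frame
    List<Table> tables = new ArrayList<>();
}

class Field {
    String name;
    String type;                              // e.g. text, number, date
    int length;
    List<Check> checks = new ArrayList<>();   // checks for input data
    String formula;                           // for calculated fields, may be null
}

class Table {
    String name;
    boolean editable;                         // a table-level attribute
    List<Frame> frames = new ArrayList<>();   // per the description, a table consists of many frames
}

// Verification procedures are cross-project entities.
class Check {
    String name;
    String description;
}
```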

Version 0.1

Today I made my first version which, of course, is not yet ready to use in my department. Nevertheless, the application already has an export-to-XLS function, and it is quite easy to understand how to work with it.


Day by day I'm planning to improve its quality and the number of useful features. I will add a description of my frames for business analysts and rework the dropdown lists so that frames and fields which have already been added are excluded. I have been working as an analyst and QA for 6 years, so I have a comprehensive view of my program and of all the processes which I'm planning to automate.

However, you will be able to download my application soon. It consists of an executable JAR file and a BAT file; the BAT file helps you start the application. I didn't want to make users install any additional software or database engines, which is why the application uses only XML files for storing information.
