
QA Terminologies
This document provides a comprehensive glossary of commonly used terms and concepts in the field of Quality Assurance (QA). It serves as a quick reference guide for QA professionals, developers, project managers, and anyone involved in the software development lifecycle. Understanding these terms is crucial for effective communication, collaboration, and the successful delivery of high-quality software products.
A
Acceptance Criteria: The pre-established standards or requirements that a system or component must meet to be accepted by a user, customer, or other authorized entity. These criteria define the conditions under which a user will accept a deliverable.
Acceptance Testing: Formal testing conducted to determine whether a system satisfies its acceptance criteria and enables the user, customer, or other authorized entity to determine whether or not to accept the system. It's often the final phase of testing before release.
Accessibility Testing: Testing to ensure that a software application is usable by people with disabilities, including visual, auditory, motor, and cognitive impairments.
Agile Testing: A testing practice that follows the principles of agile software development. It emphasizes continuous testing, collaboration, and responsiveness to change.
Alpha Testing: Testing conducted at the developer's site by a team of internal testers or potential users. It's often performed in a simulated or controlled environment.
Anomaly: Any condition that deviates from expectations based on requirements specifications, design documents, user documents, standards, etc., or from someone's perception or experience. Anomalies can be defects, bugs, errors, or deviations from expected behavior.
Automation Testing: Testing performed using automated testing tools and scripts to execute test cases, compare actual results with expected results, and report test outcomes. It's used to improve efficiency and reduce manual effort.
B
Backend Testing: Testing the server-side components of an application, including databases, APIs, and business logic. It focuses on data integrity, performance, and security.
Behavior-Driven Development (BDD): A software development process that encourages collaboration between developers, testers, and business stakeholders. BDD uses plain language descriptions of system behavior to define acceptance criteria and drive development.
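BDD frameworks such as Cucumber or Behave turn these plain-language descriptions into executable Given/When/Then scenarios. Below is a minimal sketch of the style using plain pytest; the Cart class and the scenario itself are hypothetical and exist only to illustrate the structure.

```python
# Scenario: adding an item to an empty shopping cart
# (hypothetical Cart class, used purely for illustration)

class Cart:
    def __init__(self):
        self.items = []

    def add(self, name, price):
        self.items.append((name, price))

    def total(self):
        return sum(price for _, price in self.items)


def test_adding_an_item_updates_the_total():
    # Given an empty cart
    cart = Cart()

    # When the user adds a book costing 12.50
    cart.add("book", 12.50)

    # Then the cart contains one item and the total is 12.50
    assert len(cart.items) == 1
    assert cart.total() == 12.50
```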
Beta Testing: Testing conducted by a limited number of end-users in a real-world environment. It provides feedback on usability, functionality, and performance before the software is released to the general public.
Black Box Testing: Testing without knowledge of the internal structure or code of the system being tested. Testers focus on input and output, treating the system as a "black box."
Bug: A defect in the code that causes the software to behave in an unintended or unexpected way.
Bug Report: A document that describes a bug, including its symptoms, steps to reproduce it, and any relevant information that can help developers fix it.
Build: A version of the software compiled from a specific snapshot of the source code and made available for testing.
C
Change Request: A formal proposal for a change to a software system, including its requirements, design, or implementation.
Code Coverage: A measure of the extent to which the source code of a program has been tested. It indicates the percentage of code lines, branches, or paths that have been executed during testing.
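As an illustration (the clamp_to_zero function and its tests are hypothetical), a coverage tool such as coverage.py or pytest-cov would report that the first test alone leaves one line and one branch unexecuted, and that adding the second test brings the function to full coverage:

```python
def clamp_to_zero(value):
    # Replace negative values with zero; pass non-negative values through.
    if value < 0:
        return 0
    return value


def test_negative_value_is_clamped():
    # This test alone executes the `if` line and `return 0`,
    # but never reaches `return value`, so coverage is incomplete.
    assert clamp_to_zero(-5) == 0


def test_positive_value_passes_through():
    # Adding this test executes the remaining line and branch,
    # bringing line and branch coverage of clamp_to_zero to 100%.
    assert clamp_to_zero(7) == 7
```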
Compatibility Testing: Testing to ensure that a software application works correctly with different hardware, software, operating systems, browsers, and network configurations.
Component Testing: Testing individual software components or modules in isolation. It verifies that each component functions correctly according to its specifications.
Configuration Management: The process of tracking and controlling changes to software, hardware, documentation, and other components of a system.
Continuous Integration (CI): A development practice where code changes are frequently integrated into a shared repository and automatically built and tested.
Critical Bug: A bug that causes a major system failure or data loss, rendering the software unusable.
D
Data-Driven Testing: A testing technique where test data is stored in a separate file or database and used to drive test execution. It allows testers to easily test the same functionality with different sets of data.
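A minimal sketch of the technique, assuming pytest; the validation rule is hypothetical, and in a real suite the data rows would often be loaded from a CSV file, spreadsheet, or database:

```python
import pytest


def is_valid_username(name):
    # Hypothetical rule used for illustration: 3-12 alphanumeric characters.
    return name.isalnum() and 3 <= len(name) <= 12


@pytest.mark.parametrize(
    "username, expected",
    [
        ("alice", True),       # typical valid name
        ("ab", False),         # too short
        ("x" * 13, False),     # too long
        ("bad name!", False),  # illegal characters
    ],
)
def test_username_validation(username, expected):
    # The same test logic runs once per data row.
    assert is_valid_username(username) == expected
```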
Debugging: The process of identifying and fixing bugs in the code.
Defect: An imperfection or deficiency in a work product where that work product does not meet its requirements or specifications.
Defect Density: The number of defects found in a software component or system relative to its size, typically expressed as defects per unit of size; for example, 25 defects in 10,000 lines of code gives a density of 2.5 defects per thousand lines of code (KLOC).
Defect Life Cycle: The process that a defect goes through from the time it is discovered until it is resolved.
Deployment Testing: Testing the software in the production environment after it has been deployed.
Documentation Testing: Testing the accuracy, completeness, and clarity of software documentation, including user manuals, installation guides, and API documentation.
E
End-to-End Testing: Testing the entire software system from start to finish, simulating real-world user scenarios. It verifies that all components of the system work together correctly.
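A sketch of a browser-driven end-to-end check, assuming Selenium WebDriver with Chrome; the URL, element IDs, and expected page title are hypothetical:

```python
from selenium import webdriver
from selenium.webdriver.common.by import By

# The URL and element IDs below are invented for the example; a real test
# would target the application under test in a dedicated test environment.
driver = webdriver.Chrome()
try:
    driver.get("https://example.com/login")
    driver.find_element(By.ID, "username").send_keys("test.user")
    driver.find_element(By.ID, "password").send_keys("secret")
    driver.find_element(By.ID, "login-button").click()

    # Verify that the full journey (UI, backend, database) produced the
    # expected end state for the user.
    assert "Dashboard" in driver.title
finally:
    driver.quit()
```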
Exploratory Testing: A testing approach where testers explore the software without predefined test cases, using their knowledge and intuition to discover defects.
F
Functional Testing: Testing that verifies that the software functions correctly according to its specifications. It focuses on the features and functions of the software.
G
GUI Testing: Testing the graphical user interface (GUI) of a software application to verify that screens, menus, buttons, and other visual elements display and behave as specified, and that the interface is consistent and easy to use.
H
Happy Path Testing: Testing the most common or expected user scenarios to ensure that the software functions correctly under normal conditions.
I
Integration Testing: Testing the interaction between different software components or systems. It verifies that the components work together correctly.
L
Load Testing: Testing the software's ability to handle a specific load or workload. It measures the system's performance under normal and peak load conditions.
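A minimal sketch assuming the Locust load-testing library; the host and endpoints are hypothetical. Locust spawns the requested number of simulated users, each repeatedly executing the weighted tasks, and reports response times and failure rates:

```python
from locust import HttpUser, task, between


class WebsiteUser(HttpUser):
    # Each simulated user waits 1-3 seconds between tasks.
    wait_time = between(1, 3)

    @task(3)
    def view_homepage(self):
        self.client.get("/")

    @task(1)
    def search(self):
        self.client.get("/search", params={"q": "laptop"})

# Example run (hypothetical host):
#   locust -f loadtest.py --host https://staging.example.com
```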
M
Maintainability Testing: Testing the ease with which the software can be modified, enhanced, or corrected.
Manual Testing: Testing performed by human testers without the use of automated testing tools.
N
Negative Testing: Testing the software with invalid or unexpected inputs to ensure that it handles errors and exceptions gracefully.
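A small sketch, assuming pytest; the withdraw function is hypothetical. The test deliberately supplies invalid input and asserts that a clear error is raised rather than the failure being ignored:

```python
import pytest


def withdraw(balance, amount):
    # Hypothetical function under test.
    if amount <= 0:
        raise ValueError("amount must be positive")
    if amount > balance:
        raise ValueError("insufficient funds")
    return balance - amount


def test_withdrawing_more_than_the_balance_is_rejected():
    # Negative test: invalid input should raise a clear error,
    # not corrupt the balance or fail silently.
    with pytest.raises(ValueError, match="insufficient funds"):
        withdraw(balance=100, amount=500)
```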
Non-Functional Testing: Testing aspects of the software that are not related to its functionality, such as performance, security, usability, and reliability.
P
Performance Testing: Testing the software's speed, responsiveness, stability, and scalability under various load conditions.
Positive Testing: Testing the software with valid inputs to ensure that it functions correctly under normal conditions.
R
Regression Testing: Testing to ensure that changes to the software have not introduced new defects or broken existing functionality.
Reliability Testing: Testing the software's ability to perform its intended functions without failure for a specified period of time.
Requirements Traceability Matrix (RTM): A document that maps requirements to test cases, ensuring that all requirements are adequately tested.
S
Sanity Testing: A narrow, quick check performed after a minor change or bug fix to confirm that the affected functionality works as expected before more detailed testing is carried out.
Security Testing: Testing to identify vulnerabilities in the software that could be exploited by attackers.
Smoke Testing: A preliminary test to verify that the basic functionalities of the software are working correctly before more extensive testing is performed.
Software Quality Assurance (SQA): A systematic approach to ensuring that software meets specified quality standards and requirements.
Stress Testing: Testing the software's ability to handle extreme load conditions or unexpected events.
System Testing: Testing the entire software system as a whole to ensure that it meets all specified requirements.
T
Test Case: A set of conditions or variables under which a tester determines whether an application, a software system, or one of its features works as designed.
Test Data: The data used to execute test cases.
Test Environment: The hardware, software, and network configuration used to execute tests.
Test Plan: A document that outlines the scope, objectives, resources, and schedule for testing.
Test Script: A set of instructions that are executed by an automated testing tool.
Test Suite: A collection of test cases that are grouped together for testing a specific feature or functionality.
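A minimal sketch using Python's built-in unittest module, grouping two related test cases for a hypothetical login feature into one suite:

```python
import unittest


def authenticate(username, password):
    # Hypothetical function under test.
    return username == "alice" and password == "correct-horse"


class LoginTests(unittest.TestCase):
    def test_valid_credentials_are_accepted(self):
        self.assertTrue(authenticate("alice", "correct-horse"))

    def test_wrong_password_is_rejected(self):
        self.assertFalse(authenticate("alice", "guess"))


def login_suite():
    # Group the related test cases so they can be run together.
    suite = unittest.TestSuite()
    suite.addTest(LoginTests("test_valid_credentials_are_accepted"))
    suite.addTest(LoginTests("test_wrong_password_is_rejected"))
    return suite


if __name__ == "__main__":
    unittest.TextTestRunner(verbosity=2).run(login_suite())
```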
U
Unit Testing: Testing individual units or components of the software in isolation.
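For example, a single function exercised in isolation, assuming pytest; the add_vat helper is hypothetical:

```python
def add_vat(net_price, rate=0.2):
    # Hypothetical helper under test: apply value-added tax to a net price.
    return round(net_price * (1 + rate), 2)


def test_add_vat_applies_default_rate():
    # The unit is exercised in isolation, with no database,
    # network, or other components involved.
    assert add_vat(100) == 120.0
```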
Usability Testing: Testing the ease with which users can learn and use the software.
User Acceptance Testing (UAT): Testing conducted by end-users to determine whether the software meets their needs and expectations.
V
Validation: The process of evaluating software at the end of the development process to ensure that it meets the customer's needs and expectations. "Are we building the right product?"
Verification: The process of evaluating software during the development process to ensure that it meets specified requirements. "Are we building the product right?"
Vulnerability: A weakness in the software that could be exploited by an attacker.
W
White Box Testing: Testing with knowledge of the internal structure or code of the system being tested. Testers use their knowledge of the code to design test cases that cover specific code paths or branches.