Software Testing - Part 1: Core Concepts / Part 2: Test Process (1) / Part 3: Test Process (2)

1️⃣ Core Concepts of Testing
2️⃣ Test Process Details - 1
3️⃣ Test Process Details - 2
Core Concepts of Testing
Impact of Software Defects
What is the purpose of testing? The software we develop is ultimately embedded in final products such as automobiles, aircraft, and mobile devices. A small mistake made by a developer becomes a fault, which then leads to a failure. These failures propagate to the higher-level subsystems that contain the software, then from the subsystem to the system, and finally to the end product, ultimately becoming a major hazard that leads to an accident. The result is harm: casualties, economic losses, and environmental disasters. In this way, a small error originating in software can escalate, through fault propagation, into a critical problem. When a fault arises from a small human mistake, the primary goal of testing is to detect that fault. By discovering faults through testing, they can be contained rather than propagated to higher-level systems, ultimately reducing the likelihood of accidents. For this reason, testing must be conducted systematically by a third party based on predefined test cases. This is the purpose and role of testing.
[Reference] Difference Between Testing and Debugging
Testing and debugging are easily confused, but they are clearly distinct concepts with different purposes and roles.
The primary purpose of testing is to discover unknown faults. Debugging, on the other hand, aims to accurately correct already known faults identified through testing. There is also a difference in terms of who is responsible. Testing can be performed by internal team members, but a third party such as an external test team can discover more unknown faults by approaching the system from a different perspective. Debugging, however, is best handled by internal developers who are familiar with the system, as it requires locating and correcting known faults.
The key activities also differ. The core activity of testing is fault detection, and test cases must be prepared in advance to carry this out systematically. In debugging, the first step is fault localization: identifying the exact location of the fault, such as which bit in memory is affected or between which modules in a connected system the fault occurred. This is followed by fault identification, which involves determining the type of fault, such as whether it is a compilation error or a logical error. Finally, fault correction, the act of properly fixing the identified fault, is the concluding activity of debugging.
In conclusion, testing is the activity of finding faults without prior knowledge of what went wrong, while debugging is the activity of precisely fixing faults with full knowledge of what the problem is.
Error, Defect, and Failure: Terminology
In the context of software quality, it is important to clearly distinguish between three terms: Error, Defect, and Failure.
An error is what causes a defect: a mistake made by a person, typically a developer or analyst. In other words, it refers to the incorrect human action itself. An error leads to a defect, also referred to as a fault or bug.
A defect is a flaw embedded in a product as a result of an error, and it becomes the root cause of failures or problems. That is, if an error is the human act, then a defect is the flaw left behind in the code or artifact as a consequence of that act.
A failure is the state of malfunction that manifests when the system is actually executed due to an underlying defect. In other words, a failure is the phenomenon in which a latent defect surfaces in the runtime environment.
In summary, a human mistake known as an error produces a defect embedded in the product, and that defect manifests as a failure during system execution. The three terms are linked in a cause-and-effect chain, and understanding each one clearly is fundamental to software quality management.
Example of Error, Defect, and Failure
The concepts of Error, Defect, and Failure in software quality engineering can be illustrated through a concrete example. Consider the following simple pseudocode: Speed = Distance / Time. In this code, Time is positioned in the denominator of the division operation. If the value of Time becomes 0, a divide-by-zero exception will occur.
Mapping the three terms to this scenario makes each concept clear. First, the error is the developer's failure to consider the case where Time could be 0 — a mistake that occurred in the developer's thinking. Next, the defect is the resulting absence of exception-handling code in the program to address the case where Time equals 0 — a flaw reflected in the code itself. Finally, the failure is the occurrence of a Divide By Zero Exception when the program is actually executed with a Time value of 0 — the actual malfunction of the system.
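To make the cause-and-effect chain concrete, here is a minimal Python sketch of the scenario above; the function names are illustrative rather than taken from any real system.

```python
# Illustrative sketch of the error -> defect -> failure chain.

def compute_speed(distance: float, time: float) -> float:
    # Defect: the time == 0 case is not handled, so executing this
    # function with time == 0 raises ZeroDivisionError (the failure).
    return distance / time


def compute_speed_fixed(distance: float, time: float) -> float:
    # Corrected version: the time <= 0 case is rejected explicitly,
    # removing the defect before it can surface as a failure.
    if time <= 0:
        raise ValueError("time must be positive")
    return distance / time


if __name__ == "__main__":
    try:
        compute_speed(100.0, 0.0)  # the latent defect manifests here
    except ZeroDivisionError as exc:
        print(f"Failure observed: {exc}")
```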
The reason software quality engineering distinguishes between these three concepts is to propose appropriate countermeasures at each stage. At the error stage, training and process improvements are needed to reduce human mistakes. At the defect stage, code reviews and static analysis can eliminate defects before they lead to failures. By clearly distinguishing each stage, it becomes possible to accurately identify the root cause of a problem and systematically establish preventive and corrective measures.
Common Misconceptions About Testing
There are three common misconceptions about software testing.
The first misconception is that testing proves the absence of defects. However, the fundamental purpose of testing is not to prove that there are no defects, but to discover as many unknown defects as possible. Testing is, in essence, an activity that demonstrates the existence of defects. To achieve this, testing should be performed by a third party rather than the developer who built the product, and it should be conducted across diverse environments. Testing across various operating systems such as Windows, iOS, and Android can uncover defects that the developer had not anticipated.
The second misconception is that testing is easy and that all defects can be found. If testing is viewed simply as checking outputs against inputs, it may appear straightforward. However, proper testing requires thorough planning, design, and analysis, as well as a deep understanding of the product under development. Therefore, testing is by no means an easy task, and testers must also possess sufficient knowledge and competence in development. Furthermore, finding all defects is practically impossible. In the era of artificial intelligence, programs can have hundreds of millions of parameters, making it infeasible to test every possible combination within a reasonable timeframe. For this reason, it is important to incorporate the concept of defect prevention from the outset. Through activities such as reviews conducted during the requirements analysis and design phases, defects should be prevented from propagating to subsequent stages before testing even begins.
The third misconception is that testing only needs to take place after the implementation, or coding, phase. From the perspective of the V-Model, however, testing should not begin after coding is complete. Rather, it must be initiated from the earliest stages of development, including requirements analysis and design. Test planning and preparation should begin before implementation starts, enabling quality to be managed systematically throughout the entire development lifecycle.
Test Process Details - 1
What is Systematic Testing?
To understand systematic testing, one must first think of the concept of PDCA. PDCA stands for Plan, Do, Check, and Act, and serves as the foundational framework for any organization to carry out its projects in a structured manner. Systematic testing refers to a state in which a process built upon this PDCA framework is established and followed throughout all testing activities.
1. Test Process from a PDCA Perspective
In the Plan phase, the overall test purpose, scope, schedule, and methods are established, and the features to be tested are selected. For example, in a shopping mall system, features such as member management and order history would be defined as test targets during this phase. Once the plan is established, test design follows, during which test cases are developed and test procedures are defined. From the perspective of the V-Model, test cases are derived based on requirements, architecture, and detailed design, meaning that test case development should begin immediately upon completion of the Plan phase.

The test environment must also be prepared at this stage. This includes not only the environment in which the software operates independently, but also the environment in which it interfaces with related hardware devices, and even the real-world operating environment: for instance, if the software is embedded in a vehicle, testing must be conducted under actual driving conditions. Testing across diverse environments is essential to uncovering unknown defects.

In particular, once coding is complete in the V-Model, there is likely insufficient time to plan and design tests, because the focus must shift to executing the actual tests on the running program. Therefore, it is essential to remember that test planning and design must be carried out in parallel with development in order to improve overall quality.
In the Do phase, the test cases developed during the Plan phase are actually executed and the test results are evaluated. This work can be automated using testing tools, which automatically assess results once test cases are provided as input.
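As a minimal sketch of what such automation might look like, assuming pytest as the tool and reusing the earlier speed example, the prepared test cases below are handed to the tool, which executes them and judges each one Pass or Fail automatically.

```python
# Minimal sketch of automated test execution with pytest (one possible tool).
# Each tuple plays the role of a prepared test case: inputs plus the expected
# result; the tool runs them all and reports Pass/Fail per case.
import pytest


def compute_speed(distance: float, time: float) -> float:
    if time <= 0:
        raise ValueError("time must be positive")
    return distance / time


@pytest.mark.parametrize(
    "distance, time, expected",
    [
        (100.0, 2.0, 50.0),
        (0.0, 5.0, 0.0),
        (90.0, 3.0, 30.0),
    ],
)
def test_compute_speed(distance, time, expected):
    assert compute_speed(distance, time) == expected


def test_compute_speed_rejects_zero_time():
    # The boundary case from the earlier example, now checked automatically.
    with pytest.raises(ValueError):
        compute_speed(100.0, 0.0)
```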
In the Check and Act phases, the test results are analyzed and their adequacy is assessed. Based on the analysis, countermeasures are established and corrective actions are taken. If the identified issues are determined to stem not from defects themselves but from problems in the process or development environment, corrective action recommendations are issued accordingly.
In conclusion, it is essential to remember that systematic testing is not merely about executing tests, but rather an activity in which test planning, design, execution, evaluation, and improvement are carried out organically across the entire PDCA cycle.
[Reference] ISO/IEC/IEEE 29119 Testing Standard
ISO/IEC/IEEE 29119 is the most representative international standard in the field of software testing, jointly established by three international organizations: ISO, IEC, and IEEE. This standard defines a multi-layer test process to ensure that testing is conducted systematically and correctly.
The key emphasis of this standard is that the test process should not be addressed solely at the project level, but that a test process and its foundations must first be established at the organizational level. In other words, the standard requires a top-down structure in which the process and infrastructure for effective testing are first defined at the organizational level, cascaded down to the project and task management level, and ultimately executed by test engineers in accordance with those established criteria.
Specifically, this standard defines a multi-layer test process consisting of four layers, each following a top-down structure in which direction and criteria are passed from higher to lower levels.
The first layer is the organizational test process. At this level, test policies and test strategies that apply across the entire organization are established. This serves as the highest-level foundation that sets the direction and criteria for all subsequent testing activities.
The second layer is the project-level test management process. Based on the policies and strategies established at the organizational level, a test management process is constructed at the project level. This process includes three key activities: test planning, monitoring and control, and test completion.
The third layer is the test management process. Building upon the project-level test management process, this layer involves establishing a more granular test management process broken down by task, phase, and test type. The same three activities (test planning, monitoring and control, and test completion) are applied at this level as well. Although the composition of activities is identical between the second and third layers, they are distinct stages that differ in their scope and level of application.
The fourth layer is the dynamic test execution process. This is the stage in which the software is actually executed and tested based on the criteria and processes passed down from the upper layers. To carry out testing effectively at this stage, test design and implementation must first be completed, followed by the setup and maintenance of the test environment, after which actual test execution takes place.
The key point is that testing should not be viewed merely as an execution activity. Rather, direction and criteria must first be defined and communicated at the organizational and project levels before testing begins. Subsequent chapters will examine in detail what each of the organizational, test management, and dynamic test layers specifically covers.
2. Organizational Test Process
The organizational test process sits at a higher level than the testing carried out at the project level, and represents the stage at which the direction and criteria for testing across the entire organization are defined. This process consists of three key activities.
The first activity is organizational test specification development. This involves developing an organizational test policy specification and an organizational test strategy specification based on the organization's test objectives. For example, if quality achievement targets are set in stages, this activity would involve concretely developing the policies and strategies needed to achieve 50% coverage at stage one, 30% at stage two, and 100% at stage three, broken down to the level of unit testing, integration testing, and so forth.
The second activity is monitoring and control of organizational test specification utilization. This involves monitoring whether the organizational test specifications developed in the first activity are being effectively applied across projects and tasks within the organization, and exercising control when they are not being properly followed. Policies and strategies established at the organizational level must be applied to all subordinate projects, and appropriate control measures must be taken when this is not the case.
The third activity is organizational test specification update. No matter how well-crafted the initial policies and strategies may be, issues are likely to surface when they are applied to real projects. For instance, if shortcomings in an existing policy are revealed during unit test execution, that policy must be revised and improved. The core of this activity lies in continuously incorporating feedback and results from the application of the specifications at the project and task level, and iteratively improving the organizational test policy and strategy specifications.
In conclusion, the organizational test process is not a one-time activity of establishing policies and strategies, but rather a cyclical process of monitoring whether the developed specifications are being correctly applied in practice, and continuously improving them based on the outcomes.
Example of Organizational Test Policy and Strategy Specifications
The organizational test policy and strategy specifications are structured hierarchically, beginning with the highest-level policy specification and cascading down to increasingly detailed strategy specifications.
At the highest level sits the organizational test policy specification. This document contains the broadest set of criteria applicable to all testing activities across the organization, and includes the test purpose, test process, test organization and roles, referenced test standards, test asset management and reuse methods, and policies for test process evaluation and improvement.
Below this sits the organizational test strategy specification. This layer defines more granular test strategies based on the policy specification, and includes risk management related to testing, test selection and prioritization, test documentation, configuration management, defect management, the use of automation tools, and individual test strategies related to performance and security testing.
The next layer defines strategies by specific test type. This is where decisions are made regarding which testing methods will be applied in practice, including unit testing strategy, integration testing strategy, and system testing strategy.
At the lowest layer sit the most granular strategies, including project-level test strategies and individual test-level test strategies. At this layer, specific strategies are defined that apply directly to particular projects or individual test units.
In conclusion, the organizational test policy and strategy specifications represent a system designed to ensure that consistent test criteria are communicated from the organizational policy specification, which captures the overall direction of the organization, all the way down to the project and individual test level through a progressively detailed hierarchical structure.
Test Process Details - 2
1. Test Management Process
Having previously examined the organizational test process, we established that test policies and strategies are formulated in the form of specifications. The test management process is the process by which these established strategies are reflected in the actual projects and individual tasks within the organization, and by which the testing activities carried out within them are systematically managed.
The concept of "management" here is directly linked to the PDCA framework discussed earlier. When PDCA is applied to the test management process, it is structured as follows. First, a test plan is established, corresponding to the Plan phase. Actual test execution is then carried out in the dynamic test process as the Do phase. Whether the results are proceeding appropriately is verified and acted upon in the test monitoring and control phase, corresponding to Check and Act. Finally, upon completion of testing, a test completion report is produced.
Furthermore, if changes arise during the monitoring and control phase, the test plan must also be continuously updated. A plan is not a fixed document once established; rather, it is a living document that must be consistently revised to reflect test execution results and changes in circumstances.
In conclusion, the test management process consists of test planning, test monitoring and control, and test completion, and is a PDCA-based management activity designed to ensure that the organization's test policies and strategies are systematically executed and managed at the project and task level.
Test Management Process: Detailed Composition
The test management process consists of three key activities: test planning, test monitoring and control, and test completion.
In the first activity, test planning, the scope and targets of testing are identified at the project and task level, and the test plan is established by taking as an input the test strategy defined in the organizational test process.
In the second activity, test monitoring and control, the execution of the dynamic test process is monitored based on the test plan, and the current state of testing is continuously tracked. If issues arise during execution, the testing activities must be appropriately controlled and necessary corrective actions must be taken.
In the third activity, test completion, the artifacts generated after testing is concluded are systematically managed. Since these artifacts may be reused in the future, they must be properly stored, and the test environment must also be organized with reusability in mind. Once these activities are completed, a final test closure report is produced.
Detailed Activities of Test Planning
Test planning is not merely a matter of setting a schedule; rather, it consists of the following highly systematic detailed activities.
The first activity is understanding the context. Before planning the tests, it is essential to first understand the overall situation, including the project's objectives, requirements, relevant stakeholders, and overall schedule. Through this understanding, the scope of testing is clarified and the direction for structuring the test plan is formulated. This process yields a preliminary test plan and development schedule.
The second activity is risk identification and analysis. Risk management is a critical element of project management. During the course of testing, a wide range of risk factors may arise, such as changes in requirements, shifts in priorities, or the replacement of personnel. These risks must be identified and analyzed in advance, and methods for mitigating them must be derived. The analyzed risks and their mitigation strategies are then incorporated into the test strategy design.
The third activity is test strategy design and resource determination. Based on the risk analysis, the test strategy is designed, and the human resources and detailed schedule to be allocated are determined accordingly.
The fourth activity is drafting the test plan. Based on the preceding activities, an initial draft of the test plan is produced. However, the plan at this stage is not yet a finalized document and must go through a review and consensus process involving the relevant stakeholders.
Finally, through the review and consensus process, an agreed-upon test plan is produced and shared with all relevant stakeholders.
In conclusion, it is essential to remember that test planning is not a simple preparatory step, but rather a highly systematic process that spans from understanding the context through risk analysis, strategy design, resource determination, plan drafting, and review and consensus.
Detailed Activities of Test Monitoring and Control
Test monitoring and control is the process of verifying that testing is proceeding correctly in accordance with the established test plan, and taking appropriate corrective action when necessary.
The input to this process is the test plan. Once the test plan is received, the setup required for monitoring and control is configured accordingly. The content of tests executed in the dynamic test execution phase is then compared against the plan, and test measurement is performed. Based on these measurement results, monitoring is carried out. During monitoring, the progress of testing is continuously tracked, and if any deviation from the plan is identified, control activities are initiated. For example, if unit testing has not been performed or integration testing has been omitted, appropriate corrective measures are taken for those activities that have deviated from the plan.
Reporting must also be carried out on these activities. A representative indicator for test status reporting is test case effectiveness, expressed as a percentage: the ratio of the number of defects found to the number of test cases executed. Similarly, requirements coverage can be measured as a percentage: if there are 100 requirements, the proportion of them exercised by the test cases provides insight into how well the product meets its requirements.
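Both indicators might be computed as in the following sketch; the counts used are invented illustration values, not data from any real project.

```python
# Hypothetical sketch of the two reporting indicators described above.

def test_case_effectiveness(defects_found: int, test_cases_executed: int) -> float:
    # Defects found per test case executed, as a percentage.
    return defects_found / test_cases_executed * 100


def requirements_coverage(requirements_tested: int, total_requirements: int) -> float:
    # Share of requirements exercised by at least one test case, as a percentage.
    return requirements_tested / total_requirements * 100


# Example: 12 defects found across 200 executed test cases -> 6.0%
print(f"Test case effectiveness: {test_case_effectiveness(12, 200):.1f}%")
# Example: 87 of 100 requirements covered by test cases -> 87.0%
print(f"Requirements coverage:   {requirements_coverage(87, 100):.1f}%")
```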
In conclusion, test monitoring and control is not merely about confirming whether testing has been completed, but rather a systematic process of quantitatively measuring and reporting test progress and quality satisfaction levels based on data.
Detailed Activities of Test Completion
Even after testing has been successfully conducted, the test completion phase involves a set of systematic preparatory activities for future testing.
The first activity is verification of the test asset repository. The assets generated during the testing process are reviewed and organized, with decisions made regarding where and how they will be stored. This is done to ensure that these assets can be reused in future testing efforts.
The second activity is test environment restoration. The environment that was configured for testing is restored to its original state so that it can be utilized again in future tests.
The third activity is retrospective review and lessons learned. The strengths and shortcomings of the current round of testing are reflected upon and documented, so that they can be used to improve the quality of future testing activities.
Finally, the test completion report is produced. The report includes a test summary, a comparison of planned versus actual results, test effectiveness metrics, and requirements satisfaction levels. Furthermore, since defects that were not discovered during testing may still remain in the software, the report must also address residual risk identification, post-test countermeasures, test artifact management, a list of reusable assets, and lessons learned, all compiled and managed in report form.
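One way to picture these contents is as a simple structure, sketched below; the field names and values are illustrative, not prescribed by any standard.

```python
# Hypothetical sketch of a test completion report as structured data.
from dataclasses import dataclass, field


@dataclass
class TestCompletionReport:
    summary: str
    planned_vs_actual: dict          # e.g. test cases planned vs. executed
    test_case_effectiveness: float   # percent
    requirements_coverage: float     # percent
    residual_risks: list = field(default_factory=list)
    post_test_countermeasures: list = field(default_factory=list)
    reusable_assets: list = field(default_factory=list)
    lessons_learned: list = field(default_factory=list)


report = TestCompletionReport(
    summary="Round 1 system testing complete",
    planned_vs_actual={"planned": 220, "executed": 200},
    test_case_effectiveness=6.0,
    requirements_coverage=87.0,
    residual_risks=["performance under peak load not yet exercised"],
)
print(f"{report.summary}: {report.requirements_coverage:.0f}% of requirements covered")
```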
In conclusion, the test completion phase is not simply about wrapping up testing, but consists of a highly systematic set of activities encompassing asset storage, environment restoration, retrospective review, and completion reporting. It is important to remember that the entire process covered so far (test planning, test monitoring and control, and test completion) constitutes a PDCA-based process for systematically managing testing.
2. Dynamic Test Process
Having examined test planning at the task level, we now turn to the dynamic test process: the series of activities carried out to actually execute testing based on that plan.
The first activity is test design and implementation. Test cases are developed based on requirements documents, and the test cases and procedures required for actual test execution are concretely developed at this stage.

The second activity is test environment setup and maintenance. The environment in which testing will actually take place is configured and maintained in accordance with the test environment requirements. The appropriate environment must be set up in advance, whether testing will be conducted in a PC environment, an actual vehicle environment, or otherwise, to suit the characteristics of the test target.

The third activity is test execution. Tests are executed based on the test specification. The manner in which test results are handled varies depending on whether the identified issue was previously known. If it is a known issue, it must be resolved, and the process of determining how to address the defect continues through test result reporting.
One important point concerns when the test design and environment configuration activities that precede the actual Do phase of test execution take place. It is essential to remember that when the requirements analysis and architecture design processes on the left side of the V-Model are being carried out, test design and environment configuration are also conducted in parallel. The core principle of the dynamic test process is that test preparation must begin from the earliest stages of development, not just immediately before test execution.
Dynamic Test Process: Detailed Composition
The dynamic test process consists of four activities: test design and implementation, test environment setup and maintenance, test execution, and test result reporting.
In the first activity, test design and implementation, test cases and test procedures are developed in accordance with the test scope and test strategy identified in the test plan. Specifically, this involves analyzing the test basis (the development artifacts used to conduct testing) and deriving test requirements, test conditions, test coverage criteria, and test cases.
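As an illustrative sketch, test cases derived from the test basis can be recorded as structured data that keeps traceability back to the originating requirement; the requirement text and IDs below are hypothetical.

```python
# Hypothetical sketch: test cases derived from a test basis, with traceability.
from dataclasses import dataclass


@dataclass
class TestCase:
    case_id: str
    requirement_id: str  # traceability back to the test basis
    condition: str       # the test condition this case exercises
    inputs: dict
    expected: str


# Illustrative requirement REQ-07: "Speed = Distance / Time; Time must be > 0."
derived_cases = [
    TestCase("TC-01", "REQ-07", "valid input", {"distance": 100.0, "time": 2.0}, "returns 50.0"),
    TestCase("TC-02", "REQ-07", "boundary: time == 0", {"distance": 100.0, "time": 0.0}, "input rejected"),
    TestCase("TC-03", "REQ-07", "invalid: negative time", {"distance": 100.0, "time": -1.0}, "input rejected"),
]

for tc in derived_cases:
    print(f"{tc.case_id} [{tc.requirement_id}] {tc.condition} -> {tc.expected}")
```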
In the second activity, test environment setup and maintenance, the environment and data required for actual test execution must be prepared. The test environment encompasses not only a standalone software environment such as a PC, but also embedded environments that include hardware, and even vehicle environments, depending on the characteristics of the test target.
In the third activity, test execution, actual tests are run using the previously developed test procedures, and the results of each test execution are recorded in the form of Pass or Fail.
In the fourth activity, test result reporting, defects are identified and recorded based on an analysis of the test execution results. This ensures that discovered defects are systematically managed and that the necessary follow-up actions can be initiated.
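A minimal sketch of these last two activities, using the same hypothetical cases: each execution is recorded as Pass or Fail, and every Fail becomes a defect record for follow-up action.

```python
# Hypothetical sketch: recording Pass/Fail verdicts and filing defect records.

def run_case(case_id, procedure, expected):
    # Execute one test procedure and record the verdict as Pass or Fail.
    try:
        actual = procedure()
        verdict = "Pass" if actual == expected else "Fail"
    except Exception as exc:  # an unexpected exception also fails the case
        actual, verdict = repr(exc), "Fail"
    return {"case_id": case_id, "expected": expected, "actual": actual, "verdict": verdict}


results = [
    run_case("TC-01", lambda: 100.0 / 2.0, 50.0),        # passes
    run_case("TC-02", lambda: 100.0 / 0.0, "rejected"),  # surfaces the defect
]

# Test result reporting: every Fail becomes a defect record for follow-up.
defects = [r for r in results if r["verdict"] == "Fail"]
for d in defects:
    print(f"DEFECT in {d['case_id']}: expected {d['expected']!r}, got {d['actual']!r}")
```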
[Reference] Test Basis: Concept and Examples
The test basis refers to the development artifacts required for conducting testing: the documents and materials that serve as the foundation for deriving test cases and test procedures.
Viewed through the lens of the V-Model, the development artifacts on the left side correspond to the test activities on the right side. The test basis utilized at each test level is as follows.
For unit testing, which corresponds to the detailed design phase, the detailed design document serves as the test basis. Since detailed design is concrete to a degree comparable to actual source code, the source code itself is also a representative test basis artifact for unit testing. Integration testing verifies whether the interfaces between individual modules are appropriate; accordingly, the architecture design document that forms the basis for this verification serves as the test basis. System testing uses the requirements specification, the output of requirements analysis, as its basis. Finally, since acceptance testing is the stage at which the highest-level requirements of the customer and end users are verified, the requirements definition document and use case definition document serve as the test basis.
In conclusion, the test basis comprises the artifacts that must be referenced at each test level in order to derive test cases and test procedures, and it is important to understand that the development artifacts on the left side of the V-Model directly provide the foundation for the test activities on the right.



