ISTQB Certified Tester Foundation Level (CTFL)
1 Introduction to Software Testing
1.1 Definition of Software Testing
1.2 Objectives of Software Testing
1.3 Principles of Software Testing
1.4 Testing Throughout the Software Development Lifecycle
1.5 Fundamental Test Process
1.6 The Psychology of Testing
1.7 Ethics and Professionalism in Software Testing
2 Testing Throughout the Software Development Lifecycle
2.1 Testing and the Software Development Lifecycle Models
2.2 Requirements Analysis and Test Planning
2.3 Test Design and Implementation
2.4 Test Execution and Evaluation
2.5 Test Closure and Reporting
2.6 Configuration Management in Testing
2.7 Risk and Testing
3 Static Techniques
3.1 Overview of Static Techniques
3.2 Reviews and Inspections
3.3 Static Analysis
3.4 Static Testing in the Software Development Lifecycle
4 Test Design Techniques
4.1 Overview of Test Design Techniques
4.2 Black-Box Test Design Techniques
4.2.1 Equivalence Partitioning
4.2.2 Boundary Value Analysis
4.2.3 Decision Table Testing
4.2.4 State Transition Testing
4.2.5 Use Case Testing
4.3 White-Box Test Design Techniques
4.3.1 Statement Testing
4.3.2 Decision Testing
4.3.3 Condition Testing
4.3.4 Path Testing
4.4 Experience-Based Test Design Techniques
4.4.1 Error Guessing
4.4.2 Exploratory Testing
5 Test Management
5.1 Test Organization and Roles
5.2 Test Planning and Estimation
5.3 Test Monitoring and Control
5.4 Test Closure Activities
5.5 Incident Management
5.6 Configuration Management in Testing
5.7 Risk and Testing
6 Tool Support for Testing
6.1 Overview of Test Tools
6.2 Categories of Test Tools
6.3 Selection and Evaluation of Test Tools
6.4 Implementation of Test Tools
6.5 Impact of Test Tools on the Organization
7 Improving the Testing Process
7.1 Overview of Process Improvement
7.2 Test Maturity Model Integration (TMMi)
7.3 Capability Maturity Model Integration (CMMI)
7.4 Key Performance Indicators (KPIs) in Testing
7.5 Continuous Improvement in Testing
8 Practical Software Testing
8.1 Overview of Practical Testing
8.2 Test Planning and Control in Practice
8.3 Test Design and Execution in Practice
8.4 Test Evaluation and Reporting in Practice
8.5 Incident Management in Practice
8.6 Test Tools in Practice
8.7 Continuous Improvement in Practice
9 Specialized Areas of Testing
9.1 Overview of Specialized Areas of Testing
9.2 Usability Testing
9.3 Performance Testing
9.4 Security Testing
9.5 Mobile Application Testing
9.6 Embedded Systems Testing
9.7 Agile Testing
10 Legal and Professional Issues
10.1 Overview of Legal and Professional Issues
10.2 Software Testing Standards
10.3 Ethical Considerations in Software Testing
10.4 Legal Considerations in Software Testing
10.5 Professionalism in Software Testing
Boundary Value Analysis

Boundary Value Analysis (BVA) is a software testing technique that focuses on the boundaries of input ranges, because that is where defects are most likely to occur. BVA extends Equivalence Partitioning, which divides input data into partitions whose members are expected to be processed in the same way. Whereas Equivalence Partitioning selects a representative value from each partition, BVA specifically targets the values at the edges of those partitions.

Key Concepts

  1. Boundary Values
  2. Valid and Invalid Boundaries
  3. Min, Max, Just Inside, and Just Outside
  4. Application in Testing

1. Boundary Values

Boundary Values are the specific values that lie at the edges of input ranges. These values are critical because they are where errors are most likely to occur. For example, if an input range is from 1 to 100, the boundary values would be 1 and 100.

2. Valid and Invalid Boundaries

Valid Boundaries are the values that lie within the acceptable range of input values. Invalid Boundaries are the values that lie just outside the acceptable range. For instance, if the valid range is 1 to 100, the valid boundaries are 1 and 100, while the invalid boundaries are 0 and 101.

3. Min, Max, Just Inside, and Just Outside

In BVA, testers often consider four key values around each boundary:

        Min: the smallest valid input (e.g., 1 for a range of 1 to 100)
        Max: the largest valid input (e.g., 100)
        Just Inside: values one step inside a boundary (e.g., 2 and 99)
        Just Outside: values one step outside a boundary (e.g., 0 and 101)

4. Application in Testing

BVA is applied by designing test cases that include these boundary values. This ensures that the software behaves correctly at the limits of its input ranges. For example, if a function accepts an integer between 1 and 100, the test cases would include:

        Test Case 1: Input = 1 (Min)
        Test Case 2: Input = 100 (Max)
        Test Case 3: Input = 2 (Just Inside Min)
        Test Case 4: Input = 99 (Just Inside Max)
        Test Case 5: Input = 0 (Just Outside Min)
        Test Case 6: Input = 101 (Just Outside Max)
    
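The six test cases above can be sketched as a short script. This is a minimal illustration, not part of the syllabus: the validator `is_valid` is a hypothetical function assumed to accept integers from 1 to 100.

```python
def is_valid(value):
    """Hypothetical validator: accepts integers in the range 1 to 100."""
    return 1 <= value <= 100

# Boundary-value test cases for the 1..100 range.
test_cases = [
    (1,   True),   # Test Case 1: Min
    (100, True),   # Test Case 2: Max
    (2,   True),   # Test Case 3: Just Inside Min
    (99,  True),   # Test Case 4: Just Inside Max
    (0,   False),  # Test Case 5: Just Outside Min
    (101, False),  # Test Case 6: Just Outside Max
]

for value, expected in test_cases:
    assert is_valid(value) == expected, f"boundary check failed for {value}"
print("All boundary-value test cases passed.")
```

A defect such as writing `1 < value` instead of `1 <= value` would be caught immediately by Test Case 1, which is exactly the kind of off-by-one error BVA is designed to expose.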

By focusing on these boundary values, testers can identify defects that might not be caught by testing random values within the range. This method is particularly effective in finding issues related to input validation and error handling.

For instance, consider a function that calculates a discount based on the number of items purchased. The discount rules apply to three ranges of item counts:

        1st Range: 1 to 10 items
        2nd Range: 11 to 20 items
        3rd Range: 21 or more items

Using BVA, the test cases would include:

        Test Case 1: Items = 0 (Just Outside Min of 1st Range)
        Test Case 2: Items = 1 (Min of 1st Range)
        Test Case 3: Items = 10 (Max of 1st Range)
        Test Case 4: Items = 11 (Min of 2nd Range)
        Test Case 5: Items = 20 (Max of 2nd Range)
        Test Case 6: Items = 21 (Min of 3rd Range)
    

By testing these boundary values, testers can ensure that the discount calculation is correct at the edges of each range, where errors are most likely to occur.
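The discount example can be sketched in code as well. The source does not state the actual discount percentages, so the rates below (0%, 5%, and 10%) are illustrative assumptions; only the range boundaries come from the test cases above.

```python
def discount_rate(items):
    """Hypothetical discount function; the 0.00/0.05/0.10 rates are assumed."""
    if items < 1:
        raise ValueError("items must be at least 1")
    if items <= 10:    # 1st range: 1..10
        return 0.00
    if items <= 20:    # 2nd range: 11..20
        return 0.05
    return 0.10        # 3rd range: 21 or more

# Boundary values: the min and max of each range, plus the value
# just outside the first range (0), which must be rejected.
expected = {1: 0.00, 10: 0.00, 11: 0.05, 20: 0.05, 21: 0.10}
for items, rate in expected.items():
    assert discount_rate(items) == rate, f"wrong rate at boundary {items}"

try:
    discount_rate(0)       # Just Outside Min of 1st range
except ValueError:
    pass                   # rejected, as required
```

Note that a misplaced comparison (for example, `items < 10` instead of `items <= 10`) would change the rate returned at exactly one boundary value, which is why each range's min and max appear in the test set.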