International Software Testing Qualifications Board

Certified Tester Foundation Level Syllabus

  • Alexandra Arakelova has quoted 7 years ago
    Compared to dynamic testing, static techniques find causes of failures (defects) rather than the failures themselves.
    Typical defects that are easier to find in reviews than in dynamic testing include: deviations from standards, requirement defects, design defects, insufficient maintainability and incorrect interface specifications.
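To make the distinction concrete, here is a hypothetical Python snippet containing defects of exactly the kinds listed above. Nothing in it fails when executed, so dynamic testing may never expose these defects, while a review catches them directly; all names and the coding standard are invented for illustration.

```python
# Hypothetical snippet: defects a review finds more easily than dynamic testing.

def apply_discount(price: float, percent: float) -> float:
    """Return the price reduced by `percent`, where `percent` is 0-100."""
    # Incorrect interface specification: the docstring promises a 0-100
    # percentage, but the code treats `percent` as a 0-1 fraction.
    return price * (1 - percent)

def shipping_fee(weight_kg: float) -> float:
    # Deviation from an (assumed) coding standard: unexplained magic
    # numbers instead of named constants; hurts maintainability, never crashes.
    return 4.99 if weight_kg < 2.5 else 9.99
```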
  • Alexandra Arakelova has quoted 7 years ago
    Can explain the similarities and differences between integration and system testing (see the sketch after this list):
    ○ Similarities: testing more than one component, and can test non-functional aspects
    ○ Differences: integration testing concentrates on interfaces and interactions, and system testing concentrates on whole-system aspects, such as end-to-end processing
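A minimal sketch of the difference using Python's unittest; the two components (parse_order, price_order) are invented for illustration:

```python
import unittest

# Hypothetical components of a small order-handling pipeline.
def parse_order(raw: str) -> dict:
    item, qty = raw.split(",")
    return {"item": item, "qty": int(qty)}

def price_order(order: dict, unit_price: float = 2.0) -> float:
    return order["qty"] * unit_price

class IntegrationTest(unittest.TestCase):
    def test_parser_output_feeds_pricer(self):
        # Integration focus: the interface and interaction between the
        # two components - does price_order accept what parse_order emits?
        self.assertEqual(price_order(parse_order("apple,3")), 6.0)

class SystemTest(unittest.TestCase):
    def test_end_to_end_order_total(self):
        # System focus: whole-system behavior from raw input to final
        # result (end-to-end processing), not any single interface.
        self.assertEqual(price_order(parse_order("apple,3"), unit_price=1.5), 4.5)

if __name__ == "__main__":
    unittest.main()
```

Both tests exercise more than one component (the similarity); the comments mark where the focus differs.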
  • Alexandra Arakelova has quoted 7 years ago
    [IEEE Std 829-1998] IEEE Std 829™ (1998) IEEE Standard for Software Test Documentation, See Sections 2.3, 2.4, 4.1, 5.2, 5.3, 5.5, 5.6
    [IEEE 1028] IEEE Std 1028™ (2008) IEEE Standard for Software Reviews and Audits, See Section 3.2
    [IEEE 12207] IEEE 12207/ISO/IEC 12207-2008, Software life cycle processes, See Section 2.1
    [ISO 9126] ISO/IEC 9126-1:2001, Software Engineering – Software Product Quality, See Section 2.3
  • Alexandra Arakelova has quoted 7 years ago
    Dynamic Analysis Tools (D)
    Dynamic analysis tools find defects that are evident only when software is executing, such as time dependencies or memory leaks. They are typically used in component and component integration testing, and when testing middleware.
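As a rough illustration, Python's standard tracemalloc module can serve as a simple dynamic analysis aid; the ever-growing _cache below is an invented defect that is evident only while the code executes:

```python
import tracemalloc

_cache = []  # hypothetical leaky component: entries are never evicted

def handle_request(payload: bytes) -> None:
    _cache.append(payload)  # the leak

tracemalloc.start()
before = tracemalloc.take_snapshot()

for _ in range(10_000):
    handle_request(b"x" * 1024)

after = tracemalloc.take_snapshot()
# The statistics diff points at the allocation site responsible for growth.
for stat in after.compare_to(before, "lineno")[:3]:
    print(stat)
```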
    Performance Testing/Load Testing/Stress Testing Tools
    Performance testing tools monitor and report on how a system behaves under a variety of simulated usage conditions: the number of concurrent users, their ramp-up pattern, and the frequency and relative percentage of transactions. Load is simulated by creating virtual users that carry out a selected set of transactions, spread across various test machines commonly known as load generators.
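A minimal sketch of that mechanism using only Python threads; the transaction body, user counts and ramp-up rate are placeholders, not a real load-testing tool:

```python
import threading
import time

def virtual_user(user_id: int, transactions: int = 5) -> None:
    for _ in range(transactions):
        time.sleep(0.01)  # placeholder for a real transaction against the system under test

def ramp_up(total_users: int, users_per_second: int) -> None:
    threads = []
    for uid in range(total_users):
        t = threading.Thread(target=virtual_user, args=(uid,))
        t.start()                         # one more concurrent virtual user
        threads.append(t)
        time.sleep(1 / users_per_second)  # the ramp-up pattern
    for t in threads:
        t.join()

ramp_up(total_users=20, users_per_second=5)
```

In a real tool the virtual users would be distributed across several load generators; here they share one machine for simplicity.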
    Monitoring Tools
    Monitoring tools continuously analyze, verify and report on usage of specific system resources, and give warnings of possible service problems.
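A toy monitoring loop in the same spirit, sampling one specific resource (disk space) via Python's shutil; the 90% threshold and sampling rate are assumed policies:

```python
import shutil
import time

THRESHOLD = 0.90  # assumed policy: warn above 90% usage

def monitor_disk(path: str = "/", samples: int = 3, interval_s: float = 1.0) -> None:
    for _ in range(samples):  # a real monitor would run continuously
        usage = shutil.disk_usage(path)
        used_fraction = usage.used / usage.total
        if used_fraction > THRESHOLD:
            print(f"WARNING: {path} is {used_fraction:.0%} full")
        time.sleep(interval_s)

monitor_disk()
```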
  • Alexandra Arakelova has quoted 7 years ago
    Product Risks (K2)
    Potential failure areas (adverse future events or hazards) in the software or system are known as product risks, as they are a risk to the quality of the product. These include:
    ○ Failure-prone software delivered
    ○ The potential that the software/hardware could cause harm to an individual or company
    ○ Poor software characteristics (e.g., functionality, reliability, usability and performance)
    ○ Poor data integrity and quality (e.g., data migration issues, data conversion problems, data transport problems, violation of data standards)
    ○ Software that does not perform its intended functions
    Risks are used to decide where to start testing and where to test more; testing is used to reduce the risk of an adverse effect occurring, or to reduce the impact of an adverse effect.
    Product risks are a special type of risk to the success of a project. Testing as a risk-control activity provides feedback about the residual risk by measuring the effectiveness of critical defect removal and of contingency plans.
    A risk-based approach to testing provides proactive opportunities to reduce the levels of product risk, starting in the initial stages of a project. It involves the identification of product risks and their use in guiding test planning and control, specification, preparation and execution of tests. In a risk-based approach the risks identified may be used to:
    ○ Determine the test techniques to be employed
    ○ Determine the extent of testing to be carried out
    ○ Prioritize testing in an attempt to find the critical defects as early as possible (see the prioritization sketch below)
    ○ Determine whether any non-testing activities could be employed to reduce risk (e.g., providing training to inexperienced designers)
    Risk-based testing draws on the collective knowledge and insight of the project stakeholders to determine the risks and the levels of testing required to address those risks.
    To ensure that the chance of a product failure is minimized, risk management activities provide a disciplined approach to:
    ○ Assess (and reassess on a regular basis) what can go wrong (risks)
    ○ Determine what risks are important to deal with
    ○ Implement actions to deal with those risks
    In addition, testing may support the identification of new risks, may help to determine what risks should be reduced, and may lower uncertainty about risks.
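One common way to turn identified product risks into test priorities is to score each risk's likelihood and impact and rank by their product. A minimal sketch; the product areas and 1-5 scores are invented for illustration:

```python
# Risk level = likelihood x impact is a widely used heuristic;
# the data below is made up.
product_risks = [
    {"area": "payment processing", "likelihood": 4, "impact": 5},
    {"area": "data migration",     "likelihood": 5, "impact": 4},
    {"area": "report layout",      "likelihood": 3, "impact": 1},
    {"area": "user preferences",   "likelihood": 2, "impact": 2},
]

for risk in product_risks:
    risk["level"] = risk["likelihood"] * risk["impact"]

# Test the highest-risk areas first, and give them the most effort.
for risk in sorted(product_risks, key=lambda r: r["level"], reverse=True):
    print(f'{risk["area"]:<20} risk level {risk["level"]}')
```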
  • Alexandra Arakelova has quoted 7 years ago
    The ‘Standard for Software Test Documentation’ (IEEE Std 829-1998) outline for test plans requires risks and contingencies to be stated.
  • Alexandra Arakelova has quoted 7 years ago
    Project Risks (K2)
    Project risks are the risks that surround the project’s capability to deliver its objectives, such as:
    ○ Organizational factors:
      – Skill, training and staff shortages
      – Personnel issues
      – Political issues, such as:
        – Problems with testers communicating their needs and test results
        – Failure by the team to follow up on information found in testing and reviews (e.g., not improving development and testing practices)
        – Improper attitude toward or expectations of testing (e.g., not appreciating the value of finding defects during testing)
    ○ Technical issues:
      – Problems in defining the right requirements
      – The extent to which requirements cannot be met given existing constraints
      – Test environment not ready on time
      – Late data conversion and migration planning, and late development and testing of data conversion/migration tools
      – Low quality of the design, code, configuration data, test data and tests
    ○ Supplier issues:
      – Failure of a third party
      – Contractual issues
  • Alexandra Arakelova has quoted 7 years ago
    For testing, configuration management may involve ensuring the following:
    ○ All items of testware are identified, version controlled, tracked for changes, related to each other and related to development items (test objects) so that traceability can be maintained throughout the test process
    ○ All identified documents and software items are referenced unambiguously in test documentation
    For the tester, configuration management helps to uniquely identify (and to reproduce) the tested item, test documents, the tests and the test harness(es).
    During test planning, the configuration management procedures and infrastructure (tools) should be chosen, documented and implemented.
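A minimal sketch of what such traceability can look like; the identifiers, versions and fields are invented:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class TestwareItem:
    item_id: str      # unambiguous reference used in test documentation
    version: str      # version controlled and tracked for changes
    test_object: str  # the development item this testware relates to

registry = [
    TestwareItem("TC-017", "v1.3", "billing-service@2.4.0"),
    TestwareItem("TD-002", "v0.9", "billing-service@2.4.0"),
]

def items_for(test_object: str) -> list:
    # Reproduce exactly which testware versions exercised a test object.
    return [i for i in registry if i.test_object == test_object]

print(items_for("billing-service@2.4.0"))
```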
  • Alexandra Arakelova has quoted 7 years ago
    Test Control (K2)
    Test control describes any guiding or corrective actions taken as a result of information and metrics gathered and reported. Actions may cover any test activity and may affect any other software life cycle activity or task.
    Examples of test control actions include (one is sketched after this list):
    ○ Making decisions based on information from test monitoring
    ○ Re-prioritizing tests when an identified risk occurs (e.g., software delivered late)
    ○ Changing the test schedule due to availability or unavailability of a test environment
    ○ Setting an entry criterion requiring fixes to have been re-tested (confirmation tested) by a developer before accepting them into a build
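The last action above can be sketched as a simple gate; the defect IDs and status flag are invented:

```python
# Entry criterion: only fixes that a developer has confirmation tested
# are accepted into the build. All data here is made up.
fixes = [
    {"defect": "DEF-101", "confirmation_tested": True},
    {"defect": "DEF-102", "confirmation_tested": False},
]

def accept_into_build(candidate_fixes: list) -> list:
    accepted, rejected = [], []
    for fix in candidate_fixes:
        (accepted if fix["confirmation_tested"] else rejected).append(fix["defect"])
    if rejected:
        print("Entry criterion not met for:", ", ".join(rejected))
    return accepted

print("Accepted into build:", accept_into_build(fixes))
```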
  • Alexandra Arakelova has quoted 7 years ago
    Test Reporting (K2)
    Test reporting is concerned with summarizing information about the testing endeavor, including:
    ○ What happened during a period of testing, such as dates when exit criteria were met
    ○ Analyzed information and metrics to support recommendations and decisions about future actions, such as an assessment of defects remaining, the economic benefit of continued testing, outstanding risks, and the level of confidence in the tested software
    The outline of a test summary report is given in ‘Standard for Software Test Documentation’ (IEEE Std 829-1998).
    Metrics should be collected during and at the end of a test level (a minimal metrics sketch follows this list) in order to assess:
    ○ The adequacy of the test objectives for that test level
    ○ The adequacy of the test approaches taken
    ○ The effectiveness of the testing with respect to the objectives
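A minimal sketch of such end-of-level metrics; the figures and exit-criteria thresholds are assumed for illustration:

```python
# Invented test-summary figures for one test level.
results = {"planned": 120, "run": 110, "passed": 99,
           "defects_found": 25, "defects_fixed": 21}

execution_rate = results["run"] / results["planned"]
pass_rate = results["passed"] / results["run"]
open_defects = results["defects_found"] - results["defects_fixed"]

print(f"Execution rate: {execution_rate:.0%}")  # adequacy of the approach taken
print(f"Pass rate:      {pass_rate:.0%}")       # effectiveness vs. objectives
print(f"Open defects:   {open_defects}")        # residual risk indicator

# Assumed exit criteria: 95% of planned tests executed, 90% of run tests passed.
print("Exit criteria met:", execution_rate >= 0.95 and pass_rate >= 0.90)
```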