ISTQB Sample Paper 2

Q.1: What would be the appropriate result of a Stress test at its peak?
a. A gradual performance slow-down leading to a non-catastrophic system halt
b. A gradual performance improvement leading to a catastrophic system halt
c. A gradual performance slow-down leading to a catastrophic system halt
d. A gradual performance improvement to a non-catastrophic system halt

Q.2: Which of the following statements is incorrect in relation to Code coverage?
a. It describes the degree to which the source code of a program has been tested
b. It is a form of White-box testing
c. It is a form of Black-box testing

Q.3: Which of the following resources are tested by most of the Stress testing tools?
a. Disk space
b. Network bandwidth
c. Internal memory
d. All of the above

Q.4: Which of the following is not a part of System testing?

a. Recovery testing and failover testing
b. Performance, Load and Stress testing
c. Usability testing
d. Top-down integration testing

Q.5: Which of the following statements about Equivalence partitioning is correct?
a. A software testing design technique in which tests are designed to include representatives of boundary values
b. A type of software testing used for testing two or more modules or functions together with the intent of finding interface defects between the modules or functions
c. It is the degree to which the source code of a program has been tested
d. A software testing technique that divides the input data of a software unit into partitions of data from which test cases can be derived

Q.6: Which of the following is not a Static testing methodology?
a. Code review
b. Inspection
c. Walkthroughs
d. System tests

Q.7: What do you understand by the term "Monkey test"?
a. It is random testing performed by automated testing tools
b. It is used to simulate the actions a user might perform
c. It is another name for Monitor testing

Q.8: From the following options, choose the best example which represents a reliability failure for an application:

a. Slow response time
b. Excessive application consumption
c. Random application termination
d. Failure to encrypt data

Q.9: Choose the correct description of a Stub in software testing:

a. A Stub is basically the same as a driver except that it is very fast in returning results
b. A Stub is a dummy procedure, module or unit that stands in for an unfinished portion of a system
c. A Stub is just a different name for an Emulator
d. None of the above

Q.10: Beta testing is performed by:

a. An independent test team
b. The software development team
c. In-house users
d. External users

Q.11: The main focus of Black-box testing is:
a. to check for logical errors
b. to ensure that each code statement is executed once
c. to test the functionality of the system as a whole
d. to identify all paths through the software

Q.12: What is Boundary value testing?
a. It tests values at and near the minimum and maximum allowed values for a particular input or output
b. It tests different combinations of input circumstances
c. It is a testing technique associated with White-box testing
d. Both a and b

Q.13: Identification of set-use pairs is accomplished during which of the following static analysis activities?
a. Control flow analysis
b. Data flow analysis
c. Coding standards analysis
d. Function Point Analysis


Q.14: Which one of the following is a major benefit of verification early in the software development life cycle?
a. It allows the identification of changes in user requirements
b. It facilitates timely set up of the test environment
c. It reduces defect multiplication
d. It allows testers to become involved early in the project

Q.15: The testing performed by external organizations or standards bodies to give greater guarantees of compliance is called:
a. Usability testing
b. Conformance testing
c. Integration testing
d. System testing


Q.16: In which of the following testing methodologies does the test case generation use the system model?
a. Repetitive testing
b. Model testing
c. Modular testing
d. System testing

Q.17: The testing phase in which individual software modules are combined and tested as a group is called:
a. Unit testing
b. Integration testing
c. Module testing
d. White-box testing

Q.18: In which of the following testing methodologies does the automatic generation of efficient test procedures/vectors use models of system requirements and specified functionality?
a. Repetitive testing
b. Model testing
c. Modular testing
d. System testing



Remote Access to a MySQL Database Server on Windows

1. Navigate to the MySQL Server bin folder in a Command Prompt
2. Log in to the MySQL Server
mysql -u[username] -p[password]
Example: mysql -uroot -psdsl
3. Grant remote access to the MySQL Server
mysql> GRANT ALL PRIVILEGES ON *.* TO 'username'@'%' IDENTIFIED BY 'mysql_password' WITH GRANT OPTION;
Example: mysql> GRANT ALL PRIVILEGES ON *.* TO 'root'@'%' IDENTIFIED BY 'sdsl' WITH GRANT OPTION;
4. Execute FLUSH PRIVILEGES to reload the grant tables
mysql> FLUSH PRIVILEGES;
5. Log out from the MySQL Server
mysql> exit;
6. Check that remote access is enabled
mysql -u[username] -p[password] -h[mysql_server_ip]
Example: mysql -uroot -psdsl -h192.168.1.96

Introducing Software Test

Overview:
 1. Introduction to Software Testing
 2. Relevance of Software Testing
 3. Testing Process Overview / Software Testing Life Cycle (STLC)
 4. The V Model
 5. Verification & Validation
 6. Test Levels / Types of Testing
 7. Career Path and Test Engineer Capabilities
 8. Test Plan
 9. Test Case
 10. Defect Management


1. Introduction to Software Testing

1.1. Definition of Software Testing:
  • Testing is the process of identifying defects, where a defect is any variance between actual and expected results.
  • Testing is the process of trying to discover every conceivable fault or weakness in a work product.
  • Software testing is operating the software under controlled conditions,
    (1) to verify that it behaves “as specified”;
    (2) to detect errors; and
    (3) to validate that what has been specified is what the user actually wanted.
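The "variance between actual and expected results" idea above can be shown with a minimal check. This is a sketch only; the `add` function and the pass/defect labels are invented for illustration:

```python
def add(a, b):
    # Unit under test (a correct implementation, used for illustration)
    return a + b

def run_check(actual, expected):
    # A defect is any variance between actual and expected results
    return "pass" if actual == expected else "defect"

print(run_check(add(2, 3), 5))   # expectation matches the specification
print(run_check(add(2, 3), 6))   # variance between actual and expected
```

The second check reports a defect not because `add` is wrong but because the expected result differs from the actual one, which is exactly the variance the definition describes.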

2. Relevance of Software Testing

2.1. The Purpose of Testing:
The purpose of testing is verification, validation and error detection in order to find problems, and the purpose of finding those problems is to get them fixed.
  • The product/project should meet the requirements.
  • To ensure a quality product which is reasonably bug-free.
  • To satisfy the end user.

2.2 Why is Testing Needed?




3. Testing Process  Overview

3.1 Software Testing Life Cycle (STLC) :



3.2 Requirement Analysis:
  • During this phase, the test team studies the requirements from a testing point of view to identify the testable requirements.
  • The QA team may interact with various stakeholders (client, business analyst, technical leads, system architects, etc.) to understand the requirements in detail.
  • Requirements could be either functional (defining what the software must do) or non-functional (defining system performance, security, availability). Automation feasibility for the given testing project is also assessed in this stage.
Activities:
  1. Identify types of tests to be performed.
  2. Gather details about testing priorities and focus.
  3. Automation feasibility analysis (if required).
  4. Prepare Requirement Traceability Matrix (RTM).
  5. Identify test environment details where testing is supposed to be carried out.
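The Requirement Traceability Matrix from activity 4 can be sketched as a simple mapping from requirement IDs to the test cases that cover them. All IDs below are hypothetical:

```python
# Requirement Traceability Matrix (RTM):
# requirement ID -> list of covering test case IDs (sample data)
rtm = {
    "REQ-001": ["TC-001", "TC-002"],
    "REQ-002": ["TC-003"],
    "REQ-003": [],  # no test case yet -- a coverage gap
}

def uncovered(rtm):
    # Requirements with no covering test case are gaps
    # that requirement analysis should surface early
    return [req for req, tcs in rtm.items() if not tcs]

print(uncovered(rtm))
```

In practice an RTM is usually kept as a spreadsheet, but the underlying check is the same: every requirement should map to at least one test case.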

3.3 Test Planning:
This phase is also called the Test Strategy phase. Typically, in this stage, a senior QA manager will determine effort and cost estimates for the project and will prepare and finalize the Test Plan.
 Activities:
  1. Preparation of test plan/strategy document for various types of testing
  2. Test tool selection
  3. Test effort estimation
  4. Resource planning and determining roles and responsibilities.
  5. Training requirement
3.4 Test Case Development:
This phase involves the creation, verification and rework of test cases and test scripts. Test data is identified or created, reviewed, and then reworked as well.

Activities:
  1. Create test cases, automation scripts (if applicable)
  2. Review and baseline test cases and scripts
  3. Create test data (If Test Environment is available)

3.5 Test Environment Setup:
The test environment determines the software and hardware conditions under which a work product is tested. Test environment set-up is one of the critical aspects of the testing process and can be done in parallel with the Test Case Development stage. The test team may not be involved in this activity if the customer/development team provides the test environment; in that case, the test team is required to do a readiness check (smoke testing) of the given environment.

Activities:
  1. Understand the required architecture, environment set-up and prepare hardware and software requirement list for the Test Environment.
  2. Setup test Environment and test data
  3. Perform smoke test on the build

3.6 Test Execution:
During this phase the test team will carry out the testing based on the test plans and the test cases prepared. Defects will be reported back to the development team for correction, and retesting will be performed.

Activities:
  1. Execute tests as per plan
  2. Document test results, and log defects for failed cases
  3. Map defects to test cases in RTM
  4. Retest the defect fixes
  5. Track the defects to closure
3.7 Test Cycle Closure:
The testing team will meet, discuss and analyze testing artifacts to identify strategies to implement in the future, taking lessons from the current test cycle. The idea is to remove process bottlenecks for future test cycles and to share best practices for similar projects.

Activities:
  1. Evaluate cycle completion criteria based on time, test coverage, cost, software quality and critical business objectives
  2. Document the learning out of the project
  3. Prepare Test closure report
  4. Qualitative and quantitative reporting of quality of the work product to the customer.
  5. Test result analysis to find out the defect distribution by type and severity.
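The defect-distribution analysis in activity 5 can be sketched with a `Counter` over logged defect records. The sample records below are invented for illustration:

```python
from collections import Counter

# Each defect record carries a type and a severity (sample data)
defects = [
    {"type": "UI", "severity": "low"},
    {"type": "logic", "severity": "high"},
    {"type": "logic", "severity": "medium"},
    {"type": "data", "severity": "high"},
]

# Defect distribution by type and by severity
by_type = Counter(d["type"] for d in defects)
by_severity = Counter(d["severity"] for d in defects)

print(by_type)
print(by_severity)
```

The resulting counts feed directly into the test closure report, showing where defects cluster and how severe they are.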

4. The V Model 


5. Verification & Validation

5.1 Verification:
Verification is the checking or testing of items, including software, for conformance and consistency by evaluating the results against pre-specified requirements.
[Verification: Are we building the system right?]

5.2 Validation :
Validation looks at system correctness – i.e. it is the process of checking that what has been specified is what the user actually wanted.
[Validation: Are we building the right system?]

5.3 Verification vs Validation :
Verification: Are we building the system right?
Validation: Are we building the right system?

In other words, validation checks to see if we are building what the customer wants/needs, and verification checks to see if we are building that system correctly.


6. Test Levels

6.1 Types  of  Testing:

Major types of testing:
Unit Testing
Integration Testing
System testing
Regression testing
Acceptance testing

6.2 Description of the Types of Testing:

6.2.1 Unit Testing:
        Testing the functionality of a specific section of code, usually at the function level. These types of tests are usually executed by developers as they work on code (white-box style), to ensure that the specific function is working as expected. Unit testing is also called component testing.
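A minimal unit test in Python's `unittest` style, exercising a single function as described above. The `net_pay` function and its rules are hypothetical, invented for this sketch:

```python
import unittest

def net_pay(gross, tax_rate):
    # Function under test: deduct tax from gross pay
    if not 0 <= tax_rate <= 1:
        raise ValueError("tax_rate must be between 0 and 1")
    return gross * (1 - tax_rate)

class NetPayTest(unittest.TestCase):
    # Unit tests at the function level, written by the developer
    def test_typical_deduction(self):
        self.assertEqual(net_pay(1000, 0.2), 800)

    def test_zero_tax(self):
        self.assertEqual(net_pay(1000, 0), 1000)

    def test_invalid_rate_rejected(self):
        with self.assertRaises(ValueError):
            net_pay(1000, 1.5)
```

Saved as a module, the tests run with `python -m unittest <module>`; each test method checks one expected behavior of the unit in isolation.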

6.2.2 Integration Testing:
        Testing two or more modules or functions together with the intent of finding interface defects between the integrated components (modules).
There are two approaches to integration testing:
Top-down approach
Bottom-up approach

6.2.2.1 Top-down approach:
    Start from the top-level module, integrate individual components, and stub out the rest of the system

6.2.2.2 Bottom-up approach:
    Integrate individual components starting from the lower-level modules until the whole system is integrated.
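The top-down approach can be sketched with a stub standing in for an unfinished lower-level module. The module names and the canned value are hypothetical:

```python
class TaxServiceStub:
    # Stub: a dummy unit standing in for the unfinished tax module;
    # it returns a canned value instead of real tax logic
    def tax_for(self, gross):
        return 100.0

class Payroll:
    # Top-level module under test, integrated against the stub
    def __init__(self, tax_service):
        self.tax_service = tax_service

    def net_pay(self, gross):
        return gross - self.tax_service.tax_for(gross)

payroll = Payroll(TaxServiceStub())
print(payroll.net_pay(1000.0))  # 900.0 with the canned tax value
```

Because the stub honors the same interface the real tax module will expose, the top-level `Payroll` logic and the interface between the two can be tested before the lower module exists.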


6.2.3 System Testing:
  • System testing is typically “black-box” testing, where the system is treated as a black box and tested based only on its functionality
  • Test cases are based on the overall system requirement specification
  • It is a critical part of the testing process, as the system is verified as a whole

6.2.4 Regression Testing:
  • Re-testing after fixes or modifications of the software or its environment.
  • It can be difficult to determine how much re-testing is needed, especially near the end of the development cycle.
  • Automated testing tools can be especially useful for this type of testing.
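Automated regression testing amounts to re-running the same suite of checks after every fix or modification. A minimal sketch, with hypothetical checks standing in for a real suite:

```python
def regression_suite():
    # The same named checks are re-run unchanged after every
    # fix or modification; any failure is a regression
    checks = [
        ("addition still works", lambda: 2 + 2 == 4),
        ("uppercasing still works", lambda: "ab".upper() == "AB"),
    ]
    return [name for name, check in checks if not check()]

failures = regression_suite()
print(failures)  # an empty list means no regressions detected
```

Because the suite is stable and cheap to re-run, it answers the "how much re-testing is needed" question automatically: all of it, every time.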

6.2.5 Acceptance Testing:
  • User Acceptance Testing (UAT) is a process of obtaining confirmation that a system meets mutually agreed-upon requirements.
  • Final testing based on specifications of the end-user or customer, or based on use by end-users/customers over some limited period of time.

6.2.6 Performance testing :

  • Performance testing is, in general, testing performed to determine how a system performs in terms of responsiveness and stability under a particular workload.
  • It can also serve to investigate, measure, validate or verify other quality attributes of the system, such as scalability, reliability and resource usage.
Performance Testing Types:
  1. Load testing
  2. Stress testing
  3. Endurance testing (soak testing)
  4. Spike testing
  5. Configuration testing
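Responsiveness under a workload can be measured by timing repeated calls. A minimal sketch using `time.perf_counter`; the `operation` function is a hypothetical stand-in for a real request:

```python
import time

def operation():
    # Hypothetical unit of work standing in for a real request
    return sum(range(1000))

def measure(n_requests):
    # Simulate a workload of n_requests calls and report timings
    start = time.perf_counter()
    for _ in range(n_requests):
        operation()
    elapsed = time.perf_counter() - start
    return {"requests": n_requests,
            "total_s": elapsed,
            "avg_s": elapsed / n_requests}

stats = measure(1000)
print(f"avg response: {stats['avg_s']:.6f}s over {stats['requests']} requests")
```

Real load and stress tools generate concurrent workloads against a deployed system, but the core measurement (response time under a controlled number of requests) is the same idea.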

6.2.7 Security testing :
  • Security testing is a process to determine that an information system protects data and maintains functionality as intended.
  • The six basic security concepts that need to be covered by security testing are:
  1. Confidentiality,
  2. Integrity,
  3. Authentication,
  4. Availability,
  5. Authorization and
  6. Non-repudiation
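One of the six concepts above, integrity, can be illustrated with a checksum comparison using `hashlib`. The message contents are invented for this sketch:

```python
import hashlib

def digest(data: bytes) -> str:
    # Integrity check: any change to the data changes the digest
    return hashlib.sha256(data).hexdigest()

original = b"transfer $100 to account 42"
stored_digest = digest(original)          # recorded when the data was trusted

tampered = b"transfer $900 to account 42"

print(digest(original) == stored_digest)  # data intact
print(digest(tampered) == stored_digest)  # integrity violated
```

A security test for integrity would deliberately alter stored or transmitted data and verify that the system detects the mismatch rather than silently accepting it.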
7. Career Path and Test Engineer Capabilities 




8. Test Plan

8.1 Test Plan:
A document that defines the overall testing approach is called Test Plan.

  • Introduction
  1. Project ID 
  2. Project Name
  3. Module / Application Name
  4. Revision History
  • Testing Details
  1. Module to be tested (Based on Priority) 
  2. Features to be tested in this time
  3. Features not to be tested
  4. Test Approach (Strategy)
  5. Test Type
  6. Test Team
  7. Test Objective
  8. Test Environment
  9. Test Methodology
  10. Entry and Exit Criteria 
  11. Suspension and Resumption criteria 
  • Acceptance Test
  1. Acceptance Test Scope
  2. Out of Scope
  3. Acceptance Test Criteria and Procedures
  4. Test Execution
  5. Open Issues
  • Test Schedule
  • Traceability to Requirement
  • Test Case
  • Maintenance of Test plan         
                                                    
9. Test Case
9.1 Test Case:
  • A set of test inputs, execution conditions, and expected results developed for a particular objective 
  • A test case is a document that describes an input, action, or event and an expected response, to determine if a feature of an application is working correctly 
  • A test case should contain particulars such as test case identifier, test case name, objective, test conditions/setup, input data requirements, steps, and expected results  
Elements of a Test Case with example:
    Module Name       : DS-Payroll
    Component Name    : Payroll
    Test Type         : System Test
    Test Case ID      : DS-PROLL-001
    Use Case No       : UC-001

    Test Condition: 1. Request the 'Company Information' entry screen from the system.
                 2. After the 'Company Information' entry screen appears, insert all the necessary information and request the system to save the data.
                 3. Try to add information without inserting any values.
                 4. Try to add the same information again.

    Expected Output: 1. The system provides the 'Company Information' entry screen successfully.
                 2. The inserted information is saved successfully, and the user is notified by a user-friendly message.
                 3. The system asks for values in the required fields.
                 4. The system does not permit adding the same information again and notifies the user about the existing information.
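The particulars a test case should contain can be captured in a simple structure. A sketch using a dataclass, with field values taken from the example above (which is itself illustrative):

```python
from dataclasses import dataclass, field

@dataclass
class TestCase:
    # Particulars a test case should contain
    test_case_id: str
    name: str
    test_type: str
    conditions: list = field(default_factory=list)
    expected_results: list = field(default_factory=list)

tc = TestCase(
    test_case_id="DS-PROLL-001",
    name="Company Information entry",
    test_type="System Test",
    conditions=[
        "Request the 'Company Information' entry screen",
        "Save with all required fields filled",
    ],
    expected_results=[
        "Entry screen is shown",
        "Data is saved and a confirmation message appears",
    ],
)
print(tc.test_case_id, len(tc.conditions))
```

Keeping conditions and expected results as parallel lists makes the pass/fail comparison during execution mechanical: each condition's actual outcome is checked against the matching expected result.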

10. Defect Management

10.1 Definition of defect:
  • A flaw in a system or system component that causes the system or component to fail to perform its required function. - SEI
  • A defect, if encountered during execution, may cause a failure of the system.
  • The purpose of finding defects:
  1. The purpose of finding defects is to get them fixed
  2. The prime benefit of testing is that it results in improved quality. Bugs get fixed
10.2 Why do defects occur:

There are various reasons for the occurrence of faults; they may be due to:
  • Poor documentation
  • Ambiguous or unclear requirements
  • Lack of programming skills
  • Increased work pressure and tight deadlines
10.3 Defect Types: