Q. What is Requirements Traceability? What is its purpose? Explain the types of traceability matrices.
A. Requirements traceability is usually the last section of the tester's test plan; it maps test cases to requirements so we can confirm that every requirement of the project/product is covered by at least one test. Traceability can be forward (from requirements to the test cases that cover them), backward (from test cases back to the requirements they verify), or bidirectional (both directions combined).
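As a minimal illustration (the requirement and test-case IDs below are hypothetical), a traceability matrix is just a mapping from each requirement to the tests that cover it:
Requirement | Test Cases   | Coverage
REQ-001     | TC-01, TC-05 | Covered
REQ-002     | TC-02        | Covered
REQ-003     | (none)       | Gap - needs a test case
A row with no test cases immediately exposes a requirement that would otherwise ship untested.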
Q. What do you mean by Requirements Coverage? What is its purpose?
A. Requirements coverage, tracked through the traceability matrix, means cross-checking whether our application covers all the customer's requirements or not, so that we can build and test the application without missing any requirement. Meeting every requirement is critical to customer satisfaction.
Q. Explain the techniques used to estimate the number of test cases to be written for a particular project?
A. There is no fixed technique for estimating the number of test cases required for a project; it depends mainly on the judgment of the test engineer writing the test cases.
Q. What is the difference between stub and driver?
A. Stubs and drivers are both dummy programs used in place of real modules during integration testing, while the system is still under development.
Stubs are used in the top-down approach: they stand in for lower-level modules that have not been built or integrated yet.
Drivers are used in the bottom-up approach: they stand in for the higher-level modules that would normally call the module under test.
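As a minimal sketch in Python (all function names here are hypothetical), a stub can replace a lower-level module so the higher-level logic can be tested top-down before the real module exists:
# Hypothetical example: top-down integration with a stub.
# The real tax-calculation module is not ready, so a stub
# returns a fixed, predictable value in its place.
def get_tax_rate_stub(state):
    # Stub: always returns a canned value instead of real logic.
    return 0.08

def compute_invoice_total(amount, state, tax_lookup=get_tax_rate_stub):
    # Higher-level module under test; the stub is plugged in below it.
    return round(amount * (1 + tax_lookup(state)), 2)

assert compute_invoice_total(100.0, "CA") == 108.0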
Q. Describe the basic elements you put in a defect report.
A. Complete information, such that developers can understand the bug, get an idea of its severity, and reproduce it if necessary (a sketch of these fields as a record follows the list):
Bug identifier (number, ID, etc.)
Brief description of the bug
Steps to reproduce the bug
Current bug status (e.g., 'Released for Retest', 'New', etc.)
The application name or identifier and version
The function, module, feature, object, screen, etc. where the bug occurred
Environment specifics, system, platform, relevant hardware specifics, app server, database
Test case name/number/identifier
Full bug description
Severity of the bug
Screenshots of where the bug occurred
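A minimal sketch of how these fields might be captured as a record, in Python (field names are illustrative, not any particular bug tracker's schema):
from dataclasses import dataclass, field
from typing import List

@dataclass
class DefectReport:
    # Core identification and summary
    bug_id: str
    summary: str                      # brief description of the bug
    steps_to_reproduce: List[str]
    status: str                       # e.g. 'New', 'Released for Retest'
    # Where the bug lives
    application: str                  # application name/identifier and version
    module: str                       # function, feature, object, screen, etc.
    environment: str                  # OS, platform, app server, database...
    # Supporting detail
    test_case_id: str
    description: str                  # full bug description
    severity: str
    screenshots: List[str] = field(default_factory=list)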
Q. Can anyone explain what is meant by "Equivalence Partitioning"?
A. Equivalence partitioning is a black-box test design technique. It minimizes the number of test cases by dividing the input domain into partitions for which the system is expected to behave the same way, then selecting test inputs from each partition.
Equivalence partitions are designed so that every possible input belongs to one and only one equivalence partition.
Disadvantages
Doesn't test every input
No guidelines for choosing inputs within a partition
Heuristic based
Very limited focus
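For example, suppose a form accepts an age between 18 and 60 (a hypothetical rule, just to illustrate). A sketch in Python, picking one representative input per partition:
# Hypothetical rule: valid age is 18..60 inclusive.
def is_valid_age(age):
    return 18 <= age <= 60

# Three equivalence partitions cover the whole input domain:
#   below range (<18), in range (18-60), above range (>60).
# One representative value from each partition is enough.
assert is_valid_age(10) is False   # partition: below range
assert is_valid_age(35) is True    # partition: in range
assert is_valid_age(70) is False   # partition: above range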
Q. How to calculate the estimate for test case design and review?
A. It depends entirely on the project requirements, the resources available, and the level of testing, all within the project's time constraints.
Q. What is the difference between System Testing, Integration Testing & System Integration Testing?
A. System Testing
End-to-end testing of the application against the requirements, performed in a test environment that mirrors production (but is not the production environment itself).
Integration Testing
Checking the interfaces between the modules.
System Integration Testing
Checking that the fully integrated system also works correctly with the external systems it interfaces with.
Q. Give an example of a bug with high priority and low severity.
A. It really depends on the project requirements, but one good example: suppose the SRS for a banking application says that dollar amounts in a checking account should not be rounded. If the application rounds the amount up to a higher value anyway, that is a high-priority, low-severity bug.
Q. Describe bottom-up and top-down approaches
A. Top-down and bottom-up approaches mainly come up in integration testing.
In the top-down approach:
* Modules are integrated by moving downwards through the control hierarchy.
* Stubs replace the lower-level modules that are not yet integrated.
* Integration can proceed in a depth-first or breadth-first manner.
In the bottom-up approach:
* Modules are integrated by moving upwards.
* Drivers are needed to call the lower-level modules under test (see the sketch below).
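Conversely, a minimal driver sketch in Python (names hypothetical): in bottom-up integration the low-level module is real, but its caller does not exist yet, so a throwaway driver exercises it:
# Hypothetical example: bottom-up integration with a driver.
# The low-level discount module is finished; a simple driver
# calls it with test inputs because the real caller isn't built yet.
def apply_discount(price, percent):
    # Real low-level module under test.
    return round(price * (1 - percent / 100.0), 2)

def driver():
    # Driver: stands in for the not-yet-written higher-level caller.
    assert apply_discount(200.0, 10) == 180.0
    assert apply_discount(99.99, 0) == 99.99
    print("apply_discount passed driver checks")

driver()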
Q. Could anybody explain the difference between Patch, Build and Pilot?
A. A build is the code developed by the developers (coders), for example an application.
A patch is a modification made to the code, or additional functionality added to the code (application).
A pilot is testing the application in a real-time environment with a limited number of users.
OR
Build: the module given to the QA team with some functionality.
Patch: an enhancement to the released product, or new functionality added to the same version.
Pilot: the basic functionality of the product, required to show the strength of the development team to the client from whom the project was obtained.
Q. What is the difference between functional testing & black box testing?
A. Black-box test design treats the system as a "black box", so it doesn't explicitly use knowledge of the internal structure. Black-box test design is usually described as focusing on testing functional requirements: functional testing is the goal (verifying the specified behavior), while black-box is the design approach used to achieve it, which is why the terms are often used interchangeably.
Synonyms for black-box include: behavioral, functional, opaque-box, and closed-box testing.
Q. What are the different methodologies used in testing?
A. Some of the methodologies and frameworks used in testing are:
1) CMM (Capability Maturity Model)
2) IEEE standards (Institute of Electrical and Electronics Engineers)
3) RUP (Rational Unified Process)
4) Agile methodology
5) TQM (Total Quality Management)
If anybody knows more methodologies apart from these, let me know.
Q. What are the modes of WinRunner (recording modes, run modes, etc.)?
A. WinRunner recording modes: Context Sensitive mode and Analog mode.
WinRunner run modes: Verify mode, Debug mode, Update mode.
WinRunner GUI map modes: Global GUI Map File, GUI Map File per Test.
WinRunner record methods:
Record
Pass Up
As Object
Ignore
Q. What is heuristic checklist used in Unit Testing?
A. A preferred checklist (the one I use):
1. Understand the need for the module/function in relation to the specs.
2. Be sure about the type of values you are passing to the function as input.
3. Have a clear idea of the output you expect from the function, based on point 1 above.
4. Be specific about the test data you pass to the function in terms of type (in the case of positive testing).
5. Remember that you need to do both positive and negative testing (see the sketch below).
6. Be clear about type casting (if any).
7. Have a crystal-clear idea of the assertions used to compare the actual result with the expected result.
8. Be clear about how the function is called and whether any other function calls are involved in the function you are testing.
9. Perform regression testing for each new build and keep a log of the modifications you make to your unit-test project (better if you use Visual SourceSafe).
10. It's always better to debug both positive and negative tests to see how the function performs, so that you have a clear understanding of the function you are testing.
11. It's always better to have a separate project for unit testing that just references the DLL of the application.
I hope my answer helps you; if you find anything new and interesting, feel free to share.
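A minimal sketch of points 2-5 and 7 using Python's unittest (the function under test is hypothetical):
import unittest

def divide(a, b):
    # Hypothetical function under test.
    if b == 0:
        raise ValueError("division by zero")
    return a / b

class TestDivide(unittest.TestCase):
    def test_positive(self):
        # Positive test: valid inputs; assert actual against expected.
        self.assertEqual(divide(10, 2), 5)

    def test_negative(self):
        # Negative test: invalid input should raise the expected error.
        with self.assertRaises(ValueError):
            divide(10, 0)

if __name__ == "__main__":
    unittest.main()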
Q. Explain metrics and the types of metrics, such as schedule variance and effort variance.
A. Schedule Variance = ((Actual Duration - Planned Duration)/Planned Duration)*100
Effort Variance = ((Actual Effort - Planned Effort)/planned effort) *100
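A quick worked example (hypothetical numbers): if a task was planned for 10 days and took 12, Schedule Variance = ((12 - 10)/10)*100 = 20%. The same calculation in Python:
def variance(actual, planned):
    # Works for both schedule variance (durations) and
    # effort variance (person-hours): % deviation from plan.
    return (actual - planned) / planned * 100

print(variance(12, 10))   # schedule variance: 20.0 (%)
print(variance(90, 100))  # effort variance: -10.0 (% under plan)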
If you have any queries/feedback, send me an email at knowthetesting@gmail.com.
Thank you
Ram