Friday, September 25, 2009

User Acceptance Testing


User Acceptance Testing is a key feature of projects to implement new systems or processes. It is the formal means by which we ensure that the new system or process does actually meet the essential user requirements. Each module to be implemented will be subject to one or more User Acceptance Tests (UAT) before being ‘signed off’ as meeting user needs. The following overview answers some of the main questions that have been asked about UATs.
 

What is a User Acceptance Test?
A User Acceptance Test is:
• A chance to test business processes and software end to end
• A test performed on a scaled-down or condensed version of the system
• For each system module, the final UAT is the last chance to do the above in a test situation

What does the User Acceptance Test cover?
The scope of each User Acceptance Test will vary depending on which business process is being tested. In general however, tests will cover the following broad areas:
• A number of defined test cases using quality data to validate end-to-end business processes.
• A comparison of actual test results against expected results
• A meeting/discussion forum to evaluate the process and facilitate issue resolution.

What are the objectives of a User Acceptance Test?
Objectives of the User Acceptance Test are for a group of key users to:
• Validate system set-up for transactions and user access
• Confirm use of system in performing business processes
• Verify performance on business critical functions
• Confirm integrity of converted and additional data, for example values that appear in a look-up table
• Assess and sign off go-live readiness

Who will attend the User Acceptance Tests?
The project team will work with relevant stakeholders and managers to identify the people who can best contribute to system testing. Most of those involved in testing will also have been involved in earlier discussions and decision making about the system set-up. All users will receive basic training to enable them to contribute effectively to the test.

UAT Agenda
The agenda for each UAT will be agreed in advance with the users. The time required will vary depending on the extent of the functionality to be tested. The test schedule will allow time for discussion and issue resolution.

Roles and Responsibilities
The process of the User Acceptance Test must be carefully managed to ensure that it is able to meet the objectives above. The project team will be responsible for coordinating the preparation of all test cases and the UAT group will be responsible for the execution of all test cases (with support from the project team).

The User Acceptance Test Group will
• Ensure that the definition of the tests provides comprehensive and effective coverage of all reasonable aspects of functionality
• Execute the test cases using sample source documents as inputs and ensure that the final outcomes of the tests are satisfactory
• Validate that all test case input sources and test case output results are documented and can be audited
• Document any problems, and work with the project team to resolve problems identified during the tests
• Sign off on all test cases by signing the completed test worksheets
• Accept the results on behalf of the relevant user population
• Recognize any changes necessary to existing processes and take a lead role locally in ensuring that the changes are made and adequately communicated to other users

The Project Team will:
• Provide first level support for all testing issues
• Advise on changes to business process and procedure, and/or change the system functionality, where possible, via set-up changes
• Track and manage test problems



If you have any queries/feedback, send them to me at knowthetesting@gmail.com

Thank you
Ram




Wednesday, September 23, 2009

Interview Questions Part 4

Q. What qualities should a good QA/Test Manager have?
A. QA/Test Managers are familiar with the software development process; able to maintain enthusiasm of their team and promote a positive atmosphere; able to promote teamwork to increase productivity; able to promote cooperation between Software and Test/QA Engineers, have the people skills needed to promote improvements in QA processes, have the ability to withstand pressures and say *no* to other managers when quality is insufficient or QA processes are not being adhered to; able to communicate with technical and non-technical people; as well as able to run meetings and keep them focused.

Q. What can be done if requirements are changing continuously?
Work with management early on to understand how requirements might change, so that alternate test plans and strategies can be worked out in advance. It is helpful if the application's initial design allows for some adaptability, so that later changes do not require redoing the application from scratch. Additionally, try to...

  • Ensure the code is well commented and well documented; this makes changes easier for the developers.
  • Use rapid prototyping whenever possible; this will help customers feel sure of their requirements and minimize changes.
  • In the project's initial schedule, allow extra time commensurate with probable changes.
  • Move new requirements to a 'Phase 2' version of an application and use the original requirements for the 'Phase 1' version.
  • Negotiate to allow only easily implemented new requirements into the project; move more difficult new requirements into future versions of the application.
  • Ensure customers and management understand the scheduling impacts, inherent risks and costs of significant requirements changes. Then let management or the customers decide if the changes are warranted; after all, that's their job.
  • Balance the effort put into setting up automated tests against the expected effort required to redo them to deal with changes.
  • Design some flexibility into automated test scripts.
  • Focus initial automated testing on application aspects that are most likely to remain unchanged.
  • Devote appropriate effort to risk analysis of changes, in order to minimize regression-testing needs.
  • Design some flexibility into test cases; this is not easily done. The best bet is to minimize the detail in the test cases, or set up only higher-level, generic test plans.
  • Focus less on detailed test plans and test cases and more on ad-hoc testing, with an understanding of the added risk this entails.

Q. What if the application has functionality that wasn't in the requirements?
A. It may take serious effort to determine whether an application has significant unexpected or hidden functionality, which would indicate deeper problems in the software development process. If the functionality isn't necessary to the purpose of the application, it should be removed, as it may have unknown impacts or dependencies that were not taken into account by the designer or the customer.
If it is not removed, design information will be needed to determine added testing needs or regression-testing needs. Management should be made aware of any significant added risks as a result of the unexpected functionality. If the functionality affects only minor areas, such as small improvements in the user interface, it may not be a significant risk.

Q.  Why are there so many software bugs?
A. Generally speaking, there are bugs in software because of unclear requirements, software complexity, programming errors, changes in requirements, errors made in bug tracking, time pressure, poorly documented code and/or bugs in tools used in software development.

  • Software requirements are unclear when there is miscommunication about what the software should or shouldn't do.
  • Software complexity. All of the following contribute to the exponential growth in software and system complexity: Windows interfaces, client-server and distributed applications, data communications, enormous relational databases and the sheer size of applications.
  • Programming errors occur because programmers and software engineers, like everyone else, can make mistakes.
  • As to changing requirements: in some fast-changing business environments, continuously modified requirements are a fact of life. Sometimes customers do not understand the effects of changes, or understand them but request them anyway. The changes require redesign of the software and rescheduling of resources; some of the work already completed has to be redone or discarded, and hardware requirements can be affected, too.
  • Bug tracking can itself introduce errors, because keeping track of a large number of changes is complex.
  • Time pressure can cause problems, because scheduling software projects is not easy and often requires a lot of guesswork; when deadlines loom and the crunch comes, mistakes will be made.
  • Code documentation is tough to maintain, and it is also tough to modify code that is poorly documented. The result is bugs. Sometimes there is no incentive for programmers and software engineers to write clearly documented, understandable code. Sometimes developers get kudos for quickly turning out code, or feel they cannot have job security if everyone can understand the code they write, or believe that if the code was hard to write, it should be hard to read.
  • Software development tools, including visual tools, class libraries, compilers and scripting tools, can introduce their own bugs. Other times the tools are poorly documented, which can create additional bugs.

Q. What is risk analysis? What does it have to do with Severity and Priority?
A. Risk analysis is a method to determine how much risk is involved in something. In testing, it can be used to determine when to test something or whether to test something at all. Items with higher risk values should be tested early and often. Items with lower risk value can be tested later, or under some circumstances if time runs out, not at all. It can also be used with defects. Severity tells us how bad a defect is: "how much damage can it cause?". Priority tells us how soon it is desired to fix the defect: "should we fix this and if so, by when?".
Companies usually use numeric values to calculate both values. The number of values will change from place to place. I assume a five-point scale but a three-point scale is commonly used. Using a defect as an example, Major would be Severity1 and Trivial would be Severity5. A Priority1 would imply that it needs to be fixed immediately and a Priority5 means that it can wait until everything else is done. You can add or multiply the two digits together (there is only a small difference in the outcome) and the results become the risk value. You use the event's risk value to determine how you should address the problem. The lower values must be addressed before the middle values, and the higher values can wait the longest.

Defect 12345: Foo displays an error message with incorrect path separators when the optional showpath switch is applied. Severity 5, Priority 5, risk value (addition method) 10.
Defect 13579: Module Bar causes a system crash using a dereferenced handle. Severity 1, Priority 1, risk value (addition method) 2.
Defect 13579 will usually be addressed before 12345.
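The addition method used in these examples can be sketched in a few lines of Python; the defect details are taken from the two examples above, and a lower risk value means the defect is addressed sooner:

```python
def risk_value(severity, priority):
    """Risk value by the addition method: lower means address sooner."""
    return severity + priority

# The two example defects above, on a five-point scale (1 = worst).
defects = {
    12345: {"severity": 5, "priority": 5},  # cosmetic path-separator message
    13579: {"severity": 1, "priority": 1},  # system crash
}

# Sort defects so the lowest (most urgent) risk value comes first.
order = sorted(defects,
               key=lambda d: risk_value(defects[d]["severity"],
                                        defects[d]["priority"]))
for defect_id in order:
    d = defects[defect_id]
    print(defect_id, risk_value(d["severity"], d["priority"]))
```

Running this prints defect 13579 (risk value 2) before 12345 (risk value 10), matching the ordering above.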

Another method for Risk Assessment is based on a military standard, MIL-STD-882. It describes the risk of failure for military hardware. The main area of interest is section A.4.4.3 and its children where they indicate the Assessment of mishap risk. They use a four-point severity rating: Catastrophic; Critical; Marginal; Negligible. They then use a five-point probability rating: Frequent; Probable; Occasional; Remote; Improbable. Then rather than using a mathematical calculation to determine a risk level, they use a predefined chart. It is this chart that is novel as it groups risks together rather than giving them discrete values. If you want a copy of the current version, search for MIL-STD-882D using Yahoo! or Google.
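The chart-based grouping can be modelled as a simple lookup. Note that the groupings below are illustrative only, not copied from MIL-STD-882D; refer to the standard itself for the actual mishap risk assessment matrix:

```python
# Illustrative risk matrix in the spirit of MIL-STD-882's chart-based
# assessment: (severity, probability) -> a grouped risk level.
# These groupings are an example, NOT the standard's actual chart.
SEVERITIES = ["Catastrophic", "Critical", "Marginal", "Negligible"]
PROBABILITIES = ["Frequent", "Probable", "Occasional", "Remote", "Improbable"]

def mishap_risk(severity, probability):
    s = SEVERITIES.index(severity)        # 0 = worst severity
    p = PROBABILITIES.index(probability)  # 0 = most likely
    score = s + p
    if score <= 2:
        return "High"
    if score <= 4:
        return "Serious"
    if score <= 6:
        return "Medium"
    return "Low"

print(mishap_risk("Catastrophic", "Frequent"))   # -> High
print(mishap_risk("Negligible", "Improbable"))   # -> Low
```

The point of the chart approach is that nearby combinations share a risk level, rather than each combination getting a discrete numeric value.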

Q.   Give some examples of
Low Severity and Low Priority Bugs
High Severity and Low Priority Bugs
Low Severity and High Priority Bugs
High Severity and High Priority Bugs ?

A.  Answer1:
First understand severity and priority; then it is easy to decide whether each is low, medium or high. Priority is business oriented; severity is the effect of the bug on the functionality.
1. For example, there is a cosmetic error in the client's name and you find this bug at the time of delivery. The severity of this bug is low, but the priority is high because it affects the business.
2. If you find a major crash in the functionality of the application, but the crash lies in a module that is not part of the current deliverables, then the priority is low and the severity is high.

Answer2:
Priority - how soon your business side needs a fix. (Tip: The engineering side never decides priority.)
Severity - how bad the bug bites. (Tip: Only engineers decide severity.)
For a high-priority, low-severity example, suppose your program has an easter egg (a secret feature) showing a compromising photo of your boss. Schedule this bug to be removed immediately.
Low-priority, high-severity example: a long chain of events leads to a crash that risks the main data file. Because the chain of events is longer than customers are likely to reproduce, keep an eye on this one while fixing higher-priority things.
Testers should report bugs, the business side should understand them and set their priorities. Then testers and engineers should capture the bugs with automated tests before killing them. This reduces the odds they come back, and generally reduces "churn", which is bug fixes causing new bugs.

Answer3:
Priority is how important it is to the customer and if the customer is going to find it. Severity is how bad it is, if the customer found it.
High Priority low severity
I have a text editor and every 3 minutes it rings a bell (it is also noted that the editor does an auto-save every 3 minutes). This is going to drive the customer insane. They want it fixed ASAP; i.e. high priority. The impact is minimal. They can turn off the audio when using the editor. There are workarounds. Should be easy for the developer to find the code and fix it.
Low Priority High severity
If I press CTRL-Q-SHIFT-T, only in that order, and then eject a floppy diskette from the drive, it formats my hard drive. It is low priority because it is unlikely a customer is going to be affected by it. It is high severity because if a customer did find it, the results would be horrific.
High Priority High severity
If I open the Save As dialog and save the file with the same name the Save dialog would have used, it saves a zero-byte file and all the data is lost. Many customers will select Save As and then decide to overwrite the original document instead. They will NOT cancel the Save As and select Save; they will just use Save As and pick the same file name as the one they opened. So the likelihood of this happening is high; therefore high priority. It will cause the customer to lose data, which is costly; therefore high severity.
Low Priority low severity
If I hold the key combination LEFT_CTRL+LEFT_ALT+RIGHT_ALT+RIGHT_CTRL+F1+F12 for 3 minutes, it displays cryptic debug information used by the programmer during development. It is highly unlikely a customer will find this, so it is low priority. Even if they do find it, it might only result in a call to customer service asking what the information means. Telling the customer it is debug code left behind is safer than removing it and potentially breaking something else; it wasn't removed because removal would have added risk and delayed the release of the program.

 Answer4:
High Priority
low severity
Spelling the name of the company president wrong
Low Priority High severity
Year-end processing breaks ('cause it's 6 more months 'till year end)
High Priority High severity
Application won't start
Low Priority low severity
spelling error in documentation; occasionally screen is slightly
misdrawn requiring a screen refresh




If you have any queries/feedback, send them to me at knowthetesting@gmail.com

Thank you
Ram

Tuesday, September 22, 2009

Interview Questions Part 3

Q.  What is Requirements Traceability? What is the purpose of it? Explain types of traceability matrices?
A.  Requirements traceability maps each requirement of the project/product to the test cases that cover it, usually as a matrix in the test plan; its purpose is to ensure that all requirements are covered by tests. Common types of traceability matrix are forward (requirements to test cases), backward (test cases to requirements) and bidirectional (both).
Q.  What do you mean by Requirements coverage? Why is the purpose of this?
A.  Requirements coverage is closely related to the traceability matrix. It means cross-checking whether our application covers all the requirements of the customer, so that we can build and test the application without missing any requirement; this is critical for customer satisfaction.
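The cross-check described above can even be automated. A minimal sketch, with hypothetical requirement and test case IDs:

```python
# Hypothetical traceability matrix: requirement ID -> covering test case IDs.
traceability = {
    "REQ-001": ["TC-101", "TC-102"],
    "REQ-002": ["TC-103"],
    "REQ-003": [],  # no test case yet -> a coverage gap
}

def uncovered_requirements(matrix):
    """Return the requirements that have no test case mapped to them."""
    return [req for req, cases in matrix.items() if not cases]

print(uncovered_requirements(traceability))  # -> ['REQ-003']
```

An empty result means every requirement is covered by at least one test case.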
Q.   Explain the techniques used to estimate the number of test cases to be written for a particular project?
A. There is no particular technique to estimate the number of test cases required for a project; it mainly depends on the test engineer who is writing the test cases.
Q. What is the difference between stub and driver?
A.   Stubs and drivers are both dummy programs used during the integration of modules; each stands in for a part of the system that is not yet ready.
Stubs replace low-level modules and are used in the top-down approach.
Drivers replace high-level modules and are used in the bottom-up approach.
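A minimal sketch of the two ideas in Python; the module and function names are hypothetical. In top-down integration a stub stands in for a missing lower-level module, while in bottom-up integration a driver exercises a real lower-level module before the upper module exists:

```python
# Top-down: the real upper module calls a STUB for the missing lower module.
def tax_rate_stub(region):
    """Stub: returns a canned value instead of doing the real tax lookup."""
    return 0.10

def compute_total(amount, tax_lookup):
    """Upper module under test; the lower module is injected."""
    return amount + amount * tax_lookup("anywhere")

# Bottom-up: a DRIVER exercises the real lower module directly.
def real_tax_rate(region):
    """The real lower-level module."""
    return {"east": 0.08, "west": 0.10}.get(region, 0.05)

def driver():
    """Driver: stands in for the not-yet-built upper module."""
    return [real_tax_rate(r) for r in ("east", "west", "north")]

print(compute_total(100, tax_rate_stub))  # top-down test via the stub
print(driver())                           # bottom-up test via the driver
```

The stub keeps the upper module testable before the real tax lookup exists; the driver lets the tax lookup be tested before anything calls it.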
Q.  Describe me to the basic elements you put in a defect report?
A. Complete information such that developers can understand the bug, get an idea of its severity, and reproduce it if necessary.
Bug identifier (number, ID, etc.)
Brief description of the bug
Steps to reproduce the bug
Current bug status (e.g., 'Released for Retest', 'New', etc.)
The application name or identifier and version
The function, module, feature, object, screen, etc. where the bug occurred
Environment specifics, system, platform, relevant hardware specifics, app server, database
Test case name/number/identifier
Full bug description
Severity of the bug
Screen shots where the bug has occurred
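As an illustration, the fields above map naturally onto a small record type. A sketch in Python; the field names and example values are one reasonable choice, not a fixed schema:

```python
from dataclasses import dataclass, field

@dataclass
class DefectReport:
    """One defect report with the elements listed above."""
    bug_id: str
    summary: str
    steps_to_reproduce: list
    status: str            # e.g. 'New', 'Released for Retest'
    app_version: str
    module: str
    environment: str
    test_case_id: str
    description: str
    severity: int          # e.g. 1 (worst) to 5
    screenshots: list = field(default_factory=list)

# Hypothetical example report.
report = DefectReport(
    bug_id="BUG-42", summary="Crash on save",
    steps_to_reproduce=["Open a file", "Press Save"],
    status="New", app_version="1.3.0", module="Editor",
    environment="Windows XP SP3", test_case_id="TC-7",
    description="Application crashes when saving an empty file.",
    severity=1)
print(report.bug_id, report.severity)
```

Keeping the report structured like this makes it easy to sort and filter defects by severity, module or status.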
Q. Can anyone explain, "What is meant by Equivalence Partitioning "?
A.  Equivalence partitioning is a black-box test design technique. It is designed to minimize the number of test cases by dividing the inputs in such a way that the system is expected to behave the same way for all tests of each equivalence partition. Test inputs are then selected from each partition.
Equivalence partitions are designed so that every possible input belongs to one and only one equivalence partition.
Disadvantages
Doesn't test every input
No guidelines for choosing inputs
Heuristic based
Very limited focus
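As an illustration, suppose a field accepts ages 18 to 60 inclusive (a hypothetical requirement). That gives three equivalence partitions, below, inside and above the range, with one representative input chosen from each:

```python
def is_valid_age(age):
    """System under test: accepts ages 18..60 inclusive (hypothetical rule)."""
    return 18 <= age <= 60

# Three equivalence partitions, one representative input from each.
partitions = {
    "below": 10,   # expected invalid
    "inside": 35,  # expected valid
    "above": 75,   # expected invalid
}
expected = {"below": False, "inside": True, "above": False}

for name, value in partitions.items():
    assert is_valid_age(value) == expected[name]
print("all partitions behave as expected")
```

Three test inputs stand in for the whole input space, which is exactly the test-case reduction the technique promises; its weakness, as noted above, is that it does not test every input.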
Q.  How to calculate the estimate for test case design and review?
A.  It depends completely on the project requirements, resources and the level of testing, within the time constraints.
Q.  What is the difference between System Testing, Integration Testing & System Integration Testing?
A. System Testing
End-to-end testing of the application against the requirements, in a testing environment that is set up like the production environment but is not the production environment.

Integration Testing
Checking the interfaces between the modules is called integration testing.

System Integration Testing
Testing the fully integrated system as a whole, including its interfaces to any external systems, is called system integration testing.
Q.  Give example of a bug having high priority and low severity?
A.  It really depends on the project requirements, but one good example I can think of: say there is a requirement about rounding off dollar amounts in a banking application, and the SRS says the amount in the checking account should not be rounded. If the application rounds the amount off to a higher value, that is a high-priority, low-severity bug.
Q.  Describe bottom-up and top-down approaches
A.  The top-down and bottom-up approaches mainly come up in integration testing.
In the top-down approach:
* Modules are integrated by moving downwards; the missing lower-level modules are replaced by stubs.
* Integration proceeds in a depth-first or breadth-first manner.
In the bottom-up approach:
* Modules are integrated by moving upwards.
* Drivers are needed to call the modules under test.
Q.  Could anybody explain the difference b/w Patch, Build and Pilot?
A. A build is the code developed by the developers (coders); for example, an application.
A patch is a modification made to the code,

OR
additional functionality added to the code (application).
A pilot is testing the application in the real-time environment with a limited number of users.

OR
Build: a build can be defined as the module given to the QA team with some functionality.
Patch: a patch can be defined as an enhancement to the released product, or new functionality added to the same version.
Pilot: a pilot can be defined as the basic functionality of the product, required to show the strength of the development team to the client from whom the project is obtained.
Q.  What is the difference between functional testing & black box testing?
A. Black-box test design treats the system as a "black-box", so it doesn't explicitly use knowledge of the internal structure. Black-box test design is usually described as focusing on testing functional requirements.
Synonyms for black-box include: behavioral, functional, opaque-box, and closed-box testing.
Q. What are the different methodologies used in testing???
A. Some of the methodologies and standards used in testing are:
1) CMM (Capability Maturity Model)
2) IEEE (Institute of Electrical and Electronics Engineers) standards
3) RUP (Rational Unified Process)
4) Agile methodology
5) TQM (Total Quality Management)
If anybody knows more methodologies apart from these, let me know.
Q.  What are the modes of win runner? (neither recording modes /nor run modes).
A. WinRunner recording modes: Context Sensitive mode and Analog mode.
WinRunner run modes: Verify mode, Debug mode, Update mode.
WinRunner GUI map modes: Global, Per Test.
WinRunner recording methods:
Record
Pass-up
Object
Ignore
Q.  What is heuristic checklist used in Unit Testing?
A. The preferred checklist is (I am using this):
1. Understand the need for the module/function in relation to the specs.
2. Make sure about the type of values you are passing to the function as input.
3. Have a clear idea of the output you are expecting from the function, based on point 1 above.
4. Be specific about the test data you are passing to the function in terms of type (in case of positive testing).
5. Remember that you need to do both positive and negative testing.
6. Be clear about type casting (if any).
7. Have a crystal-clear idea about the type of assertions used (assertions compare the actual result with the expected).
8. Be clear about how the function is called, and whether any other function calls are involved in the function you are testing.
9. Perform regression testing for each new build and keep a log of the modifications you make to your unit test project (better if you use Visual SourceSafe).
10. It is always better to debug both positive and negative tests to see how the function performs, so that you have a clear understanding of the function you are testing.
11. It is always better to have a separate project for unit testing, just referencing the DLL of the application.
I hope my answer helps; if you find anything new and interesting, feel free to share.
Q.  Explain about Metrics and types of metrics like schedule variance, effort variance?
A.  Schedule Variance = ((Actual Duration - Planned Duration)/Planned Duration)*100
Effort Variance = ((Actual Effort - Planned Effort)/planned effort) *100
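The two formulas translate directly into code; a quick sketch, with hypothetical planned and actual figures:

```python
def schedule_variance(actual_duration, planned_duration):
    """Schedule Variance %, per the formula above."""
    return (actual_duration - planned_duration) / planned_duration * 100

def effort_variance(actual_effort, planned_effort):
    """Effort Variance %, per the formula above."""
    return (actual_effort - planned_effort) / planned_effort * 100

# Example: planned 20 days but took 25 -> a 25% schedule overrun;
# planned 100 person-hours but used 90 -> effort came in 10% under plan.
print(schedule_variance(25, 20))   # -> 25.0
print(effort_variance(90, 100))    # -> -10.0
```

A positive variance means the project overran the plan; a negative variance means it came in under.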






If you have any queries/feedback, send them to me at knowthetesting@gmail.com.

Thank you
Ram

Monday, September 21, 2009

Interview Questions Part 2

Q.  What is a Risk? What are the important components of the risk?
A. Risk is a condition that can result in loss.
The important components of risk are a) Probability the risk will occur b) Impact of the risk c) Frequency of occurrence

Q.  What is walkthrough and inspection?                 
A. Walkthroughs apply to both testing and coding.
A walkthrough for testing is a brief review of documents such as test cases and test scripts.
A walkthrough for coding reviews the code to check whether the developer followed the coding standards.

Inspection is the job of Quality Control (QC), which can conduct inspections and audits on the project at any time to check whether the process is being followed correctly.
Here is one example to distinguish Test Engineer, QA and QC.
Take an examination centre:


  • The test engineer is the examiner
  • QA is the sitting squad
  • QC is the flying squad
Q. What is Release Acceptance Testing?
What is Forced Error Testing?
What is Data Integrity Testing?
What is System Integration Testing
A.  Following are the details regarding Release Acceptance Testing, System Integration Testing, Data Integrity Testing

1. Release Acceptance Testing
Lets consider the following example:
Suppose a bank plans to test a cards application in 3 releases: Release 1, 2 and 3.
Each release has a few enhancements (functionalities) like chargebacks, fraud reporting, etc.
Testing these enhancements release-wise is termed release acceptance testing.

2. System Integration Testing
Clubbing all the modules of an application together and testing it as one single system is termed as System Integration Testing

3. Data Integrity Testing
Consider the scenario below:
When some external data (like flat files) is transmitted into your application, you expect complete communication of the data. For example, if you expect Rs.5000/- from the external file to be populated as Rs.5000/- in the Transaction Amount field in your application, then you are looking for integrity of the data transferred from the external interface to your application. This is data integrity testing.
No idea about Forced Error Testing.

Q.  What is the Compatibility testing difference between testing in IE explorer and testing in Firefox?
A. Web pages may render differently in the two browsers; for example, pages that look perfect in IE may have some images and text that are not displayed properly in Firefox, because the two browsers use different rendering engines.

Q.  How do you conduct boundary value analysis testing for an "OK" push-button?
A. There is no boundary value analysis for an "OK" push-button; boundary value analysis applies only to inputs that have a range of values.
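By contrast, boundary value analysis does apply to inputs that accept a range of values. A minimal sketch, assuming a hypothetical field that accepts 1 to 100 inclusive:

```python
def boundary_values(lo, hi):
    """Classic boundary value analysis inputs for an inclusive range:
    just below, on, and just above each boundary."""
    return [lo - 1, lo, lo + 1, hi - 1, hi, hi + 1]

print(boundary_values(1, 100))  # -> [0, 1, 2, 99, 100, 101]
```

A push-button has no such range, which is why the technique does not apply to it.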

Q. What is the strength and weakness in testing?
A. Strengths: judgement skills, patience, a pessimistic nature, and a strong desire for quality. Weakness: for example, I love coffee, so I drink coffee every hour.

Q.  What do we do when there is a known bug at the time of software release?
A.  If there is any open bug at the time of release, we mention it in the release note that we send with the release, in the list of open issues. If the client sees that we have mentioned the issue in the release note, he will not be worried about the quality of testing done on the application; but if he finds that bug himself, it will hurt our company's image.

Q.  What is "bug leakage?" and what is "bug release?"
A. Bugs undiscovered in a previous stage/cycle are called bug leakage for that stage/cycle. E.g. suppose you have completed System Testing (ST), certified the application as fully tested and sent it for UAT, but UAT uncovers some bugs that were not found at the ST stage. Those bugs leaked from the ST stage to UAT; this is called bug leakage.
Bug release is when the software is handed over to the test team or customer with certain known bugs, usually of low severity or priority, documented in the release note.

Q.  Which of the following statements about regression testing are true?
(1) Regression Testing must consist of a fixed set of tests to create a baseline
(2) Regression Testing should be used to detect defects in new features
(3) Regression Testing can be run on every build
(4) Regression Testing should be targeted to areas of high risk and known code change
(5) Regression Testing, when automated, is highly effective in preventing defects
A.  1 4 and 5 are correct

Q. How can we perform testing without expected results?
A. The main concept in testing is the expected result. By knowing the expected behaviour of the application or system from the SRS and FDS, we can derive a test case. When the derived test cases are executed, the actual result is noted; any deviation from the expected result is considered a defect.

In ad-hoc testing there is no need for a test case, but if we want to log a defect, we should know the expected behaviour of the application or system.

There is only one possibility for this question, according to me: exploratory testing. This is an interactive process of concurrent product exploration, test design and test execution. The heart of exploratory testing can be stated simply: the outcome of one test influences the design of the next test. The tester explores the product or application, notes down the expected result, designs a test case and executes the test.

Q. What is defect leakage?
A. Any defect that we could not find in the system test environment, and that is discovered later, is called defect leakage.

Q. What are the parameters of Quality cost?
A.  Quality cost, or cost of quality, depends on the time at which the bug is detected. If the bug is detected at the time of release, the cost of quality will be high; if it is detected in the requirements phase, the cost will be very low. So the main parameter of quality cost is the time at which the bug is detected.

Q. Can we test Windows Calculator through WinRunner? For example, if I have to get the values of the calculator buttons.
A. Yes, we can test Windows Calculator using WinRunner, and you can also retrieve the values of the buttons.

Q. What is SQA Activities?
A. SQA activities include suggesting and reviewing the process documents.
Example: reviewing the project management plan, etc.
Q.  What is the difference in writing the test cases for Integration testing and system testing
A. Integration testing:
Mostly, integration testing is not done by the test engineer.
Integration test cases include a partial amount of the conditions used for system testing.
System testing = functionality testing + integration + unit testing (a complete round of testing).
Q. What is a deliverable ? Can you please mention the names of deliverables in your current project .
A. Test deliverables:
1. Test plan document
2. Test case document
3. Test strategy document
4. Test script document
5. Test scenario document
6. Test log document
7. Coding guidelines document
8. Automation procedure document
Q. What metrics used to measure the size of the software?
A.  There are four ways in which you can measure program size:
1) Lines of code (LOC)
2) Function points
3) McCabe's complexity metric, which is the number of decisions + 1 in a program
4) Halstead's metrics, which are used to calculate program length
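As a rough illustration of McCabe's metric (number of decisions + 1), decision points can be approximated by counting decision keywords in the source text. A real tool would parse the code rather than match keywords, so this sketch is only an approximation:

```python
import re

def cyclomatic_complexity(source):
    """Rough McCabe estimate: count decision keywords + 1.
    A real tool would parse the code instead of matching text."""
    decisions = len(re.findall(r"\b(if|elif|for|while|case|and|or)\b", source))
    return decisions + 1

sample = """
if x > 0:
    for i in range(x):
        print(i)
"""
print(cyclomatic_complexity(sample))  # -> 3 (one 'if', one 'for', plus 1)
```

Straight-line code with no decisions has a complexity of 1, the minimum.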
Q.  Let us know your understanding of Audits ?
A. Audits come under QA. An audit is a check that the processes defined in the organisation are followed by the employees.
In an audit, every product is checked to see whether it is proceeding as per the organisation's defined process; the documents and metrics related to the product are reviewed.
Q.  What are the contents of Risk management Plan? Have you ever prepared a Risk Management Plan ?
A.  In general it consists of the types of risk associated with the project and the mitigation for each; it also records severity. In general the RMP is created by the PM and updated by the module leads when needed.
Q. How can you say your project testing is completed? Explain the various factors?
A.  There are many factors; some of the main ones are:
1) The deadline of your project is reached
2) Your estimated budget is exhausted
3) The customer's requirements are achieved, with a certain percentage of test cases passed
etc.





If you have any queries/feedback, send them to me at knowthetesting@gmail.com

Thank you
Ram

Saturday, September 19, 2009

Interview Questions Part 1


Q. Give me full explanation for SDLC process.
A. SDLC means Software Development Life Cycle.
In this approach following activities are carried out.
1. Requirements gathering
2. Analysis and planning
3. Design
4. Coding
5. Testing
6. Maintenance


In this approach, testing comes into the picture after coding and is conducted by the same development people. Where testing fits in these stages can vary depending on the development model used.


Q.  What is the testing life cycle?
A. There is no standard, but it consists of:
Test Planning (Test Strategy, Test Plan(s), Test Bed Creation)
Test Development (Test Procedures, Test Scenarios, Test Cases)
Test Execution
Result Analysis (compare Expected to Actual results)
Defect Tracking
Reporting

OR
A more detailed version of the same answer:
Test Life Cycle :
1. Test Plan:
a) Test Scope
b) Test Strategy
c) Test Scheduling
d) Test Estimation
e) Test Bed Creation
f) Test Techniques
2. Test Development:
a) Test Scenarios
b) Test Case Authoring
3. Test Execution
4. Result Analysis (comparing the expected and actual results)
5. Defect Tracking:
a) Filing of new defects
b) Verification of defects
c) Closure of defects
6. Test Regression
7. Test Stop
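The defect-tracking steps (filing, verification, closure) imply a small status workflow. The state names and allowed transitions below are a hypothetical illustration, not taken from any particular tracking tool:

```python
# Hypothetical defect-status workflow: which target states each
# current state is allowed to move to.
ALLOWED = {
    "New": {"Assigned"},
    "Assigned": {"Fixed"},
    "Fixed": {"Verified", "Reopened"},
    "Reopened": {"Assigned"},
    "Verified": {"Closed"},
    "Closed": set(),
}

def move(status: str, target: str) -> str:
    """Return the new status, rejecting transitions the workflow forbids."""
    if target not in ALLOWED[status]:
        raise ValueError(f"illegal transition: {status} -> {target}")
    return target

# Happy path: file a new defect, fix it, verify the fix, close it.
status = "New"
for step in ("Assigned", "Fixed", "Verified", "Closed"):
    status = move(status, step)
print(status)  # Closed
```

A defect whose fix fails verification would instead go "Fixed" to "Reopened" and back through "Assigned".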



Q.  Difference between Configuration Management and Version Control.
A. Version Control is part of Configuration Management.
Read the following, you will understand.


Configuration Management is the approach used to manage the individual components (software and hardware) that make up the system. It is important not to confuse Configuration Management with Change Management.
Change Management is concerned with changes made to an individual item or items, whereas Configuration Management is concerned with managing all of the individual items, and all of the items as a whole (the system).


Software exists in two forms: non-executable (source code) and executable (object code). When errors are found in the software, changes may need to be made to the source code. When this situation occurs, it is imperative to be able to identify which version of the code to change. A situation may also arise where two Developers need to make changes to the code. If each Developer is unaware that the other is updating the code, both updated versions could be saved, causing lost changes or worse. Testers may also be unsure which version of the code to test against, causing further problems.


When testing is complete, a Test Leader will need to demonstrate that the testing has been completed on a specific version of code, a necessity to ensure correct test coverage.


It is not only software and hardware components that need to be controlled, but also documentation, as project members will need to ensure they work from the correct versions.

Configuration Management consists of the following four parts:
Configuration Identification
Configuration Control
Status Accounting
Configuration Auditing
Configuration Identification
This is the process of identifying all of the individual items that will be subject to version control within the project. Details such as version and status may be recorded.


Configuration Control
This activity ensures that any changes are controlled and monitored. A master copy should be kept so that people can check out the latest version of a document, avoiding two people working on the same document version at once. Details such as dates, version numbers and "updated by" may be recorded. Once the item has been updated, it can be checked back in, at which point it becomes the new master copy. A history is kept when multiple versions exist.
Status Accounting
This is the process of recording and reporting on the current status of each item; in effect, the ability to view the current state of any item.
Configuration Auditing
Configuration Auditing is used to ensure that the control process is being correctly adhered to.
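The check-out/check-in cycle described under Configuration Control can be sketched as a toy Python class. The class name, users and behaviour here are purely illustrative, not modelled on any real configuration management tool:

```python
class ControlledItem:
    """Toy configuration-control sketch: one master copy, exclusive
    check-out, version bump on check-in, full history kept."""

    def __init__(self, name: str, content: str = ""):
        self.name = name
        self.version = 1
        self.content = content
        self.checked_out_by = None
        self.history = [(1, content)]

    def check_out(self, user: str) -> str:
        if self.checked_out_by:
            raise RuntimeError(
                f"{self.name} already checked out by {self.checked_out_by}")
        self.checked_out_by = user
        return self.content

    def check_in(self, user: str, new_content: str) -> None:
        if self.checked_out_by != user:
            raise RuntimeError("check out the item before checking in")
        self.version += 1
        self.content = new_content
        self.history.append((self.version, new_content))
        self.checked_out_by = None

doc = ControlledItem("TestPlan.doc", "v1 draft")
doc.check_out("dev_a")
try:
    doc.check_out("dev_b")   # second user is blocked -> no lost updates
except RuntimeError as err:
    print(err)
doc.check_in("dev_a", "v2 with review comments")
print(doc.version)  # 2
```

The exclusive check-out is what prevents the "two Developers overwrite each other" problem described above; real tools often allow concurrent edits and merge instead.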



Q.  How do you ensure the quality of the product?
A. The quality of the product can be ensured by keeping defects within the standard the organisation maintains for its clients. For example, if a company is Six Sigma oriented, the target is roughly 3.4 defects per million opportunities.
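For reference, the Six Sigma figure is usually expressed as defects per million opportunities (DPMO). The calculation is straightforward; the sample numbers below are invented for illustration:

```python
def dpmo(defects: int, units: int, opportunities_per_unit: int) -> float:
    """Defects per million opportunities."""
    return defects / (units * opportunities_per_unit) * 1_000_000

# e.g. 17 defects found across 1,000 units, each unit having
# 5 opportunities for a defect
print(dpmo(17, 1000, 5))  # 3400.0 -- far above the ~3.4 Six Sigma target
```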


Q. What if there isn't enough time for thorough testing?
A. Since it's rarely possible to test every possible aspect of an application, every possible combination of events, every dependency, or everything that could go wrong, risk analysis is appropriate to most software development projects.

Use risk analysis to determine where testing should be focused. This requires judgment skills, common sense and experience. The checklist should include answers to the following questions:
* Which functionality is most important to the project's intended purpose?
* Which functionality is most visible to the user?
* Which functionality has the largest safety impact?
* Which functionality has the largest financial impact on users?
* Which aspects of the application are most important to the customer?
* Which aspects of the application can be tested early in the development cycle?
* Which parts of the code are most complex and thus most subject to errors?
* Which parts of the application were developed in rush or panic mode?
* Which aspects of similar/related previous projects caused problems?
* Which aspects of similar/related previous projects had large maintenance expenses?
* Which parts of the requirements and design are unclear or poorly thought out?
* What do the developers think are the highest-risk aspects of the application?
* What kinds of problems would cause the worst publicity?
* What kinds of problems would cause the most customer service complaints?
* What kinds of tests could easily cover multiple functionalities?
* Which tests will have the best high-risk-coverage to time-required ratio?
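One common way to act on a checklist like this is to score each area on business impact and likelihood of defects, then test the highest scores first. The features and ratings below are invented purely for illustration:

```python
# Invented example data: (feature, impact 1-5, likelihood of defects 1-5)
features = [
    ("checkout payment", 5, 4),
    ("profile avatar upload", 2, 3),
    ("login", 5, 2),
]

# Simple risk score: impact * likelihood; test the highest scores first.
ranked = sorted(features, key=lambda f: f[1] * f[2], reverse=True)
for name, impact, likelihood in ranked:
    print(f"{name}: risk score {impact * likelihood}")
```

Here "checkout payment" (score 20) would be tested before "login" (10) and "profile avatar upload" (6), matching the intuition behind the financial-impact and visibility questions above.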



Q. What's normal practices of the QA specialists with perspective of software?
A. These are the normal practices of QA specialists with respect to software
[note: these are all QC activities, not QA activities]:
1. Design review meetings with the system analyst, and if possible participation in requirements gathering
2. Analysing the requirements and the design, and tracing the design back to the requirements
3. Test planning
4. Test case identification using different techniques (for both web-based and desktop applications)
5. Test case writing (usually assigned to the test engineers)
6. Test case execution (usually assigned to the test engineers)
7. Bug reporting (usually assigned to the test engineers)
8. Bug review and analysis, so that future bugs can be prevented by defining standards



Q. What is the difference between high level design and low level design?
A. Every software design is presented with one high-level design and one or more low-level designs: the high-level design is a system-level document, while the low-level designs are function- and module-level documents.
OR
The High Level Design Document (HLDD) and Low Level Design Document (LLDD) are derived from the Software Requirement Specification (SRS) and produced in the design phase of the SDLC.

The HLDD describes, at a system level, what the customer wants.

The LLDD describes how the application will deliver it, covering the procedures, subroutines and supporting details.


Q. What is the difference between Quality Assurance and Quality Control?
A. Quality Assurance is a set of activities designed to ensure that the development and maintenance process is adequate for a system to meet its objectives.
Quality Control is a set of activities designed to evaluate a developed work product.

OR
Quality Assurance deals with monitoring and improving the entire SDLC.
Quality Control (QC) deals with walkthroughs, reviews and inspections.









If you have any queries/feedback, send them to me at rampeddireddy2006@gmail.com or ram@examsinfo.in.


Thank you
Ram