Sunday, August 16, 2009

QC Roles & Responsibilities

1. Test Associate

Reporting To:
Team Lead of a project

Responsibilities:
  • Design and develop test conditions and cases, with associated test data, based upon the requirements
  • Design test scripts
  • Execute the test ware (conditions, cases, test scripts, etc.) with the test data generated
  • Review test ware, record defects, retest and close defects
  • Prepare reports on test progress

2. Test Engineer

Reporting To:
Team Lead of a project

Responsibilities:
  • Design and develop test conditions and cases, with associated test data, based upon the requirements
  • Design test scripts
  • Execute the test ware (conditions, cases, test scripts, etc.) with the test data generated
  • Review test ware, record defects, retest and close defects
  • Prepare reports on test progress

3. Senior Test Engineer

Reporting To:
Team Lead of a project

Responsibilities:
  • Collect requirements from the users, evaluate them and send them out for team discussion
  • Prepare the high-level design document, incorporate the feedback received on it and initiate the low-level design document
  • Assist in preparing the test strategy document and drawing up the test plan
  • Prepare business scenarios and supervise the preparation of test cases based on them
  • Maintain the run details of the test execution; review test conditions/cases and test scripts
  • Defect management
  • Prepare test deliverable documents and the defect metrics analysis report

4. Test Lead

Reporting To:
Test Manager

Responsibilities:
  • Technical leadership of the test project including test approach and tools to be used
  • Preparation of test strategy
  • Ensure entrance criteria are met before test start-off
  • Ensure exit criteria are met before completion sign-off
  • Test planning including automation decisions
  • Review of design documents (test cases, conditions, scripts)
  • Preparation of test scenarios and configuration management and quality plan
  • Manage test cycles
  • Assist in recruitment
  • Supervise test team
  • Resolve team queries/problems
  • Report and follow up test system outages/problems
  • Client interface
  • Project progress reporting
  • Defect Management
  • Staying current on latest test approaches and tools, and transferring this knowledge to test team
  • Ensure test project documentation

5. Test Manager

Reporting To:
Management

Responsibilities:
  • Liaison for interdepartmental interactions: Representative of the testing team
  • Client interaction
  • Recruiting, staff supervision, and staff training.
  • Test budgeting and scheduling, including test-effort estimations.
  • Test planning including development of testing goals and strategy.
  • Test tool selection and introduction.
  • Coordinating pre and post test meetings.
  • Test program oversight and progress tracking.
  • Use of metrics to support continual test process improvement.
  • Test process definition, training and continual improvement.
  • Test environment and test product configuration management.
  • Nomination of training
  • Cohesive integration of test and development activities.
  • Mail Training Process for training needs, if required
  • Review of the proposal

If you have any queries/feedback, send them to me at -
knowthetesting@gmail.com

Thank you
Ram

Regression Testing and Re-testing


“Retesting of a previously tested program following modification to ensure that faults have not been introduced or uncovered as a result of the changes made.”
… BS7925-1


“Regression Testing is the process of testing the changes to computer programs to make sure that the older programs still work with the new changes.”



“When making improvements on software, retesting previously tested functions to make sure adding new features has not introduced new problems.”



Regression testing is an expensive but necessary activity performed on modified software to provide confidence that changes are correct and do not adversely affect other system components. Four things can happen when a developer attempts to fix a bug. Three of these things are bad, and one is good:






                        New Bug     No New Bug
Successful change       Bad         Good
Unsuccessful change     Bad         Bad



Because of the high probability that one of the bad outcomes will result from a change to the system, it is necessary to do regression testing. A regression test selection technique chooses, from an existing test set, the tests that are deemed necessary to validate modified software.



There are three main groups of test selection approaches in use:
  • Minimization approaches seek to satisfy structural coverage criteria by identifying a minimal set of tests that must be rerun.
  • Coverage approaches are also based on coverage criteria, but do not require minimization of the test set. Instead, they seek to select all tests that exercise changed or affected program components.
  • Safe approaches instead attempt to select every test that will cause the modified program to produce different output than the original program.
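The coverage-based selection idea above can be sketched in a few lines. This is an illustrative toy, not a real selection tool: it assumes you already have a map from each test to the program components it exercises, and a list of components changed in the latest modification; all names here are invented.

```python
# Hypothetical sketch of coverage-based regression test selection:
# pick every test that exercises at least one changed component.

def select_tests(coverage_map, changed_components):
    """Return the tests that touch at least one changed component."""
    changed = set(changed_components)
    return sorted(
        test for test, components in coverage_map.items()
        if changed & set(components)
    )

# Illustrative coverage data (test name -> components it exercises).
coverage = {
    "test_login":    ["auth", "session"],
    "test_checkout": ["cart", "payment"],
    "test_profile":  ["auth", "profile"],
}

# Only the tests touching the changed module need to be rerun.
print(select_tests(coverage, ["auth"]))  # ['test_login', 'test_profile']
```

A minimization approach would additionally trim this selection down to a smallest set still meeting the coverage criteria.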



1. Factors favouring automation of regression testing
  • Ensure consistency
  • Speed up testing to accelerate releases
  • Allow testing to happen more frequently
  • Reduce costs of testing by reducing manual labor
  • Improve the reliability of testing
  • Define the testing process and reduce dependence on the few who know it



2. Tools used in Regression testing
  • WinRunner from Mercury
  • e-tester from Empirix
  • WebFT from Radview
  • SilkTest from Segue
  • Rational Robot from Rational
  • QA Run from Compuware



If you have any queries/feedback, send them to me at rampeddireddy2006@gmail.com or ram@examsinfo.in.

Thank you
Ram

Tuesday, August 11, 2009

What you need to know about BVT (Build Verification Testing)

What is BVT?
A Build Verification Test (BVT) is a set of tests run on every new build to verify that the build is testable before it is released to the test team for further testing. These test cases cover core functionality and ensure that the application is stable enough to be tested thoroughly. Typically the BVT process is automated. If the BVT fails, the build is assigned back to a developer for a fix.
BVT is also called smoke testing or build acceptance testing (BAT).

New Build is checked mainly for two things:

  • Build validation
  • Build acceptance
Some BVT basics:
  • It is a subset of tests that verify the main functionalities.
  • BVTs are typically run on daily builds; if the BVT fails, the build is rejected and a new build is released after the fixes are done.
  • The advantage of BVT is that it saves the test team the effort of setting up and testing a build whose major functionality is broken.
  • Design BVTs carefully enough to cover basic functionality.
  • Typically a BVT should not run for more than 30 minutes.
  • BVT is a type of regression testing, done on each and every new build.
BVT primarily checks project integrity and whether all the modules are integrated properly. Module integration testing is very important when different teams develop project modules. Many applications have failed due to improper module integration, and in the worst cases a complete project gets scrapped because module integration fails.
What is the main task in a build release? Obviously file 'check-in', i.e. including all the new and modified project files associated with the build. BVT was primarily introduced to check initial build health, i.e. to check that all the new and modified files are included in the release, all file formats are correct, and the version, language and flags associated with each file are right.
These basic checks are worthwhile before the build is released to the test team for testing. You will save time and money by discovering build flaws at the very beginning using BVT.

Which test cases should be included in BVT?
This is a very tricky decision to take before automating the BVT task. Keep in mind that the success of BVT depends on which test cases you include.
Here are some simple tips to include test cases in your BVT automation suite:
  • Include only critical test cases in BVT.
  • All test cases included in BVT should be stable.
  • All the test cases should have known expected result.
  • Make sure all included critical functionality test cases are sufficient for application test coverage.
Also, do not include modules in BVT that are not yet stable. For under-development features you cannot predict the expected behaviour, as these modules are unstable, and you might already know of failures in these incomplete modules. There is no point using such modules or test cases in BVT.
You can simplify the task of selecting critical functionality test cases by communicating with everyone involved in the project development and testing life cycle. Such a process should negotiate the BVT test cases, which ultimately ensures BVT success. Set some BVT quality standards; these standards can be met only by analysing the major project features and scenarios.
Example: Test cases to be included in BVT for a text editor application (some sample tests only):
1) Test case for creating a text file.
2) Test case for writing into the text editor.
3) Test case for the copy, cut and paste functionality of the text editor.
4) Test case for opening, saving and deleting a text file.

These are some sample test cases, which can be marked as ‘critical’ and for every minor or major changes in application these basic critical test cases should be executed. This task can be easily accomplished by BVT.
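A few of the sample cases above can be sketched as an automated suite. This is a minimal, self-contained illustration using Python's `unittest` and a temporary file standing in for the editor's document; a real text editor would be driven through a UI automation tool instead.

```python
# Illustrative BVT suite for the text-editor sample cases: create,
# write and delete a text file. A temp directory stands in for the
# application's working folder.

import os
import tempfile
import unittest

class TextEditorBVT(unittest.TestCase):
    def setUp(self):
        # Fresh scratch file path for every test case.
        self.path = os.path.join(tempfile.mkdtemp(), "note.txt")

    def test_create_file(self):
        open(self.path, "w").close()
        self.assertTrue(os.path.exists(self.path))

    def test_write_text(self):
        with open(self.path, "w") as f:
            f.write("hello")
        with open(self.path) as f:
            self.assertEqual(f.read(), "hello")

    def test_delete_file(self):
        open(self.path, "w").close()
        os.remove(self.path)
        self.assertFalse(os.path.exists(self.path))

if __name__ == "__main__":
    suite = unittest.defaultTestLoader.loadTestsFromTestCase(TextEditorBVT)
    unittest.TextTestRunner(verbosity=2).run(suite)
```

Because each case has a known expected result and runs in seconds, the whole suite stays well inside the 30-minute budget mentioned earlier.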
BVT automation suites need to be maintained and modified from time to time, e.g. include test cases in BVT when new stable project modules become available.
What happens when the BVT suite runs:
Say the build verification automation test suite is executed after any new build.
1) The result of the BVT execution is sent to all the email IDs associated with that project.
2) The BVT owner (the person executing and maintaining the BVT suite) inspects the result.
3) If the BVT fails, the BVT owner diagnoses the cause of the failure.
4) If the failure cause is a defect in the build, all the relevant information, with failure logs, is sent to the respective developers.
5) The developer, on initial diagnosis, replies to the team about the cause of the failure: is this really a bug, and if it is, what is the bug-fixing plan?
6) Once the bug is fixed, the BVT test suite is executed again; if the build passes the BVT, it is passed to the test team for further detailed functional, performance and other tests.

This process gets repeated for every new build.
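The per-build loop described above can be sketched as a small gate function. This is a hypothetical stand-in for a real CI integration: the suite is just a dict of callables, and the "email to developers" step is reduced to a returned message.

```python
# Hypothetical sketch of the BVT gate: run every case against a new
# build, then either release the build to the test team or reject it
# and report the failures back to the developers.

def run_bvt(bvt_suite):
    """Run every BVT case; return (passed, failure_log)."""
    failures = []
    for name, test in bvt_suite.items():
        try:
            test()
        except AssertionError as exc:
            failures.append(f"{name}: {exc}")
    return (not failures, failures)

def on_new_build(build_id, bvt_suite):
    passed, log = run_bvt(bvt_suite)
    if passed:
        return f"Build {build_id}: released to test team"
    # In a real setup this message, with the logs, would be mailed
    # to all email IDs associated with the project.
    return f"Build {build_id}: rejected, {len(log)} BVT failure(s)"

def home_page():
    raise AssertionError("HTTP 500 on /")  # simulated broken build

print(on_new_build("101", {"login_loads": lambda: None}))
# Build 101: released to test team
print(on_new_build("102", {"login_loads": lambda: None, "home_page": home_page}))
# Build 102: rejected, 1 BVT failure(s)
```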
Why does a BVT or build fail?
A BVT breaks sometimes. This doesn't mean that there is always a bug in the build. There are other reasons for a build to fail, such as a test case coding error, an automation suite error, an infrastructure error, or hardware failures.
You need to troubleshoot the cause of the BVT break and take proper action after diagnosis.

Tips for BVT success:
1) Spend considerable time writing the BVT test case scripts.
2) Log as much detailed information as possible to diagnose the BVT pass or fail result. This will help the developer team to debug and quickly find the cause of a failure.
3) Select stable test cases to include in BVT. For new features, if a new critical test case passes consistently on different configurations, promote it into your BVT suite. This reduces the probability of frequent build failures due to new, unstable modules and test cases.
4) Automate BVT process as much as possible. Right from build release process to BVT result - automate everything.
5) Have some penalties for breaking the build ;-) Some chocolates or team coffee party from developer who breaks the build will do.

Conclusion:
BVT is nothing but a set of regression test cases executed each time for a new build. It is also called a smoke test. The build is not assigned to the test team unless and until the BVT passes. The BVT can be run by a developer or a tester; the result is communicated throughout the team, and immediate action is taken to fix the bug if the BVT fails. The BVT process is typically automated by writing scripts for the test cases. Only critical test cases are included, and they should ensure application test coverage. BVT is very effective for daily as well as long-term builds. It saves significant time, cost and resources, and spares the test team the frustration of an incomplete build.





An approach for Security Testing of Web Applications

Introduction
As more and more vital data is stored in web applications and the number of transactions on the web increases, proper security testing of web applications is becoming very important. Security testing is the process that determines that confidential data stays confidential (i.e. it is not exposed to individuals/ entities for which it is not meant) and users can perform only those tasks that they are authorized to perform (e.g. a user should not be able to deny the functionality of the web site to other users, a user should not be able to change the functionality of the web application in an unintended way etc.).
Some key terms used in security testing
Before we go further, it will be useful to be aware of a few terms that are frequently used in web application security testing:
What is “Vulnerability”?
This is a weakness in the web application. The cause of such a “weakness” can be bugs in the application, an injection (SQL/ script code) or the presence of viruses.

What is “URL manipulation”?
Some web applications communicate additional information between the client (browser) and the server in the URL. Changing some information in the URL may sometimes lead to unintended behavior by the server.

What is “SQL injection”?
This is the process of inserting SQL statements through the web application user interface into some query that is then executed by the server.

What is “XSS (Cross Site Scripting)”?
When a user inserts HTML/ client-side script in the user interface of a web application and this insertion is visible to other users, it is called XSS.

What is “Spoofing”?
The creation of hoax look-alike websites or emails is called spoofing.
Security testing approach:

In order to perform a useful security test of a web application, the security tester should have good knowledge of the HTTP protocol. It is important to have an understanding of how the client (browser) and the server communicate using HTTP. Additionally, the tester should at least know the basics of SQL injection and XSS. Hopefully, the number of security defects present in the web application will not be high. However, being able to accurately describe the security defects with all the required details to all concerned will definitely help.
1. Password cracking:
Security testing of a web application can be kicked off with "password cracking". In order to log in to the private areas of the application, one can either guess a username/password or use a password cracker tool. Lists of common usernames and passwords are available, along with open source password crackers. If the web application does not enforce complex passwords (e.g. with letters, numbers and special characters, and a required minimum length), it may not take very long to crack the username and password.
If the username or password is stored in cookies without encryption, an attacker can use various methods to steal the cookies, and then the information stored in them, such as the username and password.
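The complexity policy mentioned above is easy to probe programmatically. The sketch below is illustrative, not a standard: the rules (minimum length, mixed case, digit, special character) are the ones named in this section, and a tester could run such a check against the application's accepted passwords.

```python
# Hedged sketch of a password-complexity check: the rules below mirror
# the policy described in the text and are illustrative only.

import string

def is_complex(password, min_length=8):
    """True if the password meets the illustrative complexity rules."""
    return (
        len(password) >= min_length
        and any(c.islower() for c in password)
        and any(c.isupper() for c in password)
        and any(c.isdigit() for c in password)
        and any(c in string.punctuation for c in password)
    )

print(is_complex("password"))     # False - lower-case letters only
print(is_complex("S3cure!pass"))  # True - length, case, digit, symbol
```

If the application accepts passwords that fail such a check, a dictionary-based cracker has a far easier job.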
2. URL manipulation through HTTP GET methods:
The tester should check if the application passes important information in the querystring. This happens when the application uses the HTTP GET method to pass information between the client and the server. The information is passed in parameters in the querystring. The tester can modify a parameter value in the querystring to check if the server accepts it.
Via an HTTP GET request, user information is passed to the server for authentication or for fetching data. An attacker can manipulate every input variable passed in this GET request in order to get the required information or to corrupt the data. In such conditions, any unusual behaviour by the application or web server is a doorway for the attacker into the application.
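A tester can generate the tampered variants systematically. The sketch below, using only Python's standard `urllib.parse`, takes a URL and produces one mutated copy per querystring parameter; the URL and payload value are invented for the example, and a real test would send each variant and inspect the server's response.

```python
# Illustrative helper for GET-parameter tampering: replace each
# querystring parameter in turn with a probe value.

from urllib.parse import urlencode, urlparse, parse_qsl, urlunparse

def tampered_urls(url, payload="9999"):
    """Yield one variant of the URL per parameter, with that value replaced."""
    parts = urlparse(url)
    params = parse_qsl(parts.query)
    for i, (name, _) in enumerate(params):
        mutated = list(params)
        mutated[i] = (name, payload)
        yield urlunparse(parts._replace(query=urlencode(mutated)))

url = "http://example.com/account?userid=123&view=summary"
for variant in tampered_urls(url):
    print(variant)
# http://example.com/account?userid=9999&view=summary
# http://example.com/account?userid=123&view=9999
```

If any variant returns another user's data instead of an authorization error, the server is trusting client-supplied parameters.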
3. SQL Injection:
The next thing that should be checked is SQL injection. Entering a single quote (‘) in any textbox should be rejected by the application. Instead, if the tester encounters a database error, it means that the user input is inserted in some query which is then executed by the application. In such a case, the application is vulnerable to SQL injection.
SQL injection attacks are very critical, as the attacker can get vital information from the server database. To check the SQL injection entry points into your web application, find the code in your code base where direct MySQL queries are executed against the database using user input.
If user input data is crafted into SQL queries, the attacker can inject SQL statements, or parts of SQL statements, as user input to extract vital information from the database. Even if the attacker only manages to crash the application, the SQL query error shown in the browser may reveal the information they are looking for. Special characters in user input should be handled/escaped properly in such cases.
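The vulnerability and its fix can be demonstrated in a few lines with Python's built-in `sqlite3` (standing in for the MySQL setup described above; the table and data are invented for the example). Concatenating input into the SQL string lets a crafted value rewrite the query; a parameterized query treats the same input as a literal value.

```python
# Demonstration of SQL injection and the parameterized-query fix,
# using an in-memory SQLite database with invented sample data.

import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, secret TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 's3cret')")

malicious = "x' OR '1'='1"

# Vulnerable: the input is pasted into the query, so the OR clause
# makes the WHERE condition always true and every row leaks.
leaked = conn.execute(
    f"SELECT secret FROM users WHERE name = '{malicious}'"
).fetchall()
print(leaked)   # [('s3cret',)]

# Safe: the placeholder passes the input as a value, not as SQL.
safe = conn.execute(
    "SELECT secret FROM users WHERE name = ?", (malicious,)
).fetchall()
print(safe)     # [] - no user is literally named "x' OR '1'='1"
```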
4. Cross Site Scripting (XSS):
The tester should additionally check the web application for XSS (cross-site scripting). Any HTML (e.g. <b>test</b>) or script (e.g. <script>alert('test')</script>) entered in an input field should not be accepted by the application; if such input is reflected back to other users, the application is vulnerable to XSS.
An attacker can use this method to execute a malicious script or URL in the victim's browser. Using cross-site scripting, an attacker can use scripts such as JavaScript to steal user cookies and the information stored in them.
Many web applications accept some user information and pass it in variables between pages.
E.g.: http://www.examplesite.com/index.php?userid=123&query=xyz
An attacker can easily pass malicious input or a script through such a parameter and have it executed in the victim's browser.
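The standard defence implied here is to HTML-escape any user input before reflecting it into a page. The sketch below uses Python's standard `html` module; the probe string is a typical test input, not taken from any real application.

```python
# Illustrative XSS defence: escape user input before it is reflected
# back into an HTML page, so the browser renders it as text rather
# than executing it as script.

import html

probe = '<script>alert("xss")</script>'

escaped = html.escape(probe)
print(escaped)
# &lt;script&gt;alert(&quot;xss&quot;)&lt;/script&gt;
```

During testing, if the probe comes back in a response unescaped, that page is an XSS entry point.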

Thursday, August 6, 2009

Automation


What is Automation? Automated testing is automating the manual testing process currently in use.



Today, rigorous application testing is a critical part of virtually all software development projects. As more organizations develop mission-critical systems to support their business activities, the need is greatly increased for testing methods that support business objectives. It is necessary to ensure that these systems are reliable, built according to specification, and have the ability to support business processes. Many internal and external factors are forcing organizations to ensure a high level of software quality and reliability.



In the past, most software tests were performed using manual methods. This required a large staff of test personnel to perform expensive and time-consuming manual test procedures. Owing to the size and complexity of today’s advanced software applications, manual testing is no longer a viable option for most testing situations.



Every organization has unique reasons for automating software quality activities, but several reasons are common across industries.



Using Testing Effectively
By definition, testing is a repetitive activity. The very nature of application software development dictates that no matter which methods are employed to carry out testing (manual or automated), they remain repetitious throughout the development lifecycle. Automation of testing processes allows machines to complete the tedious, repetitive work while human personnel perform other tasks.



Automation allows the tester to reduce or eliminate the required “think time” or “read time” necessary for the manual interpretation of when or where to click the mouse or press the enter key.



An automated test executes the next operation in the test hierarchy at machine speed, allowing tests to be completed many times faster than the fastest individual. Furthermore, some types of testing, such as load/stress testing, are virtually impossible to perform manually.




Reducing Testing Costs
The cost of performing manual testing is prohibitive when compared to automated methods. The reason is that computers can execute instructions many times faster, and with fewer errors, than individuals. Many automated testing tools can replicate the activity of a large number of users (and their associated transactions) using a single computer. Therefore, load/stress testing using automated methods requires only a fraction of the computer hardware that would be necessary to complete a manual test. Imagine performing a load test on a typical distributed client/server application on which 50 concurrent users were planned.



To do the testing manually, 50 application users employing 50 PCs with associated software, an available network, and a cadre of coordinators to relay instructions to the users would be required. With an automated scenario, the entire test operation could be created on a single machine having the ability to run and rerun the test as necessary, at night or on weekends without having to assemble an army of end users. As another example, imagine the same application used by hundreds or thousands of users. It is easy to see why a manual method for load/stress testing is an expensive and logistical nightmare.





Replicating Testing Across Different Platforms
Automation allows the testing organization to perform consistent and repeatable tests. When applications need to be deployed across different hardware or software platforms, standard or benchmark tests can be created and repeated on target platforms to ensure that new platforms operate consistently.



Repeatability and Control
By using automated techniques, the tester has a very high degree of control over which types of tests are being performed, and how the tests will be executed. Using automated tests enforces consistent procedures that allow developers to evaluate the effect of various application modifications as well as the effect of various user actions.



For example, automated tests can be built that extract variable data from external files or applications and then run a test using the data as an input value. Most importantly, automated tests can be executed as many times as necessary without requiring a user to recreate a test script each time the test is run.
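The data-driven pattern just described can be sketched concretely: inputs and expected results live in an external file, and one script replays them on every run. Everything here is illustrative; the discount function stands in for the application under test, and the CSV is inlined (in practice it would be a file such as `testdata.csv`).

```python
# Illustrative data-driven test: read input values and expected
# results from CSV, feed them to the function under test, and
# report pass/fail for each row.

import csv
import io

def discounted_price(price, percent):
    """Stand-in for the application logic being tested."""
    return round(price * (100 - percent) / 100, 2)

# Inlined so the example is self-contained; normally open("testdata.csv").
testdata = io.StringIO("price,percent,expected\n100,10,90.0\n80,25,60.0\n")

for row in csv.DictReader(testdata):
    actual = discounted_price(float(row["price"]), float(row["percent"]))
    status = "PASS" if actual == float(row["expected"]) else "FAIL"
    print(f"{row['price']} at {row['percent']}% -> {actual} [{status}]")
```

Adding a test case then means adding a row of data, not writing a new script, which is exactly the repeatability benefit described above.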




Greater Application Coverage
The productivity gains delivered by automated testing allow and encourage organizations to test more often and more completely. Greater application test coverage also reduces the risk of exposing users to malfunctioning or non-compliant software. In some industries such as healthcare and pharmaceuticals, organizations are required to comply with strict quality regulations as well as being required to document their quality assurance efforts for all parts of their systems.

Defect Reporting Guidelines


The key to making a good report is providing the development staff with as much information as necessary to reproduce the bug. This can be broken down into 5 points:

  1. Give a brief description of the problem
  2. List the steps that are needed to reproduce the bug or problem
  3. Supply all relevant information such as version, project and data used.
  4. Supply a copy of all relevant reports and data including copies of the expected results.
  5. Summarize what you think the problem is.
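The five points above map naturally onto a structured defect record. The sketch below is a hedged illustration, not a standard template; the field names and the completeness rule are invented for the example.

```python
# Illustrative defect-report structure covering the five points above;
# field names are hypothetical, not from any real tracking tool.

from dataclasses import dataclass, field

@dataclass
class DefectReport:
    summary: str                                      # 1. brief description
    steps: list = field(default_factory=list)         # 2. steps to reproduce
    version: str = ""                                 # 3. version/project/data
    attachments: list = field(default_factory=list)   # 4. reports, data, expected results
    analysis: str = ""                                # 5. what you think the problem is

    def is_actionable(self):
        """Toy rule: a developer needs at least a summary, steps and a version."""
        return bool(self.summary and self.steps and self.version)

report = DefectReport(
    summary="Total mis-calculated on invoice screen",
    steps=["Open Invoices", "Add two line items", "Compare total"],
    version="2.3.1",
    analysis="Rounding applied per line instead of on the sum",
)
print(report.is_actionable())  # True
```

A tracking tool built on such a record can refuse to accept reports that omit the reproduction steps or version, which enforces point 2 and point 3 automatically.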



When you are reporting a defect the more information you supply, the easier it will be for the developers to determine the problem and fix it.



Simple problems can have a simple report, but the more complex the problem– the more information the developer is going to need.



For example: cosmetic errors may only require a brief description of the screen, how to get to it and what needs to be changed.



However, an error in processing will require a more detailed description, such as:



1) The name of the process and how to get to it.
2) Documentation on what was expected. (Expected results)
3) The source of the expected results, if available. (This includes spreadsheets, an earlier version of the software and any formulas used.)
4) Documentation on what actually happened. (Perceived results)
5) An explanation of how the results differed.
6) Identify the individual items that are wrong.
7) If specific data is involved, a copy of the data both before and after the process should be included.
8) Copies of any output should be included.



As a rule the detail of your report will increase based on a) the severity of the bug, b) the level of the processing, c) the complexity of reproducing the bug.




Anatomy of a bug report



Bug reports need to do more than just describe the bug. They have to give developers something to work with so that they can successfully reproduce the problem.



In most cases the more information– correct information– given the better. The report should explain exactly how to reproduce the problem and an explanation of exactly what the problem is.



The basic items in a report are as follows:




Version: This is very important. In most cases the product is not static; developers will have been working on it, and the bug you’ve found may already have been reported or even fixed. In either case, they need to know which version to use when testing out the bug.



Product: If you are developing more than one product– Identify the product in question.



Data: Unless you are reporting something very simple, such as a cosmetic error on a screen, you should include a dataset that exhibits the error.



If you’re reporting a processing error, you should include two versions of the dataset, one before the process and one after. If the dataset from before the process is not included, developers will be forced to try and find the bug based on forensic evidence. With the data, developers can trace what is happening.



Steps: List the steps taken to recreate the bug. Include all proper menu names, don’t abbreviate and don’t assume anything.



After you’ve finished writing down the steps, follow them - make sure you’ve included everything you type and do to get to the problem. If there are parameters, list them. If you have to enter any data, supply the exact data entered. Go through the process again and see if there are any steps that can be removed.



When you report the steps they should be the clearest steps to recreating the bug.



Description: Explain what is wrong - Try to weed out any extraneous information, but detail what is wrong. Include a list of what was expected. Remember report one problem at a time; don’t combine bugs in one report.



Supporting documentation:
If available, supply documentation. If the process is a report, include a copy of the report with the problem areas highlighted. Include what you expected. If you have a report to compare against, include it and its source information (if it’s a printout from a previous version, include the version number and the dataset used)



This information should be stored in a centralized location so that Developers and Testers have access to the information. The developers need it to reproduce the bug, identify it and fix it. Testers will need this information for later regression testing and verification.



Test Report


A Test Report is a document that is prepared once the testing of a software product is complete and the delivery is to be made to the customer. This document would contain a summary of the entire project and would have to be presented in a way that any person who has not worked on the project would also get a good overview of the testing effort.



Contents of a Test Report
The contents of a test report are as follows:



Executive Summary
Overview
Application Overview
Testing Scope
Test Details
Test Approach
Types of testing conducted
Test Environment
Tools Used
Metrics
Test Results
Test Deliverables
Recommendations

These sections are explained as follows:

1. Executive Summary

This section would comprise general information regarding the project, the client, the application, tools and people involved, in such a way that it can be taken as a summary of the Test Report itself (i.e.) all the topics mentioned here would be elaborated in the various sections of the report.



  2. Overview
This comprises two sections – Application Overview and Testing Scope.




Application Overview – This would include detailed information on the application under test, the end users and a brief outline of the functionality as well.



Testing Scope – This would clearly outline the areas of the application that would / would not be tested by the QA team. This is done so that there would not be any misunderstandings between customer and QA as regards what needs to be tested and what does not need to be tested.
This section would also contain information of Operating System / Browser combinations if Compatibility testing is included in the testing effort.



  3. Test Details
This section would contain the Test Approach, Types of Testing conducted, Test Environment and Tools Used.



Test Approach – This would discuss the strategy followed for executing the project. This could include information on how coordination was achieved between Onsite and Offshore teams, any innovative methods used for automation or for reducing repetitive workload on the testers, how information and daily / weekly deliverables were delivered to the client etc.



Types of testing conducted – This section would mention any specific types of testing performed (i.e.) Functional, Compatibility, Performance, Usability etc along with related specifications.



Test Environment – This would contain information on the Hardware and Software requirements for the project (i.e.) server configuration, client machine configuration, specific software installations required etc.



Tools used – This section would include information on any tools that were used for testing the project. They could be functional or performance testing automation tools, defect management tools, project tracking tools or any other tools which made the testing work easier.



  4. Metrics
This section would include details on the total number of test cases executed in the course of the project, the number of defects found, etc. Calculations like defects found per test case, or the number of test cases executed per day per person, would also be entered in this section. This can be used to calculate the efficiency of the testing effort.
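The calculations named above are simple ratios; a small worked example makes the section concrete. All figures here are invented for illustration.

```python
# Worked example of the two metrics mentioned above, with invented figures.

def defect_density(defects_found, test_cases_executed):
    """Defects found per test case executed."""
    return round(defects_found / test_cases_executed, 2)

def cases_per_person_day(test_cases_executed, testers, days):
    """Test cases executed per person per day."""
    return round(test_cases_executed / (testers * days), 1)

executed, defects, testers, days = 400, 52, 4, 10

print(defect_density(defects, executed))              # 0.13
print(cases_per_person_day(executed, testers, days))  # 10.0
```

Tracked build over build, a rising defect density or a falling execution rate flags where the testing effort needs attention.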



  5. Test Results
This section is similar to the Metrics section, but is more for showcasing the salient features of the testing effort. In case many defects have been logged for the project, graphs can be generated accordingly and depicted in this section. The graphs can be for defects per build, defects based on severity, defects based on status (i.e.) how many were fixed and how many rejected, etc.



  6. Test Deliverables
This section would include links to the various documents prepared in the course of the testing project (i.e.) Test Plan, Test Procedures, Test Logs, Release Report etc.



  7. Recommendations
This section would include any recommendations from the QA team to the client on the product tested. It could also mention the list of known defects which have been logged by QA but not yet fixed by the development team so that they can be taken care of in the next release of the application.