Friday, July 31, 2009

Introduction To Software Standards


Capability Maturity Model - Developed by the software community in 1986 with leadership from the Software Engineering Institute (SEI). The CMM describes the principles and practices underlying software process maturity. It is intended to help software organizations improve the maturity of their software processes along an evolutionary path from ad hoc, chaotic processes to mature, disciplined ones. The focus is on identifying key process areas and the exemplary practices that make up a disciplined software process.

What makes up the CMM?
The CMM is organized into five maturity levels:


· Initial

· Repeatable

· Defined

· Managed

· Optimizing

Except for Level 1, each maturity level decomposes into several key process areas that indicate the areas an organization should focus on to improve its software process.
Level 1 – Initial: No key process areas. The software process is ad hoc, occasionally even chaotic; few processes are defined and success depends on individual effort.
Level 2 – Repeatable: Key process areas - Requirements management, Software project planning, Software project tracking & oversight, Software subcontract management, Software quality assurance, Software configuration management
Level 3 – Defined: Key process areas - Organization process focus, Organization process definition, Training program, Integrated software management, Software product engineering, Intergroup coordination, Peer reviews
Level 4 – Managed: Key process areas - Quantitative process management, Software quality management
Level 5 – Optimizing: Key process areas - Defect prevention, Technology change management, Process change management
Six Sigma
Six Sigma is a quality management program for achieving "six sigma" levels of quality. It was pioneered by Motorola in the mid-1980s and has since spread to many other manufacturing companies, notably General Electric (GE).
Six Sigma is a rigorous and disciplined methodology that uses data and statistical analysis to measure and improve a company's operational performance by identifying and eliminating "defects", from manufacturing to transactional processes and from products to services. Commonly defined as 3.4 defects per million opportunities, Six Sigma can be defined and understood at three distinct levels: metric, methodology and philosophy.
Six Sigma processes are executed by Six Sigma Green Belts and Six Sigma Black Belts, and are overseen by Six Sigma Master Black Belts.
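
As a rough illustration of the arithmetic behind the 3.4-defects-per-million metric, here is a minimal Python sketch (not part of any official Six Sigma toolkit; the defect counts are invented) that converts a defect count into DPMO and approximates the corresponding sigma level using the conventional 1.5-sigma shift:

    from statistics import NormalDist

    def dpmo(defects: int, units: int, opportunities_per_unit: int) -> float:
        """Defects per million opportunities."""
        return defects / (units * opportunities_per_unit) * 1_000_000

    def sigma_level(dpmo_value: float) -> float:
        """Approximate sigma level, including the conventional 1.5-sigma shift."""
        defect_free_fraction = 1 - dpmo_value / 1_000_000
        return NormalDist().inv_cdf(defect_free_fraction) + 1.5

    # Invented example: 7 defects in 1,000 units with 5 opportunities per unit.
    d = dpmo(defects=7, units=1000, opportunities_per_unit=5)
    print(f"DPMO: {d:.0f}, sigma level: {sigma_level(d):.2f}")
    print(f"3.4 DPMO corresponds to roughly {sigma_level(3.4):.1f} sigma")
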
ISO

ISO - The International Organization for Standardization is a network of the national standards institutes of 150 countries, on the basis of one member per country, with a Central Secretariat in Geneva, Switzerland, that coordinates the system. ISO is a non-governmental organization. ISO has developed over 13,000 International Standards on a variety of subjects.

Most common software errors


The following are the most common software errors. Knowing them helps you identify errors systematically and increases the efficiency and productivity of software testing.

Types of errors with examples

· User Interface Errors: Missing/wrong functions, doesn't do what the user expects, missing information, misleading or confusing information, wrong content in Help text, inappropriate error messages; performance issues such as poor responsiveness, inability to redirect output, inappropriate use of the keyboard.
· Error Handling: Inadequate protection against corrupted data, inadequate tests of user input, inadequate version control; ignored overflow and data-comparison errors; poor error recovery - aborting errors, recovery from hardware problems.
· Boundary related errors: Boundaries in loop, space, time and memory; mishandling of cases outside the boundary (see the sketch after this list).

· Calculation errors: Bad Logic, Bad Arithmetic, Outdated constants, Calculation errors, Incorrect conversion from one data representation to another, Wrong formula, Incorrect approximation.



· Initial and Later states: Failure to set a data item to zero, to initialize a loop-control variable, to re-initialize a pointer, or to clear a string or flag; incorrect initialization.
· Control flow errors: Wrong returning state assumed, Exception handling based exits, Stack underflow/overflow, Failure to block or un-block interrupts, Comparison sometimes yields wrong result, Missing/wrong default, Data Type errors.
· Errors in Handling or Interpreting Data: Un-terminated null strings, Overwriting a file after an error exit or user abort.
· Race Conditions: Assumption that one event or task has finished before another begins, Resource races, A task starts before its prerequisites are met, Messages cross or don't arrive in the order sent.
· Load Conditions: Required resources are not available, No available large memory area, Low priority tasks not put off, Doesn't erase old files from mass storage, Doesn't return unused memory.
· Hardware: Wrong Device, Device unavailable, Underutilizing device intelligence, Misunderstood status or return code, Wrong operation or instruction codes.
· Source, Version and ID Control: No Title or version ID, Failure to update multiple copies of data or program files.
· Testing Errors: Failure to notice/report a problem, Failure to use the most promising test case, Corrupted data files, Misinterpreted specifications or documentation, Failure to make it clear how to reproduce the problem, Failure to check for unresolved problems just before release, Failure to verify fixes, Failure to provide summary report.
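
To make the boundary category concrete, here is a minimal, hypothetical Python illustration (the function and its rule are invented for this sketch) of a classic off-by-one boundary bug and the tests that catch it:

    def in_allowed_range(value: int) -> bool:
        """Intended rule: values 1 through 100 inclusive are allowed."""
        # Boundary bug: '<' should be '<=', so the upper boundary 100 is rejected.
        return 1 <= value < 100

    # Boundary tests exercise the values at and just outside each edge.
    for value, expected in [(0, False), (1, True), (99, True), (100, True), (101, False)]:
        actual = in_allowed_range(value)
        status = "ok" if actual == expected else "BOUNDARY BUG"
        print(f"in_allowed_range({value}) = {actual}, expected {expected}: {status}")
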

Sunday, July 26, 2009

Risk-based analysis

We now understand the need for prioritizing, but we have yet to discuss how it should be done. We have found several examples of what to consider when prioritizing, all of which can be summed up in these guidelines from Microsoft Accessibility, Technology for Everyone (Microsoft, 2000):


• Prioritize testing features that are necessary parts of the product.
• Prioritize testing features that affect the largest number of users.
• Prioritize testing features that are chosen frequently by users.


What these features are differs from application to application, and they are not always obvious. Considering the application's purpose might help in deciding which parts of the site are important. Earlier we introduced purposes of web sites that we had derived from Ho's (1997) business purposes. These purposes present different prioritizing needs. A site for business transactions, for instance an Internet banking service, has security requirements that must be fulfilled for users to feel confident in the application; otherwise they will not use it. A promotional site, on the other hand, has no apparent need for high security in that sense. This can be translated into assessing the significance of a specific function, or the importance of a function not failing, which leads us to risk-based analysis, where some of the ideas come from James Bach (2000).


Whenever we make decisions, something is working in the background, considering what might go wrong and the effects it might have. This is also the basis of risk-based analysis. Risk-based analysis is a way of determining the order of priority among all possible errors that might occur. It takes into account the two factors mentioned above:


• The Likelihood of an error to occur (L)
• The Cost of an error (C)


These two factors are given numeric values and are multiplied with each other, creating a risk value: Risk = L × C.




Fig 2.5. Risk based analysis



The higher the value, the higher the risk, and the higher the priority. Based on this, further test actions can be planned.
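
A minimal sketch of this calculation in Python (the feature names and the 1-5 scales for likelihood and cost are invented for illustration; the method itself does not prescribe a scale):

    # Risk = Likelihood (L) x Cost (C); a higher risk value means higher test priority.
    features = [
        # (feature, likelihood 1-5, cost of failure 1-5) - illustrative values only
        ("funds transfer",    2, 5),
        ("login and session", 3, 5),
        ("account overview",  3, 3),
        ("help pages",        4, 1),
    ]

    ranked = sorted(features, key=lambda f: f[1] * f[2], reverse=True)
    for name, likelihood, cost in ranked:
        print(f"{name:18} L={likelihood} C={cost} risk={likelihood * cost}")
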

Saturday, July 25, 2009

Testing Process


In order to make the testing process as effective as possible, it needs to be viewed as one with the development process (Fig 2.1.). In many organizations, testing is an ad hoc process performed in the last stage of a project, if performed at all.


Figure 2.1. Testing should be conducted throughout the development process.



Studies show that testing often represents between 30 and 50% of the software development cost (RUP).
In order to reduce testing costs, a structured and well-defined way of testing needs to be implemented.
Certain projects may appear too small to justify extensive testing. However, one should consider the impact of errors, not the size of the project.



It is important to remember that, unfortunately, testing can only show the presence of errors, never their absence.

The figure below (Fig 2.2.) gives a general explanation of both the development process and the testing process, and it also shows the relation between the two. In real life these two processes, as stated earlier, should be viewed and handled as one. The colors in the figure show the relations between phases. For example, system testing of an application checks that it meets the requirements specified at the start of development, and a large part of integration testing checks the logical design done in the design phase. The relationships between the phases are based partly on the V-model as presented by Mark Fewster and Dorothy Graham (Software Test Automation, 1999).



Figure 2.2. Test phases relations with Development phases



As mentioned above, a well-defined and understood way of testing is essential to making the testing process as effective as possible. In order to produce software products of high quality, one has to view the testing process as a planned and systematic set of activities. The activities included are Test Management, Test Planning, Test Case Design & Implementation, and Test Execution & Evaluation. Test management will not be discussed further in this report.

1.1 Test planning

As for test planning, the purpose is to plan and organize resources and information, as well as to describe the objectives, scope and focus of the test effort. The first step is to identify and gather the requirements for the test. In order for the requirements to be of use in testing, they need to be verifiable or measurable. An important part of test planning is risk analysis. When assigning a risk factor, one must examine the likelihood of errors occurring, the effect of those errors and the cost they cause. To make the analysis as exhaustive as possible, each requirement should be reviewed. The purpose of the risk analysis is to identify what needs to be prioritized when performing the test. Risk analysis is discussed further later in this report.

In order to create a complete test plan, resources need to be identified and allocated. Resources include

  • Human – Who and how many are needed
  • Knowledge - What skills are needed
  • Time - How much time needs to be allotted
  • Economic - What is the estimated cost
  • Tools - What kind of tools are needed (hardware, software, platforms, etc.)

A good test plan should also include stop criteria. These can be very intricate to define, since the actual quality of the software is difficult to determine. Some common criteria used are (Rick Hower, 2000):

  • Deadlines (release deadlines, testing deadlines, etc.)
  • Test cases completed with certain percentage passed
  • Test budget depleted
  • Coverage of requirements reaches a specified point
  • Bug rate falls below a certain level
  • Beta or alpha testing period ends

Since it is very rare that every error in today's complex software is found, one could go on testing forever if stop criteria were not used. Specific criteria should therefore be defined for each separate test case in the process.
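
As a toy illustration of how such criteria might be checked mechanically (the thresholds and field names below are invented for this sketch, not taken from Hower), testing stops only when every criterion is satisfied:

    # Hypothetical stop-criteria check: stop when ALL thresholds are met.
    def should_stop(status: dict) -> bool:
        return (
            status["percent_test_cases_passed"] >= 95.0
            and status["requirements_coverage"] >= 0.90
            and status["new_bugs_per_week"] <= 2
        )

    current = {
        "percent_test_cases_passed": 97.2,
        "requirements_coverage": 0.93,
        "new_bugs_per_week": 1,
    }
    print("Stop testing?", should_stop(current))  # True with these example numbers
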

The outcome of test planning should of course be the test plan, which will function as the backbone, providing the strategies to be used throughout the test process.

2 Test case design & implementation

The main objective of test case design is to identify the different test cases or scenarios for every software build. These cases shall describe the test procedures and how to perform the test in order to reach the goal of each case. For each test case, the particular conditions for that test shall be described, as well as the specific data needed and the expected result. If testing has been done on the subject before, reusing old cases becomes important.

The design of test cases is based on what is to be tested. Features to be tested often present unique needs, and testing should be done in small sections to cope with the resulting differences in test case design. When testing a single feature there are a number of things to consider: how does it work, what may cause it to crash, and what are the possible variables?

Both data input and user actions should be performed in ways that test the designed logic, so that we get answers to the questions: do we get the expected answer, and what happens when wrong input is used? If, for instance, you are prompted to enter your age in a field, the logic behind it may expect a number between 0 and 100, but of course you might by mistake type a letter instead. What happens then? If the application is not designed to catch this mistake, it will crash or at least give an unexpected answer. This makes it important to test the feature so that it rejects wrong input while still, of course, accepting the expected input.

Considering the many ways to make similar mistakes, there are thousands of different inputs that could be tested. It is not possible to test them all; instead, one should choose inputs that represent all possible groups of input well. This procedure is often referred to as boundary testing (a sketch follows below).
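
A minimal Python sketch of the age-field example above (the validation function and its exact range are invented for illustration), with one representative input per group plus the boundary values themselves:

    def parse_age(raw: str) -> int:
        """Accept ages 0-100 inclusive; raise ValueError for anything else."""
        age = int(raw)  # raises ValueError for non-numeric input such as "x"
        if not 0 <= age <= 100:
            raise ValueError(f"age out of range: {age}")
        return age

    # Representatives of each input group, plus the boundaries 0 and 100.
    for raw in ["0", "100", "35", "-1", "101", "x", ""]:
        try:
            print(f"{raw!r:6} -> accepted: {parse_age(raw)}")
        except ValueError as error:
            print(f"{raw!r:6} -> rejected: {error}")
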

For the results of the test to be useful, and in order to know when the stop criteria are reached, the completeness of the test cases is very important.

When creating tests based on the test cases, certain objectives should be addressed.

  • Make your tests as reusable as possible
  • Make your tests easy to maintain
  • Use existing tests when possible

3 Test execution & evaluation

When running your tests, the results have to be taken care of in a defined manner. Was the test completed or did it halt? Are the results of the test the expected ones, and how can it be verified that the results originate from a correct run of the test?

When the actual results from a test do not match the expected ones, certain actions have to be taken. The first is to determine why the actual and the expected results differ. Does the error lie in the tested application or, for example, in the test script?

When errors are found, they need to be properly reported. Information about the bug needs to be communicated so that developers and programmers can solve the problem. These reports should include, among other things, the application name and version, the test date, the tester's name and a description of the error that occurred.
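
A minimal sketch of such a report as a Python data structure (the field names are invented; in practice a bug-tracking system would define them):

    from dataclasses import dataclass, field
    from datetime import date

    @dataclass
    class BugReport:
        application: str
        version: str
        test_date: date
        tester: str
        description: str
        steps_to_reproduce: list[str] = field(default_factory=list)

    report = BugReport(
        application="ExampleApp",
        version="1.2.0",
        test_date=date(2009, 7, 25),
        tester="J. Tester",
        description="Age field accepts the out-of-range value 101.",
        steps_to_reproduce=["Open the registration form", "Enter 101 as age", "Submit"],
    )
    print(report)
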

When a bug has been properly taken care of, the software needs to be re-tested to verify that the problems have been solved, and that the corrections made did not create new conflicts.

During the development and testing of software, many changes will surely be made to the software as well as to its environments. When such changes are made, there is a need to ensure that the program still functions as required. This kind of testing is called regression testing. The difference between a re-test and a regression test is that the latter is done when changes have been made to the program regarding, for example, its functionality, whereas a re-test is done to verify the software after bug fixing.

4 Test phases

As the testing process should be viewed as parallel with development, it goes through certain phases. As development moves forward, the scope and targets of testing change, starting with each individual piece and ending with the complete system. The goal is that every part is tested and fully functioning before being integrated with other parts. It is important to note that none of these phases is completely separated from the others; there is no definite border where the different phases start or end. They can be seen as an overall, approximate guideline for how to perform a successful test. Several authors, including Bill Hetzel and Hans Schaefer, describe the test process as consisting of the following phases:

  • Unit testing
  • Integration testing
  • System testing
  • Acceptance testing

Unit testing

Also called module testing. The testing done at this stage is on the isolated unit.
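
As a minimal illustration (the function under test is invented for this sketch), a unit test exercises one unit in isolation from the rest of the system:

    import unittest

    def add_vat(price: float, rate: float = 0.25) -> float:
        """Unit under test: add value-added tax to a price."""
        return round(price * (1 + rate), 2)

    class TestAddVat(unittest.TestCase):
        def test_default_rate(self):
            self.assertEqual(add_vat(100.0), 125.0)

        def test_zero_rate(self):
            self.assertEqual(add_vat(100.0, rate=0.0), 100.0)

    if __name__ == "__main__":
        unittest.main()
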




Integration testing

When units interact with each other, one must ensure that the communication between them works.
Conflicts often occur when units are developed separately, or when the syntax to be used is not communicated sufficiently.



System testing

When the system is complete, testing of the system as a whole can commence. Test cases based on actual user behaviour can be implemented, and non-functional tests, such as usability and performance tests, may be carried out.



Acceptance testing

The purpose is to let end users or customers decide whether to accept the system. Do the users feel comfortable with the product, and does it perform the required activities?

5 Test types

The previous text describes the general guidelines for testing, whether of software applications or web applications. But as the title of this report implies, the scope is centred on how to perform successful testing of web applications and how this process differs from the general test process.
Before proceeding to this area, certain issues have to be addressed. One must be acquainted with the different types of tests that are performed within the different stages of the process. The text that follows describes, in short, the most commonly used test types, with an aim on the medium of the web.

There are no definite borders between the types, and several of them can seem to overlap with adjacent areas. Needless to say, there are several opinions on this matter, and we base the following descriptions on authors such as Hans Schaefer, Bill Hetzel, Tim Van Tongeren and Hung Q. Nguyen.




Functionality testing

The purpose of this type of test is to ensure that every function works according to the specifications. Functions apply to a complete system as well as to a separate unit. Within the context of the web, functionality testing can, for example, include testing links, forms, Java applets or ActiveX applications.
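
As a small illustration of link testing (the URLs below are placeholders; a real test would first crawl the site to collect its links), this Python sketch checks that each link responds without an HTTP error:

    from urllib.request import urlopen
    from urllib.error import HTTPError, URLError

    links = ["https://example.com/", "https://example.com/no-such-page"]

    for url in links:
        try:
            with urlopen(url, timeout=10) as response:
                print(f"OK   {response.status} {url}")
        except HTTPError as error:   # server responded with a 4xx/5xx code
            print(f"FAIL {error.code} {url}")
        except URLError as error:    # DNS failure, refused connection, etc.
            print(f"FAIL {error.reason} {url}")
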



Performance testing

To ensure that the system has the capability that is requested, its performance has to be tested. The characteristics normally measured are execution time, response time, etc. In order to identify bottlenecks, the system or application has to be tested under various conditions. Varying the number of users and what the users are doing helps identify weak areas that do not show up during normal use. When testing applications for the web, this kind of testing becomes very important: since the users are not normally known and the number of users can vary dramatically, web applications have to be tested thoroughly. The general way of performance testing should not vary, but the importance of this kind of test does (Schaefer, 2000). Testing web applications under extreme conditions is done by load and stress testing. These are performed to ensure that the application can withstand, for example, a large number of simultaneous users or a large amount of data from each user. Other important characteristics for the web are download time, network speed, etc.
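
A very simple load-testing sketch along these lines (the target URL and user count are placeholders; real load tests use dedicated tools, but the core idea of many concurrent simulated users and measured response times is the same):

    import time
    from concurrent.futures import ThreadPoolExecutor
    from urllib.request import urlopen

    URL = "https://example.com/"   # placeholder target
    SIMULATED_USERS = 20

    def one_request(_):
        """Fetch the page once and return the elapsed time in seconds."""
        start = time.perf_counter()
        with urlopen(URL, timeout=30) as response:
            response.read()
        return time.perf_counter() - start

    with ThreadPoolExecutor(max_workers=SIMULATED_USERS) as pool:
        timings = sorted(pool.map(one_request, range(SIMULATED_USERS)))

    print(f"fastest {timings[0]:.3f}s, median {timings[len(timings) // 2]:.3f}s, "
          f"slowest {timings[-1]:.3f}s")
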



Usability

To ensure that the product will be accepted on the market, it has to appeal to users. There are several ways to measure usability and user response. For the web this is often very important, due to users' low acceptance level for glitches in the interface or navigation. Because of the nearly complete lack of standards for web layout, this area depends on actual usage of the site to yield as much useful information as possible. Microsoft has extensive standards, down to the pixel level, for where, for example, buttons are to be placed when designing programs for Windows. The situation on the web is exceptionally different, with almost no standards at all for how a site layout should be designed.



Compatibility testing

This refers to different settings or configurations of, for example, the client machine, the server or external databases. For the web this can be a very intricate area to test, due to the total lack of control over the client machine configuration or an external database. Will your site be compatible with different browser versions, operating systems or external interfaces? Testing every combination is normally not possible, so the usual approach is to identify the most likely combinations and test those.
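
A small sketch of enumerating configurations and picking the most likely combinations to test first (the browser and operating-system names reflect the 2009 context, and the popularity weights are invented):

    from itertools import product

    browsers = {"IE 8": 0.45, "Firefox 3.5": 0.35, "Safari 4": 0.10, "Opera 9": 0.05}
    systems = {"Windows XP": 0.60, "Windows Vista": 0.25, "Mac OS X": 0.10}

    # Estimate each combination's share of users and test the top few first.
    combos = [(b, s, browsers[b] * systems[s]) for b, s in product(browsers, systems)]
    combos.sort(key=lambda combo: combo[2], reverse=True)

    for browser, system, share in combos[:5]:
        print(f"test {browser} on {system} (about {share:.0%} of users)")
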


Security testing

In order to persuade customers to use Internet banking services or to shop on the web, security must be high enough. Users must feel safe posting personal information on a site in order to use it. Typical areas to test are directory setup, SSL, logins, firewalls and log files.



Security is an area of great importance as well as great extent, not least for the web. A lot of literature has been written on this subject, and more will come. Due to the complexity and size of this particular subject, we will not cover it beyond the basic features and where one should put in extra effort.


Wednesday, July 15, 2009

Testing Techniques

1. Static Testing Techniques

“Analysis of a program carried out without executing the program.”
… BS 7925-1


1.1.Review - Definition
Review is a process or meeting during which a work product, or set of work products, is presented to project personnel, managers, users, customers, or other interested parties for comment or approval.
[IEEE]


1.2.Types of Reviews
There are three general classes of reviews:
· Informal / peer reviews
· Semiformal / walk-through
· Formal / inspections.


1.2.1. Walkthrough
“A review of requirements, designs or code characterized by the author of the material under review guiding the progression of the review. “
[BS 7925-1]
A 'walkthrough' is an informal meeting for evaluation or informational purposes. Little or no preparation is usually required.
These are led by the author of the document and are educational in nature; communication is therefore predominantly one-way. Typically they entail dry runs of designs, code and scenarios/test cases.


1.2.2. Inspection
A group review quality improvement process for written material. It consists of two aspects: product improvement (of the document itself) and process improvement (of both document production and inspection).
[BS 7925-1]

An inspection is more formalized than a walkthrough, typically with 3-8 people including a moderator, a reader, and a recorder who takes notes. The subject of the inspection is typically a document such as a requirements specification or a test plan, and the purpose is to find problems and see what's missing, not to fix anything. Attendees should prepare for this type of meeting by reading through the document; most problems will be found during this preparation. The result of the inspection meeting should be a written report. Thorough preparation for inspections is difficult, painstaking work, but it is one of the most cost-effective methods of ensuring quality. An inspection is led by a trained moderator (not the author), has defined roles, and includes metrics and a formal process based on rules and checklists with entry and exit criteria.


1.2.3. Informal Review
· Unplanned and Undocumented
· Useful, Cheap and widely used
· In contrast with walkthroughs, communication is very much two-way in nature


1.2.4. Technical Review
Technical reviews are also known as peer reviews, as it is vital that participants are drawn from the 'peer group' rather than including managers.
· Documented
· Defined fault detection process
· Includes peers and technical experts
· No management participant





1.3. Activities performed during review
Activities in a review: planning, overview meeting, review meeting and follow-up.
Deliverables of a review: product changes, source document changes and improvements.
Factors that cause reviews to fail: lack of training, documentation and management support.


Review of the Requirements / Planning and Preparing Acceptance Test

At the beginning of the project, the test activities must start. These first activities are:
· Fix the test strategy and test concept
· Perform risk analysis
· Determine criticality
· Determine the expense of testing
· Determine the test intensity
· Draw up the test plan
· Organize the test team
· Training of the test team - If necessary
· Establish monitoring and reporting
· Provide required hardware resources (PC, data base, …)
· Provide required software resources (software version, test tools, …)

These activities lay the foundations for a manageable and high-quality test process. A test strategy is determined after a risk evaluation, a cost estimate and a test plan are developed, and progress monitoring and reporting are established. During the development process all plans must be updated and completed, and all decisions must be checked for validity.

In a mature development process, reviews and inspections are carried out throughout the whole process. The review of the requirements document answers questions like: Are all of the customer's requirements fulfilled? Are the requirements complete and consistent? And so on. It is a look back, to fix problems before going on in development. But just as important is a look forward. Ask questions like: Are the requirements testable? Are they testable with defensible expenditure? If the answer is no, then there will be problems implementing these requirements. If you have no idea how to test some requirement, it is likely that you have no idea how to implement it either.

At this stage of the development process, all the knowledge needed for the acceptance tests is available and to hand, so this is the best place for doing all the planning and preparing for acceptance testing.


For example, one can:
· Establish priorities of the tests depending on criticality
· Specify (functional and non-functional) test cases
· Specify and, if possible, provide the required infrastructure

At this stage, all of the acceptance test preparation can be finished.


1.4. Review of the Specification / Planning and Preparing System Test
In the review meeting of the specification documents, ask questions like: Is the specification testable? Is it testable with defensible expenditure? Only specifications of this kind can realistically be implemented and used for the next steps in the development process. The specifications must be re-worked if the answers to these questions are no. Here, all the knowledge for the system tests is available and to hand. Tasks in planning and preparing for system testing include:
· Establishing priorities of the tests depending on criticality
· Specifying (functional / non-functional) system test cases
· Defining and establishing the required infra-structure

As with the acceptance test preparation, all of the system test preparation is finished at this early development stage.
1.5. Review of the Architectural and Detailed Design / Planning and Preparing Integration and Unit Tests

During the review of the architectural design, one can look forward and ask questions like: What about the testability of the design? Are the components and interfaces testable? Are they testable with defensible expenditure? If the components are too expensive to test, the architectural design has to be re-worked before going further in the development process. At this stage all the knowledge for integration testing is also available, and all preparation, such as specifying control-flow and data-flow integration test cases, can be achieved. The corresponding review and preparation activities can be carried out for the detailed design in the same way, at the level of unit tests.


1.6. Roles and Responsibilities
In order to conduct an effective review, everyone has a role to play. More specifically, there are certain roles that must be played, and reviewers cannot switch roles easily. The basic roles in a review are:
· The moderator
· The recorder
· The presenter
· Reviewers


Moderator:
The moderator makes sure that the review follows its agenda and stays focused on the topic at hand. The moderator ensures that side-discussions do not derail the review, and that all reviewers participate equally.


Recorder:
The recorder is an often overlooked, but essential part of the review team. Keeping track of what was discussed and documenting actions to be taken is a full-time task. Assigning this task to one of the reviewers essentially keeps them out of the discussion. Worse yet, failing to document what was decided will likely lead to the issue coming up again in the future. Make sure to have a recorder and make sure that this is the only role the person plays.


Presenter:
The presenter is often the author of the artifact under review. The presenter explains the artifact and any background information needed to understand it (although if the artifact was not self-explanatory, it probably needs some work). It's important that reviews not become "trials" – the focus should be on the artifact, not on the presenter. It is the moderator's role to make sure that participants (including the presenter) keep this in mind. The presenter is there to kick off the discussion, to answer questions and to offer clarification.


Reviewer:
Reviewers raise issues. It’s important to keep focused on this, and not get drawn into side discussions of how to address the issue. Focus on results, not the means.