Wednesday, November 17, 2010

Error Guessing

Error Guessing is a test case design technique in which the tester anticipates what faults might occur and designs tests to expose them.

Purpose



The purpose of error guessing is to focus the testing activity on areas that have not been handled by the other more formal techniques, such as equivalence partitioning and boundary value analysis. Error guessing is the process of making an educated guess as to other types of areas to be tested.

For example, educated guesses can be based on items such as metrics from past testing experience, or on the tester's identification of situations in the Functional Design Specification or Detailed Design Specification that are not addressed clearly.

Examples



Though metrics from past test experience are the optimum basis for error guessing, these may not be available. Examples of error-prone situations include the following (a small scripted sketch follows the list):

initialization of data, (e.g., repeat a process to see if data is properly removed),

wrong kind of data, (e.g., negative numbers, non-numeric versus numeric),

handling of real data, (i.e., test using data created through the system or real records, because programmers tend to create data that reflects what they are expecting),

error management, (e.g., proper prioritization of multiple errors, clear error messages, proper retention of data when an error is received, processing continues after an error if it is supposed to),

calculations, (e.g., hand calculate items for comparison),


restart/recovery, (i.e., use data that will cause a batch program to terminate before completion and determine if the restart/recovery process works properly),


proper handling of concurrent processes, (i.e., for event driven applications, test multiple processes concurrently).
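As a rough illustration, here is a minimal VBScript/QTP sketch of tests derived from the "wrong kind of data" guess above. The window and control names (Form1, val1, ADD) are assumptions borrowed from other examples on this blog, purely for illustration; the point is simply that each guessed fault becomes a concrete input.

' Error-guessing sketch (hypothetical window/control names)
Dim badInputs, inputVal
badInputs = Array("-1", "abc", "", "2147483648")   ' negative, non-numeric, empty, and overflow guesses

For Each inputVal In badInputs
    VbWindow("Form1").VbEdit("val1").Set inputVal
    VbWindow("Form1").VbButton("ADD").Click
    ' The expected result (rejection or a clear error message) is verified manually or with a checkpoint
    Reporter.ReportEvent micDone, "Error guess", "Submitted value: '" & inputVal & "'"
Next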


Tuesday, October 19, 2010

QTP Script For Connecting to Database

QTP Script for connecting to MS Access.

Option Explicit
Dim con, rs
Set con = CreateObject("ADODB.Connection")
Set rs = CreateObject("ADODB.Recordset")
con.Provider = "Microsoft.Jet.OLEDB.4.0"
con.Open "d:\testdata.mdb"
rs.Open "select * from emp", con
Do While Not rs.EOF
    VbWindow("Form1").VbEdit("val1").Set rs.Fields("v1")
    VbWindow("Form1").VbEdit("val2").Set rs.Fields("v2")
    VbWindow("Form1").VbButton("ADD").Click
    rs.MoveNext
Loop
The database we are using here is MS Access. Before running this script, create a table in MS Access.
In the above script I used a table called "emp" with column names "v1" and "v2". "d:\testdata.mdb" is the path of the database file we created. The main use of this script is to drive the application with test data stored in a database table. In the above script we pass values from the database to textboxes in a Windows application.
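One small addition worth making (not part of the original script): close and release the ADO objects after the loop so the connection to the .mdb file is not left open.

'Suggested cleanup after the data-driven loop (an addition, not in the original post)
rs.Close
con.Close
Set rs = Nothing
Set con = Nothing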

Similar scripts for connecting to two other databases follow.

QTP Script for connecting to SQL Server.

Option Explicit
Dim con, rs
Set con = CreateObject("ADODB.Connection")
Set rs = CreateObject("ADODB.Recordset")
con.Open "provider=sqloledb.1;server=localhost;uid=sa;pwd=;database=testdata"
rs.Open "select * from emp", con
Do While Not rs.EOF
    VbWindow("Form1").VbEdit("val1").Set rs.Fields("v1")
    VbWindow("Form1").VbEdit("val2").Set rs.Fields("v2")
    VbWindow("Form1").VbButton("ADD").Click
    rs.MoveNext
Loop

QTP Script for connecting to Oracle.

Option Explicit
Dim con, rs
Set con = CreateObject("ADODB.Connection")
Set rs = CreateObject("ADODB.Recordset")
con.Open "provider=oraoledb.1;server=localhost;uid=scott;pwd=tiger;database=testdata"
rs.Open "select * from emp", con
Do While Not rs.EOF
    VbWindow("Form1").VbEdit("val1").Set rs.Fields("v1")
    VbWindow("Form1").VbEdit("val2").Set rs.Fields("v2")
    VbWindow("Form1").VbButton("ADD").Click
    rs.MoveNext
Loop
This is how you connect to a database in QTP and retrieve values from it.

Saturday, October 16, 2010

Vbscript Advanced Tutorial

Vbscript Advanced Tutorial#1



Vbscript Advanced Tutorial#2

VB Script Video Tutorial Series

VBScript Tutorial 1-Overview and Output



VBScript Tutorial 2 - Variables and Arrays



VBScript Tutorial 3 - Conditional Statements



VBScript Tutorial 4 - Loops



VBScript Tutorial 5 - Procedures



VBScript tutorial 6- Replace and Round



VBScript tutorial 7- Return and Random



VBScript tutorial 8- Case, Trim , and Reverse




Saturday, October 9, 2010

Characteristics of Good Testers

Is inquisitive

Has functional/business knowledge

Is detail-oriented

Is open-minded

Has a good personality

Has a technical background, but does not want to be a programmer

Has testing experience

Is a team player

Is flexible

Is self-reliant

Is self-starting

Has a positive attitude

Is logical

Handles stress well

Is a quick thinker

Knows specific tools

Has good common sense

Is politically astute

Has a sense of humor

Understands the software development lifecycle

Levels (Stages) of Test Planning


Levels (Stages) of Test Planning
Test planning can and should occur at several levels or stages. The first plan to consider is the Master Test Plan (MTP), which can be a separate document or could be included as part of the project plan. The purpose of the MTP is to orchestrate testing at all levels. The IEEE Std. 829-1998 Standard for Software Test Documentation identifies the following levels of test: Unit, Integration, System, and Acceptance. Other organizations may use more or fewer than four levels and possibly use different names. Some other levels (or at least other names) that we frequently encounter include beta, alpha, customer acceptance, user acceptance, build, string, and development. In this book, we will use the four levels identified in the IEEE standard and illustrated in Figure 3-1.




Figure 3-1: Levels of Test Planning
Key Point Test planning CAN'T be separated from project planning.
All important test planning issues are also important project planning issues.
The test manager should think of the Master Test Plan as one of his or her major communication channels with all project participants. Test planning is a process that ultimately leads to a document that allows all parties involved in the testing process to proactively decide what the important issues are in testing and how to best deal with these issues. The goal of test planning is not to create a long list of test cases, but rather to deal with the important issues of testing strategy, resource utilization, responsibilities, risks, and priorities.
Key Point Test planning SHOULD be separated from test design.
In test planning, even though the document is important, the process is ultimately more important than the document. Discussing issues of what and how to test early in the project lifecycle can save a lot of time, money, and disagreement later. Case Study 3-1 describes how one company derived a great benefit from their Master Test Plan, even though it was never actually used.
Case Study 3-1: If the Master Test Plan was so great, why didn't they use it?


The "Best" Test Plan We Ever Wrote
I once had a consulting assignment at a major American company where I was supposed to help them create their first ever Master Test Plan. Following up with the client a few months later, the project manager told me that the creation of the Master Test Plan had contributed significantly to the success of the project, but unfortunately they hadn't really followed the plan or kept it up to date. I replied, "Let me get this straight. You didn't use the plan, but you felt that it was a major contributor to your success. Please explain." The project manager told me that when they began to fall behind, they dispensed with much of the project documentation, including the test plan (sound familiar?). But because they created the plan early in the project lifecycle, many testing issues were raised that normally weren't considered until it was too late to take action. The planning process also heightened the awareness of the importance of testing to all of the project participants. Now, I believe that keeping test plans up to date is important, so that's not the purpose of telling you this story. Rather, I'm trying to stress the importance of the testing process, not just the document.
— Rick Craig



Key Point "We should think of planning as a learning process - as mental preparation which improves our understanding of a situation… Planning is thinking before doing."
- Planning, MCDP5 U.S. Marine Corps
Key Point Ike said it best: "The plan is nothing, the planning is everything."
- Dwight D. Eisenhower
In addition to the Master Test Plan, it is often necessary to create detailed or level-specific test plans. On a larger or more complex project, it's often worthwhile to create an Acceptance Test Plan, System Test Plan, Integration Test Plan, Unit Test Plan, and other test plans, depending on the scope of your project. Smaller projects, that is, projects with smaller scope, number of participants, and organizations, may find that they only need one test plan, which will cover all levels of test. Deciding the number and scope of test plans required should be one of the first strategy decisions made in test planning. As the complexity of a testing activity increases, the criticality of having a good Master Test Plan increases exponentially, as illustrated in Figure 3-2.

Figure 3-2: Importance of Test Planning

Saturday, September 25, 2010

QTP Certification - Review your Skills: Q. 21 to 30

Q. 21: Object Spy can be found in __________ menu.

A. Tool

B. Tools

C. Task

D. Tasks
<<<<<< =================== >>>>>>
Q. 22: The ________________ displays the open documents side-by-side.

A. Tile Vertically

B. Tile Horizontally

C. Cascade

D. Tile Cascade
<<<<<< =================== >>>>>>

Q. 23: For opening the Quick Test Professional Help we can use _________

A. F3

B. F5

C. F1

D. F2

<<<<<< =================== >>>>>>

Q. 24: If QTP cannot find any object that matches the description, or if it finds more than one object that matches, QuickTest may use the ___________________ mechanism to identify the object.

A. Ordinal Identifier

B. Index Identifier

C. Smart Identification

D. Assistive Identification


<<<<<< =================== >>>>>>

Q. 25: You can configure the _________, _________, and _________ properties that QuickTest uses to record descriptions of the objects in your application.

A. Mandatory, assistive, and ordinal identifier
B. Mandatory, required, and ordinal identifier
C. Smart, assistive, and ordinal identifier
D. Index, assistive, and ordinal identifier


<<<<<< =================== >>>>>>

Q. 26: The ___________ property set for each test object is created and maintained by QuickTest.

A. Run-Time Object
B. Test Object

C. Smart Identification Object
D. Assistive Object


<<<<<< =================== >>>>>>


Q. 27: You can access and perform ______________ methods using the Object property.


A. Run-Time Object

B. Test Object

C. Smart Identification Object

D. Assistive Object

<<<<<< =================== >>>>>>

Q. 28: You can view or modify the test object property values that are stored with your test in the _______________

A. Information Pane
B. Data Table
C. Information Pane & Data Table Both
D. Object Properties & Object Repository dialog box.


<<<<<< =================== >>>>>>


Q. 29: You can retrieve or modify property values of the test object during the run session by adding _______________ statements in the Keyword View or Expert View.


A. GetROProperty & SetROProperty
B. GetTOProperty & SetTOProperty
C. GetTOProperty & SetROProperty

D. GetROProperty & SetTOProperty

<<<<<< =================== >>>>>>

Q. 30: If the available test object methods or properties for an object do not provide the functionality you need, you can access ______________ of any run-time object using the Object property.

A. The internal methods and properties
B. The mandatory methods and properties
C. The selective methods and properties
D. The assistive methods and properties

Correct Answers to Questions - Q.21 to Q 30 are as under:


QTP Certification - Review your Skills: Q. 11 to 20


Q. 11: Using the Object Spy, you can view
A. The run-time or test object properties and methods of any object in an open application.
B. The run-time or test object properties of any object in an open application.
C. The test object properties and methods of any object in an open application.
D. The run-time object properties and methods of any object in an open application.
<<<<<< =================== >>>>>>
Q. 12: There are ________ object type filters in Object spy dialog box.
A. Two

B. Three

C. Four

D. Five
<<<<<< =================== >>>>>>
Q. 13: In the Object Spy window, in the Properties Tab
A. Copying of Properties and its values is possible with CTRL+C
B. Copying of Properties and its values is possible by right clicking on it and choosing copy.
C. Copying of Properties and its values is possible with both A. and B. methods
D. Copying of Properties and its values is not possible
<<<<<< =================== >>>>>>
Q. 14: In the Object Spy window, in the methods Tab
A. Copying of Methods is possible with CTRL+C
B. Copying of Methods is possible by right clicking on it and choosing copy.
C. Copying of Methods is possible with both A. and B. methods
D. Copying of Methods is not possible
<<<<<< =================== >>>>>>
Q. 15: Object Spy dialog box
A. Can be resized
B. Cannot be resized

<<<<<< =================== >>>>>>

Q. 16: The ___________ are the highest level of the test hierarchy in the Keyword view.
A. Tests
B. Steps

C. Call to Actions

D. Actions
<<<<<< =================== >>>>>>
Q. 17: You can copy and paste or drag and drop actions to move them to a different location within a test
A. True

B. False
<<<<<< =================== >>>>>>
Q. 18: You can print the contents of the Keyword View to your Windows default printer (and even preview the contents prior to printing).
A. True

B. False
<<<<<< =================== >>>>>>
Q. 19: In the Keyword View, you can also view properties for items such as checkpoints.
A. True

B. False
<<<<<< =================== >>>>>>
Q. 20: In the step Browser > Page > Edit > Set "Genius", identify container object(s)
A. Browser

B. Edit

C. Page

D. Both Browser & Page


Correct Answers to - Q.11 to Q 20 are as under:

QTP Certification - Review your Skills: Q. 1 to 10

Objective Type / Multiple Choice Questions on QTP - QuickTest Professional under the Series

(Quickly Review Your QTP Skills before appearing for HP Certification Exam)

Set of 10 Questions

Q. 1: You can manage the test actions and the test or function library steps using the _________ menu commands

A. File

B. Edit

C. Automation

D. Tools

<<<<<< =================== >>>>>>

Q. 2: To expand all the steps in the Keyword View, which option would you use from the View menu?
A. Expand

B. Expand All

C. Expand Items

D. Expand Rows

<<<<<< =================== >>>>>>

Q. 3: What is the shortcut key to open a Step Generator?

A. F2

B. F5

C. F6

D. F7

<<<<<< =================== >>>>>>

Q. 4: The Function Definition Generator is found in which menu option?

A. File

B. Tools

C. Insert

D. View

<<<<<< =================== >>>>>>

Q. 5: The shortcut keys for Record, Stop and Run respectively are

A. F3, F4, F5

B. F4, F3, F5

C. F4, F5, F3

D. F3, F5, F4

<<<<<< =================== >>>>>>

Q. 6: What is the shortcut key for opening an Object Repository?

A. Alt+R

B. Shift+R

C. Ctrl+R

D. Shift+O+R

<<<<<< =================== >>>>>>

Q. 7: Shortcut key to Insert/Remove a breakpoint is

A. F9

B. F8

C. Ctrl+b

D. Shift+b

<<<<<< =================== >>>>>>

Q. 8: The __________ runs only the current line of the script. If the current line calls a method, the method is displayed in the view but is not performed.


A. Step over

B. Step out

C. Step into

D. Step Till


<<<<<< =================== >>>>>>

Q. 9: The ________ runs only the current line of the script. When the current line calls a method, the method is performed in its entirety, but is not displayed in the view.

A. Step Over

B. Step Out

C. Step Into

D. Step Till

<<<<<< =================== >>>>>>

Q. 10: What is the shortcut key to clear all Breakpoints?

A. Ctrl+Shift+F9

B. Shift+Ctrl+F9

C. Alt+Shift+F9

D. Alt+Ctrl+F9

Essential Elements of Testing Web Applications

Today everyone depends upon websites for business, education, and trading, and it sometimes seems that no work is possible without the internet. Many different types of users connect to websites, each needing different information, so a website should respond according to user requirements. At the same time, the correct behaviour of sites has become crucial to the success of businesses and organizations, and sites should therefore be tested thoroughly and frequently.

Here we discuss various methods to test a website. Testing a website is not an easy job, since we have to test not only the client side but also the server side. With this approach we can test a website thoroughly, with a minimum number of errors escaping.


Introduction to Web Testing:
The client end of the system is represented by a browser, which connects to the website server via the Internet. The centerpiece of all web applications is a relational database which stores dynamic content. A transaction server controls the interactions between the database and other servers (often called "application servers"). The administration function handles data updates and database administration.



According to the above Architecture of Web Applications, It is evident that we need to conduct the following tests to ensure the suitability of web applications.

1) What are the expected loads on the server, and what kind of performance is required under such loads? This may include web server response time and database query response times.

2) What kind of browsers will be used?

3) What kinds of connection speeds will they have?

4) Are they intra-organization (thus with likely high connection speeds and similar browsers) or Internet-wide (thus with a wide variety of connection speeds and browser types)?

5) What kind of performance is expected on the client side (e.g., how fast should pages appear, how fast should animations, applets, etc. load and run)?


There are many possible terms for the web application development life cycle, including the spiral life cycle or some form of iterative life cycle. A more cynical way to describe the most commonly observed approach is as unstructured development, similar to the early days of software development before software engineering techniques were introduced. The "maintenance phase" often fills the role of adding missed features and fixing problems.



We need to have ready answers to the following questions:

1) Will down time for server and content maintenance / upgrades be allowed? How much?

2) What kinds of security (firewalls, encryptions, passwords, etc.) will be required and what is it expected to do? How can it be tested?

3) How reliable are the Internet connections? And how does that affect backup or redundant connection requirements and testing?

4) What processes will be required to manage updates to the website's content, and what are the requirements for maintaining, tracking, and controlling page content, graphics, links, etc.?

5) Will there be any standards or requirements for page appearance and/or graphics throughout a site or parts of a site?

6) How will internal and external links be validated and updated? How often?

7) How many times will the user log in, and does that require testing?

8) How are CGI programs, Applets, Javascripts, ActiveX components, etc. to be maintained, tracked, controlled and tested?

Functional or Black Box Testing of Web Applications

Web Browser-Page Testing:
This type of test covers the objects and code that execute within the browser but do not exercise the server-based components - for example, JavaScript and VBScript code within HTML that performs rollovers and other special effects. This type of test also includes field validations that are done at the HTML level. Additionally, browser-page tests include Java applets that implement screen functionality or graphical output.


For web browser testing we can create test cases using the following guidelines (a small scripted sketch follows the list):

1) If all mandatory fields on the form are not filled, a message should be displayed when the submit button is pressed.

2) Sensitive data such as a full credit card number or social security number (SSN) should not be displayed in full.

3) Passwords should be masked.

4) The user must log in before accessing sensitive information.

5) The limits of all fields on the form should be checked.
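As a rough sketch, guidelines 1 and 5 could be scripted in QTP along the following lines. The Browser/Page/object names ("Login", "username", "error", "Submit") are assumptions for illustration only:

' Guideline 1: submit with a mandatory field left empty (hypothetical object names)
Browser("Login").Page("Login").WebEdit("username").Set ""
Browser("Login").Page("Login").WebButton("Submit").Click
If Browser("Login").Page("Login").WebElement("error").Exist(3) Then
    Reporter.ReportEvent micPass, "Mandatory field check", "Validation message shown"
Else
    Reporter.ReportEvent micFail, "Mandatory field check", "Empty form was accepted"
End If

' Guideline 5: push more characters into a field than its stated limit
Browser("Login").Page("Login").WebEdit("username").Set String(300, "x")
Browser("Login").Page("Login").WebButton("Submit").Click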


Transaction Testing:
In this testing, test cases are designed to confirm that information entered by the user at the web page level makes it to the database in the proper form, and that when database calls are made from the web page, the proper data is returned to the user.
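A hedged sketch of such a transaction test, reusing the ADO pattern from the database post above: after a value is submitted through the UI, the database is queried to confirm it arrived. The connection string, page objects, table, and column names are assumptions for illustration:

' Transaction test sketch (hypothetical object names, table, and connection string)
Dim con, rs
Set con = CreateObject("ADODB.Connection")
con.Open "provider=sqloledb.1;server=localhost;uid=sa;pwd=;database=testdata"

Browser("App").Page("Order").WebEdit("quantity").Set "5"
Browser("App").Page("Order").WebButton("Save").Click

Set rs = con.Execute("select quantity from orders where order_id = 1001")
If Not rs.EOF Then
    If CStr(rs.Fields("quantity")) = "5" Then
        Reporter.ReportEvent micPass, "Transaction test", "Value reached the database"
    Else
        Reporter.ReportEvent micFail, "Transaction test", "Wrong value found in the database"
    End If
Else
    Reporter.ReportEvent micFail, "Transaction test", "No matching row found in the database"
End If
rs.Close : con.Close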


Conclusion:
For trouble-free operation of a website we must follow both non-functional and functional testing methods. With these methods one can test the performance, security, reliability, user interfaces etc. which are the critical issues related to any website.

Non Functional Testing of Web Applications

Non-functional or white box testing of web applications involves one or more of the following seven types of testing:

1) Configuration Testing

2) Usability Testing

3) Performance Testing

4) Scalability Testing

5) Security Testing

6) Recoverability Testing

7) Reliability Testing


Let us discuss each of these testing types in detail.


1) Configuration Testing: This type of test includes

a) The operating system platforms used.

b) The type of network connection.

c) Internet service provider type.

d) Browser used (including version).


The real work for this type of test is ensuring that the requirements and assumptions are understood by the development team, and that test environments with those choices are put in place to properly test it.


2) Usability Testing:

For usability testing, there are standards and guidelines that have been established throughout the industry. End users tend to accept sites readily when these standards are followed, but the designer shouldn't rely on the standards completely.

While following these standards and guidelines when building the website, the designer should also consider learnability, understandability, and operability, so that users can use the website easily.


3) Performance Testing: Performance testing involves testing a program for timely responses.

The time needed to complete an action is usually benchmarked, or compared, against either the time to perform a similar action in a previous version of the same program or against the time to perform the identical action in a similar program. The time to open a new file in one application would be compared against the time to open a new file in previous versions of that same application, as well as the time to open a new file in the competing application. When conducting performance testing, also consider the file size.


In this testing the designer should also consider the loading time of the web page under higher transaction volumes. For example, a requirement can be as simple as "a web page loads in less than eight seconds," or as complex as requiring the system to handle 10,000 transactions per minute while still loading a web page within eight seconds.


Another variant of performance testing is load testing. Load testing for a web application can be thought of as multi-user performance testing, where you want to test for performance slow-downs that occur as additional users use the application. The key difference in conducting performance testing of a web application versus a desktop application is that the web application has many physical points where slow-downs can occur. The bottlenecks may be at the web server, the application server, or at the database server, and pinpointing their root causes can be extremely difficult.


We can create performance test cases by the following steps:

a) Identify the software processes that directly influence the overall performance of the system.

b) For each of the identified processes, identify only the essential input parameters that influence system performance.

c) Create usage scenarios by determining realistic values for the parameters based on past use. Include both average and heavy workload scenarios. Determine the window of observation at this time.

d) If there is no historical data to base the parameter values on, use estimates based on requirements, an earlier version, or similar systems.

e) If there is a parameter where the estimated values form a range, select values that are likely to reveal useful information about the performance of the system. Each value should be made into a separate test case.

Performance testing can be done through the "window" of the browser, or directly on the server. If done on the server, some of the performance time that the browser takes is not accounted for.
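As a rough illustration of measuring through the browser window, a minimal QTP/VBScript sketch follows; the object names, URL, and the eight-second limit (taken from the example requirement above) are assumptions for illustration:

' Timing a page load through the browser window (hypothetical object names and URL)
Dim startTime, elapsed
startTime = Timer
Browser("App").Navigate "http://localhost/app/home"
Browser("App").Page("Home").Sync
elapsed = Timer - startTime

If elapsed <= 8 Then
    Reporter.ReportEvent micPass, "Page load time", "Loaded in " & elapsed & " seconds"
Else
    Reporter.ReportEvent micFail, "Page load time", "Took " & elapsed & " seconds (limit 8)"
End If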


4) Scalability Testing:

The term "scalability" can be defined as a web application's ability to sustain its required number of simultaneous users and/or transactions, while maintaining adequate response times to its end users.

When testing scalability, configuration of the server under test is critical. All logging levels, server timeouts, etc. need to be configured. In an ideal situation, all of the configuration files should be simply copied from test environment to the production environment, with only minor changes to the global variables.

In order to test scalability, the web traffic loads must be determined to know what the threshold requirement for scalability should be. To do this, use existing traffic levels if there is an existing website, or choose a representative algorithm (exponential, constant, Poisson) to simulate how the user "load" enters the system.


5) Security Testing:

Probably the most critical criterion for a web application is that of security. The need to regulate access to information, to verify user identities, and to encrypt confidential information is of paramount importance. Credit card information, medical information, financial information, and corporate information must all be protected from persons ranging from the casual visitor to the determined cracker. There are many layers of security, from password-based security to digital certificates, each of which has its pros and cons.

We can create security test cases by the following steps (a small scripted sketch of step c follows the list):

a) The web server should be set up so that unauthorized users cannot browse directories or the log files in which the website stores its data.

b) Early in the project, encourage developers to use the POST command wherever possible because the POST command is used for large data.

c) When testing, check URLs to ensure that there are no "information leaks" due to sensitive information being placed in the URL while using a GET command.

d) A cookie is a text file placed on a website visitor's system to identify the user. The cookie is retrieved when the user revisits the site at a later time. Users can control whether or not they allow cookies. If the user does not accept cookies, will the site still work?

e) Is sensitive information stored in the cookie? If multiple people use a workstation, the second person may be able to read the sensitive information saved from the first person's visit. Information in a cookie should be encoded or encrypted.
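As a rough illustration of check (c), a minimal QTP/VBScript sketch follows; the browser object name and the parameter names searched for are assumptions:

' Check that sensitive values do not leak into the URL when a GET command is used (hypothetical names)
Dim currentURL
currentURL = Browser("App").GetROProperty("URL")
If InStr(currentURL, "password=") > 0 Or InStr(currentURL, "ssn=") > 0 Then
    Reporter.ReportEvent micFail, "URL leak check", "Sensitive data found in URL: " & currentURL
Else
    Reporter.ReportEvent micPass, "URL leak check", "No sensitive data found in the URL"
End If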


6) Recoverability Testing:

A website should have a backup or redundant server to which traffic is rerouted when the primary server fails, and the rerouting mechanism must be tested. If a user finds your service unavailable for an excessive period of time, the user will switch over to or browse a competitor's website. If the site can't recover quickly, then inform the user when it will be available and functional.


7) Reliability Testing:

Reliability testing is done to evaluate the product's ability to perform its required functions and respond under stated conditions for a specified period of time.

For example, users trust an online banking web application (service) to complete all of their banking transactions; one would expect the results to be consistent, up to date, and in line with the user's requirements.

An introduction to Control Flow Testing – A Black Box Testing Technique

Behavioral control-flow testing was introduced as the fundamental model of black-box testing. The control-flow graph is the basic model for the test design.

Control-flow behavioral testing is a fundamental testing technique that is applicable to the majority of software programs and is quite effective for them. It is generally applicable to comparatively smaller programs, or to smaller segments of bigger programs.

The Technique of Test Design & Execution

Test design begins by creating a behavioral control-flow graph model from requirements documents such as specifications. The list notation is generally more convenient than a graphical representation, but small graphs are an aid to model design.

Test design and execution consists of the following steps:

Step 1: Examine the requirements and validate: Examine the requirements and analyze them for operationally satisfactory completeness and self-consistency. Confirm that the specification correctly reflects the requirements, and correct the specification if it doesn't.


Step 2: Rewrite the specification: Rewrite the specification using pseudo-code as a sequence of short sentences. The use of a semiformal language like pseudo-code helps to assure that things will be stated unambiguously. Although this looks like programming, it is not programming - it is modeling. We can use the link list notation because it's easier.

We need to pay special attention to predicates. Break up compound predicates into equivalent sequences of simple predicates. Watch for selector nodes and document them as simple lists. Remove any "ANDs" that are not part of predicates - break the sentence in half instead.


Step 3: Number the sentences uniquely. These numbers will be the node names later.


Step 4: Build the model. We can program the model in an actual programming language and use the programmed model as an aid to test design (a small sketch follows the tips below).

A few tips for effective modeling:


a) Compound predicates should be avoided in the model and spelled out (e.g., replaced by equivalent graphs) so as not to hide essential complexity.

b) Use a truth table instead of a graph to model compound predicates with more than three component predicates.

c) Segment the model into pieces that start and end with a single node and note which predicates are correlated with which in all other segments.

d) Build the test paths as combinations of paths in the segments, eliminating unachievable paths as we go ahead.

e) Use contradictions between correlated predicates to rule out combinations wholesale.
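To make Step 4 concrete, here is a minimal sketch of a programmed model in VBScript. The requirement ("apply a discount when the quantity is above 10 and the customer is a member") is invented purely for illustration; the node numbers correspond to the numbered sentences of the pseudo-code model, and the compound predicate is already broken into simple predicates as recommended above.

' Behavioral model programmed in VBScript (illustrative requirement, hypothetical names)
Function DiscountModel(quantity, isMember)        ' node 1: entry
    If quantity > 10 Then                         ' node 2: simple predicate
        If isMember Then                          ' node 3: simple predicate
            DiscountModel = "apply discount"      ' node 4
        Else
            DiscountModel = "no discount"         ' node 5
        End If
    Else
        DiscountModel = "no discount"             ' node 6
    End If
End Function                                      ' node 7: exit

' Paths selected for 100 percent link coverage; the model acts as an oracle for the real software
MsgBox DiscountModel(12, True)    ' path 1-2-3-4-7
MsgBox DiscountModel(12, False)   ' path 1-2-3-5-7
MsgBox DiscountModel(5, False)    ' path 1-2-6-7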

Step 5: Verify the model, since the tester's work is as bug-prone as that of the programmers.

Step 6: Select the test paths.

A few tips for effective path selection:

a) Pick enough paths through the model to assure 100 percent link coverage. Don't worry about having too many tests.

b) Start by picking the obvious paths that relate directly to the requirements and see if we can achieve the coverage that way.

c) Augment these tests by as many paths as needed to guarantee 100 percent link coverage.


Step 7: Sensitize the test paths by interpreting the predicates along each path in terms of input values. That is, select input values that would cause the software to do the equivalent of traversing our selected paths if there were no bugs.

The interpreted predicates yield a set of conditions or equations or inequalities such that any solution to that set of inequalities will cause the selected path to be traversed. If sensitization is not obvious, check the work for a specification or model bug before investing time in equation solving.

Step 8: Predict and record the expected outcome for each test.

Step 9: Define the validation criterion for each test.

Step 10: Run the tests.

Step 11: Confirm the outcomes.

Step 12: Confirm the path.

Assumptions about bugs targeted by Control Flow Testing:


1) The majority of bugs show up as control flow errors or misbehavior.

2) Bugs directly affect control flow predicates, or the control flow itself might be incorrect.

Pros & Cons of Control Flow Testing:

1) These days we use structured programming languages, so control flow bugs are reduced dramatically. In older applications built with assembly language, COBOL, etc., such control flow bugs were quite common.

2) Control flow testing is not the best technique for computational bugs: bugs that do not affect the control flow may not be detected by it. We can use data flow testing and domain testing to unearth such bugs.

3) We won't be able to detect a missing requirement unless our model includes it even though it escaped the programmer's attention.


4) We won't be able to detect unwanted features that happened to get included in the model but were not present in the requirements.


5) If the programmers have already done thorough unit testing, there is only a remote likelihood that control flow testing will detect new bugs.

6) If the same person has written the program and the test model, there is only a remote chance of detecting missing features and paths. If someone else designs the control flow tests, more effort has to be put in to detect paths and features that were left out of the program.

7) It is unlikely for software to be correct merely by coincidence, but such an eventuality defeats the control flow testing technique unless we have verified all intermediate calculations and predicate values.


Automation of Control Flow Testing Process:


As of now commercial tools directly supporting the behavioral control-flow testing are not available, but many tools are available that support structural control-flow testing. We can use these tools by actually programming our models in a supported programming language, like C, Pascal, or Basic.

If we have created a properly detailed graph model, we have done most of the work required to express the semiformal model as a program.

It may be borne in mind that programming a model is definitely not the same as programming the real thing. The major difference is that we don't have to be concerned with all the real-life details such as database access, I/O, operating system interfaces, and environment issues - the places where the real bugs are born.

The model program need not include many details: it doesn't have to work on the target platform, it doesn't have to be efficient, and, most important of all, it doesn't have to be integrated with the remaining software.

Then what is the use of this model, when running it is not at all the same as running tests on the actual program, and how should we debug our tests? The model is used as a tool to help design a covering set of tests, to help pick and sensitize paths, and to act as an oracle for the real software. If we can create a running model, then we can also use commercial test tools on it, which could make our job much easier.

Friday, September 24, 2010

Reading XML sibling Nodes

'Load the XML report and enable XPath queries
Const XMLDataFile = "C:\Documents and Settings\kalyani.g\Desktop\detailedReport.xml"
Set xmlDoc = CreateObject("Microsoft.XMLDOM")
xmlDoc.Async = False
xmlDoc.Load(XMLDataFile)
xmlDoc.SetProperty "SelectionLanguage", "XPath"

'Select every Measure node whose Type child element is 'Page data[kB]'
Set NodeList = xmlDoc.selectNodes("//Measure[Type='Page data[kB]']")

'Read an attribute of the first match, then the text of the Sum element inside the same Measure node
msgbox(NodeList.Item(0).getAttribute("name"))
msgbox(NodeList.Item(0).getElementsByTagName("Sum")(0).text)

Wednesday, September 15, 2010

QTP Quick Reference Card

VSTS - Adding Different Types of Data Sources to a Web Test


Data binding is one of the more useful features of a web test. It allows you to have a different set of data used for each iteration of a web test. For example, suppose you have a list of users you would like to simulate using your web application. You could add a data source which contains the list of users and passwords. Then you would bind this data source to the user name and password fields on your login request. Now each iteration of the web test will simulate a different user. The following are instructions for adding some of the most common types of data sources to a web test.
Adding a CSV Data Source
Follow these steps to add a CSV file as a data source

1) Create a directory to hold your CSV file
2) Place your CSV file in the directory and make sure the file has a header row
3) In the Web Test Editor click the Add Data Source button on the toolbar.
4) Select “Microsoft Jet 4.0 OLE DB Provider” from the OLE DB Provider drop down.
5) In the server or file name text box, enter the directory that the CSV file exists in. Do not enter the file name; just enter the directory where the file is located.
6) Click the advanced button.
7) Click on Extended Properties.
8) Set the value equal to: text
9) Click Ok to close the advanced editor.
10) Click Ok to close the connection property dialog.
11) Click the check box for the files with the data you would like to use for this test case then click Ok.
12) Now the text file is ready to use as a data source.



Adding an Access Database as a Data Source
After setting up the database, perform the following steps to add the data source.

1) In the Web Test Editor click the Add Data Source button on the toolbar.
2) Select “Microsoft Jet 4.0 OLE DB Provider” from the OLE DB Provider drop down.
3) In the server or file name text box, enter the directory and file name of the access database. i.e. c:\temp\databinding.mdb
4) Click Ok to close the connection property dialog.
5) Choose the tables you would like to include for data binding, then click Ok. Now the Access database is ready to use as a data source.


Adding a SQL Server Database as a Data Source
After setting up the database, perform the following steps to add the data source.
1) In the Web Test Editor click the Add Data Source button on the toolbar.
2) Select “Microsoft OLE DB Provider for SQL Server” from the OLE DB Provider drop down.
3) In the server or file name text box, enter the database server
4) Enter the username and password or select the “Use Windows NT Integrated Security” option.
5) If you entered a username and password check the “Allow saving password” checkbox.
6) Choose the database name from the dropdown list.
7) Click Ok to close the editor.
8) Choose the tables you would like to include for data binding, then click Ok. Now the SQL Server database is ready to use as a data source.


Adding an Excel Spreadsheet as a Data Source
One Excel workbook can define multiple tables that can be used for data binding. In order to use an Excel spreadsheet as a data source, you need to do the following.

1) Create an excel worksheet.
2) The first row of your table should be column headers. You can have multiple tables on one work sheet or spread the tables across multiple worksheets. The following steps need to be done for each table which will be used.
3) Highlight the entire table including the column headers.
4) On the Insert menu, point to Name and then click Define.
5) Type a name for this selection and click Ok.
6) This process needs to be done for each table in the workbook that will be used for data binding.
7) When you are done, save the workbook and exit Excel.
Note: If you add more rows to a table after you define it, you will need to update the definition by doing this process again. Otherwise the new data will not be available for testing.


The next step is to add the data source to the test case. Follow these steps for that process

1) In the Web Test Editor click the Add Data Source button on the toolbar.
2) Select “Microsoft Jet 4.0 OLE DB Provider” from the OLE DB Provider drop down.
3) In the server or file name text box, enter the directory and file name of the excel spreadsheet. i.e. c:\temp\book1.xls
4) Click the advanced button.
5) Click on Extended Properties.
6) Set the value equal to: Excel 8.0
7) Click Ok to close the advanced editor.
8) Click Ok to close the connection property dialog.
9) Choose the tables you would like to include for data binding, then click Ok. Note: The worksheet names appear in this list with a $ after them. Do not use these for table data; if you try to use them, the test will hang. You need to select the name you gave the table in your worksheet.
10) Now the Excel file is ready to use as a data source.


Additional Considerations when adding a Data Source
If you are going to be executing a web test on a Controller/Agent setup, you need to consider the location of the data source before creating the connection string. The data for a data source is loaded on each agent machine. So if you create a data source and set the location to c:\test\databinding.mdb because this is where it is located on the client machine, the agent will also expect the MDB file to be located in the same location on the agent machine. There are 2 options for handling this problem:

1) Place your data sources on a network share that each agent can access. So when you create the data source, you would use something like \\machine\datasources\databinding.mdb. Since each agent has access to this location, the connection string will work for each agent.

2) The other option is to copy the data source to the same location on each agent machine. So if you create your connection string with c:\test\databinding.mdb, then you would need to create a c:\test directory on each agent and copy the databinding.mdb file to the directory.


http://blogs.msdn.com/b/slumley/archive/2006/12/15/adding-different-types-of-data-sources-to-a-web-test.aspx

RepositoriesCollection utility

The RepositoriesCollection utility object introduced in QTP 9.2 allows adding an Object Repository dynamically to an Action. The code shown below demonstrates how to add an object repository at run time:
RepositoriesCollection.Add "C:\Test.tsr"
The code above associates the Object Repository with the current action. But when the same code is added to a library file associated with the test, QTP throws the error "Cannot perform the operation because the action is a read-only action."
The reason this error occurs is that library files are loaded first and then the Actions. When the code runs inside the library file, QTP tries to associate the repository with the current action. Since the Actions are not loaded yet, the RepositoriesCollection object associates the repository permanently with the test's first Action. This is a bug in RepositoriesCollection, as it is never supposed to associate the Object Repository permanently.
Fixing the issue
Fixing the issue is pretty simple. We can either move the code from the library file to the Action, or we can put the code in a function inside the library file and later call the function inside the Action:


'Inside the library file
Function LoadOR()
    RepositoriesCollection.Add "C:\Test.tsr"
End Function

'Inside the action
Call LoadOR()

Monday, August 30, 2010

QTP - Performance increase in table lookup functions

Using object properties instead of QTP standard functions can improve the performance of QTP tests significantly. In our case, we often want to look up the location of a certain value in a WebTable. QTP provides several functions to read data from a table, but they are slow when used to iterate the table (two For loops, one iterating the rows, the nested second one iterating the columns of each row).

Example of a conservative way to do this:

Public Function LocateTextInTable(ByRef tbl, textToFind, ByRef row, ByRef col)

    For row = 1 To tbl.RowCount
        For col = 1 To tbl.ColumnCount(row)
            If tbl.GetCellData(row, col) = textToFind Then
                LocateTextInTable = True
                Exit Function
            End If
        Next
    Next

    row = -1 : col = -1
    LocateTextInTable = False
End Function

The crux is this: .GetCellData is not a very fast method, and with a table of 30 rows and 10 columns it is called up to 300 times in the worst-case scenario (text not found).

A faster way to retrieve the data is through the Document Object Model (DOM). This allows you to use the more native properties of an object with the sacrifice of some ease of use.

A table consists of row elements and each row element contains one or more cells. We can iterate them just the same way as we did with the function above:

Public Function LocateTextInTableDOM(ByRef tbl, textToFind, ByRef row, ByRef col)

    Dim objRow, objCell

    row = 1

    For Each objRow In tbl.Object.Rows
        col = 1
        For Each objCell In objRow.Cells
            'Compare the cell's DOM text with the value we are looking for
            If objCell.innerText = textToFind Then
                LocateTextInTableDOM = True
                Exit Function
            End If
            col = col + 1
        Next
        row = row + 1
    Next

    row = -1 : col = -1
    LocateTextInTableDOM = False
End Function

From our experience, this will increase the performance of the function by a factor of 10.
But be aware, there is one big assumption: This function assumes that the row objects (and cell objects) are perfectly iterated from the first row to the last row and in this exact order. Although a For…Each construct cannot guarantee this behaviour, we never encountered an incorrect location.
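A brief usage sketch of the DOM-based function follows; the Browser/Page/WebTable names and the searched value are assumptions for illustration:

' Usage sketch (hypothetical object names)
Dim oTable, foundRow, foundCol
Set oTable = Browser("Orders").Page("Orders").WebTable("results")

If LocateTextInTableDOM(oTable, "IBM", foundRow, foundCol) Then
    Reporter.ReportEvent micPass, "Table lookup", "Found at row " & foundRow & ", column " & foundCol
Else
    Reporter.ReportEvent micFail, "Table lookup", "'IBM' not found in the table"
End If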

Seven Personal Qualities Found In A Good Leader:

1. A good leader has an exemplary character. It is of utmost importance that a leader is trustworthy to lead others. A leader needs to be trusted and be known to live their life with honesty and integrity. A good leader “walks the talk” and in doing so earns the right to have responsibility for others. True authority is born from respect for the good character and trustworthiness of the person who leads.

2. A good leader is enthusiastic about their work or cause and also about their role as leader. People will respond more openly to a person of passion and dedication. Leaders need to be able to be a source of inspiration, and be a motivator towards the required action or cause. Although the responsibilities and roles of a leader may be different, the leader needs to be seen to be part of the team working towards the goal. This kind of leader will not be afraid to roll up their sleeves and get dirty.

3. A good leader is confident. In order to lead and set direction a leader needs to appear confident as a person and in the leadership role. Such a person inspires confidence in others and draws out the trust and best efforts of the team to complete the task well. A leader who conveys confidence towards the proposed objective inspires the best effort from team members.

4. A leader also needs to function in an orderly and purposeful manner in situations of uncertainty. People look to the leader during times of uncertainty and unfamiliarity and find reassurance and security when the leader portrays confidence and a positive demeanor.

5. Good leaders are tolerant of ambiguity and remain calm, composed and steadfast to the main purpose. Storms, emotions, and crises come and go and a good leader takes these as part of the journey and keeps a cool head.

6. A good leader, as well as keeping the main goal in focus, is able to think analytically. Not only does a good leader view a situation as a whole, but is able to break it down into sub-parts for closer inspection. Not only is the goal in view, but a good leader can break it down into manageable steps and make progress towards it.

7. A good leader is committed to excellence. Second best does not lead to success. The good leader not only maintains high standards, but also is proactive in raising the bar in order to achieve excellence in all areas.


Saturday, February 6, 2010

SQL: VIEWS




A view is, in essence, a virtual table. It does not physically exist. Rather, it is created by a query joining one or more tables.

Creating a VIEW

The syntax for creating a VIEW is:

CREATE VIEW view_name AS

SELECT columns

FROM table

WHERE predicates;



For example:

CREATE VIEW sup_orders AS

SELECT suppliers.supplier_id, orders.quantity, orders.price

FROM suppliers, orders

WHERE suppliers.supplier_id = orders.supplier_id

and suppliers.supplier_name = 'IBM';

This would create a virtual table based on the result set of the select statement. You can now query the view as follows:

SELECT *

FROM sup_orders;



Updating a VIEW

You can update a VIEW without dropping it by using the following syntax:

CREATE OR REPLACE VIEW view_name AS

SELECT columns

FROM table

WHERE predicates;



For example:


CREATE or REPLACE VIEW sup_orders AS

SELECT suppliers.supplier_id, orders.quantity, orders.price

FROM suppliers, orders

WHERE suppliers.supplier_id = orders.supplier_id

and suppliers.supplier_name = 'Microsoft';



Dropping a VIEW

The syntax for dropping a VIEW is:

DROP VIEW view_name;

For example:

DROP VIEW sup_orders;

SQL Joins Introduction

SQL: Joins


A join is used to combine rows from multiple tables. A join is performed whenever two or more tables are listed in the FROM clause of an SQL statement.

There are different kinds of joins. Let's take a look at a few examples.

Inner Join (simple join)

Chances are, you've already written an SQL statement that uses an inner join. It is the most common type of join. Inner joins return all rows from multiple tables where the join condition is met.

For example,

SELECT suppliers.supplier_id, suppliers.supplier_name, orders.order_date

FROM suppliers, orders

WHERE suppliers.supplier_id = orders.supplier_id;

This SQL statement would return all rows from the suppliers and orders tables where there is a matching supplier_id value in both the suppliers and orders tables.

Let's look at some data to explain how inner joins work:

We have a table called suppliers with two fields (supplier_id and supplier_name).
It contains the following data:

supplier_id supplier_name
10000 IBM
10001 Hewlett Packard
10002 Microsoft
10003 NVIDIA

We have another table called orders with three fields (order_id, supplier_id, and order_date).

It contains the following data:

order_id supplier_id order_date
500125 10000 2003/05/12
500126 10001 2003/05/13

If we run the SQL statement below:

SELECT suppliers.supplier_id, suppliers.supplier_name, orders.order_date

FROM suppliers, orders

WHERE suppliers.supplier_id = orders.supplier_id;


Our result set would look like this:

supplier_id supplier_name order_date
10000 IBM 2003/05/12
10001 Hewlett Packard 2003/05/13

The rows for Microsoft and NVIDIA from the supplier table would be omitted, since the supplier_id's 10002 and 10003 do not exist in both tables.


Outer Join

Another type of join is called an outer join. This type of join returns all rows from one table and only those rows from a secondary table where the joined fields are equal (join condition is met).


For example,

select suppliers.supplier_id, suppliers.supplier_name, orders.order_date

from suppliers, orders

where suppliers.supplier_id = orders.supplier_id(+);

This SQL statement would return all rows from the suppliers table and only those rows from the orders table where the joined fields are equal.

The (+) after the orders.supplier_id field indicates that, if a supplier_id value in the suppliers table does not exist in the orders table, all fields in the orders table will display as <null> in the result set.

The above SQL statement could also be written as follows:

select suppliers.supplier_id, suppliers.supplier_name, orders.order_date

from suppliers, orders

where orders.supplier_id(+) = suppliers.supplier_id;



Let's look at some data to explain how outer joins work:

We have a table called suppliers with two fields (supplier_id and supplier_name).
It contains the following data:

supplier_id supplier_name
10000 IBM
10001 Hewlett Packard
10002 Microsoft
10003 NVIDIA

We have a second table called orders with three fields (order_id, supplier_id, and order_date).

It contains the following data:

order_id supplier_id order_date
500125 10000 2003/05/12
500126 10001 2003/05/13

If we run the SQL statement below:

select suppliers.supplier_id, suppliers.supplier_name, orders.order_date

from suppliers, orders

where suppliers.supplier_id = orders.supplier_id(+);



Our result set would look like this:

supplier_id supplier_name order_date
10000 IBM 2003/05/12
10001 Hewlett Packard 2003/05/13
10002 Microsoft <null>
10003 NVIDIA <null>

The rows for Microsoft and NVIDIA would be included because an outer join was used. However, you will notice that the order_date field for those records contains a <null> value.



Thursday, February 4, 2010

Difference between truncate and delete in mysql

Truncate and Delete are both SQL commands that result in removing table records. So let's list the differences one by one:
Type of Command – Truncate is a DDL command and Delete is a DML command.
Rollback – As mentioned above, Truncate is a DDL command, so the changes made by it are committed automatically and cannot be rolled back, while Delete commands can be rolled back.
Table Structure – When you use the Truncate command, all the rows in the table are deleted and the structure of the table (and its indexes) is recreated. On the contrary, if you use the Delete command, only the desired rows (or all rows) are deleted and the structure remains unchanged.
Syntax – The syntax for both commands is:
Truncate table table_name; #command to truncate a table
Delete from table_name; #command to delete all the records from a table
Practical example -
#creates a table with 2 columns, 1st column is auto incremented
Create table mysqlDemo (id integer not null auto_increment,name varchar(100),PRIMARY KEY(id));

#now insert two records in the table
insert into mysqlDemo(name)values ('sachin');
insert into mysqlDemo(name)values ('digimantra');

#check the records and note their auto_increment values
select * from mysqlDemo;

#Let us try delete it using Delete command
delete from mysqlDemo;

#Now the table is empty, lets insert values from the first row.
insert into mysqlDemo(name)values ('new_sachin');
insert into mysqlDemo(name)values ('new_digimantra');

#check the records and note their auto_increment values
select * from mysqlDemo; #the auto_increment values will continue from the last records, as the table structure is preserved.

#Now let us Truncate the table and re-insert the values.
Truncate table mysqlDemo;
insert into mysqlDemo(name)values ('sachin');
insert into mysqlDemo(name)values ('digimantra');

#check the records and note their auto_increment values
select * from mysqlDemo;

#this time the auto_increment value will start from one, as the table structure is recreated because we used Truncate instead of Delete.
So this is the difference: in short, always remember that the Truncate command recreates the structure of the table and deletes all of its records, whereas the Delete command does not recreate the structure and deletes all or some of the records (as desired) from the table.


Wednesday, February 3, 2010

SQL SERVER – Definition, Comparison and Difference between HAVING and WHERE Clause


HAVING specifies a search condition for a group or an aggregate function used in a SELECT statement.
HAVING can be used only with the SELECT statement. HAVING is typically used in a GROUP BY clause. When GROUP BY is not used, HAVING behaves like a WHERE clause.
A HAVING clause is like a WHERE clause, but applies only to groups as a whole, whereas the WHERE clause applies to individual rows. A query can contain both a WHERE clause and a HAVING clause. The WHERE clause is applied first to the individual rows in the tables. Only the rows that meet the conditions in the WHERE clause are grouped. The HAVING clause is then applied to the rows in the result set. Only the groups that meet the HAVING conditions appear in the query output. You can apply a HAVING clause only to columns that also appear in the GROUP BY clause or in an aggregate function. (Reference: BOL)
Example of HAVING and WHERE in one query:
SELECT titles.pub_id, AVG(titles.price)
FROM titles INNER JOIN publishers
ON titles.pub_id = publishers.pub_id
WHERE publishers.state = 'CA'
GROUP BY titles.pub_id
HAVING AVG(titles.price) > 10
Sometimes you can specify the same set of rows using either a WHERE clause or a HAVING clause. In such cases, one method is not more or less efficient than the other. The optimizer always automatically analyzes each statement you enter and selects an efficient means of executing it. It is best to use the syntax that most clearly describes the desired result. In general, that means eliminating undesired rows in earlier clauses.
Reference : Pinal Dave (http://blog.SQLAuthority.com)

Monday, February 1, 2010

Lesson 4...Bug Life Cycle

Check out this SlideShare Presentation:

Bug Life Cycle

The standard bug life cycle, which I have collected from Bugzilla, is given below.



Generally, what we tend to follow is a more simplified version of the above.

vb script to compare two excel sheets

This is a fantastic piece of code to compare two Excel sheets. Basically this can be treated as a lean project. :)


ExcelFilePath1 = InputBox("Please Enter the Path of first Excel File")

Set fso = CreateObject("Scripting.FileSystemObject")

If (fso.FileExists(ExcelFilePath1) = False) Then
    msgbox ExcelFilePath1 & " doesn't exist."
    wscript.quit
End If

ExcelFilePath2 = InputBox("Please Enter the Path of second Excel File")

If (fso.FileExists(ExcelFilePath2) = False) Then
    msgbox ExcelFilePath2 & " doesn't exist."
    wscript.quit
End If

Set objExcel = CreateObject("Excel.Application")
objExcel.Visible = False

Set objWorkbook1 = objExcel.Workbooks.Open(ExcelFilePath1)
Set objWorkbook2 = objExcel.Workbooks.Open(ExcelFilePath2)

Set objWorksheet1 = objWorkbook1.Worksheets(1)
Set objWorksheet2 = objWorkbook2.Worksheets(1)

'Compare every used cell of the first sheet with the matching cell of the second sheet
For Each cell In objWorksheet1.UsedRange

    If cell.Value <> objWorksheet2.Range(cell.Address).Value Then

        'Highlights in green color if any changes in cells (for the first file)
        cell.Interior.ColorIndex = 4

        'Highlights the same cell in the Second file
        objWorksheet2.Range(cell.Address).Interior.ColorIndex = 4

    Else

        cell.Interior.ColorIndex = 0

    End If

Next

objExcel.DisplayAlerts = False

'Save both workbooks and quit Excel
objWorkbook1.Save
objWorkbook2.Save

objExcel.Quit

Set objExcel = Nothing

msgbox "It is Done dude"
 
 
Source : http://askqtp.blogspot.com

Automated Testing with Agile

Check out this SlideShare Presentation:








Software Testing Basics