Friday, August 5, 2016

IP Spoofing in LoadRunner

What is an IP Address?

Application servers and network devices use IP addresses to identify clients. The application server often caches information about clients coming from the same machine. Network routers try to cache source and destination information to optimize throughput. If many users have the same IP address, both the server and the routers try to optimize. Since Vusers on the same host machine have the same IP address, server and router optimizations do not reflect real-life situations.

LoadRunner's Multiple IP Addresses:

LoadRunner's multiple IP address feature enables Vusers running on a single machine to be identified by many IP addresses. The server and router recognize the Vusers as coming from different machines, and as a result the testing environment is more realistic.

Adding IP Addresses to a Host:

LoadRunner includes an IP Wizard program that you run on each host machine to create multiple IP addresses. You add new IP addresses to a machine once and use the addresses for all scenarios.

The following procedure summarizes how to add new IP addresses to a host:

  • Run the IP Wizard on the host machine to add a specified number of IP addresses. 
  • Restart the host machine. 
  • Update the server’s routing table with the new addresses, if necessary. 
  • Enable this feature from the Controller. 

Using the IP Wizard
The IP Wizard resides on each host machine. You run this process one time to create and save new IP addresses. The new addresses can be a range of addresses defined by the Internet Assigned Numbers Authority (IANA). They are for internal use only and cannot connect to the internet. This range of addresses is the default used by the IP Wizard.
To add new IP addresses to a host machine:

  1. Invoke the IP Wizard from the Load Runner program group.
  2. If you have an existing file with IP address settings, select Load settings from file and choose the file.
  3. If you are defining new settings, select New Settings .
  4. Click Next to proceed to the next step. If you have more than one network card, choose the card to use for IP addresses and click Next.
The optional Web server IP address step enables the IP Wizard to check the server's routing table to see if it requires updating after the new IP addresses are added to the host.

 
  • To check the server’s routing table directly after adding the addresses, enter the server IP address.
  • Click Next to see a list of the machine’s IP address(es). Click Add to define the range of addresses.


IP addresses include two components, a netid and a hostid. The subnet mask determines where the netid portion of the address stops and where the hostid begins. 


  • Select a class that represents the correct subnet mask for the machine's IP addresses.
  • Specify the number of addresses to create. Select Verify that new IP addresses are not already in use to instruct the IP Wizard to check the new addresses. If some addresses are in use, the IP Wizard will only add the addresses not in use.
  • Click OK to proceed.
After the IP Wizard creates the new addresses, the summary dialog box lists all of the IP addresses.



  • Click Finish to exit the IP Wizard; the IP Wizard Summary dialog box is displayed.
  • Note the address of the .bat file and check Reboot now to update the routing tables and initialize the NT device drivers with the new addresses.
  • Click OK .
  • Edit the .bat file by inserting your IP address instead of CLIENT.IP.
  • Update the Web server routing table.
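As a hedged illustration only (the file the IP Wizard generates on your machine will differ), the .bat file typically contains one route command per new address, with CLIENT.IP as the placeholder you replace:

```bat
REM Illustrative sketch - the addresses below are assumptions, not actual
REM IP Wizard output. Run on the Web server so that replies to the new
REM (spoofed) addresses are routed back to the load generator machine.
route ADD 192.168.1.50 MASK 255.255.255.255 CLIENT.IP
route ADD 192.168.1.51 MASK 255.255.255.255 CLIENT.IP
REM After editing, CLIENT.IP becomes the load generator's real IP, e.g.:
REM route ADD 192.168.1.50 MASK 255.255.255.255 192.168.1.10
```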

 

Tuesday, August 2, 2016

Siebel Scripting Challenges in LoadRunner

Load test scripts are difficult to create due to the highly dynamic nature of Siebel applications. Load test scripts typically automate transactions at the protocol level for maximum scalability; however, Siebel requests are very dynamic, and if the recording tool captures hard-coded data values, the script will not play back.
Manual parameterization is often required, which is tedious and time-consuming, and it requires extensive Siebel expertise. Once you get your tests running, the highly distributed Siebel architecture makes performance bottlenecks difficult to identify.
Correlation in Siebel:
Correlating standard HTML pages is not a difficult task: all you usually need to do is look for a value inside an HTML tag and match it with a regular expression. Siebel, however, does not send all of its information in HTML format; instead it uses a format that is hard for humans to read and hard to correlate using regular expressions. See the text below for an example.
@0`1`3`3``0`UC`1`Status`OK`SWEC`16`1`0`ResultSet`0`Return Alerts`<pre><br><font face="verdana" color="red" size=3>   Executive - Transfer call to Executive Specialist Team if applicable<br>or<br>Handle with Care <br> </font></pre><br><font face="verdana" color="black" size=2>   &nbsp&nbsp&nbsp1. Pension Eligible`0`12` Notifications `0`2`0``0`OP`bn` bc`S_BC4`7`0``0`br`0`ArgsArray`34*qrs WMT Contact Toggle FormApplet 1*1`size`1`cr`0`OP`g`bc`S_BC4`type `SWEIRowSelection`2`0``0`OP`en`bc`S_BC4`2`0``0`OP`bn`bc`S_BC2`7`0``0`br`0`ArgsArray`38*qrs WMT Contact Summary SR List pplet1*11*01*01*01*0`size`5`cr`4`OP`g`bc`S_BC2`type` SWEIRowSelection`2`0``0`OP`en`bc`S_BC2`2`0``0`OP`bn`bc `S_BC3`7`0``0`br`0`ArgsArray`29*FIN

While developing Siebel performance test scripts in LoadRunner, we need to make a few script enhancements.
SWEACn ---> Siebel Web Extension Applet count.
This value usually appears in web_url(). Correlate the first occurrence at the start of the script and replace all other occurrences with the correlated parameter name.

SWERowids ---> Siebel Web Extension Rowids
Note that this value is not the same as the one mentioned earlier. It corresponds to the SWEMethod implemented/used in the step, so the ordinal number depends on the SWEMethod in the server response.

SWERowId = rowid to perform the action
SWERowIds = record's parent rowId

SWETS ---> Siebel Web Extension Time Stamp
This can be correlated by using web_reg_save_timestamp() just above the request and passing the saved parameter.
SWEC ---> Siebel Web Extension Click count
This can be handled with an incrementing counter maintained in the script, or correlated using web_reg_save_param().
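A minimal VuGen-style sketch of the correlations above. This fragment runs only inside a LoadRunner script; the parameter names, boundaries, and URL are assumptions for illustration (take the real boundaries from your own recording log, and check the exact web_reg_save_timestamp() signature in the VuGen guide):

```c
/* Save the Siebel timestamp (SWETS) before the request that needs it;
   parameter name is an assumption. */
web_reg_save_timestamp("pSWETS", LAST);

/* Capture SWEACn at its first occurrence; boundaries are illustrative. */
web_reg_save_param("pSWEACn", "LB=SWEACn=", "RB=&", "ORD=1", LAST);

web_url("start.swe",
        "URL=http://siebelserver/callcenter_enu/start.swe", /* hypothetical */
        LAST);

/* Later steps then send {pSWEACn} and {pSWETS} instead of the recorded
   values. SWEC is usually just a click counter incremented per action: */
lr_save_int(++swec_counter, "pSWEC"); /* swec_counter: int from vuser_init */
```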

I know the Siebel correlation library will reduce the scripting effort to some extent, but I would also like to focus on row IDs and the SWEACn and SWEC correlations.

Read "Correlating Siebel-Web Scripts" in the VuGen guide, which will help you import the WebSiebel77Correlation.cor file. This makes the correlation work easier.

Dynamic Record Changing:
Problem: whenever a new record is added or an existing record is modified, the positions of the records change. While recording, we need to select a record (SR) from the list of records, but the selected record's position changes whenever any record is added or modified. This causes a problem on replay because the record position has changed.

In addition, while trying to correlate the record fields (each record has at least 20 fields), the field boundaries change dynamically, and there is no specific pattern to the boundaries.

Example: 1-4611378121*21HealthandWelfare10*Phone Call1*05*Sivakota2*HW7*Inquiry0*4*TEST5*COBRA19*0

If we want to capture Sivakota, for example, its boundaries change dynamically without any specific pattern.

Solution: to handle this, we explicitly capture the entire record and store it in a parameter array.
//*1-4819497423*HRO3*ODM1*15*Sivakota7*Payroll11*Transaction0*5*ALBEE12*

web_reg_save_param("Record", "LB/DIG=*#-#####", "RB/DIG=*#-#####", "ORD=ALL", LAST);

The entire record is captured into the array "Record". We then explicitly match the field values: the code logic takes the first value and calls a StringCheck function; whenever the value is not matched, StringCheck returns a null pointer. Once the value is matched, the matched value is sent as the field input to the server.

Pros: the logic handles dynamic record changes, and script maintenance is low.
Cons: all the field values must be kept statically in a pointer array.

          Ex: char* LastnameArray [] = {"ALBEE","ADKINS","TEST","ADELMUND","Sivakota1"};
          Memory utilization is high.

Siebel Load and Performance Testing

Unlike functional testing, performance testing works on a protocol level, in the case of Siebel that's done by simulating the HTTP requests that the Siebel UI generates. We can use any web testing tool for testing the Siebel Application.
The Siebel Test Automation framework allows you to implement secure access to test automation, in which the SWE requires a password to generate test automation information. This is useful for real-time testing in a production environment, and can be integrated with system monitoring tools.
Load testing can help you ensure that your Siebel application will perform and scale under real user workloads once it's deployed to production. This can help you ensure that it will withstand the expected number of concurrent users while maintaining acceptable performance and response times. It can also help you identify and address critical bottlenecks prior to deployment. Monitoring the components of your Siebel environment during load testing is extremely important.

Stress testing can be performed to test beyond the limits of normal operation and helps you assess the capacity and scalability of your application infrastructure. Automated load testing for Siebel has its own challenges. 

Friday, July 29, 2016

Advantages and Disadvantages of Performance Testing over the Cloud

Performance Testing over the Cloud
Cloud computing is becoming more and more popular and mature over time, and its usage has increased exponentially. Performance engineers set up a copy of the production system in the cloud and deploy load injectors in different geographical locations over the cloud to perform the load test effectively.
Advantages
  • Cloud testing provides the flexibility of deploying the production system on a discrete environment to conveniently test the application
  • It’s extremely simple to fix the defects and quickly configure the changes
  • It reduces the test cost due to its convenient rental models
  • It provides greater test control to simulate required user load and to identify and simulate the bottlenecks
Disadvantages
  • Security and privacy of data is the biggest concern in cloud computing
  • Cloud computing works on-line and completely depends on the network connection speed
  • Complete dependency on Cloud Service Provider for quality of service
  • Although cloud hosting is a lot cheaper in the long run, its initial cost is usually higher than that of traditional technologies
  • As an emerging technology, the cloud can be difficult to upgrade without losing data, although this is a short-term issue

Performance Test Issues and Trouble-Shooting

Many differences are commonly found between the test and production systems, even after proper validation of the test system. A differently configured performance test environment will produce invalid results which can greatly mislead all the stakeholders, and the application itself can fail in production. There can be dozens of reasons why your test environment is not producing the required results; some of them are as follows:
  • Load Injectors overloaded: Check the load injector machines' resource utilization. Quite often load injector machines run short of processor and memory and are unable to simulate the required number of virtual users. Run a small, simple test first and check the resource consumption on these machines before running the detailed test.
  • Insufficient network bandwidth: Network bandwidth plays a vital role when you are conducting the performance test over the WAN. Test results can greatly differ on the basis of available bandwidth. So make sure that sufficient network bandwidth is available for starting the test. Moreover, you need two network interface cards (NICs) when web server and database server are on different layers, one NIC will be facing the clients and the other one will be used for database communication.
  • Improper test data: Improper test data can also create various issues in performance testing. It’s highly possible that a variable is not parameterized and same value is being submitted to the database for every user, which can lead to low processor activity due to artificial locking.

Performance Testing Best Practices on Production System
Although the reasons why it's important to conduct performance testing on the production system have been discussed, there are still many issues and concerns (some of them discussed above) that make companies hesitant to go for it. In this section, we will discuss some best practices that can be adopted to minimize the impact of performance testing on a production system.

Testing During Maintenance Window

Almost all large organizations schedule maintenance windows for their applications, during which users are restricted from interacting with the application. You can coordinate with the responsible teams and plan your performance testing activity during this scheduled downtime without affecting the actual users' experience.

Test before Release

One of the best approaches is to test the application just before making it available to actual users. You can make application performance testing part of your release management plan to ensure that performance tests are always executed before releasing the application.

Test during Off-Hours or Off-Days

Conduct the performance testing during the off-hours of off-days if neither of the above two options is available. The minimum number of actual users is affected when testing on this schedule. It not only helps minimize the impact of testing on real users' activities but also helps in identifying the root causes of bottlenecks. The most suitable time for this approach is usually midnight on Saturday or Sunday.

Test Read-only Transactions

Many companies don’t prefer to do any testing activity on their production system due to the fear that the test data might get mixed with the actual applications users’ data. Especially in case of business critical applications, companies are not willing to take even the minor risks. That is why production database is almost never used in testing and even if it’s used, it’s used only for the read-only operations. These simple transactions don’t affect the application data but can reveal important performance bottlenecks.

Increase Load Gradually

One approach to minimize the impact of performance testing on real users is to increase the number of simulated users gradually, as long as the real users' transactions stay within the acceptable threshold. We have mentioned above that performance testing is not only about breaking the system but also about finding the application's behavior under normal conditions. Run a test and increase the load gradually while users' response times remain within the acceptable range and they are able to complete their transactions successfully. Then analyze the test results, fix the bottlenecks, and re-test. Over multiple iterations, you can thoroughly test an application for most of its bottlenecks without seriously impacting the real users; they will experience a slower application but will still be able to complete their transactions.

Careful Monitoring and Continuous Communication during Test Execution

The performance testing approach and its expected outcomes along with the involved risks should be clearly communicated to all the stakeholders. Moreover, you need to be very pro-active while testing on the production system and all the stakeholders should be carefully monitoring the test and test should be stopped immediately if and when it affects the actual users beyond their acceptable threshold.

Performance Test Environment Checklist

We all know about the importance of having test environment similar to the production system. Once we have setup the performance test environment, we can get an initial idea of the test environment state by comparing it with production environment based on the following factors:

  • Number of Servers: Number of physical and virtual servers
  • Load Balancing Strategy: The type of load balancing mechanism is in use
  • Hardware Resources: CPUs count and type, RAM capacity, Number and type of NICs
  • Software Resources: Standard application build apart from components of the AUT
  • Application Components: Application components description which needs to be deployed on the server
  • External Links: Links to third party application and other internal system components

Cloud Performance Testing

Benefits of Performance Testing in the Cloud

All levels of testing could be performed in cloud infrastructure, but performance testing benefits greatly from cloud environments. 

Flexibility

Different levels of tests can be executed on discrete environments at the convenience of an enterprise. Performance testers no longer have to wait until the end of the testing phase in order to move to a production-like environment for their performance and stress tests. Instead such an environment can be brought into action at will. 

Simplicity

The cloud model provides a new level of simplicity in the form of bug fixing environments that can be launched as quickly as the configuration can be put in place. 

Comprehensive Testing

End-to-end tests for more generic processes can be performed in the cloud. All the necessary components can be published in the cloud to create the complete chain of systems. In this manner the overall business process can be tested.

Cost Reduction

Cloud environments could be enabled and disabled at will, reducing the cost of environmental management. Cost reduction is the major factor influencing companies to choose Cloud. As per IDC survey reports, economic benefits are the key drivers of cloud adoption. 

Cloud Testing leverages the cloud infrastructure, minimizing the unit cost of computing and increasing the efficiency of performance testing. The report on cloud enabled testing service providers reveals that the cost savings usually range from 40% to 70%. 

Small and medium-sized enterprises (SMEs) that cannot afford huge capital expenditures also find cloud enabled performance testing an ideal approach. As there is no need to make upfront payments in infrastructure, Public cloud allows enterprises to shift to a flexible operating expenditure model. 

In case of Private cloud, infrastructure can be deactivated once the testing process is complete. This frees enterprises from incurring expensive operational costs.

Cleaner and Greener Testing

Cloud computing is significantly greener than traditional models, and this holds for the testing process as well. Simply by sharing cloud resources for their test infrastructure, enterprises can use IT resources on demand and eliminate waste. Consumers using cloud infrastructures can minimize energy use and deliver environmental savings in carbon dioxide of around 55%. 

Greater Control

Cloud-based environments can provide greater control over test execution, letting you analyze application performance and find bottlenecks while the tests are running. The cloud model allows test engineers to scale from a few thousand to millions of concurrent users to evaluate breaking points. This gives testers a clear picture of possible runtime errors and prepares enterprises for peak demand times. 

Internal Lab Testing vs. Cloud Testing

So what is the best choice? 
  • Setup an internal copy of production as a test environment and use several computers to generate load internally
  • Setup an internal copy of production as a test environment and use load injectors on the cloud to generate load distributed geographically
  • Setup a copy of production on the cloud as a test environment and use load injectors on the cloud to generate load distributed geographically

We saw that performance testing from the cloud gives you a complete understanding of the final user experience and drastically reduces investment and configuration costs. However, it may not fit all organizations (security, product licenses) and can complicate the analysis of performance bottlenecks (too many variables). 

These choices really depend on the type of application to be tested and on the company's culture and processes. 

A first performance testing run in a simpler lab with smaller loads is still valuable, as it gives an overview of early performance issues. An application which does not pass the lab test needs to be tuned before going to larger-scale testing over the Internet! 

A load testing tool which supports both lab and cloud testing, with the same scripts and use cases across both types of tests, is definitely a winning choice, as it gives you flexibility and scalability across your project. 

Load Testing Scenarios Selection Approaches in Performance Testing

We have discussed above the different principles regarding the selection of your load testing scenarios. In this section, we will discuss different approaches which a performance testing team could follow to effectively select load testing scenarios, and define a mechanism for scenario selection.

Identify All of the AUT's Scenarios

You will select your load test scenarios based on the above mentioned criteria. So it’s necessary to have a complete list of application scenarios before making your choice. Start the activity by developing a complete list of all the features of the AUT.
This approach will make sure that you are not missing even a single application scenario which should be a part of your load test. In a typical E-commerce web application, most common scenarios could be,
  • Browsing the Catalog
  • Product Searching
  • Order Placement
Identification of Scenario Activities
Once you have figured out all the application scenarios, next step is to identify users’ activities within every scenario. This activity will help you to dig more into the AUT and get more application insights to make a wise selection. For example, following activities would be involved in Order Placement scenario of an E-commerce web application:
  • Login to application
  • Browse the product catalog
  • Search for the desired product
  • Select the product and its quantity
  • Add selected product to your shopping cart
  • Validate your payment method
  • Place the order
Scenarios Selection
Comparing the application scenarios (based on the above-mentioned criteria) can be the next action once you have listed the details of all the activities of the AUT's scenarios. You can assign a certain weight to every criterion (based on its importance for load testing), rate every scenario against each criterion, and compare the scenarios' importance based on their aggregate scores. The top-scoring scenarios will be the best candidates for load testing.
Share your selected scenarios with all the application stakeholders and get their approval before formally starting work on them. Load testing is a complex and costly activity, and any missing or needlessly included scenario can not only invalidate your test results but also waste a great deal of money and effort.

Performance testing in the Agile Process

Performance testing is an integral part of Agile processes; it can help your organization develop higher-quality software in less time while reducing development costs. The goal is to test performance early and often in the development effort, and to test functionality and performance in the same sprint. That's because the longer you wait to conduct performance tests, the more expensive it becomes to incorporate changes.
In an Agile project, the definition of "done" should include the completion of performance testing within a sprint. Only when performance testing has been completed can you confidently deliver a successful application to your end users. The best practices for performance testing within a sprint are:

  • Gather all performance-related requirements and address them during system architecture discussions and planning.
  • Work closely with end users and stakeholders to define acceptance criteria for each performance story.
  • Involve performance testers early in the project, in the planning and infrastructure stages.
  • Make performance testers part of the development (sprint) team.
  • Ensure that the performance testers work on test cases and test data preparation while developers are coding for those user stories.
  • Get performance testers to create stubs for any external Web services.
  • Deliver each relevant user story to performance testers as soon as it is signed off by the functional testers.
  • Provide continuous feedback to developers, architects, and system analysts.
  • Share performance test assets across projects and versions.
  • Schedule performance tests for off-hours to increase the utilization of time within the sprint.
Key considerations in Agile performance testing

Service-level objectives
    Service-level objectives (SLOs) drive the planning of performance requirements in Agile environments. Through SLOs, business and IT managers agree on requirements for application throughput, response times, numbers of simultaneously supported users, and other factors that affect the end-user experience. These requirements, which become part of the application backlog, must be met before an application can go live in a production environment.

    Focused performance testing

    SLOs spell out the expected behavior of applications. But in a practical sense, development, test teams, and the business don't always know the exact requirements for an application until they see how it performs in a production environment. For this reason, Agile processes rely on the production availability lifecycle (PAL) to see how an application actually performs in the real world. The feedback from the production environment helps developers and testers focus on specific problem areas of the application, thereby making the best use of resources during short sprint cycles.

    Test data preparation

    The earlier test data preparation takes place, the more time you have for testing. So performance testers should work with stakeholders in the planning stage to prepare tests and test data. This kind of collaboration is one of the keys to getting a lot of work done in a relatively short sprint. Different types of work must happen simultaneously. In this sense, Agile software development is a bit like just-in-time (JIT) manufacturing. You plan carefully for your needs, and then bring in the resources just when you need them.

    Trending

    In an Agile environment, it's important for application owners to see continual improvement in an application over the course of successive sprints. They want to see a favorable trend, where each iteration of the application is better than the last. This makes it all the more important to monitor application performance trends in terms of SLO requirements. Trending reports allow you to give stakeholders regular snapshots of performance, which should ideally show that performance is getting steadily better or at least is not degrading. In addition, by looking at trending reports you do not necessarily have to study the analysis of every test run.

    Reusable and shared testing assets

    To accelerate testing and development work, Agile processes should make use of a repository of reusable and shared testing assets. This repository of test scripts gives everyone on the virtual development team, including contractors working on an outsourced basis, access to the same test assets.
    Among other benefits, the repository provides efficiencies that come with “follow-the-sun” testing. Test scripts can be created over the course of a day in one geography and then made available to testers in another geography, who will run the test during their business day.
    The ability to reuse and share test assets becomes more important with Agile development when the testing cycle is limited. It allows more work to get done within the available time window

    Automated testing

    The use of automated testing tools can speed up the process of performance testing. With the right software in place, you can create a script, make it reusable, and then schedule a test to run in the off hours, when developers are not changing the code you’re testing. This makes it possible to achieve higher levels of software quality in less time.
    Automated testing helps you meet your regression and performance testing objectives within the tight timeframes of a two- to four-week sprint. This becomes even more important when you consider that developers often hold onto their work till 60 percent of the time has passed in a sprint, before handing off the build for testing. That doesn’t leave a lot of time for testing.

    Continual analysis 

    In Agile processes, continual analysis is important.
    Both contributors (“pigs” in scrum terminology) and stakeholders (“chickens” in scrum terminology) need to keep a close eye on the progress of the project, especially when it comes to application functionality and performance. To give them the view they need, performance analysis should be both continual and comprehensive. This ongoing analysis helps pinpoint any problem areas in the application. Analysis takes place all the way down to the daily scrums that include IT infrastructure and performance testers as contributors and application stakeholders. 
    Contributors are active members of the sprint team who participate in daily scrums, which give all stakeholders visibility into the current state of the development effort. When all interested team members know the performance of each sprint, they are in a better position to keep the quality of the entire application high. The sooner problems are found, the sooner they can be fixed.

    Component testing

    In an Agile environment, you will not have an end-to-end application in every sprint. This makes it important to be able to performance test only a portion or a component of an application. Stubbing provides a way to test components. Stubbing simulates parts of an application that are either not written or not available. If an application uses outside, third-party data sources (a common occurrence in a Web 2.0 world), then performance testers and quality assurance (QA) specialists are going to need stubs because they cannot add load to third-party production servers. By performance testing components inside each sprint, you help ensure that the development effort yields a high-quality application that performs well from end to end.

    Activities Involved in Workload Modeling

    Performance testing is a complex activity as it consists of various phases and each phase has several activities in it. Workload modeling is one of the most important parts of the performance testing activity and it’s not simple by any means. Some of the activities necessary for identifying the performance test workload model are listed below:



    1. Test Objectives Identification

    In any activity, not just performance testing, you put in effort aligned to your objectives. Identifying your test objectives means examining what actions you need to take in order to achieve them. So before we formally start working on any application's performance testing, the first step is to identify its test objectives in detail. For an E-commerce web application, the following are some example performance test objectives:
    • Response Time: A product search should not take more than 3 seconds.
    • Throughput: The application server should have the capacity to handle 500 transactions per second.
    • Resource Utilization: All resources, such as processor and memory utilization, network I/O, and disk I/O, should stay below 70% of their maximum capacity.
    • Maximum User Load: The system should be able to support 1,000 concurrent users while meeting all of the above objectives.
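Objectives like these are most useful when written down as explicit pass/fail thresholds. The sketch below encodes the e-commerce targets above and checks a run's measurements against them; the dictionary keys and sample figures are illustrative assumptions.

```python
# Thresholds taken from the example objectives above (assumed field names).
OBJECTIVES = {
    "search_response_s": 3.0,     # product search under 3 seconds
    "throughput_tps": 500,        # at least 500 transactions per second
    "max_utilization_pct": 70.0,  # CPU, memory, network, disk below 70%
    "concurrent_users": 1000,     # while serving 1,000 concurrent users
}

def evaluate(measured):
    """Return (objective, passed) pairs for one test run's measurements."""
    return [
        ("response time", measured["search_response_s"] <= OBJECTIVES["search_response_s"]),
        ("throughput", measured["throughput_tps"] >= OBJECTIVES["throughput_tps"]),
        ("resource utilization", measured["max_utilization_pct"] < OBJECTIVES["max_utilization_pct"]),
        ("user load", measured["concurrent_users"] >= OBJECTIVES["concurrent_users"]),
    ]
```

A run that searches in 2.4 s at 520 TPS, 65% peak utilization, and 1,000 users would pass all four checks.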

    2. Application Understanding

    A complete understanding of the AUT (application under test) and all its features is the basic step in any testing activity; you cannot thoroughly test an application without it, and performance testing is no exception. Performance testing starts with planning, and planning starts with understanding the application. You explore the application from a performance perspective and try to answer the following questions:
    • How many types of users use this application?
    • What are the business scenarios for each type of user?
    • What is the AUT's current and predicted peak user load, across all user actions, over time?
    • How is the user load expected to grow over time?
    • How quickly will a specific user action reach its peak load?
    • For how long will the peak load continue?
    Performance testing teams can also collaborate with the network team to check the server logs for answers to some of these questions, and can interview the marketing team and application stakeholders for others.
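When server logs are available, even a simple pass over the timestamps answers the load questions above. The sketch below buckets requests by hour, assuming the common Apache/Nginx access-log timestamp format; the sample lines are invented.

```python
import re
from collections import Counter

# Capture "DD/Mon/YYYY:HH" from timestamps like [10/Oct/2023:13:55:36 +0000].
TS = re.compile(r"\[(\d{2}/\w{3}/\d{4}:\d{2})")

def hourly_volume(lines):
    """Count requests per (date, hour) bucket; the largest bucket hints at peak load."""
    hits = Counter()
    for line in lines:
        m = TS.search(line)
        if m:
            hits[m.group(1)] += 1
    return hits

log = [
    '1.2.3.4 - - [10/Oct/2023:13:55:36 +0000] "GET /search HTTP/1.1" 200 512',
    '1.2.3.5 - - [10/Oct/2023:13:58:01 +0000] "GET /cart HTTP/1.1" 200 128',
    '1.2.3.6 - - [10/Oct/2023:14:02:11 +0000] "GET /search HTTP/1.1" 200 512',
]
peak_hour, peak_hits = hourly_volume(log).most_common(1)[0]
```

Run over weeks of logs, the same counts reveal growth trends and how long peaks last.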

    3. Key Scenarios Identification

    It is neither practical nor necessary to simulate all user actions in performance tests, given budget and time constraints. Performance testing teams instead select a limited number of user actions that have the greatest performance impact on the application. We have discussed this topic in detail in one of our previous whitepapers (Load Testing Scenarios Identification). The following are examples of the kinds of scenarios that should be selected when conducting performance tests:
    • Measurable Scenarios: A basic criterion for selecting any user scenario for performance testing is that it should be fully measurable.
    • Most Frequently Accessed Scenarios: Scenarios that users access most often when they browse through the application.
    • Business Critical Scenarios: The application's core scenarios, which contain its business transactions.
    • Resource Intensive Scenarios: User scenarios that consume more resources than typical scenarios.
    • Time Dependent Frequently Accessed Scenarios: Scenarios that are accessed on specific occasions only, but very frequently on those occasions.
    • Stakeholder Concerning Scenarios: Application features the stakeholders are most concerned about, such as the AUT's newly integrated modules.
    Some of the most desirable performance testing scenarios for an e-commerce application could be:
    • Browsing product catalog
    • Creating a user account
    • Searching for a product
    • Login to application
    • Order Placement

    4. Determining Navigation Paths of Key Scenarios

    Once you have identified all the AUT scenarios to include in a performance test, the next step is to figure out all the possible paths a user can take to complete each scenario successfully. An application's users will most likely have different levels of domain and technical expertise, so it is quite natural that they will follow different steps to complete a specific scenario. Performance testing teams identify all the paths users could follow to complete each identified scenario, and also determine the frequency of each path to decide whether it should be included in the performance test. The application's response for the same scenario can vary greatly depending on the navigation path, so it is strongly advised to test all major paths of the selected scenarios under load. The following guidelines can help identify a scenario's navigation paths:
    • Figure out AUT paths that can be used to complete more than one identified scenario and that have a major performance impact.
    • Read the design and user manuals to find all possible paths for each identified scenario.
    • For a production application, check the log files to find the navigation patterns users follow to complete an identified scenario.
    • Explore the application and try to find all possible paths of a scenario yourself.
    • Alternatively, give application access to both new and experienced users, ask them to complete certain scenarios, and observe their actions.
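Once per-session page sequences have been reconstructed (from logs or observation), ranking them by frequency shows which paths matter enough to simulate. A sketch, with invented page names:

```python
from collections import Counter

# Each tuple is one user session's page sequence (illustrative data).
sessions = [
    ("home", "search", "product", "checkout"),
    ("home", "catalog", "product", "checkout"),
    ("home", "search", "product", "checkout"),
    ("home", "search", "product"),  # abandoned before checkout
]

def path_frequencies(sessions):
    """Rank each distinct navigation path by the share of sessions that followed it."""
    counts = Counter(sessions)
    total = sum(counts.values())
    return [(path, n / total) for path, n in counts.most_common()]
```

Paths above some frequency cutoff (say, 5% of sessions) are candidates for the test; the rest can usually be dropped.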

    5. Identify Unique Test Data

    Unfortunately, identifying all possible navigation paths for the selected scenarios does not provide all the information required to simulate them successfully in a performance test. Several pieces of information are still required to model the workload accurately, and preparing the required test data is one of them. You can never achieve accurate test results with improper or insufficient test data. You can develop an initial idea of the required test data by answering the following queries:
    • While navigating a specific path, how much time does a user spend on each page?
    • Which conditions force the user to alter the navigation path?
    • What input data is required for a specific path?
    You will also require a greater amount of test data, and you need to keep an eye on the database state if you wish to run the test effectively. The following are a few considerations to observe when executing a performance test in which multiple navigation paths are tested for the identified scenarios:
    • Make sure you have all the required test data; you will need more of it when testing scenarios across all navigation paths.
    • Avoid using the same test data for multiple users, as doing so will produce invalid results.
    • Periodically check the database status during test execution and make sure it is not overloaded.
    • Include some invalid test data as well, to simulate real user behavior: users sometimes provide invalid values.
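The sketch below follows the considerations above: every virtual user gets unique data, and a small fraction of records is deliberately malformed to mimic real user mistakes. The field names, the 10% invalid ratio, and the fixed seed are assumptions made for illustration.

```python
import random

def build_test_data(n_users, invalid_ratio=0.10, seed=42):
    """Generate one unique record per virtual user; ~invalid_ratio are malformed."""
    rng = random.Random(seed)  # fixed seed so repeated runs use the same data
    rows = []
    for i in range(n_users):
        invalid = rng.random() < invalid_ratio
        rows.append({
            "username": f"vuser_{i:05d}",  # unique per virtual user
            "email": (f"vuser_{i:05d}@example.com" if not invalid
                      else f"vuser_{i:05d}@"),  # malformed on purpose
            "valid": not invalid,
        })
    return rows
```

Exporting such rows to the tool's data file (one row per virtual user) avoids the shared-data problem noted above.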

    6. Relative Load Distribution Across Identified Scenarios

    Now that you have understood the application and identified its key test scenarios, their navigation paths, and the test data required for each, the next step is to figure out how the identified scenarios are distributed within your workload model. In any application, some scenarios are executed more frequently than others, and for some applications scenario execution also depends on the time period. For example, consider an online billing application where bill payments are made only during the first week of the month: for the last three weeks of the month the administrator will use the application mainly for updating accounts and importing billing information, but as the first week starts, the major share of site users will use it for bill payments. So you need to figure out which AUT scenarios will be executed at what time, and what their execution percentage will be in the overall workload. The following techniques can help identify the relative distribution of your identified scenarios:
    • Check the server log files to identify user trends in the application, if it is already in production.
    • Consult the sales and marketing teams to find out which features they believe will be used most.
    • Interview existing and potential clients to find out which features they are most interested in.
    • Alternatively, share a beta release with a small segment of users and study their trends and behavior from the server log files.
    • If none of the above approaches works, use your experience and intuition to answer these questions.
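Once the relative mix is known, it has to be converted into virtual-user counts for the tool. A sketch, using illustrative percentages for the e-commerce scenarios listed earlier:

```python
# Relative scenario mix (shares must sum to 1.0); percentages are assumed.
SCENARIO_MIX = {
    "browse_catalog": 0.40,
    "search_product": 0.30,
    "login": 0.15,
    "create_account": 0.05,
    "place_order": 0.10,
}

def allocate_users(total_users, mix):
    """Split a virtual-user budget across scenarios; rounding leftovers go to
    the heaviest scenario so the counts always sum to the total."""
    alloc = {name: int(total_users * share) for name, share in mix.items()}
    leftover = total_users - sum(alloc.values())
    alloc[max(mix, key=mix.get)] += leftover
    return alloc
```

For the time-dependent case (such as the billing example above), you would simply keep one mix per period and switch between them.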

    7. Identify Target Load Levels

    The last step in completing the workload model, after performing all the activities above, is to identify the normal and peak load levels for every selected scenario, so the application can be tested under both expected and peak load conditions. Depending mainly on the application, you need to identify the monthly, weekly, daily, and hourly average load targets for every selected scenario. You need the following information to identify an application's target load levels:
    • What are the current and expected normal and peak levels of user requests?
    • What are the application's key test scenarios?
    • How are user requests distributed across the identified scenarios?
    • What are the navigation paths for all the scenarios, along with the relative utilization of each?
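A common way to turn visit rates into concurrent-user targets is Little's law: concurrency equals arrival rate multiplied by average session time. The figures below are illustrative, chosen to match the 1,000-user objective from earlier.

```python
def concurrent_users(sessions_per_hour, avg_session_minutes):
    """Little's law: concurrency = arrival rate x average time in the system."""
    arrivals_per_minute = sessions_per_hour / 60.0
    return arrivals_per_minute * avg_session_minutes

# Assumed traffic figures: 6,000 sessions/hour normally, doubling at peak,
# with users spending 5 minutes per session on average.
normal = concurrent_users(sessions_per_hour=6000, avg_session_minutes=5)   # 500
peak = concurrent_users(sessions_per_hour=12000, avg_session_minutes=5)    # 1000
```

The same arithmetic works per scenario once the relative distribution from the previous step is applied to the overall session rate.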

    8. Workload Design Miscellaneous Options

    Once you have identified the key test scenarios, their relative distribution, and the target load levels, the last step is to configure the remaining workload options, such as the browser mix, network mix, think time, and pauses between iterations, so the workload matches the real environment. Most performance testing tools provide the following options:
    • Browser Mix: List all the browsers you want to include in your test and specify the load distribution across them, to verify how the system responds in each browser.
    • Network Mix: Many kinds of internet connections are in use these days, and people use almost all of them depending on availability and constraints. It is therefore better to include the major network connection types in your test and distribute the load across them appropriately.
    • Think Time: Real users always take some time between actions, so it is very important to include think time in your test, based on how comfortable the application's users are with the identified scenarios. Ignoring think time can invalidate the test results.
    • Pause Time: There is always a certain pause after a user receives a response from the server and before they send a new request; this is catered for with a pause time between iterations.
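Think time is usually randomized rather than replayed exactly, so that virtual users do not act in lockstep. The sketch below varies a recorded pause uniformly within a band; the ±50% band is an assumed convention, not a fixed rule.

```python
import random

def think_time(recorded_s, rng=random.random):
    """Return a pause between 50% and 150% of the recorded think time."""
    return recorded_s * (0.5 + rng())

# In a script, a virtual user would sleep for think_time(4.0) seconds
# between actions, where 4.0 s was the pause observed during recording.
```

Most load testing tools expose the same idea as a "random think time as a percentage of recorded value" runtime setting.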

    Challenges Involved in Workload Modeling

    We have discussed all the activities involved in workload modeling, and it is quite obvious that workload modeling for a performance test is no piece of cake. Performance testing teams face various challenges while performing the activities described above. The following is a list of some of these challenges:
    • Getting complete access to the application in the planning phase, before the performance test is executed
    • Getting help from the marketing and network teams, for their input on the application's performance-critical scenarios and their workload distribution
    • Gaining access to the relevant server logs
    • Applying data mining techniques to the server logs to extract the relevant information
    • Presenting the workload models to the stakeholders effectively, to get their approval

    Identifying Critical Scenarios and Metrics

    Identifying critical scenarios is the initial, and a critical, phase of the load test process. Several factors play a vital role in this phase: expertise in the application domain, business rules and requirements, the technology involved, and so on.
    The following are the important metrics that need to be collected and monitored:
    1. Response time – how fast is your application?
    2. Throughput – how many orders can your application process, i.e., transactions per second?
    3. CPU Utilization – what percentage of your CPU is utilized?
    4. Memory Utilization – what percentage of your memory is utilized?
    5. Network Utilization – what percentage of your network bandwidth is utilized?
    6. Disk Utilization – what percentage of your disk is utilized?
    7. User load – what is the maximum number of concurrent users your application can withstand?
    There are also different kinds of metrics to identify, such as client-side metrics, server-side metrics, business metrics, and network metrics.
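The first two metrics fall straight out of the raw per-transaction response times. A sketch of reducing a sample list to average, 90th-percentile response time (nearest-rank method), and throughput; the percentile choice and window length are illustrative.

```python
def summarize(samples_s, window_s):
    """samples_s: per-transaction response times (seconds);
    window_s: duration of the measurement window (seconds)."""
    ordered = sorted(samples_s)
    idx = max(0, int(round(0.90 * len(ordered))) - 1)  # nearest-rank p90
    return {
        "avg_response_s": sum(ordered) / len(ordered),
        "p90_response_s": ordered[idx],
        "throughput_tps": len(ordered) / window_s,
    }
```

Reporting a percentile alongside the average matters: one slow outlier can drag the mean up while the p90 shows most users were fine, as in the test data below.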