Friday, July 29, 2016

Advantages and Disadvantages of Performance Testing over the Cloud

Performance Testing over the Cloud
Cloud computing is becoming more and more popular and mature over time, and its usage has grown exponentially. Performance engineers set up a copy of the production system in the cloud and deploy load injectors across different geographical locations to perform the load test effectively.
Advantages
  • Cloud testing provides the flexibility to deploy a copy of the production system in a discrete environment and test the application conveniently
  • It's extremely simple to fix defects and quickly configure changes
  • It reduces test cost thanks to convenient rental models
  • It provides greater test control to simulate the required user load and to identify and reproduce bottlenecks
Disadvantages
  • Security and privacy of data are the biggest concerns in cloud computing
  • Cloud testing works online and depends entirely on network connection speed
  • Complete dependency on the cloud service provider for quality of service
  • Although cloud hosting is much cheaper in the long run, its initial cost is usually higher than traditional technologies
  • As an emerging technology, the cloud can be difficult to upgrade without losing data, although this is likely a short-term issue

Performance Test Issues and Trouble-Shooting

Many differences are commonly found between the test and production systems, even after proper validation of the test system. A differently configured performance test environment produces invalid results which can greatly mislead all the stakeholders, and the application itself can then fail in production. There can be dozens of reasons why your test environment is not producing the required results; some of them are as follows:
  • Overloaded load injectors: Check the resource utilization of the load injector machines. Quite often the load injector machines run out of processor and memory capacity and are unable to simulate the required number of virtual users. Run a small, simple test first and check the resource consumption on these machines before running the detailed test.
  • Insufficient network bandwidth: Network bandwidth plays a vital role when you are conducting a performance test over a WAN. Test results can differ greatly based on the available bandwidth, so make sure that sufficient network bandwidth is available before starting the test. Moreover, you need two network interface cards (NICs) when the web server and database server are on different tiers: one NIC facing the clients and the other used for database communication.
  • Improper test data: Improper test data can also create various issues in performance testing. It's quite possible that a variable is not parameterized and the same value is submitted to the database for every user, which can lead to low processor activity due to artificial locking.
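As a quick illustration of the injector check above, here is a minimal Python sketch that flags an injector whose warm-up readings look too high. The thresholds and the sample readings are invented for the example:

```python
# Minimal sketch: flag a load injector as overloaded based on CPU and
# memory utilisation sampled during a small warm-up test.
# The 80% thresholds are illustrative assumptions, not fixed rules.

def injector_overloaded(cpu_pct, mem_pct, cpu_limit=80.0, mem_limit=80.0):
    """Return True if either reading exceeds its safe threshold."""
    return cpu_pct > cpu_limit or mem_pct > mem_limit

# Example readings gathered from a short smoke test on each injector:
readings = {"injector-1": (45.0, 60.0), "injector-2": (92.5, 70.0)}
overloaded = [name for name, (cpu, mem) in readings.items()
              if injector_overloaded(cpu, mem)]
print(overloaded)  # ['injector-2'] -- its CPU reading exceeds the limit
```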
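The parameterization point can also be sketched in a few lines of Python; the search terms and naming below are purely illustrative:

```python
# Minimal sketch: give every virtual user a unique parameter value so the
# database is not hit with identical keys (which causes artificial locking).
import itertools

user_ids = itertools.count(1)   # unique sequence shared by all virtual users

def next_search_term(terms=("laptop", "phone", "camera")):
    """Rotate through distinct search terms instead of hard-coding one value."""
    uid = next(user_ids)
    return f"{terms[uid % len(terms)]}-{uid}"

print(next_search_term())  # "phone-1" on the first call
```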

Performance Testing Best Practices on Production System

Although we have discussed various reasons why it's important to conduct performance testing on the production system, there are still many issues and concerns (some discussed in the section above) that make companies hesitant to do so. In this section, we will discuss some best practices that can be adopted to minimize the impact of performance testing on a production system.

Testing During Maintenance Window

Almost all large organizations schedule maintenance windows for their applications, during which users are restricted from interacting with the application. You can coordinate with the responsible teams and plan your performance testing activity during this scheduled downtime without affecting the actual users' experience.

Test before Release

One of the best approaches is to test the application just before making it available to actual users. You can make application performance testing part of your release management plan to ensure that performance tests are always executed before releasing the application.

Test during Off-Hours or Off-Days

Conduct the performance testing during off-hours or on off-days if you are left with neither of the above two options. The minimum number of actual application users is affected when testing on this schedule. It not only helps minimize the impact of testing on real users' activities but also helps identify the root causes of bottlenecks. The most suitable time for this approach is usually midnight on Saturday or Sunday.

Test Read-only Transactions

Many companies prefer not to do any testing activity on their production system for fear that test data might get mixed with actual users' data. Especially for business-critical applications, companies are unwilling to take even minor risks. That is why the production database is almost never used in testing, and even when it is, it is used only for read-only operations. These simple transactions don't affect the application data but can still reveal important performance bottlenecks.

Increase Load Gradually

One approach to minimize the impact of performance testing on real users is to increase the number of simulated users gradually while the real users' transactions remain within the acceptable threshold. As mentioned above, performance testing is not only about breaking the system but also about finding the application's behavior under normal conditions. Run a test and increase the load gradually as long as users' response times stay within the acceptable range and they are able to complete their transactions successfully. Then analyze the test results, fix the bottlenecks and re-test. Over multiple iterations you can thoroughly test an application for most of its bottlenecks without seriously impacting the real users; they may experience a slower application, but they will still be able to complete their transactions.
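A gradual ramp-up of the kind described above might look like the following Python sketch, where `measure_response_time` stands in for a real measurement against the system under test; the step size, threshold and simulated response curve are all illustrative:

```python
# Minimal sketch of a step-load ramp: keep adding simulated users while the
# observed response time stays within the agreed threshold.

def ramp_up(measure_response_time, threshold_s=3.0, step=50, max_users=1000):
    """Return the highest user count whose response time met the threshold."""
    users, last_ok = 0, 0
    while users < max_users:
        users += step
        if measure_response_time(users) > threshold_s:
            break                      # stop before real users are impacted
        last_ok = users
    return last_ok

# Simulated behaviour: response time grows with load, crossing 3 s above 500 users.
print(ramp_up(lambda u: 0.5 + u / 200))  # 500
```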

Careful Monitoring and Continuous Communication during Test Execution

The performance testing approach, its expected outcomes and the risks involved should be clearly communicated to all the stakeholders. Moreover, you need to be very proactive while testing on the production system: all stakeholders should monitor the test carefully, and the test should be stopped immediately if and when it affects the actual users beyond the acceptable threshold.

Performance Test Environment Checklist

We all know the importance of having a test environment similar to the production system. Once we have set up the performance test environment, we can get an initial idea of its state by comparing it with the production environment on the following factors:

  • Number of Servers: The number of physical and virtual servers
  • Load Balancing Strategy: The type of load balancing mechanism in use
  • Hardware Resources: CPU count and type, RAM capacity, and the number and type of NICs
  • Software Resources: The standard application build, apart from the components of the AUT
  • Application Components: A description of the application components to be deployed on each server
  • External Links: Links to third-party applications and other internal system components
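As a rough illustration, this checklist comparison can be automated as a simple diff between the two environments; the factor names and values below are invented:

```python
# Minimal sketch: compare a test environment against production on the
# checklist factors above and report any mismatching values.

def environment_diff(prod, test):
    """Return the checklist factors whose values differ between environments."""
    return {k: (prod[k], test.get(k)) for k in prod if test.get(k) != prod[k]}

prod = {"servers": 8, "load_balancing": "round-robin", "cpus": 16, "ram_gb": 64}
test = {"servers": 4, "load_balancing": "round-robin", "cpus": 16, "ram_gb": 64}
print(environment_diff(prod, test))  # {'servers': (8, 4)}
```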

Cloud Performance Testing

Benefits of Performance Testing in the Cloud

All levels of testing can be performed on cloud infrastructure, but performance testing benefits particularly from cloud environments.

Flexibility

Different levels of tests can be executed on discrete environments at the enterprise's convenience. Performance testers no longer have to wait until the end of the testing phase to move to a production-like environment for their performance and stress tests; such an environment can be brought into action at will.

Simplicity

The cloud model provides a new level of simplicity in the form of bug fixing environments that can be launched as quickly as the configuration can be put in place. 

Comprehensive Testing

End-to-end tests for more generic processes can be performed in the cloud. All the necessary components can be published in the cloud to create the complete chain of systems. In this manner the overall business process can be tested.

Cost Reduction

Cloud environments can be enabled and disabled at will, reducing the cost of environment management. Cost reduction is the major factor influencing companies to choose the cloud; per IDC survey reports, economic benefits are the key drivers of cloud adoption.

Cloud testing leverages the cloud infrastructure, minimizing the unit cost of computing and increasing the efficiency of performance testing. Reports on cloud-enabled testing service providers suggest cost savings usually ranging from 40% to 70%.

Small and medium-sized enterprises (SMEs) that cannot afford huge capital expenditures also find cloud-enabled performance testing an ideal approach. As there is no need for upfront investment in infrastructure, the public cloud allows enterprises to shift to a flexible operating-expenditure model.

In the case of a private cloud, the infrastructure can be deactivated once the testing process is complete, freeing enterprises from expensive ongoing operational costs.

Cleaner and Greener Testing

Cloud computing's shared, on-demand capabilities make it significantly greener than traditional models, and this holds for the testing process as well. By sharing cloud resources for their test infrastructure, enterprises can use IT resources on demand and eliminate waste. Consumers using cloud infrastructures can minimize energy use and deliver environmental savings in carbon dioxide of around 55%.

Greater Control

Cloud-based environments can provide greater control over test execution, letting you analyze application performance and find bottlenecks while the tests are running. The cloud model allows test engineers to scale from a few thousand to millions of concurrent users to evaluate breaking points. This gives testers a clear picture of possible runtime errors and prepares enterprises for peak demand times.

Internal Lab Testing vs. Cloud Testing

So what is the best choice? 
  • Set up an internal copy of production as a test environment and use several computers to generate load internally
  • Set up an internal copy of production as a test environment and use load injectors on the cloud to generate geographically distributed load
  • Set up a copy of production on the cloud as a test environment and use load injectors on the cloud to generate geographically distributed load

We saw that performance testing from the cloud gives you a complete understanding of the final user experience and drastically reduces investment and configuration costs. However, it may not fit every organization (security, product licenses) and can complicate the analysis of performance bottlenecks (too many variables).

The choice really depends on the type of application to be tested and on the company's culture and processes.

A first performance testing run in a simpler lab with smaller loads is still valuable, as it gives an overview of early performance issues. An application which does not pass the lab test needs to be tuned before going to larger-scale testing over the Internet!

A load testing tool which supports both lab and cloud testing with the same scripts and use cases across both types of tests is definitely a winning choice, as it gives you flexibility and scalability across your project.

Load Testing Scenarios Selection Approaches in Performance Testing

Load Testing Scenarios Selection Approaches

We have discussed above the different principles regarding the selection of your load testing scenarios. In this section, we will discuss different approaches a performance testing team can follow to effectively select load testing scenarios, and define a mechanism for scenario selection.

Identify All AUT Scenarios

You will select your load test scenarios based on the criteria mentioned above, so it's necessary to have a complete list of application scenarios before making your choice. Start the activity by developing a complete list of all the features of the AUT.
This approach ensures that you are not missing a single application scenario which should be part of your load test. In a typical E-commerce web application, the most common scenarios could be:
  • Browsing the Catalog
  • Product Searching
  • Order Placement
Identification of Scenario Activities
Once you have identified all the application scenarios, the next step is to identify the users' activities within each scenario. This will help you dig deeper into the AUT and gain more application insight to make a wise selection. For example, the following activities would be involved in the Order Placement scenario of an E-commerce web application:
  • Login to application
  • Browse the product catalog
  • Searching for the desired product
  • Select the product and its quantity
  • Add selected product to your shopping cart
  • Validate your payment method
  • Place the order
Scenarios Selection
Once you have listed the details of all the activities of the AUT scenarios, the next action is to compare the application scenarios based on the criteria mentioned above. You can assign a certain weight to every criterion (based on its importance for load testing), rate every scenario against each criterion, and compare the scenarios' importance based on their aggregate scores. The top-scoring scenarios will be the best candidates for load testing.
Share your selected scenarios with all the application stakeholders and get their approval before formally starting to work on them. Load testing is a complex and costly activity, and any missing or needlessly added scenario can not only invalidate your test results for the production environment but also waste a great deal of money and effort.
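The weighted scoring described above can be sketched as follows; the criteria, weights and 1-5 ratings are invented for illustration:

```python
# Minimal sketch: weight each selection criterion, rate each scenario per
# criterion, and rank scenarios by their aggregate score.

criteria_weights = {"frequency": 0.4, "business_critical": 0.4, "resource_heavy": 0.2}

scenarios = {
    "Browse Catalog":  {"frequency": 5, "business_critical": 3, "resource_heavy": 2},
    "Order Placement": {"frequency": 3, "business_critical": 5, "resource_heavy": 4},
    "Product Search":  {"frequency": 4, "business_critical": 3, "resource_heavy": 3},
}

def score(ratings):
    """Aggregate weighted score for one scenario."""
    return sum(criteria_weights[c] * r for c, r in ratings.items())

ranked = sorted(scenarios, key=lambda s: score(scenarios[s]), reverse=True)
print(ranked)  # highest-scoring load test candidates first
```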

Performance testing in the Agile Process

Performance testing in the Agile Process

Performance testing is an integral part of Agile processes; it can help your organization develop higher-quality software in less time while reducing development costs. The goal is to test performance early and often in the development effort, and to test functionality and performance in the same sprint, because the longer you wait to conduct performance tests, the more expensive it becomes to incorporate changes.
In an Agile project, the definition of "done" should include the completion of performance testing within a sprint. Only when performance testing has been completed can you confidently deliver a successful application to your end users. The best practices for performance testing within a sprint are:

  • Gather all performance-related requirements and address them during system architecture discussions and planning.
  • Work closely with end users and stakeholders to define acceptance criteria for each performance story.
  • Involve performance testers early in the project, in the planning and infrastructure stages.
  • Make performance testers part of the development (sprint) team.
  • Ensure that the performance testers work on test cases and test data preparation while developers are coding for those user stories.
  • Get performance testers to create stubs for any external Web services.
  • Deliver each relevant user story to performance testers as soon as it is signed off by the functional testers.
  • Provide continuous feedback to developers, architects, and system analysts.
  • Share performance test assets across projects and versions.
  • Schedule performance tests for off-hours to increase the utilization of time within the sprint.
Key considerations in Agile performance testing

Service-level objectives
    Service-level objectives (SLOs) drive the planning of performance requirements in Agile environments. Through SLOs, business and IT managers agree on requirements for application throughput, response times, numbers of simultaneously supported users, and other factors that affect the end-user experience. These requirements, which become part of the application backlog, must be met before an application can go live in a production environment.

    Focused performance testing

    SLOs spell out the expected behavior of applications. But in practice, development teams, test teams, and the business don't always know the exact requirements for an application until they see how it performs in a production environment. For this reason, Agile processes rely on the production availability lifecycle (PAL) to see how an application actually performs in the real world. The feedback from the production environment helps developers and testers focus on specific problem areas of the application, making better use of limited resources during short sprint cycles.

    Test data preparation

    The earlier test data preparation takes place, the more time you have for testing. So performance testers should work with stakeholders in the planning stage to prepare tests and test data. This kind of collaboration is one of the keys to getting a lot of work done in a relatively short sprint. Different types of work must happen simultaneously. In this sense, Agile software development is a bit like just-in-time (JIT) manufacturing. You plan carefully for your needs, and then bring in the resources just when you need them.

    Trending

    In an Agile environment, it's important for application owners to see continual improvement in an application over the course of successive sprints. They want to see a favorable trend, where each iteration of the application is better than the last. This makes it all the more important to monitor application performance trends against the SLO requirements. Trending reports allow you to give stakeholders regular snapshots of performance, which should ideally show that performance is steadily improving, or at least not degrading. In addition, with trending reports you do not necessarily have to study the detailed analysis of every test run.
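    As a small illustration of trend monitoring, the following Python sketch flags whether average response times are improving across sprints; the figures and the tolerance are invented:

```python
# Minimal sketch: check that average response time trends favourably across
# sprints, i.e. no sprint regressed by more than a small tolerance.

def is_improving(sprint_avgs, tolerance=0.05):
    """True if every sprint is no worse than the previous one (within tolerance)."""
    return all(curr <= prev + tolerance
               for prev, curr in zip(sprint_avgs, sprint_avgs[1:]))

history = [2.8, 2.5, 2.4, 2.4]      # avg response time (s) per sprint
print(is_improving(history))         # True: steady improvement
print(is_improving([2.4, 3.1]))      # False: a regression to flag
```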

    Reusable and shared testing assets

    To accelerate testing and development work, Agile processes should make use of a repository of reusable and shared testing assets. This repository of test scripts gives everyone on the virtual development team, including contractors working on an outsourced basis, access to the same test assets.
    Among other benefits, the repository provides efficiencies that come with “follow-the-sun” testing. Test scripts can be created over the course of a day in one geography and then made available to testers in another geography, who will run the test during their business day.
    The ability to reuse and share test assets becomes more important with Agile development, where the testing cycle is limited. It allows more work to get done within the available time window.

    Automated testing

    The use of automated testing tools can speed up the process of performance testing. With the right software in place, you can create a script, make it reusable, and then schedule a test to run in the off hours, when developers are not changing the code you’re testing. This makes it possible to achieve higher levels of software quality in less time.
    Automated testing helps you meet your regression and performance testing objectives within the tight timeframe of a two- to four-week sprint. This becomes even more important when you consider that developers often hold onto their work until 60 percent of the sprint has passed before handing off the build for testing. That doesn't leave a lot of time for testing.

    Continual analysis 

    In Agile processes, continual analysis is important.
    Both contributors ("pigs" in Scrum terminology) and stakeholders ("chickens" in Scrum terminology) need to keep a close eye on the progress of the project, especially when it comes to application functionality and performance. To give them the view they need, performance analysis should be both continual and comprehensive. This ongoing analysis helps pinpoint any problem areas in the application. Analysis takes place all the way down to the daily scrums, which include IT infrastructure and performance testers as contributors alongside application stakeholders.
    Contributors are active members of the sprint team who participate in daily scrums, which give all stakeholders visibility into the current state of the development effort. When all interested team members know the performance of each sprint, they are in a better position to keep the quality of the entire application high. The sooner problems are found, the sooner they can be fixed.

    Component testing

    In an Agile environment, you will not have an end-to-end application in every sprint. This makes it important to be able to performance test only a portion or a component of an application. Stubbing provides a way to test components: a stub simulates parts of an application that are either not written or not available. If an application uses outside, third-party data sources (a common occurrence in a Web 2.0 world), then performance testers and quality assurance (QA) specialists are going to need stubs, because they cannot add load to third-party production servers. By performance testing components inside each sprint, you help ensure the development effort yields a high-quality application that performs well end-to-end.
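    As an illustration of stubbing, the sketch below stands up a tiny local HTTP stub with Python's standard library so a component can be exercised without loading a real third-party service; the endpoint and payload are invented:

```python
# Minimal sketch: stub a third-party web service with http.server so a
# component under test never touches the real provider.
import json
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

class StubHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Canned response standing in for the real third-party data source.
        body = json.dumps({"price": 9.99, "stock": 12}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):     # keep test output quiet
        pass

server = HTTPServer(("127.0.0.1", 0), StubHandler)   # port 0 = any free port
threading.Thread(target=server.serve_forever, daemon=True).start()

url = f"http://127.0.0.1:{server.server_port}/product/42"
with urllib.request.urlopen(url) as resp:
    data = json.load(resp)
server.shutdown()
print(data)  # {'price': 9.99, 'stock': 12}
```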

    Activities Involved in Workload Modeling

    Performance testing is a complex activity, as it consists of various phases and each phase has several activities. Workload modeling is one of the most important parts of the performance testing activity, and it's not simple by any means. Some of the activities necessary for identifying the performance test workload model are listed below:



    1. Test Objectives Identification

    In performance testing, as in any activity, your efforts should be aligned with your objectives. Identifying your test objectives means examining what actions you need to take in order to achieve them. So before formally starting work on any application's performance testing, the first step is to identify its test objectives in detail. For an E-commerce web application, the following are some examples of performance test objectives:
    • Response Time: Product search should not take more than 3 seconds.
    • Throughput: The application server should have the capacity to serve 500 transactions per second.
    • Resource Utilization: All resources, such as processor and memory utilization, network I/O and disk I/O, should remain below 70% of their maximum capacity.
    • Maximum User Load: The system should be able to serve 1,000 concurrent users while fulfilling all of the above objectives.
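    A check of measured results against objectives like those above can be sketched as follows; the thresholds mirror the example targets, while the measured figures are invented:

```python
# Minimal sketch: compare a test run's measurements against the stated
# performance objectives and list any that were not met.

objectives = {
    "search_response_s": 3.0,      # product search under 3 seconds
    "throughput_tps": 500,         # at least 500 transactions per second
    "max_cpu_pct": 70.0,           # resources below 70% of capacity
}

def violations(measured):
    """Return the objectives the measured run failed to meet."""
    failed = []
    if measured["search_response_s"] > objectives["search_response_s"]:
        failed.append("search_response_s")
    if measured["throughput_tps"] < objectives["throughput_tps"]:
        failed.append("throughput_tps")
    if measured["max_cpu_pct"] > objectives["max_cpu_pct"]:
        failed.append("max_cpu_pct")
    return failed

run = {"search_response_s": 2.4, "throughput_tps": 430, "max_cpu_pct": 65.0}
print(violations(run))  # ['throughput_tps']
```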

    2. Application Understanding

    Complete understanding of the AUT, with all its features, is the basic step in any testing activity. You can't thoroughly test an application without a complete understanding of it, and performance testing is no exception. Performance testing starts with planning, and planning starts with application understanding. You explore the application from a performance perspective and try to answer the following questions:
    • How many types of users use this application?
    • What are the business scenarios for each user type?
    • What is the AUT's current and predicted peak user load for all user actions over time?
    • How is the user load expected to grow over time?
    • In how much time will a specific user action reach its peak load?
    • For how long will the peak load continue?
    Performance testing teams can also collaborate with the network team and check the server logs for answers to some of these questions, and can interview the marketing team and application stakeholders for others.
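    Mining a server log for usage patterns can be as simple as counting requests per scenario URL, as in this sketch with invented log lines:

```python
# Minimal sketch: count how often each scenario URL appears in an access log
# to get a first picture of real user behaviour.
from collections import Counter

log_lines = [
    "10:01 GET /catalog", "10:03 GET /search", "10:04 GET /catalog",
    "11:15 GET /order",   "11:20 GET /search", "11:21 GET /catalog",
]

hits = Counter(line.split()[-1] for line in log_lines)
print(hits.most_common())  # [('/catalog', 3), ('/search', 2), ('/order', 1)]
```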

    3. Key Scenarios Identification

    It is neither practical nor required to simulate all user actions in performance tests, due to budget and time constraints. Performance testing teams always select a limited number of user actions which have the greatest performance impact on the application. In one of our previous articles (Load Testing Scenarios Identification) we discussed this topic in detail. The following are examples of the kinds of scenarios which should be selected when conducting performance tests:
    • Measurable Scenarios: A basic criterion for selecting any user scenario for performance testing is that it should be fully measurable.
    • Most Frequently Accessed Scenarios: Application scenarios which users access most often while browsing through the application.
    • Business Critical Scenarios: Application core scenarios which contain its business transactions.
    • Resource Intensive Scenarios: User scenarios which consume more resources as compared to typical scenarios.
    • Time-Dependent Frequently Accessed Scenarios: Application scenarios which are accessed on specific occasions only, but very frequently on those occasions.
    • Stakeholders Concerning Scenarios: Application features about which the stakeholders are more concerned such as AUT newly integrated modules.
    Some of the most desirable performance testing scenarios for an E-commerce application could be:
    • Browsing product catalog
    • Creating a user account
    • Searching for a product
    • Login to application
    • Order Placement

    4. Determining Navigation Paths of Key Scenarios

    Once you have identified all the AUT scenarios to include in a performance test, the next step is to figure out all the possible paths a user can take to complete each scenario successfully. An application's users most probably have different levels of domain and technical expertise, and it's quite natural that they will follow different steps to complete a specific scenario. Performance testing teams identify all the possible paths users could follow to complete each identified scenario, and also determine the frequency of each path to decide whether it should be included in the performance test. Application response for the same scenario can vary greatly depending on the user's navigation path, so it's strongly advised to test all the major paths of each selected scenario under load. The following guidelines can help in identifying a scenario's navigation paths:
    • Figure out the AUT paths which can be used to complete more than one identified scenario and which have a major performance impact.
    • Read the design and user manuals to find all possible paths for each identified scenario.
    • For an application already in production, check the log files to find the navigation patterns users follow to complete an identified scenario.
    • Explore the application and try to find all possible paths of a scenario yourself.
    • Another approach is to give application access to both new and experienced users, ask them to complete certain scenarios, and observe their actions.

    5. Identify Unique Test Data

    Unfortunately, identifying all possible navigation paths of the selected scenarios doesn't provide all the information required to simulate them successfully in a performance test. Several pieces of information are still required to accurately simulate the workload model, and preparing the required test data is one of them. You can never achieve accurate test results with improper or insufficient test data. You can develop an initial idea of the required test data by answering the following queries:
    • While navigating a specific path, how much time does a user spend on each page?
    • Which conditions force the user to alter the navigation path?
    • What input data is required for a specific path?
    You will also require a larger amount of test data, and you need to keep an eye on the database state if you wish to run the test effectively. The following considerations apply when executing a performance test where multiple navigation paths are tested for the identified scenarios:
    • Make sure you have all the required test data; you will need more test data when you are testing scenarios with all their navigation paths.
    • Avoid using the same test data for multiple users, as it will produce invalid results.
    • Periodically check the database status during test execution and make sure it's not overloaded.
    • Include some invalid test data as well, to simulate real users' behavior, because users sometimes provide invalid values.

    6. Relative Load Distribution Across Identified Scenarios

    Now that you have understood the application and identified its key test scenarios, their navigation paths and the test data required for each, the next step is to figure out the distribution of all the identified scenarios in your workload model. In any application, some scenarios are executed more frequently than others, and for some applications scenario execution also depends on the time period. For example, in an online billing application where bill payments are made only during the first week of the month, the administrator will mainly use the application for account updates and importing billing information during the last three weeks of the month, but as the first week starts, the major chunk of site users will be using it for bill payments. So you need to figure out which AUT scenarios will be executed at what time, and what their execution percentage will be in the overall workload. The following techniques can be helpful in identifying the relative distribution of your identified scenarios:
    • Check the server log files to identify user trends in the application, if it is in the production environment.
    • Consult the sales and marketing teams to find out which features they believe will be used most.
    • Interview existing and potential clients to find out which features they are most interested in.
    • Share a beta release with a smaller segment of users and study their trends and behavior from the server log files.
    • If none of the above approaches works, use your experience and intuition to answer these questions.
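    Once the relative distribution is known, turning it into virtual-user counts for a test is straightforward; the mix percentages below are invented:

```python
# Minimal sketch: turn a relative scenario distribution into virtual-user
# counts. Any rounding remainder is given to the most frequent scenario so
# the counts always sum to the requested total.

def distribute_users(total_users, mix):
    counts = {s: int(total_users * pct) for s, pct in mix.items()}
    top = max(mix, key=mix.get)
    counts[top] += total_users - sum(counts.values())   # fix rounding drift
    return counts

mix = {"browse": 0.55, "search": 0.30, "order": 0.15}
print(distribute_users(1000, mix))  # {'browse': 550, 'search': 300, 'order': 150}
```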

    7. Identify Target Load Levels

    The last step in completing the workload model, after all the activities above, is to identify the normal and peak load levels of every selected scenario, so the application can be tested under both expected and peak load conditions. Depending on the application, you need to identify the monthly, weekly, daily and hourly average load targets for every selected scenario. You need the following information in order to identify the target load levels for an application:
    • What are the current and expected normal and peak user request levels?
    • What are the application key test scenarios?
    • What is the distribution of user requests on the identified scenarios?
    • What are the navigation paths for all the scenarios along with relative utilization of every scenario?

    8. Workload Design Miscellaneous Options

    Once you have identified the key test scenarios, their relative distribution and the target load levels, the last step is to configure different workload options, such as browser mix, network mix, think time and pauses between iterations, so that the simulated workload matches the real environment. Most performance testing tools provide the following options:
    • Browser Mix: List all the browsers you want to include in your test and specify the load distribution across them, to verify how the system responds in each browser.
    • Network Mix: Various internet connection types are in use these days, and people use almost all of them depending on availability and constraints. It is therefore better to include the major connection types in your test and distribute the load across them appropriately.
    • Think Time: Real users always take some time between actions, so it is very important to include think time in your test, based on how users actually work through the identified scenarios. Ignoring think time can invalidate the test results.
    • Pause Time: There is always a certain pause after a user receives a response from the server and before they send a new request; this is catered for with pause time between iterations.
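    The four options above can be sketched in a scripted virtual user. The mixes, timings and the `fetch` stub below are illustrative assumptions, not any particular tool's API:

```python
# Sketch of how a scripted virtual user might apply browser mix,
# network mix, think time and pause time. All values are assumptions.
import random
import time

browser_mix = {"Chrome": 0.6, "Firefox": 0.25, "Edge": 0.15}
network_mix = {"fiber": 0.5, "4g": 0.35, "3g": 0.15}

def pick(mix):
    # Weighted random choice according to the configured distribution.
    return random.choices(list(mix), weights=mix.values())[0]

def virtual_user_iteration(fetch):
    browser = pick(browser_mix)   # decides which User-Agent to send
    network = pick(network_mix)   # decides which bandwidth profile to throttle to
    fetch("/home", browser, network)
    time.sleep(random.uniform(2, 8))    # think time between user actions
    fetch("/search", browser, network)
    time.sleep(random.uniform(5, 15))   # pause before the next iteration
```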

    Challenges Involved in Workload Modeling

    We have discussed all the activities involved in workload modeling, and it is quite obvious that workload modeling for a performance test is no piece of cake. Performance testing teams face various challenges while performing the above activities. The following is a list of some of them:
    • Getting complete application access in the planning phase, before the performance test is executed
    • Getting input from the marketing and network teams on performance-critical scenarios and their workload distribution
    • Getting access to the relevant server logs
    • Applying data mining techniques to the server logs to extract the relevant information
    • Presenting the workload models to the stakeholders effectively in order to get their approval

    Identifying Critical Scenarios and Metrics

    Identifying critical scenarios is the inception phase of the load test process, and a critical one. Various factors play a vital role in this phase: expertise in the application domain, business rules and requirements, the technology involved, and so on.
    The following are the important metrics that need to be collected and monitored:
    1. Response time – how fast is your application?
    2. Throughput – how many orders can your application process, i.e. transactions per second?
    3. CPU utilization – what percentage of your CPU is used?
    4. Memory utilization – what percentage of your memory is used?
    5. Network utilization – what percentage of your network bandwidth is used?
    6. Disk utilization – what percentage of your disk is used?
    7. User load – what is the maximum number of concurrent users your application can withstand?
    There are also different kinds of metrics to be identified, such as client-side metrics, server-side metrics, business metrics and network metrics.
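    As a small sketch of how raw request timings become the response-time and throughput metrics above (the sample data is illustrative, and a simple nearest-rank percentile is assumed):

```python
# Sketch: compute response-time percentiles and throughput from raw
# per-request samples of (completion_timestamp_sec, response_time_sec).
samples = [(0.4, 0.4), (0.9, 0.5), (1.3, 0.4), (1.8, 0.5), (2.0, 0.2)]

response_times = sorted(rt for _, rt in samples)

def percentile(sorted_vals, p):
    # Nearest-rank percentile over an already-sorted list.
    idx = max(0, int(round(p / 100 * len(sorted_vals))) - 1)
    return sorted_vals[idx]

p90 = percentile(response_times, 90)
duration = max(ts for ts, _ in samples)   # run started at t = 0
throughput = len(samples) / duration      # transactions per second
```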

    Workload Modeling in Performance Testing

    Importance of Workload Modeling


    Your performance test results will be more accurate if you manage to properly simulate your production system's parameters in your test. It is in the planning phase that the performance analyst makes sure the information on all the parameters of the AUT (application under test) has been acquired, so that they can be simulated accurately in the performance test. Identifying the AUT workload model is one of the most important parts of this planning activity. The workload model provides information on what types of user actions will be tested under load, what the business scenarios will be for all the users, and how users will be distributed across every scenario. This information helps performance testing teams in many ways, such as:
    • Performance Scenarios Identification: The fundamental activity of workload modeling is to understand the application and identify its performance scenarios.
    • Performance Test SLAs: Performance testing teams translate the AUT's non-functional requirements into performance test SLAs through the workload model.
    • Makes Communication Easier: The workload model makes it easy for performance testing teams to communicate the AUT's performance scenarios, and the users' distribution across them, to all the application stakeholders.
    • Test Data Preparation: The workload model helps in identifying the type and amount of test data, which is always required before work on the tool starts.
    • Required Number of Load Injectors: You always require substantial infrastructure to conduct a performance testing activity successfully, and testing the application with inadequate infrastructure produces incorrect results. Normally user load is simulated from multiple machines (i.e. load injectors) for accurate testing; the number needed is also identified from the workload model.
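    A minimal sketch of the injector-count estimate, assuming a per-injector capacity you have measured with a small calibration test on your own hardware:

```python
# Sketch: estimate how many load injector machines a test needs.
# Both numbers below are illustrative assumptions.
import math

target_virtual_users = 5000
users_per_injector = 800   # measured safe capacity of one machine

injectors_needed = math.ceil(target_virtual_users / users_per_injector)
# injectors_needed == 7
```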


    Business Critical Scenarios in Performance Testing

    Business Critical Scenarios

    An application's core scenarios are called its business critical scenarios. Load testing is not only about testing and optimizing the most frequently accessed scenarios; in fact, the performance of the application's business critical scenarios is even more important. These are the application's core areas, and they generate a significant amount of revenue for the company. If users are unable to complete the application's business processes effectively, it creates huge frustration among them. In the case of an e-commerce application, purchasing a product would be an example of a business critical scenario.
    Different approaches can be adopted to identify an application's business critical scenarios. Some of them are as follows:

    • You can consult the application's major stakeholders, especially the marketing department, and ask them for their input on these scenarios.
    • You can also read the marketing material to identify the AUT's business critical scenarios.
    • Another technique is to browse through the application and use your experience to figure out the business critical scenarios on your own.

    Thursday, July 28, 2016

    Performance Testing-Web Application Bottlenecks

    Web Applications Performance Bottlenecks


    An application's architecture is formed from several components, and there can be dozens of bad-performance symptoms in each one. To diagnose bottlenecks effectively, a good performance tester must know the performance symptoms on each tier.
    Below is a detailed list of symptoms for each component of a 3-tier web application.

    Network Performance Bottlenecks

    Network bottlenecks contribute relatively little, but they are important enough to discuss in detail: you cannot afford even minor issues, because they can lead to disasters. The following are the major network performance symptoms in the context of 3-tier web applications:

    • Ineffective load balancing
    • Insufficient/poorly configured network interface cards
    • Overly tight security
    • Inadequate overall bandwidth
    • Poor network architecture

    Network performance bottlenecks have no single source; load balancing, security and network architecture can be the major ones. The pie chart below depicts the percentage contribution of each source to illustrate its impact on performance bottlenecks.
    Web Server Performance Bottlenecks
    Like network bottlenecks, web server bottlenecks do not contribute a major share of performance issues either. Web servers act as a liaison between the client and the processing servers (application and database), so web server bottlenecks need to be addressed properly, since they can affect the performance of the other components to a great extent.
    Below is the list of bottlenecks which can affect web server performance:
    • Broken links
    • Inadequate transaction design
    • Overly tight security
    • Inadequate hardware capacity
    • A high volume of SSL transactions
    • A poorly configured server
    • Servers with ineffective load balancing
    • Poor utilization of OS resources
    • Insufficient throughput
    Secure transactions make the biggest contribution to web server performance bottlenecks; load balancing is usually also a factor, and sometimes resource-intensive specialized functions cause poor web server performance. Below is a graphical representation of each web server performance bottleneck with its percentage.

    Application Server Performance Bottlenecks

    The business logic of an application resides on the application server. Application server hardware, software and application design can affect performance to a great extent, so poor application server performance can be a critical source of bottlenecks.
    Below is the list of causes of bad application server performance:
    • Memory leaks
    • Inefficient garbage collection
    • Poorly configured DB connections
    • Inefficient code transactions
    • A sub-optimal session model
    • A poorly configured application server
    • Inadequate hardware resources
    • An inefficient object access model
    • An inefficient security model
    • Poor utilization of OS resources
    Object caching, SQL and database connection pooling are the main causes of application server bottlenecks, contributing about 60% of them; 20% of the time, a poorly configured application server causes the poor performance.
    Below is the complete detail of application server bottlenecks with their impact.
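    One way to spot the memory-leak symptom listed above is to trend heap-size samples taken during a soak test. A minimal sketch, with illustrative sample data and an assumed growth threshold:

```python
# Sketch: flag a possible memory leak from periodic heap-size samples
# taken during a soak test. Sustained growth is the classic symptom.
def leak_suspected(samples_mb, growth_threshold_mb=50):
    # Compare the average of the first and last quarters of the run;
    # growth beyond the threshold suggests a leak worth investigating.
    q = max(1, len(samples_mb) // 4)
    early = sum(samples_mb[:q]) / q
    late = sum(samples_mb[-q:]) / q
    return (late - early) > growth_threshold_mb

steady = [512, 520, 515, 518, 516, 519, 514, 517]
leaking = [512, 540, 575, 610, 650, 680, 720, 760]
# leak_suspected(steady) -> False, leak_suspected(leaking) -> True
```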

    Database Server Performance Bottlenecks
    Database performance is the most critical factor for application performance, as the database is the main culprit in performance bottlenecks. Database software, hardware and design can really impact the performance of the whole system.
    The following is a comprehensive list of causes of poor database performance:
    • Inefficient/ineffective SQL statements
    • A small/insufficient query plan cache
    • An inefficient/ineffective SQL query model
    • Inefficient/ineffective DB configuration
    • A small/insufficient data cache
    • Excess DB connections
    • Processing too many rows at a time
    • Missing/ineffective indexes
    • An inefficient/ineffective concurrency model
    • Outdated statistics
    • Deadlocks
    Bad SQL and bad indexes together contribute nearly 60% of database server performance bottlenecks. The chart below shows the complete detail of database server causes with percentages.
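    To illustrate hunting for bad SQL, here is a sketch that ranks statements by total time from a simplified slow-query log. The tab-separated "duration, statement" line format is an assumption; real database slow-query logs need their own parsers:

```python
# Sketch: rank offending SQL from a simplified slow-query log whose
# lines are "duration_ms<TAB>statement" (an assumed format).
from collections import defaultdict

def worst_queries(lines, top_n=3):
    totals = defaultdict(float)
    for line in lines:
        duration_ms, statement = line.rstrip("\n").split("\t", 1)
        totals[statement] += float(duration_ms)
    return sorted(totals.items(), key=lambda kv: kv[1], reverse=True)[:top_n]

log = [
    "950\tSELECT * FROM orders WHERE customer_email = ?",
    "40\tSELECT id FROM users WHERE id = ?",
    "870\tSELECT * FROM orders WHERE customer_email = ?",
]
# The repeated email lookup dominates total time -> a candidate for
# a missing index on orders.customer_email.
```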
    Client Side Performance Bottlenecks
    Client-side performance has attracted increasing interest in recent years since the release of Google's performance optimization best practices: caching, fewer static files, file minification, compression, JavaScript processing time, page rendering, etc.

    For rich internet applications with lots of images, videos, etc., the client-side aspects have a bigger bearing on the actual response time than the server-side response time, and should be given due importance.

    With many modern AJAX architectures, it is possible to place so much code in the client that a significant amount of time passes before the request is even transmitted to the application server. This is particularly true for underpowered client machines with inadequate memory and slow processors.
    Top 10 client side performance symptoms:

    • Slow CSS Selectors on Internet Explorer
    • Slow executing external services
    • Multiple CSS Lookups for same object
    • Extensive XHR Calls
    • Large DOM
    • Expensive DOM Manipulations
    • Extensive Visual Effects
    • Extensive JavaScript files
    • Extensive Event Handler Bindings
    • Too fine-grained logging and monitoring
    Third Party Services Performance Issues

    Today web applications rely heavily on third-party components, which affect page loading and result in a bad user experience. It is common practice not to analyze third-party tools properly from a performance point of view before integrating them into the application. If you have ever observed page component load times, you will have noticed that third-party components take longer. These third-party components can cause various performance bottlenecks, but the following are the most common:
    • Increased page size
    • Third-party services consuming more bandwidth
    • Resources that are not minified or compressed
    • Inadequate response times from the third-party component provider
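    A quick way to confirm slow third-party components is to time each resource download separately. A minimal sketch using only Python's standard library; the URLs in the usage comment are placeholders for your page's actual assets:

```python
# Sketch: time each page component separately to spot slow third-party
# resources. Placeholder URLs; point this at your page's real assets.
import time
import urllib.request

def time_resource(url, timeout=10):
    start = time.perf_counter()
    with urllib.request.urlopen(url, timeout=timeout) as resp:
        body = resp.read()
    return time.perf_counter() - start, len(body)

# for url in ["https://example.com/app.js",
#             "https://thirdparty.example/widget.js"]:
#     elapsed, size = time_resource(url)
#     print(f"{url}: {elapsed:.2f}s, {size} bytes")
```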