Service-level objectives
Service-level objectives (SLOs) drive the planning of performance requirements in Agile environments. Through SLOs, business and IT managers agree on requirements for application throughput, response times, numbers of simultaneously supported users, and other factors that affect the end-user experience. These requirements, which become part of the application backlog, must be met before an application can go live in a production environment.
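Before an application goes live, measured results must be checked against the agreed SLOs. A minimal sketch of that gate is below; the metric names and threshold values are hypothetical examples, not figures from any real agreement.

```python
# Sketch: representing SLOs as data and checking a test run's measured
# results against them before a go-live decision. All thresholds and
# metric names here are hypothetical.

SLOS = {
    "p95_response_time_ms": 800,   # 95th-percentile response time
    "min_throughput_rps": 250,     # requests per second
    "max_error_rate": 0.01,        # fraction of failed requests
}

def meets_slos(measured: dict) -> list:
    """Return a list of SLO violations (an empty list means go-live is OK)."""
    violations = []
    if measured["p95_response_time_ms"] > SLOS["p95_response_time_ms"]:
        violations.append("p95 response time too high")
    if measured["throughput_rps"] < SLOS["min_throughput_rps"]:
        violations.append("throughput below target")
    if measured["error_rate"] > SLOS["max_error_rate"]:
        violations.append("error rate above target")
    return violations
```

Keeping the thresholds in data rather than in code makes it easy for business and IT stakeholders to review and revise them sprint by sprint.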
Focused performance testing
SLOs spell out the expected behavior of applications. But in a practical sense, development teams, test teams, and the business don’t always know the exact requirements for an application until they see how it performs in a production environment. For this reason, Agile processes rely on the production availability lifecycle (PAL) to see how an application actually performs in the real world. The feedback from the production environment helps developers and testers focus on specific problem areas of the application, thereby making the best use of limited resources during short sprint cycles.
Test data preparation
The earlier test data preparation takes place, the more time you have for testing. So performance testers should work with stakeholders in the planning stage to prepare tests and test data. This kind of collaboration is one of the keys to getting a lot of work done in a relatively short sprint. Different types of work must happen simultaneously. In this sense, Agile software development is a bit like just-in-time (JIT) manufacturing. You plan carefully for your needs, and then bring in the resources just when you need them.
Trending
In an Agile environment, it’s important for application owners to see continual improvement in an application over the course of successive sprints. They want to see a favorable trend, where each iteration of the application is better than the last. This makes it all the more important to monitor application performance trends in terms of SLO requirements. Trending reports allow you to give stakeholders regular snapshots of performance, which should ideally show that performance is steadily improving, or at least not degrading. In addition, trending reports mean you do not have to study the detailed analysis of every test run.
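A trending report can be as simple as a per-sprint summary of a key metric against its SLO. The sketch below illustrates the idea; the sprint data and the 800 ms SLO are hypothetical.

```python
# Sketch: summarizing a performance trend across sprints so stakeholders
# can see at a glance whether response times are improving and whether
# each run meets the SLO. The sprint data below is hypothetical.

runs = [
    {"sprint": 1, "p95_ms": 950},
    {"sprint": 2, "p95_ms": 870},
    {"sprint": 3, "p95_ms": 820},
    {"sprint": 4, "p95_ms": 790},
]

def trend_report(runs, slo_ms):
    lines = []
    for prev, cur in zip([None] + runs[:-1], runs):
        # Show the change from the previous sprint, if there is one.
        delta = "" if prev is None else f" ({cur['p95_ms'] - prev['p95_ms']:+d} ms)"
        status = "PASS" if cur["p95_ms"] <= slo_ms else "FAIL"
        lines.append(f"Sprint {cur['sprint']}: p95={cur['p95_ms']} ms{delta} [{status}]")
    return "\n".join(lines)

print(trend_report(runs, slo_ms=800))
```

A report like this shows both the direction of travel (response times falling sprint over sprint) and the moment the application first meets its SLO.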
Reusable and shared testing assets
To accelerate testing and development work, Agile processes should make use of a repository of reusable and shared testing assets. This repository of test scripts gives everyone on the virtual development team, including contractors working on an outsourced basis, access to the same test assets.
Among other benefits, the repository provides efficiencies that come with “follow-the-sun” testing. Test scripts can be created over the course of a day in one geography and then made available to testers in another geography, who will run the test during their business day.
The ability to reuse and share test assets becomes more important with Agile development, where the testing cycle is limited. It allows more work to get done within the available time window.
Automated testing
The use of automated testing tools can speed up the process of performance testing. With the right software in place, you can create a script, make it reusable, and then schedule a test to run in the off hours, when developers are not changing the code you’re testing. This makes it possible to achieve higher levels of software quality in less time.
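One simple way to gate a scheduled run on off-hours is shown below. The window boundaries and the `run_load_test()` hook are hypothetical; a real setup would typically delegate scheduling to the test tool or a job scheduler.

```python
# Sketch: gating an automated performance run on an "off-hours" window
# so tests execute when developers aren't changing the code under test.
# The window boundaries and run_load_test() hook are hypothetical.

from datetime import datetime, time

OFF_HOURS_START = time(22, 0)   # 10 PM
OFF_HOURS_END = time(6, 0)      # 6 AM

def in_off_hours(now: time) -> bool:
    # The window wraps past midnight, so it is "after start OR before end".
    return now >= OFF_HOURS_START or now <= OFF_HOURS_END

def maybe_run(now=None):
    now = now or datetime.now().time()
    if in_off_hours(now):
        print("Off-hours: launching scheduled performance test")
        # run_load_test()  # hypothetical hook into the automated test tool
    else:
        print("Business hours: skipping run")
```

The same reusable script can then be triggered nightly, leaving the morning's results ready for the daily scrum.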
Automated testing helps you meet your regression and performance testing objectives within the tight timeframes of a two- to four-week sprint. This becomes even more important when you consider that developers often hold onto their work until 60 percent of a sprint has elapsed before handing off the build for testing. That doesn’t leave a lot of time for testing.
Continual analysis
Continual analysis is important in Agile processes. Both contributors (“pigs” in scrum terminology) and stakeholders (“chickens” in scrum terminology) need to keep a close eye on the progress of the project, especially when it comes to application functionality and performance. To give them the view they need, performance analysis should be both continual and comprehensive. This ongoing analysis helps pinpoint any problem areas in the application. Analysis extends all the way down to the daily scrums, which include IT infrastructure staff and performance testers as contributors, along with application stakeholders.
Contributors are active members of the sprint team who participate in daily scrums, which give all stakeholders visibility into the current state of the development effort. When all interested team members know the performance of each sprint, they are in a better position to keep the quality of the entire application high. The sooner problems are found, the sooner they can be fixed.
Component testing
In an Agile environment, you will not have an end-to-end application in every sprint. This makes it important to be able to performance test only a portion or a component of an application. Stubbing provides a way to test components. Stubbing simulates parts of an application that are either not written or not available. If an application uses outside, third-party data sources—a common occurrence in a Web 2.0 world—then performance testers and quality assurance (QA) specialists are going to need stubs, because they cannot add load to third-party production servers. By performance testing components inside each sprint, you help ensure that the development effort yields a high-quality application that performs well from end to end.
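A stub can be as simple as a small local HTTP server that returns canned responses in the same shape as the real third-party API, so load tests never touch the provider's production servers. This is a minimal sketch; the endpoint path and payload are hypothetical.

```python
# Sketch: a stub standing in for a third-party service during component
# performance tests, so test load never reaches the real provider.
# The canned payload below is hypothetical.

import json
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer

class ThirdPartyStub(BaseHTTPRequestHandler):
    def do_GET(self):
        # Return a canned response with a shape matching the real API.
        body = json.dumps({"quote": 101.25, "source": "stub"}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):
        pass  # keep load-test output quiet

server = HTTPServer(("127.0.0.1", 0), ThirdPartyStub)  # port 0 = any free port
threading.Thread(target=server.serve_forever, daemon=True).start()
print(f"Stub listening on port {server.server_port}")
```

Pointing the component under test at the stub's address lets the team drive realistic load against it inside the sprint, with fixed, repeatable responses from the "third party."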