Learn how defect management for Performance Testing differs from Functional Testing
Although Performance Testing falls under the broader umbrella of testing, it differs from other testing types in many ways. One of the key differences is the defect management process. Finding defects in performance testing calls for a completely different mindset: it is not like functional testing, where you are responsible for finding deviations from the expected results of your test cases against the requirements stated in the product specification.
In performance testing, you need to adopt a completely different attitude, often playing several roles during the test life cycle. You need the mindset of a business analyst to validate the non-functional requirements (unfortunately, this falls to the tester many a time) and finalize the workload to be tested; the mindset of a tester to decide on the types of tests and identify violations in the application; the mindset of an architect and developer to identify the root cause of problems based on the test observations; and the mindset of an infrastructure capacity planner to evaluate the hardware footprint and its projections to meet the target user loads.
What is Defect Management?
While testing a software application, any deviation from the expected result mentioned in the functional specification document results in a defect or error. Defect management is a means of giving insight into the quality of the software by reporting the defects found during testing. The process varies by methodology, as defects are handled differently in an agile environment than in a waterfall environment.
In functional testing, there are several best practices for reporting test efficiency, test coverage, defect severity, etc., using popular metrics such as % Test Coverage, % Test Efficiency, Defect Discovery Rate, % Test Cases Passed & Failed, First Run Fail Rate and many more. Unfortunately, none of these metrics is applicable to performance testing. So, using defect management tools like Quality Center, Bugzilla, JIRA, etc. for performance testing might not be the appropriate way to track and close bugs.
In performance testing, based on the investigation of the various layers, the metrics listed below need to be measured and reported. It is more appropriate to call each of these a finding or test observation rather than a ‘defect’. Any violation of a non-functional requirement (where one is quantitatively available) can be reported as a finding. For example, if your NFR states that the response time of all transactions must be less than 5 seconds, and during your target load test you observe some transactions exceeding 5 seconds, that can be reported as a finding (see the sketch after the list below).
- Transaction Response Time (measured in seconds)
- Layer-wise / Component-wise response time breakup (measured in seconds)
- System Throughput (measured in transactions per second)
- Server Load (measured in users, page views or hits per unit time)
- Server Resource Utilization (CPU, Memory, Disk & Network)
- Scalability Level (number of peak users supported)
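As a minimal sketch of how such a finding could be flagged, the snippet below (with hypothetical transaction names, sample response times and an assumed 5-second NFR, not taken from any specific tool) scans per-transaction response times and reports the ones that violate the requirement:

```python
# Minimal sketch: flag transactions whose response time violates the NFR.
# The transaction data and the 5-second threshold are illustrative assumptions.

NFR_RESPONSE_TIME_SEC = 5.0  # assumed NFR: every transaction under 5 seconds

# Hypothetical target load test results: transaction name -> response time (seconds)
response_times = {
    "Login": 2.1,
    "Search": 6.4,
    "AddToCart": 3.8,
    "Checkout": 7.9,
}

# Any transaction above the threshold is reported as a finding, not a 'defect'.
findings = {
    name: rt for name, rt in response_times.items()
    if rt > NFR_RESPONSE_TIME_SEC
}

for name, rt in findings.items():
    print(f"Finding: '{name}' took {rt:.1f}s, exceeding the {NFR_RESPONSE_TIME_SEC:.0f}s NFR")
```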
Sometimes it might be necessary to rerun the tests to confirm a behaviour or observation. How much analysis is performed on the test results depends purely on the scope of work. In my view, every performance tester should carry out test analysis using techniques like correlation, scatter plot analysis, trend analysis, drill-down analysis, etc., to provide more insight into the problems and bottlenecks. A minimal correlation example is sketched below.
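As an illustration of one such technique, the sketch below (using made-up sample values, not real test data) computes a simple Pearson correlation between server CPU utilization and transaction response time to suggest whether the two move together:

```python
import statistics

# Hypothetical samples captured at the same intervals during a load test.
cpu_utilization = [35, 42, 55, 63, 71, 80, 88]        # percent
response_time   = [1.2, 1.4, 1.9, 2.6, 3.4, 4.8, 6.1]  # seconds

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length samples."""
    mean_x, mean_y = statistics.mean(xs), statistics.mean(ys)
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var_x = sum((x - mean_x) ** 2 for x in xs)
    var_y = sum((y - mean_y) ** 2 for y in ys)
    return cov / (var_x * var_y) ** 0.5

r = pearson(cpu_utilization, response_time)
print(f"Correlation between CPU utilization and response time: {r:.2f}")
# A value close to +1 hints that response time degrades as CPU saturates,
# pointing further drill-down analysis towards a CPU bottleneck.
```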
You can visit the blog post below to download a copy of “Performance Bottleneck Analysis Made Simple – A Quick Reference Guide for Performance Testers”.
Happy Performance Testing!