Wednesday, September 28, 2016

Importance of NFRs (which have come to be treated as "Negligent Fundamental Requirements").

Let's give life to EFRs / NFRs (often treated as "Negligent Fundamental Requirements") so that user experience can thrive.

Maybe because NFRs (Non-Functional Requirements) have always been neglected, the alternative name EFRs (Extra Functional Requirements) was coined long ago, though it never became prevalent. In the last few years, the digital revolution has brought a lot of importance to User Experience (UX), which can be addressed via NFRs / EFRs.
Nowadays, we see more awareness of this, as everyone now understands the underlying fact: "UX cannot be an optional quality for my digital product, and neither can the EFRs."

What are Extra Functional Requirements (EFRs)?

A Functional Requirement (FR) specifies what a system must do, whereas an Extra Functional Requirement (EFR) describes how well the system does it. EFRs usually specify the quality attributes of a digital product, describing the user experience of the system. They apply to the system as a whole and cannot be attributed to individual features.
With news about lost credibility, competitor wins, and the like often making headlines, nowadays every digital product owner gives a lot of importance to User Experience (UX) and thereby to EFRs. Everyone now understands the mantra: "The success or failure of a digital product is decided mainly by its EFRs."

EFRs usually exist with interrelationships, sometimes making it very challenging to measure the quality of one EFR in isolation. For example, the Performance EFR is related to Scalability, Availability & Capacity; hence certifying a system for Performance alone often seems incomplete without also accounting for those related EFRs.

EFRs can be grouped by their relevance to different types of users. For an end user, Performance & Security are essential EFRs, whereas for a developer, Maintainability & Reusability are of more relevance.

  • EFRs relevant for End Users (for improving User Experience):
    • Performance, Security, Usability, UX, Accessibility, Reliability, Scalability, Availability, Capacity, Flexibility, Interoperability, Compatibility, etc.
  • EFRs relevant for Developers / Testers during the SDLC:
    • Maintainability, Reusability, Testability, Portability, Supportability, Packaging requirements, etc.
Direct Mapping of User Concerns to EFRs:
As EFRs represent the quality characteristics of a system, they can easily be mapped to end-user concerns related to user experience. A few examples are provided below, followed by a small sketch of how such a mapping could be captured in code.
  • Speed & Resource Utilization – Performance & Capacity
  • Unauthorized Access – Security
  • Ease of Use – Usability
  • Likelihood of Failure – Reliability
  • Ease of Change & Repair – Maintainability / Flexibility
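As a rough illustration, here is a minimal Python sketch (all names and wording are hypothetical, not from any standard) of capturing this mapping as data so that a review checklist can be generated from it:

    # Minimal sketch (hypothetical names): capture the user-concern-to-EFR
    # mapping as data and generate one review question per concern.
    CONCERN_TO_EFR = {
        "Speed & Resource Utilization": ["Performance", "Capacity"],
        "Unauthorized Access": ["Security"],
        "Ease of Use": ["Usability"],
        "Likelihood of Failure": ["Reliability"],
        "Ease of Change & Repair": ["Maintainability", "Flexibility"],
    }

    def review_checklist():
        # Print a review question for each end-user concern.
        for concern, efrs in CONCERN_TO_EFR.items():
            print(f"{concern}: have we specified measurable targets for "
                  f"{', '.join(efrs)}?")

    review_checklist()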
EFRs are generally informally stated, often contradictory, difficult to enforce during development, and hard for the customer to evaluate prior to delivery. But this trend is now changing rapidly, as user experience (UX) is seen as a prime factor in digital product success. It is our responsibility to educate our teams, and sometimes our customers, about the importance of NFRs / EFRs. Do not regret missing NFRs anymore; be a catalyst for this change in the projects you are involved in.

“To measure is to know. If you cannot measure it, you cannot improve it.”

Check out the entire article @

http://elitesouls.in/give-life-to-efr-nfr-treated-as-negligent-fundamental-requirements-to-thrive-ux/


Happy EFR Testing!!!

Monday, September 26, 2016

Defect Management in Performance Testing

Learn how defect management for Performance Testing differs from Functional Testing.

Though Performance Testing falls under the umbrella of testing, it is very different from other testing types in many ways. One of the key differences is the defect management process. There is a complete change in attitude towards finding defects in performance testing. It is not like functional testing, where you are responsible for finding deviations from the expected results of your test cases against the requirements in the product specification.

In performance testing, you need to carry a completely different attitude, often playing different roles during the test life cycle:
  • a business analyst, to validate the non-functional requirements (which unfortunately happens many a time) and finalize the workload to be tested;
  • a tester, to decide on the types of tests and identify violations in the application;
  • an architect and developer, to identify the root cause of problems based on test observations;
  • an infrastructure capacity planner, to evaluate the hardware footprint and its projections to meet the target user loads.

What is Defect Management?

While testing a software application, any deviation from the expected result mentioned in the functional specification document results in a defect or error. Defect management is a means of giving insight into the quality of the software by reporting the defects found during testing. It varies in an agile environment, where defects are handled differently than in a waterfall environment.
In functional testing, there are several best practices for reporting test efficiency, test coverage, defect severity, etc., using popular metrics like % Test Coverage, % Test Efficiency, Defect Discovery Rate, % Test Cases Passed & Failed, First-Run Fail Rate, and many more. Unfortunately, none of the above metrics applies to performance testing. So using defect management tools like Quality Center, Bugzilla, JIRA, etc. to track and close performance "bugs" might not be the appropriate approach.

In performance testing, based on the performance investigation of various layers, the metrics listed below need to be measured and reported. It would be more appropriate to call each result a "finding" or "test observation" rather than a "defect". Any violation of the non-functional requirements (where quantitatively available) can be reported as a finding. For example, if your NFR states that the response time of every transaction must be less than 5 seconds, and during your target load test you observe some transactions exceeding 5 seconds, that can be reported as a finding (a small sketch of such a check follows the list).
  • Transaction Response Time (measured in seconds)
  • Layer-wise / Component-wise Response Time Breakup (measured in seconds)
  • System Throughput (measured in Transactions per second)
  • Server Load (measured in Users, Page Views, or Hits per unit time)
  • Server Resource Utilization (CPU, Memory, Disk & Network)
  • Scalability Level (# of peak users supported)
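To make the 5-second example above concrete, here is a minimal Python sketch (the transaction names and timings are hypothetical; the threshold follows the example NFR) that flags violations as findings rather than defects:

    # Minimal sketch: report NFR violations as findings, not defects.
    # Transactions and timings are hypothetical; the 5-second threshold
    # follows the example NFR above.
    NFR_RESPONSE_TIME_SECONDS = 5.0

    # (transaction name, measured response time in seconds) from a load test
    measurements = [
        ("login", 2.1),
        ("search", 6.4),
        ("checkout", 4.9),
        ("report_export", 7.8),
    ]

    for name, elapsed in measurements:
        if elapsed > NFR_RESPONSE_TIME_SECONDS:
            print(f"FINDING: '{name}' took {elapsed:.1f}s, "
                  f"violating the {NFR_RESPONSE_TIME_SECONDS:.0f}s NFR")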

Sometimes it might be required to rerun the tests to confirm a behaviour or observation. How much analysis needs to be performed on the test results depends purely on the scope of work. From my point of view, every performance tester should carry out test analysis using techniques like correlation, scatter-plot analysis, trend analysis, drill-down analysis, etc. to provide more insight into the problems and bottlenecks. A small sketch of a correlation check follows.
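For instance, a strong positive correlation between concurrent users and response time hints that response time degrades with load. A minimal sketch with hypothetical sample data (statistics.correlation requires Python 3.10+):

    # Minimal sketch of correlation analysis on load test results.
    # The data points are hypothetical; an r close to 1.0 suggests
    # load-driven response time degradation worth drilling into.
    from statistics import correlation  # Python 3.10+

    concurrent_users = [100, 200, 300, 400, 500]
    avg_response_secs = [1.2, 1.9, 3.1, 4.8, 7.5]

    r = correlation(concurrent_users, avg_response_secs)
    print(f"Pearson r between load and response time: {r:.2f}")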

You can visit the blog post below to download a copy of “Performance Bottleneck Analysis Made Simple – A Quick Reference Guide for Performance Testers”.



Happy Performance Testing!