Wednesday, September 28, 2016

The importance of NFRs (which have become “Negligent Functional Requirements”).

Let’s give life to EFRs / NFRs (often treated as “Negligent Fundamental Requirements”) so that user experience can thrive.

Maybe because NFRs (Non-Functional Requirements) have always been neglected, the alternative name EFRs (Extra Functional Requirements) came into the picture long back, though it never became prevalent. In the last few years, the digital revolution has brought a lot of importance to User Experience (UX), which can be addressed via NFRs / EFRs.
Nowadays, we see more awareness of this, as everyone now understands the underlying fact: “UX cannot be an optional quality for my digital product, & neither can the EFRs.”

What are Extra Functional Requirements (EFRs)?

A Functional Requirement (FR) specifies what a system needs to do, whereas an Extra Functional Requirement (EFR) describes how the system will do it. EFRs usually specify the quality attributes of a digital product, describing the user experience of the system. They apply to the system as a whole & cannot be attributed to individual features.
With news about lost credibility, competitor wins, etc. often becoming headlines, nowadays every digital product owner gives a lot of importance to User Experience (UX) & thereby to EFRs. Everyone now understands the mantra: “The success or failure of a digital product is mainly decided by its EFRs.”

EFRs usually exist with interrelationships, sometimes making it very challenging to measure the quality of an EFR in isolation. For example, the Performance EFR is related to Scalability, Availability & Capacity; hence, certifying the system for the Performance EFR alone often seems incomplete without accounting for the EFRs related to Scalability, Availability & Capacity.

EFRs are grouped considering their relevance to the type of user. For an end user, Performance & Security are the essential EFRs, whereas for a developer, Maintainability & Reusability are of more relevance.

  • EFRs relevant for End Users (for improving User Experience):
    • Performance, Security, Usability, UX, Accessibility, Reliability, Scalability, Availability, Capacity, Flexibility, Interoperability, Compatibility, etc.
  • EFRs relevant for Developers / Testers during the SDLC:
    • Maintainability, Reusability, Testability, Portability, Supportability, Packaging requirements, etc.
Direct Mapping of User Concerns to EFRs:
As EFRs represent the quality characteristics of a system, they can easily be mapped to end user concerns related to user experience. A few examples are provided below.
  • Speed & Resource Utilization – Performance & Capacity
  • Unauthorized Access – Security
  • Ease of Use – Usability
  • Likelihood of Failure – Reliability
  • Ease of Change & Repair – Maintainability / Flexibility
EFRs are generally stated informally, are often contradictory, & are difficult to enforce during development & to evaluate for the customer prior to delivery. But this trend is changing fast in recent days, as user experience (UX) is seen as a prime factor for digital product success. It is our responsibility to educate our team & sometimes our customer about the importance of NFRs / EFRs. Do not regret missing NFRs anymore. Be a catalyst to bring this change to the projects you are involved in.

“To measure is to know. If you cannot measure it, you cannot improve it.”

Check out the entire article @

http://elitesouls.in/give-life-to-efr-nfr-treated-as-negligent-fundamental-requirements-to-thrive-ux/


Happy EFR Testing!!!

Monday, September 26, 2016

Defect Management in Performance Testing

Learn how defect management for Performance Testing differs from that for Functional Testing.

Though Performance Testing is considered under the umbrella of Testing, it is very different from other testing types in many ways. One of the key differences is the defect management process. There is a complete change in the attitude towards finding defects in performance testing. It is not like functional testing, where you are responsible for finding deviations of your test cases from the expected results, against the requirements mentioned in the product specification.

In Performance Testing, you need to carry a completely different attitude, often playing different types of roles during the test life cycle. You need the attitude of a business analyst to validate the non-functional requirements (this happens many a time, unfortunately) & finalize the workload to be tested; the attitude of a tester to decide on the types of tests & identify the violations in the application; the attitude of an architect & developer to identify the root cause of the problems based on the test observations; & the attitude of an infrastructure capacity planner to evaluate the hardware footprint & its projections to meet the target user loads.

What is Defect Management?

While testing a software application, any deviation from the expected result mentioned in the functional specification document results in a defect or error. Defect management is a means to give insight into the quality of the software by reporting the defects found during testing. It varies in an agile environment, as defects are handled differently there than in a waterfall environment.
In functional testing, there are several best practices for reporting test efficiency, test coverage, defect severity, etc. using popular metrics like % Test Coverage, % Test Efficiency, Defect Discovery Rate, % Test Cases Passed & Failed, First Run Fail Rate & many more. But unfortunately, none of the above metrics is applicable to performance testing. So, obviously, using defect management tools like Quality Center, Bugzilla, JIRA, etc. to track & close performance “bugs” might not be the appropriate way.

In Performance Testing, based on the performance investigation of the various layers, here are a few metrics that need to be measured & reported. It would be more appropriate to call each one a finding or test observation rather than a ‘defect’. Any violation of the non-functional requirements (if quantitatively available) can be reported as a finding. For example, if your NFR expects the response time of all transactions to be less than 5 seconds & during your target load test you observe some transactions going beyond 5 seconds, this can be reported as a finding (see the sketch after the list below).
  • Transaction Response Time (measured in seconds)
  • Layer-wise / Component-wise response time breakup (measured in seconds)
  • System Throughput (measured in Transactions per second)
  • Server Load (measured in Users, Page Views or Hits per unit time)
  • Server Resource Utilization (CPU, Memory, Disk & Network)
  • Scalability Level (# peak users supported)
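
As a minimal sketch of that example, the snippet below checks observed response times against the 5-second NFR & reports violations as findings; the transaction names & timings are illustrative assumptions, not real test data:

```python
# Minimal sketch: flag NFR violations as findings rather than "defects".
# The 5-second SLA & the sample timings below are illustrative assumptions.
RESPONSE_TIME_SLA_SECONDS = 5.0

# Transaction name -> observed 90th percentile response time (seconds).
observed = {
    "Login": 2.1,
    "Search": 6.4,
    "Checkout": 7.9,
    "Logout": 1.2,
}

findings = [
    f"FINDING: '{name}' responded in {seconds:.1f}s "
    f"(NFR: < {RESPONSE_TIME_SLA_SECONDS:.0f}s)"
    for name, seconds in observed.items()
    if seconds > RESPONSE_TIME_SLA_SECONDS
]

for finding in findings:
    print(finding)
```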

Sometimes, it might be required to rerun the tests to confirm a behavior or observation. How much analysis needs to be performed on the test results is purely based on the scope of work. From my point of view, every Performance Tester should carry out test analysis using various techniques like correlation, scatter plot analysis, trend analysis, drill-down analysis, etc. to provide more insight into the problems / bottlenecks.
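
As one hedged illustration of correlation analysis, the sketch below uses made-up monitoring samples to check how strongly response time moves with CPU utilization; a coefficient near +1 hints that the CPU-bound tier deserves the drill-down:

```python
import numpy as np

# Illustrative monitoring samples captured at the same timestamps (assumed data).
response_time_s = np.array([1.1, 1.3, 2.0, 3.5, 4.8, 5.9])  # seconds
cpu_utilization = np.array([35, 42, 55, 71, 88, 95])         # percent

# Pearson correlation coefficient between the two series.
r = np.corrcoef(response_time_s, cpu_utilization)[0, 1]
print(f"Correlation between response time & CPU utilization: {r:.2f}")
# A value close to +1 suggests response time degrades as the CPU saturates,
# pointing the drill-down analysis towards the CPU-bound tier.
```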

You can visit the blog post below to download a copy of “Performance Bottleneck Analysis Made Simple – A quick reference guide for Performance Testers”.



Happy Performance Testing!

Monday, August 8, 2016

Performance Engineering – Dusting off Subjectivity & Bringing Light to a Hidden Track


How can that be possible? Well, the meaning of “Performance Engineer” is subjective as of now. Let’s try to break it down. Performance is a quality attribute that cannot be achieved in a single SDLC phase. To assure performance, you need many players across the SDLC phases, but you can’t call everyone a Performance Engineer. We need an architectural / design expert, a UX designer, a development lead, a performance tester, a capacity planner, etc. to assure a digital product’s performance & thereby the user experience.

Performance Engineering is a discipline that involves systematic practices, techniques & activities during each phase of the Software Development Life Cycle (SDLC) to meet the performance requirements. It strives to build in performance standards by focusing on the architecture, design & implementation choices.

Ideally speaking, performance being one of the CTQ (Critical To Quality) attributes, it needs to be thought about proactively throughout the software development phases, right from the requirements phase till ongoing production maintenance. In this proactive mode, a Performance Development Engineer (a solution architect with technology expertise, aware of the right architectural & design pattern choices for system performance & scalability) should be involved right from the initial SDLC phase to ensure the system is built to performance standards.

In a reactive mode, system performance is not thought about till the end of the implementation / functional testing phase, leading to reverse engineering of the digital product & sometimes to architectural / design level changes. In either proactive or reactive mode, when a Performance Test Engineer tests & assesses the developed system for its performance & the SLAs are not met, (s)he performs a deep dive analysis on the specific layer(s) that don’t meet the SLAs. Depending upon the type & complexity of the bottleneck being analyzed, the deep dive analysis & tuning can be done by oneself, or specialized SMEs like a DB architect, WebSphere specialist, Network Engineer, etc. can be involved to analyze the performance issue in detail & provide tuning recommendations. (Note: I hope we all accept the fact that a Performance Tester should have the capability to test & assess the performance problems in the system at all layers & report RCA findings, whereas a Performance Engineer is an experienced person, typically a technology expert with good bottleneck analysis / tuning skills, who can provide recommendations for fixing the issue. Ideally, a sound Performance Tester, upon gaining good experience, develops the skills of a Performance Engineer.)

Gaining Clarity on the Contradiction

Here comes the disagreement: if you notice above, in both proactive & reactive modes, a Performance Engineer is involved. But note, I have called the former a Performance Development Engineer & the latter a Performance Test Engineer. The skills of a Performance Development Engineer can be very different from those of a Performance Test Engineer.

But we need to remember that we can have both a Performance Development Engineer & a Performance Test Engineer available from the early SDLC phases, as they are not duplicating skills; they actually complement each other. This is very similar to the scenario where testing done by the development team (Unit Testing) & testing done by the testing team (System Testing) each has its own objectives & advantages. They complement each other & try to find as many defects as early as possible to reduce the cost of fixing them.

I look at a Performance Test Engineer as a Performance Assurance Expert: (s)he need not be a technology expert (building the system with performance is the job of the Performance Development Engineer); rather, (s)he needs to look at the digital product from all perspectives to validate whether it will create a great user experience by providing the expected performance.

Apart from knowledge of the various testing, monitoring & APM tools, (s)he needs to be aware of technology-agnostic performance principles & know how to detect performance issues by strategizing the right types of performance tests with the right workload model. With thorough performance bottleneck analysis skills across all layers, (s)he also needs a matured thought process on when the hardware will saturate / reach its thresholds & affect scalability (though (s)he may not be a capacity planning expert).

Hidden Track of Performance Engineering

Though Performance Engineering is itself very broad, there is still something in it that is hidden or forgotten. By hidden / forgotten, I mean it has not gained comparable popularity & we don’t easily find people with these skills very often.

Being the performance assurance expert, a Performance Engineer also needs to be capable of employing scientific / mathematical principles to engineer various test activities (like verifying whether a test or workload is mathematically valid, forecasting the peak traffic hour workload, mapping test results from a low-end environment to PROD hardware, etc.), to perform prediction or performance extrapolation, & to bring in mathematical strategies to model & validate performance even before the system is completely built.

Generally speaking, with respect to the Test Competency, every Performance Tester aspires to become a Performance Engineer who is well versed in bottleneck analysis, profiling, tuning, etc. But I have met very few Performance Testers who aspire to gain knowledge of Queuing Theory principles to get into this hidden world of performance prediction, modeling & application capacity planning. This track is the base for the onset of high-end capacity planning tools & the recent boom in performance analytics & predictive / prescriptive modeling on performance data. Many businesses have great demand for this skillset.
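
As a small, hedged taste of that hidden track, the sketch below applies the classic single-queue (M/M/1) approximation R = S / (1 - U) to predict how response time blows up as utilization climbs; the service time & utilization levels are made-up numbers:

```python
# Minimal sketch of queueing-theory-based prediction, assuming an open
# M/M/1-style queue: R = S / (1 - U), where S is the service time per
# request & U is the server utilization. Figures are illustrative.
SERVICE_TIME_S = 0.2  # seconds of pure service per request (assumed)

for utilization in (0.5, 0.7, 0.9, 0.95):
    response_time = SERVICE_TIME_S / (1.0 - utilization)
    print(f"U = {utilization:.0%}: predicted response time ~ {response_time:.2f}s")

# The non-linear blow-up near saturation (0.40s at 50% vs 4.00s at 95%)
# is exactly the kind of insight this track offers before a single test runs.
```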

Performance Test Engineers need to have plans to expand their knowledge in this space to have a successful & rewarding learning career.

If you disagree, don’t forget to share your point of view, for the benefit of Performance Testers / Engineers. Together, let’s strive to create more awareness, remove any subjective usage of the terms & hold the torch over the hidden tracks.


Happy Performance Testing & Engineering!!

Wednesday, August 3, 2016

Precise Workload Analysis in Performance Testing & Application Capacity Planning – The Secret Sauce




Many Performance Testers / Engineers underestimate the importance of analyzing historical end user access patterns while developing the workload model for performance testing / application capacity planning. In the majority of my audits doing RCA on production performance issues, the culprit has been a wrong workload. The performance test strategy talks about the various types of tests that will be planned, the infrastructure components that will be monitored, the types of analysis that will be performed, etc., but when it comes to the workload, it is always expected to be provided by the customer or business analysts. Definitely, our customers / business analysts know who the end users are & the frequently used business flows, but it is the sole responsibility of the Performance Tester / Engineer to understand (& sometimes educate customers about) the additional detailed analysis required to increase the accuracy of our performance tests.

We need to remember the fact that if the workload selected for running the performance tests is not reflective of the realistic end user access pattern, the entire test results will go wrong & put the release in jeopardy. Remember the below points during your workload analysis:
  • Analyze your end user access patterns. Try to understand your users’ behaviors.
  • Identify your average & peak point workloads in your historical trends.
  • Define your peak point workload both quantitatively & qualitatively.
  • During peak point pattern analysis, take care to exclude outliers (a sketch follows this list).
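
A minimal sketch of that peak-point selection, assuming hourly request counts pulled from access logs (the numbers & the three-sigma outlier rule are illustrative assumptions to be tuned per application):

```python
import numpy as np

# Hourly request counts from historical access logs (assumed sample data);
# the lone 50,000 spike represents a one-off anomaly such as a bot crawl.
hourly_requests = np.array([1200, 1500, 2100, 8000, 7800, 8200, 50000,
                            7900, 2400, 1600, 1300, 1100])

# Crude outlier rule (an assumption, tune for your data): drop hours more
# than three standard deviations above the mean before picking the peak.
threshold = hourly_requests.mean() + 3 * hourly_requests.std()
cleaned = hourly_requests[hourly_requests <= threshold]

print(f"Average workload : {cleaned.mean():.0f} requests/hour")
print(f"Peak workload    : {cleaned.max():.0f} requests/hour")  # 8200, not 50000
```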

Now create the workload model(s) to be used for your performance tests. You can have more than one workload model: the workload used for your load test can be different from that of your endurance test or stress test. It all depends on your end user access patterns. Remember, knowing the basics of Queuing Theory (the operational laws) can help you validate the correctness of your workload model & even, to an extent, whether your peak hour SLA is valid (see the sketch below).
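
For instance, a minimal Little’s Law (N = X × R) sanity check, with illustrative numbers, can tell you whether the concurrency, throughput & response time quoted in a workload model are even mutually consistent:

```python
# Sanity check using Little's Law (an operational law): N = X * (R + Z),
# where N = concurrent users, X = throughput, R = response time & Z = think
# time. All figures below are illustrative assumptions.
claimed_users = 500        # concurrency promised in the workload model
throughput_tps = 20.0      # peak transactions per second expected
response_time_s = 3.0      # average response time per transaction
think_time_s = 12.0        # average user think time between transactions

implied_users = throughput_tps * (response_time_s + think_time_s)
print(f"Little's Law implies ~{implied_users:.0f} concurrent users")

# 20 tps * 15 s = 300 users, far from the claimed 500: the workload model
# (or the peak hour SLA behind it) is internally inconsistent & needs rework.
```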

If you are dealing with a very business-critical & high-availability application where it’s really worth it, spend time understanding the underlying statistical distribution of your peak traffic hour workloads. For a web application accessed by independent, geographically distributed users, the workload usually falls into a Poisson distribution or a self-similar distribution. In simple terms, deciding which distribution my application workload belongs to is about analyzing how many bursts & spikes my peak hour workload has. Representing the burstiness of your traffic using a metric called the Hurst exponent, & employing various techniques to quantify the Hurst value, will confirm which statistical distribution your application falls into & how much your peak hour workload can vary in future (a rough estimator is sketched below).
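
One common way to quantify the Hurst value is rescaled-range (R/S) analysis; the rough sketch below, run on synthetic traffic, estimates H from the slope of log(R/S) versus log(window size). H ≈ 0.5 suggests Poisson-like (uncorrelated) arrivals, while H > 0.5 suggests bursty, self-similar traffic:

```python
import numpy as np

def hurst_rs(series, min_window=8):
    """Estimate the Hurst exponent via rescaled-range (R/S) analysis."""
    series = np.asarray(series, dtype=float)
    n = len(series)
    window_sizes, rs_means = [], []
    size = min_window
    while size <= n // 2:
        rs_values = []
        for start in range(0, n - size + 1, size):
            window = series[start:start + size]
            deviation = np.cumsum(window - window.mean())
            spread = deviation.max() - deviation.min()  # range R of the walk
            scale = window.std()                        # standard deviation S
            if scale > 0:
                rs_values.append(spread / scale)
        window_sizes.append(size)
        rs_means.append(np.mean(rs_values))
        size *= 2
    # log(R/S) grows roughly as H * log(window size); the fitted slope is H.
    hurst, _ = np.polyfit(np.log(window_sizes), np.log(rs_means), 1)
    return hurst

# Synthetic "requests per second" series: for uncorrelated Poisson-like
# traffic the estimate should land near 0.5 (real bursty traffic runs higher).
rng = np.random.default_rng(0)
traffic = rng.poisson(lam=100, size=4096)
print(f"Estimated Hurst exponent: {hurst_rs(traffic):.2f}")
```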

For Application Capacity Planning, choosing the right workload peak points becomes very essential. Unless you choose a series of peak point workloads from your historical statistics & understand quantitatively & qualitatively what the workload really comprises, you will not succeed in accurately forecasting the hardware demands for your application. Applying analytical modeling techniques to answer the what-if scenarios demanded by business is possible only with a carefully selected workload. Without doing this basic homework, you cannot rightly size your infrastructure for the projected business loads.

Also, most capacity planning techniques require actual application performance benchmarks for careful extrapolation / forecasts. Performance benchmarking becomes very important in capacity planning to understand the hardware resource requirements (represented as service demands) & other performance characteristics of your application. Using the right workload to carry out performance benchmarking is the first step towards successful application capacity planning.
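
As a final hedged illustration, the service demands measured from such a benchmark bound the maximum throughput via the Utilization Law (U_i = X × D_i, hence X_max = 1 / max D_i); the resource names & figures below are assumptions:

```python
# Sketch of service-demand-based capacity bounds (operational analysis).
# Service demand D_i = busy time of resource i per transaction, measured
# from a benchmark run. All figures below are illustrative assumptions.
service_demands_s = {
    "web CPU": 0.015,
    "app CPU": 0.040,
    "db CPU":  0.055,
    "db disk": 0.025,
}

# Utilization Law: U_i = X * D_i. No resource can be more than 100% busy,
# so the bottleneck resource caps throughput at X_max = 1 / max(D_i).
bottleneck, demand = max(service_demands_s.items(), key=lambda kv: kv[1])
print(f"Bottleneck: {bottleneck} (D = {demand * 1000:.0f} ms/txn)")
print(f"Upper bound on throughput: {1.0 / demand:.1f} transactions/sec")

projected_tps = 15.0  # projected business load (assumed)
for resource, d in service_demands_s.items():
    print(f"{resource}: projected utilization {projected_tps * d:.0%}")
```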

Happy Workload Analysis & Modeling!!

Thursday, July 7, 2016

Interview Ethics for Performance Testers & Engineers


With the experience of interviewing 500+ candidates in the line of Performance Testing & Engineering during the last 2 years for junior, mid-senior & senior roles, I would like to share my observations & recommendations for a pleasing experience for the interviewer, interviewee & talent acquisition team involved in the hiring process.

Particularly in Performance Testing, the roles & responsibilities are ambiguous & are often confused with engineering skills. The terms ‘Tester’ & ‘Engineer’ are used interchangeably in many IT corporates, leading to additional confusion.

The below recommendations will help in meeting the interview ethics of both the firm & the candidate.


For Interviewer:

Value the time & ensure you create a good learning experience for yourself & for the candidate while doing the technical assessment. You are not evaluating whether the candidate is good or bad, but assessing technical skills for their fitment against the organization’s expectations for the available role.

  • Always greet the candidate & start with a casual question like ‘How was your day?’ or ‘How long did it take to travel to the office location?’. This start will bring the candidate into the comfort zone.
  • Give space to understand the background & career history of the candidate by asking for a self-introduction before you start your questions. This space will increase their confidence to speak out.
  • Try to assess the conceptual knowledge level of the candidate before jumping into tool knowledge.
  • Quickly evaluate the performance bottleneck analysis skill level – assess whether the candidate can only understand tool reports, can perform a first level of end-to-end problem analysis, or can perform deep dive analysis of a specific tier or all tiers. This will clarify the testing & engineering skillsets he/she possesses.
  • Within the first 20 minutes of your discussion, you need to get a grip on the candidate’s overall skillset in Performance Testing / Engineering.
  • Assess the practical knowledge of the candidate for the next 10 minutes by speaking about their responsibilities in the projects mentioned in the resume. This evaluation will reveal whether they have only theoretical knowledge from reading some articles or have real practical exposure from trying things in projects.
  • For the next 15 minutes, try assessing the problem solving capability on a problem statement of the kind the role is expected to handle.
  • It’s very important to end the discussion only after spending at least 15 minutes understanding the candidate’s interest areas, likes & dislikes about the current organization, other activities apart from project experience, the actual reason for leaving the current organization, key challenges in the current team/project, expectations for the next job, etc., followed by a quick brief on your organization, your team & the technical responsibilities & expectations for the role.
  • In 75% of interview discussions, you will know within the first 15 minutes whether the candidate is worth interviewing for the role. But remember, not everyone is lucky enough to get the right opportunities or good mentors at the right time. Ensure you give the candidate a good interview experience instead of winding up your discussion abruptly. At the least, try to be a good mentor & speak to them for the next 20 minutes so that they get some key takeaways from your meeting.
  • End the discussion by saying your HR folks will get back. Don’t extend the meeting for a long time; cover any pending areas of assessment in the next round. Do not forget to write down the areas you have assessed & your feedback & observations, along with quantitative ratings that will help the technical panelist / HR in further rounds.

For Interviewee:

  • Ensure your updated resume is made available for the interview discussion.
  • Do a self-study of your strengths & weaknesses & how they have impacted your work style in your current organization before attending the interview.
  • Do not jump between jobs with the sole motivation of increasing your financial package. In the long run, it will not help in having a successful career.
  • Do not make it a practice to decline the offer letter on the joining day. If you have multiple offers, you can definitely take time to decide on the best & decline the rest. Respect the organization that has spent time to assess & offer you a role in their team. Have the courage to inform the HR team about your plans with advance notice.
  • Just because you carry multiple offer letters, don’t spoil your attitude towards work. Learning is a continuous process; develop the courage to work outside your comfort zone.
  • Be honest while sharing the reasons for your job change, any intermediate breaks, your expectations for the next job, career aspirations, etc. Never ever add any fake details to your resume.

For HR / Talent acquisition team:

  • Clearly quote the job description in bullet points, emphasizing the responsibilities & skill requirements expected, as there is constant confusion between tester & engineer roles.
  • Amidst your busy schedule, if you can manage your time well enough to schedule the interview with the candidate, understand their expectations & background, & send multiple gentle reminders to have them at the venue on time, please also spend an additional 5 minutes to inform or at least mail the candidate if they are not selected in any of the rounds.