Wednesday, September 28, 2016

Importance of NFRs (which have come to be treated as Negligent Functional Requirements).

Let's give life to EFRs / NFRs (treated as Negligent Fundamental Requirements) so that user experience can thrive.

Maybe because NFRs (Non-Functional Requirements) are always treated as negligible, the alternative name EFRs (Extra Functional Requirements) came into the picture long back, though it never became prevalent. In recent years, the digital revolution has brought a lot of importance to User Experience (UX), which can be addressed via NFRs / EFRs.
Nowadays we see more awareness of this, as everyone now understands the underlying fact:
"UX cannot be an optional quality for my digital product, and neither can the EFRs."

What are Extra Functional Requirements (EFRs)?

A Functional Requirement (FR) specifies what a system needs to do, whereas an Extra Functional Requirement (EFR) describes how the system will do it. EFRs usually specify the quality attributes of a digital product, describing the user experience of the system. They apply to the system as a whole & cannot be attributed to individual features.
With news about lost credibility, competitor wins, etc. often becoming headlines, nowadays every digital product owner gives a lot of importance to User Experience (UX) & thereby to EFRs. Everyone now understands the mantra: "the success or failure of a digital product is mainly decided by its EFRs."

EFRs usually exist with interrelationships, sometimes making it very challenging to measure the quality of one EFR in isolation. For example, the Performance EFR is related to Scalability, Availability & Capacity; hence certifying a system for the Performance EFR often seems incomplete without accounting for the EFRs related to Scalability, Availability & Capacity.

EFRs are grouped considering their relevance to the type of user. For an end user, Performance & Security are essential EFRs, whereas for a developer, Maintainability & Reusability are of more relevance.

  • EFRs relevant for End Users (for improving User Experience):
    • Performance, Security, Usability, UX, Accessibility, Reliability, Scalability, Availability, Capacity, Flexibility, Interoperability, Compatibility, etc.
  • EFRs relevant for Developers / Testers during the SDLC:
    • Maintainability, Reusability, Testability, Portability, Supportability, Packaging requirements, etc.
Direct Mapping of User Concerns to EFRs:
As EFRs represent the quality characteristics of a system, they can be easily mapped to end user concerns related to user experience. A few examples are provided below.
  • Speed & Resource Utilization – Performance & Capacity
  • Unauthorized Access – Security
  • Ease of Use – Usability
  • Likelihood of Failure – Reliability
  • Ease of Change & Repair – Maintainability / Flexibility
EFRs are generally stated informally, are often contradictory, and are difficult to enforce during development & to evaluate for the customer prior to delivery. But in recent days this trend is changing rapidly, as user experience (UX) is being seen as a prime factor for digital product success. It is our responsibility to educate our teams & sometimes our customers about the importance of NFRs / EFRs. Do not regret missing NFRs anymore. Be a catalyst for bringing this change to the projects you are involved in.

“To measure is to know. If you cannot measure it, you cannot improve it.”

Check out the entire article @

http://elitesouls.in/give-life-to-efr-nfr-treated-as-negligent-fundamental-requirements-to-thrive-ux/


Happy EFR Testing!!!

Monday, September 26, 2016

Defect Management in Performance Testing

Learn how defect management for Performance Testing differs from that for Functional Testing.

Though Performance Testing is considered under the umbrella of Testing, it is very different from other testing types in many ways. One of the key differences is the defect management process. There is a complete change in attitude towards finding defects in performance testing. It is not like functional testing, where you are responsible for finding deviations from the expected results of your test cases against the requirements mentioned in the product specification.

In performance testing, you need to carry a completely different attitude, often playing different types of roles during the test life cycle. You need the attitude of a business analyst to validate the non-functional requirements (this, unfortunately, happens many a time) & finalize the workload to be tested; the attitude of a tester to decide on the types of tests & identify violations in the application; the attitude of an architect & developer to identify the root cause of problems based on the test observations; & the attitude of an infrastructure capacity planner to evaluate the hardware footprint & its projections to meet the target user loads.

What is Defect Management?

While testing a software application, any deviation from the expected result mentioned in the functional specification document results in a defect or error. Defect management is a means of giving insight into the quality of the software by reporting the defects found during testing. It varies in an agile environment, as defects are handled differently there than in a waterfall environment.
In the case of functional testing, there are several best practices for reporting test efficiency, test coverage, defect severity, etc. using popular metrics like % Test Coverage, % Test Efficiency, Defect Discovery Rate, % Test Cases Passed & Failed, First Run Fail Rate & many more. But unfortunately, none of the above metrics are applicable to performance testing. So using defect management tools like Quality Center, Bugzilla, JIRA, etc. for performance testing might not be the appropriate way to track & close bugs.

In performance testing, based on the performance investigation of the various layers, here are a few metrics that need to be measured & reported (a small sketch of turning violations into findings follows the list below). It would be more appropriate to call each of these a finding or test observation rather than a 'defect'. Any violation of the non-functional requirements (where quantitatively available) can be reported as a finding. For example, if your NFR expects the response time of all transactions to be less than 5 seconds & during your target load test you observe some transactions going beyond 5 seconds, this can be reported as a finding.
  • Transaction response time (measured in seconds)
  • Layer-wise / component-wise response time breakup (measured in seconds)
  • System throughput (measured in transactions per second)
  • Server load (measured in users, page views or hits per unit time)
  • Server resource utilization (CPU, memory, disk & network)
  • Scalability level (# peak users supported)
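
To make this concrete, here is a minimal Python sketch (transaction names & numbers are hypothetical, not from any real project) of turning observed response times into reportable findings against a 5-second SLA:

```python
# A minimal sketch: flagging response-time SLA violations as
# "findings" rather than defects. All values are hypothetical.
RESPONSE_TIME_SLA_SECONDS = 5.0  # assumed NFR: all transactions < 5 s

# 90th-percentile response times observed during the target load test
observed = {
    "Login": 2.1,
    "Search": 6.4,
    "Checkout": 5.8,
    "Logout": 1.2,
}

findings = [
    f"{txn}: {rt:.1f}s exceeds the {RESPONSE_TIME_SLA_SECONDS:.0f}s SLA"
    for txn, rt in observed.items()
    if rt > RESPONSE_TIME_SLA_SECONDS
]

for finding in findings:
    print("FINDING:", finding)
```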

Sometimes it might be required to rerun the tests to confirm a behavior or observation. How much analysis needs to be performed on the test results is purely based on the scope of work. From my point of view, every Performance Tester should carry out test analysis using various techniques like correlation, scatter plot analysis, trend analysis, drill-down analysis, etc. to provide more insight into the problems / bottlenecks.
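
As an illustration of correlation analysis, here is a minimal sketch (hypothetical sampled data; statistics.correlation needs Python 3.10+) that checks how strongly response time moves with CPU utilization:

```python
# A minimal sketch of correlation analysis between server CPU
# utilization and transaction response time, sampled at the same
# intervals during a test. Data values are hypothetical.
import statistics

cpu_util = [35, 42, 55, 63, 71, 80, 88, 93]           # % CPU per interval
resp_time = [1.1, 1.2, 1.6, 1.9, 2.6, 3.8, 5.9, 8.4]  # seconds

r = statistics.correlation(cpu_util, resp_time)  # Pearson's r
print(f"CPU vs response time correlation: {r:.2f}")
# A value near +1 suggests response time degrades as CPU saturates,
# pointing the drill-down analysis towards a CPU bottleneck.
```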

You can visit the below blog post to download a copy of “Performance Bottleneck Analysis Made Simple – A quick reference guide for Performance Testers”.



Happy Performance Testing!

Monday, August 8, 2016

Performance Engineering – Dusting Out Subjectivity & Bringing Light to the Hidden Track


How can that be possible? Yet the meaning of 'Performance Engineer' is subjective as of now. Let's try to break it down. Performance is a quality attribute that cannot be achieved in a single SDLC phase. For assuring performance, you need many players across the SDLC phases, but you can't call all of them Performance Engineers. We need an architecture / design expert, a UX designer, a development lead, a performance tester, a capacity planner, etc. to assure a digital product's performance & thereby its user experience.

Performance Engineering is a discipline that involves systematic practices, techniques & activities during each phase of the Software Development Life Cycle (SDLC) to meet the performance requirements. It strives to build in performance standards by focusing on the architecture, design & implementation choices.

Ideally speaking, performance being one of the CTQ (Critical To Quality) attributes, it needs to be thought about proactively throughout the software development phases, right from the requirements phase till ongoing production maintenance. In this proactive mode, a Performance Development Engineer (a solution architect with the technology expertise to make the right architectural & design pattern choices for system performance & scalability) should be involved right from the initial SDLC phase to ensure the system is built to performance standards.

In reactive mode, system performance is not thought about till the end of the implementation / functional testing phase, leading to reverse engineering of the digital product & sometimes to architecture/design level changes. In either mode, when a Performance Test Engineer tests & assesses the developed system for its performance & the SLAs are not met, he performs a deep-dive analysis on the specific layer(s) that don't meet the SLAs. In this case, depending upon the type & complexity of the bottleneck being analyzed, the deep-dive analysis & tuning can be done by the engineer himself, or specialized SMEs like a DB architect, WebSphere specialist, Network Engineer, etc. can be involved to analyze the performance issue in detail & provide tuning recommendations. (Note: I hope we all accept the fact that a Performance Tester should have the capability to test & assess performance problems in the system at all layers & report RCA findings, whereas a Performance Engineer is an experienced person, often a technology expert with good bottleneck analysis/tuning skills, who can provide recommendations for fixing the issue. Ideally, a sound Performance Tester, upon gaining good experience, develops the skills of a Performance Engineer.)

Gaining Clarity on the Contradiction

Here comes the disagreement – if you notice above, a Performance Engineer is involved in both the proactive & reactive modes. But note that I have called the former a Performance Development Engineer & the latter a Performance Test Engineer. The skills of a Performance Development Engineer can be very different from those of a Performance Test Engineer.

But we need to remember that we can have both a Performance Development Engineer & a Performance Test Engineer available from the early SDLC phases, as they are not duplicating skills – they actually complement each other. This is very similar to how testing done by the development team (Unit Testing) & testing done by the testing team (System Testing) have their own objectives & advantages: they complement each other & try to find as many defects as early as possible to reduce the cost of fixing them.

I look at a Performance Test Engineer as a Performance Assurance Expert: (s)he need not be a technology expert (building a system with performance is the job of the Performance Development Engineer); rather, (s)he needs to look at the digital product from all perspectives to validate whether it will create a great user experience by providing the expected performance.

Apart from knowledge of various testing, monitoring & APM tools, (s)he needs to be aware of technology-agnostic performance principles & know how to detect performance issues by strategizing the right types of performance tests with the right workload model. Along with thorough performance bottleneck analysis skills across all layers, (s)he needs a matured thought process on when the hardware will saturate / reach its thresholds & affect scalability (though (s)he may not be a capacity planning expert).

Hidden Track of Performance Engineering

Though Performance Engineering is itself very broad, there is still something within it that is hidden or forgotten. By hidden/forgotten I mean that it has not gained much popularity comparatively, & we don't often find people with these skills.

Being the performance assurance expert, a Performance Engineer also needs to be capable of employing scientific/mathematical principles to engineer the various test activities (like verifying whether a test or workload is mathematically valid, forecasting the peak traffic hour workload, mapping test results from a low-end environment to PROD hardware, etc.), to perform prediction or performance extrapolation, & to bring in mathematical strategies to model & validate performance even before the system is completely built.
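
As a small taste of this track, here is a minimal sketch of extrapolating response time towards saturation, assuming the bottleneck resource behaves roughly like an M/M/1 queue (a simplification) with a hypothetical measured service time:

```python
# A minimal sketch, assuming the bottleneck resource behaves roughly
# like an M/M/1 queue. Numbers are hypothetical benchmark results.
service_time = 0.05   # seconds of bottleneck work per request (measured)

def predicted_response_time(throughput_tps: float) -> float:
    """M/M/1 approximation: R = S / (1 - U), with U = X * S."""
    utilization = throughput_tps * service_time
    if utilization >= 1.0:
        raise ValueError("Demand exceeds capacity: the resource saturates")
    return service_time / (1.0 - utilization)

for tps in (5, 10, 15, 19):
    print(f"{tps} tps -> ~{predicted_response_time(tps) * 1000:.0f} ms")
# Response time grows non-linearly as utilization approaches 100%,
# which is why linear extrapolation of test results misleads.
```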

Generally speaking, with respect to test competency, every Performance Tester aspires to become a Performance Engineer who is well versed in bottleneck analysis, profiling, tuning, etc. But I have met very few Performance Testers who aspire to gain knowledge of Queuing Theory principles to get into this hidden world of performance prediction, modeling & application capacity planning. This track is the base for the onset of high-end capacity planning tools & the recent boom in performance analytics & predictive/prescriptive modeling of performance data. Many businesses have great demand for this skillset.

Performance Test Engineers need to plan to expand their knowledge in this space to have a successful & rewarding learning career.

If you disagree, don't forget to share your point of view, for the benefit of Performance Testers/Engineers. Together let's strive to create more awareness, remove any subjective usage of the terms & hold the torch over the hidden tracks.


Happy Performance Testing & Engineering!!

Wednesday, August 3, 2016

Precise Workload Analysis in Performance Testing & Application Capacity Planning – The Secret Sauce




Many Performance Testers/Engineers underestimate the importance of analyzing historical end user access patterns while developing the workload model for performance testing / application capacity planning. In the majority of my audits doing RCA on production performance issues, the culprit was a wrong workload. The performance test strategy talks about the various types of tests that will be planned, the infrastructure components that will be monitored, the types of analysis that will be performed, etc., but when it comes to the workload, it is always expected to be provided by the customer or business analysts. Certainly our customers / business analysts know who the end users are & the frequently used business flows, but it is the sole responsibility of the Performance Tester/Engineer to understand (& sometimes educate customers about) the additional detailed analysis required to increase the accuracy of our performance tests.

We need to remember the fact that if the workload selected for running the performance tests is not reflective of the realistic end user access pattern, the entire test results will go wrong & put the exercise in jeopardy. Remember the below points during your workload analysis (a small sketch of peak point identification follows the list):
  • Analyze your end user access patterns & try to understand your users' behavior.
  • Identify the average & peak point workloads in your historical trends.
  • Define your peak point workload both quantitatively & qualitatively.
  • During peak point pattern analysis, take care to ignore outliers.
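
As promised above, here is a minimal sketch (hypothetical hourly counts) of identifying the average & peak point loads while ignoring an outlier spike via a percentile cap:

```python
# A minimal sketch of identifying average & peak-point workloads from
# historical hourly request counts, ignoring outliers via a 95th
# percentile cap. Data values are hypothetical.
import statistics

hourly_requests = [1200, 1350, 1280, 4100, 1500, 1420, 9800, 1390]
# 9800 is a one-off spike (say, a bot crawl) to be treated as an outlier

average_load = statistics.mean(hourly_requests)
p95_load = statistics.quantiles(hourly_requests, n=20)[18]  # 95th pct
peak_load = max(x for x in hourly_requests if x <= p95_load)

print(f"average: {average_load:.0f}/hr, "
      f"peak (outliers ignored): {peak_load}/hr")
```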

Now create the workload model(s) to be used for your performance tests. You can have more than one workload model for your tests – the workload used for your load test can be different from that of your endurance test or stress test. It all depends on your end user access patterns. Remember, knowing the basics of Queuing Theory (the operational laws) can help you validate the correctness of your workload model & even, to an extent, whether your peak hour SLA is valid; the sketch below illustrates this with Little's Law.
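
Little's Law for an interactive system, N = X × (R + Z), immediately shows whether the planned user count, target throughput, think time & SLA can all hold at once. A minimal sketch with hypothetical numbers:

```python
# A minimal sketch using Little's Law for an interactive system,
# N = X * (R + Z), to sanity-check a workload model before testing.
# All numbers are hypothetical.
users = 500              # N: concurrent users in the workload model
target_tps = 25.0        # X: peak-hour throughput the business expects
sla_response = 5.0       # R: response time SLA, seconds
think_time = 10.0        # Z: assumed think time per user, seconds

implied_users = target_tps * (sla_response + think_time)
print(f"Users needed for {target_tps} tps at the SLA: {implied_users:.0f}")

if implied_users > users:
    print("Inconsistent: not enough users to generate the target "
          "throughput within the SLA - revisit the model or the SLA.")
else:
    print("Model is internally consistent (operational laws hold).")
```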

If you are dealing with a business-critical, high-availability application where it is really worth the effort, spend time understanding the underlying statistical distribution of your peak traffic hour workloads. Arrivals for a web application accessed by independent, geographically distributed users usually follow a Poisson distribution or a self-similar distribution. In simple terms, deciding which distribution your application workload belongs to is about analyzing how many bursts & spikes your peak hour workload has. Representing the burstiness of your traffic using a metric called the Hurst exponent & employing various techniques to quantify the Hurst value will confirm which statistical distribution your application falls into & how much your peak hour workload can vary in future.
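
Estimating the Hurst value need not require specialized tooling. Below is a minimal sketch of the aggregated-variance method on a synthetic stand-in series (real input would be your per-second request counts; statistics.linear_regression needs Python 3.10+):

```python
# A minimal sketch of estimating the Hurst exponent from a series of
# per-second request counts using the aggregated-variance method.
# H ~ 0.5 suggests Poisson-like traffic; H closer to 1 suggests
# self-similar, bursty traffic. Input data here is synthetic.
import math, random, statistics

random.seed(1)
counts = [random.gauss(100, 10) for _ in range(4096)]  # stand-in traffic

scales, variances = [1, 4, 16, 64, 256], []
for m in scales:
    blocks = [statistics.mean(counts[i:i + m])
              for i in range(0, len(counts) - m + 1, m)]
    variances.append(statistics.variance(blocks))

# For a self-similar series, log(variance) vs log(m) has slope 2H - 2
xs = [math.log(m) for m in scales]
ys = [math.log(v) for v in variances]
slope = statistics.linear_regression(xs, ys).slope
print(f"Estimated Hurst exponent: {1 + slope / 2:.2f}")
```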

For application capacity planning, choosing the right workload peak points is essential. Unless you choose a series of peak point workloads from your historical statistics & understand quantitatively & qualitatively what the workload really comprises, you will not succeed in accurately forecasting the hardware demands of your application. Applying analytical modeling techniques to answer the what-if scenarios business demands is only possible with a carefully selected workload. Without doing this basic homework, you cannot rightly size your infrastructure for the projected business loads.

Also, most capacity planning techniques require actual application performance benchmarks for careful extrapolation / forecasts. Performance benchmarking becomes very important in capacity planning to understand the hardware resource requirements (represented as service demands) & other performance characteristics of your application. Using the right workload to carry out performance benchmarking is the first step towards successful application capacity planning.
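
The Service Demand Law makes this concrete: dividing each resource's measured utilization by the benchmark throughput yields its service demand, & the largest demand bounds the achievable throughput. A minimal sketch with hypothetical benchmark numbers:

```python
# A minimal sketch of the Service Demand Law, D_i = U_i / X: the
# service demand of each resource is its utilization divided by the
# system throughput measured in the same benchmark window.
# Numbers are hypothetical benchmark results.
throughput = 40.0   # X: transactions per second during the benchmark

utilization = {     # U_i: measured busy fraction of each resource
    "web CPU": 0.48,
    "app CPU": 0.72,
    "db CPU": 0.60,
    "db disk": 0.24,
}

demands = {res: u / throughput for res, u in utilization.items()}
for res, d in demands.items():
    print(f"{res}: {d * 1000:.1f} ms per transaction")

# Max throughput before the first resource saturates: X_max = 1 / max(D_i)
print(f"Upper bound on throughput: {1 / max(demands.values()):.0f} tps")
```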

Happy Workload Analysis & Modeling!!

Thursday, July 7, 2016

Interview Ethics for Performance Testers & Engineers


Having interviewed 500+ candidates in the line of Performance Testing & Engineering during the last 2 years for junior, mid-senior & senior roles, I would like to share my observations & recommendations for a pleasant experience for the interviewer, the interviewee & the talent acquisition team involved in the hiring process.

Particularly in Performance Testing, the roles & responsibilities are ambiguous & often confused with engineering skills. The terms 'Tester' & 'Engineer' are used interchangeably in many IT corporates, leading to additional confusion.

The recommendations below will help in upholding the interview ethics of both the firm & the candidate.


For Interviewer:

Value the time & ensure you create a good learning experience for yourself & for the candidate while doing the technical assessment. You are not evaluating whether the candidate is good or bad, but assessing their technical skills for fitment against the organization's expectations for the available role.

  • Always greet the candidate & start with a casual question like 'How was your day?' or 'How long did it take to travel to the office?' This start will bring the candidate into their comfort zone.
  • Give the candidate space to share their background & career history by asking for a self-introduction before you start your questions. This space will increase their confidence to speak up.
  • Try to assess the candidate's conceptual knowledge before jumping into tools knowledge.
  • Quickly evaluate the performance bottleneck analysis skill level – assess whether the candidate can only read tool reports, can perform a first level of end-to-end problem analysis, or can perform deep-dive analysis of a specific tier or all tiers. This will clarify the testing & engineering skillsets he/she possesses.
  • Within the first 20 minutes of your discussion, you need to get a grip on the candidate's overall skillset in Performance Testing / Engineering.
  • For the next 10 minutes, assess the candidate's practical knowledge by speaking about their responsibilities in the projects mentioned in the resume. This evaluation will reveal whether they have only theoretical knowledge from reading articles or real practical exposure from trying things in projects.
  • For the next 15 minutes, try assessing problem-solving capability on a problem statement of the kind the role is expected to handle.
  • It is very important to end the discussion by spending at least 15 minutes understanding the candidate's interest areas, likes & dislikes about their current organization, activities apart from project experience, the actual reason for leaving the current organization, key challenges in the current team/project, expectations of the next job, etc., followed by a quick brief on your organization, your team & the technical responsibilities & expectations for the role.
  • In 75% of interview discussions, you will know whether the candidate is worth interviewing for the role within the first 15 minutes. But remember, not everyone is lucky enough to get the right opportunities or good mentors to guide them at the right time. Ensure you give the candidate a good interview experience instead of winding up your discussion abruptly. At the least, try to be a good mentor & speak with them for the next 20 minutes so they take away something useful from your meeting.
  • End the discussion by saying your HR folks will get back to them. Don't extend the meeting for a long time; cover any pending areas of assessment in the next round. Do not forget to write down the areas you have assessed & your feedback & observations, along with quantitative ratings that will help the technical panelist / HR in further rounds.

For Interviewee:

  • Ensure your updated resume is made available for the interview discussion.
  • Before the interview, do a self-study of your strengths & weaknesses & how they have impacted your work style in your current organization.
  • Do not jump between jobs purely to increase your financial package. In the long run, it will not help you build a successful career.
  • Do not make it a practice to decline the offer letter on the joining day. If you have multiple offers, you can certainly take time to decide on the best & decline the rest. Respect the organizations that have spent time assessing you & offering you a role in their teams. Have the courage to give the HR team advance notice about your plans.
  • Just because you carry multiple offer letters, don't spoil your attitude towards work. Learning is a continuous process; develop the courage to work outside your comfort zone.
  • Be honest while sharing your reasons for a job change, any intermediate breaks, your expectations of the next job, career aspirations, etc. Never add fake details to your resume.

For HR / Talent acquisition team:

  • Quote the job description clearly in bullet points, emphasizing the responsibilities & skill requirements expected, as there is constant confusion between the tester & engineer roles.
  • Amidst your busy schedule, if you can manage your time well enough to schedule the interview, understand the candidate's expectations & background, & send multiple gentle reminders to have them at the venue on time, please also spend an additional 5 minutes to inform, or at least mail, the candidate if they are not selected in any of the rounds.


Monday, July 4, 2016

Top 10 Success Secrets for Performance Assurance (Testing & Engineering) Team


From big service giants to startups, many organizations have set up successful Performance Assurance services, but unfortunately some firms couldn't manage to succeed due to loopholes in orchestrating testing & engineering requirements rightly, client acquisition strategies, team setup, operational challenges, hiring the right talent with the right skillset, incomplete service offerings, etc.

There seem to be recurring challenges & issues that can easily be taken care of if realized & planned for at the initial stage itself. I would like to share some of the top key takeaways I have learnt from my experience working with Performance Testing & Performance Engineering delivery teams / COEs, both at services firms & directly for clients.

I am sure many pioneers in this field, who have driven exponential increases in revenue, brought in large numbers of client acquisitions & built innovative IP, would have more success secrets in addition to the ones below. I am excited to know & learn from your experience.

Keep your Unique Differentiator Solution Stories ready. Specialize with a USP (Unique Selling Point) in a few major focus areas apart from the usual performance testing/engineering services – say, Performance Prediction & Analytics – have your story line elaborated & take it to your prospective clients with the help of marketing / sales teams. Plan to position yourself as a uniquely differentiated player & develop innovative accelerators / tools and/or methodologies in the selected focus area(s), though you might have other services in your portfolio. We see many organizations that provide mere performance testing services with the help of lateral hires or internally groomed resources who know how to use tools like HP LoadRunner or Apache JMeter, supported by people managers who don't carry technical knowledge. Instead, have subject matter experts who are conceptually strong & capable of understanding & solving real performance problems.

A strong Client Acquisition Strategy becomes the foremost factor in your success. Perform periodic market studies/surveys to understand the state of other service providers & the new technology trends where bringing in the performance angle will be most valued & appreciated. Have a strategic plan created in Q4 of the previous year on how you want to take your solutions to market & set your revenue targets. This planning varies between organizations, but generally the plan should include campaigns, road shows, publishing articles & ebooks, workshops, etc., depending upon several other factors, to take the solutions & services to clients. These activities need to be planned more vigorously for small players/startups compared to established players. Established players usually end up focusing more on project delivery challenges, whereas startups/small-scale players focus on differentiator ideas to establish themselves as matured players with qualified SMEs.

Prioritize & Value your Proposals, as they are the major gateways to projects. Demand for differentiator solutions & aggressive timelines are always key attributes of large proposals, but believe me, they are worth the effort. At this stage, don't think about how to deliver or who will deliver; your only focus should be on proposing the right technical solution with a smart pricing strategy. The strategic learnings & key takeaways from a proposal experience are far greater than from project execution experience. Every proposal needs to be dealt with utmost care to bring in your differentiator pitch, keeping in mind your competitors & their strengths. Apart from the technical solutioning, look at several other factors like the client's existing vendors, competitors, historical challenges if you have already handled proposals/projects for the client in the past, tool preferences, commercial expectations, etc. An impressive technical solution with acceptable pricing will have better chances of winning.

Setting up a Competency Framework & Career Path is important to retain talent within the team. Create competency frameworks for Performance Tester & Performance Engineer separately, with systematic levels (4 levels would be ideal). Though the words tester & engineer are used interchangeably in IT firms, every organization should define what is expected of a tester & what career path a tester has as they move up the ladder in the organization. A Performance Tester at expert level might have the skillset of a Performance Engineer. The competency framework should be decided considering several factors, primarily the vision & mission of the organization's key focus areas & how it wants to project its story line to customers. Bring in multiple levels of learning courses & certification programs for beginners & practitioners to train them internally.

A dedicated Center of Excellence (COE) team should be made available apart from project delivery specialists/managers – at least a small / thin dedicated team of highly qualified experts, considering the organizational constraints (SGA cost & others), is highly required. The COE should be solely responsible for handling proposals, tool evaluations, bringing in initiatives on innovative tool development, periodic project audits, process setup & improvements, exploring new revenue pipelines, bringing in new service offerings, creating differentiator accelerators / solution presentation decks, publishing articles & Point Of View (POV) recommendations, organizing technical events like road shows, etc. The COE should also act as the primary point of contact for the project delivery teams for any technical support & for taking other service offerings to existing clients. At any cost, do not include revenue targets or billing-related KRAs (Key Result Areas) for the core COE team.

Avoid Unnecessary Investments; be strategic about your tool licensing investments & the onboarding of senior Performance Engineering SMEs. Decide how you want to present yourself to clients. Not all clients expect the service provider to procure the tool licenses, but if you prefer to provide a Performance Testing as a Service (PTaaS) model, you need to package your services on top of the tool to decide on the commercials. Performance testing tools in particular are usually very costly. Probe all possible alternatives to replace commercial tools with freeware/open-source tools if the tool license cost will concern your client. Highly qualified SMEs are required, but don't over-hire them, as they are very costly resources. Be strategic & create an operating model with minimal senior SMEs for the well-functioning of the team. Take strategic steps to increase the senior members in your team based on your project pipeline.

Creating a Fresher Pipeline gives way to bringing in young millennials who can be mentored & trained internally to assist project teams, starting with scripting & test execution activities. Making them shadow resources with senior members (even in non-billable mode) on existing projects brings a lot of confidence & a quicker understanding of practical challenges than just knowing a performance testing tool. It is very important to ensure fresher training / induction material is created & kept handy, with a 2-3 month learning curriculum along with assessment details. Mentoring & self-learning followed by periodic assignments should be planned to track progress & bring seriousness, giving the fresher resources a quick learning curve. Insist that the young team take up internal certification programs designed to assess both conceptual knowledge & tool expertise.

Set up the Team right, with separate Testing & Engineering streams & their boundaries clearly set. It is essential to create a positive, jelling environment between the testing & engineering streams, as many organizations fail in this aspect. I am not insisting on having two separate teams; plan to make the two streams work collaboratively under one common umbrella, depending upon your clients' problem statements. Collaboration between these two skillsets is possible only if both have a common leadership team. Production readiness validation types of projects might require a few performance testers together with a performance engineer. Engineers can independently work on projects where the problem statements are clearly related to diagnosis & tuning of a system that is not able to meet its performance SLAs. Expert architects from the engineering team can be involved in problem statements where architecture/design reviews for performance best practices or application capacity sizing recommendations are required. But facilitate experienced testers graduating to the engineering team & have training mechanisms for grooming them to provide engineering solutions, as hiring Performance Engineers is not that easy.
Some organizations where Performance Testing & Performance Engineering are two isolated teams fail due to operational challenges & bad politics, as there is a thin boundary line between the two. It might work out once the Performance Assurance team is established & you start servicing many major accounts for only Performance Testing or only Engineering services.

Provision diverse Skillset Talents. A well-functioning, established performance assurance team will demand people with varied skillsets, including tool experts, performance testers, people managers, techno-managers, infrastructure architects, capacity planners, prediction & modeling experts, technology-specific architects, fresh trainees, developers, etc. A Performance Assurance team needs strong talents to guide the other senior specialists who take part in the various SDLC phases; hence you need an architect, designer, programmer, developer, tester, engineer, capacity planner & system admin. In Agile / DevOps environments this becomes more demanding, & the performance specialists are expected to have all these skills. This brings a huge challenge in facilitating the right people for the projects. But not all projects demand such high-end experts, as even now the majority of projects think about performance late in the SDLC, demanding reactive performance engagements. More than building a team with diverse talents, creating a positive, learning environment where people complement each other will be the challenge, but I will leave that to the people management skills of the director who heads the team.

A right Effort Estimation Strategy & reasonable Rate Cards should be created for Performance Testing & Performance Engineering activities. Usually the rate card for a Performance Tester is slightly higher than for functional / automation testers. Performance Engineers' rates are usually very high compared to Performance Testers, depending upon the type of SME skillset & market demand. The effort estimation & pricing strategy for short-term projects needs to be very different from the strategy adopted for long-term projects. It is sometimes an experience-based or subjective decision, purely based on your gut feel.
Follow best practices for deciding the estimates with the help of handy estimation templates. Understand the scope clearly to include the right mixture of testing/engineering activities, & clearly document your estimation/pricing assumptions. Ensure multiple levels of internal review meetings are held with different senior leaders before finalizing your commercials. Don't forget to ask yourself whether you are creating value for the proposed commercials, putting yourself in the customer's shoes. And don't forget to think twice about your competitors: if you don't want to miss the opportunity/client, be ready to compromise a bit by proposing lower rates if the new opportunity can open up big doors for you.



Tuesday, June 28, 2016

Myths about Performance Testing versus Performance Engineering Unveiled

As software professionals, we all have a good, unified understanding of what the Engineering & Testing skillsets are & how different the two are. But many of us have differences of opinion when it comes to Performance Testing versus Performance Engineering.

Though Performance Testing comes under the umbrella of Testing, in many aspects performance testing is very different from the usual functional testing. Be it the effort estimation strategy, test planning, the defect management cycle or the tools knowledge required, performance testing is quite different. From a test management perspective, there are quite a lot of differences that need to be exhibited in the management style as well.

Performance Testing is not a type of automation testing where test scripts are created using a tool & an automated test run is scheduled. In functional or automation testing, test coverage is the very important factor, whereas in performance testing, test accuracy is essential. Realistic simulation of end user access patterns, both quantitatively & qualitatively, is a key factor in successful performance testing, but unfortunately this is not measured or expressed using a metric. This has led to a state where anyone who knows how to use a performance testing tool can claim to do performance testing.

What is Performance Testing?

So finally, let's come to the definition: Performance Testing is a type of testing that simulates the realistic end user load & access pattern in a controlled environment in order to identify the responsiveness, speed & stability of the system. It usually requires reporting system performance metrics like transaction response time, concurrent user loads supported, server throughput, etc., along with additional layer-specific software & hardware performance metrics like browser performance, code performance, server resource utilization, etc. that help in analyzing the potential bottlenecks affecting system performance & scalability.

So yes, a performance tester should know how to report the performance metrics of the different layers while conducting a performance test. In a 3-tier architecture, the performance of the individual tiers – web server, application server & DB server – plus client-side / browser performance, network performance, server hardware performance, etc. needs to be measured & reported. This cannot be considered an engineering activity. Deep-dive analysis of why a layer-specific performance metric doesn't meet the SLA can be considered an engineering activity.

Usually the confusion starts when it comes to performance bottleneck analysis. I agree a thin boundary line exists: is it the job of the performance tester or the engineer? Here is my point of view on this topic. Sophisticated performance monitoring tools & Application Performance Management (APM) tools are used independently, or integrated with the performance testing tools themselves, to measure & monitor the performance of the various tiers (in the software layer) & the infrastructure server resource utilization (in the hardware layer) with clearly reported metrics. Hence it is the responsibility of the Performance Tester to measure & monitor the performance of the end-to-end system during performance tests & report the observations & findings. Basic, straightforward analysis & experience-based analysis can be performed by the performance tester to reconfirm the performance problems & observations. Now, if a finding requires deep-dive analysis – say a specific transaction or method is reported to have a high response time, or a server resource is over-utilized, & it needs to be debugged further & fine-tuned – that will be the responsibility of the Performance Engineer. Application capacity planning, performance modelling & prediction analysis, infrastructure sizing analysis, etc. are also core responsibilities of a Performance Engineer. Measuring & monitoring the several parameters that can impact overall system performance remains the responsibility of the Performance Tester.

What is Performance Engineering?

Let's start with the definition: Performance Engineering is a discipline that involves systematic practices, techniques & activities during each phase of the Software Development Life Cycle (SDLC) to meet the performance requirements. It strives to build in performance standards by focusing on the architecture, design & implementation choices.

Hence, performance needs to be thought about proactively throughout the software development phases, right from the requirements phase. In this proactive mode, a Performance Engineer is involved right from the initial SDLC phase to ensure the system is built to performance standards. There are several techniques available to validate performance at each SDLC stage, even when a testable system is not available.

In reactive mode, when a system is tested for its performance & found to be not scalable – i.e. it doesn't meet the non-functional requirements related to response time SLAs, user scalability levels, etc. – a Performance Engineer usually tries to understand the metrics reported by the Performance Tester & performs a deep-dive analysis on the specific layer(s) that don't meet the SLAs. In this case, depending upon the bottleneck reported, specific SMEs like a DB architect, WebSphere specialist, Network Engineer, etc. can be involved to analyze the performance issue in detail & provide tuning recommendations.

Engineering career path for successful Performance Testers

Performance Testers who, after gaining good performance testing experience, possess great interest in problem analysis & tuning end up taking their career path into Performance Engineering. They are usually not experts in a specific technology; rather, they have a good understanding of what should be tuned under what circumstances & good knowledge of the various parameters that have to be examined & tuned to achieve performance & scalability.

These engineers usually have the below skills:

  • Good experience in reviewing system architecture / deployment architecture & providing suggestions for better performance
  • Good knowledge in developing strategies for assuring web / mobile application performance throughout the SDLC phases
  • Good experience in various performance testing tools like HP LoadRunner, JMeter, NeoLoad, etc.
  • Good experience in measuring/monitoring the performance of the various layers involved in an end-to-end system
  • Experience in analysing application traffic patterns using tools like Omniture, Deep Log Analyzer, etc.
  • Experience in performance monitoring & APM tools like Perfmon, HP SiteScope, CA Introscope, Dynatrace, AppDynamics, etc.
  • Good experience in using profiling tools like JProfiler, JProbe, JConsole, VisualVM, HP Diagnostics, etc., including GC / JVM analysis tools & heap/thread dump analysis tools
  • Experience in DB profiling tools like Statspack / AWR / SQL Profiler, etc.
  • Experience in front-end web performance analysis using YSlow, WebPageTest, Dynatrace AJAX, PageSpeed, etc.
  • Experience in performance prediction / modelling analysis during early SDLC phases
  • Experience in capacity planning/sizing through Queuing Theory principles & tools like TeamQuest, Metron Athene, etc.

A person with the above kind of skillset can also be called a Performance Engineer; they do not necessarily need core development skills. Also, not every Performance Engineer will have skills across all technologies. Based on their practical experience & technical exposure, though they might tend to have knowledge of a specific technology, they carry a better understanding of what can make your system scalable & highly available.

A successful Performance Center of Excellence (PCOE) should have engineers with the above qualities. They will have better confidence to guide teams towards performance assurance than people who only know how to execute performance tests using a tool. My sincere advice would be: don't call the COE testing-only or engineering-only, because it will look incomplete from the customer's point of view. Let your customers' problem statements drive whether a project needs performance testing or engineering services. A successful PCOE should comprise both performance testers & performance engineers, complementing each other with their skillsets.

Looking at it from the customer's standpoint, their online business application needs to have compliance against its Performance NFRs (Non-Functional Requirements). To ensure this, as a Performance SME, you need to do testing to measure the performance metrics & certify the system for performance & scalability, followed by deep-dive analysis & tuning in case (& only if) performance SLAs are not met. Unless your COE has the capabilities to do both, it will not look complete from the customer's point of view.

Tips for organizations on setting up successful Performance Assurance services are discussed in another post.



Monday, June 27, 2016

Journey towards Performance Analytics & Prediction Models


With Agile & DevOps becoming more & more predominant among software development methodologies, early performance analysis & predictive performance are becoming a norm for business-critical & high-traffic applications. Performance modelling & prediction analytics use the historical statistics gathered across several layers & can help in analyzing several what-if scenarios & making quick performance judgements without actually testing the system.

So what is Performance Prediction?

It is the process of modelling the system performance against simulated user loads by using mathematical expressions. Predictive models can only FORECAST; they cannot ascertain what will happen in future, as they are probabilistic in nature.

A performance model uses a specific number of building blocks to predict how performance will vary under different what-if scenarios, like a varied set of load conditions, a change in workloads, a change in server capacity, etc. Usually the inputs to the model are expressed in mathematical quantities such as number of users, arrival rate, response time, throughput, resource utilization, etc.
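
As a minimal sketch of such building blocks – an open queuing network with hypothetical per-resource service demands, which is one simple way (not the only way) to build such a model – varying the arrival rate answers a basic what-if:

```python
# A minimal sketch of a what-if performance model: an open queuing
# network built from per-resource service demands (seconds per
# transaction). All numbers are hypothetical.
service_demands = {"web CPU": 0.010, "app CPU": 0.018, "db disk": 0.012}

def what_if(arrival_rate_tps: float) -> float:
    """Predicted response time at a given arrival rate."""
    total = 0.0
    for resource, demand in service_demands.items():
        utilization = arrival_rate_tps * demand   # Utilization Law
        if utilization >= 1.0:
            raise ValueError(f"{resource} saturates at this load")
        total += demand / (1.0 - utilization)     # residence time per txn
    return total

for tps in (10, 30, 50):   # what-if: varied load conditions
    print(f"{tps} tps -> ~{what_if(tps) * 1000:.0f} ms")
```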

How are Performance Analytics & Prediction correlated?

A robust performance analytics solution should comprise system performance data collected from the production environment, like server resource utilization monitoring, server health & performance metrics from APM tools, etc. A sophisticated analytics solution should have data collected at the various stages of the software development life cycle that can be correlated using modelling & prediction techniques to forecast the system performance for a great combination of what-if scenarios.

When an analytics solution built to facilitate different types of data collection & storage is combined with the intelligence of sound modelling & forecasting algorithms, it becomes predictive analytics, providing realistic forecasts of system performance for what-if scenarios.

There are 2 key types of models – predictive & prescriptive. A predictive model built along with the intelligence to prescribe an action for the business to act upon, together with a feedback system that tracks the outcome produced by the action taken, becomes a prescriptive model. For example, a predictive model can predict the peak traffic throughput of the application under test, whereas a prescriptive model can predict & recommend / alert the business about the need to bring down the resident time of a specific layer/method, or to upgrade the specific hardware resource with a high service demand, to meet the performance SLAs – with clear data points about the expected performance improvements.

Visualizing the required data as onion layers, the performance prediction accuracy increases as the number of data layers used for building the analytics solution increases. Some of the major data layers to be considered include performance modelling results & actual performance test results from a controlled performance test environment, network performance simulation results & device-side performance metrics (for mobile applications), test versus production environment capacity differences, production infrastructure monitoring statistics, website end user traffic patterns & web (browser) performance statistics. At least to start with, the production environment monitoring statistics & website user traffic statistics data layers are the essential ones for doing forecasts based on historical data analysis using regression techniques.
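
To start with, even a simple least-squares trend over historical peaks gives a usable forecast. A minimal sketch with hypothetical monthly peak loads (statistics.linear_regression needs Python 3.10+):

```python
# A minimal sketch of regression-based forecasting from historical
# monthly peak loads (hypothetical values) using least squares.
import statistics

months = list(range(1, 13))                       # Jan..Dec of last year
peak_tps = [42, 44, 47, 49, 52, 55, 57, 61, 63, 67, 70, 74]

fit = statistics.linear_regression(months, peak_tps)
for future_month in (15, 18):  # 3 & 6 months ahead
    forecast = fit.intercept + fit.slope * future_month
    print(f"Month {future_month}: forecast peak ~{forecast:.0f} tps")
# Such forecasts feed capacity models to decide when utilization
# thresholds will be breached on the current hardware.
```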


There are several open source & commercial tools generally used to perform the testing & analysis at each of the data layers described above. The key challenge lies in building the intelligence to parse the results produced by a variety of tools & provisioning a tool-agnostic, generic reporting dashboard that can feed the prediction / modelling algorithms to support performance forecasting.

Details on modelling / prediction techniques will be available in a different post. 

Thursday, May 5, 2016

Upsurge in Savvy Performance Analysis Tools Leading to an Enigmatic Trend


This article is also published in WOMEN TESTERS – APRIL 2016 EDITION
http://www.womentesters.com/2016/04/


Introduction

With the onset of SMAC (Social, Mobile, Analytics and Cloud) technologies & digital transformations, there has been an exponential increase in the level of knowledge expected from Performance Testers & Engineers. There has been a sudden increase in the number of vendors providing application performance management tools to satisfy the growing demand for quick performance insights, as high performance is one of the key factors driving the technological revolution. As Performance Engineering focuses on building systems with high performance & performing tuning & optimization to meet performance, scalability & capacity demands, Performance Testers are usually seen inclining their career path towards Performance Engineering.

This emerging trend – the rise of various performance testing & engineering toolsets for quick performance analysis & easy bottleneck detection – has led to a confusing trend within the Performance Testing/Engineering community. The upcoming millennials seem to have developed more interest in getting their hands on various tools rather than focusing on deepening their concepts & developing a thought process for systematic performance problem diagnosis & tuning. This trend is creating a very big challenge for Performance COEs (Centers Of Excellence) as they strive towards successful project delivery in environments where these savvy tools don't exist.

Tools are just Enablers; Don’t forget to add your Intelligence

The primary skill for Performance Testers is to design & execute various types of performance tests using a variety of tools like HP LoadRunner or JMeter. Performance Engineers – through years of performance testing experience or a development background – have built stronger credibility, as they understand the problems in application performance management (including building scalable architecture solutions, selecting the right design patterns to enhance performance, code profiling, etc.) slightly better & know the areas to be analysed to identify the root cause of the problems when a performance SLA is not met.

Even now, a good performance tester is expected to have an HP LoadRunner tool certification. There are no tool-agnostic courses for acquiring performance testing capability, due to which performance testers usually seem biased & inclined towards specific tool terminologies & concepts. The majority of them think performance testing is all about developing test scripts for the requested business use cases & creating a test scenario in the tool to execute load tests for the expected load conditions.
Performance testers should firstly know how to strategize the tests; it is not about just conducting a load test or stress test using a tool. The key value-add in performance testing is bringing better accuracy & realistic end user simulation into the performance tests. Your favourite performance testing tool does not have the intelligence to report whether the workload (use cases & load distribution) used for the test is incorrectly modelled. Many testers seem very comfortable with the knowledge of various tool-specific settings without knowing that a wrongly configured think time can end up running a completely wrong test, not even worth doing for your application, as the sketch below illustrates. The majority of production failure RCAs (root cause analyses) reveal improper & inaccurate performance tests due to incorrect design of the workload or improper NFRs used during the validation tests.
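
To see how much damage one mis-set knob can do, here is a minimal sketch (hypothetical numbers) of the same scripted user population driving wildly different loads purely because of the configured think time:

```python
# A minimal sketch of how a wrongly configured think time skews a
# load test. By Little's Law, throughput X = N / (R + Z).
# All numbers are hypothetical.
virtual_users = 300        # N: scripted virtual users
response_time = 2.0        # R: seconds per transaction

for think_time in (10.0, 2.0):   # intended vs wrongly configured Z
    throughput = virtual_users / (response_time + think_time)
    print(f"think time {think_time:>4}s -> ~{throughput:.0f} tps")
# 300 users with Z=10s generate ~25 tps, but with a mis-set Z=2s the
# same script hammers the system at ~75 tps - a different test entirely.
```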

Performance testers should think beyond the performance testing tool & focus on gaining a stronger understanding of the underlying Performance Testing & Engineering concepts to be able to run accurate & meaningful performance tests.

In the recent past, there has been a sudden rise in the popularity of various commercial APM (Application Performance Monitoring) tools like Dynatrace, AppDynamics, New Relic, etc. While these tools have helped in quick & easy performance problem detection, providing a productivity boom through performance problem insights & readymade analysis recommendations in a few button clicks, it is unfortunate that they have created a situation where Performance Testers / Engineers are more biased towards gaining experience on these tools than towards understanding the problem analysis concepts & principles.

The Performance Engineers we find these days have also become biased towards tool terminologies & tool-provided metrics, losing the big picture of overall application performance management & 360-degree problem analysis. This lack of understanding has in some cases caused great embarrassment during project delivery. But I must accept the fact that these new-generation tools are a great boon for Performance Engineers & have brought our performance bottleneck analysis efforts down from weeks to days, thereby giving us room to fix issues quickly before they impact the end user.

The bottom-line fact is that there is no substitute for an understanding of the processes & the Performance Testing & Engineering concepts/principles required for managing performance throughout the SDLC phases. Gaining knowledge of the tools is important, but it is not to be considered a substitute for the underlying performance testing/engineering principles.

Tools are not evil. They are great weapons for addressing problems quickly. It is just that our perception of them as enablers needs to improve, & we need to continue focusing on the underlying Performance Testing/Engineering principles.