Calculating QA Costs – Backing In Costing Method (BIC)

The series on Costing Models includes the Service Planning Costing (SPC), Agile Anarchy Method (AAM), and Just Test Something (JTS) methods. The intent here is not to claim one model as king, but rather to evaluate the potential benefits and pitfalls of relying on one model exclusively. My hope is that by sharing these approaches, QA organizations will evaluate their current models and perhaps find room to tune them for greater excellence.

Very few organizations use one model exclusively. These models are neither mutually exclusive nor points on a rigid scale. Rather, most organizations use one model as their primary and perhaps one or more secondary models, placing themselves somewhere on a scale between models.

The Backing In Costing Model (BIC), or baseline model, uses past measurements to determine future costs, with very limited desire to change process, coverage, or anything else for that matter. Similar to, yet not to be confused with, regression testing: a baseline from previous years is used to work backwards to cost future projects.

For example, last year BIC Model Inc. had 10 QA staff who delivered 2 large projects to the relative satisfaction of everyone. Last year’s costs were based on the previous year’s, and no one remembers how those were calculated. They established a baseline and set the expectations within their organization and for their customers. The process was settled, roles were understood, and the number of production errors accepted. The entire cost allocated to QA last year was $600,000. That is $300,000 per large QA project and an average of $60,000 per QA staff member (the hourly QA cost, although rarely used, works out to $30.36).

This year, BIC Model Inc. has a global executive requirement for a 10% reduction in workforce and costs. However, they have 2 similar-sized projects and a 3rd project that management has accepted is ABOUT 50% the size of the other two. This year QA will be fortunate to have an increased budget of $675,000, BUT “Don’t consider this a new baseline; next year we drop back to $540,000.” The amount is calculated by taking the baseline: $600,000 - 10% annual savings target = $540,000, plus 25% for the additional project. BIC Model Inc. decides to spend this additional money by adding 2 fresh new recruits to QA at $37,000 a year each to offset the additional workload expected.
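The back-of-the-envelope arithmetic above can be sketched as follows. The function name and structure are illustrative only; the figures are the ones from the example.

```python
# Hypothetical sketch of BIC Model Inc.'s "backing in" budget arithmetic.

def bic_budget(baseline, savings_target, extra_project_fraction):
    """Back in to next year's QA budget from last year's baseline."""
    reduced = baseline * (1 - savings_target)      # apply the corporate cut
    # The 3rd project is ~50% of one large project, i.e. 25% of the
    # two-project baseline, so the reduced budget is scaled up by 25%.
    budget = reduced * (1 + extra_project_fraction)
    return reduced, budget

reduced, budget = bic_budget(600_000, 0.10, 0.25)
print(reduced)  # 540000.0 -- next year's "real" baseline
print(budget)   # 675000.0 -- this year's one-off budget
```

Note that nothing in this calculation refers to the amount of testing actually required; it is driven entirely by last year's number, which is the essence of the model.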

The BIC Model is focused on previously accepted, predictable costs and on maintaining the status quo. It is usually affected by annual cost-reduction efforts, and rarely is the team as successful as in the example above in raising the budget.

It’s often characterized by management seeing little benefit to training staff or updating tools, as these bring change and possibly new costs. While management may be vocal supporters of streamlining process, there is usually much resistance to change, until some accepted baseline measurement, like the usual number of production errors, fails to meet expectations. The motivation then is often to streamline process just enough to restore the previous baseline.

Advantages

  1. Costing is based on real corporate experience and history. There are no unexpected surprises, like additional corporate expense sharing for health or retirement plans, desk space leases, etc.
  2. It’s an easy sell to executives; they expect the costs, and there is no need to explain coverage percentage calculations, number of services, release cycles, emerging technology concerns, etc.
  3. People tend to know their roles, and the relationship between development, business, and QA is usually mature.
  4. Project sizing is very subjective. In the example above, how similar in size are these projects really? This allows for a certain amount of exaggeration.
  5. Software maintenance and regression testing costs are usually well understood.

Disadvantages

  1. Backing in to how much testing will be done is only loosely based on the amount of testing required. Project sizing is very subjective; time per test case developed, number of test cases, number of software releases, etc. can often be ignored.
  2. The focus is on maintaining the status quo and not on improvement. There is often little progression: few changes to process, promotions, training, introductions of new tools, or skills development.
  3. The baseline is often dated. Rather than re-calculating the baseline after each project, to better understand the time per test case developed, the number of test cases, the number of software releases, etc., the baseline can be years old.
  4. Bad processes, habits, practices, people, etc. become part of the baseline to be protected.
  5. Corporate cost-cutting initiatives, like the 10% example at BIC Model Inc., can eventually be the “straw that breaks the camel’s back”. On the other hand, QA is constantly living under the threat of cuts and needs to show that each year’s workload is equal to or larger than the previous year’s. Annual negotiation becomes a constant battle for survival, balanced against expectations on the delivery of quality. Subjective numbers, not backed by detailed baselines, can become very inflated.

Conclusion

Taking a baseline at the end of each project, to understand the impacts of new processes, people, tools, etc. and to evaluate one project against another, is good practice for any future cost calculation. Where the BIC model can rapidly fail is when the detail or level of the baseline is poor, or the baseline becomes outdated. The risk lies in backing into the amount of testing to be done from an old baseline, rather than using these baselines to calculate forward with a more detailed costing model.

The second in this series covers Just Test Something (JTS) Method.

Did I miss an advantage or disadvantage? Please feel free to comment below. My next costing model will be posted shortly.

1. SOAPSonar – Installing and Getting Started

So you are new to SOAPSonar. Perhaps you joined a new team that uses it, you are a student, or you downloaded a trial and are trying to get started. Here is a quick tutorial on the install and UI.

Installation

After downloading the latest version (request a no-obligation 14-day trial here), install by right-clicking and selecting Run as administrator. If you don’t install with administrator rights, you will get warnings.

Once installed, you should be presented with the registration screen. Please enter your name, company, and email. If you are behind a web proxy, please first enter the proxy settings. These can be taken from your browser settings. Go to File > Settings and Preferences, select the Global Proxy Settings tab, and enter your organization’s proxy settings there before registering. Alternatively, you can select the Manual Activate option.

Proxy

If you are doing an evaluation or your company uses instance-based licensing, enter your license key and then select Activate Key. An instance-based license can only be loaded on one machine per license key. Note that if you are using instance-based licensing, the number of days remaining on your license is shown.

License

If your company uses floating licenses, a centralized license server is used. The server is used only to check out or check back in licenses and does not require persistent network access; it is only accessed at the time a lease is granted or released. The server is not a file server and need not be a dedicated machine, but it requires a static IP address (or hostname) and an open port, and runs a small Windows application. SOAPSonar can then be loaded onto as many machines as you would like, but only one machine can use a license at a time. If you are using floating licenses, select Use License Server and enter the server details. Select Request New License Lease, then choose the Key Type and how long you wish to check the license out for. Select Request License, and if there is an available floating license of the Key Type requested, SOAPSonar will activate.

License Server

The User Interface

UI

The SOAPSonar UI follows the usual Windows conventions:

  • File – File-related tasks like new project, load project, and save project, along with Settings and Preferences.
  • Mode – SOAPSonar’s 4 pillars: QA (functional), Performance, Compliance, and Vulnerability (security). Note the mode can also be changed in the right corner.
  • Tools – Various tools, from key management to traffic capture.
  • Library – A list of Automated Data Sources and vulnerability definitions that can be used.
  • Updates – To check for updates to the latest release. SOAPSonar does not require manual scripting; if manual scripts have not been used, upgrades should have no impact on test cases. It is highly recommended that you stay current to avoid known issues and get the new features.
  • Registration – Discussed above.
  • Simulation – To launch a CLOUDPort-generated, run-time virtualized mock service to test against.
  • Agents – For downloading and configuring remote performance load agents.
  • Help – Yes, there is a help file.

Each of the 4 pillars (QA, Performance, Compliance, Vulnerability) can be set to one of three views:

  1. Project View – For the configuration of your Project Tree or Test Suite Groups for automating your testing.
  2. Run View – For running automated test projects.
  3. Report View – For those who actually want to see the results after a test is run.

Conclusion

You should now be ready to start testing. If you have a project and are feeling ready, you can start on that.

Otherwise, we are working on developing tutorials. These will be very basic tutorials highlighting a few frequently used features, and they should not take more than a couple of hours to complete.

Cost of Versioning an API or Service in the Data Economy

The Data Economy is booming, and much is being written about software “eating the world”. Many companies, however, have not formalized a strategy for developing and versioning their APIs. In a recent Forbes article, “Collaborate to Grow Says Deloitte Global CEO Barry Salzberg”, MIT Sloan graduates Jaime Contreras and Tal Snir are quoted as saying “the peer-to-peer exchange of goods and services – is being called the next big trend in social commerce, and represents what some analysts say is a potential $110 billion market.” Last month, InformationWeek did an entire special issue on the “Age Of The API”, APIs being the enablers of the Data Economy.

This exponential growth in APIs is, however, creating significant versioning concerns, and many organizations are beginning to consider their strategy for API versioning as their current strategies become unsupportable. For example, the business needs a REST version of an existing SOAP service for mobile access. Should they migrate the entire service and all the client Consumers of the service to a new REST API and end-of-life the existing SOAP service? Or perhaps develop a new API and leave the SOAP service in place? Whatever the reason for the change, is the best strategy to create new, update the existing, replace the existing, or do something else? How many of these changes can they manage in a given period of time, and what are the costs? Read More

PERFORMANCE TUNING MOBILE API – CLIENT

In my first post in this series, I highlighted the need to isolate and break down the user experience into logical and measurable portions to be used as a baseline.

The client, the device in the user’s hand, often gets the most focus, as it sits at the end of the chain of factors influencing performance. Being at the end of the chain, it is the sum of all the others and unarguably the user’s final experience. That being said, the client’s own impact on performance is only the time added from the moment the device receives a complete message to display, or from the point of submit until the message leaves the device. It may not be hard to identify when a mobile application is giving a poor user experience, but QA also needs to identify why. Read More

PERFORMANCE TUNING MOBILE API – NETWORK

A mobile application by default has a network component: the portion from the phone to the Ethernet card of the API servers. Who has not seen the US commercials, “Can you hear me now?” Canadian wireless service providers spend significant effort planning their network coverage, identifying poor performance, doing capacity planning, and ensuring signal coverage. This includes crowd sourcing, BI, using tools, and even driving around. Wireless networks are, however, not static, and everything from the number of leaves on the trees to the time of day affects the signal strength and capacity for a given location (wireless bandwidth is limited and shared per cell frequency and cell coverage). Add to this the nearly 10,000,000 square km of geography we have in Canada, and you can understand the enormity of testing the network.

Read More

PERFORMANCE TUNING MOBILE API – API Themselves

To non-technical people, an API is often “a program running on a server somewhere” and is rarely considered as impacting the user experience. Web services APIs Provide the responses, after doing the necessary calculations, to requests made by the mobile application, which populate the information on the screen. The client is all about presentation and usually does little computation. It is the API, and not the client, that does the heavy computational tasks, and hence it can have the greater effect on the user experience. Sure, you can throw more computing power at it, but this does not always work.

Depending on the design, each request by the mobile device to Consume an API could be responded to with multiple fields. Say the API is a customer record API. A request by the client would result in the API Providing the entire customer record, even if only a small part of the response is needed. That means if the application needs to display only the customer number, yet the API Provides the entire customer record, the entire record is transmitted to the device, which drops everything but the customer number. On the other hand, any screen on the client can Consume more than one API. Say a second API Provides the order history for a given customer number, and the client application has a screen that displays the customer number and the last order made. It would first need to request the customer record API, and the API would Provide the entire customer record. The client would then send the customer number as part of its request to Consume the order history API, and the server would Provide a response after doing the necessary computation to generate the order history. This computation could rely on an external DB or CRM system (what we call an enabler). The client would then populate and display the page with just the portions needed. This workflow we call a chained service: the response from one request being used as the request for another.
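The chained-service workflow described above can be sketched as follows. The two “APIs” are simulated as plain in-process functions, and all names, fields, and data are hypothetical; the point is the shape of the chain and the over-fetching, not any real service.

```python
# Hypothetical in-process simulation of a chained service.

def customer_record_api(customer_id):
    """Provides the ENTIRE customer record, even if the client needs one field."""
    return {
        "customer_number": customer_id,
        "name": "Jane Doe",
        "address": "123 Main St",
        "phone": "555-0100",
        "credit_limit": 5000,
    }

def order_history_api(customer_number):
    """Provides the order history for a customer number (enabler lookup omitted)."""
    return [{"order_id": 42, "item": "widget"}]

# Chained service: the response of the first request feeds the second request.
record = customer_record_api("C-1001")              # entire record transmitted
orders = order_history_api(record["customer_number"])
screen = {"customer_number": record["customer_number"],
          "last_order": orders[-1]}                 # client drops the rest
print(screen)
```

Note how the full customer record crosses the wire even though the screen only needs one field of it; this is exactly the traffic an API gateway's virtual partial API would strip off.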

The time taken for an API to respond includes this logic or computation, which can involve look-ups on other systems (enablers): identity validation, DB lookups, and even external systems like partner shipping or foreign trading systems. Robust and sustainable APIs should be kept small, lightweight, and client-independent, to ensure their modularity and re-usability. This was not always the design criterion, and many older services are monolithic and tightly coupled with the client. API gateways are often used to mediate and create new, lighter-weight services for mobile applications, adapting protocols and message formats and creating virtual partial APIs to strip unwanted traffic off the network portion. These gateways can offer caching and performance improvements, but can also be sources of latency.

The rapid growth in mobile application development has resulted in many new technologies and emerging standards. Newer, lighter-weight protocols like REST are generally used vs. the more mature, heavyweight SOAP. New encryption methodologies like elliptic curve cryptography are common, since they require less client CPU processing. New identity formats like SAML and OAuth are used to address identity in the cloud and mobile arena. These emerging technologies are often still in early or pre-standard development and relatively immature. Furthermore, developer and QA skills in these new standards are very limited and in extremely high demand. When did your team last get training on one of these emerging technologies, or have they simply learned them while developing your mobile application? It is unlikely that a business can expect the same level of maturity and quality in mobile applications as it might in more traditional development, and the fault density will probably be far higher in new mobile application development.

API Performance = User Experience – (Client + Network + Enablers)

From the Client and Network posts, we already know the client and network performance impacts. The same test case, run local to the API gateway or server, provides the performance for API + Enablers.
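As a minimal sketch of that subtraction, assuming made-up millisecond timings (the real figures would come from the client and network baselines):

```python
# Illustrative decomposition of end-to-end user experience time.

def api_plus_enablers_ms(user_experience_ms, client_ms, network_ms):
    """API + Enablers = User Experience - (Client + Network)."""
    return user_experience_ms - (client_ms + network_ms)

# e.g. a 1200 ms user experience, with 150 ms measured on the client
# and 250 ms measured on the network, leaves 800 ms for API + Enablers.
print(api_plus_enablers_ms(1200, 150, 250))  # 800
```

Separating the remaining 800 ms into API vs. enabler time then requires the additional isolation discussed below.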

I constantly recommend that performance testing be done earlier in the testing and development life-cycle. Since the same SOAPSonar test cases can be used for functional and performance testing, performance testing should start as each service is validated as functional, on a service-by-service basis. SOAPSonar reports the individual request and response time for each service, including each step in a chained service. A client application may only show the end result of the chained service, or the service that responds slowest. Testing via the device may give the user experience, but it provides little information as to what or which service is slowing things down.

An important part of performance testing is understanding the impact of load on performance. This is usually done right before production cutover, and it often leaves little time for time-consuming rewrites. The result is often over-architected hardware or networks to compensate. SOAPSonar can use the same test case with Virtual Agents to generate load (including across physically distributed load agents), reporting on the impact on performance at a given TPS, or identifying at what point the system will begin failing. Defining a success criterion in the test case that fails tests taking over a given time can help identify individual services that start failing under load or when running a regression test.
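A time-based success criterion of this sort can be sketched as follows. SOAPSonar configures this inside the test case itself; the function and the per-service timings here are hypothetical and only illustrate the logic.

```python
# Hypothetical time-based success criterion applied to per-service
# response times collected under load.

def failing_services(response_times_ms, threshold_ms):
    """Return the services whose response time exceeds the threshold."""
    return [name for name, ms in response_times_ms.items() if ms > threshold_ms]

# Made-up timings for three services at a given TPS under load.
under_load = {"customer_record": 310, "order_history": 1450, "auth": 95}
print(failing_services(under_load, 1000))  # ['order_history']
```

Run against each load level, this immediately points to the individual service that degrades first, rather than a single pass/fail for the whole chained transaction.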

SOAPSonar can detail the performance of each request made to each API and its response time, hence identifying any particular APIs which may not perform well. If these APIs are supported by an enabler, distinguishing a poorly performing enabler from the API itself can require additional isolation.