
CLOUDPort Free Runtime Player for Troubleshooting

I get a lot of calls from clients experiencing connectivity issues between their client applications and services. Connections between various labs, environments, instances, sites, etc. can be difficult for developers and testers to troubleshoot. Here is a simple, free way to confirm connectivity at the web service level.

The CLOUDPort Runtime Player is a free tool that can run mock virtualized services to test your client against. While the paid version of CLOUDPort allows you to create whatever runtimes and responses you wish, the free runtime comes with 3 embedded solutions: an Echo Service, a Static Response Service and a Fault Service.

The runtime can be used in a variety of ways. The Echo Service is often used to check field mapping through an XML gateway or some transformation device: since the request is sent back as the response, you can confirm any manipulation of the request or response message. CLOUDPort Runtime also supports load testing, providing real-time performance information, using either the echo or static response. I don’t want to try to list all the possible use cases of the free runtime, as I am sure many of you will come up with new ways.
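As a rough illustration of this kind of check, the sketch below posts a request to a mock echo endpoint and compares the response body to the request. The endpoint URL and SOAP payload are hypothetical placeholders rather than CLOUDPort's actual configuration, and the widely used Python requests library is assumed.

```python
# Minimal connectivity check against a mock echo service.
# The endpoint URL and SOAP payload below are hypothetical placeholders;
# point them at whatever mock service your runtime exposes.
import requests

MOCK_ECHO_URL = "http://localhost:8080/mock/echo"  # hypothetical mock endpoint

request_body = """<?xml version="1.0" encoding="UTF-8"?>
<soapenv:Envelope xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/">
  <soapenv:Body>
    <Ping>connectivity-check</Ping>
  </soapenv:Body>
</soapenv:Envelope>"""

try:
    resp = requests.post(
        MOCK_ECHO_URL,
        data=request_body,
        headers={"Content-Type": "text/xml; charset=utf-8"},
        timeout=10,
    )
    print(f"HTTP {resp.status_code} in {resp.elapsed.total_seconds():.3f}s")
    # An echo service sends the request back as the response, so any difference
    # indicates manipulation by an intermediary (XML gateway, transformation
    # device, etc.) somewhere in the path.
    if resp.text.strip() == request_body.strip():
        print("Response matches request: no manipulation detected en route.")
    else:
        print("Response differs from request: inspect gateway/transformation rules.")
except requests.RequestException as exc:
    print(f"Connectivity problem at the web service level: {exc}")
```

If the response comes back identical, connectivity is confirmed and nothing in the path altered the message; a difference points at the gateway or transformation rules in between.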


Calculating QA Costs – Service Planning Costing (SPC)

The series on costing models includes Backing In Costing (BIC), Agile Anarchy Method (AAM) and the Just Test Something (JTS) method. The intent here is not to claim one model as king, but rather to evaluate the potential benefits and pitfalls of relying on one model exclusively. My hope is that by sharing these approaches, QA organizations will evaluate their current models and perhaps find room to tune them for greater excellence.

Service Planning Costing is done by calculating the desired number of test cases, multiplying that by the time needed to complete them and the cost of that time, and then adding any additional project costs. It requires a full understanding of the project before costing can be done, along with detailed project planning and scope.

SPC Model Inc. is costing a planned project. The new project will have 50 new web services, each averaging 5 functions per service. SPC Model Inc. standards require testing a minimum of one positive and one negative scenario per service function. Multiplying these numbers together, SPC Model Inc. plans for 500 test cases. The project plan calls for 5 expected development cycles or code drops. The security team will need to run their set of tests and the performance team theirs, and there is also a final lab and production implementation test. It is estimated that, at the team’s current skill level, each test case will take an average of 12 minutes to develop and complete, and the corporate rate is $50 per hour for QA resources. Finally, they plan on purchasing a new tool and 2 lab machines, paying for training, and hiring and on-boarding a new employee. The Service Planning Cost (SPC) comes to $63,000 for this project, as per the breakdown below.

SPC Base
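As a minimal sketch of how an SPC figure like this is assembled, the calculation can be laid out as below. How the security, performance, lab and production runs are counted as passes, and the split of the fixed costs, are assumptions for illustration only; the “SPC Base” breakdown referenced above is the authoritative version.

```python
# Structure of a Service Planning Costing (SPC) estimate, using the figures
# stated in the example above. The per-pass counting and the fixed-cost
# line items are ASSUMPTIONS for illustration; the post's "SPC Base"
# breakdown is the authoritative source behind the $63,000 figure.

services = 50
functions_per_service = 5
scenarios_per_function = 2          # one positive + one negative
test_cases = services * functions_per_service * scenarios_per_function  # 500

minutes_per_test_case = 12
hourly_rate = 50                    # corporate QA rate, $/hour

# Assumed execution passes: 5 development cycles plus security, performance,
# final lab and production implementation runs (how these extra runs are
# counted is an assumption here).
execution_passes = 5 + 4

execution_hours = test_cases * execution_passes * minutes_per_test_case / 60
execution_cost = execution_hours * hourly_rate

# Hypothetical split of the fixed costs (tool, 2 lab machines, training,
# hiring / on-boarding); the individual amounts are illustrative.
fixed_costs = {
    "new tool": 8_000,
    "lab machines (x2)": 4_000,
    "training": 3_000,
    "hiring / on-boarding": 3_000,
}

total = execution_cost + sum(fixed_costs.values())
print(f"{test_cases} test cases x {execution_passes} passes "
      f"= {execution_hours:.0f} hours -> ${execution_cost:,.0f}")
print(f"Fixed costs: ${sum(fixed_costs.values()):,}")
print(f"Service Planning Cost: ${total:,.0f}")
```

Laying the estimate out this way makes each line item explicit, which is exactly what makes it both auditable and, as discussed below, sensitive to small per-item errors.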

The SPC model is focused on breaking the testing down into as many small pieces as possible and assigning a value to each piece. Variations are possible, using averages across the project or breaking the project down further into stages. For instance, the first release may have fewer test cases or services, security testing could be a separate calculation, or the hourly rate could include training and hiring. The more granular the breakdown, the greater the potential accuracy, but also the more places an estimate can go wrong. Numbers like release cycles and time per test case can be inherited by reviewing similar previous projects or by running a short pilot. Whatever numbers are used, the result is still based on a plan, and plans can go wrong.

Advantages

  1. Although some of the numbers may come from previous projects, the actual costs are based on the real scope of the target project.
  2. The model supports organizational test coverage standards or objectives. The number and extent of test cases is planned for positive, negative, load, security, etc.
  3. Very easily communicated; it aids understanding between onshore and offshore teams and management, ensuring all parties work to a defined plan. This is one of the reasons the model is popular with outsourcing organizations: it reduces project risk and clearly defines the scope to prevent scope creep.
  4. Timelines, gates, KPIs and milestones are known.
  5. Inefficiencies can be readily and clearly identified and costs easily understood, for instance the cost of an additional code drop.

Disadvantages

  1. The line items can be very subjective, and hence vulnerable to padding or underestimating. A very small mistake in any line item is compounded and can result in significant over- or under-costing; for example, change the average time per test case by just 2 minutes.
  2. Does not allow for much flexibility in changing the plan. For instance, a particular service may warrant additional testing, or an additional requirement or unforeseen services may be added.
  3. Still dependent on other parties’ delivery. What if an extra code drop becomes needed because a previous issue was not fixed correctly?
  4. Relies on a totally flexible environment. A testing team cannot always be expanded or contracted in real time.
  5. Does not support the agile development model well.

Conclusion

Service Planning Costing (SPC) is not a silver bullet or an exact science, and it does not prevent misuse through padding to target a desired price or outcome. It can, however, be an extremely valuable tool for analyzing expenses, identifying inefficiencies and managing KPIs. These KPIs can be carried from one project to the next and “tuned” over time to plan better.

I will be covering more about these KPIs, and how an organization or individual may use the SPC model to “Dial In” for excellence, in future posts and at my TASSQ presentation.

Calculating QA Costs – Agile Anarchy Method (AAM)

This post continues the theme of QA costing methodologies, alongside Just Test Something (JTS), Backing In Costing (BIC) and Service Planning Costing (SPC). The intent here is not to claim one model as king, but rather to evaluate the potential benefits and pitfalls of relying on one model exclusively. My hope is that by sharing these approaches, QA organizations will evaluate their current models and perhaps find room to tune them for greater excellence.

Let me start by saying that the Agile Anarchy Method of COSTING is not the full Agile methodology. Agile Anarchy Method (AAM) is about maintaining a certain agility (anarchy) in order NOT to be tied to fixed costs.

It’s about the bleeding costs of Agile: selectively applying only parts of Agile methodology to the development process. AAM is usually used in organizations where there are frequent changes to the environment, the QA scope or the software being tested. It can even be found in organizations following a more traditional waterfall process for development. It’s also common in rapidly growing or newly founded organizations, where immature process and reactive planning are sometimes dressed up by calling it Agile, versus having to admit that it’s really just Anarchy.

Join us at STARCanada 2014

ST3PP, together with one of our partners, Crosscheck Networks Canada, will be exhibiting at the Software Testing Analysis & Review Canada Conference (STARCanada), “The Greatest Software Testing Conference on Earth”. You will find us at the Exhibition.

Where – Hilton Toronto, 145 Richmond Street West, Toronto, ON, M5H 2L2

When – Tuesday & Wednesday, 8-9 April 2014

We will have SOAPSonar and CLOUDPort on display and will be speaking to ST3PP’s mission. Come over and introduce yourself and get to know us better.

Tuning in for QA Excellence

The future of QA, especially in higher-salaried countries like Canada, is Excellence. Manual testing via repetitive basic unit tests, “blindly entering data”, has little future or return. I can’t get enough of articles like Computerworld’s Tech hotshots: The rise of the QA expert. In a rapidly changing world of technology, QA departments can sometimes seem locked in stasis, 30 years in the past. The greater challenge for any tool vendor is gaining these groups’ interest in evaluating change, not the tool’s features or functionality.

I usually divide Software QA into 3 buckets or aspects. The first, and largest expense in most QA environments, is People: the time costs for fingers and eyes to enter “data”. It’s not surprising, then, that much of QA cost-cutting effort is targeted at reducing People costs. Often People “tuning” amounts to cutting rather than development. When did your QA department last do any training or personal development? When, as a QA professional, did you last do any training on your own initiative, perhaps signing up for a JSON introduction course or learning a new tool? “But as an automation tool vendor, don’t you replace the need for People?” The answer is no; we require higher-skilled people to develop and use automation tools. Someone still needs to plan, develop and run the test cases, else who would we sell tools to? Fewer fingers and eyes maybe, but far more thought and Process.

That brings us to the second aspect of QA: Process. Business constantly needs to re-invent itself, and tuning business process is an important part of competing and achieving greater excellence. Yet how many software QA teams have you heard described as innovative? When did you last try an alternative, experimental process or embrace role changes, rather than objecting to proposed role changes? Have you tried getting QA involved earlier in the development cycle, or changing the requirements for hand-off from developers? Perhaps you brought some tests into the testing cycle earlier (like performance or identity) to provide more time for code fixes. Changing process requires the right people and the supporting infrastructure.

That brings us to the third aspect of Software QA: supporting infrastructure, or Tools. Just like People and Process, Tools by themselves will have little impact; they require the support of the people and the process. So often I hear of someone developing an in-house tool for testing, usually showing great skill but taking significant time. This person then leaves the company, leaving no one who knows how to use this expensive tool. I know of 3 large enterprises that used the same person to develop such a tool; that person is no longer with any of these companies. Tools cannot take more people to maintain than manual testing would. So too must the tool support your business process: there is little point having a great tool in the lab if the code drop is delayed and the team sits idle waiting.

The 3 aspects are not just linked but act as multipliers, with a compounded effect. For example, if your process is streamlined to cut one development cycle out of testing, it will impact both the people and the tools required. If a tool can cut 5 minutes off a test case, a significant impact can be seen in people and process. If you do both, cutting out a development cycle and saving 5 minutes per test case, the effect is multiplied, resulting in a huge impact on the entire SDLC in either more testing or less expense. Only through “dialing in” and tuning all 3 aspects can Quality Assurance Excellence be achieved.
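A quick back-of-the-envelope calculation illustrates the compounding. The figures below (500 test cases, 5 cycles, 17 minutes per case, $50 per hour) are assumptions purely for illustration, not numbers from this post.

```python
# Back-of-the-envelope illustration of the compounding effect described above.
# All figures are assumptions chosen purely for illustration.
test_cases = 500
cycles = 5
minutes_per_case = 17
hourly_rate = 50

def cost(cases, n_cycles, minutes):
    """Total QA labour cost for running every case in every cycle."""
    return cases * n_cycles * minutes / 60 * hourly_rate

baseline     = cost(test_cases, cycles,     minutes_per_case)
process_only = cost(test_cases, cycles - 1, minutes_per_case)      # cut one cycle
tool_only    = cost(test_cases, cycles,     minutes_per_case - 5)  # save 5 min/case
both         = cost(test_cases, cycles - 1, minutes_per_case - 5)  # compounded

print(f"baseline:        ${baseline:,.0f}")
print(f"one fewer cycle: ${process_only:,.0f}")
print(f"5 min/case less: ${tool_only:,.0f}")
print(f"both together:   ${both:,.0f}")
```

Either change alone helps, but applying both together compounds: in this example the baseline cost drops by more than 40 percent, versus roughly 20 and 29 percent for each change on its own.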

So please take some time to consider how you can personally achieve greater excellence as a QA professional. I, for one, both personally and as a tool vendor, support any such activity.

Calculating QA Costs – Just Test Something Method (JTS)

Adding to the costing models in the series, alongside Agile Anarchy Method, Backing In Costing and Service Planning Costing, Just Test Something is one of the more common models. The intent here is not to claim one model as king, but rather to evaluate the potential benefits and pitfalls of relying on one model exclusively. My hope is that by sharing these approaches, QA organizations will evaluate their current models and perhaps find room to tune them for greater excellence.

The Just Test Something (JTS) method of costing is when the price for a QA project is determined by external pressure and not by formal requirements for test coverage. Where BIC relied on past precedent, JTS is usually based on time to market, cost or some other business requirement or external force.

Now, before you grab your pitchforks and start packing the wood around the stake for my heretical costing model, let me mention that “…whatever we have time for…” was the most common response and feedback I got for the Percentage Coverage article. It was also the most common opinion among QA testers (as opposed to QA managers) on how they cost QA, with statements like “we get x weeks to test what we can”.

For example, JTS Model Inc.’s marketing department released a press release that JTS Software 1.0 would be GA in 2 weeks’ time. The final cut-off for any potential code fixes, however, was determined in a Go/No Go meeting 32 hours before GA. Development was still struggling to add the last few change requests added by the user requirements team. They expected the next code drop to be ready in 3 days’ time, but delivered it 5 days later, leaving QA only 3.5 working days of testing time. Despite working extra hours, QA was still finding a significant number of quality issues at the deadline meeting. The CEO decided, however, that the GA deadline was of greater concern than any as-yet-undiscovered code issues. As a result, performance, load and security testing were skipped, and only basic functional tests covering some uncalculated percentage of the application were completed.

Typical of start-ups and immature QA departments, Just Test Something (JTS) is often the result of a lack of QA focus, and is common in organizations that consider QA only an “un”necessary evil, often bordering on having users do final QA in production. Not to be confused with the recent trend of offering bug bounties, JTS places QA somewhere on a scale between “wish we could skip this step/expense” and “I guess we have to say we did SOME LEVEL of QA in the check box”.

JTS pays little attention to the number of services, percentage coverage or number of release drops. In fact, it usually involves very little planning and is mostly reactive, the process being “whatever we have time for, get busy”. What is to be tested, and how it will be done, is left up to a QA manager or even the individual tester to decide.

Advantages

  1. Low accountability and plausible deniability: QA can always say it was not given enough time, and there is usually a certain amount of acceptance that defects will make it into production.
  2. Costs are usually tightly controlled and known; it’s only the outcome (quality) that is estimated. Issues like percentage coverage, number of services, number of test cases, etc. seldom need to be described to management.
  3. Testers’ focus and time naturally shift to the most frequently used parts of the application, or the parts with more defects in the code, since the testing structure is less rigid and testing is focused on the highest priority, not total coverage.
  4. Flexibility for testers to test as and how they see fit, determining their own tools, process and focus, is often part of JTS and, together with the low accountability, is attractive to some QA staff.
  5. A final Go/No Go meeting is usually part of the SDLC, in which more than just QA weighs in on whether “enough” testing was done. If the level of QA is too low, this meeting can provide a last-minute reprieve.

Disadvantages

  1. QA’s role is heavily diminished, lessening its credibility and its ability to weigh in and ask for an extension in a Go/No Go meeting. Often little formal gating is done, and code is thrown at QA to get it off development’s plate, resulting in frequent release cycles.
  2. Lack of process can result in QA’s attention and resources not being evenly distributed, resulting in QA testing the most common parts of the code multiple times while ignoring others. The result is uneven coverage and possibly deeply embedded defects that can be missed for many releases.
  3. Certain steps, for example performance or security testing, are more frequently sacrificed due to the constraints. Eventually these steps fall out of the testing process entirely, as it becomes expected that they will be ignored.
  4. Usually the organization’s lack of focus on QA results in little training or education spent on developing QA Skills, Process or Tools; the focus, if anything, is on reporting progress. This further decreases the efficiency of what little QA is being done.
  5. Poor QA rapidly leads to a poor reputation. At some point management’s focus shifts to “Fixing Quality”, and alternative QA strategies like outsourcing, off-shoring and restructuring become commonplace as attempts are made to repair previously missed defects and a “weak” QA organization.

Conclusion

In reality, any QA department needs to balance time to market and other pressures with QA coverage. As mentioned in the first post of this series, these are not static models; companies may use one or more of them and sit somewhere on a scale from slightly applying a model to mostly relying on it. QA may wish it had unlimited time and resources at its disposal to do 100% test coverage, but this is rarely the case. What defines the JTS model is that QA coverage is determined by the pressure placed on it, not by the need to perform any particular level of due diligence.

So put away your pitchfork, and add a comment below if you wish to add to or detract from this post.