3 Ways to Get Started with CloudPort – Create

In the first post in this series, we used an iTunes JSON service and made a request via SOAPSonar while capturing the request and response using CloudPort’s proxy feature. This is great when your service is developed and available, but what if the service you need does not yet exist or needs to be altered?

Since I am not developing a service, I am going to use SOAPSonar to make a request of the real service and get a response. Then I will simply copy this response and paste it into CloudPort (with a small edit) and set up the right listener policies.

1. We are going to do a second iTunes JSON GET to list the albums for a given artist. Starting in SOAPSonar (you can use the same project or start a new one), create two new JSON tests. Rename the first alt-J Albums and the second Black Keys Albums. The queries are:

http://itunes.apple.com/lookup?id=558262494&entity=album
http://itunes.apple.com/lookup?id=5893059&entity=album

If you are wondering where these came from, I read the service description document and took the ID values from the artist responses returned by the search service we used previously. Commit and Send both. Now we have the requests and responses.

1 soapsonar query
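Outside SOAPSonar, those two queries are just plain HTTP GETs. Here is a minimal sketch using Python’s requests library, with the same lookup URLs, for anyone who wants to see the raw responses we are about to paste into CloudPort:

```python
import requests

# The same two album lookup queries SOAPSonar sends, as plain HTTP GETs.
queries = {
    "alt-J Albums": "http://itunes.apple.com/lookup?id=558262494&entity=album",
    "Black Keys Albums": "http://itunes.apple.com/lookup?id=5893059&entity=album",
}

for name, url in queries.items():
    response = requests.get(url)
    print(name, response.status_code)
    print(response.headers.get("Content-Type"))
    print(response.text[:200])  # start of the JSON body we will reuse in CloudPort
```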

2. Open CloudPort; you can create a new project or use the previous one. Right-click on Tests and create two new JSON simulators. Rename them alt-J Albums and Black Keys Albums.

1 New

3. Now we need to establish the endpoints and queries, or listeners. Select Add Manual Rule, set the listener Target to URL, and paste /lookup?id=558262494&entity=album in as the URL. Ensure it is set to Exists. Once you have created the new rule, delete the old catch-all rule. If you are wondering where this came from, it is everything we sent as a request after http://itunes.apple.com/.

3 Simulator

4. Now do the same for Black Keys, this time setting the target URL to match /lookup?id=5893059&entity=album and Exists. Notice there are a number of ways to identify an incoming request and hence trigger a response.

4 simulator BK

5. If you are using a new project, you also need to set the Network Listeners. If you are using the same project as in the capture tutorial, this should already exist. Make sure yours looks as below, with Name, IP, Port 8888 and URI.

5. Network Listener

6. Now we need to set up the responses. There are two parts to each response: the header and the body. Go back to the real query in SOAPSonar and copy and paste the entire response header and body into the CloudPort Response tab. Then commit.

6 copy

7. Do the same for the second Black Keys service, for both header and body.

7 response

8. Now we could edit these, adding albums, changing structure or renaming things, but the bands may not be happy. So all I will add is a small change above wrapperType:

"simulated": "simulated using CloudPort", (make sure you keep the trailing comma for correct JSON notation)

to both. 

11. New service
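If you prefer to make the edit programmatically rather than by hand, here is a hedged sketch of the idea: build a result entry with the extra "simulated" key placed just above wrapperType. The other fields shown are only placeholders, not the full iTunes payload.

```python
import json

# Sketch of the edit: insert a "simulated" key just above wrapperType in a result.
# The surrounding fields are placeholders, not the real iTunes response contents.
result = {
    "simulated": "simulated using CloudPort",
    "wrapperType": "collection",
    "collectionName": "An Awesome Wave",   # placeholder value
}

print(json.dumps(result, indent=2))
# Note the comma json.dumps emits after the "simulated" line: that is the comma
# the step above warns you not to forget when editing the response by hand.
```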

9. Notice that if you look at the Response Runtime Variables tab and your response is correctly JSON formatted, the graphical view will populate. If it is not, the response will still be sent, but you will not be able to use variables and the client may not understand your response.

9 graphical view
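A quick way to confirm that the response body you pasted is still valid JSON after your edits (and so will populate the graphical view) is simply to try parsing it. A minimal sketch:

```python
import json

# Paste the body portion of your CloudPort response here to confirm it still
# parses as JSON (a stray or missing comma is the usual culprit).
body = '{"simulated": "simulated using CloudPort", "resultCount": 0, "results": []}'

try:
    json.loads(body)
    print("Valid JSON: the graphical view should populate.")
except ValueError as error:
    print("Invalid JSON:", error)
```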

10. Let’s test our newly created services. Launch the Simulation Player by selecting the green arrow icon. Notice your listener URI, and that we now have 4 simulated services.

10 simulation player

11. Now clone the two real SOAPSonar tests and change the request URIs to point at the simulation listener (the same /lookup queries, but sent to the listener’s IP and port 8888 instead of itunes.apple.com). Commit and send both.

Do you see the changes we made?
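To double-check that the clones are really hitting the simulator and not Apple, you can also fire the same requests at the listener from outside SOAPSonar. A hedged sketch, assuming the listener from step 5 is running locally on port 8888:

```python
import requests

# Assumes the CloudPort network listener from step 5 is running on localhost:8888.
base = "http://localhost:8888"

for path in ("/lookup?id=558262494&entity=album", "/lookup?id=5893059&entity=album"):
    body = requests.get(base + path).json()
    # The real iTunes service will not have this key; only our simulation does.
    print(path, "->", body.get("simulated"))
```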

In this example we added one line, or possible variable, but we could just as well have created an entire service from scratch. Here is the zipped create simulation.

4. REDUCING SCOPE – Test Cycle Strategies

In the service plan costing model, we set the number of test cases at 20,000. The first post in this series was an introduction on ways to try and reduce that number. This, the 4th in the series, follows SDLC Strategies and Test Iteration Strategies and is part of ST3PP’s Best Practice Series aimed at developing better Tools, People and Process, looking at ways to improve QA efficiency in Canada in order to remain competitive at our higher wages.

Test Cycle

The easiest way is to drop something out of the testing: decrease the percentage coverage, don’t write test plans, test only simple scenarios or don’t create reports. Rather than reducing coverage, let’s look at how we can reduce the effort of running tests while maintaining quality.

1. Reuse

Reuse comes from the idea that if you create a test plan, execute it for a given environment, analyse the results and create a report, then there should be no benefit in repeating the exact same test. On the other hand, if there is a code or environment change, that test (and only that test) should not need to be created again, but only executed again, with the results analysed and reported.

In our service plan costing model, we used 500 services, 10 functions per service and 4 tests per function, making up the 20,000 test cases. Testing, however, should begin long before all the services are developed. Let’s say 20% of the services are developed. The first test iteration would only test 100 services, not 500, by 10 x 4, or 4,000 tests. You have great developers, and only 10% of those test cases show issues that development needs to address. Is there a benefit to running the other 90% of successful test cases again in the next release? Probably not, IF you are certain nothing has changed. When the next code drop happens, you can just test the 100 new services and the 10 changed ones. For simplicity, I never built this into the service plan costing model, in part because the impact depends on whether you are working in short AGILE sprints or longer waterfall-based code drops. For accuracy though, these are good measurements and KPIs to track and build into your model.
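The arithmetic behind that paragraph, as a small illustrative sketch using the same numbers from the service plan costing model (500 services x 10 functions x 4 tests):

```python
# Illustrative arithmetic only, using the numbers from the costing model above.
services_total = 500
functions_per_service = 10
tests_per_function = 4
tests_per_service = functions_per_service * tests_per_function  # 40

full_suite = services_total * tests_per_service                  # 20,000 test cases

# First iteration: only 20% of the services exist yet.
developed = int(services_total * 0.20)                           # 100 services
first_iteration = developed * tests_per_service                  # 4,000 test cases

# Next code drop: 100 new services plus the 10 that needed rework.
failed_services = int(developed * 0.10)                          # 10 services
second_iteration = (100 + failed_services) * tests_per_service   # 4,400 test cases

# Without reuse you would rerun everything developed so far.
without_reuse = (developed + 100) * tests_per_service            # 8,000 test cases

print(full_suite, first_iteration, second_iteration, without_reuse)
```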

An organization’s structure and process are not always rigid enough to know for certain that a developer did not touch something, and the risk is then missing an undocumented change. Enter automation.

2. Automation

If there is one thing I hear constantly about automation, it is that it can take longer to automate and maintain scripts than it does to test manually. Automation is not the cure for everything. For it to be beneficial, it requires changes to your process and approach. Creating the test plan, data sources and success criteria usually takes longer with automation than with manual testing, but running the test and generating a report should be far quicker and “automated”. This means that for a single test iteration or cycle, automation can take longer than manual testing. On the other hand, once created, a good automation tool will allow the test to be executed across different iterations and environments with little or no change, dramatically decreasing the effort. Not all tools are perfect, and redevelopment of the test cases for the smallest change in code, or new test scripts for performance or load, can negate the benefits quickly. This is not a fault of automation, but of the tools or process.
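A rough break-even sketch of that trade-off. The hours below are made-up, illustrative assumptions, not figures from this post; the point is only that the one-time authoring cost is recovered over repeated, cheaper executions:

```python
# Break-even sketch: all three numbers below are assumed for illustration.
manual_run_hours = 40        # effort to run one full manual cycle (assumed)
automated_run_hours = 4      # effort to execute the automated suite once (assumed)
automation_build_hours = 120 # one-time effort to create the automated suite (assumed)

saving_per_iteration = manual_run_hours - automated_run_hours
break_even_iterations = automation_build_hours / saving_per_iteration
print(f"Automation pays for itself after ~{break_even_iterations:.1f} iterations")
```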

Time taken to create test cases aside, there are other issues automation addresses:

  • Regression testing (have you tried it without automation?): how exactly are you sure you are not missing an undocumented change?
  • How do you plan on continuously testing a service without automation?
  • If you need to run through a thousand variables, how long will it take to execute those tests manually?
  • How long does it take to execute an automated test, once already developed, vs. a manual one?
  • What does executing (not creating) an automated test cost in man hours?
  • What about testing after launch in the SDLC? Will you do that manually?
  • Do automation QA testers really cost more?

To be fair to automation, one needs to compare its impact on the entire SDLC, utilizing automation with a suitably redesigned process. If the effort is then not worth the reward, it’s a lesson learnt; not every project will be suited to automation.

Conclusion

A final note on documentation: test case development and reporting documentation are often the biggest time consumers in testing, and time equals cost. Documentation needs to balance detail with effort. Ironically, I spent a chunk of time this morning reading through a 37-page test plan, yet on looking to implement the first test case, I found a required parameter not mentioned anywhere in the 37 pages. I then looked at a different test plan, and it was not there either, yet I found it in seconds in that tool’s automation test script.

Fundamentally, we are looking for ways to save time by focusing on the areas of highest impact. It’s not about slavishly following a set process, but about defining the process to be more effective and hence increasing our value and software quality.

Speedometer Confusion

I follow Seth Godin, as he has incredible insight and an ability to simplify things down to a few key words. His post Speedometer Confusion is one of many such posts.

Too often, being busy is confused with being productive. We set metrics like how many defects were found, how many test cases were run and how many hours were spent testing. But what exactly do these metrics mean for software quality? Looking down at our speedometer, we may think we have been travelling at 200 km/hour for the last 8 hours, covering some 1,600 km. But what if, when we check the GPS, it shows we have only covered 100 km?

We are often wary of establishing effective KPIs, worried about what these would really reveal, or worried management may not understand, so we opt to track “safe” KPIs that show some form of effort rather than effect. Rather than the number of test cases, why not measure percentage coverage? Rather than defects found, why not measure defects missed? Rather than hours spent testing, hours saved?

It is nearly impossible to improve, if one fails to take an accurate measure of the current situation and set goals for the desired outcome.

3. REDUCING SCOPE – TEST ITERATION STRATEGIES

Following on from SDLC Strategies, the second group or area to focus on when attempting to reduce the total amount of testing is ways to reduce the number of test iterations or levels.

Testing Levels

Every organization has its own st3pps in its SDLC and organizational structure. This often depends on the defined roles and functions in the organization and its development approach. A large waterfall-based organization may have dedicated security, identity and load testing teams, while a smaller agile organization may have these functions fall under a single team or person. In every organization, there is a struggle between finding the most efficient process and budget, personal agendas and overcoming inertia. “It’s always been that way” and “it’s done by a different team” do not imply optimized.

Unit Testing

With JSON services not having WSDL or Schema, many organizations have had to find new ways for Development and Testing to work together to ensure the correct coverage. Developers and testers may work off the same service description document, but how accurate is the document, and how well does it match the developed services? What if a developer added or missed some functionality when coding? How does Testing even know what services are there for testing? Collaborative development approaches like Agile or Extreme Programming may help, yet they require testers to have a fair understanding of JSON.

A second challenge is how you gate, or ensure that developers have done at least some level of due diligence, before passing the code drop over to Testing to begin testing. If the defect density is too high and code needs to be heavily reworked, the entire test cycle is wasted. If previously found issues are not suitably addressed, the same retesting is again a waste. Release management can take a significant amount of time and resources. To prevent this, various organizations have tried different approaches, from formal sign-offs, to change documentation, to release management and change management tools. Another approach is to have one team develop the basic unit test, and for this test to be used as the gating process. Testing then takes these unit tests and expands them out to include far more intensive coverage. In SOAP it is most often the testers that develop the unit test, but with JSON it is now sometimes the developers that develop the unit test together with the service. Sharing tools and test cases only makes sense, as the developers require some way to test the code they are developing in any case. Handing it to Testing to expand is good practice, to ensure that “fresh eyes” are used. But what if a developer adds code that is not in the documentation and does not hand a test case over either? Is this risk acceptable?

Whatever the approach, the objective remains the same: from release 0.1 to GA, how can one reduce the number of test cases? The concept also remains the same: shift as much testing as possible left, or earlier in the SDLC, right down to testing while developing, and make sure that “code drops” have a certain amount of maturity.

Integration Testing

Integration testing of old was usually done by a different team to that of unit testing. As per the post on Continuous Testing in Agile, if using an automation tool, the test cases should be the same. Integration testing should just expand the test cases developed in unit testing to include things like encryption, cookies, identity, performance of each service (and enablers), etc. These individual test cases are then linked for automation in what we call chaining. Integration testing can often then be done concurrently with unit testing, much earlier in the SDLC.

Using the right integrated automation tool, one that supports encryption, identity, performance and as many of your integration tests as possible in one place, enables the elimination of distinct test cycles for each of the integration tests. SOAPSonar’s ability to do functional, security, performance and load testing using the same test case is the number one reason customers say they selected it. That said, the performance of an individual service is something I recommend be part of a unit test and a functional requirement.

Consolidating integration tests into as few as possible and shifting as much as possible into the functional unit test cycle can greatly reduce the number of iterations. The impact of removing a single test iteration can clearly be seen in the service plan costing model. This is not about increasing risk by skipping systems tests.

Systems Testing

Systems testing requires a certain amount of code to be developed and tested, and the environment to be built.  Moving it too early in the SDLC can actually add to the testing needs. On the other hand, if the integration and unit tests are all automated, and completed successfully, systems testing becomes more about the environment and less about code. Finding code quality issues at this point can be very expensive and troublesome to fix in time.

Again, if all the unit tests and integration tests are automated in the same test case, and if the tool offers the ability to do systems tests, e.g. geographically dispersed load testing, the effort required during this iteration can be greatly reduced. A note on performance vs. load testing: too many times I hear that performance testing is left until systems testing, and suddenly it is discovered days before release that there are performance issues. How a service performs should be tested far earlier. At this stage, load testing should be establishing that the environment (servers, network and other infrastructure) is suitable, not the performance of individual services or enablers.

Using virtualization or simulated services to stand in for services that are either unavailable or cannot be load tested can be very useful in enabling earlier systems testing. Simulated services are a key troubleshooting tool for eliminating environmental factors like network, cloud, data integrity and servers.

Acceptance Testing

Acceptance testing is where you suddenly discover if all the work was correctly focused. Have your developers and testers spent all this time developing and testing the right requirements? NIST lists business requirements as the number one issue in software development. Often the users and sponsors are not technically minded, and extracting the requirements can be very difficult. Bringing acceptance testing in earlier to review what is being done can greatly decrease the time and effort spent delivering incorrect requirements.

In web services, acceptance testing is usually heavily weighted toward client-side usability and visuals: the need to see a working example and not just a JPEG. This is another benefit of simulated services. By creating a basic simulated service earlier in the SDLC, the GUI team can begin work. This GUI can then be used together with the simulated services for earlier acceptance testing and to compare against the end services built.

Regression

Regression testing between releases is an important way to minimize the testing scope. If a regression test of a service shows no code change between release 0.4 and 0.5, then that service was not changed since the last test cycle and need not be tested again. Let’s say a regression test shows 30% of the code had no changes; using the service plan costing model, how much testing could that save? That said, if the tests are automated, there may be little benefit in not “pushing that button” and retesting them anyway.
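For illustration only, applying that 30% “unchanged” figure to the 20,000 test cases from the service plan costing model:

```python
# Illustrative arithmetic: how many cases drop out if 30% of the code is unchanged.
total_test_cases = 20_000
unchanged_fraction = 0.30

skippable = int(total_test_cases * unchanged_fraction)   # 6,000 cases need not be rerun
remaining = total_test_cases - skippable                 # 14,000 cases still in scope
print(f"Skippable: {skippable}, still to run: {remaining}")
```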

Where regression testing really shines is post launch. The maintenance phase of software often costs far more than the development phase. Yet if the automation test cases developed during the development cycle can also be used for regression testing during the maintenance cycle, the effort is greatly reduced. This “continual testing” post production, and even API monitoring, is a rapidly growing area of focus for many organizations as more complex, meshed applications are being developed.

A note here: so often I hear that security or some other team won’t allow for automation. The impact, however, of a single systems test iteration not being supported can render any automation useless in the production environment and prevent any regression testing using the previously built test cases.

Conclusion

There is a lot about automation in this post. Every day I hear of benefits and issues with automation. Many organizations try to apply automation to their old process, find little benefit and become discouraged. Alternatively, a tester involved in load, security or some other single test iteration seeks a tool to do just that iteration, sometimes just for control of their environment, breaking the larger benefit by doing so. The key to successful automation is an end-to-end approach from the start of the SDLC onward. You may choose not to automate the whole process, or to use other tools, but the design and application of automation needs to be done with the entire cycle in mind.

The next post will be on ways to reduce the number of unit tests.

 

SOAPSonar – Testing SOAP, REST or JSON Services

“What is the difference between testing a SOAP service vs. a JSON/REST or other service using SOAPSonar?” After trying to answer this question verbally three times in the last week, I thought it a good idea to show it in a post.

  • SOAP – “Simple Object Access Protocol” usually uses XML and has a WSDL. It also has an explicit error format in SOAP Fault messages. It tends to be heavier weight, and services are often far larger.
  • REST – Representational State Transfer is a software architectural style consisting of a coordinated set of architectural constraints applied to components, connectors and data elements within a distributed hypermedia system; JSON is one of the data formats commonly used with it.
  • JSON – JavaScript Object Notation uses readable text (not XML) to transmit data objects consisting of attribute–value pairs. JSON does not use WSDL (the similar WADL is unpopular, still in draft and seldom used), but usually relies on a service description document. JSON Schema is also currently seldom used. JSON has no explicit error format. This makes JSON lightweight and ideal for mobile applications.

So what does that really mean for someone using SOAPSonar?

The Difference

With a SOAP Service

You can use the Capture WSDL bar and enter the URI with ?wsdl appended to discover all the available services. Try it now with:

http://www.w3schools.com/webservices/tempconvert.asmx?wsdl

Notice that TempConvert and its two services are automatically populated. When you select FahrenheitToCelsius_1, notice that the Body is populated with a field in SOAPSonar. If you enter a value, commit and send, you get your XML response.

1. WSDL

SOAPSonar offers a way to view the XML request and the request headers, using the tab labelled XML. The same is possible in JSON, where the headers also tend to be lighter weight.

2. XML
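For reference, the XML request SOAPSonar builds from the WSDL is a SOAP envelope roughly like the one in the sketch below, sent here with Python’s requests library. The namespace and SOAPAction values are assumptions; confirm them against the actual tempconvert WSDL before relying on this.

```python
import requests

# Sketch only: posting the FahrenheitToCelsius request outside SOAPSonar.
# The namespace and SOAPAction below are assumptions taken from the style of
# the WSDL; verify them against the real WSDL before use.
envelope = """<?xml version="1.0" encoding="utf-8"?>
<soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">
  <soap:Body>
    <FahrenheitToCelsius xmlns="http://www.w3schools.com/webservices/">
      <Fahrenheit>80</Fahrenheit>
    </FahrenheitToCelsius>
  </soap:Body>
</soap:Envelope>"""

response = requests.post(
    "http://www.w3schools.com/webservices/tempconvert.asmx",
    data=envelope,
    headers={
        "Content-Type": "text/xml; charset=utf-8",
        "SOAPAction": "http://www.w3schools.com/webservices/FahrenheitToCelsius",
    },
)
print(response.status_code)
print(response.text)  # XML response containing the conversion result
```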

You can also go to Documents and View Schema, something that most likely does not exist for JSON.

3. View Schema

With a JSON Service

There is no WSDL that can be captured, and the chances are there is no schema. This means it is not possible for a tester to automatically discover services in the same way. In SOAPSonar, we start by selecting File, New Test Group, and then we have to name the test group. We can then add a new test by right-clicking and choosing New JSON Test, or the more generic New REST Test, and then naming each one.

5 New JSON

We then need a URI, the query parameters and the method. Let’s use

http://webservices.daehosting.com/services/TemperatureConversions.wso/FahrenheitToCelcius/JSON

as the URI, ?nFahrenheit=decimal as the parameter to send, and GET as the method. Then, for 80 as the value, we replace decimal with 80. How do I know this? I read the service definition document and example. The REST view in SOAPSonar would be as below. Notice the body of the request is frequently empty.

6 REST View

The JSON view is a single query line in the URI, plus the method. There is no WSDL to view, and although incorrect queries will return an error, the description is limited.

7 JSON
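Outside the tool, the same request is just an HTTP GET. A minimal sketch with Python’s requests library; the endpoint and the nFahrenheit parameter come from the service description referenced above, and the shape of the JSON response is not assumed here:

```python
import requests

# The same temperature conversion GET, made outside SOAPSonar.
url = ("http://webservices.daehosting.com/services/"
       "TemperatureConversions.wso/FahrenheitToCelcius/JSON")

response = requests.get(url, params={"nFahrenheit": 80})
print(response.status_code)
print(response.text)  # inspect the returned JSON to see the conversion result
```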

So how then do testers know what to test? It is usually one of four ways:
  1. The tester reviews the JSON code and looks for all URIs, methods and attribute–value pairs, and reverse engineers test cases. This takes significant JSON knowledge.
  2. The tester relies on the service description document, which should define all attribute–value pairs, methods, URIs and query strings. This requires good documentation.
  3. The developer and/or tester (Agile facilitates this) create and define the unit tests together. This unit test is then used to validate the basic functionality of each function by both developer and tester. The tester then adds automation data sources (ADS), chains functions, tests negative scenarios, load, and all additional aspects of the function to get the desired coverage.
  4. You embrace the yet-to-be-finalized JSON Schema standard (tough, given its level of maturity).

With JSON services, defining success criteria is also extremely valuable due to the lack of an explicit error format. It is also far easier for a developer to make minor changes to the code, as they don’t need to update a schema, making regression testing important.
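As a rough illustration of why explicit success criteria matter with JSON, here is a hedged sketch of the kind of checks a test harness might apply; the field names in the usage line are the iTunes-style ones mentioned earlier, used only as an example:

```python
import json

def check_json_success(status_code, body, required_fields):
    """Illustrative success criteria for a JSON service with no explicit error
    format: status code, well-formed JSON and expected fields must all be
    asserted explicitly by the test."""
    if status_code != 200:
        return False
    try:
        payload = json.loads(body)
    except ValueError:
        return False  # body is not valid JSON at all
    return all(field in payload for field in required_fields)

# Example usage with iTunes-style fields (illustrative only).
print(check_json_success(200, '{"resultCount": 1, "results": []}',
                         ["resultCount", "results"]))
```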

The Same

So now that we have covered the differences, the rest is much the same. Lighter weight JSON services tend to be much smaller, and the structure is very easy to understand. Be it SOAP or REST, SOAPSonar (and CloudPort) will identify all the variables and display them in the same manner.

Here is what that SOAP service looks like in graphical view for both request and response in the Runtime Variables tab. Any of these variables can now be used for chaining, automation data sources, success criteria, regression and a variety of testing options, using the right-click option.

4 Variable reference

Here is what the JSON service looks like in graphical view for both request and response in the Runtime Variables tab. Any of these variables can be used in the same way as with SOAP: for chaining, automation data sources, success criteria, regression and a variety of testing options, using the right-click option.

8. JSON Variables

If you wish to use a variable with SOAP, you right-click and add it in the field.

9 Query

If you wish to use a variable with JSON, you right-click and add it in the URI (or body occasionally) in the same way.

10 JSON Query

Conclusion

Yes, there are differences testing SOAP vs. REST when using SOAPSonar. The lightweight nature of JSON, which makes it attractive, requires closer ties to development and more rigorous documentation in order to ensure that the service is “discovered” and tested. This means Testing and Development need a clearly defined process, demarcation, deliverables and co-operation between developers and testers.

I hope this helps those QA professionals who are now testing JSON rather than SOAP services to adapt to the changes quicker. Questions? Comments?