PERFORMANCE TUNING MOBILE API – ENABLERS

The logic behind APIs can involve look-ups on other systems. APIs also address things like identifying the user, authentication, security, encryption and message signing, and many of these functions reside on other centralized or shared systems. A trend in the Data Economy, SaaS and Cloud services is to do mash-ups of APIs: either consuming APIs from outside your organization as part of your application, or consuming, refining and then providing them in turn. Collectively, I refer to these as Enablers: services that support your application, yet are neither exclusive to it nor under your control. The vast majority of the performance issues I have been involved in were due to some Enabler or other being slow or failing under load.

Outage

When SOAPSonar identifies a particular API as having a performance issue, it can be difficult to track down the exact reason if the service is supported by one or more Enablers. For example, knowing that your authentication service is OAuth-based and is taking 10 seconds is all well and fine, but where exactly is the problem, if any?

Enablers = Customer Experience – (API + Network + Client)

In this series we covered how to calculate API, Network and Client performance. In fact, we can nullify Network and Client performance by using SOAPSonar to test the API locally. But how then can one establish the impact of Enablers? We reduce the latency added by the Enablers to near zero by replacing them with mocked virtual services local to the API.
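As a rough illustration of the arithmetic behind the formula above, the Enabler contribution can be derived by subtracting the locally measured API, Network and Client times from the end-to-end customer experience. The sketch below is mine, not a SOAPSonar feature, and the timing values are hypothetical.

```python
def enabler_latency(customer_experience: float, api: float,
                    network: float, client: float) -> float:
    """Enablers = Customer Experience - (API + Network + Client).

    All values are response times in seconds, e.g. 90th-percentile
    figures taken from a load test.
    """
    return customer_experience - (api + network + client)


# Hypothetical 90th-percentile timings (seconds) for one transaction:
latency = enabler_latency(customer_experience=12.0, api=9.5,
                          network=1.5, client=0.5)
print(f"Latency attributable to Enablers: {latency:.1f}s")  # -> 0.5s
```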

CLOUDPORT can “capture” and replay services using its free run-time player, without requiring the underlying infrastructure. Although CLOUDPORT includes Workflow to add latency and mimic more real-life scenarios, in this case we wish to remove as much latency associated with an Enabler as we can. Running the same SOAPSonar test case against a local run-time of the captured responses (a virtualized, local mocked service) will perform differently. The size of the difference indicates the performance of the Enabler, making it possible to identify poorly performing Enablers.
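Conceptually, a captured response replayed locally behaves like a tiny stub server that always returns the recorded payload with near-zero latency. The sketch below is not CLOUDPORT; it merely illustrates the idea using Python's standard library, and the captured OAuth-style payload is invented for the example.

```python
from http.server import BaseHTTPRequestHandler, HTTPServer

# A previously "captured" response body, hard-coded for the illustration.
CAPTURED_RESPONSE = b'{"access_token": "example-token", "expires_in": 3600}'

class MockEnabler(BaseHTTPRequestHandler):
    """Replays the captured response for every request, adding no real work."""

    def do_POST(self):
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(CAPTURED_RESPONSE)))
        self.end_headers()
        self.wfile.write(CAPTURED_RESPONSE)

    do_GET = do_POST  # respond identically to GET requests

if __name__ == "__main__":
    # Point the API under test at http://localhost:8081 instead of the real Enabler.
    HTTPServer(("localhost", 8081), MockEnabler).serve_forever()
```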

Going back to our OAuth example, say Acme is using a SaaS OAuth authentication service hosted for them at some unknown cloud location. The 10 seconds seems to indicate that this service is performing poorly, but is it the API, the network or the SaaS vendor? By capturing the response, moving it from the cloud to a local run-time, and then running the test again, the 90th-percentile response time is 9.5 seconds. Clearly it is not the Enabler run by the SaaS vendor, nor the network, that is performing badly: the Enabler accounts for only 0.5 seconds of the response time, 90% of the time.
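The same subtraction can be written out directly. This small helper is only a restatement of the Acme figures from the paragraph above (10 seconds end to end, 9.5 seconds with the Enabler mocked locally), not a tool feature.

```python
def enabler_contribution(total_with_enabler: float,
                         total_with_local_mock: float) -> float:
    """Time attributable to a remote Enabler: the run against the real
    Enabler minus the run against its local mocked run-time."""
    return total_with_enabler - total_with_local_mock

# Acme's OAuth example: 10.0s end to end, 9.5s with the Enabler mocked locally.
delta = enabler_contribution(10.0, 9.5)
print(f"Enabler (and its network path) accounts for {delta:.1f}s")  # -> 0.5s
print(f"Remaining {10.0 - delta:.1f}s lies in the API itself")      # -> 9.5s
```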

Replacing individual cloud services with a run-time, or placing the run-time in alternate locations and then load testing against these run-times, is a rapid and simple way of understanding the impact a remote service has on user experience. Say Acme wants to move their customer record information to a VM cloud service provider. What would the impact on user experience be for their mobile application? They could move all their infrastructure to a VM cloud, make sure they don’t have data integrity issues for the test, and then run the test. A far more rapid option would be to simply place a virtual run-time player instance in the cloud and test against that. The difference between the local and the remote run-time response rates is the performance impact that moving to the VM cloud would have.
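To make the comparison concrete, the sketch below times the same request against a local run-time and against a run-time hosted in the candidate cloud, then reports the difference. The endpoint URLs are placeholders, and this is a simplified stand-in for a proper SOAPSonar load test rather than a replacement for it.

```python
import time
import urllib.request

# Placeholder endpoints: the same captured service replayed in two locations.
LOCAL_RUNTIME = "http://localhost:8081/customer-records"
CLOUD_RUNTIME = "http://runtime.example-vm-cloud.com/customer-records"

def time_request(url: str, samples: int = 10) -> float:
    """Average response time, in seconds, over a handful of requests."""
    total = 0.0
    for _ in range(samples):
        start = time.perf_counter()
        with urllib.request.urlopen(url) as response:
            response.read()
        total += time.perf_counter() - start
    return total / samples

local = time_request(LOCAL_RUNTIME)
remote = time_request(CLOUD_RUNTIME)
print(f"Local run-time:  {local:.3f}s")
print(f"Cloud run-time:  {remote:.3f}s")
print(f"Estimated cost of moving to the VM cloud: {remote - local:.3f}s per call")
```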

Identifying poorly performing Enablers, or the impact of moving services, should not be an extensive exercise, but it is a vital part of QA performance testing.