PERFORMANCE TUNING MOBILE API – CLIENT
In my first post in this series, I highlighted the need to isolate and break down the user experience into logical and measurable portions to be used as a baseline.
The client, the device in the user's hand at the end of the chain of factors influencing performance, often gets the most focus. As it sits at the end of the chain, it is the sum of all the others and, unarguably, the user's final experience. That said, the client's own impact on performance is only the time added from the moment the device receives a complete message until it is displayed, or from the point of submit until the request leaves the device. It may not be hard to identify when a mobile application is giving a poor user experience, but QA also needs to identify why.

Seldom are the service and the client developed by the same team, and we encourage that they be tested independently as well. An Android client on a Nexus phone will most likely call the same API that an iPhone client does, but the software clients on the two phones may be very different. There will certainly be differences at the lower layers, where the client interfaces with the operating system, screen and other hardware. What is more, the API lifecycle can be very different: the same API could be used by a new phone released next month, a tablet, a desktop, or a totally different application written a year later. The "API economy" is a term used to describe an approach of developing services independently of the client, often relying on third parties outside of your direct control to develop their own clients, thereby ignoring this portion of the performance equation.
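To make those two measurement windows concrete, here is a minimal sketch in Python. The timestamp names and the idea of instrumenting the client this way are illustrative assumptions, not a prescribed method:

```python
from dataclasses import dataclass

@dataclass
class RequestTimings:
    """Hypothetical timestamps (seconds) recorded on the device."""
    submit_tapped: float        # user taps submit
    request_left_device: float  # request leaves the device
    response_received: float    # complete message received by the device
    render_complete: float      # response displayed to the user

    @property
    def client_seconds(self) -> float:
        # Client impact = outbound handling time + inbound render time.
        return ((self.request_left_device - self.submit_tapped)
                + (self.render_complete - self.response_received))
```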
There is a continuous stream of new devices and clients. Testing every permutation, with custom ROMs, other poorly developed applications installed, and independent wireless service provider configurations, is a mammoth task. A number of vendors offer testing against real or virtualized device profiles, covering more than simple functional testing. In the end, however, if the number of devices to be tested is limited, this can simply be done locally, in-house, on the devices themselves.
Client Performance = User Experience – (Enablers + API + Network)
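As a worked example of the equation, the short Python sketch below (with made-up timings) subtracts the measured Enablers, API and Network portions from the end-to-end figure:

```python
def client_performance(user_experience: float, enablers: float,
                       api: float, network: float) -> float:
    """Client Performance = User Experience - (Enablers + API + Network)."""
    return user_experience - (enablers + api + network)

# Hypothetical measurements in seconds: 2.4 s end to end,
# 0.5 s Enablers, 0.6 s API, 0.9 s Network.
print(round(client_performance(2.4, 0.5, 0.6, 0.9), 3))  # -> 0.4 s for the client
```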
If we know the end-to-end User Experience and the Enablers, API and Network portions, the impact of the client on user experience can be calculated. Alternatively, we can eliminate some of these other portions. For instance, we can largely eliminate the network by connecting via WiFi to the same network segment where the API server is located. CLOUDPort is an API virtualization tool: it allows "mock" services to be created and run without the underlying infrastructure (the real API and Enablers), and it can do this under load and report performance. By capturing and replaying the Enabler and API responses locally, we eliminate any performance issues relating to Enablers, API and Network. We then have a value for the Client part of the equation and a baseline for a given list of devices. What percentage of overall performance does your client application add to your application?
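As a rough stand-in for what such a virtualization tool does (CLOUDPort itself is a commercial product with its own interface; the endpoint and payload below are invented for illustration), a captured response can be replayed from a local mock over the same WiFi segment:

```python
# Replay a captured response locally so the Enablers, API and Network
# portions drop out of the equation, leaving only the client.
from http.server import BaseHTTPRequestHandler, HTTPServer

CAPTURED_RESPONSE = b'{"status": "ok", "items": []}'  # hypothetical recorded payload

class MockAPIHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(CAPTURED_RESPONSE)))
        self.end_headers()
        self.wfile.write(CAPTURED_RESPONSE)

if __name__ == "__main__":
    # Point the device's client at this host over local WiFi.
    HTTPServer(("0.0.0.0", 8080), MockAPIHandler).serve_forever()
```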
Client performance issues fall into two groups: your client component and the device component. Identifying which group a performance issue falls into is easier once you have isolated and baselined Client Performance. If you have removed the other variables and your client performs well on 9 out of 10 devices, then the performance issue is likely related to that particular device. If a number of devices running the same OS level perform badly, perhaps there is an incompatibility with that OS, and perhaps the client can be altered to work around it. If the client portion is a large percentage of overall performance across all devices, that tends to indicate the client itself requires attention. Identifying a performance issue does not mean it is readily repairable: there is often little that can be done to get device manufacturers to correct problems you identify.
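A hedged sketch of that triage logic, with invented device names and timings: if one device's isolated client time is far off the group median, suspect the device; if all devices are slow, suspect the client.

```python
# Hypothetical isolated client timings (seconds) per device.
timings = {"Nexus 5": 0.41, "Galaxy S4": 0.39, "iPhone 5s": 0.44,
           "Moto G": 0.40, "BudgetPhone X": 1.35}

median = sorted(timings.values())[len(timings) // 2]
for device, seconds in timings.items():
    if seconds > 2 * median:
        print(f"{device}: {seconds:.2f}s - investigate this device")
    else:
        print(f"{device}: {seconds:.2f}s - within the client baseline")
```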
A better business requirement for mobile application performance should address the client portion independently and limit the devices covered. For example: the client, running on the top 10 devices as reported by xyz, shall perform all requests in under 0.8 seconds, independent of the Enablers, API or Network. This allows both functional and performance testing to limit their focus to those particular devices.
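Such a requirement translates directly into an automated check. The sketch below uses pytest, with a hypothetical device list and measured client portions, to enforce the 0.8-second budget per device:

```python
import pytest

# Hypothetical isolated client timings for the target device list.
DEVICE_TIMINGS = [("Nexus 5", 0.41), ("iPhone 5s", 0.44), ("Galaxy S4", 0.79)]

@pytest.mark.parametrize("device,client_seconds", DEVICE_TIMINGS)
def test_client_portion_meets_budget(device, client_seconds):
    # The requirement: under 0.8 s, independent of Enablers, API or Network.
    assert client_seconds < 0.8, f"{device} exceeds the 0.8 s client budget"
```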
There are also a significant number of tools that emulate devices or measure client user experience and performance. One of these may be added to ST3PP's tool set after we complete some investigation. The constant change in this space, however, presents a unique set of challenges.
The next post covers the Network: ways to isolate and understand the impact the Network has on your overall performance for troubleshooting.