Latency Kills and How You Can Improve It in GAE Apps

Posted by Raj Chohan on 6/28/17 2:49 PM


High latency can kill a user’s experience and hurt your bottom line. An Amazon study found that every tenth of a second added to page render time measurably hurt revenue, and Google found that an extra half second of response time led to a 20% drop in users [1].

Many applications running on Google App Engine and AppScale would benefit greatly from being faster, if only to keep their more fickle customers and users happy. In this blog post we detail some methods, techniques, and tools you can use to speed up your web application.

 

Pregenerate Views

Pregenerate data and views rather than calculating them when the page is requested. One of our AppScale customers generated their dashboard information only when the user hit the website. The number of queries required to fetch the data added significant latency (the time spent number crunching was minuscule in comparison). They were using ndb, but since the data was constantly being updated, memcache did not help. Instead, build the view for each user as data updates arrive. Store the precalculated values in a single entity whose key name is based on the requesting user (gets are faster than queries), and use ndb so there is a chance the entity is served from memcache for a much faster response. For the customer in question this lowered the latency of their main page from several seconds to less than 500 ms.
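Here is a minimal sketch of the idea in ndb. The DashboardView model, its properties, and the update_dashboard/get_dashboard helpers are hypothetical names; adapt them to your own data model.

```python
# Sketch: keep a pregenerated, per-user view entity instead of running
# queries on every page load. Names below are illustrative, not prescriptive.
from google.appengine.ext import ndb


class DashboardView(ndb.Model):
    # One entity per user, holding the already-crunched numbers the
    # dashboard needs, so the page handler never has to run queries.
    totals_json = ndb.JsonProperty()
    updated_at = ndb.DateTimeProperty(auto_now=True)


def update_dashboard(user_id, totals):
    # Called from the write path (whenever new data arrives), not from
    # the page handler. The key name is derived from the user id.
    DashboardView(id=user_id, totals_json=totals).put()


def get_dashboard(user_id):
    # A get by key is cheaper than a query, and ndb may serve it from
    # memcache, so the page handler stays fast.
    view = DashboardView.get_by_id(user_id)
    return view.totals_json if view else {}
```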

 

Do Things Concurrently

Google App Engine APIs can be called asynchronously. Programs are generally written with data being fetched synchronously, which causes latency to pile up, since each blocking call takes up a bit of time. Alternatively, you can fetch all the required data at the same time and have the whole thing take only as long as the slowest datastore operation. If every datastore operation depends on the result of the previous one, this suggestion doesn’t help much, but we’ve seen a multitude of applications with independent data workflows that can be staged together for a speedup. Consider two independent chains of dependent operations:

A -> B -> C

X -> Y -> Z

In the example above we would run A and X together asynchronously, block on both, and then use the result of each to start B and Y. Rinse and repeat to run C and Z. By having operations overlap you’ll be able to trim off some latency.
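A sketch of that overlap using ndb’s async API is below. The a_key and x_key arguments and the b_key_for/c_key_for/y_key_for/z_key_for lookups are hypothetical stand-ins for however your application derives the next key in each chain.

```python
# Sketch: overlap two independent chains of dependent gets with ndb futures.
from google.appengine.ext import ndb


def fetch_chains(a_key, x_key):
    # Kick off A and X at the same time instead of one after the other.
    a_future = a_key.get_async()
    x_future = x_key.get_async()
    a, x = a_future.get_result(), x_future.get_result()

    # Use the results of A and X to start B and Y together.
    b_future = b_key_for(a).get_async()
    y_future = y_key_for(x).get_async()
    b, y = b_future.get_result(), y_future.get_result()

    # Rinse and repeat for C and Z.
    c_future = c_key_for(b).get_async()
    z_future = z_key_for(y).get_async()
    return c_future.get_result(), z_future.get_result()
```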

 

Provide the Frame of the Website First

A website does not have to have everything the user sees rendered ahead of time by your favorite template library. Instead, consider moving rendering that is currently done by the template system into JavaScript. Once the window has loaded, have your JavaScript code fetch the required data and fill in the rest of the site, ideally with multiple asynchronous calls so the latency overlaps. When the user sees the site filling in, it feels much more responsive than staring at a blank page, and loader icons can signal that work is being done to fill out the rest of the page. Your application also ends up serving smaller, more modular pieces of work. Finally, use the “async” attribute on script tags so the site does not block while loading third-party libraries [2]; a blocking library that fails to load is known as a frontend single point of failure.
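On the server side, this pattern amounts to one lightweight handler that returns the page shell immediately and a JSON endpoint the client-side JavaScript calls after load. The sketch below uses webapp2 (the stock GAE Python framework); the URLs, the shell.html file, and the get_dashboard helper (from the earlier sketch) are assumptions, not a fixed API.

```python
# Sketch: serve the frame first, then let client-side JS fill it in
# by hitting a small JSON endpoint.
import json
import webapp2


class ShellPage(webapp2.RequestHandler):
    def get(self):
        # Return the lightweight frame immediately; no datastore work here.
        self.response.write(open('shell.html').read())


class DashboardData(webapp2.RequestHandler):
    def get(self):
        # Called later by client-side JavaScript once the window has loaded.
        user_id = self.request.get('user_id')
        self.response.headers['Content-Type'] = 'application/json'
        self.response.write(json.dumps(get_dashboard(user_id)))


app = webapp2.WSGIApplication([
    ('/', ShellPage),
    ('/api/dashboard', DashboardData),
])
```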

 

Use Google PageSpeed

It’s a great tool for finding the low-hanging fruit of page speedup [3].


 

No Free Lunch

As with most things, it comes down to tradeoffs. You may find that you add complexity to your code base in order to incorporate some of the suggested methods and techniques. Having data ready ahead of time adds extra datastore writes; since datastore writes cost money in Google App Engine (though not in AppScale, where pricing is per VM), weigh that cost against the higher latency users would otherwise experience. For doing things concurrently, you may find that your code base was written in a very procedural style, and changing it to a pipelined flow would require a lot of refactoring. Before you go optimizing every path in your application, get a good idea of where your users are spending most of their time (appstats [4] is your friend!) and get the most return on your engineering time invested. I like to think that the time I put into making a site more responsive is time and grief I save a customer.
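For reference, on the Python 2.7 runtime appstats can be wired in with a small appengine_config.py like the sketch below; check the appstats docs [4] for the matching configuration for your runtime, as this assumes the standard WSGI middleware hook.

```python
# Sketch: enable appstats by wrapping every request in its recording
# middleware. Place this in appengine_config.py (Python 2.7 runtime).
from google.appengine.ext.appstats import recording


def webapp_add_wsgi_middleware(app):
    # App Engine calls this hook when it builds the WSGI application;
    # the recorder tracks RPC time so you can see where latency goes.
    return recording.appstats_wsgi_middleware(app)
```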

 


 

[1] http://perspectives.mvdirona.com/2009/10/31/TheCostOfLatency.aspx

[2] http://www.w3schools.com/tags/att_script_async.asp

[3] https://developers.google.com/speed/pagespeed/insights/

[4] https://developers.google.com/appengine/docs/python/tools/appstats

 

