With the Web 2.0 era in the rear-view mirror, I wanted to touch on a practice that was heavily played up then and still continues today: integrating 3rd-party data into websites. The popularity and ease of use of many APIs, together with the portability of Web 2.0 applications, have an underlying side effect if the integration is not done properly. I have come across many websites pulling 3rd-party data from places such as Flickr and Twitter without following basic contingency principles, and that should raise the question: what happens if these 3rd-party services fail? Many sites simply have no contingency design plan to keep working through a failure.

Caching data calls to APIs is one example of good contingency design. In fact, many APIs require caching, Amazon's among them, but I suspect that requirement is meant to limit resource use on the API host, not to protect the site using the API. There are two reasons anyone displaying API-fetched data on their website should cache it:
1. To speed up the website's load time
2. To have a backup plan if the API call fails
A simple implementation handles both cases: it caches the result of an API call for a given amount of time, refreshes the stale cached data once that time has passed, and, should the API call fail, triggers an error and falls back to the cached copy.
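To make that concrete, here is a minimal sketch of the pattern in PHP, assuming a plain file-based cache. The cache path, lifetime, and function name are placeholders I made up for illustration, and the remote call is a bare file_get_contents() rather than anything production-grade.

```php
<?php
// Minimal sketch: serve a fresh cached copy when one exists, refresh it from
// the API when it has gone stale, and fall back to the stale copy (plus an
// error) if the API call fails. All names and values here are placeholders.

define('CACHE_FILE', '/tmp/feed.cache'); // hypothetical cache location
define('CACHE_TTL', 300);                // cache lifetime in seconds

function fetch_feed($url)
{
    $has_cache = file_exists(CACHE_FILE);
    $is_fresh  = $has_cache && (time() - filemtime(CACHE_FILE)) < CACHE_TTL;

    // Case 1: a fresh cached copy exists -- serve it and skip the remote call.
    if ($is_fresh) {
        return file_get_contents(CACHE_FILE);
    }

    // Cache is missing or stale: hit the API to refresh it.
    $response = @file_get_contents($url);
    if ($response !== false) {
        file_put_contents(CACHE_FILE, $response);
        return $response;
    }

    // Case 2: the API call failed -- log it and fall back to the stale copy.
    error_log("API call to $url failed; serving stale cache if available");
    return $has_cache ? file_get_contents(CACHE_FILE) : false;
}
```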
This post is a bit late to the party, but it is worth writing because I have recently come across at least three sites where Firebug and similar tools revealed problems retrieving API-fetched data as well as sluggish load times. A decent implementation idea is to roll your own caching wrapper and plug it into a stable caching tool, perhaps something like Cache_Lite for PHP. That way you have a reusable, library-independent piece of code that handles caching, flushing, and refreshing of data, covering the two cases discussed above.
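As a rough illustration of that wrapper idea, here is one way it might look when backed by PEAR's Cache_Lite. The class name, cache directory, and lifetime are my own placeholders, and the Cache_Lite calls (get() and save(), plus get() with the validity check skipped for the stale fallback) follow its documented usage as I understand it, so treat this as an outline rather than drop-in code.

```php
<?php
// Sketch of a reusable wrapper around a caching backend (Cache_Lite here).
// The fetch/fallback logic lives in the wrapper, so the backend could be
// swapped out without touching the calling code.

require_once 'Cache/Lite.php';

class ApiFetcher
{
    private $cache;

    public function __construct($lifeTime = 600)
    {
        $this->cache = new Cache_Lite(array(
            'cacheDir' => '/tmp/api-cache/', // placeholder directory
            'lifeTime' => $lifeTime,         // seconds before an entry goes stale
        ));
    }

    // Return fresh cached data when available, otherwise refresh from the API;
    // if the API call fails, log an error and fall back to the stale copy.
    public function fetch($url)
    {
        $id = md5($url); // cache key derived from the request URL

        if (($data = $this->cache->get($id)) !== false) {
            return $data; // fresh hit, no remote call needed
        }

        $response = @file_get_contents($url);
        if ($response !== false) {
            $this->cache->save($response, $id); // refresh the cache
            return $response;
        }

        error_log("ApiFetcher: call to $url failed; trying stale cache");
        // Passing true as the third argument asks Cache_Lite to skip the
        // lifetime check so a stale entry can still be returned.
        return $this->cache->get($id, 'default', true);
    }
}
```

Usage is then one call per feed, something like `$fetcher = new ApiFetcher(); $tweets = $fetcher->fetch($twitterUrl);`, and swapping Cache_Lite for memcached or plain files would only touch the constructor and the get/save lines.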
For more pointers on this topic, please contact us. We are happy to assist.