Monday, December 07, 2009

Web Application Performance First Aid

Below I have tried to list a few questions and hints to focus on whenever there is a performance issue.

Front end:

  1. Is page too heavy?
  2. Is content poorly cached?
  3. Is content not compressed?
  4. Is javascript & css minified?
  5. Can CSS and JS files be combined to reduce the number of requests for these resources?
  6. Is there any inline javascript or CSS that could be moved to an external file?
  7. Place the javascript and css appropriately
    1. Javascript at the end of the body
    2. CSS at the head.
  8. Can static content like CSS and JS be akamized?
  9. Restructure the folders (css, js, images) and enable caching per folder by configuring cache settings for these folders at the web server
  10. Can resources be pre-fetched? For example, if we expect the user to click a few links, we can preload those images on the home page.
  11. Can content be published to CDN?
  12. Is CDN configured appropriately?
  13. Can images be compressed further, e.g. with Smush.it?
  14. Can we use multiple domains for resources to enable simultaneous downloads?
  15. Can image resources be served from a cookie-less domain?
  16. Can we club images using the CSS-Sprite technique?
  17. Sometimes ETags can cause issues; different servers have their own ways of managing them
  18. Scope of AJAX-fying the pages
  19. Cookie size
  20. Is the number of requests per page back to the server, including all resources, in an acceptable range (around 15-20)? Is it possible to reduce it?
  21. Has the page been tested in low bandwidth scenario?
  22. Is it possible to host the resource server separate from the main web server?
  23. Use an appropriate robots.txt to avoid unnecessary search engine traffic
  24. Use libraries like jQuery to deal with the DOM

Middle tier

  1. What is the technology stack?
    1. Tech specific optimization
  2. Scope of web server caching?
  3. Response buffering
  4. Page and control fragment caching
  5. Façade approach – multiple service calls can be clubbed into one
  6. Are there any web service calls with WSDL caching issues?
  7. Is it possible to segregate anonymous and non-anonymous pages?
  8. Are there any integration services?
  9. Is there any 3rd party system integration?
  10. Is it using stored procedures, or can the application use stored procedures?
  11. Reverse proxy to manage the static content
  12. Enable HTTP Keep-alive
  13. Excessive logging
  14. Distributed Caching in case of load balanced environment
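The web server caching idea in items 2 and 4 boils down to memoizing expensive work with an expiry. Here is a toy sketch (the name `makeCache` and the TTL scheme are mine, not from the post): repeated requests inside the TTL never hit the backend.

```javascript
// Sketch: a per-process cache with a time-to-live. get(key, compute) returns
// the cached value on a hit and invokes compute() only on a miss or expiry.
function makeCache(ttlMs) {
  var store = {};
  return function get(key, compute) {
    var hit = store[key];
    if (hit && Date.now() - hit.at < ttlMs) {
      return hit.value; // cache hit: skip the backend call
    }
    var value = compute(); // cache miss: do the expensive work once
    store[key] = { at: Date.now(), value: value };
    return value;
  };
}
```

In a load-balanced environment (item 14) this per-process map would be replaced by a distributed cache such as memcached, so that all nodes see the same entries.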

Give the backend a shot

  1. Stored procedure usage
  2. Indexing and index column data types
  3. Are database statistics up to date?
  4. Check for defragmentation
  5. Choose an appropriate logging level. In SQL Server, for example, there are the Simple and Full recovery models.
  6. Archiving to reduce the number of records
  7. Choosing database inserts over updates if possible
  8. High I/O queries
  9. High table scan queries
  10. For databases like MySQL – can we use an appropriate storage engine for different kinds of tables?

Tool to analyze

  1. Measure the front end application
    1. YSlow or Page Speed plug-ins for Firefox
  2. Measure the application middle tier
    1. Performance monitor log
      1. Web server specific
        1. Request Per second
        2. Request Queue
        3. Request Processing Time
        4. Request Errors
        5. Cache Hit
      2. Disk I/O
      3. Peak memory usage
      4. Peak CPU usage
  3. Measure the backend layer
    1. SQL Profiling in case of MS SQL
    2. CPU usage
    3. Logical and Physical Reads
    4. Execution Time
    5. Monitor for DB Locks
    6. Analyze slow running queries using execution plan

See if we can do anything without code change:

1. Server changes

Restructure the folders (css, js, images) and enable caching per folder by configuring cache settings for these folders

Gzip the content

Reverse proxy to reduce traffic on primary server

Separate the resource requests and page requests into 2 farms – e.g. a media farm (hosting media content like images, files, videos etc.) and a web farm (hosting the main application)

2. Database

Index fine tuning

Statistic updates

Optimizing top 10 slowest queries :)

3. Infrastructure upgrade

Akamai or Amazon S3

But let me warn you guys… this is just first aid. Performance tuning is far more complex, and this is only a first step to help ourselves.


Tuesday, November 17, 2009

SPDY – Speedy … Google’s next step

Google is in a hurry to make everything faster and has focused a lot on the web arena. It has set a new benchmark for how web applications are designed and made high performing, and has been very innovative when it comes to that. A lot of initiatives have come from this special company to make website performance better, and Google has recently open sourced some tools and tutorials to help developers build fast websites.

They have made a very interesting attempt at replacing the HTTP protocol. But why? Below is the answer given by Google.

  • Single request per connection. Because HTTP can only fetch one resource at a time (HTTP pipelining helps, but still enforces only a FIFO queue), a server delay of 500 ms prevents reuse of the TCP channel for additional requests.  Browsers work around this problem by using multiple connections.  Since 2008, most browsers have finally moved from 2 connections per domain to 6. 
  • Exclusively client-initiated requests. In HTTP, only the client can initiate a request. Even if the server knows the client needs a resource, it has no mechanism to inform the client and must instead wait to receive a request for the resource from the client.
  • Uncompressed request and response headers. Request headers today vary in size from ~200 bytes to over 2KB.  As applications use more cookies and user agents expand features, typical header sizes of 700-800 bytes is common. For modems or ADSL connections, in which the uplink bandwidth is fairly low, this latency can be significant. Reducing the data in headers could directly improve the serialization latency to send requests.  
  • Redundant headers. In addition, several headers are repeatedly sent across requests on the same channel. However, headers such as the User-Agent, Host, and Accept* are generally static and do not need to be resent.
  • Optional data compression. HTTP uses optional compression encodings for data. Content should always be sent in a compressed format.
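The "redundant headers" point is easy to quantify with a back-of-the-envelope calculation. The header values below are illustrative (they are not from the SPDY whitepaper): after the first request on a connection, every repetition of these effectively static headers is pure overhead.

```javascript
// Sketch: estimate the bytes wasted by resending static headers on every
// request of a page load. All numbers here are rough, made-up examples.
var staticHeaders =
  'User-Agent: Mozilla/5.0 (Windows; U; Windows NT 5.1)\r\n' +
  'Host: www.example.com\r\n' +
  'Accept: text/html,application/xhtml+xml\r\n' +
  'Accept-Language: en-us,en;q=0.5\r\n';

var requestsPerPage = 20; // a typical page with its images, CSS and JS
// after the first request, each repeat of these headers is redundant
var redundantBytes = staticHeaders.length * (requestsPerPage - 1);
console.log(redundantBytes + ' redundant header bytes per page view');
```

With cookies pushing real-world headers toward 700-800 bytes, the waste per page view grows accordingly, which is why SPDY compresses headers and sends the static ones only once.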

More details and very interesting article to read,

Think different!!!

Monday, November 02, 2009


It is a fairly well-known fact that PHP or Flash applications making remote RPC calls over SOAP make a couple of round trips. This is because the service call retrieves the WSDL (service definition) for each request. It is therefore advisable to cache the WSDL.

Related links:

Drupal Devel Module

Drupal has many contributed modules that are focused on developers. These modules are meant to ease custom module development – a few of them help keep the code consistent, a few help focus on performance, some on deployment, etc.

Devel is one such module, for developers to debug page performance. It may not be a sophisticated measuring approach or tool, but it can be pretty handy for getting a hint on where to start. This module can log page response time, memory usage, and the number of database calls per page.

Configurations are available with different parameters in mind – a site in a production environment, a small site, a larger site, in-memory, etc. It can also generate content: nodes, comments, users, etc.

Sunday, October 25, 2009

Photoninfotech is now DachisGroup Partner

Photon is now part of the DachisGroup technology alliance – expert groups who are strong in different areas of social media platforms and different areas of technology, putting their heads down to solve the identified 4 archetypes.

Nice stuff to read - “Social Business Design – 4 archetypes”,


Wednesday, October 14, 2009

Speed up your page - Lazy loading - an alternative approach of loading a page

Some time back I had to do a presentation on the topic of building public websites. While preparing, I realized that it would be too difficult to cover even 10% in my allocated time, so I had to break the presentation into a few sessions.

I thought a presentation on building optimized websites probably made more sense than anything else; after the presentation, developers could apply whatever they managed to absorb from it. The presentation went well.

When I gave the presentation I did not want to use the term "performance". I think replacing it with "user experience" gave a lot more context and feel to the presentation; "performance" sounded very technical to me. I understand that user experience is not about performance alone – it includes a lot of other things like content, accessibility, being user-friendly, intuitive, etc. But if users can get content quickly they feel better – and a slow website can kill everything else you have built.

If we set aside performance issues on the server side – let us assume we have done our best there – the biggest factor that affects performance is network bandwidth. The two primary factors behind a quick, responsive website are "the amount of content that has to be piped" and "the number of round trips". Interestingly, we can address these 2 issues quickly, and my presentation targeted these 2 problems. Every web developer must read this.

In my current website, there are semi-dynamic and dynamic portions: a sales portion, where content varies by a few factors but is static for a given factor, and a social network portion, where many parts are dynamic. Think of the page as a bunch of boxes (portlets), each pulling information from various sources, with each box's content having a different cache expiry. Sometimes it takes time to build all the content, and until it is ready the user sees a white screen, wondering what is happening.

We are familiar with the term "WYSIWYG", but I was looking at it the other way around – "WYGWYS". Yes, what you get is what you see. While browsing, we should "get" information to the browser as quickly as possible, because until you bring the content, the user is not able to "see". My theory is that we can always trade round trips against content to get a better user experience.

Lazy loading is in line with this theory. I can mark a portion of my page as "early loading" and a portion as "lazy loading". What do I mean by these? By "early loading" I mean the content is made available while requesting the given page. "Lazy loading" means that only a placeholder is sent to the browser, and the browser then sends an asynchronous request back to load the actual content.

What is the benefit? When you design a page you can mark a few portions as "early loading" – maybe because they carry important sales content or some very basic information – so the user sees something quickly. The rest of the page then loads asynchronously. This lets us design a bunch of service or REST calls to deliver the actual content, and it helps us manage different content expiry policies as well. I am yet to verify how effective this would be.
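The early/lazy split described above can be sketched in a few lines. This is only an illustration of the idea, not code from a real page; all names (`renderPage`, `fetchLater`, the portlet shape) are hypothetical.

```javascript
// Sketch: render "early" portlets inline, and emit placeholders for "lazy"
// portlets whose real content will be fetched asynchronously after load.
function renderPage(portlets, fetchLater) {
  var html = [];
  portlets.forEach(function (p) {
    if (p.loading === 'early') {
      // early content ships with the page itself
      html.push('<div id="' + p.id + '">' + p.content + '</div>');
    } else {
      // lazy content: placeholder only, queue an async fetch for later
      html.push('<div id="' + p.id + '">Loading...</div>');
      fetchLater(p.id);
    }
  });
  return html.join('');
}
```

On the client, each queued id would map to an asynchronous request that replaces the placeholder's content – and that per-portlet request is also a natural place to hang each box's own cache expiry policy.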

Thursday, September 24, 2009

Few useful JQuery links

JQuery Theme

Use jQuery themes so that we can leverage the theme concept to change the look and feel instantly.

This way it becomes easy to build theme-able widgets quickly, and it is easier for others to adopt them if we are building reusable widgets or, say, a Drupal module.



Access Key API:

Form Validation:

Context help:

Tool tip:

Wednesday, May 13, 2009

Building HTML Proto

There are a number of ways you can gather requirements, and prototyping is one convenient form of capturing them. Most of the time clients get more ideas and opinions when they actually get to see the application; while building the proto they get to see how the final application will look. Unlike other approaches, some of the non-functional requirements are also captured – e.g. look and feel, user experience, navigation. As with any other mode of requirement gathering, it has some limitations as well.

It will be useful to finalize a few things before we start building html wireframes. I have tried to list down a few below.

Development Tool

We would use Macromedia Dreamweaver for developing this prototype, using its template feature to build UI templates and base the pages on them. This eases the development effort when changes are required.

Dev toolbars for IE, Firefox (Firebug, YSlow), Chrome and Safari must be installed

  • All pages MUST be XHTML compliant. Use below DOCTYPE,
  • Since the DOCTYPE is XHTML, you are likely to face issues with the javascript body while validating the HTML. To avoid this, mark the javascript body with a CDATA section, as in the sample below.
<script type="text/javascript">
//<![CDATA[
function helloworld() {
    alert("Hello world"); // sample body
}
//]]>
</script>
Directory Structure
(below is my style, you can of course define whichever makes sense to you guys)
-- _styles
-- _scripts
-- _images
-- _templates
-- <modulename>
-- index.html

Source Control

It is important to decide, configure and enable source control before even we start anything.

Compatibility Requirement

It is important to start worrying about browser compatibility from day zero. In my project, the documented decision was as below.

MUST BE COMPATIBLE with major browser list below

  • Safari.
  • IE - 6,7,8

It is important to make this decision early, as it will have a severe impact on testing as well.

Reset CSS
This can be handy when we are looking for cross browser consistency.
Javascript Library
Make 3rd party library decisions early and start using the library in the proto itself. Probably we can reuse some of the code blocks.

  • We would use JQUERY


Thursday, February 26, 2009

Smush It!!!

An easy way to quickly optimize images on your website. There is a nice Firefox plug-in which makes the website very easy to use as well. They adopt some advanced techniques like cleaning up metadata, among other optimizations, and give a nice summary of the results.

Friday, February 20, 2009

VB.Net vs C#

I have worked with both C# and VB.Net. When I switch between them, as for anyone, for the first few days I curse the syntax differences and soon get used to them. Initially I used to say that it is just a syntax difference, but recently I read a book, ".Net Gotchas". I found some interesting ways in which C# varies from VB.Net – things related to constructors, character casing, and more.

I have read many blogs talking, rather fighting, over the topic of which of these languages is superior.

But I found something useful,
What C# Devs Should Know About VB
What VB Devs Should Know About C#


Monday, January 12, 2009

Modal Dialogue - Based on Mootools and Clientcide script

The modal dialogue is one of the most commonly used dialogues on the web these days. It is a special dialogue box that sits above all HTML elements on the screen. Nowadays the regular DHTML dialogue is combined with a modal overlay – an overlay that fills the screen with a very high z-index and does not allow access to the underlying UI elements. It is very cool to see the window turn light gray and a dialogue appear at the center of the screen above this overlay. It is the current trend.

There are numerous scripts available to accomplish this. Clientcide is one of them.

Many times I have wanted to center an HTML fragment – usually a div – as an overlay centered in the browser window, staying centered when the user scrolls or resizes the window, and behaving as a modal window...

I wanted to club both of the above features together... basically a centered modal dialogue box above a grayed-out background.

So, let us say we have a div that holds some UI elements – maybe just an image, or a bunch of input fields. You want to make that div appear in the middle of the screen as a modal dialogue combined with a grayed-out overlay. Here is a simple script that will help you accomplish that:


I wrote wrapper functions to achieve the above requirement.

Below is my source code:

//Helper class to center an element in the current window, show/hide elements etc.
var WindowHelper = {
    defaultOptions: {
        onError: $empty,
        animate: true
    },
    defaultStyle: {
        position: 'fixed',
        'z-index': 5100
    },
    // Utility function to find the min of 2 numbers
    _minOf: function(x, y) {
        if (($type(x) != 'number') || ($type(y) != 'number')) {
            return -1;
        }
        return (x > y) ? y : x;
    },
    // Utility function to find the max of 2 numbers
    _maxOf: function(x, y) {
        if (($type(x) != 'number') || ($type(y) != 'number')) {
            return -1;
        }
        return (x < y) ? y : x;
    },
    centerWindow: function(element, options) {
        if (!element) {
            return false;
        }
        var top = window.getSize().y / 2;
        var left = window.getSize().x / 2;
        var opts = $merge(this.defaultOptions, options);

        element.setStyle("display", "block"); // Without this, the getSize() calls below do not return correct values.
        top = this._maxOf(top - (element.getSize().y / 2), 0);
        left = this._maxOf(left - (element.getSize().x / 2), 0);

        element.setStyle("top", top);
        element.setStyle("left", left);

        // IE6 does not support position: fixed
        if (Browser.Engine.trident4) element.setStyle('position', 'absolute');

        return true;
    },
    showElement: function(element, options) {
        element.setStyle("visibility", "hidden");
        element.setStyle("opacity", 0);
        element.setStyle("display", "block");

        if (options && options.animate) {
            var myTween = new Fx.Morph(element, {
                duration: 1000,
                onComplete: function() { this._showComplete(element); }.bind(this)
            });
            myTween.start({ 'opacity': [0, 1] });
        } else {
            element.setStyle("opacity", 1);
            this._showComplete(element);
        }
    },
    _showComplete: function(element) {
        element.setStyle('display', 'block');
        element.setStyle('visibility', 'visible');
    },
    _closeComplete: function(element, options) {
        element.setStyle('display', 'none');
        if (options && $type(options.onWindowClose) == "function") {
            options.onWindowClose();
        }
    },
    close: function(element, options) {
        if (!element) {
            return;
        }
        if (options && options.animate) {
            var myTween = new Fx.Morph(element, {
                duration: 300,
                onComplete: function() { this._closeComplete(element, options); }.bind(this)
            });
            myTween.start({ 'opacity': [.5, 0] });
        } else {
            this._closeComplete(element, options);
        }
    }
};

var CTModalizer = {
    modalInstance: this.modalInstance || new Modalizer(),
    defaultOpts: {
        opacity: '0.5',
        hideOnClick: false,
        'z-index': 5000,
        onPreGrab: $empty,
        animate: true,
        onWindowClose: $empty,
        updateOnResize: true
    },
    init: function(elementIdToGrab, options) {
        var opts = $merge(this.defaultOpts, options);

        // class="close" marks the elements that should close the dialogue
        $$('.close').each(function(el) {
            el.addEvent("click", function() {
                this.closeGrabbedWindow(elementIdToGrab);
            }.bind(this));
        }, this);

        var grabEl = $(elementIdToGrab);
        if (opts.updateOnResize) {
            window.addEvent("resize", function() {
                this._resize(grabEl, opts);
            }.bind(this));
        }
    },
    _resize: function(grabEl, opts) {
        WindowHelper.centerWindow(grabEl, opts);
    },
    grab: function(elementIdToGrab, options) {
        var grabEl = $(elementIdToGrab);
        this.init(elementIdToGrab, options);
        if (WindowHelper.centerWindow(grabEl, options)) {
            var opts = $merge(this.defaultOpts, options);
            WindowHelper.showElement(grabEl, opts);
            if ($type(opts.onPreGrab) == "function") {
                opts.onPreGrab();
            }
        }
    },
    closeGrabbedWindow: function(grabbedElementId) {
        var el = $(grabbedElementId);
        el.setStyle("z-index", "4100");
        WindowHelper.close(el, this.modalInstance.modalOptions);
    }
};


<script type="text/javascript" language="javascript">
var options = {
    modalStyle: { opacity: 0.2 },
    animate: true,
    onWindowClose: this.CloseRemovePopup
};

function showPopUp(grabElementId) {
    CTModalizer.grab(grabElementId, options);
}

function CloseRemovePopup() {
    alert('Window closed');
}
</script>

Sample HTML:

<div id="ModalPopUp1" class="PopupContainer" style="width: 560px; height: 150px;">
Sample text <div class="close"><u>Close</u></div>
</div>

<input type="button" value="Show pop up" onclick="javascript:{ showPopUp('ModalPopUp1'); }" />

Note the class="close" – I use this class to identify the element to which I associate the close event.

Feel free to modify the scripts in case you need to .. :)