Intro
We were using GWT (Google Web Toolkit) at Rento for a hot new project when, at some point, we came across a performance bottleneck in a specific function. This function had to be called anywhere from every 200ms to every 1s, and it was taking from 20ms in Chrome and Safari (totally acceptable) to 1200ms in IE7 & IE8 (orders of magnitude beyond unacceptable!).
At first, when we saw these numbers, we started pulling our hair out (once again) over IE's amazing performance. But then, what the heck, this graph says it all, and we were ready to pay the price of supporting IE once again. So we just had to optimize this code somehow. At first glance, the bottleneck was a for loop with about 1000 iterations. Makes sense...
CS basics
Computation time is one of the two main criteria when measuring algorithm complexity (the other being space, aka memory). Definition:
It represents the amount of time (or number of computation steps) that a "normal" physical computer would take to solve a certain computational problem using a certain algorithm.
When you want to reduce the computation time of an algorithm (one that contains iterations, as is usually the case), you have the following options:
- Reduce the number of iterations
- Reduce the computation time inside every iteration
Step 1: Reducing the number of iterations
For the given problem, the number of iterations could not be easily* reduced (where "easily" means implementing an optimal algorithm in a small amount of time and without bloating the code base). We did some best-case-scenario optimizations, but most of the time we didn't actually hit that case, and the worst case was still unacceptable. There were one or two possible solutions, though, in case step two (see below) didn't work.
Step 2: Reducing the computation time inside the loop
Inside each iteration (among other things), a normalization of a string was taking place. This computation looked pretty innocent, since each call usually took less than a millisecond (too small for us to measure exactly). The problem with repetition, though, is that this phenomenally small amount of time adds up over many iterations. Simply caching the result of the normalization and re-using it on subsequent calls saved us about 500ms on IE (Chrome, Safari and Firefox didn't have a problem anyway). Not bad at all! But we still had about 700ms to cut!
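As an illustration of the caching idea, here is a minimal sketch; `normalize` and the cache structure are assumptions for the example, not the actual project code:

```java
import java.util.HashMap;
import java.util.Map;

public class NormalizationCache {
    // Illustrative stand-in for the real (unknown) normalization step.
    static String normalize(String s) {
        return s.trim().toLowerCase();
    }

    private final Map<String, String> cache = new HashMap<String, String>();

    // Computes the normalization at most once per distinct input, so
    // repeated calls inside a 1000-iteration loop become cheap lookups.
    public String normalized(String s) {
        String result = cache.get(s);
        if (result == null) {
            result = normalize(s);
            cache.put(s, result);
        }
        return result;
    }
}
```

The trade-off is the classic time/space one: a little extra memory for the map in exchange for skipping the repeated work.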
Not enough! Digging deeper
At this point we started scratching our heads, thinking that maybe we should go back to step 1 and actually implement the aforementioned complex algorithms, but the gut feeling was "no, something else is wrong here". After all, what happened inside each iteration was the normalization (now out of the game) and 2-3 checks (conditions, regular expressions and so forth) that shouldn't be that complex anyway.
So, back to the profiler. Now that the normalization was out of the way, we could see more clearly that something was probably wrong with the iterations themselves. A "getIterator" (or something like that) call was taking about 90% of the time. Could it be the Set itself?
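For context on why an iterator-related method shows up in the profile at all: an enhanced for loop over a Set is just sugar for explicit iterator calls. A sketch (the sum code is illustrative, not the project's loop):

```java
import java.util.Iterator;
import java.util.Set;

public class IteratorDesugaring {
    // The enhanced for loop...
    static int sumEnhanced(Set<Integer> values) {
        int sum = 0;
        for (int v : values) {
            sum += v;
        }
        return sum;
    }

    // ...is equivalent to this: one iterator() call per loop, plus a
    // hasNext()/next() pair per element -- exactly the kind of call
    // the profiler was flagging.
    static int sumExplicit(Set<Integer> values) {
        int sum = 0;
        Iterator<Integer> it = values.iterator();
        while (it.hasNext()) {
            sum += it.next();
        }
        return sum;
    }
}
```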
Benchmarking GWT Collections emulation
After spending some time setting up a dummy, stupid-simple GWT project for benchmarking the emulation library for the Java Collections API, we had our results:
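A stripped-down version of such a benchmark might look like the following. This is only a sketch in plain Java (the real one ran through GWT's emulation layer in the browser); the collection choices and timing harness are assumptions for the example:

```java
import java.util.ArrayList;
import java.util.Collection;
import java.util.HashSet;
import java.util.LinkedList;

public class CollectionInitBenchmark {
    static final int N = 1000;

    // Fills a plain array -- under GWT this compiles to a native JS array.
    static long timeArray() {
        long start = System.nanoTime();
        int[] arr = new int[N];
        for (int i = 0; i < N; i++) {
            arr[i] = i;
        }
        return System.nanoTime() - start;
    }

    // Fills an emulated collection; called once per collection type.
    static long timeCollection(Collection<Integer> c) {
        long start = System.nanoTime();
        for (int i = 0; i < N; i++) {
            c.add(i);
        }
        return System.nanoTime() - start;
    }

    public static void main(String[] args) {
        System.out.println("array:      " + timeArray() + " ns");
        System.out.println("ArrayList:  " + timeCollection(new ArrayList<Integer>()) + " ns");
        System.out.println("LinkedList: " + timeCollection(new LinkedList<Integer>()) + " ns");
        System.out.println("HashSet:    " + timeCollection(new HashSet<Integer>()) + " ns");
    }
}
```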
Results for initializing various collections with 1000 elements:
Bullseye! Using anything other than native arrays (which compile down to native JS arrays in the generated source) incurs a tremendous overhead on IE!
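The resulting fix can be sketched as replacing the Set-plus-iterator hot loop with a plain array and an indexed loop. Again, an illustrative sketch under assumed names, not the project's actual code:

```java
import java.util.Set;

public class HotLoop {
    // Before: every pass allocates an iterator through the emulated Set.
    static int countMatches(Set<String> items, String needle) {
        int count = 0;
        for (String item : items) {
            if (item.equals(needle)) count++;
        }
        return count;
    }

    // After: a plain array compiles to a native JS array, and the
    // indexed loop involves no iterator machinery at all.
    static int countMatches(String[] items, String needle) {
        int count = 0;
        for (int i = 0; i < items.length; i++) {
            if (items[i].equals(needle)) count++;
        }
        return count;
    }
}
```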