It’s only slow the first time!
As developers who work with hosted solutions, this is a phrase we hear from users all the time. I’ve seen enough DevCon Under the Hood sessions to know the cause: the data has to be downloaded to the client’s temporary files before a process can proceed. Once the data is cached, subsequent runs are as fast as a local solution. Even on a good LAN the contrast can be quite stark. A common example is ExecuteSQL: because it’s a client-side operation, every referenced table must be downloaded to the client before the query can be processed.
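For instance, a calculation like the sketch below (the table and field names are invented for illustration) pulls down the entire Invoices table the first time it evaluates, even though the result is a single number:

```
// Hypothetical example — "Invoices", "Customers", and their fields are made-up names.
// Before this can evaluate, the client must download every table the query references.
ExecuteSQL (
    "SELECT SUM ( TotalAmount ) FROM Invoices WHERE CustomerID = ?" ;
    "" ; "" ;          // default field and row separators
    Customers::ID      // argument substituted for the ? placeholder
)
```

The second run is fast because the Invoices data is already sitting in the client’s temp files.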
PSOS to the rescue… well, maybe
The obvious answer would seem to be switching the operation to PSOS (Perform Script on Server) so no downloading is required. The problem is that although PSOS will be faster than the first time the query is run from the client, it will never be as fast as those subsequent client-side runs. In other words, that downloaded data is the client’s friend, and giving it up for the sake of the first query may not give the user the best overall experience. Server-side scripting is also not an unlimited resource: shift too many operations off the client and performance issues may crop up elsewhere. PSOS may be the answer, but in general I take it on a case-by-case basis.
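For reference, shifting a query to the server is a small change on the client side — something like this sketch, where the script name is invented:

```
# Client script — sketch only; "Run Customer Totals" is a hypothetical script name
Perform Script on Server [ "Run Customer Totals" ;
    Parameter: Customers::ID ; Wait for completion: On ]
Set Variable [ $total ; Value: Get ( ScriptResult ) ]
```

This avoids the download entirely, but the server pays the evaluation cost on every call — the trade-off described above.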
Behavior can be unintuitive
I recently ran into one of these situations that made me scratch my head a little and reminded me that this behavior can show up in unexpected places. The client reported a first-time slowdown when editing a field. I assumed I had a script trigger attached that was running some kind of SQL, but when I looked, there were no triggers. I knew this field sat at the bottom of a lot of dependent calculations, but why was it slow only the first time? I associate this behavior with downloading data to the client, but where was that happening here? We’re just editing a field.
After some investigation and experimentation, I believe the issue lies with the calculations this edit affects — specifically, with the other values those calculations depend on. In this case, many of the dependent calculations require an average taken over a table of a few hundred thousand records. For the edit to complete, that entire table must be downloaded so the dependent calculations can be evaluated on the client before they are updated on the server. In my experiments I watched this in action: the dependent calculations were updated before the record was committed, and for that to happen the client needs that average value — and therefore the whole table behind it.
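As a sketch of the kind of dependency involved (all names here are invented), a calculation along these lines forces the client to fetch every related record before it can evaluate:

```
// Hypothetical dependent calculation — "Readings" stands in for a related
// table with a few hundred thousand records, every one of which must reach
// the client before Average() can evaluate.
Orders::Score / Average ( Readings::Value )
```

Editing the field at the bottom of the dependency chain triggers this evaluation, which is where the one-time download cost shows up.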
These edits were not done in batches, so making subsequent ones faster didn’t add much value. Instead, I added a simple routine to set the value on the server: the user enters the value into a global field, and an OnSave trigger calls a script on the server that sets the target field to the global’s value (passed as a parameter). This solved the issue and seems like an appropriate use of PSOS.
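A sketch of that routine, with invented names for the scripts, fields, and layout:

```
# Client: attached to the global entry field via an OnSave-style trigger
Perform Script on Server [ "Set Value on Server" ;
    Parameter: Globals::gNewValue & ¶ & Orders::PrimaryKey ;
    Wait for completion: Off ]

# Server: "Set Value on Server"
Go to Layout [ "Orders" ]                 # a layout based on the target table
# ...find the record using the key from the parameter...
Set Field [ Orders::TargetField ; GetValue ( Get ( ScriptParameter ) ; 1 ) ]
Commit Records/Requests [ With dialog: Off ]
```

Because the dependent calculations now evaluate on the server, the client never has to download the large table at all.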
Interesting behavior for sure, but does this sound right? I’m curious whether someone has an alternate theory for this behavior, or whether I’m missing another explanation.