August 23, 2009 at 6:28 am
Comments posted to this topic are about the item Theoretical Performance Issue Handling
August 23, 2009 at 11:47 pm
The amount of data shown to a user needs to be restricted not only from a performance point of view, but also from a user-productivity viewpoint. Many users ask for wildcard searches that return large amounts of data when what they are actually looking for is a single entity on which to operate. Manually sifting through that data by paging to the next screen is a painful experience; it's like looking for a needle in a haystack.
The other issue is that we give users a large number of fields on which they can search. To achieve this we resort to writing a challenging dynamic SQL query. In real life, 99.99% of searches are carried out on two or three fields, so does it make sense to sacrifice so much performance to achieve that flexibility?
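As an illustration only (the table, column and parameter names here are hypothetical), the usual way to tame that kind of catch-all search is to make every field optional, cap the rows returned, and ask for a plan built from the parameters actually supplied:

-- Hypothetical catch-all search proc: every parameter is optional, which is
-- convenient for the user but hard on the optimizer.
CREATE PROCEDURE dbo.SearchCustomers
    @LastName nvarchar(50)  = NULL,
    @City     nvarchar(50)  = NULL,
    @Email    nvarchar(100) = NULL
AS
BEGIN
    SET NOCOUNT ON;

    SELECT TOP (100)          -- cap what a user can realistically sift through
           CustomerID, LastName, FirstName, City, Email
    FROM dbo.Customers
    WHERE (@LastName IS NULL OR LastName = @LastName)
      AND (@City     IS NULL OR City     = @City)
      AND (@Email    IS NULL OR Email    = @Email)
    ORDER BY LastName, FirstName
    OPTION (RECOMPILE);       -- build the plan for the parameters actually passed
END;

The TOP also speaks to the first point: nobody is going to page through thousands of rows looking for the one entity they actually want.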
August 24, 2009 at 12:07 am
And never, ever forget to have the changes to the specification signed off when you remove or alter some of the users' 'absolutely critical' (but low-value) functionality. This is even more important if they have changed their mind about its importance partway through the process.
Or am I just overly skeptical????
August 24, 2009 at 1:21 am
The approach that the customer is always right is totally wrong. Making them think they are right whilst feeding them your own ideas is the way to go. Programmers (together with consultants) know best. Make them think your ideas are their own and, yes, make them sign off on them!
August 24, 2009 at 1:28 am
R. C. van Dijk (8/24/2009)
The approach that the customer is always right is totally wrong. Making them think they are right whilst feeding them your own ideas is the way to go. Programmers (together with consultants) know best. Make them think your ideas are their own and, yes, make them sign off on them!
"But that's what you suggested to me I needed, and it sounded good! That does it, your product is crap and I'm ripping it out and replacing it with this other product."
Doesn't always work.
August 24, 2009 at 1:59 am
Not many companies have the luxury of dogfooding the software they have developed to understand the users' perspective, especially on usability. We provide whole sets of features which are a pain to use. For example, many applications do not allow a date to be entered directly; the user is forced to use a painful calendar control. The user interface should be designed with the user's familiarity with software in mind. It may well be worthwhile to provide training rather than give the user fancy mechanisms for inputting data.
August 24, 2009 at 6:19 am
Maybe "The Art of Theoretical Performance Issue Handling" would be a good title. 😉
Data volume is not the only thing that can change over time (or from the DEV environment to PROD). You may have the best query you can create, "in theory", and it can still time out in production where there are concurrent users, lock escalations, I/O bottlenecks, and wrong-color socks (my theory as to why something works one day and not the next).
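One hedged way to check whether those other factors are in play at the moment of the timeout (a sketch assuming SQL Server 2005 or later, where the DMVs exist) is to look at what the running requests are actually waiting on:

-- Sketch: what is each active request waiting on, and is it blocked?
SELECT r.session_id,
       r.status,
       r.command,
       r.wait_type,             -- e.g. LCK_M_X (lock) or PAGEIOLATCH_SH (I/O)
       r.wait_time,             -- ms spent in the current wait
       r.blocking_session_id,   -- non-zero: another session holds the resource
       r.cpu_time,
       r.total_elapsed_time
FROM sys.dm_exec_requests AS r
WHERE r.session_id > 50;        -- skip system sessions

A non-zero blocking_session_id or a wall of PAGEIOLATCH waits points at the environment rather than the query text (socks excluded).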
I like your list of actions taken. Swearing does seem to help and in theoretical SQL it has been proven not to hurt.
I would add one thing: the click-to-eyeball test. The developer runs the query in Query Analyzer and it takes 1 second. A "really good" developer will run it a few times, clearing the cache each time to get the average of 1 second. At this point one must pronounce it "working good in theory".
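For what it's worth, the ritual behind that test usually looks something like this, as a sketch only: the DBCC commands belong on a dev box, never on production, and the procedure name is hypothetical.

CHECKPOINT;                -- flush dirty pages so DROPCLEANBUFFERS empties everything
DBCC DROPCLEANBUFFERS;     -- clear the data cache: the next run reads from disk
DBCC FREEPROCCACHE;        -- clear cached plans: the next run recompiles

SET STATISTICS TIME ON;    -- report parse/compile and execution times
SET STATISTICS IO ON;      -- report logical and physical reads per table

EXEC dbo.SearchCustomers @LastName = N'Smith';   -- the query under test

SET STATISTICS TIME OFF;
SET STATISTICS IO OFF;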
August 24, 2009 at 6:39 am
Quite correct; we tend to “pluck the pond from under the duck” :-P or, better, “go on a wild goose chase” when confronted with DB performance problems, instead of pondering the root cause. It is well worth considering the “other” factors, e.g. changes over time as well as environmental adjustments, as potential causes before we fiddle with the code. Uhm, theoretical prognosis, the prognosis theoretically :hehe:.
August 24, 2009 at 8:14 am
Actually, the first thing I would do to debug it would be to run the stored proc in the actual environment in a SQL query editor (assuming it's not a data-modification proc) and have the actual execution plan included in the query results. If it is data modification, you need to run it against a restored backup of the prod database, so you have similar table sizes and statistics. Eight times out of ten, one hugely expensive part of the query will jump out at you. This is the quickest way to find out where the bottleneck is, because until you identify that, any 'optimizations' you do will be wasted.
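A minimal sketch of that first step, assuming SQL Server 2005 or later and a hypothetical read-only procedure; in SSMS you can simply toggle "Include Actual Execution Plan" (Ctrl+M) before running it instead.

SET STATISTICS XML ON;     -- returns the actual execution plan as an extra result set
SET STATISTICS IO ON;      -- logical/physical reads per table

EXEC dbo.SearchCustomers @LastName = N'Smith';   -- hypothetical read-only proc

SET STATISTICS XML OFF;
SET STATISTICS IO OFF;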
August 24, 2009 at 8:23 am
I totally agree that this is what should be done first! That is even one of the biggest reasons why we always want access to the production systems of our clients!
However, taking (only) this approach causes you to miss out on any possible functional issues there might be (e.g. too much data being shown or processed by the client(s)).
Taking a step back and looking at the big picture forces you to look at all possible sides of the problem. Taking a techie approach is just not always the way to go.
August 24, 2009 at 9:41 am
I appreciate the overall message of "take a step back". This can be applied to a lot of situations when any problem arises. Gather information and properly aim to identify the issue before fixing it. I guess that's where the "theoretical" part comes in. Are you fixing the real problem, or are you just chasing your tail?
That's what I find frustrating. Developers seem to engage in a lot of over-the-top activities in the name of "better performance" or "best practices". Rerunning queries multiple times and clearing the cache? Is that REALLY necessary every time? That just leads me to believe that the developer really doesn't know what they are doing, and they are merely hunting and pecking towards a faster result with little regard for data accuracy.
Finally, there were three letters conspicuously missing from the support call example: D-B-A. Where was she in all this? And no, a developer with T-SQL skills does not a DBA make.
August 24, 2009 at 1:49 pm
R. C. van Dijk (8/24/2009)
I totally agree that this is what should be done first! That is even one of the biggest reasons why we always want access to the production systems of our clients!
However, taking (only) this approach causes you to miss out on any possible functional issues there might be (e.g. too much data being shown or processed by the client(s)).
Taking a step back and looking at the big picture forces you to look at all possible sides of the problem. Taking a techie approach is just not always the way to go.
How are you going to decide if 'too much' data is being shown to the client? And what good is it going to do to say "no longer return these 10 columns" if that doesn't make the query run perceptibly faster? Unless ntext or nvarchar(max) columns are being returned and unused due to a sloppy "select *" (which you should almost never have in production code) trimming columns isn't likely to solve the reported business problem of the query being slow. "Runs too slow" seems an entirely techie problem to me.
August 24, 2009 at 1:54 pm
Michael Ross (8/24/2009)
Rerunning queries multiple times and clearing the cache? Is that REALLY necessary every time? That just leads me to believe that the developer really doesn't know what they are doing, and they are merely hunting and pecking towards a faster result with little regard for data accuracy.
After you run a query once, all the relevant indexes are likely to be in memory in SQL Server's cache, and every subsequent run may come back in 0.1 seconds, which will totally mask the fact that the first run may take 5 seconds. It will also completely mask any significant change in performance you get as a result of changing the query. So yes, it really is necessary to do that every time.
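You can see that effect directly (a sketch only, assuming SQL Server 2005 or later): after the first run, count which objects already have pages sitting in the buffer pool for the current database.

-- Which tables/indexes in this database are already cached in memory?
SELECT OBJECT_NAME(p.object_id) AS object_name,
       p.index_id,
       COUNT(*)            AS cached_pages,
       COUNT(*) * 8 / 1024 AS cached_mb
FROM sys.dm_os_buffer_descriptors AS bd
JOIN sys.allocation_units AS au
  ON au.allocation_unit_id = bd.allocation_unit_id
JOIN sys.partitions AS p
  ON p.hobt_id = au.container_id   -- in-row and row-overflow data
WHERE bd.database_id = DB_ID()
  AND au.type IN (1, 3)            -- 1 = IN_ROW_DATA, 3 = ROW_OVERFLOW_DATA
GROUP BY p.object_id, p.index_id
ORDER BY cached_pages DESC;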
August 24, 2009 at 2:28 pm
Thanks Michael Ross, I forgot about the DBA. I didn't think about that one, since we don't have one :-D
But the reactions draw out exactly what I mean. Too technical! In about 80% (or more) of cases, 100 rows of data is too much information. Users are interested in data they can use. Overviews belong in reports and charts, not in a 'normal' screen. Most of the time, the user wants quick, accurate and easy-to-read data. As one of the reactions mentions, take the search. Look into these kinds of functionality. Is it useful? Does it benefit the user?
A programmer's (designer's) task is not to build what the user wants, but what the user needs. Most of the time the users' perception of their own needs is wrong and based on worst-case scenarios. Never ever ever ever cater for worst-case scenarios; this will only lead to an unworkable system which is great in 0.5% of cases.
Take a step back, look at what the system is doing, what is needed for the user to perform his/her task.
Take this approach and look at both the functional and technical sides of the story. That is the task of a developer. If not, you are a typist and no more.
August 24, 2009 at 2:37 pm
wbrianwhite (8/24/2009)
Michael Ross (8/24/2009)
Rerunning queries multiple times and clearing the cache? Is that REALLY necessary every time? That just leads me to believe that the developer really doesn't know what they are doing, and they are merely hunting and pecking towards a faster result with little regard for data accuracy.
After you run a query once, all the relevant indexes are likely to be in memory in SQL Server's cache, and every subsequent run may come back in 0.1 seconds, which will totally mask the fact that the first run may take 5 seconds. It will also completely mask any significant change in performance you get as a result of changing the query. So yes, it really is necessary to do that every time.
Look at the execution plan; look at, and think about, what you are typing. Complex queries benefit from breaking the stored procedure down into separate parts and reviewing each part's individual impact on the complete picture, because keeping an overview of one huge query in your head is best described as impossible.
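A minimal sketch of that decomposition, with hypothetical tables: stage the expensive part into a temp table so each step can be timed and its plan inspected on its own.

-- Stage 1: aggregate recent orders into a small intermediate set
SELECT o.CustomerID,
       SUM(o.TotalDue) AS TotalSpend
INTO #RecentSpend
FROM dbo.Orders AS o
WHERE o.OrderDate >= DATEADD(MONTH, -3, GETDATE())
GROUP BY o.CustomerID;

-- Stage 2: join the small intermediate set back to the customers
SELECT c.CustomerID, c.LastName, rs.TotalSpend
FROM dbo.Customers AS c
JOIN #RecentSpend AS rs
  ON rs.CustomerID = c.CustomerID
ORDER BY rs.TotalSpend DESC;

DROP TABLE #RecentSpend;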
But again, I have to agree with Michael: a lot of the time developers don't have a clue what they are doing. 'I need to get such and such data' is the mainstream thought, not the how, why and when.
As long as the how, why and when go unanswered, most of the time the solution is crap and a disaster waiting to happen.