January 27, 2016 at 1:56 am
Hi
In an ETL world, I would consider 1 MB/sec of processing a decently performant program. Is there a rough guideline for T-SQL query performance? Say, for instance, a query joins N tables whose combined volume is 10 million records: what would be an acceptable retrieval time? What factors would you watch out for in the cost-based optimizer (CBO)?
Thank you.
January 27, 2016 at 6:46 am
I don't think this can be captured in a simple metric. How many rows are returned from those combined tables? How much logic has to be done in order to retrieve them? How much logic needs to be done on the result set after filtering to get to the final result? And so on.
To me, the only really relevant metric is customer satisfaction. If the customers/users are happy with query performance, then the system is running fine. If they are not, action is needed.
January 27, 2016 at 8:31 am
The answer to the question you're asking is also largely going to be determined by things other than SQL Server. What kind of disks and controllers are we talking about? How much memory does the system have? Then we need to focus on the data. 10 million rows, fine; but how many columns, of what data types? For the joined tables, what indexes do we have in place? Without all that information, saying "this performance is good" is really hard.
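One way to start gathering those numbers is with SQL Server's SET STATISTICS options, which report per-query I/O (logical/physical reads per table) and CPU/elapsed time. A minimal sketch, with hypothetical table and column names standing in for your own schema:

```sql
-- Emit I/O and timing details to the Messages tab for each statement.
SET STATISTICS IO ON;
SET STATISTICS TIME ON;

-- Hypothetical join across large tables; substitute your own schema.
SELECT o.OrderID, c.CustomerName, o.OrderDate
FROM dbo.Orders AS o
INNER JOIN dbo.Customers AS c
    ON c.CustomerID = o.CustomerID
WHERE o.OrderDate >= '20160101';

SET STATISTICS IO OFF;
SET STATISTICS TIME OFF;
```

Combined with the actual execution plan (Ctrl+M in Management Studio before running the query), the reads and timings give you concrete evidence about the index and memory questions above, rather than a bare "is 10 million rows fast" guess.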
"The credit belongs to the man who is actually in the arena, whose face is marred by dust and sweat and blood"
- Theodore Roosevelt
Author of:
SQL Server Execution Plans
SQL Server Query Performance Tuning