November 18, 2008 at 3:24 am
Not an interesting article at all. It tells nothing... I'm so sad
November 18, 2008 at 4:43 am
This was an excellent beginner’s article. Performance tuning is an art in itself.
It’s particularly confusing when you're first starting out, as there is such a wide variety of issues to consider: not only identifying which measurements are important, but also choosing the correct steps to take to improve performance.
The article has also encouraged other users to post their experiences and knowledge of performance tuning, which will also be useful for the beginner.
I look forward to reading the next article.
November 18, 2008 at 3:14 pm
Interesting thing you’re proposing: not to use the database at all, just a cache. But how? I encourage you to write an article about that.
Alberto De Rossi
Microsoft Certified Solutions Associate - SQL Server
Microsoft MVP - Data Platform
Power BI User Group Lima - Community Leader
November 18, 2008 at 3:17 pm
crainlee2 (11/17/2008)
This seems so fundamental that I almost think I shouldn't mention it, but I will anyway. When designing a database, its indexes, and the queries that access it, do everything you can to:
1. not use the database at all (I've known programmers who used it for logging when a flat file would have been much more efficient.),
2. avoid physical disk writes (they are significantly slower than disk reads),
3. avoid physical disk reads.
Your hard drives are the slowest components on your system by several orders of magnitude.
Do everything you can to design for the use of the system and database caches. This will speed up everything. This means lots of memory dedicated to SQL Server and correct indexing of all search fields.
And no correlated sub-queries - period.
LC
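The "no correlated sub-queries" point above can be sketched like this (table and column names are invented for illustration):

```sql
-- Hypothetical example: a correlated sub-query is re-evaluated
-- for every row of the outer query...
SELECT c.CustomerID,
       (SELECT MAX(o.OrderDate)
        FROM dbo.Orders AS o
        WHERE o.CustomerID = c.CustomerID) AS LastOrder
FROM dbo.Customers AS c;

-- ...whereas a join to a grouped derived table lets the engine
-- aggregate once and match rows in a single pass:
SELECT c.CustomerID, o.LastOrder
FROM dbo.Customers AS c
LEFT JOIN (SELECT CustomerID, MAX(OrderDate) AS LastOrder
           FROM dbo.Orders
           GROUP BY CustomerID) AS o
    ON o.CustomerID = c.CustomerID;
```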
Interesting thing you’re proposing: not to use the database at all, just a cache. But how? And what if you don’t have enough memory? I encourage you to write an article about that.
Alberto De Rossi
Microsoft Certified Solutions Associate - SQL Server
Microsoft MVP - Data Platform
Power BI User Group Lima - Community Leader
November 19, 2008 at 4:29 am
I think he is talking about a case of saving, say, the application user's parameters or similar, where use of the db is not strictly required. This is more a matter of correct analysis and planning. If the app needs the db, then use it; otherwise it might be worth using the user's local machine to save some workload on the server.
December 10, 2008 at 7:41 pm
Good introductory article. I have to somewhat disagree with you on the temp tables, though. Breaking a huge, complex query into more manageable smaller queries that use temp tables can sometimes boost performance. I have seen it change things dramatically. I don't think there is any universal rule on this. You just have to test both approaches to see which one performs better.
Thank you.
July 3, 2010 at 9:25 pm
Great article, dude. Very simple and helpful information.
Thanks for all of your efforts.
September 10, 2010 at 12:58 am
Good Introductory article.
Performance tuning varies on a case-by-case basis, but you first need to identify where the problem or bottleneck is.
Sometimes the query is perfect and follows all best practices (correct indexes and joins), but the hardware is not capable enough to support it.
Sometimes just adding a CTE to the SELECT query and joining to that CTE solves the problem.
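A minimal sketch of that CTE-plus-join pattern (table and column names are invented for illustration):

```sql
-- Hypothetical example: pre-aggregate in a CTE, then join to it,
-- instead of joining to the detail table and aggregating afterwards.
WITH RecentOrders AS (
    SELECT CustomerID, COUNT(*) AS OrderCount
    FROM dbo.Orders
    WHERE OrderDate >= '2010-01-01'
    GROUP BY CustomerID
)
SELECT c.CustomerName, ro.OrderCount
FROM dbo.Customers AS c
JOIN RecentOrders AS ro
    ON ro.CustomerID = c.CustomerID;
```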
Thanks
September 10, 2010 at 5:45 am
First off, I think the article was well intended. However, being a seasoned vet with SQL and SQL performance, I think your article was more of an intermediate guide than a beginner's guide. I shared it with some of the junior DBAs in my office and got the deer-in-the-headlights look from them.
Just wanted to provide you some feedback.
September 10, 2010 at 6:00 am
mlabedz (9/10/2010)
First off, I think the article was well intended. However, being a seasoned vet with SQL and SQL performance, I think that your article was more of an intermediate guide rather than a beginners guide. I have shared it with some of the juniors in my office and I got the deer in the headlights look from them. Just wanted to provide you some feedback.
There, fixed it for you 😀
September 10, 2010 at 6:30 am
First, the positives:
Way to go on pointing out that you need to validate data during the tuning process. It's far too easy to assume the data is still correct, and this doesn't get pointed out often enough.
Keeping track of the changes made over time and the results is also a great idea while tuning. If nothing else it provides an excellent path to learning what works and what doesn't while tuning. Again, something that a lot of people miss.
Also, finally, well done on taking a consistent approach to your tuning. Always running the procedures one way from SSMS is great advice when doing the tuning work.
Unfortunately, I've also got a few negatives:
You didn't say in the article (or at least I didn't see) which environment you were doing this tuning in. Because you didn't say, people might not realize that running DBCC DROPCLEANBUFFERS and DBCC FREEPROCCACHE is extremely dangerous on a production system. You've just flushed all the data and query plans from cache, so every application running on the system takes a hit as that data is read from disk instead of memory, and all the queries have to recompile (taking longer, as you noted). Especially because this is intended as an introductory-level article, that information is vital to people who don't yet know what they're doing.
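For readers who haven't seen them, these are the commands in question; run them only on a dev or test instance, never in production:

```sql
-- Write dirty pages to disk, then drop all clean pages from the
-- buffer pool and clear all cached query plans.
-- DEV/TEST ONLY: on a production server this forces every query to
-- re-read its data from disk and recompile its plan.
CHECKPOINT;
DBCC DROPCLEANBUFFERS;
DBCC FREEPROCCACHE;
```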
Speaking of people who don't yet know what they're doing needing information: you talked about Profiler throughout the article. Again, in a production environment, using the Profiler GUI is dangerous. Instead, it's very highly recommended that you use a server-side trace, with trace events started from T-SQL and output to a file.
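A minimal server-side trace looks roughly like this (the file path is made up; the event and column IDs are the standard ones for SQL:BatchCompleted, TextData, Duration, and Reads):

```sql
-- Hypothetical sketch of a server-side trace started from T-SQL.
-- The .trc extension is appended to the file name automatically.
DECLARE @TraceID int, @on bit, @maxsize bigint;
SELECT @on = 1, @maxsize = 50; -- max file size in MB
EXEC sp_trace_create @TraceID OUTPUT, 0, N'C:\Traces\tuning', @maxsize;
-- Event 12 = SQL:BatchCompleted; columns 1 = TextData,
-- 13 = Duration, 16 = Reads.
EXEC sp_trace_setevent @TraceID, 12, 1,  @on;
EXEC sp_trace_setevent @TraceID, 12, 13, @on;
EXEC sp_trace_setevent @TraceID, 12, 16, @on;
EXEC sp_trace_setstatus @TraceID, 1;  -- start the trace
-- Later: status 0 stops the trace, status 2 closes and deletes it.
```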
Too much focus on reads, just like too much focus on duration, can be misleading. You need to look at both. You can have only a few reads on the system and still have a badly tuned query.
I'm sure it's just sentence structure, but the way you wrote it, you're advocating for the elimination of joins in queries. I'd suggest that's not necessarily a good approach.
Index scans can be just as costly as table scans, maybe even more so. Plus, remember, the clustered index is the table, so a clustered index scan is effectively identical to a table scan. Going from a table scan to a clustered index scan, in most circumstances, won't improve performance at all.
I'd suggest trying to put more cautions into your articles, especially when writing for beginners, so they understand where the weaknesses in a given approach may lie. Especially when advocating for things like cleaning out the cache or using the Profiler GUI.
"The credit belongs to the man who is actually in the arena, whose face is marred by dust and sweat and blood"
- Theodore Roosevelt
Author of:
SQL Server Execution Plans
SQL Server Query Performance Tuning
September 10, 2010 at 9:09 am
I had intended to write pretty much everything Grant did - thanks for saving me the time Grant!
I will add one thing:
We should be careful about creating temporary tables, dynamic SQL, and joins. Those things are a gold mine for tuners, as eliminating them can make your procedure much faster.
Actually, those are a gold mine for me for the OPPOSITE reason: I often get huge performance GAINS from adding in the first two of those. Temp tables can be used to split up massive join queries, giving the optimizer better metrics and thus more efficient plans in subsequent processing. And dynamic SQL can provide exact values for optimal statistics retrieval, which can also lead to optimal plans (and can certainly avoid the dreaded nested-loop-with-a-kajillion-rows disasters). I do note that this is ADVANCED usage from someone who has been doing SQL Server relational engine design and tuning for going on 15 years now. 😉
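Both techniques can be sketched roughly as follows (all object names are invented for illustration):

```sql
-- 1) Materialize an intermediate result in a temp table so the
--    optimizer gets real row counts and statistics for the rest
--    of the query, instead of estimating through a massive join.
SELECT o.CustomerID, SUM(o.Amount) AS Total
INTO #CustomerTotals
FROM dbo.Orders AS o
WHERE o.OrderDate >= '2010-01-01'
GROUP BY o.CustomerID;

SELECT c.CustomerName, t.Total
FROM dbo.Customers AS c
JOIN #CustomerTotals AS t ON t.CustomerID = c.CustomerID;

-- 2) Dynamic SQL embeds the literal value, so the optimizer can use
--    the statistics histogram for that exact value rather than a
--    generic parameter estimate. (Doubling quotes guards against
--    injection via the embedded literal.)
DECLARE @Region nvarchar(30), @sql nvarchar(max);
SET @Region = N'Lima';
SET @sql = N'SELECT * FROM dbo.Customers WHERE Region = N'''
    + REPLACE(@Region, '''', '''''') + N'''';
EXEC sys.sp_executesql @sql;
```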
Best,
Kevin G. Boles
SQL Server Consultant
SQL MVP 2007-2012
TheSQLGuru on googles mail service
September 10, 2010 at 9:34 am
First time posting, been reading for a while.
First off, nice article. Performance is something that I have always loved tweaking. I figured I would share one of my favorite articles (it covers set-based logic):
http://weblogs.sqlteam.com/jeffs/archive/2007/04/30/60192.aspx
-TheCrazyLudwig
September 10, 2010 at 11:25 am
I'd like to endorse what others have said about temp tables.
I use CTEs a lot, and often they make the query both more legible and faster. But sometimes they seem to overload the optimiser and result in a very slow plan. I have sped up some UPDATEs from minutes to seconds by splitting them into a SELECT INTO a temp table followed by an UPDATE FROM a join to that table, compared to my original code that used WITH cte ... UPDATE.
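The rewrite looks roughly like this (schema invented for illustration):

```sql
-- Original pattern: UPDATE through a CTE, all in one statement.
WITH Totals AS (
    SELECT CustomerID, SUM(Amount) AS Total
    FROM dbo.Orders
    GROUP BY CustomerID
)
UPDATE c
SET c.LifetimeTotal = t.Total
FROM dbo.Customers AS c
JOIN Totals AS t ON t.CustomerID = c.CustomerID;

-- Rewrite: materialize the aggregate first, then UPDATE FROM a
-- join to the temp table. Two simpler plans instead of one big one.
SELECT CustomerID, SUM(Amount) AS Total
INTO #Totals
FROM dbo.Orders
GROUP BY CustomerID;

UPDATE c
SET c.LifetimeTotal = t.Total
FROM dbo.Customers AS c
JOIN #Totals AS t ON t.CustomerID = c.CustomerID;
```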
September 10, 2010 at 12:02 pm
The very first things I look at in a proc are cursors, scalar UDFs, and IN abuse. Table/clustered index scans and temp tables may or may not be appropriate; I think blanket warnings may take people in the wrong direction.
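The scalar-UDF case is the easiest to show; a sketch with invented names:

```sql
-- Hypothetical: a scalar UDF called once per row forces row-by-row
-- execution and hides the lookup from the optimizer...
SELECT o.OrderID, dbo.fn_CustomerName(o.CustomerID) AS CustomerName
FROM dbo.Orders AS o;

-- ...whereas a plain join stays set-based and can use indexes:
SELECT o.OrderID, c.CustomerName
FROM dbo.Orders AS o
JOIN dbo.Customers AS c ON c.CustomerID = o.CustomerID;
```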