November 16, 2008 at 9:36 pm
Comments posted to this topic are about the item Performance testing and tuning for a beginner
November 17, 2008 at 5:19 am
That is a resourceful post. As you stated, there is no single approach to tuning a proc. It is purely a case-by-case issue, so a variety of tests have to be made after baseline numbers have been taken, and thereafter new readings of the various performance indicators examined and compared with each modification. The fact is that you have to start somewhere and eliminate those indicators that do not improve with changes until, eventually, the real cause(s) of the performance problem is identified and then tweaked to achieve improvement. As stated, it could be a problem with indexes, memory and I/O, temp tables, the network, how the T-SQL itself has been written, contention for server resources, etc. I think one of the principal problems is identifying the real cause of the issue, and that has to do with using the right tools and understanding the tools in use.
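As a minimal sketch of taking those baseline numbers (the procedure name and parameter here are hypothetical), you can capture logical reads and elapsed time before and after each modification:

-- Hypothetical example: capture baseline I/O and timing numbers
-- before and after each change to the procedure being tuned.
SET STATISTICS IO ON;
SET STATISTICS TIME ON;

DBCC DROPCLEANBUFFERS;  -- test server only: clears the buffer cache so reads are cold

EXEC dbo.usp_GetCustomerOrders @CustomerID = 42;  -- the proc under test

SET STATISTICS IO OFF;
SET STATISTICS TIME OFF;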
November 17, 2008 at 5:24 am
Nice article ...:)
November 17, 2008 at 5:39 am
Hi Addai and Anirban,
Many thanks for your valued review.
November 17, 2008 at 5:54 am
Nice article, man... :) You've done a great job and research... great going, and waiting for the next article which you said you would write.
November 17, 2008 at 7:25 am
Well, I agree with your statement about being careful with temporary tables, but don't ignore them. In many cases I have taken a complex multi-table query and broken it up into simpler queries with significantly less overall query cost. This is especially true when the original complex query contains a PK table scan on a very large table and then joins it to a derived result set in lieu of performing an index seek on the large table. This is especially obvious when looking at the query plan and observing large intermediate result sets (thick lines) where there should not be such.
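A minimal sketch of that breakup (all table and column names hypothetical): filter the very large table into a temp table first, so the subsequent joins work on a small, seekable result set:

-- Hypothetical example: stage the filtered slice of a very large
-- table first, then join the temp table to the rest of the query.
SELECT o.OrderID, o.CustomerID, o.OrderDate
INTO #RecentOrders
FROM dbo.Orders AS o              -- very large table
WHERE o.OrderDate >= '20081001';  -- selective filter, supported by an index

SELECT r.OrderID, c.CustomerName, d.LineTotal
FROM #RecentOrders AS r
JOIN dbo.Customers AS c ON c.CustomerID = r.CustomerID
JOIN dbo.OrderDetails AS d ON d.OrderID = r.OrderID;

DROP TABLE #RecentOrders;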
Usually when I am given a stored proc to "tune", after cleaning it up (converting to ANSI-92 standard joins for readability, SET NOCOUNT ON, moving DDL statements to the top of the stored proc, and fully qualifying object names) I analyze the query plan to see if it makes sense. As M$ frequently states, the query optimizer doesn't pick the best plan, just one that it thinks is OK. My job then is to "help" the optimizer pick a better plan.
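A minimal sketch of that cleanup (procedure and table names hypothetical):

-- Hypothetical cleaned-up skeleton: NOCOUNT on, DDL at the top,
-- ANSI-92 joins, fully qualified (two-part) object names.
CREATE PROCEDURE dbo.usp_GetOrderSummary
    @CustomerID int
AS
BEGIN
    SET NOCOUNT ON;

    CREATE TABLE #Summary (OrderID int, LineTotal money);  -- DDL first

    INSERT INTO #Summary (OrderID, LineTotal)
    SELECT o.OrderID, d.LineTotal
    FROM dbo.Orders AS o
    INNER JOIN dbo.OrderDetails AS d ON d.OrderID = o.OrderID  -- ANSI-92 join
    WHERE o.CustomerID = @CustomerID;

    SELECT OrderID, LineTotal FROM #Summary;
END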
Usually I use profiler to determine which stored procs need attention, then the query plan to identify statements within the stored proc that need attention.
Also, don't forget to consider revising or adding indexes to help performance. The INCLUDE clause in SQL Server 2005 is a godsend for covering indexes.
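For example (index, table, and column names hypothetical), a covering index that satisfies a query without a key lookup:

-- Hypothetical covering index: the filter column is the key, and the
-- SELECTed columns ride along in the leaf level via INCLUDE.
CREATE NONCLUSTERED INDEX IX_Orders_CustomerID
ON dbo.Orders (CustomerID)
INCLUDE (OrderDate, TotalDue);

-- This query can now be answered from the index alone:
SELECT OrderDate, TotalDue
FROM dbo.Orders
WHERE CustomerID = 42;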
Cheers,
Mike Byrd
November 17, 2008 at 9:26 am
It's a nice topic.
The "tuning" is more than just apply the "best practices" to the T-SQL, but is the first step to try to improve the performance of your system.
Many of ours peers avoid the T-SQL tuning because have a very good HW supporting behind the scenes, and crashes when the database become a millions of records and the qrys give them bad time response, then is when they start thinking in improve HW; and the solution could be tuning code.
Saludos a todos!
November 17, 2008 at 9:27 am
This was a great article.
Here is my experience. In my previous job we had a stored procedure (a batch job) that used to take around 6 hours to run and did a lot of things on a 35 GB database. Many times it would fill up tempdb and crash. Here are a few things I did to fix it, and there was a great improvement (down to 1 hour 40 minutes).
There were a few very complex queries that joined 9 big tables. I broke those queries up, created a temp table, and used a WHILE loop over the temp table. Removed the dynamic SQL. Removed functions like AVG and SUM from the large query and moved them into a subquery; remember, those functions carry a lot of overhead. Of course I also had to create a few indexes.
But yes, there are quite a few ways of tuning a stored procedure. My favorite is breaking up joins into various small queries, pumping the data into temp tables, and then joining them.
I am not sure if this is the best way, but it has always worked for me :)
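A minimal sketch of that pre-aggregation idea (table and column names hypothetical): compute the SUM/AVG once into a temp table, then join it, instead of aggregating inside the big multi-table query:

-- Hypothetical example: aggregate once into a temp table, then join,
-- instead of running SUM/AVG inside the 9-table query.
SELECT d.OrderID,
       SUM(d.LineTotal) AS OrderTotal,
       AVG(d.LineTotal) AS AvgLine
INTO #OrderTotals
FROM dbo.OrderDetails AS d
GROUP BY d.OrderID;

SELECT o.OrderID, o.CustomerID, t.OrderTotal, t.AvgLine
FROM dbo.Orders AS o
JOIN #OrderTotals AS t ON t.OrderID = o.OrderID;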
R A J
November 17, 2008 at 10:12 am
Points for writing and posting the article. And a good start.
For smallish, in-house corporate applications, tuning seems to be ignored or overlooked. And even with externally facing internet applications, I've found performance and tuning only become important when problems arise. Keep tuning.
The more you are prepared, the less you need it.
November 17, 2008 at 11:11 am
Nice article :).
I spend a lot of time on performance tuning; it is my favorite activity. I find many people don't like to pay attention to it. I find the biggest problems in vendor applications and Access developers' applications. Microsoft Access developers don't think about all the effects when they are working on the dataset, at least not concurrent data access. So I often find developers selecting the entire table, and then it doesn't matter how many indexes, covering indexes, or INCLUDE indexes you build; it will be slow. And I usually get the reply that Microsoft Access could do it... Sure it can, when you only have 10K records and only a few users hitting it. I usually prove the point to them using READS, plus lots of diagrams of indexes and how SQL Server stores information.
As you said, you focused on READS; I look at those too. In addition, I look at the number of rows returned. On a recent application I tuned, the application was generating 150K reads, and I thought, great, another full table select. But looking at the rows returned, it was fewer than 100 rows, so that didn't turn out to be the issue. In this case the developer was at least being nice, but if you average the reads across the rows, that's 1,500 logical reads per row. The best case would be 2-3 reads per row, given how the data is accessed. In my case it turned out a function call was the problem; an RBAR issue.
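A minimal sketch of that kind of RBAR problem and its set-based fix (function, table, and column names hypothetical):

-- Hypothetical RBAR pattern: a scalar UDF runs once per row,
-- hiding per-row lookups that inflate the logical-read count.
SELECT o.OrderID,
       dbo.fn_GetCustomerName(o.CustomerID) AS CustomerName  -- called per row
FROM dbo.Orders AS o;

-- Set-based rewrite: one join, a fraction of the reads.
SELECT o.OrderID, c.CustomerName
FROM dbo.Orders AS o
JOIN dbo.Customers AS c ON c.CustomerID = o.CustomerID;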
Again, good article; just my two cents on the tuning I've seen ;). As you said, there is no set way of tuning. I usually tell people at work: I can't teach you tuning; it is experience and an art, and you learn it. And I usually hand them "Dissecting SQL Server Execution Plans" by Grant Fritchey. Really good book for learning the basics :).
Thanks.
Mohit K. Gupta, MCITP: Database Administrator (2005), My Blog, Twitter: @SQLCAN.
Microsoft FTE - SQL Server PFE
* Sometimes it's the search that counts, not the finding...
* I didn't think so, but if I was wrong, I was wrong. I'd rather do something and make a mistake than be frightened and be doing nothing.
November 17, 2008 at 1:04 pm
Great, easy-to-understand article!
November 17, 2008 at 2:14 pm
Good Job! Thanks for the input.
----------------------------------------------------------------------------
"No question is so difficult to answer as that to which the answer is obvious." - George Bernard Shaw
November 17, 2008 at 3:10 pm
This seems so fundamental that I almost think I shouldn't mention it, but I will anyway.
When designing a database, its indexes, and the queries that access it, do everything you can to:
1. not use the database at all (I've known programmers who used it for logging when a flat file would have been much more efficient),
2. avoid physical disk writes (they are significantly slower than disk reads),
3. avoid physical disk reads.
Your hard drives are the slowest components on your system by several orders of magnitude.
Do everything you can to design for the use of the system and database caches. This will speed up everything. This means lots of memory dedicated to SQL Server and correct indexing of all search fields.
And no correlated sub-queries - period.
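For instance, a minimal sketch (all names hypothetical) of rewriting a correlated sub-query as a join:

-- Hypothetical correlated sub-query: re-evaluated for every outer row.
SELECT c.CustomerID, c.CustomerName
FROM dbo.Customers AS c
WHERE (SELECT COUNT(*)
       FROM dbo.Orders AS o
       WHERE o.CustomerID = c.CustomerID) > 10;

-- Set-based rewrite: aggregate once, then join.
SELECT c.CustomerID, c.CustomerName
FROM dbo.Customers AS c
JOIN (SELECT CustomerID
      FROM dbo.Orders
      GROUP BY CustomerID
      HAVING COUNT(*) > 10) AS o ON o.CustomerID = c.CustomerID;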
LC
November 17, 2008 at 10:51 pm
So glad to join this discussion and see so many splendid ideas about performance tuning tips.
But we should also think about fragmentation of the database tables. The more fragmentation exists in a table, the more it will reduce performance.
You can check the fragmentation of a table through the dynamic management function sys.dm_db_index_physical_stats. It returns size and fragmentation information for the data and indexes of the specified table or view.
Syntax:
sys.dm_db_index_physical_stats (
{ database_id | NULL | 0 | DEFAULT }
, { object_id | NULL | 0 | DEFAULT }
, { index_id | NULL | 0 | -1 | DEFAULT }
, { partition_number | NULL | 0 | DEFAULT }
, { mode | NULL | DEFAULT }
)
The fragmentation level of an index or heap is shown in the avg_fragmentation_in_percent column, so you can decide whether you need to rebuild or reorganize the index of the table to reduce the fragmentation.
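A minimal sketch of putting that together (the table and index names are hypothetical; the thresholds are the commonly cited Books Online guidance):

-- Hypothetical check: list fragmentation for one table's indexes
-- in the current database.
SELECT i.name AS IndexName,
       s.index_id,
       s.avg_fragmentation_in_percent
FROM sys.dm_db_index_physical_stats(DB_ID(), OBJECT_ID('dbo.Orders'),
                                    NULL, NULL, 'LIMITED') AS s
JOIN sys.indexes AS i
  ON i.object_id = s.object_id AND i.index_id = s.index_id;

-- Common guidance: 5-30% fragmentation -> REORGANIZE; over 30% -> REBUILD.
ALTER INDEX IX_Orders_CustomerID ON dbo.Orders REORGANIZE;
-- ALTER INDEX IX_Orders_CustomerID ON dbo.Orders REBUILD;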
----==----:hehe:
November 18, 2008 at 12:53 am
Nice article. Really good one.
KSB
-----------------------------
"Thousands of candles can be lit from a single candle, and the life of the candle will not be shortened. Knowledge and happiness never decrease by being shared." - Buddha