November 9, 2011 at 8:23 am
Dev @ +91 973 913 6683 (11/9/2011)
Off the topic, but I was searching for MVPs around the world and found 280 MVPs for SQL Server. I noticed you (Gail) in the list. May I know who else contributes here?
A lot of them.
Why do you ask?
November 9, 2011 at 8:28 am
I have limited knowledge about them and wanted to know more. Nothing else.
November 9, 2011 at 8:31 am
Dev @ +91 973 913 6683 (11/9/2011)
I have limited knowledge about them and wanted to know more. Nothing else.
I'm not saying you did anything wrong. I just don't know what you're looking for, so I can't help you further.
November 9, 2011 at 8:37 am
Actually, I don’t want it to happen that I get an opportunity to talk or have lunch with an MVP and can’t recognize him or her. 😀
My intentions are genuine. I would like to thank them personally (with a handshake ;-)) for their contribution to SQL Server by sharing their knowledge with dumb guys (like me).
November 9, 2011 at 8:39 am
Dev @ +91 973 913 6683 (11/9/2011)
Actually, I don’t want it to happen that I get an opportunity to talk or have lunch with an MVP and can’t recognize him or her. 😀 My intentions are genuine. I would like to thank them personally (with a handshake ;-)) for their contribution to SQL Server by sharing their knowledge with dumb guys (like me).
Then next time you go to an event, ask who's an MVP there.
It's also a good idea to say thanks every time someone helps.
Also note that some are not MVPs even if they help tremendously (like myself).
November 9, 2011 at 8:44 am
Also note that some are not MVPs even if they help tremendously (like myself).
I have no doubt in this. You are a helpful person indeed. Thank You! 😀
November 10, 2011 at 1:43 pm
Personally, if I have the space available, and the index meets my "defrag it" criteria, and I'm within my maintenance window (i.e. I have free rein to lock indexes), I always rebuild. As far as I am aware, if you're solely worried about the most thorough index maintenance, rebuild does a better job than reorganize.
On servers with Enterprise Edition, which can do online rebuilds, the choice is even easier for me.
This is advice suited for a situation where you have a known maintenance window and won't exceed it, plenty of free space, and so on and so forth. Different situations will result in different optimal choices.
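As a rough illustration of that decision (not Nadrek's actual criteria), here is a sketch that picks a rebuild or a reorganize per index from sys.dm_db_index_physical_stats. The 5%/30% thresholds and the 1000-page floor are commonly cited starting points, not rules; tune them for your system.

```sql
-- Generate a suggested maintenance command per fragmented index.
-- Thresholds (5% / 30%) and the 1000-page floor are illustrative placeholders.
SELECT  QUOTENAME(s.name) + '.' + QUOTENAME(o.name) AS TableName,
        i.name AS IndexName,
        ps.avg_fragmentation_in_percent,
        'ALTER INDEX ' + QUOTENAME(i.name)
      + ' ON ' + QUOTENAME(s.name) + '.' + QUOTENAME(o.name)
      + CASE WHEN ps.avg_fragmentation_in_percent >= 30
             THEN ' REBUILD'        -- more thorough; needs space and a lock (or ONLINE)
             ELSE ' REORGANIZE'     -- lighter-weight, always online
        END AS MaintenanceCommand
FROM    sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED') ps
JOIN    sys.indexes i ON i.object_id = ps.object_id AND i.index_id = ps.index_id
JOIN    sys.objects o ON o.object_id = i.object_id
JOIN    sys.schemas s ON s.schema_id = o.schema_id
WHERE   ps.avg_fragmentation_in_percent >= 5
  AND   ps.page_count >= 1000       -- skip tiny indexes; defragging them rarely helps
  AND   i.index_id > 0;             -- ignore heaps
```

Reviewing the generated commands before executing them (rather than running dynamic SQL blindly) keeps this safe to experiment with.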
November 10, 2011 at 2:11 pm
Nadrek (11/10/2011)
Personally, if I have the space available, and the index meets my "defrag it" criteria, and I'm within my maintenance window (i.e. I have free rein to lock indexes), I always rebuild. As far as I am aware, if you're solely worried about the most thorough index maintenance, rebuild does a better job than reorganize. On servers with Enterprise Edition, which can do online rebuilds, the choice is even easier for me.
This is advice suited for a situation where you have a known maintenance window and won't exceed it, plenty of free space, and so on and so forth. Different situations will result in different optimal choices.
The utility I use to rebuild/reorganize indexes checks whether we are on Enterprise Edition. If so, it performs a rebuild for all indexes that meet the requirements for online index rebuilds. If an index does not meet that requirement, it reorganizes or rebuilds based on the level of fragmentation.
Simple choice really - and I agree, rebuilding indexes is generally faster and more thorough than reorganizing.
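A minimal sketch of that edition check (the table name is a placeholder; note also that before SQL Server 2012, indexes containing LOB columns cannot be rebuilt online):

```sql
-- EngineEdition = 3 means Enterprise (also Developer/Evaluation),
-- the editions that support ALTER INDEX ... REBUILD WITH (ONLINE = ON).
IF CONVERT(int, SERVERPROPERTY('EngineEdition')) = 3
    ALTER INDEX ALL ON dbo.MyTable REBUILD WITH (ONLINE = ON);
ELSE
    ALTER INDEX ALL ON dbo.MyTable REBUILD;  -- offline fallback
```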
Jeffrey Williams
“We are all faced with a series of great opportunities brilliantly disguised as impossible situations.”
― Charles R. Swindoll
How to post questions to get better answers faster
Managing Transaction Logs
November 11, 2011 at 8:06 am
Dev (11/9/2011)
Actually I don’t want it to happen that I get an opportunity to talk or have lunch with an MVP & I couldn’t recognize him / her. 😀My intensions are genuine. I would like to thank them personally (with handshake ;-)) for their contribution for SQL Server by sharing their knowledge with dumb guys (like me).
https://mvp.support.microsoft.com/communities/mvp.aspx?product=1&competency=SQL+Server
If you go to a SQL Saturday event (sqlsaturday.com), you will usually find several presenting. The PASS conference has a large percentage of them in attendance (sqlpass.org). I wouldn't be surprised to find that there are more than a dozen, perhaps as many as two dozen, who post regularly here on ssc.com.
Best,
Kevin G. Boles
SQL Server Consultant
SQL MVP 2007-2012
TheSQLGuru on googles mail service
November 11, 2011 at 8:23 am
I recommend that people look at this "industry standard" set of scripts (free download)
http://ola.hallengren.com/Documentation.html#IndexOptimize
The basics are that a rebuild or reorg happens as appropriate to the degree of fragmentation. Once set up (as a SQL Agent job), you can just let it chug away in the background.
I recommend you impose some @TimeLimit value so it stays in the overnight/weekend window
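For reference, a typical call might look like the following. The parameter names come from the IndexOptimize documentation linked above; verify them against the version you download, since they can change between releases.

```sql
-- Example IndexOptimize call with a time limit so it stays in the window.
EXECUTE dbo.IndexOptimize
    @Databases = 'USER_DATABASES',
    @FragmentationMedium = 'INDEX_REORGANIZE,INDEX_REBUILD_ONLINE,INDEX_REBUILD_OFFLINE',
    @FragmentationHigh   = 'INDEX_REBUILD_ONLINE,INDEX_REBUILD_OFFLINE',
    @FragmentationLevel1 = 5,        -- %: below this, do nothing
    @FragmentationLevel2 = 30,       -- %: above this, prefer rebuild
    @TimeLimit = 14400;              -- seconds: stop starting new commands after 4 hours
```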
Dick
November 11, 2011 at 8:27 am
Thank You Kevin 🙂
November 11, 2011 at 8:29 am
Ola's stuff is good, but this is the one I use (found it first).
It's worked well over the last few years.
November 11, 2011 at 12:59 pm
dick.baker (11/11/2011)
I recommend that people look at this "industry standard" set of scripts (free download): http://ola.hallengren.com/Documentation.html#IndexOptimize
The basics are that a rebuild or reorg happens as appropriate to the degree of fragmentation. Once set up (as a SQL Agent job), you can just let it chug away in the background.
I recommend you impose some @TimeLimit value so it stays in the overnight/weekend window
Dick
Note that the few scripts I've seen with something like @TimeLimit do not tend to have code to prevent starting a 5-hour defrag of an enormous table 15 minutes before the end of the time limit.
I do have some custom code that does this, though it's very rough; the trick is to estimate, before starting, how long the defrag will take (and the speed is different for clustered and nonclustered indexes).
November 11, 2011 at 1:33 pm
Nadrek (11/11/2011)
Note that the few scripts I've seen with something like @TimeLimit do not tend to have code to prevent starting a 5-hour defrag of an enormous table 15 minutes before the end of the time limit. I do have some custom code that does it, though it's very rough; the trick is to estimate how long it'll take (and the speed is different for clustered and nonclustered indexes) before starting.
I wrote some very custom index rebuild code a few years back (tied tightly to the app and the db), I used the previous rebuild duration + a bit to estimate if an index could rebuild in the time remaining. Had to err on the conservative side, but it worked pretty well
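That approach might look something like the following sketch. The log table dbo.IndexRebuildLog, the sample names, and the 20% safety margin are invented for illustration; they are not Gail's actual code.

```sql
-- Estimate from the last recorded rebuild of this index, plus a margin,
-- and only start the rebuild if it should finish inside the window.
DECLARE @TableName  sysname  = N'Orders';               -- placeholder
DECLARE @IndexName  sysname  = N'IX_Orders_CustomerID'; -- placeholder
DECLARE @WindowEnd  datetime = '20111112 06:00';        -- end of maintenance window
DECLARE @EstSeconds int;

SELECT @EstSeconds = CEILING(MAX(DurationSeconds) * 1.2)   -- prior duration + 20%
FROM dbo.IndexRebuildLog
WHERE TableName = @TableName
  AND IndexName = @IndexName;

-- No history yet? Err on the conservative side with a generous default.
SET @EstSeconds = COALESCE(@EstSeconds, 1800);

IF DATEADD(SECOND, @EstSeconds, GETDATE()) <= @WindowEnd
BEGIN
    DECLARE @cmd nvarchar(500) =
        N'ALTER INDEX ' + QUOTENAME(@IndexName)
      + N' ON dbo.' + QUOTENAME(@TableName) + N' REBUILD';
    EXEC sp_executesql @cmd;   -- then log the actual duration for next time
END
```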
Gail Shaw
Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability
November 11, 2011 at 2:07 pm
GilaMonster (11/11/2011)
I wrote some very custom index rebuild code a few years back (tied tightly to the app and the db), I used the previous rebuild duration + a bit to estimate if an index could rebuild in the time remaining. Had to err on the conservative side, but it worked pretty well
Here's a snippet of my SQL 2000 vintage code. @CountPages is taken directly from DBCC SHOWCONTIG .. WITH TABLERESULTS
SET @TakeTooLong = 0

-- Build a fixed-width "Skipping" label, padded out to 160 characters when short.
SET @OutputString =
    CASE
        WHEN LEN('Skipping (' + @SchemaPlusTablename + ',[' + @indexname + ']) ') < 160
            THEN LEFT('Skipping (' + @SchemaPlusTablename + ',[' + @indexname + ']) ' + REPLICATE(' ',160), 160) -- if it's too small, pad it out
        ELSE 'Skipping (' + @SchemaPlusTablename + ',[' + @indexname + ']) '
    END
    + ' LogFrg: '  + CONVERT(CHAR(3),@frag) + '%'
    + ' ScnDn '    + CONVERT(CHAR(3),@ScanDensity) + '%'
    + ' FillFact ' + CONVERT(CHAR(3),@OrigFillFactor)
    + ' Pgs: '     + CONVERT(CHAR(9),@CountPages)

-- Now, let's estimate whether this index is likely to take us past our timing
-- and/or maintenance window. (Integer division; assumes the rates are nonzero.)
IF @IndexId = 1
BEGIN
    -- Clustered
    -- First, will it take us past our @ExitIfAfter deadline?
    IF DATEADD(s, (@CountPages/@ClusteredPagesPerSecond), GETDATE()) > @ExitIfAfter
    BEGIN
        PRINT @OutputString + ' ExitIfAfter ' + CONVERT(CHAR(23),@ExitIfAfter,126)
            + ' Est End '  + CONVERT(CHAR(23),DATEADD(s,(@CountPages/@ClusteredPagesPerSecond),GETDATE()),126)
            + ' est rate ' + CONVERT(CHAR(11),@ClusteredPagesPerSecond) + ' Clust Pgs/sec'
        SET @TakeTooLong = 1
    END -- Clustered DATEADD

    -- Second, will this one index take more time than we want any one index to take?
    IF @CountPages/@ClusteredPagesPerSecond > @MaximumSecondsPerIndividualDefrag
        AND @TakeTooLong = 0
    BEGIN
        PRINT @OutputString + ' Max Sec '  + CONVERT(CHAR(11),@MaximumSecondsPerIndividualDefrag)
            + ' Est Sec '  + CONVERT(CHAR(11),@CountPages/@ClusteredPagesPerSecond)
            + ' est rate ' + CONVERT(CHAR(11),@ClusteredPagesPerSecond) + ' Clust Pgs/sec'
        SET @TakeTooLong = 1
    END -- Clustered MaximumSecondsPerIndividualDefrag
END -- @IndexId = 1
ELSE
BEGIN
    -- Nonclustered: the same two checks, against the nonclustered rate.
    IF DATEADD(s, (@CountPages/@NonClusteredPagesPerSecond), GETDATE()) > @ExitIfAfter
    BEGIN
        PRINT @OutputString + ' ExitIfAfter ' + CONVERT(CHAR(23),@ExitIfAfter,126)
            + ' Est End '  + CONVERT(CHAR(23),DATEADD(s,(@CountPages/@NonClusteredPagesPerSecond),GETDATE()),126)
            + ' est rate ' + CONVERT(CHAR(11),@NonClusteredPagesPerSecond) + ' NonClust Pgs/sec'
        SET @TakeTooLong = 1
    END -- NonClustered DATEADD

    IF @CountPages/@NonClusteredPagesPerSecond > @MaximumSecondsPerIndividualDefrag
        AND @TakeTooLong = 0
    BEGIN
        PRINT @OutputString + ' Max Sec '  + CONVERT(CHAR(11),@MaximumSecondsPerIndividualDefrag)
            + ' Est Sec '  + CONVERT(CHAR(11),@CountPages/@NonClusteredPagesPerSecond)
            + ' est rate ' + CONVERT(CHAR(11),@NonClusteredPagesPerSecond) + ' NonClust Pgs/sec'
        SET @TakeTooLong = 1
    END -- NonClustered MaximumSecondsPerIndividualDefrag
END -- ELSE (@IndexId <> 1)
And the PagesPerSecond updates; essentially, we use a hardcoded/passed in estimate until we've performed 10 seconds of actual work on that type of index, after which we use our cumulative average as a guess.
Note that I found that for many tables, clustered index defrags were significantly faster than nonclustered index defrags (to my surprise).
A more advanced version would take into account which drive(s) the various files on each filegroup are on, and perform more granular estimates.
-- Now we're going to update our cumulative measured statistics, and if we have
-- enough, we'll update our estimated pages per second.
IF @IndexId = 1
BEGIN
    -- Clustered
    SET @TotalMeasuredClusteredMilliseconds = @TotalMeasuredClusteredMilliseconds
        + DATEDIFF(ms, @MeasureOneDefragStartTime, @MeasureOneDefragEndTime)
    SET @TotalMeasuredClusteredPages = @TotalMeasuredClusteredPages + @CountPages

    -- If we have at least ten seconds of data, recalculate from our cumulative statistics
    IF @TotalMeasuredClusteredMilliseconds > 10000
        SET @ClusteredPagesPerSecond = @TotalMeasuredClusteredPages / (@TotalMeasuredClusteredMilliseconds/1000) -- ms-to-seconds conversion

    SET @OutputString = @OutputString + ' Cum Est Pgs/sec: ' + CONVERT(CHAR(10),@ClusteredPagesPerSecond) + ' Clust'
END -- @IndexId = 1
ELSE
BEGIN
    -- Nonclustered
    SET @TotalMeasuredNonClusteredMilliseconds = @TotalMeasuredNonClusteredMilliseconds
        + DATEDIFF(ms, @MeasureOneDefragStartTime, @MeasureOneDefragEndTime)
    SET @TotalMeasuredNonClusteredPages = @TotalMeasuredNonClusteredPages + @CountPages

    -- If we have at least ten seconds of data, recalculate from our cumulative statistics
    IF @TotalMeasuredNonClusteredMilliseconds > 10000
        SET @NonClusteredPagesPerSecond = @TotalMeasuredNonClusteredPages / (@TotalMeasuredNonClusteredMilliseconds/1000) -- ms-to-seconds conversion

    SET @OutputString = @OutputString + ' Cum Est Pgs/sec: ' + CONVERT(CHAR(10),@NonClusteredPagesPerSecond) + ' NonClust'
END -- ELSE (@IndexId <> 1)