October 29, 2008 at 1:39 pm
Thank you for the feedback.
The majority of our database indexes get rebuilt on a regular basis.
Some are not... as always... no downtime allowed... :hehe: :ermm: :alien:
The final step of our rebuild jobs (no maintenance plans in use because... well, you know, the upgrade issues every now and again...) is sp_updatestats and DBCC UPDATEUSAGE. According to the books this shouldn't be needed, but we've seen a couple of cases that still needed it.
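For the curious, that final step boils down to something like this per database (a minimal sketch; run it in the database in question):
EXEC sp_updatestats
DBCC UPDATEUSAGE (0) -- 0 means the current database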
Johan
Learn to play, play to learn !
Don't drive faster than your guardian angel can fly ...
but keeping both feet on the ground won't get you anywhere :w00t:
- How to post Performance Problems
- How to post data/code to get the best help
- How to prevent a sore throat after hours of presenting ppt
press F1 for solution, press shift+F1 for urgent solution 😀
Need a bit of Powershell? How about this
Who am I ? Sometimes this is me but most of the time this is me
October 29, 2008 at 1:56 pm
ALZDBA (10/29/2008)
Some are not... as always ... no downtime allowed...:hehe: :ermm: :alien:
SQL 2000?
The final step of our rebuild jobs (no maintenance plans in use because... well, you know, the upgrade issues every now and again...) is sp_updatestats and DBCC UPDATEUSAGE. According to the books this shouldn't be needed, but we've seen a couple of cases that still needed it.
If you must update the stats as well, do it before the reindex. A reindex updates the stats on that index with a fullscan. If you then do a sampled stats update, you could end up with stats less accurate than if you hadn't updated them at all.
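A minimal sketch of that ordering on SQL 2000, with a made-up table name; DBCC DBREINDEX rebuilds all indexes on the table and leaves their statistics freshly built with a fullscan:
-- 1. Sampled update first, so it cannot overwrite the fullscan stats from the rebuild
UPDATE STATISTICS dbo.Orders
-- 2. Then the rebuild, which refreshes the index statistics with a fullscan
DBCC DBREINDEX ('dbo.Orders')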
Gail Shaw
Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability
October 30, 2008 at 1:56 am
GilaMonster (10/29/2008)
... SQL 2000? ...
Indeed.
They are not that fond of migrating the db engine.
Their argument is: if it ain't broken, don't fix it...
This means I'll only be allowed to upgrade if the hardware needs to be replaced (because at that time they will have downtime, and they promised to cope with it then).
Johan
October 30, 2008 at 2:09 am
ALZDBA (10/30/2008)
GilaMonster (10/29/2008)
... SQL 2000? ...
Indeed.
Try doing DBCC INDEXDEFRAG from time to time on those 'no downtime' systems. It is an online operation.
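A minimal sketch of the call, with made-up database, table and index names:
DBCC INDEXDEFRAG (MyDatabase, Orders, IX_Orders_CustomerID)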
This means I'll only be allowed to upgrade if the hardware needs to be replaced (because at that time they will have downtime, and they promised to cope with it then).
Time to take a hammer to the motherboard?
Gail Shaw
Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability
October 30, 2008 at 2:29 am
GilaMonster (10/30/2008)
Try doing DBCC INDEXDEFRAG from time to time on those 'no downtime' systems. It is an online operation.
I know, but I won't ...
That would just cover their needs and keep them completely blind to the real problem ...
Time to take a hammer to the motherboard?
I hope this one will do
:w00t:
Johan
October 30, 2008 at 12:29 pm
Hi,
Statistics are automagically updated, as Gail mentioned, during an INDEX REBUILD, or when the table is touched by a query and the optimizer deems the statistics out of date. INDEX DEFRAG, on the other hand, will not update the statistics.
If they haven't been updated, it is because either
a. sysindexes.rowmodctr compared to sysindexes.rows is not big enough (http://support.microsoft.com/kb/195565/EN-US/ contains a chart), or
b. the tables have not been touched by your queries.
There are three possible issues you can run into with statistics:
1. The statistics are outdated (because of a. above).
2. A high change frequency makes auto-statistics updates and the accompanying recompiles occur at disturbingly high rates.
3. The sample rate used to build the statistics is not high enough, and/or the underlying data is of a "skewed" nature (such as LastName, where you can expect some "buckets" to have many entries, such as "Smith", and some very few, such as "Doggy Dog"). See Bart Duncan's blog for a crystal-clear example: http://blogs.msdn.com/bartd/archive/2006/07/25/limited-statistics-granularity.aspx
If updating the statistics without specifying a sample rate solves any problems you have with bad row estimates, then 1 is your problem and you don't have to worry about 3 (although 2 might still be an issue).
Outdated Statistics
==================
Add a job that updates your statistics on a regular basis. If you do index rebuilds, remember that these will in fact rebuild the index statistics as well (so you wouldn't run both during the same night, or whatever interval you are using).
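A minimal sketch of such a job step that touches every user table (written against the SQL 2000 system tables; the schedule and any table filtering are up to you):
DECLARE @tbl sysname
DECLARE tbl_cur CURSOR FAST_FORWARD FOR
    SELECT name FROM sysobjects WHERE xtype = 'U'
OPEN tbl_cur
FETCH NEXT FROM tbl_cur INTO @tbl
WHILE @@FETCH_STATUS = 0
BEGIN
    -- default (sampled) update of all statistics on the table
    EXEC ('UPDATE STATISTICS [' + @tbl + ']')
    FETCH NEXT FROM tbl_cur INTO @tbl
END
CLOSE tbl_cur
DEALLOCATE tbl_cur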
Autostatistics and Recompile
============================
Have a look at http://support.microsoft.com/kb/243586 "Troubleshooting stored procedure recompilation".
Sample Rate Issues
==================
If 3 is your problem it is a little bit trickier.
What you need to do is to:
A. Find a sample rate that is high enough to provide you with good statistics (this might be as low as 10% for tables of ~5 million rows) but at the same time low enough to finish within your maintenance window (and, stating the obvious, it shouldn't obstruct other database activity either).
Maybe you end up running a 60% sample rate nightly on some tables, but a 10% sample rate 3 times a day on other tables.
B. Turn OFF auto-update statistics for those tables once you have set up your scheduled UPDATE STATISTICS ... WITH SAMPLE XX PERCENT (a sketch follows below).
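A minimal sketch of that combination, assuming a hypothetical large table dbo.BigOrders and a 10 percent sample; NORECOMPUTE marks the statistics so they are not auto-updated afterwards, and sp_autostats does the same per table:
-- Scheduled, sampled update; NORECOMPUTE switches off auto-update for these statistics
UPDATE STATISTICS dbo.BigOrders WITH SAMPLE 10 PERCENT, NORECOMPUTE
-- Alternatively, switch auto-update statistics off for the whole table
EXEC sp_autostats 'dbo.BigOrders', 'OFF'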
To find your large tables and their row modifications you can do something like this:
SELECT so.name AS TableName,
       si.rowmodctr AS RowsModified,
       si.rowcnt AS [RowCount]
FROM sysobjects so
     JOIN sysindexes si ON so.id = si.id
WHERE si.rowmodctr > -- some number
  AND si.rowcnt > -- some number
  AND so.xtype = 'U'
ORDER BY [RowCount]
(Mind you, non-indexed columns can also have statistics, via the Auto Create Statistics feature. You will find them by looking for entries in sysindexes.name that begin with _WA.)
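A minimal sketch for listing those auto-created column statistics; the underscores are bracket-escaped because _ is a wildcard in LIKE:
SELECT o.name AS TableName, i.name AS StatName
FROM sysobjects o
     JOIN sysindexes i ON o.id = i.id
WHERE o.xtype = 'U'
  AND i.name LIKE '[_]WA[_]Sys%'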
.. and to get a look at the sample rate that was used on the current statistics (ping me if you want the entire sproc that loops through everything):
DBCC SHOW_STATISTICS (@tablename, @index) WITH STAT_HEADER
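For example (table and index names are made up); in the STAT_HEADER output, compare "Rows Sampled" to "Rows" to see the effective sample rate of the last update:
DBCC SHOW_STATISTICS ('dbo.BigOrders', 'IX_BigOrders_CustomerID') WITH STAT_HEADER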
Lubor Kollar has written an MSDN article on the subject: "Statistics Used by the Query Optimizer in Microsoft SQL Server 2000", http://msdn.microsoft.com/en-us/library/aa902688(SQL.80).aspx and there is a shorter version at http://support.microsoft.com/kb/195565/EN-US/ "Statistical maintenance functionality (autostats) in SQL Server".
HTH!
/Elisabeth
elisabeth@sqlserverland.com
MCITP | MCT
http://sqlblog.com/blogs/elisabeth_redei/
http://linkedin.com/in/elisabethredei
November 10, 2008 at 2:01 am
Hi Lowell,
Thx for the code, it was very useful!!!
Lowell (10/27/2008)
OK Gail, put your thinking hat on, because you've got to double-check my work. I'm combining a couple of scripts I had lying around to try and find statistics that might need updating.
I figured I'd assume a lowly 4% or 1400 rows as my personal goal for identifying "what" needs updated statistics.
I'm getting more rows modified than I have total rows in the table in my query; I was under the impression that sysindexes.rowmodctr was a count of rows modified, and that when statistics got automatically updated, it was reset to zero.
As you can see below, I show 5896 rows modified on a table with only 1537 rows.
Here's a couple of typical rows of data:
Table Name   Index Name               Rows Modified   Total Rows   Percent Modified
RB_TABLE     TABLE_NAME_IDX           5896            1537         100.00
SSFUNCAT     PK__SSFUNCAT__19C17488   1400            2468         56.73
Here's my prototype SQL:
SELECT
sysobjects.name AS [Table Name],
sysindexes.name AS [Index Name],
sysindexes.rowmodctr AS [Rows Modified],
IndexTotals.NUMROWS AS [Total Rows],
CASE
WHEN sysindexes.rowmodctr > IndexTotals.NUMROWS
THEN 100.00
WHEN IndexTotals.NUMROWS = 0
THEN 100.00
ELSE CONVERT(DECIMAL(10,2),(CONVERT(DECIMAL(10,2),sysindexes.rowmodctr ) / CONVERT(DECIMAL(10,2),IndexTotals.NUMROWS ) * 100))
END AS [Percent Modified]
FROM sysobjects
INNER JOIN sysindexes
ON sysobjects.id = sysindexes.id
INNER JOIN (SELECT
sysobjects.id,
sysobjects.name,
MAX(sysindexes.rows) AS NUMROWS
FROM sysobjects
INNER JOIN sysindexes
ON sysobjects.id = sysindexes.id
WHERE sysobjects.xtype = 'U'
GROUP BY sysobjects.id,sysobjects.name
)IndexTotals
ON sysobjects.id = IndexTotals.id
WHERE sysindexes.rowmodctr > 0
AND sysobjects.xtype = 'U'
AND LEFT(sysindexes.name,7) <> '_WA_Sys'
ORDER BY sysindexes.rowmodctr DESC
November 10, 2008 at 8:11 am
David, I don't know how useful it is, since for me it doesn't seem to be accurate.
I've updated statistics on a table, for example, but this report still tells me it is more than 4% out of date, so my wild assumption wasn't exactly accurate.
I'll throw some more time at looking into it again.
Lowell