February 11, 2018 at 8:40 pm
I'm stumped and need the assistance of the wider community to understand what is going on.
The database in question has minimal fragmentation, but performance is slow. We go through the following process:
- Disable all non-clustered indexes
- Shrink the database - no truncate
- Rebuild all clustered indexes
- Rebuild all non-clustered indexes (re-enabling them)
- Shrink the database - truncate only
Performance improves significantly, but fragmentation levels are around the same.
I'm at a loss to explain what is going on here, can anyone help me understand why this is occurring?
pcd
PS Please don't go on about not shrinking the database - I'm fully aware of that etc. The suggestion came from someone on the team.
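For anyone wanting to reproduce the sequence, a rough T-SQL sketch of the steps above (database, file, index, and table names are placeholders; in practice you'd loop over the relevant tables):

```sql
-- 1. Disable a nonclustered index (repeat for all NC indexes)
ALTER INDEX IX_MyTable_NC1 ON dbo.MyTable DISABLE;

-- 2. Shrink the data file without releasing space back to the OS
DBCC SHRINKFILE (N'MyDW_Data', NOTRUNCATE);

-- 3 & 4. Rebuild the clustered index, then the nonclustered ones
--        (rebuilding a disabled index also re-enables it)
ALTER INDEX ALL ON dbo.MyTable REBUILD;

-- 5. Release only the trailing free space back to the OS
DBCC SHRINKFILE (N'MyDW_Data', TRUNCATEONLY);
```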
February 11, 2018 at 9:25 pm
pcd_au - Sunday, February 11, 2018 8:40 PM
I'm stumped and need the assistance of the wider community to understand what is going on.
- Disable all non clustered indexes
- Shrink the database - no truncate
- Rebuild all clustered indexes
- Rebuild all non clustered indexes (re-enabling them)
- Shrink the database - truncate only
Performance improves significantly, but fragmentation levels are around the same.
I'm at a loss to explain what is going on here, can anyone help me understand why this is occurring?
pcd
PS Please don't go on about not shrinking the database - I'm fully aware of that etc. The suggestion came from someone on the team.
Performance of what "improves significantly"?
It could be from things like the fact that disabling an index also disables any FK that may be based on those indexes. Of course, that won't affect SELECTs... just INSERTs, DELETEs, and some UPDATEs. Any FKs of that nature must be re-enabled using ALTER TABLE ... WITH CHECK CHECK CONSTRAINT to become active (and trusted) again.
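For example, re-enabling and re-trusting such an FK looks like this (table and constraint names are hypothetical):

```sql
-- Re-enable the FK and have SQL Server recheck existing rows so it's trusted again
ALTER TABLE dbo.OrderDetail WITH CHECK CHECK CONSTRAINT FK_OrderDetail_OrderHeader;

-- Or re-enable and recheck every constraint on the table at once
ALTER TABLE dbo.OrderDetail WITH CHECK CHECK CONSTRAINT ALL;
```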
The other thing is, how are you doing your normal index maintenance? If it's a mature database and it normally does most REORGANIZE instead of REBUILD, that could be it. I'm working on a project that may prove that but I'm not done yet so I can only say that as a suggestion rather than a demonstrable fact.
And, sorry... I have to say that routine shrinking of files or databases is a total waste of time. 😉 Doing it to recover space after a "rampant growth accident" might also be the wrong thing to do if you don't do it correctly and, it looks like you might be doing it incorrectly. If you pack everything down and then rebuild your indexes, you will have wasted space equal to about 120% of your largest-pagecount index, and that last shrink with TRUNCATE ONLY won't take that unwanted free space out, because the rebuilds build a new copy of each index at the logical end of the file - after a shrink there's no room at the beginning of the file. If the goal really is to recover disk space, then you need to create a new temporary filegroup, rebuild your largest index(es) onto it (moving each index using the DROP_EXISTING option), do your first shrink, do all your other rebuilds not once but twice to avoid the end-of-file problem, and then rebuild the largest index back onto the original filegroup and drop the temporary filegroup. Again, that's just a suggestion at this point... I'm not done experimenting with all this and it could be several months (if at all) before I have a demonstrable example.
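As a rough sketch only (again, not a demonstrated method - database, file path, index, and table names are placeholders), the temporary-filegroup move might look like:

```sql
-- Create a temporary filegroup and file to hold the biggest index
ALTER DATABASE MyDW ADD FILEGROUP TempFG;
ALTER DATABASE MyDW ADD FILE
    (NAME = N'TempFG_1', FILENAME = N'D:\Data\MyDW_TempFG_1.ndf', SIZE = 50GB)
    TO FILEGROUP TempFG;

-- Move the largest index to the temporary filegroup, keeping its definition
CREATE UNIQUE CLUSTERED INDEX PK_BigTable
    ON dbo.BigTable (BigTableID)
    WITH (DROP_EXISTING = ON)
    ON TempFG;

-- ...do the shrink and the other rebuilds here, then move it back...
CREATE UNIQUE CLUSTERED INDEX PK_BigTable
    ON dbo.BigTable (BigTableID)
    WITH (DROP_EXISTING = ON)
    ON [PRIMARY];

-- Drop the now-empty temporary file and filegroup
ALTER DATABASE MyDW REMOVE FILE TempFG_1;
ALTER DATABASE MyDW REMOVE FILEGROUP TempFG;
```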
--Jeff Moden
Change is inevitable... Change for the better is not.
February 11, 2018 at 9:46 pm
Jeff Moden - Sunday, February 11, 2018 9:25 PM
Performance of what "improves significantly"?
It could be from things like the fact that disabling an index also disables any FK that may be based on those indexes. Of course, that won't affect SELECTs.... just INSERTs, DELETEs, and some UPDATEs. Any FKs of that nature must be rebuilt using ALTER TABLE CHECK CONSTRAINT to become active again.
The other thing is, how are you doing your normal index maintenance? If it's a mature database and it normally does most REORGANIZE instead of REBUILD, that could be it. I'm working on a project that may prove that but I'm not done yet so I can only say that as a suggestion rather than a demonstrable fact.
Jeff
A reasonable ask. It's a data warehouse database. We have traced a series of queries (SELECT only) that we execute in sequence. Before we 'compact', the routines take 40+ minutes; afterwards they take 18 minutes. We flush all the caches etc. and perform both cold and warm tests. There are no RI constraints (not that there are any inserts or updates).
The reorganize only removes leaf-level fragmentation, leaving non-leaf-level fragmentation, whereas the rebuild removes both. That could be the answer. Is there a way of determining fragmentation at either a) non-leaf levels or b) a specific level? Time to brush up on sys.dm_db_index_physical_stats by the looks of things.
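Something along these lines should do it - the DETAILED mode of sys.dm_db_index_physical_stats returns one row per level of each index, with index_level 0 being the leaf:

```sql
SELECT  OBJECT_NAME(ips.object_id)        AS table_name,
        i.name                            AS index_name,
        ips.index_level,                  -- 0 = leaf; higher = intermediate/root
        ips.avg_fragmentation_in_percent,
        ips.page_count
FROM    sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'DETAILED') AS ips
JOIN    sys.indexes AS i
        ON  i.object_id = ips.object_id
        AND i.index_id  = ips.index_id
-- WHERE ips.index_level > 0   -- uncomment to see only the non-leaf levels
ORDER BY table_name, index_name, ips.index_level;
```

Note that DETAILED reads every page, so it can take a long while on tables of this size.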
pcd
February 11, 2018 at 10:15 pm
Jeff Moden - Sunday, February 11, 2018 9:25 PM
And, sorry... I have to say that routine shrinking of files or databases is a total waste of time. 😉 Doing it to recover space after a "rampant growth accident" might also be the wrong thing to do if you don't do it correctly and, it looks like you might be doing it incorrectly. If you pack everything down and then rebuild your indexes, you will have wasted space equal to about 120% of your largest-pagecount index, and that last shrink with TRUNCATE ONLY won't take that unwanted free space out, because the rebuilds build a new copy of each index at the logical end of the file - after a shrink there's no room at the beginning of the file. If the goal really is to recover disk space, then you need to create a new temporary filegroup, rebuild your largest index(es) onto it (moving each index using the DROP_EXISTING option), do your first shrink, do all your other rebuilds not once but twice to avoid the end-of-file problem, and then rebuild the largest index back onto the original filegroup and drop the temporary filegroup. Again, that's just a suggestion at this point... I'm not done experimenting with all this and it could be several months (if at all) before I have a demonstrable example.
Jeff
Hopefully your comments may help others; in our case it's not about recovering disk space, it's about removing as much fragmentation as possible. We don't truncate the data file initially, as we know we will reuse the space when we rebuild; the second shrink, where we just truncate, is purely a bonus as the database is only used for reporting. However, can you provide more detail on the 'end-of-file problem' you mentioned above? I'm presuming the 'twice' refers to the (shrink and rebuild), and not just the (rebuild)?
pcd
February 12, 2018 at 2:13 am
pcd_au - Sunday, February 11, 2018 8:40 PM
Performance improves significantly, but fragmentation levels are around the same. I'm at a loss to explain what is going on here, can anyone help me understand why this is occurring?
Dropping and creating an index results in updated statistics (with FULLSCAN). That's the most common reason why index rebuilds improve performance.
Gail Shaw
Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability
February 12, 2018 at 6:55 am
pcd_au - Sunday, February 11, 2018 10:15 PM
Jeff Moden - Sunday, February 11, 2018 9:25 PM
And, sorry... I have to say that routine shrinking of files or databases is a total waste of time. 😉 Doing it to recover space after a "rampant growth accident" might also be the wrong thing to do if you don't do it correctly and, it looks like you might be doing it incorrectly. If you pack everything down and then rebuild your indexes, you will have wasted space equal to about 120% of your largest-pagecount index, and that last shrink with TRUNCATE ONLY won't take that unwanted free space out, because the rebuilds build a new copy of each index at the logical end of the file - after a shrink there's no room at the beginning of the file. If the goal really is to recover disk space, then you need to create a new temporary filegroup, rebuild your largest index(es) onto it (moving each index using the DROP_EXISTING option), do your first shrink, do all your other rebuilds not once but twice to avoid the end-of-file problem, and then rebuild the largest index back onto the original filegroup and drop the temporary filegroup. Again, that's just a suggestion at this point... I'm not done experimenting with all this and it could be several months (if at all) before I have a demonstrable example.
Jeff
Hopefully your comments may help others; in our case it's not about recovering disk space, it's about removing as much fragmentation as possible. We don't truncate the data file initially, as we know we will reuse the space when we rebuild; the second shrink, where we just truncate, is purely a bonus as the database is only used for reporting. However, can you provide more detail on the 'end-of-file problem' you mentioned above? I'm presuming the 'twice' refers to the (shrink and rebuild), and not just the (rebuild)?
pcd
Based on what you're saying, shrinking is probably the wrong thing to do for this. You're just trying to defragment indexes, correct? Before I can make additional recommendations or answer the "end-of-file problem", could you tell me the total size of the database and the sizes of the four largest tables (including their indexes), please?
And Gail is correct. In many cases (especially on mature databases where REORGANIZE is frequently used), people forget to rebuild stats and rebuilding all of the indexes did a lot of that for you, although it doesn't take care of all statistics (particularly, column stats).
--Jeff Moden
Change is inevitable... Change for the better is not.
February 12, 2018 at 3:02 pm
GilaMonster - Monday, February 12, 2018 2:13 AM
Dropping and creating an index results in updated statistics (with FULLSCAN). That's the most common reason why index rebuilds improve performance.
We ran Ola Hallengren's code against the database - nothing was found, everything checked out OK.
I'll give the update statistics with full scan a go.
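i.e. something along these lines (the table name is one of ours; a full-scan update on every table will take a while at this size):

```sql
-- Full-scan update of all statistics on one table
UPDATE STATISTICS dbo.FACT_SITE_PATIENT_MONTHLY_SNAPSHOT WITH FULLSCAN;

-- Or a sampled update across the whole database
EXEC sys.sp_updatestats;
```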
pcd
February 12, 2018 at 5:54 pm
pcd_au - Monday, February 12, 2018 3:02 PM
GilaMonster - Monday, February 12, 2018 2:13 AM
Dropping and creating an index results in updated statistics (with FULLSCAN). That's the most common reason why index rebuilds improve performance.
We ran Ola Hallengren's code against the database - nothing was found, everything checked out OK.
I'll give the update statistics with full scan a go.
pcd
That's not quite what she's saying. She was explaining that disabling the indexes and then re-enabling them caused an index rebuild, which also causes a stats update with a full scan as a result. It IS a good thing to use Ola's stuff to do your stats updates on a regular basis, whether or not an index needs defragmenting.
--Jeff Moden
Change is inevitable... Change for the better is not.
February 12, 2018 at 7:51 pm
Jeff Moden - Monday, February 12, 2018 5:54 PM
That's not quite what she's saying. She was explaining that disabling the indexes and then re-enabling them caused an index rebuild, which also causes a stats update with a full scan as a result. It IS a good thing to use Ola's stuff to do your stats updates on a regular basis, whether or not an index needs defragmenting.
Jeff, my bad explanation.
Prior to starting down this path we used Ola's scripts, but they indicated that there was nothing for us to do.
I'm aware that rebuilding the indexes will update the stats, but missed the fact that I should have tried updating the stats as a first option. Sometimes you lose sight of the obvious.
I'm hoping that I can encourage the client to implement the index optimise code as part of their loading process.
pcd
PS Waiting to get you the info on the tables etc requested in a prior post.
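If the client does adopt Ola's IndexOptimize as part of the load, the call might look something like this (parameter values are illustrative only - tune the thresholds to the database):

```sql
EXECUTE dbo.IndexOptimize
    @Databases              = 'MyDW',
    @FragmentationLow       = NULL,
    @FragmentationMedium    = 'INDEX_REORGANIZE,INDEX_REBUILD_ONLINE,INDEX_REBUILD_OFFLINE',
    @FragmentationHigh      = 'INDEX_REBUILD_ONLINE,INDEX_REBUILD_OFFLINE',
    @UpdateStatistics       = 'ALL',    -- also update index AND column stats
    @OnlyModifiedStatistics = 'Y';      -- skip stats with no row modifications
```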
February 13, 2018 at 1:42 pm
Hi,
The most common factors for slow queries are:
1. Low space in tempdb. This is solved by increasing the space, or by placing tempdb on high-speed drives such as solid state disks.
2. Wrong cardinality estimates in execution plans. This is solved by updating/creating statistics, as Gail has mentioned before.
I would also look at which columns are indexed, in order to improve performance of some queries.
Hope this helps.
February 13, 2018 at 8:39 pm
mig28mx - Tuesday, February 13, 2018 1:42 PM
Hi,
The most common factors for slow queries are:
1. Low space in tempdb. This is solved by increasing the space, or by placing tempdb on high-speed drives such as solid state disks.
2. Wrong cardinality estimates in execution plans. This is solved by updating/creating statistics, as Gail has mentioned before. I would also look at which columns are indexed, in order to improve performance of some queries.
Hope this helps.
Heh... actually, the most common factor for slow queries is bad code that causes too much activity in TempDB or makes it impossible to use an index correctly even if the stats are totally up to date. 😉
--Jeff Moden
Change is inevitable... Change for the better is not.
February 13, 2018 at 8:41 pm
pcd_au - Monday, February 12, 2018 7:51 PM
Jeff Moden - Monday, February 12, 2018 5:54 PM
That's not quite what she's saying. She was explaining that disabling the indexes and then re-enabling them caused an index rebuild, which also causes a stats update with a full scan as a result. It IS a good thing to use Ola's stuff to do your stats updates on a regular basis, whether or not an index needs defragmenting.
Jeff, my bad explanation.
Prior to starting down this path we used Ola's scripts, but they indicated that there was nothing for us to do.
I'm aware that rebuilding the indexes will update the stats, but missed the fact that I should have tried updating the stats as a first option. Sometimes you lose sight of the obvious.
I'm hoping that I can encourage the client to implement the index optimise code as part of their loading process.
pcd
PS Waiting to get you the info on the tables etc requested in a prior post.
When you say that you used Ola's scripts, which scripts? Are you talking about index maintenance, stats maintenance, or both?
--Jeff Moden
Change is inevitable... Change for the better is not.
February 14, 2018 at 1:06 am
mig28mx - Tuesday, February 13, 2018 1:42 PM
2. Wrong cardinality estimates in execution plans. This is solved by updating/creating statistics, as Gail has mentioned before.
While incorrect row estimates are a common cause of poor performance, I find that very few cases (at least the ones I see) can be fixed by something as easy as updating stats.
Gail Shaw
Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability
February 19, 2018 at 9:44 am
Jeff Moden - Tuesday, February 13, 2018 8:39 PM
mig28mx - Tuesday, February 13, 2018 1:42 PM
Hi,
The most common factors for slow queries are:
1. Low space in tempdb. This is solved by increasing the space, or by placing tempdb on high-speed drives such as solid state disks.
2. Wrong cardinality estimates in execution plans. This is solved by updating/creating statistics, as Gail has mentioned before. I would also look at which columns are indexed, in order to improve performance of some queries.
Hope this helps.
Heh... actually, the most common factor for slow queries is bad code that causes too much activity in TempDB or makes it impossible to use an index correctly even if the stats are totally up to date. 😉
Hi,
Yes, you are correct! I'm assuming that the code is OK.
February 19, 2018 at 5:36 pm
Jeff Moden - Monday, February 12, 2018 6:55 AM
Based on what you're saying, shrinking is probably the wrong thing to do for this. You're just trying to defragment indexes, correct? Before I can make additional recommendations or answer the "end-of-file problem", could you tell me the total size of the database and the sizes of the four largest tables (including their indexes), please?
And Gail is correct. In many cases (especially on mature databases where REORGANIZE is frequently used), people forget to rebuild stats and rebuilding all of the indexes did a lot of that for you, although it doesn't take care of all statistics (particularly, column stats).
Jeff
Here's the top tables by dpages:
SchemaName  TableName                           data_dpages  index_dpages  index_count         rows
dbo         DIM_SITE_PATIENT_OBSERVATION           29201392       9411304            4   2464211124
dbo         vwi_grouped_prescription_status        24715255      40659521            5  10797143915
dbo         FACT_SITE_PATIENT_MONTHLY_SNAPSHOT     10270332      10988964            9   5551101846
dbo         VW_DIM_SITE_PATIENT_OBSERVATION         8612276       1207517            1    616052781
dbo         DIM_SITE_PATIENT_PRESCRIPTION           5060652        915088            3    733524642
dbo         vwi_grouped_condition_status            5015235       7916566            5   1732864690
dbo         FACT_PATHOLOGY_RECENCY_SNAPSHOT         4989792        569895            1    616052779
dbo         FACT_PRESCRIPTION_MONTHLY_SNAPSHOT      4119396       9379938            4   4732463256
dbo         VW_DIM_SITE_PATIENT_PATHOLOGY           3129046       2454810            2   1232105558
dbo         DIM_SITE_PATIENT_NUM                    2301654        868782            3    265823847
Apologies for the formatting, I'm sure there is a better way.
It is acknowledged (by some) that their data structures are sub-optimal, but currently they are not in a position to make the necessary changes.
Our next step is to restore a pre-compacted database and update the stats (detailed) on all tables, to see what difference that makes. I'll update you on the results next week.
pcd