June 5, 2017 at 1:37 pm
No, I am rebuilding indexes before deleting data.
After deleting, we are inserting data through SSIS.
June 5, 2017 at 2:55 pm
krishnabudampati - Monday, June 5, 2017 1:37 PM
No, I am rebuilding indexes before deleting data.
After deleting, we are inserting data through SSIS.
So, you're rebuilding indexes, deleting data, then inserting new rows. Your deletes and inserts change the distribution of the data.
Try deleting, then inserting, then rebuilding. There's no way to know from what's been posted if the rebuild is completely negated, but the statistics on them certainly aren't current.
Better yet, if the indexes don't need to be rebuilt, don't rebuild them. If you are seeing a performance improvement on a small table after rebuilding the indexes, it may very well be because the index rebuild also rebuilds statistics. Try skipping the index rebuild and instead just update your statistics. Then compare your results to what you're seeing now.
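For illustration only, a minimal T-SQL sketch of the two options; dbo.StagingTable is a placeholder name for whichever table you're loading:

-- Option 1: skip the rebuild and just refresh statistics after the load
UPDATE STATISTICS dbo.StagingTable WITH FULLSCAN;

-- Option 2: if the rebuild really is needed, run it after the delete and insert,
-- not before (a rebuild also updates that index's statistics with a full scan)
ALTER INDEX ALL ON dbo.StagingTable REBUILD;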
June 5, 2017 at 2:57 pm
krishnabudampati - Monday, June 5, 2017 1:37 PM
No, I am rebuilding indexes before deleting data.
After deleting, we are inserting data through SSIS.
You may need a statistics update after the inserts. Also, inserts cause fragmentation due to page splits. I'm questioning the need for the defrag after the deletes. Why do it there? What evidence did you base that decision on?
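If it helps to gather that evidence, here's a rough query (table name is a placeholder; sys.dm_db_stats_properties needs SQL Server 2008 R2 SP2 / 2012 SP1 or later) showing when each statistic was last updated and how many rows have changed since:

SELECT  s.name AS stats_name,
        sp.last_updated,
        sp.rows,
        sp.modification_counter   -- rows changed since the last stats update
FROM    sys.stats AS s
CROSS APPLY sys.dm_db_stats_properties(s.object_id, s.stats_id) AS sp
WHERE   s.object_id = OBJECT_ID(N'dbo.StagingTable');   -- placeholder name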
"The credit belongs to the man who is actually in the arena, whose face is marred by dust and sweat and blood"
- Theodore Roosevelt
Author of:
SQL Server Execution Plans
SQL Server Query Performance Tuning
June 5, 2017 at 9:55 pm
I have been observing fragmentation when I check the index properties.
Based on that, I am rebuilding the indexes.
June 5, 2017 at 10:33 pm
Instead of deleting 90% of unwanted data, keep 100% of the wanted data.
Your data refresh process should look like this:
1. Create a "clone" table with a similar name, say [copy_OriginalTablename].
This step must include creation of the clustered index.
2. Insert the data into the "copy" table - the 10% of old data plus the new data from the feed.
This is best done in a single INSERT statement, with an ORDER BY clause that follows the clustered index definition.
3. Create all nonclustered indexes on the "copy" table.
4. Drop the original table.
5. Rename the copy table to the original table name.
I assume the table does not have FKs; those "refreshable" tables normally don't.
If it does, include them in step 1.
This way the indexes will be defragmented every time.
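Purely as an illustrative sketch of those steps - all object, column, and index names below are placeholders, not your actual schema:

-- 1. Clone table, including the clustered index
CREATE TABLE dbo.copy_OriginalTable
(
    Id       int          NOT NULL,
    LoadDate datetime     NOT NULL,
    Payload  varchar(100) NULL
);
CREATE CLUSTERED INDEX CIX_copy_OriginalTable ON dbo.copy_OriginalTable (Id);

-- 2. Single INSERT of the wanted 10% (plus the new feed rows), ordered by the clustered key
INSERT INTO dbo.copy_OriginalTable (Id, LoadDate, Payload)
SELECT Id, LoadDate, Payload
FROM   dbo.OriginalTable
WHERE  LoadDate >= DATEADD(MONTH, -1, GETDATE())   -- whatever rule defines "wanted"
ORDER BY Id;

-- 3. Nonclustered indexes on the copy
CREATE NONCLUSTERED INDEX IX_copy_OriginalTable_LoadDate ON dbo.copy_OriginalTable (LoadDate);

-- 4. and 5. Drop the original, rename the copy
DROP TABLE dbo.OriginalTable;
EXEC sp_rename 'dbo.copy_OriginalTable', 'OriginalTable';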
_____________
Code for TallyGenerator
June 6, 2017 at 4:29 am
krishnabudampati - Monday, June 5, 2017 9:55 PM
I have been observing fragmentation when I check the index properties. Based on that, I am rebuilding the indexes.
Fair enough, but why do you attribute your performance issues to fragmentation? Generally, though not in all cases, fragmentation doesn't negatively affect performance much. It's usually out-of-date statistics and/or problematic code. I'd suggest gathering more metrics so you can focus your efforts on where the real pain points lie.
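One way to start gathering those metrics; this sketch only inspects fragmentation and page counts in the current database (LIMITED mode keeps the scan cheap):

SELECT  OBJECT_NAME(ips.object_id)        AS table_name,
        i.name                            AS index_name,
        ips.index_type_desc,
        ips.page_count,
        ips.avg_fragmentation_in_percent
FROM    sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED') AS ips
JOIN    sys.indexes AS i
          ON  i.object_id = ips.object_id
          AND i.index_id  = ips.index_id
WHERE   ips.page_count >= 1000            -- small indexes are rarely worth touching
ORDER BY ips.avg_fragmentation_in_percent DESC;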
"The credit belongs to the man who is actually in the arena, whose face is marred by dust and sweat and blood"
- Theodore Roosevelt
Author of:
SQL Server Execution Plans
SQL Server Query Performance Tuning
June 6, 2017 at 5:10 am
VastSQL - Sunday, June 4, 2017 5:15 AM
GilaMonster - Sunday, June 4, 2017 4:48 AM
VastSQL - Sunday, June 4, 2017 4:43 AM
Rebuild and reorg ignore tables with fewer than 1000 pages; that might be the case with you.
They don't.
Most custom scripts exclude indexes under 1000 pages, because there's little use in rebuilding them.
Thanks Gail, good to know.
But I have tried rebuilding indexes with a small page count and the fragmentation value didn't change.
See an earlier post by Grant Fritchey on this same thread for some answers.
For fast, accurate and documented assistance in answering your questions, please read this article.
Understanding and using APPLY, (I) and (II) Paul White
Hidden RBAR: Triangular Joins / The "Numbers" or "Tally" Table: What it is and how it replaces a loop Jeff Moden
June 6, 2017 at 5:31 am
Okay,
I have a table without any indexes. If I am deleting and inserting data continuously, some space will be fragmented.
How do I recover the space?
June 6, 2017 at 5:41 am
krishna83 - Tuesday, June 6, 2017 5:31 AM
Okay, I have a table without any indexes. If I am deleting and inserting data continuously, some space will be fragmented.
How do I recover the space?
Is the problem space or performance? For space, yeah, defragging indexes will help. For performance, it's rare that it helps much unless you're getting lots of scans. If you're getting lots of scans, that's the problem, not the fragmentation.
"The credit belongs to the man who is actually in the arena, whose face is marred by dust and sweat and blood"
- Theodore Roosevelt
Author of:
SQL Server Execution Plans
SQL Server Query Performance Tuning
June 6, 2017 at 5:45 am
krishna83 - Tuesday, June 6, 2017 5:31 AM
Okay, I have a table without any indexes. If I am deleting and inserting data continuously, some space will be fragmented.
How do I recover the space?
Any reason the table doesn't have any indexes? A well-formed table will usually have at least a clustered index. There are exceptions where you're better off without them, but they're rare. So my answer to your question is this: create a clustered index on the table.
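A minimal sketch, assuming a hypothetical table dbo.MyTable with an Id column that suits a clustered key:

-- building the clustered index rewrites the heap, which also reclaims the space
-- left behind by the deletes
CREATE CLUSTERED INDEX CIX_MyTable ON dbo.MyTable (Id);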
John
June 6, 2017 at 6:27 am
krishna83 - Tuesday, June 6, 2017 5:31 AM
Okay, I have a table without any indexes. If I am deleting and inserting data continuously, some space will be fragmented.
How do I recover the space?
This is a well-documented issue with two common approaches - either copy out the table to a clone and then rename them, or create and then drop a clustered index.
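A rough sketch of the second approach (object names are placeholders); the first approach is essentially the clone-and-rename process described earlier in this thread:

-- create a temporary clustered index: this rewrites the heap into a compact b-tree
CREATE CLUSTERED INDEX CIX_Reclaim_MyTable ON dbo.MyTable (Id);
-- drop it again: the table reverts to a heap, now without the wasted space
DROP INDEX CIX_Reclaim_MyTable ON dbo.MyTable;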
For fast, accurate and documented assistance in answering your questions, please read this article.
Understanding and using APPLY, (I) and (II) Paul White
Hidden RBAR: Triangular Joins / The "Numbers" or "Tally" Table: What it is and how it replaces a loop Jeff Moden