August 8, 2011 at 1:46 am
Ninja's_RGR'us (8/5/2011)
Hey Rhox, I'd love to do a few tests with your particular case. Can you send me the table's definition (DDL to recreate it) along with the indexes so that I can build a model to scale? TIA.
USE [db_messagent]
GO
/****** Object: Table [dbo].[FLAGS] Script Date: 08/08/2011 09:41:37 ******/
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO
CREATE TABLE [dbo].[FLAGS](
[USERID] [int] NULL,
[LISTID] [int] NULL,
[CAMPAIGNID] [int] NULL,
[ACTIONID] [int] NULL,
[PROBEID] [int] NULL,
[TRIGGERID] [int] NULL,
[DT] [smalldatetime] NULL,
[VALUE] [int] NULL,
[CLISTID] [int] NULL,
[CITEMID] [int] NULL,
[127.0.0.1] [int] NULL,
[PSID] [int] NULL,
[DLISTID] [int] NULL,
[DITEMID] [int] NULL,
[VIRALID] [int] NULL
) ON [PRIMARY]
GO
/****** Object: Index [IDX_CAMPAIGN] Script Date: 08/08/2011 09:42:20 ******/
CREATE CLUSTERED INDEX [IDX_CAMPAIGN] ON [dbo].[FLAGS]
(
[CAMPAIGNID] ASC
)WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, SORT_IN_TEMPDB = OFF, IGNORE_DUP_KEY = OFF, DROP_EXISTING = OFF, ONLINE = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
GO
/****** Object: Index [IDX_CONTEXT] Script Date: 08/08/2011 09:42:31 ******/
CREATE NONCLUSTERED INDEX [IDX_CONTEXT] ON [dbo].[FLAGS]
(
[CAMPAIGNID] ASC,
[ACTIONID] ASC,
[PROBEID] ASC
)WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, SORT_IN_TEMPDB = OFF, IGNORE_DUP_KEY = OFF, DROP_EXISTING = OFF, ONLINE = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
GO
/****** Object: Index [IDX_DT] Script Date: 08/08/2011 09:42:39 ******/
CREATE NONCLUSTERED INDEX [IDX_DT] ON [dbo].[FLAGS]
(
[DT] ASC
)WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, SORT_IN_TEMPDB = OFF, IGNORE_DUP_KEY = OFF, DROP_EXISTING = OFF, ONLINE = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
GO
/****** Object: Index [IDX_LISTUSER] Script Date: 08/08/2011 09:42:46 ******/
CREATE NONCLUSTERED INDEX [IDX_LISTUSER] ON [dbo].[FLAGS]
(
[LISTID] ASC,
[USERID] ASC
)WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, SORT_IN_TEMPDB = OFF, IGNORE_DUP_KEY = OFF, DROP_EXISTING = OFF, ONLINE = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = OFF) ON [PRIMARY]
GO
August 8, 2011 at 2:08 am
Hey Ninja's_RGR'us,
I'm investigating your script.
As I understand it (I don't have those years of experience...):
- Put your code in a job (T-SQL job)
- Stop our transaction log backup job
- Start reorganize index
- Start your job (backup-script)
When done (script/job stops automatically)
- Start our transaction log backup job
I only have to change some code in your script... to match our system.
Your script checks if the reorganize is busy and, while it's busy, takes log backups each minute...?
Some questions:
- Will this take much time and use a lot of resources?
- Can I change our already running 'backup transaction logs' maintenance plan to back up every 1 or 3 minutes and let it run as long as the manually started index reorganize is busy (best to put this in a job that checks whether it's still running)?
- Index rebuilds are done as a single transaction, but are index reorganizes multiple transactions (I think so, based on your test)?
August 8, 2011 at 5:33 am
Changing the sequence...
As I understand it (I don't have those years of experience...):
- Put your code in a job (T-SQL job)
- Stop our transaction log backup job
- Start reorganize index
- Start your job (backup-script)
When done (script/job stops automatically). Yes, it stops automatically. I'd just change the "bida$$table" to the proc name of Michelle's script.
- Start our transaction log backup job
I'd change my script to simply call your current backup log script (it makes the code simpler and more robust). All you need to do is make sure it is NOT running, because you can't restart a running job; that would throw an error and make the script I posted fail.
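Something like this is what I mean — a minimal sketch only; the job name here is just an example, and msdb's sysjobactivity can keep rows from older Agent sessions, so test it on your side first:

-- Start the log backup job only if it isn't already running;
-- sp_start_job throws an error when the job is mid-run.
IF NOT EXISTS (
    SELECT 1
    FROM msdb.dbo.sysjobactivity AS ja
    JOIN msdb.dbo.sysjobs AS j ON j.job_id = ja.job_id
    WHERE j.name = N'Backup transaction logs'   -- assumed job name
      AND ja.start_execution_date IS NOT NULL
      AND ja.stop_execution_date IS NULL        -- started but not finished
)
    EXEC msdb.dbo.sp_start_job @job_name = N'Backup transaction logs';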
As a first run I think you'd be better off running the backup job manually in SSMS. That way during the first run you can always see what's going on and adjust when necessary. After 1 good run you could then automate the whole thing.
If the db is really critical, I'd consider maybe just watching that code run weekly or monthly until you feel safe putting it in a job.
If you choose to run Michelle's script instead of starting the reorg manually, you need to make sure it'll reorg and not rebuild. If you use the default settings, it will rebuild because of the 80+% fragmentation, the log backups won't be able to free any log space during that single rebuild transaction, and it'll fail just like last time.
Some questions:
- Will this take much time and use a lot of resources?
It'll be at least as "bad" as it was last time you ran this. The only difference is that now you'll also be backing up almost constantly. So that drive will see a big spike in activity, along with the tlog drive, which will now have more read activity than last time.
- Can I change our already running 'backup transaction logs' maintenance plan to back up every 1 or 3 minutes and let it run as long as the manually started index reorganize is busy (best to put this in a job that checks whether it's still running)?
I was going with the job idea because that's how my maintenance plans are set up here. I have 1 job per little thing I want done (I chose this so that I could refire any single step without having to run the rest of the sequence). If this is not your case, then I'd probably just edit my script to change the backup name / location and run with that. Also, the more I think of it, the better off you'll be running this in SSMS the first time, just to get acquainted and see everything that's going on. Fire the reorg manually and then run the backups manually with my edited script. All you need to watch for is making sure those log backups keep running.
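For reference, here's a minimal sketch of that edited loop. The path and file naming are made up, and it spots the reorg through sys.dm_exec_requests, which should report it as an ALTER INDEX command — verify that on your box first:

-- Back up the log about once a minute for as long as the reorganize runs.
DECLARE @file nvarchar(260);
WHILE EXISTS (SELECT 1 FROM sys.dm_exec_requests
              WHERE command LIKE 'ALTER INDEX%')   -- the running reorg
BEGIN
    -- timestamped file name so every backup in the chain is kept
    SET @file = N'T:\LogBackups\db_messagent_'
              + REPLACE(REPLACE(CONVERT(nvarchar(19), GETDATE(), 120), N':', N''), N' ', N'_')
              + N'.trn';
    BACKUP LOG [db_messagent] TO DISK = @file;
    WAITFOR DELAY '00:01:00';
END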
- Index rebuilds are done as a single transaction, but are Index reorganize multiple transactions (I think so based on your test)?
Exactly, and also based on Gail's very deep and accurate knowledge of internals. I was never forced into this situation because my DBs are usually under 100 GB, so I had to run some tests to confirm my understanding of the process.
P.S. If something goes wrong and people start complaining you can just stop the reorg window. The defrag will stop but you won't lose the work that's been done so far.
August 8, 2011 at 6:04 am
Ok thanks for the extra info.
Michelle's Index Defrag script is working here, but I use the "@maxPageCount = 3000000" parameter, so the big indexes are excluded from rebuilding/reorganizing:
EXEC master.dbo.dba_indexDefrag_sp
@timeLimit = 120,
@maxPageCount = 3000000
Now I only have to 'plan' this reorganize of the 1 big index 🙂
I will see how it impacts the system and, if needed, I'll create a job to schedule this action automatically.
August 8, 2011 at 6:17 am
As I said, I really think you're better off doing that reorg while on site. If people start complaining you can always stop within 5 secs of the call and try again later.
Even if you can only spare 30 minutes a day during lunch, you'd eventually be done after 1-2 weeks.
Once you figure out the real impact on your system (perf, drives) and the frequency at which this needs to be done, then you can move to a fully automated process, not before.
August 8, 2011 at 6:34 am
Ninja's_RGR'us (8/8/2011)
As I said, I really think you're better off doing that reorg while on site. If people start complaining you can always stop within 5 secs of the call and try again later. Even if you can only spare 30 minutes a day during lunch, you'd eventually be done after 1-2 weeks.
Once you figure out the real impact on your system (perf, drives) and the frequency at which this needs to be done, then you can move to a fully automated process, not before.
By the "plan" I meant the search for a time when I'm at the office and the server can take some extra delay... 🙂
August 8, 2011 at 6:47 am
I actually ran this on our prod server here and I got no complaints. Now keep in mind that our server is really "under" utilized. It's rare to see anything used above 10% for anything more than 1-2 seconds unless someone's running one of the huge dashboards.
Since you already got permission to run this and create an outage while the rebuild ran, I don't really see the same manager complaining about the server being online 100% of the time, even if a little slower than usual, with a way to instantly get 100% power back within seconds... with you on site, monitoring the server to measure the real impact.
I think this is a relatively easy sale compared to a full table outage for 5-6 hours + overtime to oversee.
August 8, 2011 at 6:53 am
Yes you're right 🙂
I'm busy changing the script to conform to the 'standard' names here... (and changing the extension from .bak to .trn to avoid misunderstandings... 🙂
August 8, 2011 at 7:06 am
Rhox (8/8/2011)
Yes you're right 🙂 I'm busy changing the script to conform to the 'standard' names here... (and changing the extension from .bak to .trn to avoid misunderstandings... 🙂
Of course, tweak away. That's the best way to really own and understand the script. Which I'm really hoping happens before you use it for real.
P.S.
EXEC master.dbo.dba_indexDefrag_sp
@timeLimit = 120,
@maxPageCount = 3000000
The time limit is checked BEFORE running the next reorg / defrag. So it's possible to start working on a new index 119.99 minutes in and have that index take 4 hours to run.
Also, 3M pages is ±24 GB (3,000,000 pages × 8 KB). Since Gail has provided a 200% estimate for the log size, that means you'd need 48 GB of free space for the log to grow into. Depending on the starting size of the log it might be enough, but I think this is really cutting it close. Moreover, 30-40 GB of log file growth is going to take a really long time to run. The whole new log file needs to be zero-initialized (no way to short-circuit that). That takes a heck of a long time, and everything-ish stops while it happens.
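To put numbers on that from the system itself, something along these lines run inside db_messagent should do — a sketch only (128 pages = 1 MB; the 200% figure is Gail's estimate from earlier in the thread):

-- Actual size of each index on FLAGS plus the ~200% log space estimate.
SELECT i.name AS index_name,
       ps.page_count,
       ps.page_count / 128.0 / 1024.0 AS index_gb,        -- 8 KB pages
       2 * ps.page_count / 128.0 / 1024.0 AS est_log_gb   -- 200% estimate
FROM sys.dm_db_index_physical_stats(DB_ID(), OBJECT_ID(N'dbo.FLAGS'), NULL, NULL, 'LIMITED') AS ps
JOIN sys.indexes AS i
  ON i.object_id = ps.object_id AND i.index_id = ps.index_id;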
August 8, 2011 at 7:20 am
I know that the time limit is not the actual time limit, but after 2 hours (or more) no new reorganize/rebuild will take place...
It's difficult to maintain a busy system, and I'm still testing Michelle's script (following it up on production to optimize it as well as possible).
Before I came here, no indexes were defragmented...
They are already happy with the status now: only this 1 index left to cover! 🙂
I will correct the max page count to match our system; thanks for that remark!
August 8, 2011 at 7:29 am
Actually I wouldn't only correct it but I'd probably build a second job (once you've proven the code works with minimal impact).
The second job would use a 1.5-2M page minimum (or whatever makes sense).
It would only do reorgs (maybe put the rebuild threshold at 101%, if that's possible); see the sketch below.
That job would have a first step to start the continuous log backups. You can delay the start of the loop with WAITFOR so that the script has time to run and make a list of indexes to work on.
I have a similar setup here. I do a weekly run of the smaller indexes (just below the 1000 pages mark) and rebuild those as well.
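Roughly like this — assuming your version of Michelle's proc has @minPageCount and @rebuildThreshold (double-check its parameter list before relying on them):

-- Step 1 of the job starts the log backup job (sp_start_job returns right
-- away), whose loop begins with a WAITFOR DELAY as described above.
-- Step 2 is the reorg-only pass over the big indexes:
EXEC master.dbo.dba_indexDefrag_sp
     @timeLimit        = 120,
     @minPageCount     = 2000000,   -- big indexes only (1.5-2M+ pages)
     @rebuildThreshold = 101;       -- reorganize only, never rebuild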
August 8, 2011 at 7:41 am
That's what I was thinking too: cover all indexes (1 job for the smallest indexes and 1 for the bigger ones).
But first I'll try this action 🙂
I will reply to let you know how it worked.
August 9, 2011 at 6:42 pm
Did you try the options below?
1=>Change the database recovery model to bulk-logged.
2=>Run the query below to find the highly fragmented indexes:
select
object_name(itable.object_id) as tablename,
itable.name as IndexName,
indexfrag.avg_fragmentation_in_percent
from sys.dm_db_index_physical_stats(db_id(), null, null, null, 'DETAILED') indexfrag
inner join sys.indexes itable on itable.object_id = indexfrag.object_id
and itable.index_id = indexfrag.index_id
-- make sure to set this where clause to the percentage below which you want to exclude results.
where indexfrag.avg_fragmentation_in_percent > 30
order by avg_fragmentation_in_percent desc, tablename
3=>Only rebuild the indexes that are highly fragmented.
4=>You can also find the bigger tables or indexes and rebuild those indexes in chunks.
OR
1=>Do not change the database recovery model
2=>Rebuild the indexes in chunks, and while a rebuild is in progress, watch the size of the log growth (a simple check for this is sketched below).
3=>If necessary, you can truncate the log before starting to rebuild the next chunk of indexes.
This process is time-consuming, but when you do not have enough free space, you do not have any better choice left.
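For step 2, a simple built-in way to watch the log while the rebuild runs:

-- Reports log file size and percent used for every database on the instance.
DBCC SQLPERF(LOGSPACE);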
August 10, 2011 at 5:55 am
@gorachandindia
How can I put this nicely?
Did you read ANY of the messages before posting your suggestions?
The OP already knows which specific index is failing, which ones are fragmented, how to do the reorg in chunks (you can't rebuild in chunks), and how to run a "continuous" log backup that auto-stops when the reorg is done.
Gail also pointed out that changing the recovery model didn't provide any benefit and had 1 drawback.
Thanks for the support. I love people trying to help out, but I prefer when they provide real value for the question at hand!
August 10, 2011 at 7:02 pm
You know the exact fragmented index? OK, great.
Right now I can clarify... what I mean by rebuilding indexes in chunks is one index at a time. 🙂
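Something like this, then — index names taken from the DDL earlier in the thread, backup path just an example — with a log backup between indexes so the log space can be reused:

ALTER INDEX [IDX_DT] ON [dbo].[FLAGS] REBUILD;
BACKUP LOG [db_messagent] TO DISK = N'T:\LogBackups\db_messagent_chunk1.trn';
ALTER INDEX [IDX_LISTUSER] ON [dbo].[FLAGS] REBUILD;
BACKUP LOG [db_messagent] TO DISK = N'T:\LogBackups\db_messagent_chunk2.trn';
-- ...and so on, one index per "chunk"

As noted above though, each rebuild is still a single transaction, so this only spreads the load across indexes; it doesn't shrink the log space needed for the one big clustered index.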