June 21, 2004 at 11:34 am
There's a lot to be said for dropping and recreating indexes as part of shrinking databases, as well as for ensuring indexes are appropriately tuned.
That having been said, has anyone looked at developing a process to dynamically generate the SQL to drop all the non-clustered indexes, then the clustered index and/or primary key constraint, shrink the database, and rebuild the indexes in the proper sequence? It would seem to be a useful tool: perhaps a stored procedure that writes the appropriate DROP and ALTER commands to two files that can then be executed within a job (run the procedure, read and execute the DROP commands, shrink the database, then read and execute the ALTER commands to rebuild the indexes).
I've been poking around both INFORMATION_SCHEMA and the system tables to see if I could figure it out, but this seems a tad beyond my level of expertise.
Does this seem like a good idea? Has someone already tackled it and I just didn't find it in my searches? If not, is anyone interested in taking a shot, or in pointing me toward the information I'd need to do it myself?
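To frame the request, here is one hedged sketch of the generation step on SQL Server 2000 system tables, covering only user-created nonclustered indexes; the filtering is illustrative, and indexes that back PRIMARY KEY or UNIQUE constraints would need ALTER TABLE ... DROP CONSTRAINT instead of DROP INDEX:

```sql
-- Sketch: emit DROP INDEX statements for user-created nonclustered indexes
SELECT 'DROP INDEX ' + o.name + '.' + i.name
FROM sysindexes i
     INNER JOIN sysobjects o ON o.id = i.id
WHERE o.xtype = 'U'                  -- user tables only
  AND i.indid > 1                    -- nonclustered (clustered index is indid = 1)
  AND i.indid < 255                  -- skip the text/image entry
  AND INDEXPROPERTY(i.id, i.name, 'IsStatistics') = 0  -- skip auto-created stats
  AND OBJECTPROPERTY(i.id, 'IsMSShipped') = 0          -- skip system-shipped objects
```

The output could then be written to a file (e.g. via osql/bcp) and replayed in a job step, as the post describes.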
TIA
June 21, 2004 at 12:27 pm
Why make so much work for yourself? Use the DBCC DBREINDEX command and put it into a job.
See BOL (Books Online) for more information on DBCC DBREINDEX.
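For reference, a minimal example of the DBCC DBREINDEX approach; the table name is illustrative, and sp_MSforeachtable is an undocumented (but widely used) system procedure:

```sql
-- Rebuild all indexes on one table; '' = all indexes, 0 = keep existing fill factor
DBCC DBREINDEX ('dbo.Orders', '', 0)

-- Or rebuild indexes on every user table in the database
EXEC sp_MSforeachtable 'DBCC DBREINDEX (''?'', '''', 0)'
```

Either statement can be dropped straight into a scheduled job step.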
-SQLBill
June 21, 2004 at 1:32 pm
I agree with SQLBill.
An automated procedure would also have to deal with referential integrity (RI) constraint issues.
Someone here did write something like this, but he was not a good coder and an even worse DBA, and it was written for SQL Server 7, i.e. without the INFORMATION_SCHEMA views.
KlK
June 21, 2004 at 10:57 pm
I agree with the previous replies; it sounds like more trouble than it's worth, especially since you would probably have to drop referential constraints as well.
Nevertheless, let's not be censorial. Here's a query I use for index columns that might get you started, if you're not deterred.
declare @tablename varchar(50)
set @tablename = ....

select keyno,
       convert(varchar(25), co.name) as col_name,
       convert(varchar(25), ix.name) as ind_name,
       convert(varchar(25), tb.name) as tbl_name,
       ix.id as tbl_id, ix.indid as ind_id, ik.colid as col_id,
       ix.status
from sysobjects tb
     inner join sysindexes ix on ix.id = tb.id
     inner join sysindexkeys ik on ik.id = ix.id and ik.indid = ix.indid
     inner join syscolumns co on co.id = ik.id and co.colid = ik.colid
where tb.name like @tablename
  and ix.status not in (8388704, 10485856, 96)  /* exclude stats-only */
order by ik.id, ik.indid, ik.keyno

/* Other status bits:
   CASE ix.status & 16 WHEN 0 THEN '' ELSE 'Clustered' END
   CASE ix.status & 2  WHEN 0 THEN '' ELSE 'Unique' END */
I have written a procedure to drop all constraints and recreate them; indexes would be similar. It's pretty involved. Obviously, to recreate them you need to store all the details somewhere, somehow. And there's a danger you could 'drop' them twice, which would erase your stored definitions. I don't have the constraints query handy at the moment.
Possibly a good approach would be to use SQL-DMO to generate the CREATE scripts for all the indexes and constraints you want; generating the drops by query is the easy part.
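As a hedge against the "dropping twice" hazard mentioned above, one sketch is to guard the save step so an earlier snapshot is never overwritten; the table and column names here are made up for illustration:

```sql
-- Sketch: snapshot index metadata once; refuse to overwrite an earlier copy
IF NOT EXISTS (SELECT 1 FROM dbo.SavedIndexDefs)
    INSERT INTO dbo.SavedIndexDefs (tbl_name, ind_name, col_name, keyno, status)
    SELECT o.name, i.name, c.name, k.keyno, i.status
    FROM sysobjects o
         INNER JOIN sysindexes i   ON i.id = o.id
         INNER JOIN sysindexkeys k ON k.id = i.id AND k.indid = i.indid
         INNER JOIN syscolumns c   ON c.id = k.id AND c.colid = k.colid
    WHERE o.xtype = 'U'
      AND i.indid BETWEEN 1 AND 254   -- real indexes only
```

The rebuild step would then read from dbo.SavedIndexDefs, and the drop step could refuse to run if the snapshot table is empty.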
June 22, 2004 at 4:14 am
I appreciate the start. I understand the issues; I've used DBCC DBREINDEX before. I've been following this post: http://www.sqlservercentral.com/forums/shwmessage.aspx?forumid=92&messageid=120908
and the (current) last post in that thread argues that dropping and recreating the primary key gives the best results if your goal includes keeping all your indexes as efficient as possible.
Also, on a table with multiple complex indexes, I would think drop, shrink, recreate would be much faster than DBCC DBREINDEX. I have no proof, just a gut feeling.
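The drop / shrink / rebuild sequence being weighed here, in outline; the database name and target free-space percentage are placeholders:

```sql
-- 1. Drop nonclustered indexes first, then the clustered index / PK constraint
--    (dropping the clustered index last avoids rebuilding nonclustered
--    row pointers twice)

-- 2. Reclaim the freed pages
DBCC SHRINKDATABASE (MyDatabase, 10)   -- leave 10% free space

-- 3. Recreate the clustered index first, then the nonclustered indexes,
--    so the nonclustered entries are built against final cluster keys
```

The ordering in steps 1 and 3 is the "proper sequence" the original post refers to.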
I'll take a look at the code and see if I can come up with something. I'll post if I do.
Thanks.