May 21, 2008 at 11:19 am
Could anyone tell me the difference between shrinking and truncating, and between defragging and shrinking?
I read BOL, but I'm not able to understand what exactly happens inside.
May 21, 2008 at 12:23 pm
Well, if I'm clear on what you're asking, it's about log files, right?
Truncating is removing all inactive portions of the log. So your log is full of stuff it doesn't need anymore, and you get rid of it. But the physical log file sitting out on the drive is still the same size; it's just empty now. So you shrink it to fit the current size of the data. Get it?
As for defragging, that just reorgs the pages to put them back in order so they can be read easily.
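In T-SQL terms, that sequence looks roughly like this (database name, logical file name and backup path are just placeholders, and this assumes the FULL recovery model):

-- Check how full each log actually is (percentage used per database).
DBCC SQLPERF (LOGSPACE);

-- In FULL recovery, backing up the log marks the inactive portion as reusable,
-- i.e. it gets truncated logically.
BACKUP LOG MyDatabase TO DISK = N'D:\Backups\MyDatabase_log.trn';

-- The physical .ldf is still the same size on disk; shrinking is a separate step.
USE MyDatabase;
DBCC SHRINKFILE (N'MyDatabase_log', 512);  -- target size in MB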
Watch my free SQL Server Tutorials at:
http://MidnightDBA.com
Blog Author of:
DBA Rant – http://www.MidnightDBA.com/DBARant
May 22, 2008 at 1:26 pm
As I understand it, you were talking about the SQL logs.
Truncating the log marks the inactive portion (transactions that are already committed and no longer needed for recovery) as reusable; it does not shrink the physical file, and it does not touch active, uncommitted transactions.
Shrinking is more like compacting. If you think of blocks and sectors on an OS drive: used pages are kept one after another at the front of the file, and the unused space at the end is released back to the operating system.
You can shrink the data file and the log file as well.
Defrag means rearranging the pages so they are easier to read in order.
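For example (database and logical file names are made up), shrinking a data file and a log file both go through DBCC SHRINKFILE:

USE MyDatabase;

-- Shrink the data file to roughly 1 GB.
DBCC SHRINKFILE (N'MyDatabase_Data', 1024);

-- Shrink the log file to roughly 256 MB.
DBCC SHRINKFILE (N'MyDatabase_Log', 256);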
MCP, MCTS (GDBA/EDA)
May 22, 2008 at 2:14 pm
Anyway, when you truncate the log file, the operation won't be logged, because you can't log operations against the log file itself.
And you can't drop the log file or delete individual records from it.
It's not like truncating a table.
..>>..
MobashA
May 23, 2008 at 7:06 am
I won't define the topics as that has already been taken care of but I will add my 2 cents, for what it's worth.
1) Truncate - if you truncate the log, do a full database backup immediately afterwards, as you've broken your backup chain (there's a quick sketch after this list).
2) Shrinking - unless you had a rare event which caused your log file to grow substantially, don't do it. If the log grew due to normal daily processing, it will likely grow again after the shrink, resulting in a performance hit and potential physical fragmentation.
3) Defrag is fine, but you should probably look for a maintenance window to reindex. I do mine weekly. If you can only reindex monthly, maybe defrag at other times to keep things running as smoothly as possible.
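For point 1, this is the sort of thing I mean (database name and backup path are placeholders):

-- After a log truncation, take a full backup right away so that
-- subsequent log backups have a valid base to restore from.
BACKUP DATABASE MyDatabase
TO DISK = N'D:\Backups\MyDatabase_full.bak'
WITH INIT, CHECKSUM;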
-- You can't be late until you show up.
May 23, 2008 at 8:30 am
Good points, tosscrosby.
Thanks for the explanation; I forgot to add those points in my previous reply.
Tip of the Day...
"Pages will not migrate between files during an index reorganization".
MCP, MCTS (GDBA/EDA)
May 23, 2008 at 9:22 am
Thanks to all of you for your good answers.
Now I'm able to understand what exactly happens.
But I have a small doubt.
An index rebuild means it drops the existing index and creates a new one.
In this situation, what exactly happens, memory- and page-wise, and what is the actual process?
May 23, 2008 at 9:29 am
Typically, rebuilds should be run when you can have exclusive use of the database. They will impact server performance, they write to the transaction log, they use tempdb and, I think, you should have something like 50% of your database's size free in disk space for that database (I may be wrong about this number; it may be more!). Defrag can be run with users in the system, but again, there will be a performance hit. For index rebuilds, there are plenty of scripts on this site that will let you do a selective rebuild, i.e. if an index is x% fragmented, rebuild it; look around to see if this fits your needs. That allows for a quicker, possibly smaller, rebuild process.
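A rough sketch of that kind of selective approach (the 10%/30% thresholds and page-count filter are common rules of thumb, not gospel; run it in the database you want to maintain and adjust for your environment):

DECLARE @sql NVARCHAR(MAX);
SET @sql = N'';

-- Rebuild heavily fragmented indexes, reorganize mildly fragmented ones.
SELECT @sql = @sql +
       N'ALTER INDEX ' + QUOTENAME(i.name)
     + N' ON ' + QUOTENAME(s.name) + N'.' + QUOTENAME(o.name)
     + CASE WHEN ps.avg_fragmentation_in_percent >= 30
            THEN N' REBUILD;'
            ELSE N' REORGANIZE;'
       END + CHAR(13) + CHAR(10)
FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED') AS ps
JOIN sys.indexes AS i
     ON i.object_id = ps.object_id AND i.index_id = ps.index_id
JOIN sys.objects AS o ON o.object_id = ps.object_id
JOIN sys.schemas AS s ON s.schema_id = o.schema_id
WHERE ps.avg_fragmentation_in_percent >= 10  -- ignore lightly fragmented indexes
  AND ps.index_id > 0                        -- skip heaps
  AND ps.page_count > 100;                   -- skip tiny indexes

PRINT @sql;               -- review what it wants to do first
EXEC sp_executesql @sql;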
-- You can't be late until you show up.
May 23, 2008 at 9:38 am
Fragmentation exists when indexes have pages in which the logical ordering, based on the key value, does not match the physical ordering inside the data file. All leaf pages of an index contain pointers to the next and the previous pages in the index. This forms a doubly linked list of all index/data pages. Ideally, the physical order of the pages in the data file should match the logical ordering. Overall disk throughput is increased significantly when the physical ordering matches the logical ordering of the data. This leads to much better performance for certain types of queries. When the physical ordering does not match the logical ordering, disk throughput can become less efficient, because the disk head must move back and forth to gather the index pages instead of scanning forward in one direction. Fragmentation affects I/O performance, but has no effect on performance of queries whose data pages reside in the SQL Server data cache.
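To see that logical-vs-physical mismatch for a single table (the table name is invented), you can look at the logical fragmentation figures:

SELECT i.name AS index_name,
       ps.avg_fragmentation_in_percent,   -- % of out-of-order pages at the leaf level
       ps.avg_page_space_used_in_percent, -- how full each page is
       ps.page_count
FROM sys.dm_db_index_physical_stats(DB_ID(), OBJECT_ID(N'dbo.MyTable'),
                                    NULL, NULL, 'DETAILED') AS ps
JOIN sys.indexes AS i
     ON i.object_id = ps.object_id AND i.index_id = ps.index_id;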
MCP, MCTS (GDBA/EDA)