September 26, 2008 at 5:33 pm
Are there any disadvantages to recreating a database from scratch on a daily basis? In other words, I perform the following steps:
1. drop all tables
2. recreate all tables
3. populate all tables with data
My database only needs to be active during the business day and stay synchronized with the previous day's data. Since I don't want to spend a lot of man-hours writing a synchronization script, I perform the steps above to maintain a fresh copy.
Do I need to worry about index fragmentation? database fragmentation? hard disk fragmentation? If any of the above is true, what can I do to fix it?
September 26, 2008 at 6:00 pm
jlp3630 (9/26/2008)
Are there any disadvantages to recreating a database from scratch on a daily basis? In other words, I perform the following steps:
1. drop all tables
2. recreate all tables
3. populate all tables with data
My database only needs to be active during the business day and stay synchronized with the previous day's data. Since I don't want to spend a lot of man-hours writing a synchronization script, I perform the steps above to maintain a fresh copy.
Do I need to worry about index fragmentation? database fragmentation? hard disk fragmentation? If any of the above is true, what can I do to fix it?
Why not use backup and restore?
----------------------------------------------------------------------------------
Your lack of planning does not constitute an emergency on my part... unless you're my manager... or a director and above... or a really loud-spoken end-user... All right - what was my emergency again?
September 26, 2008 at 7:43 pm
Matt Miller (9/26/2008)
Why not use backup and restore?
The source database is a linked server residing on another server. Additionally, the "linked" server is not a MS SQL Server.
September 26, 2008 at 8:01 pm
In that case you don't really have a choice but to do it the way you're doing it (or something similar).
For the db fragmentation: check the current DB size on the linked server and ask the DBA there what the expected DB size is in the year 2015 (or as far into the future as he can estimate).
Then do the transfer with the current data; that will give you the current size of the DB on SQL Server. Using that, you can estimate how big the DB will be far in the future. Set the DB to that size up front and forget about it, after installing a job that checks for available free space in the DB and warns you when it's too low.
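To make the pre-sizing idea concrete, here is a minimal T-SQL sketch; the database name DailyCopy, the logical file names, and the sizes are all hypothetical placeholders, not from the thread:

```sql
-- Pre-size the data and log files to the estimated future size so the
-- daily loads never trigger auto-grow (names and sizes are placeholders;
-- list your logical file names with: SELECT name, size FROM sys.database_files).
ALTER DATABASE DailyCopy MODIFY FILE (NAME = DailyCopy_data, SIZE = 20GB);
ALTER DATABASE DailyCopy MODIFY FILE (NAME = DailyCopy_log,  SIZE = 4GB);

-- A rough free-space query the warning job could run (size is in 8 KB
-- pages, so dividing by 128 converts to MB):
SELECT name,
       size / 128 AS size_mb,
       size / 128 - CAST(FILEPROPERTY(name, 'SpaceUsed') AS int) / 128 AS free_mb
FROM sys.database_files;
```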
For the index fragmentation, I'm not too sure what to expect; I wouldn't be surprised to find some. The best thing to do at this point is to run a data load, then check for fragmentation right after. If you find any, defrag the indexes and put that step in your daily script.
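On SQL Server 2005 and later, the post-load fragmentation check can be sketched with the sys.dm_db_index_physical_stats DMV; the table name and the 10% threshold below are illustrative assumptions:

```sql
-- Check index fragmentation right after the daily load.
SELECT OBJECT_NAME(ips.object_id)        AS table_name,
       i.name                            AS index_name,
       ips.avg_fragmentation_in_percent
FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED') AS ips
JOIN sys.indexes AS i
  ON i.object_id = ips.object_id
 AND i.index_id  = ips.index_id
WHERE ips.avg_fragmentation_in_percent > 10   -- illustrative threshold
ORDER BY ips.avg_fragmentation_in_percent DESC;

-- If a table comes back heavily fragmented (dbo.SomeTable is a placeholder):
ALTER INDEX ALL ON dbo.SomeTable REBUILD;
```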
September 26, 2008 at 8:12 pm
Ninja's_RGR'us (9/26/2008)
In that case you don't really have a choice but to do it the way you're doing it (or something similar).
For the db fragmentation: check the current DB size on the linked server and ask the DBA there what the expected DB size is in the year 2015 (or as far into the future as he can estimate).
Then do the transfer with the current data; that will give you the current size of the DB on SQL Server. Using that, you can estimate how big the DB will be far in the future. Set the DB to that size up front and forget about it, after installing a job that checks for available free space in the DB and warns you when it's too low.
For the index fragmentation, I'm not too sure what to expect; I wouldn't be surprised to find some. The best thing to do at this point is to run a data load, then check for fragmentation right after. If you find any, defrag the indexes and put that step in your daily script.
Do you think I need to run a disk defragmenter (i.e., Programs -> Accessories -> System Tools -> Disk Defragmenter)? Or is that really unnecessary?
September 26, 2008 at 8:19 pm
If you size the objects (data and log files) big, then no, you likely wouldn't need to run a defragmenter.
If, on the other hand, you don't size the data files big enough, they're going to auto-grow, which means some fragmentation. If you changed the autogrowth setting from the default it shouldn't be so bad, but the default "grow in 1MB increments" will KILL a hard drive with fragmentation.
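If the files do have to auto-grow, the increment can at least be made larger than the 1 MB default. A sketch; the database name, logical file name, and the 512 MB step are assumptions:

```sql
-- Replace the default tiny growth increment with a larger fixed step so
-- any auto-grow event produces fewer, bigger fragments on disk.
ALTER DATABASE DailyCopy MODIFY FILE (NAME = DailyCopy_data, FILEGROWTH = 512MB);
```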
----------------------------------------------------------------------------------
Your lack of planning does not constitute an emergency on my part... unless you're my manager... or a director and above... or a really loud-spoken end-user... All right - what was my emergency again?
September 26, 2008 at 8:31 pm
One way to be sure about the HD fragmentation would be to drop the DB altogether, run defrag, then recreate the DB files at their correct sizes. If there is plenty of space on the HD, there should be little to no fragmentation at that point... you can always go the extra mile and detach the DB, re-run defrag, and re-attach, but that may be a bit of overkill at that point!
September 26, 2008 at 8:59 pm
If you're inserting data in a sequential manner according to your index, you will probably not need to reindex. If the data is random compared with your index, you will definitely need to reindex afterward.
Best bet is to write a procedure that checks the DMVs and intelligently reindexes after rebuilding and reinserting your data. There have been a number of articles written on this topic.
Depending on the use of the database, you might consider not creating indexes until after the data is loaded. That way you know they won't need a defrag and you've done fewer actual disk writes during the insertion process, thus speeding up that part of the process.
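The "load first, index after" pattern might look like the sketch below; the table, index, and linked-server names are all placeholders (and since the source here is not a SQL Server, OPENQUERY may be needed instead of a four-part name):

```sql
-- 1) Recreate the table without nonclustered indexes, 2) bulk-load the
-- day's data, 3) build the indexes once, so they start out unfragmented.
INSERT INTO dbo.Orders
SELECT * FROM LinkedSrv.RemoteDb.dbo.Orders;

CREATE CLUSTERED INDEX IX_Orders_OrderDate ON dbo.Orders (OrderDate);
CREATE NONCLUSTERED INDEX IX_Orders_CustomerId ON dbo.Orders (CustomerId);
```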
Good luck,
~Craig
Craig Outcalt
MCITP, MCDBA