July 18, 2011 at 8:33 am
Hi,
What is the best method of updating your production server databases from development?
Currently we run a nightly process that backs up our database and restores it on production, but the database has grown recently, and until we get more space I was wondering whether there is a process we could use instead of the backup-and-restore route, so we don't have to store anything on a drive.
Any suggestions?
Thanks in advance.
July 18, 2011 at 8:40 am
July 18, 2011 at 8:42 am
keyun (7/18/2011)
Hi,
What is the best method of updating your production server databases from development?
Currently we run a nightly process that backs up our database and restores it on production, but the database has grown recently, and until we get more space I was wondering whether there is a process we could use instead of the backup-and-restore route, so we don't have to store anything on a drive.
Any suggestions?
Thanks in advance.
Maybe I'm misunderstanding, but are you sure you want to restore the whole dev database to production? You would normally develop objects and release them to prod as updates, not release the whole database. What about all the live data?
-----------------------------------------------------------------------------------------------------------
"Ya can't make an omelette without breaking just a few eggs" 😉
July 18, 2011 at 9:02 am
I'd second Perry's note, and be concerned about moving from development to production.
If this is production moving to development and you are low on space, there isn't a good way to do this for free. As the Ninja mentioned, Red Gate makes Virtual Restore, and it's an ideal way to work with development-type databases that are space constrained. It uses the backup file for reads and then has stub files for the changes, similar to the way database snapshots work. There are a few other products that do the same thing, but they all cost $$.
One thing to look at, however, is the data size vs. the database size. You often have (and should have) free space in the data files on a production system. If this is the case and you have limited space, you could potentially find a place to "restore" the db, shrink it, and then back it up again. I know this is a pain, but in the short term it can let you move a "smaller" database to the development server.
Disclosure: I work for Red Gate.
July 18, 2011 at 9:06 am
Hi Perry - sorry for the confusion, but I misspoke when I mentioned prod vs. dev.
Actually, on our production database server we have a nightly job that calculates data and prepares a data mart database. Currently we do a full backup and restore from the .bak file on the web production server, where our application can access it.
The problem we are having is that just for that 30-minute backup/restore process we are holding 60GB of space on the drive, so I was looking for a solution where I do not have to store the file physically on a drive and can allocate that 60GB of space for some other purpose.
I hope this helps.
Thanks
July 18, 2011 at 9:17 am
Hi SCC
Here are my steps:
-- Step 1 - Shrink & Backup
DBCC SHRINKDATABASE ('MyTestDb', TRUNCATEONLY);
GO
BACKUP DATABASE MyTestDb TO DISK = 'G:\MSSQL\MyTestDb.bak' WITH INIT;
GO
and here is the restore...
-- Step 2 - Restore
ALTER DATABASE MyTestDb SET SINGLE_USER WITH ROLLBACK IMMEDIATE;
RESTORE DATABASE MyTestDb FROM DISK = '\\Server\G$\MSSQL\MyTestDb.bak' WITH REPLACE;
ALTER DATABASE MyTestDb SET MULTI_USER;
Thanks
July 18, 2011 at 9:22 am
Works as long as you don't do that to prod.
What I've seen work well too is to dump the log (on a copy of the prod db, obviously); a rough T-SQL sketch follows the list.
restore the prod backup to a copy db
alter the copy db to simple recovery
checkpoint
shrink the log file to 100 MB or whatever is needed
reset to full recovery
backup
copy/move
restore
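A minimal sketch of those steps, assuming the copy is restored as MyTestDb_Copy and the logical file names are MyTestDb and MyTestDb_log (those names and the paths are placeholders to adjust for your environment):
-- Restore the prod backup as a copy, moving the files so they don't collide with the live db
RESTORE DATABASE MyTestDb_Copy
FROM DISK = 'G:\MSSQL\MyTestDb.bak'
WITH MOVE 'MyTestDb' TO 'G:\MSSQL\MyTestDb_Copy.mdf',
     MOVE 'MyTestDb_log' TO 'G:\MSSQL\MyTestDb_Copy.ldf';
GO
-- Switch the copy to simple recovery so a checkpoint can truncate the log, then shrink the log file
ALTER DATABASE MyTestDb_Copy SET RECOVERY SIMPLE;
GO
USE MyTestDb_Copy;
CHECKPOINT;
DBCC SHRINKFILE (N'MyTestDb_log', 100);   -- target size in MB
GO
-- Put the recovery model back to full and back up the now-smaller copy
ALTER DATABASE MyTestDb_Copy SET RECOVERY FULL;
BACKUP DATABASE MyTestDb_Copy TO DISK = 'G:\MSSQL\MyTestDb_Copy.bak' WITH INIT;
GO
-- Copy/move MyTestDb_Copy.bak to the other server and restore it there as in Step 2 above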
July 18, 2011 at 9:25 am
How much of the data changes?
Is it better, perhaps, to build an ETL process to move the data instead of the full backup/restore?
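If you went that route, the usual shape is a per-table incremental load. Purely as an illustration (the linked server PROC_SRV, the dbo.Orders table, and its columns are made up for this sketch, not anything from this thread):
-- Illustration only: pull just the rows changed since the last load from the processing server
DECLARE @LastLoad datetime = '20110717';    -- in practice, read this from a load-control table
MERGE dbo.Orders AS tgt
USING (
    SELECT OrderID, Amount, ModifiedDate
    FROM PROC_SRV.DataMart.dbo.Orders       -- assumes a linked server and a ModifiedDate column
    WHERE ModifiedDate >= @LastLoad
) AS src
    ON tgt.OrderID = src.OrderID
WHEN MATCHED THEN
    UPDATE SET tgt.Amount = src.Amount, tgt.ModifiedDate = src.ModifiedDate
WHEN NOT MATCHED BY TARGET THEN
    INSERT (OrderID, Amount, ModifiedDate)
    VALUES (src.OrderID, src.Amount, src.ModifiedDate);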
July 18, 2011 at 9:29 am
Steve Jones - SSC Editor (7/18/2011)
How much of the data changes? Is it better, perhaps, to build an ETL process to move the data instead of the full backup/restore?
I'd agree with that on a multi-TB DB.
But for 60GB? Spend $100 and buy a new 1TB drive!
How much $ in time would it take to build a full ETL process that only moves the "correct" data across?
July 18, 2011 at 10:22 am
About ETL: yes, it could be built, but since there are lots of tables used by the web app, plus synonyms from other databases, it was decided that we do a total refresh.
About getting more space: it is doable, but I am thinking of it as a five-year vision, because I do not want to be in a situation a year from now where we have a space issue again. And I am pretty sure that server drives, SAN space, LUNs, etc. do not come that cheap, but it is on the table.
Thanks,
July 18, 2011 at 10:27 am
Well, if this is really a dev machine and NOT used for very specific tuning issues, a normal attached disk can do the job.
As for five years ahead, that might be a little too far out. It's hard to guess whether the company will buy someone out or have 100X growth in four years.
If you can spare the space on the SAN then awesome, but I wouldn't let that stop me.
July 18, 2011 at 11:24 am
I didn't think this was the dev machine, but the production machine, moving a database that is processed to production.
SAN space in the US is around US$6k/TB from what I've seen lately. That keeps dropping, but it does pay to try and keep the costs down, and looking forward is a good idea.
I think you're having a slight issue with the processing requiring 60GB in the log, which isn't needed on the production machine. I wouldn't think 60, or even 160GB is a big deal, but perhaps it is. You can restore it and then shrink it down, but you need that space available. I suppose you could process, shrink the log, then perform a backup.
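A small sketch of that last idea, assuming the data mart database is named DataMart, its log file's logical name is DataMart_log (both placeholders), and it runs in simple recovery:
USE DataMart;
GO
-- After the nightly processing finishes, release the empty log space before taking the backup,
-- so the restore on the web server doesn't have to allocate the full 60GB log file
CHECKPOINT;                               -- in simple recovery this lets the log truncate
DBCC SHRINKFILE (N'DataMart_log', 100);   -- target size in MB; in full recovery you'd need a log backup first
GO
BACKUP DATABASE DataMart TO DISK = 'G:\MSSQL\DataMart.bak' WITH INIT;
GO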
July 18, 2011 at 11:27 am
Steve Jones - SSC Editor (7/18/2011)
I didn't think this was the dev machine, but the production machine, moving a database that is processed to production.
SAN space in the US is around US$6k/TB from what I've seen lately. That keeps dropping, but it does pay to try and keep the costs down, and looking forward is a good idea.
I think you're having a slight issue with the processing requiring 60GB in the log, which isn't needed on the production machine. I wouldn't think 60, or even 160GB is a big deal, but perhaps it is. You can restore it and then shrink it down, but you need that space available. I suppose you could process, shrink the log, then perform a backup.
Chicken or the egg. You still need room (possibly on the SAN) for a full restore, a shrink (IO- and CPU-intensive), a re-backup, and a re-restore.
You save nothing on the SAN here, and you still need to buy another HD.
July 18, 2011 at 11:32 am
You don't need the same room on the SAN.
If I have a 100GB mdf and a 60GB log on the dev machine, and I can shrink that to an 80GB mdf and a 10GB log, then I need 90GB on the production machine for the restore, not 160GB. That's a 70GB savings.
July 18, 2011 at 11:36 am
So 160 GB on prod?
The backup is 100 GB.
Restoring that as a copy is another 160 GB (260 GB on the SAN).
Shrink it down to 100 GB (200 GB on the SAN).
Backing that up is another 100 GB (300 GB on the SAN).
Then copy/move it across to restore on another machine (100 to 200 GB there).
It's like the chicken and the egg! In my mind I'd just make more room on the SAN or buy a fast-ish drive for the dev machine.