January 12, 2009 at 11:25 am
We have two production servers (Prod1 and Prod2). Each server has around 50 databases, and each database is updated with a huge volume of data only once a month. During the data load, users cannot access that database because it becomes very slow. I would like to set up an environment with no downtime. I was planning a kind of standby server for the two production servers: I would load the data into a staging server, do all the manipulation I need, make it identical to Prod1, and then fail Prod1 over to the staging server, which then becomes Prod1. I want to use the same staging server for both Prod1 and Prod2, and I would also like to automate the whole process.
I know clustering could resolve this for me, but I want an easier, less expensive approach, as I cannot afford the resources to duplicate both Prod1 and Prod2. I am sure there are many experts in this forum who have already dealt with such requirements, so please suggest the best solution.
thanks
January 12, 2009 at 11:32 am
When you're loading the monthly data, how heavily does that hit the server hardware? If, for example, you had two instances of SQL on one machine, and you were loading data into a copy of the database in one instance, would that make the server so slow it wouldn't do what you need?
Other than two instances, the only thing I can think of is two servers.
- Gus "GSquared", RSVP, OODA, MAP, NMVP, FAQ, SAT, SQL, DNA, RNA, UOI, IOU, AM, PM, AD, BC, BCE, USA, UN, CF, ROFL, LOL, ETC
Property of The Thread
"Nobody knows the age of the human race, but everyone agrees it's old enough to know better." - Anon
January 12, 2009 at 11:35 am
Mike - not sure that I qualify for the "expert" criteria that you put in your post but I will share what we do. 😀
We actually have both instances in production and have the application that faces them point to the one with the most recent data. So, in your example, we would take Prod1 out via the application connection while its data was being updated. When that was complete, we would bring that instance back in, take Prod2 out, and do the same. You could also do this with your staging server, if that is appropriate.
As for clustering, that will provide high-availability in the event of hardware failure but really won't help in the situation that you describe unless I have misunderstood something.
HTH
David
@SQLTentmaker“He is no fool who gives what he cannot keep to gain that which he cannot lose” - Jim Elliot
January 12, 2009 at 11:39 am
We have been doing this for several years, and we feel we have to improve performance so that users don't experience that downtime even for a minute. I am thinking of alternatives to the data load. Right now, a data load keeps the server busy for 28 hours for each database. We want to do all our data loading on a staging server and, once the data is ready, make it available to users within minutes. We don't want to load data onto production directly.
My idea: finish the data load process on staging and get it ready to move into production. Move the MDF file onto production, detach the respective database on Prod1, and attach the staging database to production.
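That detach/attach swap can be sketched in T-SQL. This is only a minimal sketch: the database name and file paths below are hypothetical, and the actual file copy from the staging server happens at the OS level between the two steps.

```sql
-- On Prod1: drop connections and detach the stale copy (hypothetical names/paths)
ALTER DATABASE SalesDB SET SINGLE_USER WITH ROLLBACK IMMEDIATE;
EXEC sp_detach_db @dbname = N'SalesDB';

-- Copy the freshly loaded .mdf/.ldf files from the staging server here
-- at the OS level (e.g. robocopy), overwriting the detached files.

-- Re-attach the updated files under the same database name
CREATE DATABASE SalesDB
    ON (FILENAME = N'D:\Data\SalesDB.mdf'),
       (FILENAME = N'E:\Logs\SalesDB_log.ldf')
    FOR ATTACH;
```

The downtime window is just the detach, copy, and attach; with databases in the hundreds of GB, the copy dominates, so copying to a local staging folder on Prod1 first can shrink the window further.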
January 12, 2009 at 11:41 am
Mike, the solution that I mention will provide you with that. I don't know of any other way than taking that server out of the "mix" while doing the loading.
We literally only experience a moment of "downtime" while we point the application to the other server. Current connections are bled off (or wait until they are completed with their work) and then brought up on the other server. Really does work well.
David
@SQLTentmaker“He is no fool who gives what he cannot keep to gain that which he cannot lose” - Jim Elliot
January 12, 2009 at 11:42 am
How big are the databases? (I have a couple of thoughts on this, but need to know database size before I can suggest anything that might be intelligent.)
- Gus "GSquared", RSVP, OODA, MAP, NMVP, FAQ, SAT, SQL, DNA, RNA, UOI, IOU, AM, PM, AD, BC, BCE, USA, UN, CF, ROFL, LOL, ETC
Property of The Thread
"Nobody knows the age of the human race, but everyone agrees it's old enough to know better." - Anon
January 12, 2009 at 11:45 am
Each database ranges from 200 GB to 800 GB. Right now the biggest is 800 GB, and they keep growing.
January 12, 2009 at 11:48 am
Mike Levan (1/12/2009)
My idea: Finish dataload process in staging and get ready to move into production. Move mdf file onto production. Detach the respective DB on Prod1 and attach the stage db to production.
This too would work. If it is a large database, though, the size could be restrictive, and the copy time and network utilization could be expensive.
You could also backup/restore to a different name and then run a rename script: rename the current database to _Old and rename the restore from Stage to the formal name. If you use compression software for backups, you could save some time with this scenario, especially if you have instant file initialization enabled.
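A rough sketch of that backup/restore-and-rename approach (database names, logical file names, and paths are all hypothetical):

```sql
-- Restore the staging backup under a temporary name alongside the live database
RESTORE DATABASE SalesDB_New
    FROM DISK = N'\\Stage\Backups\SalesDB.bak'
    WITH MOVE N'SalesDB'     TO N'D:\Data\SalesDB_New.mdf',
         MOVE N'SalesDB_log' TO N'E:\Logs\SalesDB_New_log.ldf';

-- During the brief maintenance window, swap the names
ALTER DATABASE SalesDB SET SINGLE_USER WITH ROLLBACK IMMEDIATE;
ALTER DATABASE SalesDB MODIFY NAME = SalesDB_Old;
ALTER DATABASE SalesDB_New MODIFY NAME = SalesDB;
ALTER DATABASE SalesDB SET MULTI_USER;
```

The restore can run for hours without touching the live database; only the rename block at the end needs a downtime window, and that is typically seconds.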
Just some more thoughts to consider.
David
@SQLTentmaker“He is no fool who gives what he cannot keep to gain that which he cannot lose” - Jim Elliot
January 12, 2009 at 12:14 pm
I think your best bet is to load into a staging copy of the database, copy the files to the production server, then take the production database offline and attach the updated database under the same database name.
That might create a few minutes of downtime, maybe just a few seconds, but it should be a significant improvement over your current situation.
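One way to sketch that offline-and-swap step (the database name is hypothetical); taking the database offline rather than detaching it keeps its metadata registered on the server:

```sql
-- Take the production database offline for the file swap
ALTER DATABASE SalesDB SET OFFLINE WITH ROLLBACK IMMEDIATE;

-- Overwrite the data and log files with the freshly loaded copies
-- at the OS level here; the file names and paths must match exactly.

-- Bring it back online against the updated files
ALTER DATABASE SalesDB SET ONLINE;
```

Users see downtime only for the duration of the copy, not for the 28-hour load itself.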
Another alternative is to have two instances of SQL Server on the server. Load up the staging copy, restore it to the second instance, then change the connection in your front end to that instance. The following month, load the staging copy into the first instance and change your connection back to that. That will work if you have a single connection definition for the databases.
- Gus "GSquared", RSVP, OODA, MAP, NMVP, FAQ, SAT, SQL, DNA, RNA, UOI, IOU, AM, PM, AD, BC, BCE, USA, UN, CF, ROFL, LOL, ETC
Property of The Thread
"Nobody knows the age of the human race, but everyone agrees it's old enough to know better." - Anon
January 12, 2009 at 12:25 pm
I do not believe the clustering solution will work unless you set up an active/active cluster and then funnel all the data load through one machine. Even then, you will probably still experience blocking that affects client performance immensely. Is there any way to cut the load down into more manageable transactional chunks?
January 12, 2009 at 2:38 pm
GSquared (1/12/2009)
I think your best bet is load into a staging copy of the database, copy the files to the production server, then take the production database offline and attach the updated database with the same database name. Might create a few minutes of downtime, maybe just a few seconds, but should be a significant improvement over your current situation.
I'm not sure I understand correctly, but if the update takes hours on staging too, what about the changes going into production in the meantime?
Apart from that, copying the files over the network for detach/attach would take time too. What is the procedure to compensate for or mitigate the transfer time?
I am sure there must be a more efficient way to do this?
Thanks!!
The_SQL_DBA
MCTS
"Quality is never an accident; it is always the result of high intention, sincere effort, intelligent direction and skillful execution; it represents the wise choice of many alternatives."
January 12, 2009 at 2:42 pm
The OP stated that the data is only updated once per month, so the solutions offered should be sufficient and should allow for very minimal downtime - or am I missing something? (Which wouldn't surprise me...)
David
@SQLTentmaker“He is no fool who gives what he cannot keep to gain that which he cannot lose” - Jim Elliot
January 13, 2009 at 6:57 am
The_SQL_DBA (1/12/2009)
GSquared (1/12/2009)
I think your best bet is load into a staging copy of the database, copy the files to the production server, then take the production database offline and attach the updated database with the same database name. Might create a few minutes of downtime, maybe just a few seconds, but should be a significant improvement over your current situation.
I'm not sure I understand correctly, but if the update takes hours on staging too, what about the changes going into production in the meantime?
Apart from that, copying the files over the network for detach/attach would take time too. What is the procedure to compensate for or mitigate the transfer time?
I am sure there must be a more efficient way to do this?
Thanks!!
Per the original post, the databases are reporting databases that get updated once per month. There aren't updates except those. Or I misread something. So updates to production while the staging databases are updated aren't an issue, if I'm reading it correctly.
Yes, the copying would add a certain amount of time to the upload process, but it would result in a very small downtime window, as opposed to the 28 hours of "it's too slow" time that is the current situation.
- Gus "GSquared", RSVP, OODA, MAP, NMVP, FAQ, SAT, SQL, DNA, RNA, UOI, IOU, AM, PM, AD, BC, BCE, USA, UN, CF, ROFL, LOL, ETC
Property of The Thread
"Nobody knows the age of the human race, but everyone agrees it's old enough to know better." - Anon
January 13, 2009 at 6:57 am
Maybe one thing I missed mentioning here: I am not updating all the databases at one time; each database may be updated at a different time. So when I update staging and then move the files to production for attaching, this may also consume a lot of time. From this post I now have three options:
1. Detach/attach, moving the files to production in the background
2. Restoring the database from staging
3. Clustering
I am sure 1 and 2 are time consuming. Are there any other alternatives I can add to my list?
Thanks
January 13, 2009 at 7:04 am
Mike - I am a bit confused about how clustering could be a solution, as it is really an HA solution for hardware failures. Can you expand on how you would use it in your scenario? Thanks.
David
@SQLTentmaker“He is no fool who gives what he cannot keep to gain that which he cannot lose” - Jim Elliot