December 7, 2009 at 2:06 pm
We're loading a new database by importing data from an old application, populating 75 tables with a couple of gigs of data. Not much data, but the previous load took several hours in total due to various steps and procedures and data propagation among tables.
We have a server with logs on one drive array & data on another drive array.
I realize I'll lose point-in-time recovery if in Simple mode.
Does Simple recovery run any faster than Full recovery in this situation?
December 7, 2009 at 2:30 pm
It can. It mostly depends on log-growth events locking things up; those will happen in Full, and they can be avoided in Simple if the load is done in small enough batches that the log space from one batch is freed and reused before the next one needs it.
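As a rough sketch of what I mean by batches (table, column, and batch-size names here are made up, so adjust to your schema):

-- Batched load: each batch commits on its own, so in Simple mode
-- the log space it used can be marked reusable at the next checkpoint.
DECLARE @BatchSize int;
DECLARE @Rows int;
SET @BatchSize = 50000;
SET @Rows = 1;

WHILE @Rows > 0
BEGIN
    INSERT INTO dbo.TargetTable (KeyCol, Col1, Col2)
    SELECT TOP (@BatchSize) s.KeyCol, s.Col1, s.Col2
    FROM dbo.StagingTable s
    WHERE NOT EXISTS (SELECT 1 FROM dbo.TargetTable t
                      WHERE t.KeyCol = s.KeyCol);

    SET @Rows = @@ROWCOUNT;

    CHECKPOINT; -- in Simple mode this lets the committed log space be reused
END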
- Gus "GSquared", RSVP, OODA, MAP, NMVP, FAQ, SAT, SQL, DNA, RNA, UOI, IOU, AM, PM, AD, BC, BCE, USA, UN, CF, ROFL, LOL, ETC
Property of The Thread
"Nobody knows the age of the human race, but everyone agrees it's old enough to know better." - Anon
December 7, 2009 at 8:22 pm
Right, log growth (or database growth) KILLS performance, so if you set the growth increment to a reasonable size then it won't be too bad, but I don't think you'll see much difference between Full and Simple in what you are looking at here.
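For what it's worth, a fixed-size increment is usually safer than the default percentage growth, since each growth event then costs the same. Something like this (database and logical file names are hypothetical):

-- Hypothetical names; set a fixed, predictable growth increment.
ALTER DATABASE LoadDB
MODIFY FILE (NAME = LoadDB_Log, FILEGROWTH = 512MB);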
CEWII
December 7, 2009 at 11:15 pm
As an alternative, in order to prevent log growth during the load, it may be worthwhile to grow the log file substantially beforehand, to something large enough that it will easily cover the data load.
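Something along these lines before kicking off the load (database name, logical log file name, and size are guesses you would adjust to your environment):

-- Hypothetical names and size; pre-grow the log so it never
-- has to auto-grow in the middle of the load.
ALTER DATABASE LoadDB
MODIFY FILE (NAME = LoadDB_Log, SIZE = 20GB);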
Jason...AKA CirqueDeSQLeil
_______________________________________________
I have given a name to my pain...MCM SQL Server, MVP
SQL RNNR
Posting Performance Based Questions - Gail Shaw
Learn Extended Events
December 8, 2009 at 7:24 am
That is a good idea; the only issue I see is that it is not easy to truly guess the right size. But if you grow it big enough, it won't have to grow very often during the load.
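One way to take some of the guesswork out is to run a trial load and then check how much log it actually chewed through:

-- Reports log size and percent used for every database on the instance.
DBCC SQLPERF(LOGSPACE);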
CEWII
December 8, 2009 at 8:59 am
If the log is growing on every load, why shrink it? Why not leave it large? Shrinking it doesn't make sense. If the database is growing, you are not properly watching its size. If there is a chance that you will need more space in the database, you ought to be expanding it manually to prevent too many fragments.
Moving to Simple mode only means that committed transactions are marked as "free" in the log and the space can be reused. The same logging takes place. If you have multiple transactions, then you can reuse the space, but you are still hitting the disks.
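The switch itself is a one-liner each way; just remember that after going back to Full you need a full backup to restart the log backup chain (database name and backup path here are hypothetical):

ALTER DATABASE LoadDB SET RECOVERY SIMPLE;
-- ... run the load ...
ALTER DATABASE LoadDB SET RECOVERY FULL;
-- Hypothetical path; a full backup restarts the log backup chain.
BACKUP DATABASE LoadDB TO DISK = N'X:\Backups\LoadDB.bak';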
What can make a difference is getting more I/O throughput, or perhaps dropping indexes before the load and adding them back after. You might want to read up on Project REAL and some of the things they did to load a ton of data quickly.
http://www.microsoft.com/sqlserver/2005/en/us/project-real.aspx
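A minimal sketch of the drop-and-rebuild approach (index, table, and column names are made up):

-- Hypothetical names; drop nonclustered indexes before the load...
DROP INDEX IX_TargetTable_Col1 ON dbo.TargetTable;

-- ... run the load, then rebuild each index in a single pass afterward.
CREATE NONCLUSTERED INDEX IX_TargetTable_Col1
    ON dbo.TargetTable (Col1);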