March 10, 2008 at 5:51 pm
How can I turn off logging while loading and updating records in a data warehouse? I have a few procedures that let tempdb grow to 5 GB. The system slows down and takes hours to update a table. I have the impression that turning off logging and rollback options could speed things up significantly.
Is that correct, and how can I turn off all logging?
Menno van Leewen
March 10, 2008 at 7:01 pm
If you mean turning off logging through the transaction log - no, not possible. All activity is "logged" (meaning the changes are written to the log FIRST, then flushed/applied to the DB).
You should probably take a look at the various recovery models and decide what's best for you. You might also look into breaking things into smaller, more manageable chunks that don't make your tempdb "bloat". On the other hand, you could simply set tempdb to a large starting size, since you seem to use it heavily. Not forcing it to grow while doing all of this loading should help your performance tremendously.
----------------------------------------------------------------------------------
Your lack of planning does not constitute an emergency on my part...unless you're my manager...or a director and above...or a really loud-spoken end-user..All right - what was my emergency again?
March 10, 2008 at 7:12 pm
ALTER DATABASE [FUBAR] SET RECOVERY SIMPLE
...
ALTER DATABASE [FUBAR] SET RECOVERY FULL
Note: As Adam was saying, use with caution! You can easily shoot yourself in the foot with this.
-- RBarryYoung, (302)375-0451, blog: MovingSQL.com, Twitter: @RBarryYoung
Proactive Performance Solutions, Inc. "Performance is our middle name."
March 10, 2008 at 7:23 pm
Hi Matt,
I want to try setting tempdb to a larger starting size. How can I do that?
Menno
March 10, 2008 at 7:26 pm
Hi Barry,
The recovery model is already set to simple. Do you have additional options in mind?
Menno
March 10, 2008 at 7:40 pm
You can use SQL Server Enterprise Manager: right-click on tempdb and change its starting size.
As Matt suggested, break your transactions into smaller chunks. If you are using the simple recovery model and your transactions are small, the transaction log needn't grow much, because the log records are flushed and the space is reused.
Also, you mentioned that tempdb was growing quite large. Perhaps you're doing some very large joins that also require a sort? Look at your query plans to see whether there are hash join or sort operators. If so, you could break your complex joins into several statements, with the output of each statement going to a temp table. If nothing else, this may help you simplify your logic, and it gives you a natural place to put a pair of
BEGIN TRANSACTION
and
COMMIT TRANSACTION
statements around each chunk of code.
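As a rough sketch of that pattern (the table and column names here are invented for illustration, not taken from the actual warehouse):

```sql
-- Step 1: materialize the expensive join once, into a temp table
SELECT d.DossierID, f.NewStatus
INTO   #Stage                          -- #Stage is a hypothetical staging table
FROM   dbo.BigFactTable AS f
INNER JOIN dbo.Dossiers AS d
        ON d.DossierID = f.DossierID;

-- Step 2: apply the change as its own short transaction
BEGIN TRANSACTION;

UPDATE d
SET    d.Status = s.NewStatus
FROM   dbo.Dossiers AS d
INNER JOIN #Stage AS s
        ON s.DossierID = d.DossierID;

COMMIT TRANSACTION;

DROP TABLE #Stage;
```

Each transaction stays short, so in simple recovery the log space can be reused between steps.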
March 10, 2008 at 7:41 pm
M.van Leewen (3/10/2008)
Hi Matt, I want to try setting tempdb to a larger starting size. How can I do that?
Menno
It would look something like this:
ALTER DATABASE [tempdb]
MODIFY FILE ( NAME = N'temp', SIZE = 1512000KB, MAXSIZE = UNLIMITED, FILEGROWTH = 256000KB );
ALTER DATABASE [tempdb]
MODIFY FILE ( NAME = N'templog', SIZE = 1512000KB, MAXSIZE = UNLIMITED, FILEGROWTH = 256000KB );
Set your file sizes accordingly (if yours grows to 5 GB, set it to 6 GB or more), and make sure to set the growth increments as well.
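If you're not sure of the logical file names and current sizes to plug in, this (which should work on SQL 2000 and later) lists them for tempdb:

```sql
-- Lists each tempdb file with its logical name, physical path, and current size
USE tempdb;
EXEC sp_helpfile;
```

The "name" column is what goes in the NAME = N'...' part of the MODIFY FILE clause.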
March 10, 2008 at 8:56 pm
M.van Leewen (3/10/2008)
Hi Barry, the recovery model is already set to simple. Do you have additional options in mind?
Menno
Nope. That was it.
March 10, 2008 at 9:43 pm
Yes, there are... just setting the recovery model to SIMPLE doesn't stop logging for data loads. You have to use BCP, BULK INSERT, or SELECT/INTO, and even then your target table has to meet certain criteria for logging to be suppressed. It's all in Books Online... take a peek at the "Fast Load" options for BCP and BULK INSERT, or at "Logged and Nonlogged Bulk Copies".
There's nothing you can do to suppress logging for UPDATEs/INSERTs/DELETEs. But the log just won't stay active for long, and the LDF files won't get very large, if you break mega-row UPDATEs/INSERTs/DELETEs into smaller chunks, as others have already suggested. You do need to make sure that "Truncate Log On Checkpoint" is turned on... again, check Books Online for that information.
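One possible sketch of that chunking, using SET ROWCOUNT (the pre-2005 way to limit rows per pass; the table and column names here are invented):

```sql
SET ROWCOUNT 50000;                 -- limit each pass to 50,000 rows

DECLARE @rows INT;
SET @rows = 1;

WHILE @rows > 0
BEGIN
    UPDATE dbo.BigFact              -- hypothetical fact table
    SET    Status = 'processed'
    WHERE  Status = 'pending';

    SET @rows = @@ROWCOUNT;         -- 0 when nothing is left to update

    -- In SIMPLE recovery, a checkpoint lets the log space be reused
    CHECKPOINT;
END

SET ROWCOUNT 0;                     -- back to unlimited
```

Each pass commits on its own, so the log never has to hold more than one chunk's worth of changes.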
--Jeff Moden
Change is inevitable... Change for the better is not.
March 10, 2008 at 11:38 pm
Thanks Ian, this is exactly the case. I use large queries with complex joins. I join them to look up values and, depending on the value, change the record status. These queries are built as views and called from DTS. I don't use stored procedures, and there are no BEGIN/COMMIT statements. I think your tip points at the current technical weak point in my warehouse: it is successful and growing fast, but the technical model is not yet set up for that scale.
Work to do....
Menno
March 10, 2008 at 11:39 pm
OK thanks Matt.
March 10, 2008 at 11:47 pm
Thanks Jeff,
I have auto-shrink on. I will check whether "truncate log on checkpoint" is on. Is it in the database settings, or do I have to use an ALTER DATABASE command? The problem is not in the loads and inserts: I drop all constraints before loading, which has sped up the loads significantly, and recreate them afterwards to guard integrity. The problem is in the complex joins that check a fact table and change the status depending on the result.
I have to break the procedure down into smaller chunks and write some stored procedures with BEGIN and COMMIT statements in them, but I've never done this before.
Menno
March 10, 2008 at 11:59 pm
I have the auto shrink on. I
That's probably one of the worst things you can do... you are fragmenting the database and fragmenting the physical files on disk. Every time it grows, the database must wait, and then you turn around and shrink it again. A never-ending cycle of pain for the users and the database.
Trunc. Log On Checkpoint is a database option.
March 11, 2008 at 12:05 am
My apologies... I just remembered something and verified it in Books Online... you don't need to set Trunc. Log On Checkpoint, because the SIMPLE recovery model does it for you...
From Books Online:
Note: This model is similar to setting the trunc. log on chkpt. database option in Microsoft® SQL Server™ version 7.0 or earlier.
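If you want to confirm which model a database is actually in, something like this should work from SQL 2000 onward (substitute your own database name for the placeholder):

```sql
-- Returns SIMPLE, FULL, or BULK_LOGGED
SELECT DATABASEPROPERTYEX(N'FUBAR', 'Recovery');
```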
March 11, 2008 at 1:29 am
Hello Ian,
This is the view: Update_Factsdossiers_Maatregelen_financieringsstatus
SELECT dbo.DimAdmincode.Bedrijfstype,
dbo.FactsDossiers.Admincode,
dbo.FactsDossiers.Kengetal AS Referentiekengetal,
dbo.FactsDossiers.Samenloop AS Samenloopdossier,
dbo.Filter_datum_Tijdpunten.Datum AS Referentiedatum,
dbo.Filter_datum_Tijdpunten.DagBoekjaar AS ReferentieBoekjaar,
dbo.Filter_datum_Tijdpunten.DagPeriode AS Referentieperiode,
FactsDossiers_1.Kengetal AS updatetargetkengetal,
FactsDossiers_1.Samenloop AS Samenloopmaatregel,
Filter_datum_Tijdpunten_1.Datum AS Updatedatum,
Filter_datum_Tijdpunten_1.DagBoekjaar AS UpdateBoekjaar,
Filter_datum_Tijdpunten_1.DagPeriode AS Updateperiode,
Filter_datum_Tijdpunten_1.DagMaandnaam AS Updatemaand
FROM dbo.FactsDossiers INNER JOIN
dbo.FactsDossiers FactsDossiers_1 ON dbo.FactsDossiers.Admincode = FactsDossiers_1.Admincode AND
dbo.FactsDossiers.Klantcode = FactsDossiers_1.Klantcode AND dbo.FactsDossiers.Documentnummer = FactsDossiers_1.Documentnummer AND
dbo.FactsDossiers.Documenttype = FactsDossiers_1.Documenttype AND
dbo.FactsDossiers.DocumentProductcode = FactsDossiers_1.DocumentProductcode INNER JOIN
dbo.DimAdmincode ON dbo.FactsDossiers.Admincode = dbo.DimAdmincode.Admincode AND
FactsDossiers_1.Admincode = dbo.DimAdmincode.Admincode INNER JOIN
dbo.Filter_datum_Tijdpunten Filter_datum_Tijdpunten_1 ON FactsDossiers_1.Datum = Filter_datum_Tijdpunten_1.Datum INNER JOIN
dbo.Filter_datum_Tijdpunten ON dbo.FactsDossiers.Datum = dbo.Filter_datum_Tijdpunten.Datum AND
Filter_datum_Tijdpunten_1.DagBoekjaar = dbo.Filter_datum_Tijdpunten.DagBoekjaar AND
Filter_datum_Tijdpunten_1.DagPeriode = dbo.Filter_datum_Tijdpunten.DagPeriode
WHERE (dbo.DimAdmincode.Bedrijfstype = N'BJZ')
AND (dbo.FactsDossiers.Samenloop <> N'nb')
AND (FactsDossiers_1.Kengetal <> N'indicatiebesluit')
AND (FactsDossiers_1.Kengetal <> N'eersteindicatiebesluit')
And the following statement is run from DTS:
UPDATE Update_Factsdossiers_Maatregelen_financieringsstatus
SET Samenloopmaatregel = Samenloopdossier
WHERE (Referentiekengetal = N'inzorg')
  AND (updatetargetkengetal = N'nieuwemaatregelen')
Menno