June 25, 2009 at 8:48 am
You could try forcing a checkpoint and then doing a tlog backup.
i.e.
CHECKPOINT;
BACKUP LOG [databasename] TO DISK = '<path to log backup file>';
If you've got a pretty busy system, or one with long-running transactions, the CHECKPOINT command might help you get more of your tlog available to be backed up.
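If you want to see how much that actually frees up, something like this before and after will show you (a quick sketch - the backup path is a placeholder, and DBCC SQLPERF(LOGSPACE) reports the percentage of each log file currently in use):

-- How full is each database's log right now?
DBCC SQLPERF(LOGSPACE);

-- Flush dirty pages, then back up the log (substitute your own path)
CHECKPOINT;
BACKUP LOG [databasename] TO DISK = 'C:\Backups\databasename_log.trn';

-- Re-check: the 'Log Space Used (%)' figure should have dropped
DBCC SQLPERF(LOGSPACE);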
June 25, 2009 at 10:09 am
nscott (6/24/2009)I have 400 GB of free disk space on the SQL server. For log properties, initial size = 195 MB; autogrowth = By 10 MB, restricted growth to 200 MB.
You have restricted this log file to 200 MB - and once it fills to that point, SQL Server halts all modifications until the log file is increased or the transaction filling it is cleared. I am betting anything that your single transaction exceeds 200 MB.
I would not restrict my log file's autogrowth at all. I would size my log file to handle the amount of transactions that occur between transaction log backups (actually, just a bit larger). Then, if I have an event that needs more transaction log space, the file will automatically grow and handle it without taking the system down or killing the process.
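If it helps, the change is a short ALTER DATABASE (a sketch - the database name, logical file name, and sizes are placeholders you would tune to your own workload):

ALTER DATABASE [YourDatabase]
MODIFY FILE
(
    NAME = YourDatabase_log,   -- logical file name, from sys.database_files
    SIZE = 1024MB,             -- big enough for the activity between log backups
    FILEGROWTH = 256MB,        -- a sensible fixed increment instead of 10 MB
    MAXSIZE = UNLIMITED        -- no artificial cap
);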
Jeffrey Williams
“We are all faced with a series of great opportunities brilliantly disguised as impossible situations.”
― Charles R. Swindoll
How to post questions to get better answers faster
Managing Transaction Logs
June 25, 2009 at 10:23 am
Jeffrey Williams (6/25/2009)
nscott (6/24/2009)I have 400 GB of free disk space on the SQL server. For log properties, initial size = 195 MB; autogrowth = By 10 MB, restricted growth to 200 MB.
You have restricted this log file to 200 MB - and once it fills to that point, SQL Server halts all modifications until the log file is increased or the transaction filling it is cleared. I am betting anything that your single transaction exceeds 200 MB.
I would not restrict my log file's autogrowth at all. I would size my log file to handle the amount of transactions that occur between transaction log backups (actually, just a bit larger). Then, if I have an event that needs more transaction log space, the file will automatically grow and handle it without taking the system down or killing the process.
Yah, I saw that... but it just didn't click... mostly because I figured that he couldn't actually mean that. 195 MB initial size, 10 MB autogrowth, 200 MB max size... means it can't even do a single growth if it needs to, since a 10 MB growth from 195 MB would put it at 205 MB, past the 200 MB cap. 🙁
June 25, 2009 at 11:27 am
Thanks to you all for your input! These databases were set up by a third-party software company, and I just assumed their default settings would be OK. I have double-checked with them, and I will allow unrestricted growth on the transaction logs while performing regular tlog backups.
They also set the databases themselves to restricted growth, which I don't understand either! Shouldn't databases always be allowed to grow as needed?
P.S. I went into the properties of the main DB in question and set the tlog to unrestricted. But when I went back in to view the properties, it is restricted again, but with a limit of 2097152 MB. I guess I just need to run the T-SQL command to force the unrestricted setting?
June 25, 2009 at 11:33 am
nscott (6/25/2009)
P.S. I went into the properties of the main DB in question and set the tlog to unrestricted. But when I went back in to view the properties, it is restricted again, but with a limit of 2097152 MB. I guess I just need to run the T-SQL command to force the unrestricted setting?
The maximum possible size for a transaction log file is 2 TB, which is what that 'limit' is. So, essentially it is unrestricted.
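If you want to confirm it from the metadata, max_size in sys.database_files is reported in 8 KB pages (a quick check to run in the database in question):

-- 268435456 pages x 8 KB = 2 TB, the hard cap for a log file;
-- -1 would mean 'grow until the disk is full'
SELECT name, max_size
FROM sys.database_files
WHERE type_desc = 'LOG';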
Gail Shaw
Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability
June 30, 2009 at 10:04 am
Thanks for the info! We've started a regular tlog backup routine and the timeout issue seems to be resolved. However, now I have an issue with the specific update query I'm running. Not sure if I should create a new post.
In Access, I have two linked SQL tables. I'm trying to insert all 489 records from one table into the other. It's not timing out anymore, but when it does finally run, it's trying to modify 1.9 MILLION records! I'm using the following query expression:
INSERT INTO dbo_WKORDER ( WO_NUMBER, WO_END_DT, WO_CAT_TY, WO_CAT_CD, WO_USER15, WO_USER22, WO_MOD_BY, WO_MOD_DT )
SELECT "SR-" & [dbo_s_repair]![S_Rpr_ID] AS Expr1, dbo_s_repair.S_Rpr_Dt, "Service Laterals" AS Expr2, "10130" AS Expr3, dbo_s_repair.S_Rpr_By, dbo_s_repair.S_Rpr_ID, "nscott" AS Expr4, "6/29/2009" AS Expr5
FROM dbo_WKORDER, dbo_s_repair;
I'm not fluent in T-SQL so I'm trying to run everything in Access. Maybe there's a problem with the expressions? The WO_NUMBER field is indeed a text field - I'm trying to populate it with "SR-" plus the ID from the original database.
Thanks for any input, let me know if I need to start a new thread...
June 30, 2009 at 10:09 am
nscott (6/30/2009)
However, now I have an issue with the specific update query I'm running. Not sure if I should create a new post.
Please do, as this has nothing to do with backups (the forum) or transaction logs (the original question).
Gail Shaw
Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability
June 30, 2009 at 10:14 am
nscott (6/30/2009)
Thanks for the info! We've started a regular tlog backup routine and the timeout issue seems to be resolved. However, now I have an issue with the specific update query I'm running. Not sure if I should create a new post. In Access, I have two linked SQL tables. I'm trying to insert all 489 records from one table into the other. It's not timing out anymore, but when it does finally run, it's trying to modify 1.9 MILLION records! I'm using the following query expression:
INSERT INTO dbo_WKORDER ( WO_NUMBER, WO_END_DT, WO_CAT_TY, WO_CAT_CD, WO_USER15, WO_USER22, WO_MOD_BY, WO_MOD_DT )
SELECT "SR-" & [dbo_s_repair]![S_Rpr_ID] AS Expr1, dbo_s_repair.S_Rpr_Dt, "Service Laterals" AS Expr2, "10130" AS Expr3, dbo_s_repair.S_Rpr_By, dbo_s_repair.S_Rpr_ID, "nscott" AS Expr4, "6/29/2009" AS Expr5
FROM dbo_WKORDER, dbo_s_repair;
I'm not fluent in T-SQL so I'm trying to run everything in Access. Maybe there's a problem with the expressions? The WO_NUMBER field is indeed a text field - I'm trying to populate it with "SR-" plus the ID from the original database.
Thanks for any input, let me know if I need to start a new thread...
You have a cross join in your query. There are no join criteria between dbo_WKORDER and dbo_s_repair.
June 30, 2009 at 10:16 am
nscott (6/30/2009)
INSERT INTO dbo_WKORDER ( WO_NUMBER, WO_END_DT, WO_CAT_TY, WO_CAT_CD, WO_USER15, WO_USER22, WO_MOD_BY, WO_MOD_DT )
SELECT "SR-" & [dbo_s_repair]![S_Rpr_ID] AS Expr1, dbo_s_repair.S_Rpr_Dt, "Service Laterals" AS Expr2, "10130" AS Expr3, dbo_s_repair.S_Rpr_By, dbo_s_repair.S_Rpr_ID, "nscott" AS Expr4, "6/29/2009" AS Expr5
FROM dbo_WKORDER, dbo_s_repair;
Well I see two problems...
1. You have a cross join between dbo_WKORDER and dbo_s_repair, because both tables are listed in the FROM clause without any join criteria. The join would be specified either as
FROM
dbo_WKORDER a JOIN dbo_s_repair b ON
a.WO_NUMBER = b.WO_NUMBER
or as
FROM
dbo_WKORDER,dbo_s_repair
WHERE
dbo_WKORDER.WO_NUMBER = dbo_s_repair.WO_NUMBER
2. There is no WHERE clause to control what records to work with.
June 30, 2009 at 10:37 am
Sorry - I guess it's too late for a new thread, unless it's easy to move all the replies as well... I'll know better next time.
Since I just want to dump all records from one to the other, I don't need a WHERE, right? So it can just be:
INSERT INTO dbo_WKORDER ( WO_NUMBER, WO_END_DT, WO_CAT_TY, WO_CAT_CD, WO_USER15, WO_USER22, WO_MOD_BY, WO_MOD_DT )
SELECT "SR-" & [dbo_s_repair]![S_Rpr_ID] AS Expr1, dbo_s_repair.S_Rpr_Dt, "Service Laterals" AS Expr2, "10130" AS Expr3, dbo_s_repair.S_Rpr_By, dbo_s_repair.S_Rpr_ID, "nscott" AS Expr4, "6/29/2009" AS Expr5
FROM dbo_s_repair;
Do the expressions look OK?
June 30, 2009 at 10:52 am
Now that you've eliminated the cross join, and given that you want all the records from the source table, it should work. Have you run it in a test environment yet to see if it gives you the results you are expecting?
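Since you mentioned you're not fluent in T-SQL, here's roughly how that same insert would look run directly against SQL Server, wrapped in a transaction so you can check the row count and throw the test away (a sketch - it assumes S_Rpr_ID is numeric and that the real tables behind the Access links are dbo.WKORDER and dbo.s_repair):

BEGIN TRANSACTION;

INSERT INTO dbo.WKORDER (WO_NUMBER, WO_END_DT, WO_CAT_TY, WO_CAT_CD, WO_USER15, WO_USER22, WO_MOD_BY, WO_MOD_DT)
SELECT 'SR-' + CAST(S_Rpr_ID AS varchar(20)),  -- T-SQL concatenates with + and single quotes, not & and "
       S_Rpr_Dt,
       'Service Laterals',
       '10130',
       S_Rpr_By,
       S_Rpr_ID,
       'nscott',
       '20090629'                              -- unambiguous form of 6/29/2009
FROM dbo.s_repair;

SELECT @@ROWCOUNT AS rows_inserted;  -- should say 489, not 1.9 million

ROLLBACK TRANSACTION;  -- discard the test run; change to COMMIT once it looks right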
June 30, 2009 at 11:23 am
Isn't that what backups are for? :blush: Seriously though, I see the real importance of that, even with a "simple" procedure like this. What is the best or most common practice for a test environment? Would it be a different instance of the SQL database or a duplicate DB within the same instance?
Thanks again for all your very quick responses!
June 30, 2009 at 12:27 pm
nscott (6/30/2009)
Isn't that what backups are for? :blush: Seriously though, I see the real importance of that, even with a "simple" procedure like this. What is the best or most common practice for a test environment? Would it be a different instance of the SQL database or a duplicate DB within the same instance? Thanks again for all your very quick responses!
That one falls into the land of "it depends".
If your environment is fairly small, having two databases that are copies of each other, one for testing and one for production, is probably OK. If you have a larger environment, separate servers with separate resources become more and more necessary.
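For the small-environment option, restoring last night's backup under a different name on the same instance is often all it takes (a sketch - the database names, logical file names, and paths are placeholders for your own):

RESTORE DATABASE [YourDatabase_Test]
FROM DISK = 'C:\Backups\YourDatabase.bak'
WITH MOVE 'YourDatabase' TO 'C:\Data\YourDatabase_Test.mdf',
     MOVE 'YourDatabase_log' TO 'C:\Data\YourDatabase_Test_log.ldf',
     RECOVERY;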