May 13, 2008 at 2:22 pm
jamiejulius (5/13/2008)
One more question to help me understand what's happening. The checkpoint process kicks in as a result of accumulated write activity, which needs to be flushed to the disk. The act of flushing occupies the disk so much that it prevents concurrent queries from executing normally.
It's slowing other disk accesses down, causing other reads and writes to queue up. Any query that needs a read or write from disk must wait on the I/O completion.
Is this so, because every concurrent query must at least write to the log file even though the data is cached in memory?
Data modifications must write to the log file. The logging of the modification must be completed (on disk) before the transaction commits and before the client is notified that the insert/update/delete is done.
Reads can be done completely from memory, provided the data is in the cache.
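A quick way to see whether the log is the bottleneck is to look at the cumulative wait stats. A minimal sketch (sys.dm_os_wait_stats is a standard DMV in SQL 2005 and later; which wait types matter in your case is a judgment call):

-- Cumulative waits on log flushes and data-page I/O since the last restart
SELECT wait_type, waiting_tasks_count, wait_time_ms
FROM sys.dm_os_wait_stats
WHERE wait_type IN ('WRITELOG', 'PAGEIOLATCH_SH', 'PAGEIOLATCH_EX')
ORDER BY wait_time_ms DESC;

High WRITELOG waits mean sessions are sitting at commit waiting for the log flush, which is exactly the symptom described above.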
Is this why you suggested that separating the MDF and LDF files to separate disks would improve performance?
Yup. Also, data and log files have very different access patterns. Log files are written sequentially and seldom read; data files have random reads and writes.
Separating the log files onto their own disk is recommended so that the disk heads aren't going all over the disk, but are in position to do a write when one is needed.
Disk seek time greatly exceeds the time required to actually do the read/write.
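If anyone wants to try the separation, a rough sketch follows (the database name, logical file name and the L: drive path are all made up; the physical copy of the .ldf has to happen while the database is offline):

-- Point the log file at its new location; takes effect when the DB comes back online
ALTER DATABASE MyDB MODIFY FILE (NAME = MyDB_log, FILENAME = 'L:\SQLLogs\MyDB_log.ldf');
ALTER DATABASE MyDB SET OFFLINE WITH ROLLBACK IMMEDIATE;
-- ...copy the existing .ldf to L:\SQLLogs\ at the OS level here...
ALTER DATABASE MyDB SET ONLINE;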
Gail Shaw
Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability
May 13, 2008 at 2:33 pm
Thanks, Gail!
You've been extremely helpful throughout this entire post.
Jamie
May 14, 2008 at 2:37 am
GilaMonster (5/12/2008)
Is the DB set to autoclose?
I just wanted to point out that if the DB is being hit 600 times per minute, no SQL Server instance would close the DB...
May 14, 2008 at 7:26 am
You might consider switching the database to simple recovery model if possible...
May 14, 2008 at 7:41 am
Jonathan Mallia (5/14/2008)
You might consider switching the database to simple recovery model if possible...
And lose the ability to do point-in-time recoveries.
Why do you suggest that?
Gail Shaw
Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability
May 14, 2008 at 7:41 am
Hans Lindgren (5/14/2008)
GilaMonster (5/12/2008)
Is the DB set to autoclose?
I just wanted to point out that if the DB is being hit 600 times per minute, no SQL Server instance would close the DB...
I wonder about this. Per BOL: "autoclose: When true, the database is shut down cleanly and its resources are freed after the last user logs off." It is certainly possible that a user could log in, do something, and log out 600 times per minute and leave plenty of time for SQL Server to see the last user log off and close the database (or at least start that process). 10 times per second leaves lots of CPU cycles for multi-gigahertz systems to detect and respond to things. I wonder if there is a delay in this logic inside the engine that isn't mentioned under the sp_dboption topic in BOL.
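For anyone following along, checking and clearing the setting is a one-liner each (MyDB is a placeholder; sys.databases is SQL 2005+, sp_dboption is the SQL 2000 route):

-- Is auto_close on for this database?
SELECT name, is_auto_close_on FROM sys.databases WHERE name = 'MyDB';
-- Turn it off
ALTER DATABASE MyDB SET AUTO_CLOSE OFF;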
Best,
Kevin G. Boles
SQL Server Consultant
SQL MVP 2007-2012
TheSQLGuru on Google's mail service
May 14, 2008 at 8:01 am
TheSQLGuru (5/14/2008)
I wonder about this. Per BOL: "autoclose: When true, the database is shut down cleanly and its resources are freed after the last user logs off." It is certainly possible that a user could log in, do something, and log out 600 times per minute and leave plenty of time for SQL Server to see the last user log off and close the database (or at least start that process). 10 times per second leaves lots of CPU cycles for multi-gigahertz systems to detect and respond to things. I wonder if there is a delay in this logic inside the engine that isn't mentioned under the sp_dboption topic in BOL.
Well, I guess you are right in a sense. (I've done some testing and yeah, there is no discernible delay. Just being the only user and running multiple USE myDB ; GO ; USE master ; GO ; USE myDB batches shows up as multiple startups in the log; the test is sketched below.) But I would argue that no application with this number of hits would have connection pooling turned off and serialize all DB access.
But yes, in the general case I was wrong in saying that SQL Server would never close it 😛 So much for writing short answers!
I would say that there is never any absolute truth, but that would be too short!
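For anyone who wants to repeat the test, this is roughly what it looks like (AUTO_CLOSE must be on for myDB, and the xp_readerrorlog arguments shown are the SQL 2005-style ones, so treat them as an assumption):

USE myDB;
GO
USE master;
GO
USE myDB;
GO
-- Each cycle should produce a 'Starting up database' entry in the error log
EXEC sys.xp_readerrorlog 0, 1, N'Starting up database';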
May 14, 2008 at 8:03 am
In reference to the idea of changing the Recovery Model, are you suggesting that this would reduce the frequency and/or duration of the checkpoints?
May 14, 2008 at 8:09 am
Hans Lindgren (5/14/2008)
But I would argue that no application with this number of hits would have connection pooling turned off and serialize all DB access.
I have a rule when replying to forum questions. Never assume anything. Always check every possibility, no matter how unlikely.
Gail Shaw
Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability
May 14, 2008 at 8:10 am
jamiejulius (5/14/2008)
In reference to the idea of changing the Recovery Model, are you suggesting that this would reduce the frequency and/or duration of the checkpoints?
No. It won't. All it will mean is that your transaction logs auto-truncate on checkpoint and don't need to be backed up.
It also means that you will only be able to restore to the last full/diff backup in case of disaster.
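For reference, the switch itself is a single statement (MyDB is a placeholder), but weigh the restore implications above before running it:

-- Switch to simple recovery; the log will now truncate on checkpoint
ALTER DATABASE MyDB SET RECOVERY SIMPLE;
-- Confirm the current model
SELECT name, recovery_model_desc FROM sys.databases WHERE name = 'MyDB';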
Gail Shaw
Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability
May 14, 2008 at 12:17 pm
GilaMonster (5/14/2008)
jamiejulius (5/14/2008)
In reference to the idea of changing the Recovery Model, are you suggesting that this would reduce the frequency and/or duration of the checkpoints?
No. It won't. All it will mean is that your transaction logs auto-truncate on checkpoint and don't need to be backed up.
It also means that you will only be able to restore to the last full/diff backup in case of disaster.
As such, I don't see how this suggestion will help my original problem.
May 16, 2008 at 12:20 pm
Could it be the checkpoint process? It happens at what seem to be set intervals if traffic is steady. Also, what type of storage are you using? A SAN? If so, have the sysadmins run an analysis on the SAN array for a set time period. Are any other applications using the same storage? Maybe they are causing the waits on the disk writes. Just some suggestions... (one way to check from the SQL side is sketched below)
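Before involving the SAN admins, SQL Server's own per-file stall counters can narrow it down. A minimal sketch using sys.dm_io_virtual_file_stats (SQL 2005+; on 2000 the rough equivalent is fn_virtualfilestats):

-- Cumulative I/O stall time per database file since the instance started
SELECT DB_NAME(vfs.database_id) AS db_name, mf.physical_name,
       vfs.io_stall_read_ms, vfs.io_stall_write_ms
FROM sys.dm_io_virtual_file_stats(NULL, NULL) AS vfs
JOIN sys.master_files AS mf
  ON mf.database_id = vfs.database_id AND mf.file_id = vfs.file_id
ORDER BY vfs.io_stall_write_ms DESC;

If the write stalls cluster on one file or one drive, that points at where the checkpoint I/O is hurting.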