October 16, 2003 at 3:54 am
We have multiple SQL 7.0 servers, each containing around 30 databases averaging 4 GB in size.
We are planning the SQL 2K migration, but we have noticed that some databases are performing poorly: we are seeing high disk read times on the .mdf disk subsystem, which is causing the whole server to perform poorly.
Something I have noticed is that SQL Server is using 2 GB of memory (the maximum allowed for SQL 7.0 Standard), and I was wondering if anyone knew how SQL Server manages this capacity checking and whether there is a known issue related to it.
The performance issues we are seeing only appear once SQL Server has hit the 2 GB limit.
I've run DBCC CHECKDB, updated statistics, re-indexed, etc., but nothing works apart from restarting SQL Server, and the relief is only short-lived: the problems return once the memory limit is hit again.
I've also checked tempdb (which lives on its own disk subsystem), locks, blocking, etc., and found nothing.
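For reference, this is how I've been checking the configured memory limits; a minimal sketch using sp_configure (the option names below are the SQL 7.0 'min/max server memory (MB)' settings):

-- make the advanced memory options visible
EXEC sp_configure 'show advanced options', 1
RECONFIGURE
GO
-- report the configured memory range (run values are in MB)
EXEC sp_configure 'min server memory (MB)'
EXEC sp_configure 'max server memory (MB)'
GO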
Any ideas?
October 16, 2003 at 7:09 am
quote:
We are planning the SQL 2K migration, but we have noticed that some databases are performing poorly: we are seeing high disk read times on the .mdf disk subsystem, which is causing the whole server to perform poorly.
Before or after migration?
quote:
Something I have noticed is that SQL Server is using 2 GB of memory (the maximum allowed for SQL 7.0 Standard), and I was wondering if anyone knew how SQL Server manages this capacity checking and whether there is a known issue related to it.
Are you using SQL Server 2000 Standard Edition? You could upgrade to Enterprise Edition, which supports 4 GB of memory or more depending on which OS version you use.
quote:
The performance issues we are seeing only appear once SQL Server has hit the 2 GB limit.
SQL Server is a memory-intensive application and likes memory very much. By default it manages memory allocation dynamically, growing to use almost all available memory (leaving only 4-16 MB free for the OS). If there is not sufficient memory, the system starts paging, which means more physical I/O activity, and you will notice a performance decrease. It sounds as though your system is experiencing a memory shortage.
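If that is the case, one option is to cap SQL Server's memory explicitly so the OS keeps some headroom; a minimal sketch, assuming the standard sp_configure options (the 1536 MB figure is only an example, size it to your box):

-- cap the buffer pool so the OS keeps some physical memory for itself
EXEC sp_configure 'show advanced options', 1
RECONFIGURE
GO
EXEC sp_configure 'max server memory (MB)', 1536  -- example value only
RECONFIGURE
GO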
October 16, 2003 at 9:14 am
We are still using SQL 7.0 Standard.
Further to the problems described above, I am seeing LCK_M_X waits, which appear to be compile locks taken during sproc recompilations.
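If anyone wants to see these waits for themselves, a quick look at sysprocesses shows who is waiting and on what; just a sketch against the SQL 7.0 system tables:

-- list sessions that are currently waiting, with wait type and resource
SELECT spid, blocked, waittype, waitresource, lastwaittype
FROM master..sysprocesses
WHERE waittype <> 0x0000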
Because my databases are the back end to a web-based front end, all our connections come through as one user. This user is not the object owner of the sprocs (the owner is dbo), so SQL Server checks for an execution plan for sqluser.up_sprocname, can't find one, and so has to generate one.
By executing the sproc with the owner name, e.g. EXEC dbo.up_sprocname, the cached execution plan is reused, saving an enormous amount of overhead.
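To make the difference concrete (up_sprocname stands in for any of our sprocs, called as the sqluser login):

-- unqualified: SQL Server first probes for sqluser.up_sprocname and
-- takes a compile lock while resolving the name
EXEC up_sprocname

-- owner-qualified: resolves straight to the dbo object and reuses the cached plan
EXEC dbo.up_sprocname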
Something else that could be causing these high recompile counts: we are continuously feeding new data and updates into the databases, so statistics go out of date sooner rather than later, which in turn triggers recompiles.
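In the meantime we can refresh statistics by hand between the automatic updates; a minimal sketch (sp_updatestats covers every table in the current database, and the table name in the FULLSCAN variant is just a placeholder):

-- refresh statistics on every user table in the current database
EXEC sp_updatestats
GO
-- or rebuild statistics for a heavily-churned table with a full scan
UPDATE STATISTICS dbo.MyBusyTable WITH FULLSCAN
GO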