July 3, 2009 at 8:36 am
Hi,
We have observed during a benchmark that SQL Server 2008 is around 30% slower than SQL Server 2005; in fact, some user actions can be five times slower. This is due to high disk I/O.
After investigating, it seems to me that buffer-cache management is different from SQL Server 2005.
With SQL Server 2005, each time a page is referenced in the cache (by a SELECT, for example), a counter is incremented. This helps keep the page in memory longer. SQL Server 2008 doesn't seem to take this counter into account.
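To make the reference-counting idea concrete, here is a toy sketch in Python (an illustration only, not SQL Server's actual implementation): each cached page carries a reference count, and when the cache is full the page with the fewest references is evicted.

```python
class CountingCache:
    """Toy buffer cache: evict the page with the fewest references."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.counts = {}  # page_id -> reference count

    def touch(self, page_id):
        # Evict the least-referenced page if we need room for a new one.
        if page_id not in self.counts and len(self.counts) >= self.capacity:
            victim = min(self.counts, key=self.counts.get)
            del self.counts[victim]
        self.counts[page_id] = self.counts.get(page_id, 0) + 1


cache = CountingCache(capacity=2)
for page in ["A1", "A1", "A1", "B1", "C1"]:
    cache.touch(page)
# A1 was referenced three times, so when C1 arrives it is B1
# (count 1) that gets evicted, not A1.
```

Under this scheme, repeatedly scanning table A "protects" its pages, which is the behaviour described for SQL Server 2005 above.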
I have created a simple test case to show this.
I created two tables (A and B) with around 2 GB of data each.
SQL Server's memory limit is set to 3 GB.
I do one full scan on table A to put all of its pages in the cache.
I do two more full scans on table A to increment the counter on table A's cached pages by 2.
On SQL 2005, if I do a full scan on table B, some pages of table B are added to the cache but not all, because memory is limited (only 3 GB). I need to do three full scans on table B to get all of its pages into memory, at which point some pages of table A are removed.
On SQL 2008, if I do a single full scan on table B, all of its pages are added to the cache, and some pages of table A are removed from the cache.
Do you think this is a bug in SQL Server 2008? In any case, it increases disk I/O.
Fabio
PS: Here is the query I used to check how many pages are in cache:
SELECT COUNT(*) AS cached_pages_count,
       name, index_id
FROM sys.dm_os_buffer_descriptors AS bd
INNER JOIN
(
    SELECT OBJECT_NAME(p.object_id) AS name,
           p.index_id, au.allocation_unit_id
    FROM sys.allocation_units AS au
    INNER JOIN sys.partitions AS p
        ON au.container_id = p.hobt_id
        AND au.type IN (1, 3)   -- IN_ROW_DATA, ROW_OVERFLOW_DATA
    UNION ALL
    SELECT OBJECT_NAME(p.object_id) AS name,
           p.index_id, au.allocation_unit_id
    FROM sys.allocation_units AS au
    INNER JOIN sys.partitions AS p
        ON au.container_id = p.partition_id
        AND au.type = 2         -- LOB_DATA joins on partition_id
) AS obj
    ON bd.allocation_unit_id = obj.allocation_unit_id
WHERE bd.database_id = DB_ID()
GROUP BY name, index_id
ORDER BY cached_pages_count DESC;
July 6, 2009 at 7:03 pm
Well that's the problem with benchmarks isn't it? 🙂
You don't say in your post whether this is causing you a problem on a real system or not.
You might be interested to know that the Lazy Writer (which acts as the clock hand for the data cache) uses an LRU-K algorithm, not the simple counting method you refer to. There are a good number of technical articles on LRU-K available via your favourite search engine.
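For the curious, here is a toy LRU-2 sketch in Python (an illustration of the general LRU-K idea, not SQL Server's actual lazy writer): each page remembers its last K access times, and the victim is the page whose K-th most recent access is oldest, with pages that have fewer than K accesses evicted first.

```python
from collections import deque


class LRUKCache:
    """Toy LRU-K eviction: evict the page whose K-th most recent access
    is furthest in the past; pages with fewer than K recorded accesses
    are treated as oldest and evicted first."""

    def __init__(self, capacity, k=2):
        self.capacity = capacity
        self.k = k
        self.history = {}  # page_id -> deque of last k access times
        self.clock = 0

    def touch(self, page_id):
        self.clock += 1
        if page_id not in self.history and len(self.history) >= self.capacity:
            def kth_recent(p):
                h = self.history[p]
                # Oldest of the last k accesses; -inf if fewer than k.
                return h[0] if len(h) == self.k else float("-inf")
            victim = min(self.history, key=kth_recent)
            del self.history[victim]
        self.history.setdefault(page_id, deque(maxlen=self.k)).append(self.clock)


cache = LRUKCache(capacity=2, k=2)
for page in ["A", "A", "B", "C"]:
    cache.touch(page)
# A was touched twice, B only once; when C arrives, B is evicted
# even though B was accessed more recently than A.
```

This is why LRU-K resists cache pollution from one-off scans: a page needs a second reference before its recency really counts.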
That's all academic, and I mention it just in case you are curious. You should never rely on internal implementation details - these can and do change with each new version, edition, service pack, cumulative update, hot fix...
If you are having problems in a real environment, please post the full details and someone will advise you.
Paul
July 7, 2009 at 4:07 am
If you are doing performance testing, then you need to make sure your environments are the same. This can be difficult.
Are you using the same type of computer hardware for your SQL 2005 and SQL 2008 tests? Are your disks the same type?
Is the Windows version the same? Do you have the same configuration for page size, etc.? Are you using a desktop or server operating system? It is only worth doing this type of testing on a server OS.
Is the SQL configuration the same? Are the maximum and minimum server memory settings the same? Are your databases the same? Are they located in the same part of the disk (for DAS disks), or do they have the same access speed (for SAN disks)? Are their indexes in the same state? Do you have up-to-date statistics for your tables?
If you are sure the environments are the same, then you can do some meaningful performance tests. If you are concerned about the results, let us know about how your systems are set up and the results you get. We may be able to offer some advice.
Original author: https://github.com/SQL-FineBuild/Common/wiki/ 1-click install and best practice configuration of SQL Server 2019, 2017, 2016, 2014, 2012, 2008 R2, 2008 and 2005.
When I give food to the poor they call me a saint. When I ask why they are poor they call me a communist - Archbishop Hélder Câmara
June 23, 2011 at 2:45 pm
I've been trying to find documentation on the exact page replacement policy myself without much success.
I did find the following quote though which might explain the difference in behaviour you are seeing.
The latest builds of SQL Server have moved to Time of Last Access (TLA) algorithms to determine which buffers to eject from cache. The older algorithms were based on a reference counting scheme and TLA has been found to be more efficient.
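A Time of Last Access scheme is essentially classic LRU with a timestamp per buffer. A toy sketch (hypothetical, not SQL Server's code) shows why it would produce the behaviour Fabio observed, since repeated earlier scans no longer protect a page:

```python
class TLACache:
    """Toy Time-of-Last-Access eviction: evict the page whose last
    access is oldest (plain LRU by timestamp)."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.last_access = {}  # page_id -> time of last access
        self.clock = 0

    def touch(self, page_id):
        self.clock += 1
        if page_id not in self.last_access and len(self.last_access) >= self.capacity:
            victim = min(self.last_access, key=self.last_access.get)
            del self.last_access[victim]
        self.last_access[page_id] = self.clock


cache = TLACache(capacity=2)
for page in ["A", "A", "A", "B", "C"]:
    cache.touch(page)
# A's three earlier accesses don't help: when C arrives, A's last
# access is older than B's, so A is evicted.
```

Contrast this with a reference-counting cache, where A's three accesses would have protected it; a single scan of table B is now enough to push table A's pages out.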
June 27, 2011 at 1:29 pm
Hello Fabio:
As other posters have suggested - be certain you are making apples-to-apples comparisons, i.e., everything else should be THE SAME except for the SQL Server versions.
Otherwise, you'll get false positives / negatives in your tests.
Another thing to consider is OS upgrades and how memory allocation is affected.
We upgraded / set-up several new SQL Servers (including a 2-node cluster) from WIN2003/SQL2005 to WIN2008/SQL 2008 R2.
AWE (Address Windowing Extensions) is irrelevant on 64-bit architectures and should not be enabled.
We experienced performance problems during high workloads with one of our DELL servers, but it was because hyper-threading was enabled.
Hyper-threading presents a logical-to-physical CPU ratio; current implementations use a 2:1 ratio, meaning there are two logical CPUs for each physical CPU. SQL Server creates a worker scheduler for each CPU it "sees", so with hyper-threading enabled it "thinks" there are twice as many CPUs. At higher workloads those worker schedulers try to exploit the phantom CPUs and you get low-level performance issues. We tried tinkering with MAXDOP settings and rewriting queries, but to no avail.
DELL finally admitted this was a "reported/documented issue" and told me how to fix it.
After that, I read up on the issue to understand why it occurred.
Finally, and this is off the cuff: check the compatibility levels on your databases. SQL 2005 (90) databases work fine under SQL 2008 (100), but depending on certain queries, the engine may behave per the semantics of the database's compatibility level, which can affect performance. On a related note, rebuild indexes and update statistics after changing databases to compatibility level 100 (2008).
Best of luck!
June 28, 2011 at 10:20 am
Seems you are new here - so welcome! But please always check the dates on threads you are going to post to. Makes no sense to post to a thread that is 2 years old. 🙂
Best,
Kevin G. Boles
SQL Server Consultant
SQL MVP 2007-2012
TheSQLGuru on googles mail service
June 28, 2011 at 10:43 am
TheSQLGuru (6/28/2011)
Seems you are new here - so welcome! But please always check the dates on threads you are going to post to. Makes no sense to post to a thread that is 2 years old. 🙂
Good point and sorry.
But that post was rendered as a link in an automated email I received from SQLSERVERCENTRAL within the past few days.
So, I clicked and responded.
Therefore, it seems reasonable to me that the link/post shouldn't have been included in the first place.
Perhaps there's a filtering mechanism that could be applied to older/dead posts?
Thanks!
June 28, 2011 at 10:50 am
Martin Smith-178018 (6/23/2011)
I've been trying to find documentation on the exact page replacement policy myself without much success. I did find the following quote though which might explain the difference in behaviour you are seeing.
The latest builds of SQL Server have moved to Time of Last Access (TLA) algorithms to determine which buffers to eject from cache. The older algorithms were based on a reference counting scheme and TLA has been found to be more efficient.
He's the one that restarted the thread!