July 19, 2010 at 8:29 pm
Comments posted to this topic are about the item Turn on CHECKSUM after an upgrade
David B.
July 19, 2010 at 8:36 pm
A good article to point out the checksum "issue".
The one tip that I think the community would benefit from is how to ensure the bit is flipped on and that all pages are read (maybe I read through too fast and didn't see the how).
Per Paul Randal (unfortunately I don't have a link handy), a DBCC CHECKDB([dbname]) WITH DATA_PURITY should be run in order to get the checksum fully implemented after the upgrade from SQL 2000.
With that enabled, errors 823, 824, and 825 will be raised when there are potential disk read/write failures, and I have new SQL Agent alerts set up for each of those errors so I receive the warnings before they become major issues.
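In case it helps, here is a rough sketch of what one of those alerts can look like (the alert name and the 'DBA Team' operator are placeholders; repeat for 823 and 825 and adjust to your environment):

USE msdb;
GO
-- Alert on error 824 (logical consistency I/O error); create similar alerts for 823 and 825.
-- When @message_id is specified, @severity must be 0.
EXEC dbo.sp_add_alert
    @name = N'Error 824 - logical consistency I/O error',
    @message_id = 824,
    @severity = 0,
    @include_event_description_in = 1,  -- include the error text in the e-mail
    @delay_between_responses = 300;     -- seconds, to avoid an alert storm

-- Route the alert to an existing operator ('DBA Team' is a placeholder name).
EXEC dbo.sp_add_notification
    @alert_name = N'Error 824 - logical consistency I/O error',
    @operator_name = N'DBA Team',
    @notification_method = 1;           -- 1 = e-mail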
The steps I follow when upgrading a DB:
1. Restore the DB to SQL 2005/2008.
2. Change compatibility level to 90 or 100.
3. Change PAGE_VERIFY to CHECKSUM.
4. DBCC CHECKDB WITH DATA_PURITY (to ensure checksum is good to go).
5. DBCC UPDATEUSAGE (as a best practice to get the usage data up-to-date after the upgrade) -- sometimes needed to correct issues found by the DATA_PURITY check as well. A T-SQL sketch of these steps follows below.
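Here's a rough sketch of steps 2-5 (the database name is a placeholder; ALTER DATABASE ... SET COMPATIBILITY_LEVEL is SQL 2008 syntax, so on SQL 2005 use sp_dbcmptlevel instead):

-- 2. Compatibility level (SQL 2008 syntax; on SQL 2005: EXEC sp_dbcmptlevel 'MyDB', 90)
ALTER DATABASE MyDB SET COMPATIBILITY_LEVEL = 100;
-- 3. Page verification
ALTER DATABASE MyDB SET PAGE_VERIFY CHECKSUM;
-- 4. Full consistency check plus column-value (data purity) checks
DBCC CHECKDB (MyDB) WITH DATA_PURITY;
-- 5. Correct the space-usage metadata carried over from SQL 2000
DBCC UPDATEUSAGE (MyDB);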
Best Regards,
Steve
July 20, 2010 at 6:44 am
I'm curious to know if anyone has done an analysis of the impact of CHECKSUM on insert/update/delete performance. There must be some tangible penalty. In the SQL 2000 days, when you wanted to maximize performance, you switched off even TORN_PAGE_DETECTION, so I'm assuming there's an even more significant penalty under the scrutiny of CHECKSUM...
July 20, 2010 at 7:20 am
I would like to know more about this. The article does not explain what happens when this is not enabled, nor does it mention whether there is a performance hit and, if so, what the impact is.
Thanks.
July 20, 2010 at 7:34 am
Add me to the list. What is the best protection, and how much (approximately) of a performance hit are we looking at?
Student of SQL and Golf, Master of Neither
July 20, 2010 at 7:58 am
When I upgraded our NetBackup software I turned on CHECKSUM for most databases and didn't see any impact, but I didn't do much of a scientific study.
To get the pages converted from TORN_PAGE_DETECTION, can you rebuild the clustered index on the table? Will that work?
July 20, 2010 at 7:59 am
A few links were found:
http://www.sqlskills.com/blogs/paul/category/Page-Checksums.aspx
"In benchmarking tests with a TPCH workload during SQL Server 2005 development, we measured approx 2% performance degradation as a result of having checksums enabled. "
I cannot get this page to load, but based on the title it seems like it will have pertinent information:
Regards,
Steve
July 20, 2010 at 8:43 am
S.K. (7/19/2010)
Per Paul Randal (unfortunately I don't have a link handy), a DBCC CHECKDB([dbname]) WITH DATA_PURITY should be run in order to get the checksum fully implemented after the upgrade from SQL 2000.
DBCC CHECKDB WITH DATA_PURITY has nothing to do with CHECKSUM. DATA_PURITY checks that the values stored in the columns are valid for their data types. For databases created in SQL Server 2005 and above, these checks are done automatically when you run CHECKDB, but for databases created earlier you have to explicitly flip the bit by running DBCC CHECKDB with the DATA_PURITY switch.
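If you want to verify the bit has actually been flipped, I believe (going from memory, so please verify on a test system first) the undocumented DBCC DBINFO command exposes it:

-- dbi_dbccFlags = 2 should indicate that data purity checks now run by default.
-- Field name recalled from memory; treat this as an assumption and confirm yourself.
DBCC DBINFO ('MyDB') WITH TABLERESULTS;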
When you enable CHECKSUM page verification, it doesn't go back and write the checksum on all the existing data pages; there is no command that writes the CHECKSUM into every page. One way is to rebuild all indexes, but that covers only the clustered indexes; heaps are NOT touched. For heaps, you may have to do an in-place update like the one below.
UPDATE heap SET c1 = c1;
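For example, something along these lines (table, index, and column names are placeholders):

-- Rebuilding the indexes rewrites every page under them, so those pages pick up a checksum.
ALTER INDEX ALL ON dbo.MyTable REBUILD;

-- For a heap, the dummy in-place update is intended to dirty every page
-- so it gets rewritten with a checksum the next time it is flushed to disk.
UPDATE dbo.MyHeap SET c1 = c1;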
Disabling CHECKSUM to get some performance benefit is NOT a good idea. There are so many other things you can do that will give a lot more bang for the time spent. Please don't do that.
On a related topic, I did some tests on the impact of enabling BACKUP CHECKSUM and the results were very interesting. I asked the MSFT team why this would happen and haven't received an answer so far. So my takeaway point here: always test, even if you happen to know the answer, and act on first-hand information.
http://sankarreddy.com/2010/03/performance-impact-of-using-backup-checksum-in-sql-server-20052008/
Sankar Reddy | http://SankarReddy.com/
July 20, 2010 at 8:45 am
Sankar,
Thanks for the clarification. Apologies for any confusion, all.
Steve
July 20, 2010 at 10:31 am
Thanks for this article, it helped remind me of a step I need to make sure is in place for our next SQL Server upgrade.
July 25, 2010 at 8:49 am
Hi dma33 & John
Linchi Shea has posted an analysis of the impact of page checksums and the default trace under various IO workloads. You can review it here. I didn't want to repeat that work, as Linchi had already run the experiments and explained the most important facts. I just wanted to put the emphasis on the upgrade problem and mention the read-only scenario.
John, if you choose to deactivate page checksums, you won't be warned when there are torn pages or bits trashed by anything lying under the file system. It is much safer to leave the parameter ON; it is not just another best practice. Ask Paul Randal or any PFE or escalation engineer who has worked on a corruption case, and check the Data Corruption section on this forum: these 823, 824 and 825 errors are happening.
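If you want to see where you stand across an instance, a quick check against sys.databases (available from SQL 2005 on) should do it:

-- Databases that are not protected by page checksums
SELECT name, page_verify_option_desc
FROM sys.databases
WHERE page_verify_option_desc <> 'CHECKSUM';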
HTH
David B.
June 10, 2012 at 4:56 pm
To Steve's list I would add a step to rebuild all heaps and clustered indexes to ensure all pages have a checksum. Without that, checksums will be applied (and checked with DBCC CHECKDB) only to newly changed pages, not the existing ones:
The steps I follow when upgrading a DB from SQL 2000 or older:
1. Restore the DB to SQL 2005/2008.
2. Change compatibility level to 90 or 100.
3. Change PAGE_VERIFY to CHECKSUM.
4. Rebuild all clustered indexes and heaps to ensure all pages have a checksum (see the sketch below). For rebuilding a heap in SQL 2008 there is the ALTER TABLE mytable REBUILD command.
5. DBCC CHECKDB with data_purity (to ensure checksum is good to go)
6. DBCC UPDATEUSAGE (as a best practice to get the usage data up-to-date after the upgrade) -- sometimes needed to correct issues found by the data_purity check as well.
To rebuild a heap in SQL 2005 you have to create a clustered index and then drop it.
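A rough sketch of step 4 (object names are placeholders; ALTER TABLE ... REBUILD requires SQL 2008 or later):

-- SQL 2008+: rebuild a heap directly so every page gets a checksum
ALTER TABLE dbo.MyHeap REBUILD;

-- SQL 2005 workaround: create and drop a clustered index to rewrite the heap's pages
CREATE CLUSTERED INDEX IX_TempRebuild ON dbo.MyHeap (SomeColumn);
DROP INDEX IX_TempRebuild ON dbo.MyHeap;

-- Tables with a clustered index: rebuilding rewrites every page under it
ALTER INDEX ALL ON dbo.MyTable REBUILD;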
Best Regards,
Vedran