November 5, 2017 at 4:23 pm
Comments posted to this topic are about the item Delete Change Tracking table data sys.syscommittab
February 23, 2018 at 12:40 pm
Hi, when I try to run the SQL script, I get the following error: "Invalid object name 'sys.syscommittab'". Any advice on how to get this working?
February 23, 2018 at 2:41 pm
chris.o.smith - Friday, February 23, 2018 12:40 PM
Hi, when I try to run the SQL script, I get the following error: "Invalid object name 'sys.syscommittab'". Any advice on how to get this working?
Did you try to run it using sqlcmd or did you try to run it in SSMS?
You need to use the DAC to query sys.syscommittab. That's what the -A option is for in the sqlcmd used to run the script.
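For anyone else hitting this, here is a minimal sketch of running such a query over the DAC with sqlcmd (the server and database names are placeholders):

```shell
# Connect over the Dedicated Admin Connection (-A); only one DAC session
# is allowed per instance, so disconnect when you're done.
sqlcmd -S YourServer -d YourDatabase -E -A -Q "SELECT TOP (10) commit_ts, commit_time FROM sys.syscommittab ORDER BY commit_ts DESC;"
```

If you only need to read the data (not delete it), the documented view sys.dm_tran_commit_table exposes the same rows without requiring the DAC.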
Sue
February 23, 2018 at 3:47 pm
Sue_H - Friday, February 23, 2018 2:41 PM
Did you try to run it using sqlcmd or did you try to run it in SSMS?
You need to use the DAC to query sys.syscommittab. That's what the -A option is for in the sqlcmd used to run the script.
Sue
Thanks, that did the trick for me. In our current set-up, this table is growing faster than the built-in clean-up process deletes from it. As a result, syscommittab is slowly growing, and it's starting to impact CHANGETABLE query performance. We're looking at the possibility of using the method described in this post to do an occasional clean-up of the table until we upgrade to SQL Server 2017 (there were some improvements to the CT clean-up process in SQL Server 2017, outlined here, that I think will solve our problems). Outside of change tracking, are you aware of any gotchas or concerns when manually deleting from sys.syscommittab in this way? Online documentation indicates that the table is used for other purposes outside of change tracking.
February 23, 2018 at 4:44 pm
chris.o.smith - Friday, February 23, 2018 3:47 PM
Thanks, that did the trick for me. [...] Outside of change tracking, are you aware of any gotchas or concerns when manually deleting from sys.syscommittab in this way? Online documentation indicates that the table is used for other purposes outside of change tracking.
I wouldn't. If you were to go down the road of using undocumented methods to clean it up, I would look at the undocumented stored procedure instead as it has some additional internal functions to check what is safe to delete.
But first I would really look at the settings - mostly retention period and making sure the auto cleanup is enabled.
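Those settings are visible in a documented catalog view, so a quick check like this (no DAC needed) shows what's currently configured:

```sql
-- One row per database that has change tracking enabled.
SELECT DB_NAME(database_id)         AS database_name,
       is_auto_cleanup_on,          -- 1 = auto clean-up enabled
       retention_period,
       retention_period_units_desc  -- MINUTES / HOURS / DAYS
FROM sys.change_tracking_databases;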
You can have a situation where the cleanup can't keep up with the number of transactions. There is an article about it and a query to see if you are hitting this -
Change Tracking Cleanup Limitation
That same guy has written about the undocumented stored procedure to clean up - sys.sp_flush_commit_table_on_demand. There are some other things written about that so I would search on that and read all you can. The article I first read on it (and the link above) is on the SirSQL blog -
Change Tracking Cleanup - Going Beyond Automatic
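For reference, a sketch of calling that procedure is below. Since it's undocumented, the parameter name and behavior are assumptions based on the blog posts above and can change between builds, so test in dev first:

```sql
-- Undocumented: removes rows from sys.syscommittab up to the safe
-- cleanup version, in batches; @numrows caps how many rows to delete.
EXEC sys.sp_flush_commit_table_on_demand @numrows = 100000;
```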
Edit - added: Here is the other blog post I was looking for where Microsoft mentions the same stored procedure and why you would want to use that, how the table is used by replication as well, etc. And how it doesn't always work...
Change Tracking (aka Syscommittab) Issues and Cleanup
The post also lists some other things to try as well. I'd look at all of those things instead of just deleting everything from that table.
Sue
February 26, 2018 at 9:37 am
I hadn't seen the Benjamin Reynolds post before and got super excited after reading through it - that seemed like a perfect solution. However, I'm running into the same quirks as I did when I first learned about the sp_flush_commit_table_on_demand sproc - nothing seems to get deleted even when it appears that it should work.
For example, here's what I tried in our dev environment: we have a database with CT turned on, and it currently has 148,962,939 records in the syscommittab table. The most recent commit_time in the table is 2018-02-14 14:17:16.333 (with a commit_ts of 294788369), and the oldest commit_time is 2017-03-28 23:17:06.740 (with a commit_ts of 156972007). At first, I had auto clean-up turned off (because, as I mentioned earlier, we turned it off due to contention issues). However, the "Safe Cleanup Version" won't ever change if auto clean-up is off. So, I turned it back on with a retention period of 2 days. I left the database alone over the weekend and then tried to run Benjamin's PowerShell script this morning. The results from the log are as follows:
26-02-2018 08:12:04.12 Script Starting
26-02-2018 08:12:05.12 Current Total Syscommittab Records = 148962939
26-02-2018 08:12:05.12 No ScopeId returned so we can't continue to try and log to a table any more!
26-02-2018 08:12:14.12 Script Completed!
26-02-2018 08:12:14.12 Last Values:
26-02-2018 08:12:14.12 Safe Cleanup Version: 294788369
26-02-2018 08:12:14.12 Syscommittab Rows deleted: 0
26-02-2018 08:12:14.12 Deadlocks Encountered: 0
26-02-2018 08:12:14.12 Number of Iterations: 9
26-02-2018 08:12:14.12 Syscommittab Records Start: 148962939
26-02-2018 08:12:14.12 Syscommittab Records End: 148962939
You can see that the "safe cleanup version" has appropriately updated to the max commit_ts in the table, since it is older than 2 days. However, the sp_flush_commit_table_on_demand sproc still didn't delete anything, and I have no idea why... I submitted a question about this on Benjamin's blog, so hopefully I can get a little guidance from there - if so, I'll come back to this post and update with any progress I make. If you have any suggestions, please let me know! Thanks again for your time!
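In case it helps anyone reproduce this, the numbers above can be checked with the documented DMV plus an internal function mentioned in the posts Sue linked (the function is undocumented, so the name is as given in those posts and may change):

```sql
-- Documented view over syscommittab (no DAC required):
SELECT MIN(commit_time) AS oldest_commit_time,
       MAX(commit_ts)   AS newest_commit_ts,
       COUNT_BIG(*)     AS total_rows
FROM sys.dm_tran_commit_table;

-- Undocumented internal function (name per the linked blog posts):
SELECT sys.change_tracking_hardened_cleanup_version() AS hardened_cleanup_version;
```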
February 26, 2018 at 3:35 pm
chris.o.smith - Monday, February 26, 2018 9:37 AM
Hi Sue, thanks for all of the detailed info. We've played around with the settings quite a bit. Originally, we had our retention period set to 15 days and auto clean-up turned on. However, in addition to the issue where the CT tables grew faster than the clean-up deleted (yes, we are running into the issue that Nic Cain wrote about, and we had messed around with the internal sproc quite a bit over the last few months but couldn't get it to reliably work), we also experienced significant contention while clean-up was running. So, we tried turning auto clean-up off and cleaning up manually on our own schedule. This is easy enough for the change tracking side tables, but doesn't address the ever-growing syscommittab table... [...]
Thanks for posting back. This is one of those really frustrating things that no one really gets until you hit it. And it really stinks.
But you got a couple of replies from Benjamin. He gave you a different command to try running from the DAC. Please post back after trying that one - I'd really like to see if that works.
Sue
March 5, 2018 at 9:34 am
My latest question to Ben is (hopefully this one didn't go to his spam folder :)): do you know of a way to force the hardened clean-up version to increment? If so, we could potentially use the following process when we need to clean up syscommittab:
1. Turn auto clean-up on
2. Wait 30 minutes for safe clean up version to update
3. Run process to force hardened clean up version to update
4. Turn auto clean-up off
5. Run the PowerShell SQL Agent job
If you have any ideas on #3, please let me know!
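For what it's worth, steps 1 and 4 of the process above are documented ALTER DATABASE syntax, roughly like this (the database name is a placeholder):

```sql
-- Step 1: turn auto clean-up on with a 2-day retention.
ALTER DATABASE YourDatabase
    SET CHANGE_TRACKING (AUTO_CLEANUP = ON, CHANGE_RETENTION = 2 DAYS);

-- Step 4: turn auto clean-up back off once the versions have advanced.
ALTER DATABASE YourDatabase
    SET CHANGE_TRACKING (AUTO_CLEANUP = OFF);
```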