February 22, 2017 at 2:05 pm
Server is an x64 VM with Windows 2012 and SQL 2012 SP3. It has 32 GB of RAM with 24,576 MB dedicated to SQL Server. This is an OLTP logging server that records logging from transactions running on other servers; BizTalk connects to this server to log the information, to the tune of about 2M records per day in single-record transactions.
After a while, no specific time, could be one week, could be three, some of the processes on the server slow down or hang completely. A reboot of the server fixes the problem for a while.
One of the processes is a report that runs normally in a couple of minutes but is now taking hours to run, if it even finishes. The other, a very important process, is the archival process which moves records older than 90 days from the main DB to an archive DB. That is the thing I am most concerned with now.
The queries are hung on the server with a resource_semaphore wait. When I query what it is looking for, RAM-wise, it says:
dop 1
requested_memory_kb 4737520
granted_memory_kb NULL
required_memory_kb 1024
used_memory_kb NULL
max_used_memory_kb NULL
The actual query this stuck process is executing is:
Delete A
From ACSLog.dbo.AuditLog A With (Rowlock)
Inner Join ACSLog_Archive.dbo.AuditLog B On A.AuditLogID = B.AuditLogID OPTION (MAXDOP 1)
The number of records it is trying to delete, at this time, is 500. The batch size is usually 5,000 to 50,000.
SQL Server never grants the query the RAM to run, and this doesn't make sense, because SQL Server is supposed to reduce the memory request until it can grant at least SOME RAM to the query. At least that's my understanding.
Queries such as the following never finish:
SELECT
TEXT
,query_plan
,requested_memory_kb
,granted_memory_kb
,used_memory_kb
FROM sys.dm_exec_query_memory_grants emg
CROSS APPLY sys.dm_exec_sql_text(sql_handle)
CROSS APPLY sys.dm_exec_query_plan(emg.plan_handle)
ORDER BY emg.requested_memory_kb DESC
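When grants are stuck like this, it can also help to look at the semaphore itself rather than the individual requests. Something along these lines (a sketch against the standard DMV; column names as of SQL 2012) shows how much workspace memory the server thinks it can grant versus how much is already granted and how many sessions are queued:

```sql
-- Workspace memory state per resource semaphore (small- and large-request queues)
SELECT
     pool_id
    ,resource_semaphore_id
    ,target_memory_kb      -- grant target for this semaphore
    ,available_memory_kb   -- memory still available to grant
    ,granted_memory_kb     -- memory currently granted out
    ,grantee_count         -- queries holding grants
    ,waiter_count          -- queries queued on RESOURCE_SEMAPHORE
    ,timeout_error_count
    ,forced_grant_count
FROM sys.dm_exec_query_resource_semaphores;
```

If available_memory_kb is near zero with a nonzero waiter_count, the grants are exhausted rather than merely slow.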
I am trying to figure out why this is happening and what I can do to fix it, short of rebooting the server.
Internet searches have not yielded anything, but I am not sure how to word the problem to a search engine.
Please let me know what else you need to assist, if possible.
Thanks,
Chris
Learning something new on every visit to SSC. Hoping to pass it on to someone else.
February 22, 2017 at 3:58 pm
Stamey - Wednesday, February 22, 2017 2:05 PM (quoted in full above)
1) You have standard edition of SQL Server 2012 I presume? So you have a MAGIC BULLET FIX of simply increasing RAM to the maximum allowed for your SQL Server, which is 128GB. Actually give the server 150 or so to let the OS and other stuff have their RAM without taking any from SQL Server. It just PAINS ME to see people hobbling SQL Server with such amounts of RAM and then complaining about memory pressure or slow IO.
2) What is the query plan on that delete query? If the history table or main table are not indexed properly you could be scanning and hashing a bajillion rows, which will crush your memory (and IO).
3) I wouldn't do a join-delete there anyway. I would do a delete top N from main where exists history (logical code there, not actual). Even better would be to do a delete from main inserting the output into history in a single statement. Note that neither of these matter if you are scanning.
4) How are you controlling your batch size? set rowcount?
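The single-statement move mentioned in point 3 can be sketched like this (a hedged illustration only; table and column names are taken from the thread, and the TOP size and @EndDate variable are placeholders, not the poster's actual values):

```sql
-- Move one batch from main to archive in a single atomic statement:
-- deleted rows leave ACSLog.dbo.AuditLog and land in the archive via OUTPUT.
DELETE TOP (5000) A
OUTPUT deleted.*
INTO ACSLog_Archive.dbo.AuditLog
FROM ACSLog.dbo.AuditLog A
WHERE A.LogTimeStamp < @EndDate;
```

Note the OUTPUT INTO target has restrictions (no triggers, no foreign-key participation), so this is only viable if the archive table qualifies.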
Best,
Kevin G. Boles
SQL Server Consultant
SQL MVP 2007-2012
TheSQLGuru on googles mail service
February 23, 2017 at 5:56 am
1. The server is Enterprise Edition. I would love to give the server more RAM, but people other than the DBAs dictate how much RAM the servers get, and we have very little input on this matter, it pains me to say. We requested 64 GB on the server but only got 32 GB when it was built. I can't get into the politics of this, but I'm sure you are familiar with similar situations.
2. The query plan on this works very well, normally, as I have indexed everything properly and SQL Server wants for nothing when this problem is not occurring. Normally the batches take 250 seconds when set to 50,000. At 10,000, the batch takes about 100 seconds.
3. There may be a better way to do this, with newer features, but this was written several years ago and I have not thought about changing it because it does work well most of the time. Keep in mind this is not the only process that has a problem, other processes, like reports the business side is trying to run, take unreasonable amounts of time, if they ever finish, and they normally take 7 minutes or less. They give up on the reports after they run for an hour and do not complete.
4. The script queries a table where we set the batch size before each batch runs, allowing us to change the batch size on the fly when this is running, since sometimes it runs for days (depending on how many records it has to archive). I use a SQL job to change the values in the table to ramp up the batch size at night then ramp it down during business hours, for instance.
I am thinking there is a memory leak some place, or something that isn't releasing memory, but don't know how to find it, or verify it is or it isn't.
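One way to check the "something isn't releasing memory" theory without a reboot (a sketch against the standard DMV; pages_kb is the SQL 2012 column name) is to snapshot the largest memory clerks periodically and compare:

```sql
-- Top memory consumers inside the SQL Server process.
-- A clerk that only ever grows between snapshots is a leak candidate.
SELECT TOP (10)
     type
    ,name
    ,SUM(pages_kb) AS pages_kb
FROM sys.dm_os_memory_clerks
GROUP BY type, name
ORDER BY SUM(pages_kb) DESC;
```

Comparing the output from a healthy period against the output when queries start hanging would show whether one clerk is crowding out the workspace memory the grants come from.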
Thanks,
Chris
Learning something new on every visit to SSC. Hoping to pass it on to someone else.
February 23, 2017 at 6:35 am
Just curious, in the plan, how do the memory requests and the actual memory grant line up? I don't think in any way that if they're off it explains why a reboot fixes the issue. However, I'd be curious if you're seeing a very straight forward memory issue, or something to do with misallocation and spills to disk. The second issue could be much more complex.
"The credit belongs to the man who is actually in the arena, whose face is marred by dust and sweat and blood"
- Theodore Roosevelt
Author of:
SQL Server Execution Plans
SQL Server Query Performance Tuning
February 23, 2017 at 7:22 am
Stamey - Thursday, February 23, 2017 5:56 AM (quoted in full above)
Item 1) Yep, I am indeed familiar with such issues. And as a long-time SQL Server consultant I have been VERY successful in making them go away when I explain to the idiots that made the decisions that RAM access is measured in MICROSECONDS and disk access is usually measured in tens to hundreds of MILLISECONDS, which is 4-5 ORDERS OF MAGNITUDE SLOWER. Even SSD-based access is several orders of magnitude slower. And not only does that memory get used to store the data that is actually being processed, it is used for MANY other things such as SORTing, HASHing, query plans, etc, etc. And if there isn't enough, stuff either waits for it or gets thrown down to disk. You simply cannot be successful when CPUs (which do BILLIONS OF THINGS PER SECOND PER CORE) have to wait several orders of magnitude longer than they should to get stuff to work on. I have actually turned away clients who come to me with performance problems when they are doing something stupid like trying to run a terabyte database on an 8 or 16GB box, telling them that even though I am extraordinarily good at finding and fixing SQL Server performance problems I simply cannot compete with the across-the-board multiple-orders-of-magnitude improvement they will get from simply increasing RAM.
2) When query plans go out to lunch (and it isn't due to simple blocking), I always check the actual and estimated rows, and often find orders of magnitude differences that are at the root of the poor plan that is causing the performance issue. How do those compare for you?
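One low-impact way to get that estimate-vs-actual comparison is to run a single small batch with the actual plan captured (a sketch: the rollback keeps the test from actually deleting anything, and the TOP size is illustrative; if the grant problem is active this will queue like everything else, so it may only work while the server is healthy):

```sql
BEGIN TRAN;  -- test only: rolled back below so no rows are really removed

SET STATISTICS XML ON;  -- returns the actual plan, with ActualRows per operator

DELETE TOP (500) A
FROM ACSLog.dbo.AuditLog A
INNER JOIN ACSLog_Archive.dbo.AuditLog B
    ON A.AuditLogID = B.AuditLogID
OPTION (MAXDOP 1);

SET STATISTICS XML OFF;

ROLLBACK TRAN;
```

In the returned plan XML, compare EstimateRows to ActualRows on each operator; orders-of-magnitude differences usually point at stale statistics or a bad cardinality estimate.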
Best,
Kevin G. Boles
SQL Server Consultant
SQL MVP 2007-2012
TheSQLGuru on googles mail service
February 23, 2017 at 8:04 am
Thanks for the reply. I cannot get the archival query to execute at this time because it can never get a memory grant from SQL Server, so I cannot check the exec plan. (Note: I have killed the job and restarted it multiple times, and reduced the batch size to a measly 500 records, but it still will not complete even one batch.) I will likely have to wait for an outage window, which may be a week or more, unless the performance slows to the point that the client is complaining enough that they give us an outage window sooner. Yes, I'm caught between a rock and a hard place.
Give me a list, if you don't mind, of what you would like to see. I will get what I can now, and the other will have to wait until I can get a server reboot, but then the problem will no longer exist until it happens again.....
Additional info: The server OS response slows during these times (noticeable when connecting by RDP), and it will not answer various requests in addition to the ones I have mentioned, such as a linked-server query from another server, which fails with a message that it could not be completed because a resource limit was reached. This may be due to the archival process, but I can't be sure.
Thanks,
Chris
Learning something new on every visit to SSC. Hoping to pass it on to someone else.
February 23, 2017 at 9:27 am
Can you post the estimated query plan up? We may be able to do something that can force a particular type of query plan that doesn't require a massive memory grant, such as a simple nested loop (which you probably should be getting anyway if you are properly indexed). Speaking of which, post up the complete table definitions, including all indexes. And the complete set of code you are using in this activity, which is presumably not just the delete statement you posted earlier.
Best,
Kevin G. Boles
SQL Server Consultant
SQL MVP 2007-2012
TheSQLGuru on googles mail service
February 23, 2017 at 11:32 am
Adding the estimated query plans for two options: 1. the current design, with a join on the delete, and 2. a modification to use Delete Where In (Select From ...).
FYI, while I select the records based on a date column, with an index, I am joining on the clustered indexes of both tables. The temp table does not have an index but contains only one column.
Also, when I changed the script to use the In(Select...) delete language I also changed the batch size to 50 and attempted a run. It still stopped with a resource_semaphore wait on the delete.
Requested memory is 4737552 KB
Granted is Null
Required is 1024
Thanks,
Chris
Learning something new on every visit to SSC. Hoping to pass it on to someone else.
February 23, 2017 at 1:07 pm
Stamey - Thursday, February 23, 2017 11:32 AM (quoted in full above)
I still need ALL of the code involved, and all tables and their indexes. As I strongly suspected there is way more going on than your initial DELETE query, including a temp table and two different criteria for matching records.
Best,
Kevin G. Boles
SQL Server Consultant
SQL MVP 2007-2012
TheSQLGuru on googles mail service
February 23, 2017 at 1:24 pm
Code:
Set NOCOUNT ON
declare @enddate datetime, @NumRecordsToGet int, @FirstDayOfQuarter datetime
Declare @NumRecords Int, @NumAffected Int
Declare @Err Int, @ErrMsg VarChar(255), @Body1 VarChar(500), @Email VarChar(2000)
select @enddate = convert(datetime,convert(varchar,dateadd(month,-3,getdate()),101)), @FirstDayOfQuarter = DATEADD(qq, DATEDIFF(qq,0,GetDate()), 0)
If datediff(day,@FirstDayOfQuarter,GetDate()) > 3 --don't archive for the first 3 days of each quarter; in case something goes wrong with the backup
Begin
While (select logtimestamp from ACSLog.dbo.auditlog where auditlogid = (select min(auditlogid) from ACSLog.dbo.auditlog)) < @EndDate
Begin
Begin Try
Begin Tran --Do the select, insert, and delete in one transaction, so we maintain data integrity. Keep batch numbers reasonably sized for this.
If Object_ID('TempDB..#Archive') Is Not Null
Drop Table #Archive
Select Top 1 @NumRecordsToGet = NumAuditRecords, @Email = ErrorEmail from ACSLog_Archive.dbo.PurgeLimits
Select Top (@NumRecordsToGet) AuditLogID Into #Archive from ACSLog.dbo.auditlog With (NoLock) Where logtimestamp < @enddate --Populate Temp table.
Insert Into ACSLog_Archive.dbo.AuditLog --Put records into archive table where they do not exist in archive table.
Select L.* From ACSLog.dbo.AuditLog L With (NoLock)
Inner Join #Archive A On L.AuditLogID = A.AuditLogID
Left Join ACSLog_Archive.dbo.AuditLog B On A.AuditLogID = B.AuditLogID
Where B.AuditLogID Is Null OPTION (MAXDOP 2)
Select @NumAffected = @@Rowcount
Delete
From ACSLog.dbo.AuditLog Where AuditLogID In(Select AuditLogID From ACSLog_Archive.dbo.AuditLog) OPTION (MAXDOP 1)
--Delete A
--From ACSLog.dbo.AuditLog A With (Rowlock)
--Inner Join ACSLog_Archive.dbo.AuditLog B On A.AuditLogID = B.AuditLogID OPTION (MAXDOP 1)
Drop table #Archive;
Insert Into ACSLog_Archive.dbo.PURGE_StatusLog (ArchivedCount, TableName)
Values (@NumAffected, 'AuditLog');
Set @NumAffected = 0 --Reset for next loop.
Commit Tran
End Try
BEGIN CATCH
IF @@TRANCOUNT > 0
RollBack Tran
Set @Err = ltrim(str(error_number()))
Set @ErrMsg = error_message()
Print 'ErrMsg: ' + @ErrMsg + ': Rolling Back Transaction'
Break --out of while loop.
END CATCH
End
If @Err > 0
Begin
Set @Body1 = 'Error: ' + IsNull(Convert(NVarChar(10), @Err), '') + ': ' + IsNull(@ErrMsg, '')
EXEC msdb.dbo.sp_send_dbmail
@recipients = @Email,
@body = @Body1,
@subject = 'An error occurred Archiving the AuditLog table.' ,
@body_format = 'HTML' ;
End
End
Prod Tables:
CREATE TABLE [dbo].[AuditLog](
[AuditLogID] [int] IDENTITY(1,1) NOT NULL,
[Client] [nvarchar](100) NOT NULL,
[RequestingSystemID] [nvarchar](256) NOT NULL,
[RequestingLocationID] [nvarchar](256) NOT NULL,
[BusinessName] [nvarchar](256) NULL,
[MachineName] [nvarchar](256) NULL,
[AppDomainName] [nvarchar](512) NOT NULL,
[LocationNPI] [nvarchar](50) NULL,
[UserNPI] [nvarchar](50) NULL,
[RequestorFirstName] [nvarchar](100) NOT NULL,
[RequestorLastName] [nvarchar](100) NOT NULL,
[InternalSystemID] [nvarchar](256) NULL,
[ClientCertification] [nvarchar](256) NULL,
[CertificationValidated] [int] NULL,
[PatientIdentifier] [nvarchar](256) NULL,
[PatientDatasource] [nvarchar](50) NULL,
[PatientFirstName] [nvarchar](100) NULL,
[PatientLastName] [nvarchar](100) NULL,
[PatientMiddleName] [nvarchar](100) NULL,
[PatientDOB] [datetime] NOT NULL,
[PatientGender] [nvarchar](1) NOT NULL,
[MessageType] [nvarchar](256) NOT NULL,
[MessageIn] [varchar](max) NULL,
[MessageOut] [varchar](max) NULL,
[EventType] [nvarchar](100) NOT NULL,
[LogTimeStamp] [datetime] NOT NULL,
[BreakGlass] [bit] NULL,
[CallingSystem] [nvarchar](100) NULL,
[MessageTimeStamp] [datetime] NULL,
[MessageCorrelationId] [bigint] NULL,
[InvocationDirectionId] [int] NULL,
[OutMessageType] [nvarchar](256) NULL,
[MessageNote] [nvarchar](1000) NULL,
CONSTRAINT [PK_AuditLog] PRIMARY KEY CLUSTERED
(
[AuditLogID] ASC
)WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON, FILLFACTOR = 80) ON [PRIMARY]
) ON [PRIMARY] TEXTIMAGE_ON [PRIMARY]
GO
/****** Object: Index [IX_AuditLog_Client_LocatNPI_LogTimeStamp] Script Date: 2/23/2017 15:18:09 ******/
CREATE NONCLUSTERED INDEX [IX_AuditLog_Client_LocatNPI_LogTimeStamp] ON [dbo].[AuditLog]
(
[Client] ASC,
[LocationNPI] ASC,
[LogTimeStamp] ASC
)WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, SORT_IN_TEMPDB = OFF, DROP_EXISTING = OFF, ONLINE = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON, FILLFACTOR = 80) ON [PRIMARY]
GO
/****** Object: Index [IX_AuditLog_MessageCorrelation] Script Date: 2/23/2017 15:18:09 ******/
CREATE NONCLUSTERED INDEX [IX_AuditLog_MessageCorrelation] ON [dbo].[AuditLog]
(
[MessageCorrelationId] ASC
)WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, SORT_IN_TEMPDB = OFF, DROP_EXISTING = OFF, ONLINE = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON, FILLFACTOR = 80) ON [PRIMARY]
GO
/****** Object: Index [IX_LogTimeStamp] Script Date: 2/23/2017 15:18:09 ******/
CREATE NONCLUSTERED INDEX [IX_LogTimeStamp] ON [dbo].[AuditLog]
(
[LogTimeStamp] ASC
)WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, SORT_IN_TEMPDB = OFF, DROP_EXISTING = OFF, ONLINE = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON, FILLFACTOR = 80) ON [PRIMARY]
GO
/****** Object: Index [IX_MessageTimeStamp] Script Date: 2/23/2017 15:18:09 ******/
CREATE NONCLUSTERED INDEX [IX_MessageTimeStamp] ON [dbo].[AuditLog]
(
[MessageTimeStamp] ASC
)WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, SORT_IN_TEMPDB = OFF, DROP_EXISTING = OFF, ONLINE = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON, FILLFACTOR = 80) ON [PRIMARY]
GO
Archive Tables:
CREATE TABLE [dbo].[AuditLog](
[AuditLogID] [int] NOT NULL,
[Client] [nvarchar](100) NOT NULL,
[RequestingSystemID] [nvarchar](256) NOT NULL,
[RequestingLocationID] [nvarchar](256) NOT NULL,
[BusinessName] [nvarchar](256) NULL,
[MachineName] [nvarchar](256) NULL,
[AppDomainName] [nvarchar](512) NOT NULL,
[LocationNPI] [nvarchar](50) NULL,
[UserNPI] [nvarchar](50) NULL,
[RequestorFirstName] [nvarchar](100) NOT NULL,
[RequestorLastName] [nvarchar](100) NOT NULL,
[InternalSystemID] [nvarchar](256) NULL,
[ClientCertification] [nvarchar](256) NULL,
[CertificationValidated] [int] NULL,
[PatientIdentifier] [nvarchar](256) NULL,
[PatientDatasource] [nvarchar](50) NULL,
[PatientFirstName] [nvarchar](100) NULL,
[PatientLastName] [nvarchar](100) NULL,
[PatientMiddleName] [nvarchar](100) NULL,
[PatientDOB] [datetime] NOT NULL,
[PatientGender] [nvarchar](1) NOT NULL,
[MessageType] [nvarchar](256) NOT NULL,
[MessageIn] [nvarchar](max) NULL,
[MessageOut] [nvarchar](max) NULL,
[EventType] [nvarchar](100) NOT NULL,
[LogTimeStamp] [datetime] NOT NULL,
[BreakGlass] [bit] NULL,
[CallingSystem] [nvarchar](100) NULL,
[MessageTimeStamp] [datetime] NULL,
[MessageCorrelationId] [bigint] NULL,
[InvocationDirectionId] [int] NULL,
[OutMessageType] [nvarchar](256) NULL,
[MessageNote] [nvarchar](1000) NULL,
CONSTRAINT [PK_AuditLog] PRIMARY KEY CLUSTERED
(
[AuditLogID] ASC
)WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON, FILLFACTOR = 80) ON [PRIMARY]
) ON [PRIMARY] TEXTIMAGE_ON [PRIMARY]
/****** Object: Table [dbo].[PURGE_StatusLog] Script Date: 2/23/2017 15:19:49 ******/
CREATE TABLE [dbo].[PURGE_StatusLog](
[TableKey] [bigint] IDENTITY(1,1) NOT NULL,
[CreateDate] [datetime] NULL,
[ArchivedCount] [int] NULL,
[TableName] [varchar](50) NULL,
CONSTRAINT [PK_PURGE_StatusLog] PRIMARY KEY CLUSTERED
(
[TableKey] ASC
)WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON, FILLFACTOR = 80) ON [PRIMARY]
) ON [PRIMARY]
/****** Object: Table [dbo].[PurgeLimits] Script Date: 2/23/2017 15:19:49 ******/
CREATE TABLE [dbo].[PurgeLimits](
[NumAuditRecords] [bigint] NULL,
[NumMessageTranRecs] [bigint] NULL,
[NumMessageTranIDRecs] [bigint] NULL,
[ErrorEmail] [varchar](2000) NULL
) ON [PRIMARY]
ALTER TABLE [dbo].[PURGE_StatusLog] ADD CONSTRAINT [DF_ArchiveStatusLog_CreateDate] DEFAULT (getdate()) FOR [CreateDate]
Thanks,
Chris
Learning something new on every visit to SSC. Hoping to pass it on to someone else.
February 23, 2017 at 2:07 pm
Without a full rewrite, I would try the following.
Looking at the code and the plans, the statement below is not quite what I would do, and I do think it is the culprit of the memory issue.
Delete
From ACSLog.dbo.AuditLog Where AuditLogID In(Select AuditLogID From ACSLog_Archive.dbo.AuditLog) OPTION (MAXDOP 1)
So on each run you are deleting from the main table any row that exists anywhere in the FULL archive table, even rows that were archived long ago.
-- create temp table to hold the keys inserted onto archive db
If Object_ID('TempDB..#Archive_output') Is Not Null
Drop Table #Archive_output;
create table #archive_output
(AuditLogID int)
Insert Into ACSLog_Archive.dbo.AuditLog --Put records into archive table where they do not exist in archive table.
-- when doing the insert output the keys onto the temp table
output inserted.AuditLogID
into #archive_output
Select L.* From ACSLog.dbo.AuditLog L With (NoLock)
Inner Join #Archive A On L.AuditLogID = A.AuditLogID
Left Join ACSLog_Archive.dbo.AuditLog B On A.AuditLogID = B.AuditLogID
Where B.AuditLogID Is Null OPTION (MAXDOP 2)
Select @NumAffected = @@Rowcount
-- this one is a killer for sure
--Delete
--From ACSLog.dbo.AuditLog Where AuditLogID In(Select AuditLogID From ACSLog_Archive.dbo.AuditLog) OPTION (MAXDOP 1)
-- replace the above with a join to the keys that we know were inserted into the archive as part of this run
Delete l
From ACSLog.dbo.AuditLog l
inner join #archive_output ao
on ao.AuditLogID = l.AuditLogID;
Drop table #archive_output;
February 23, 2017 at 2:17 pm
Yes, sort of. The archive DB only has 90 days' worth of history in it. Each quarter we back up the DB and store it offline in case it is needed for historical purposes (HIPAA law).
The reason it is done this way is to make absolutely sure that the only records being deleted from Prod are records that ARE in the archive table, so that nothing is lost. Yes, there are multiple safeguards, like the transaction, but management wants to be sure.
Thanks,
Chris
Learning something new on every visit to SSC. Hoping to pass it on to someone else.
February 23, 2017 at 3:21 pm
The code I gave does guarantee that.
But if you wish to have the extra validation, you can try this variation on the code I gave:
Delete l
From ACSLog.dbo.AuditLog l
inner join #archive_output ao
on ao.AuditLogID = l.AuditLogID
where exists (select top 1 1
from ACSLog_Archive.dbo.AuditLog b
where Ao.AuditLogID = B.AuditLogID
)
February 24, 2017 at 5:19 am
If you look at the code I posted, I was doing the delete with a join:
Delete A
From ACSLog.dbo.AuditLog A With (Rowlock)
Inner Join ACSLog_Archive.dbo.AuditLog B On A.AuditLogID = B.AuditLogID OPTION (MAXDOP 1)
It was suggested in another post that I do it with the Delete From ... Where In (Select ...), so that is just a test here. Looking at both estimated exec plans I posted, the Where In version should be slightly faster. This does not set anything in stone, so I am still open to any ideas.
I am going to try your idea now.
Thanks,
Chris
Learning something new on every visit to SSC. Hoping to pass it on to someone else.
February 24, 2017 at 5:46 am
Well, I learned something new. It is working, and working very fast: 1,000 records in 3 seconds or less, and 50,000 in, well, it started at 106 seconds vs. the previous ~290 seconds, but has now worked its way down to between 52 and 76 seconds per batch. Thanks for the idea, Frederico, and for teaching me something. I am using your code variation for the additional safety.
I have 9.4M records to archive as of now, since the process stopped working 2-19.
Memory looks like this now:
requested_memory_kb granted_memory_kb required_memory_kb used_memory_kb max_used_memory_kb
575312 575312 1024 0 33792
Thanks,
Chris
Learning something new on every visit to SSC. Hoping to pass it on to someone else.