May 8, 2015 at 3:46 am
Well, I don't know exactly, but my friend knows it. I'll ask him... :-D
May 13, 2015 at 2:13 am
Well, for this I have to use the same server, but a different database.
May 13, 2015 at 3:48 pm
Hi,
Another point to note here is that the table is being replicated, and it has more than 55,000,000 records older than 3 months. Is that number of rows OK to delete in one batch when the table is set up for replication?
How can I do it in a better way, without a performance impact?
May 13, 2015 at 4:48 pm
ramana3327 (5/13/2015)
Hi, Another point to note here is that the table is being replicated, and it has more than 55,000,000 records older than 3 months. Is that number of rows OK to delete in one batch when the table is set up for replication?
How can I do it in a better way, without a performance impact?
Absolutely not OK to delete that number of rows in one batch. You'll saturate memory, and your log file will grow to match even if you're in the SIMPLE recovery model, never mind what it would do to replication.
You'll need to build a "DELETE crawler" or, depending on the ratio of rows that you want to keep vs. delete, you might want to build an "INSERT crawler" to a new table, drop the old table (make sure you have scripts for indexes, privileges, constraints, etc., first), and rename the new table.
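A minimal sketch of such a "DELETE crawler", assuming a surrogate key `id` and a `Created` date column (table and column names are illustrative, not from any particular system):

```sql
-- Batched "DELETE crawler": removes old rows in small chunks so the
-- transaction log and lock footprint stay small. Names are examples.
DECLARE @BatchSize   INT = 10000;
DECLARE @RowsDeleted INT = 1;

WHILE @RowsDeleted > 0
BEGIN
    DELETE TOP (@BatchSize)
    FROM dbo.[Error]
    WHERE Created < DATEADD(MONTH, -3, CONVERT(date, GETDATE()));

    SET @RowsDeleted = @@ROWCOUNT;  -- 0 when nothing is left to delete

    -- Optional: CHECKPOINT (SIMPLE recovery) or BACKUP LOG (FULL recovery)
    -- between batches so the log space can be reused.
END
```

The loop keys off `@@ROWCOUNT` rather than a precomputed total, so it ends as soon as a pass finds nothing left to delete.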
If it's set up for replication, it would probably be better to turn replication off for the table, do whatever you need to do, and then restart it.
To be sure, I don't know much about replication because I don't use it. We use our SANs to do that for us. But, if it were me, I'd spend some serious time learning how well (or not) replication and one form of partitioning or another play together.
--Jeff Moden
Change is inevitable... Change for the better is not.
May 13, 2015 at 4:50 pm
And just to be sure, there's no freakin' way that I'd use SSIS for any of this. 😉
--Jeff Moden
Change is inevitable... Change for the better is not.
May 21, 2015 at 2:42 pm
I wrote the stored procedure below and tested it. When I tested it, it does not come out of the loop after deleting the records from the Error table.
Is it good performance-wise when the table is replicating? Any more ideas?
CREATE PROCEDURE [dbo].[ErrorArchive]
AS
SET NOCOUNT ON;

DECLARE @r INT;
DECLARE @TOTALCOUNT INT;

SET @r = 1;

SELECT @TOTALCOUNT = COUNT(*)
FROM [Testdb].[dbo].[Error]
WHERE Created < DATEADD(MONTH, -3, CONVERT(date, GETDATE()));

WHILE @r < @TOTALCOUNT
BEGIN
    INSERT INTO [dbo].[Error_archive]
    SELECT TOP (10000) *
    FROM [Testdb].[dbo].[Error]
    WHERE Created < DATEADD(MONTH, -3, CONVERT(date, GETDATE()))
    ORDER BY [id];

    DELETE
    FROM [Testdb].[dbo].[Error]
    WHERE id IN (SELECT [id] FROM [dbo].[Error_archive])
      AND id IN (SELECT TOP (10000) id
                 FROM [Testdb].[dbo].[Error]
                 WHERE Created < DATEADD(MONTH, -3, CONVERT(date, GETDATE()))
                 ORDER BY [id]);
END
GO
May 21, 2015 at 4:34 pm
ramana3327 (5/21/2015)
I wrote the stored procedure below and tested it. When I tested it, it does not come out of the loop after deleting the records from the Error table. Is it good performance-wise when the table is replicating? Any more ideas?
CREATE PROCEDURE [dbo].[ErrorArchive]
AS
SET NOCOUNT ON;

DECLARE @r INT;
DECLARE @TOTALCOUNT INT;

SET @r = 1;

SELECT @TOTALCOUNT = COUNT(*)
FROM [Testdb].[dbo].[Error]
WHERE Created < DATEADD(MONTH, -3, CONVERT(date, GETDATE()));

WHILE @r < @TOTALCOUNT
BEGIN
    INSERT INTO [dbo].[Error_archive]
    SELECT TOP (10000) *
    FROM [Testdb].[dbo].[Error]
    WHERE Created < DATEADD(MONTH, -3, CONVERT(date, GETDATE()))
    ORDER BY [id];

    DELETE
    FROM [Testdb].[dbo].[Error]
    WHERE id IN (SELECT [id] FROM [dbo].[Error_archive])
      AND id IN (SELECT TOP (10000) id
                 FROM [Testdb].[dbo].[Error]
                 WHERE Created < DATEADD(MONTH, -3, CONVERT(date, GETDATE()))
                 ORDER BY [id]);
END
GO
I suspect it's not coming out of the loop because you're INSERTing/DELETEing 10,000 rows at a time, but @r is only ever set to 1 and never incremented, so the WHILE @r < @TOTALCOUNT condition never becomes false.
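One way to fix it is to loop on `@@ROWCOUNT` instead of a counter, so the loop ends when a pass moves no rows. A sketch using the same table names as the posted procedure (still worth batch-testing against replication before trusting it):

```sql
CREATE PROCEDURE dbo.ErrorArchive
AS
SET NOCOUNT ON;

DECLARE @Rows INT = 1;

WHILE @Rows > 0
BEGIN
    BEGIN TRANSACTION;

    -- Archive the oldest batch of expired rows...
    INSERT INTO dbo.Error_archive
    SELECT TOP (10000) *
    FROM [Testdb].dbo.[Error]
    WHERE Created < DATEADD(MONTH, -3, CONVERT(date, GETDATE()))
    ORDER BY id;

    -- ...then delete exactly the rows that have been archived.
    DELETE e
    FROM [Testdb].dbo.[Error] AS e
    WHERE e.id IN (SELECT id FROM dbo.Error_archive)
      AND e.Created < DATEADD(MONTH, -3, CONVERT(date, GETDATE()));

    SET @Rows = @@ROWCOUNT;  -- 0 rows deleted ends the loop

    COMMIT TRANSACTION;
END
GO
```

Keeping each batch in its own small transaction is what keeps the log (and the replication distributor) from being hit with one giant 55-million-row operation.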
-- Itzik Ben-Gan 2001
May 24, 2015 at 5:54 am
Dear All,
Even in my environment we need to purge the data from one of my tables. But our concern is: after purging the data, will that reduce my data file?
May 24, 2015 at 9:52 am
vivek.ghag52 (5/24/2015)
Dear All, Even in my environment we need to purge the data from one of my tables. But our concern is: after purging the data, will that reduce my data file?
Simply deleting (purging) data will not reduce the size of your data file. It will provide more free space in your database that can be used without having to grow your database.
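For example, you can see how much of that space is free after the purge with the built-in procedure (database name from this thread; substitute your own):

```sql
-- Reports total database size plus reserved vs. unallocated space.
USE Testdb;
EXEC sp_spaceused;
```

The `unallocated space` figure in the first result set is the freed space the database can reuse without growing the file.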
May 24, 2015 at 10:12 am
Agreed, but I want to shrink my data file since the drive on my secondary has less space, which is now insufficient for the current size of the database. Can I go ahead and shrink my ndf data file?
May 24, 2015 at 10:47 am
vivek.ghag52 (5/24/2015)
Agreed, but I want to shrink my data file since the drive on my secondary has less space, which is now insufficient for the current size of the database. Can I go ahead and shrink my ndf data file?
Not exactly sure what you are saying, but if you shrink any of the data files, mdf or ndf, you'll need to rebuild the indexes that reside in those files, as they will become fragmented and impact system performance.
May 24, 2015 at 10:56 am
Yeah, to add on to what Lynn said, unless you absolutely need the drive space back (actually need, not just want :-)), it's generally best just not to shrink a data file.
The worst part about it is that as Lynn pointed out, it will likely cause very high levels of fragmentation, and then if you rebuild, you need free space roughly the size of the rebuilt index to do the rebuild, so the data file will then grow again and leave you with more of that free space in the data file you don't want.
At that point, you've generated a large amount of IO for the shrink and rebuild, and you still have a bunch of free space in the file anyway. Also, provided the drive's not about to run out of space, free space in the file is good, as it avoids needless autogrowth events (even though data file autogrowth events are less painful than log autogrowth events, if you have IFI enabled).
So, to remove the fragmentation without growing the file again anyway, you'd need to REORGANIZE the indexes, which might not remove all the fragmentation.
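A reorganize compacts pages in place and doesn't need the large contiguous free space a rebuild does, which is why it avoids regrowing the file. A sketch (table name is a placeholder):

```sql
-- REORGANIZE is an online, in-place operation; it won't grow the file
-- the way REBUILD can, but it may not remove all fragmentation.
ALTER INDEX ALL ON dbo.[Error] REORGANIZE;

-- Check what fragmentation remains afterwards:
SELECT index_id, avg_fragmentation_in_percent
FROM sys.dm_db_index_physical_stats(
         DB_ID(), OBJECT_ID('dbo.Error'), NULL, NULL, 'LIMITED');
```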
I'd recommend reading http://www.sqlskills.com/blogs/paul/why-you-should-not-shrink-your-data-files/ to see all the issues that come with shrinking a file, and the ways to go about reclaiming the space if you absolutely have to.
I hope this helps!
EDIT: Upon a reread, my initial wording about the space needed for a rebuild seemed a bit confusing, so I changed it.
May 24, 2015 at 9:47 pm
I have an alternative; please let me know if it will work. I will add another data file, move that specific table into it, and then shrink that particular data file after purging data from the table. Let me know if my option of shrinking the data file is correct.
July 3, 2015 at 2:05 pm
vivek.ghag52 (5/24/2015)
I have an alternative; please let me know if it will work. I will add another data file, move that specific table into it, and then shrink that particular data file after purging data from the table. Let me know if my option of shrinking the data file is correct.
Don't shrink data files unless you plan to rebuild or reorganize the indexes. Shrinks cause massive fragmentation.
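If you do go ahead, the sequence would be something like this (logical file name and table name are examples, not from the thread):

```sql
-- Shrink the secondary data file down to a target size in MB...
DBCC SHRINKFILE (N'Testdb_data2', 1024);

-- ...then repair the fragmentation the shrink just caused.
ALTER INDEX ALL ON dbo.[Error] REBUILD;
```

Bear in mind the rebuild needs free space roughly the size of the index, so it may grow the file right back; that trade-off is covered in the article linked earlier in the thread.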
--Jeff Moden
Change is inevitable... Change for the better is not.