April 21, 2010 at 6:42 am
Hello all,
We have a job that is set to run over the weekend on one of our SQL Servers (SQL 2000 SP4). What we need to do makes sense - we have tables in a database that need to be purged on a weekly basis. The database is getting up there in size - it's just over 100GB now, and pretty much every table in the DB has millions of rows.
Here is the SQL script that runs weekly. Mind you, this is only one step in the SQL Server Agent job - the same step is repeated about 30 more times, once for each table in this database.
use master
go
-- truncate the transaction log without backing it up (breaks the log backup chain)
backup log audit with no_log
go
use audit
go
-- shrink the log file back down to 500 MB
dbcc shrinkfile (audit_log, 500)
go
use audit
-- delete the data older than 90 days
delete from so_aud where so_timestamp < getdate() - 90
use master
go
-- truncate and shrink the log again after the delete
backup log audit with no_log
go
use audit
go
dbcc shrinkfile (audit_log, 500)
go
-- shrink the whole database, leaving 10% free space
DBCC SHRINKDATABASE (audit, 10)
GO
As you can imagine - that's a lot of DBCC. The job takes hours to run... there has to be a better way...
Any ideas would be appreciated...
D
April 21, 2010 at 8:37 am
There was an article on this site recently about deleting large numbers of records while minimising the effect on the log file.
http://www.sqlservercentral.com/articles/T-SQL/67898/
Some more interesting tips came up in the discussion of that article, too.
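For what it's worth, here's a rough sketch of that batched-delete idea applied to the so_aud table from the first post. Since you're on SQL 2000 it uses SET ROWCOUNT rather than DELETE TOP; the 10,000-row batch size, the log backup between batches, and the backup path are all assumptions you'd need to adjust - see the article and its discussion for the details.
use audit
go

set rowcount 10000                -- process at most 10,000 rows per delete (SQL 2000 syntax)

while 1 = 1
begin
    delete from so_aud
    where so_timestamp < getdate() - 90

    if @@rowcount = 0 break       -- nothing older than 90 days left, so stop

    -- keep the log small between batches instead of shrinking it afterwards
    backup log audit to disk = 'D:\Backups\audit_log.bak'   -- hypothetical path
end

set rowcount 0                    -- reset back to unlimited
go
If the database is in simple recovery you can drop the backup log line entirely - the log truncates on checkpoint, and with small batches it never has to grow much.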
April 21, 2010 at 6:17 pm
1. Make sure you have clustered indexes on the so_timestamp column and on whichever other columns you use for range selection (see the sketch below).
2. Don't shrink databases. It's quite a useless exercise -
unless you want an excuse to upgrade your disks and are trying to make them crash sooner.
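To illustrate point 1 - a minimal sketch only, the index name is made up, and whether so_timestamp really is the right clustering key depends on how each table is queried and loaded:
-- lets the weekly purge do a range scan on the clustered index
-- instead of scanning the whole table
create clustered index IX_so_aud_so_timestamp
    on so_aud (so_timestamp)
go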
_____________
Code for TallyGenerator