March 8, 2016 at 7:59 am
Hi Team,
I have a DB that is about 600 GB in size. Since operations on it take a lot of time, we create a compact DB (about 50 GB) by deleting records from many of the tables, keeping only the invoices we need.
But this process takes about 40 hours to complete, so we lose about 2 days of testing.
I cannot use TRUNCATE TABLE because it does not support a WHERE clause.
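To give you an idea, the deletes we run are roughly this shape per table (table and column names here are made up, just to illustrate the pattern):

DELETE FROM InvoiceLine
WHERE InvoiceID NOT IN (SELECT InvoiceID FROM InvoicesToKeep);  -- fully logged, table by table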
My questions:
> Is there any faster way of deleting? For example, can we somehow stop transaction logging for DELETE statements?
> Is there any other, faster way of creating the compact DB, if any of you do similar activities? Please note we don't have the option of creating another DB; we have to do this on the same DB itself.
I was thinking of the below, but I don't think it will work; let me know your thoughts on this:
> Create a DB snapshot with the invoices we need -- is this possible?
> Then restore the DB snapshot we created onto our main DB -- is this also possible?
Any pointers/suggestions on this would be helpful.
March 8, 2016 at 8:03 am
If deleting takes too long, simply create an empty database and migrate just the data you need into it. Why do you think this is not a valid option?
-- Gianluca Sartori
March 8, 2016 at 10:56 am
We can't have another DB due to various constraints (costing, etc.),
so I want suggestions for compacting on the same DB. The new-DB option is already known to us.
March 8, 2016 at 1:39 pm
Databases are free; there's no cost associated with a new database except the cost of the storage you're saving it to. If 50 GB of disk space is too expensive, I would be very concerned. Isn't this data worth the disk space it's sitting on? If not, then don't bother working with it.
-- Gianluca Sartori
March 8, 2016 at 3:09 pm
As you've already learned the hard way, deleting 600 GB of data is even more time consuming than selecting it 30x over, and yes, it's transaction logging that's eating most of the I/O and CPU cycles. Maintenance on any non-clustered indexes on the table also struggles to keep up, and the updated index pages are logged as well.
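If you do have to delete in place, one partial mitigation (just a sketch; the batch size and the SIMPLE recovery model here are assumptions about your environment) is to delete in batches, so the log space can be reused between batches instead of growing for the whole run:

-- each batch is still fully logged, but under SIMPLE recovery
-- the log can truncate at checkpoints between batches
WHILE 1 = 1
BEGIN
    DELETE TOP (50000) FROM Invoice
    WHERE InvoiceDate < '20160201';
    IF @@ROWCOUNT = 0 BREAK;
END;

Even so, when you're keeping 50 GB out of 600 GB, deleting is the wrong direction.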
You will find that using SELECT ... INTO syntax to select into a new table is relatively much faster, because it is a minimally logged operation, and you can apply a WHERE clause. Also, if you only need a subset of columns for QA purposes, select only those specific columns rather than using SELECT *. On a mid-range server, I'd expect 600 GB to copy this way within an hour.
SELECT *  -- or list only the columns QA actually needs
INTO TestInvoice
FROM Invoice
WHERE InvoiceDate >= '20160201';  -- unambiguous date literal
Once done, you can re-create any indexes needed for your QA query purposes.
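For example, something along these lines (index and column names here are hypothetical):

CREATE CLUSTERED INDEX CIX_TestInvoice_InvoiceDate ON TestInvoice (InvoiceDate);
CREATE NONCLUSTERED INDEX IX_TestInvoice_CustomerID ON TestInvoice (CustomerID);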
"Do not seek to follow in the footsteps of the wise. Instead, seek what they sought." - Matsuo Basho