Speeding up a huge delete

  • Happy to help. Keep us posted on this, please.

  • To keep you posted,

    The row delete worked - a nice end to a nice weekend - down from 118 million rows to 60 million. But the database size hasn't changed; methinks this is something to do with the space not being yielded back to the OS.

    Need a shrinkfile???

     

    JB
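
    A quick way to confirm how much of the file really is unallocated (a minimal sketch; sp_spaceused with no object name reports on the whole database, and @updateusage just refreshes the counters first - YourDatabase is a placeholder for the real database name):

    -- Report database size, unallocated space and used space
    USE YourDatabase
    EXEC sp_spaceused @updateusage = 'TRUE'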

  • You need to shrink the whole database at the end of the process.

    And this part definitely must be done off hours. It will consume as much server resource as it can get and slow everything down very noticeably.

    If you have Enterprise Manager from SQL 2000, you can see in the "Taskpad" view how much space is not allocated for data within the .mdf file. SHRINKDATABASE will release this space (a sketch follows this post).

    _____________
    Code for TallyGenerator
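
    A minimal sketch of that shrink step (YourDatabase is a placeholder; the 10 tells SQL Server to leave 10% free space in the files rather than shrinking them right down to nothing):

    -- Run off hours: returns unallocated space in the data and log files to the OS
    DBCC SHRINKDATABASE (YourDatabase, 10)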

    I may need to do it regardless of time, if an insert MUST take place - and I would imagine that would be sooner rather than later. I've got 11MB unallocated in the .mdf and the log is down to 1MB, none of which is enough for a weekly insert of 30GB. I'm not sure what's happened - the row numbers are definitely half of what they were on Friday!!! Can't believe I'm only being quoted 11MB available...

    Any idea how lengthy the process is?  I know DBCC Defrags take an absolute age.

     

    Cheers,

     

    JB

  • Can you look at space used by each table in the database?

    Seems you're barking on the wrong tree.

    _____________
    Code for TallyGenerator

  • Any code that'll tell me table sizes?

    And that's barking UP the wrong tree...

     

    JB

  • Since you have a 30GB insert coming up, may I suggest you run the shrink after the insert is done, so you don't shrink and regrow the files for nothing?

  • That's one of the phases in the improvement cycle. This entire exercise so far has been the first of those phases: digging the DB out of the grinding, mashed-gear halt it's been in this week.

    Next thing is to automate deletion of any rows whose week_number is more than 5 weeks older than the current date (see the sketch at the end of this post).

    Next is your suggestion - which, wise though it is, I may or may not implement, depending upon both the speed of data growth and how hard the beancounters cough for the next step...

    Which is the fitting of some new disks into the array - with hardware currently being as cheap as chips, there's no excuse nowadays for <1TB servers being bilked of adequate storage... I mean, bloody hell, I've got a disk at home bigger than the database!!

    JB
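
    For that weekly purge, a sketch along these lines could be scheduled as a job (the table and column names are assumptions, and it assumes week_number is stored as a date; deleting in batches keeps each transaction, and therefore the log, small):

    -- Hypothetical table/column names: purge rows more than 5 weeks old, 10,000 rows at a time
    SET ROWCOUNT 10000
    WHILE 1 = 1
    BEGIN
        DELETE FROM dbo.WeeklyData
        WHERE week_number < DATEADD(wk, -5, GETDATE())
        IF @@ROWCOUNT = 0 BREAK
    END
    SET ROWCOUNT 0    -- reset so later statements are not limited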

  • I've got 2... and they are 4 years old each, IIRC.

    Please keep us posted on your progress... This is really an interesting case!

  • I can't believe anyone would let a system get so bad; it just flies in the face of every tenet of good databasing. And the previous DBA had a heart of gold - he was just pretty sore about a raft of issues.

  • Yeah, but also how long were the server and databases left alone without maintenance and without checks for basic stuff like free space? I agree he's partly responsible for this... but maybe he thought he would be around to monitor and fix such problems.

  • Use these:

    DBCC UPDATEUSAGE

    master.dbo.sp_MSforeachtable

    master.dbo.sp_spaceused 'TableName'

    Check out BOL for complete syntax and different options (a combined sketch follows this post).

    Thanks for "UP". Not really my area of expertise...

    _____________
    Code for TallyGenerator
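
    Putting those together, a sketch that reports space used by every user table (each table comes back as its own result set; DBCC UPDATEUSAGE just corrects the space counters first):

    -- Refresh the space accounting for the current database
    DBCC UPDATEUSAGE (0)
    -- Run sp_spaceused against every user table; '?' is replaced with each table name
    EXEC master.dbo.sp_MSforeachtable 'EXEC sp_spaceused ''?'''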

  • Guys,

    Got this error message 36 minutes into the run (I wasn't around to retry):

    --The SQL Server cannot obtain a LOCK resource at this time. Rerun your statement when there are fewer active users or ask the system administrator to check the SQL Server lock and memory configuration.--

    Any idea if I can have SQL lock out the system when first running the statement?

    Cheers,

    JB

     

     

  • Which statement are you talking about?

    _____________
    Code for TallyGenerator

  • The DBCC SHRINKDATABASE one!
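
    On the earlier lock-resource error: one option (a sketch, untested against this server) is to put the database into single-user mode for the duration of the shrink, so nothing else is competing for locks, then open it back up. YourDatabase is a placeholder for the real database name:

    -- Kick everyone out (rolling back their open transactions), shrink, then reopen
    ALTER DATABASE YourDatabase SET SINGLE_USER WITH ROLLBACK IMMEDIATE
    DBCC SHRINKDATABASE (YourDatabase, 10)
    ALTER DATABASE YourDatabase SET MULTI_USER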
