Need better performance with DBCC DBREINDEX

  • I'm working with a 125 GB SQL Server database that's taking over 9 hours to optimize. The server is running SQL Server 2000 Standard Edition with SP3 and a memory-leak fix. SQL Server has been allocated a maximum of 2 GB of the 3.5 GB of RAM. The server has 700 MHz Xeon processors. The database comes as part of a "canned product", so I cannot make any design changes. The database grows by 50+ GB a year. Some of the data in the database cannot be archived for 7 years; other data cannot be archived for 30 years. There's only about 2.5 years of data in the database. What can be done to reduce the amount of time it takes to optimize?

    Management has asked about failover clustering and load balancing. Can either do anything for performance regarding optimization? Any suggestions would be greatly appreciated. 🙂

  • quote:


    Management has asked about failover clustering and load balancing. Can either do anything for performance regarding optimization?


    SQL Server doesn't support load balancing in clustering. Clustering's sole purpose is fault tolerance: when a cluster fails over, all that happens is that the instance of SQL Server running on the failed server is moved to another server.

    You may consider DBCC INDEXDEFRAG, which runs online and does not block user transactions. See BOL for details.
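
    For example (all names here are placeholders for your own database, table, and index):

    ```sql
    -- DBCC INDEXDEFRAG works one index at a time, so a large database is
    -- usually scripted index by index. All names below are hypothetical.
    DBCC INDEXDEFRAG ('CheckImageDB', 'Documents', 'IX_Documents_DocDate')
        WITH NO_INFOMSGS
    ```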

    You could also think about upgrading the hardware, and upgrading SQL Server from Standard Edition to Enterprise Edition so it can use more physical memory, which will speed up performance.

    By the way, are you running some kind of ERP application, such as SAP?

    Edited by - allen_cui on 08/29/2003 09:33:23 AM

  • If I used just INDEXDEFRAG, would SQL Server take a performance hit because of page splits? During a full reorganization, free space is created on the index pages.
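
    Something like the following (the table name is just a placeholder) is how I would check the current fragmentation and page density before deciding:

    ```sql
    -- Logical Scan Fragmentation and Avg. Page Density in the output show
    -- whether a full DBCC DBREINDEX is really needed, or whether INDEXDEFRAG
    -- (which only compacts and reorders existing pages) would be enough.
    DBCC SHOWCONTIG ('Documents') WITH ALL_INDEXES
    ```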

    A hardware upgrade may be an option. Any suggestions on hardware to handle the situation that has been described?

    I upgraded my test box to Enterprise Edition and ran the optimization. It only reduced the runtime by 10%. The RAM on the test box was upgraded to 4 GB, and I increased the max memory for SQL Server to 4 GB. I also enabled AWE. (The operating system is Windows 2000 Server.) The RAM was no longer a bottleneck. Disk I/O turned out to be the bottleneck. Any other suggestions?

    The software is for the banking industry and is a repository for check images and various scanned documents, such as loan paperwork.

    Thank you for responding. I hope to hear from you again soon.

  • quote:


    A hardware upgrade may be an option. Any suggestions on hardware to handle the situation that has been described?


    How many CPUs does the box have? More processors could help.

    quote:


    I upgraded my test box to Enterprise Edition and ran the optimization. It only reduced the runtime by 10%. The RAM on the test box was upgraded to 4 GB, and I increased the max memory for SQL Server to 4 GB. I also enabled AWE. (The operating system is Windows 2000 Server.) The RAM was no longer a bottleneck.


    You don't need AWE in a 4 GB memory configuration. You do need the /3GB switch in your boot.ini file so that SQL Server can address 3 GB of memory while the operating system reserves the remaining 1 GB for itself.
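
    Roughly, and assuming the box stays at 4 GB of RAM:

    ```sql
    -- Assumption: /3GB has been appended to the operating system line in
    -- boot.ini (it is a boot switch, not a SQL Server setting). Then cap the
    -- buffer pool a little under 3 GB; 2816 MB here is only an example value.
    EXEC sp_configure 'show advanced options', 1
    RECONFIGURE
    EXEC sp_configure 'max server memory (MB)', 2816
    RECONFIGURE
    ```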

    quote:


    The RAM was no longer a bottleneck. Disk I/O turned out to be the bottleneck. Any other suggestions?


    How are the disks configured for SQL Server on this box? How many database files do you have, and how are they allocated across the disks?

    You may look into SAN storage for the database.

  • The box has 4 CPUs and is using RAID 5. The database is not "home grown"; upgrades, etc., are provided by the vendor. Checking the database's properties, it has one data file (no secondary files).

  • Multiple files can be used to increase performance. If there is a single database file, SQL Server will use only one thread to read the database; but if there are multiple physical database files and the data is divided among them, SQL Server will use multiple threads (one per physical file) to read the data, which can be faster. In addition, if each file is on its own separate physical disk or disk array, the performance gain is even greater. Of course, there is a point where the additional threads aren't of much use once you max out the server's I/O.

    Locate the database files (.mdf) and log files (.ldf) on separate arrays in your server to isolate potentially conflicting reads and writes. You may have already done that.
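
    A rough sketch of adding a second data file (all names, paths, and sizes below are just examples):

    ```sql
    -- The new file would ideally sit on a separate physical array. Existing
    -- data is not redistributed automatically (new allocations use
    -- proportional fill), so a DBCC DBREINDEX afterwards helps spread the
    -- tables across the files.
    ALTER DATABASE CheckImageDB
    ADD FILE
       (NAME = CheckImageDB_Data2,
        FILENAME = 'F:\MSSQL\Data\CheckImageDB_Data2.ndf',
        SIZE = 20480MB,
        FILEGROWTH = 1024MB)
    ```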

    Edited by - allen_cui on 08/29/2003 11:57:42 AM

  • Are you storing the images in the database?

    If so, are you storing them inline with the data pages? If you are, this will really slow down your system.

    Let me know how you are storing your images and we can work out the best optimisations.

    With regard to load balancing, that can certainly be done with a cluster via federation, but we need to look at your data retrieval process. I presume that, like other check image systems, most of your retrievals (90%) happen in the first 60 days, with the other 10% spread over the next 10 years. That gives you quite a few options for online archival and makes federation fairly easy.
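
    To illustrate the idea, here is a sketch of a local partitioned view split by year; a federated (distributed) partitioned view applies the same pattern across member servers. All table and column names are hypothetical:

    ```sql
    -- Each member table holds one year's rows; the CHECK constraints on the
    -- partitioning column let SQL Server touch only the relevant table.
    CREATE TABLE Docs2002
       (DocID     int          NOT NULL,
        DocDate   datetime     NOT NULL CHECK (DocDate >= '20020101' AND DocDate < '20030101'),
        ImagePath varchar(260) NOT NULL,
        CONSTRAINT PK_Docs2002 PRIMARY KEY (DocID, DocDate))

    CREATE TABLE Docs2003
       (DocID     int          NOT NULL,
        DocDate   datetime     NOT NULL CHECK (DocDate >= '20030101' AND DocDate < '20040101'),
        ImagePath varchar(260) NOT NULL,
        CONSTRAINT PK_Docs2003 PRIMARY KEY (DocID, DocDate))
    GO

    CREATE VIEW AllDocs
    AS
    SELECT DocID, DocDate, ImagePath FROM Docs2002
    UNION ALL
    SELECT DocID, DocDate, ImagePath FROM Docs2003
    ```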

  • Just a few thoughts -

    Use TEXTIMAGE_ON for the table(s) containing the image columns that hold the scanned documents, so that the image data goes on a separate filegroup. Ideally, that filegroup will be on a separate set of physical disks. Also put your indexes on a separate set of disks (and filegroup). If individual tables are very large, you may not want a clustered index on them, because rebuilding a clustered index rewrites the whole table in clustered-index order and takes a long time. Also look at the fill factor on your indexes, as leaving free space you don't need wastes disk space.
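
    A rough sketch of both ideas, assuming hypothetical table, column, and filegroup names:

    ```sql
    -- The DocsData and DocsImages filegroups are assumed to exist already.
    CREATE TABLE ScannedDocs
       (DocID    int NOT NULL PRIMARY KEY NONCLUSTERED,
        DocType  varchar(20) NOT NULL,
        DocImage image NULL)
    ON DocsData                -- filegroup for the data pages
    TEXTIMAGE_ON DocsImages    -- separate filegroup (separate disks) for image data
    GO

    -- Rebuild all indexes on the table with a 90% fill factor; only leave
    -- free space on the pages if the table actually takes inserts between
    -- rebuilds.
    DBCC DBREINDEX ('ScannedDocs', '', 90)
    ```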

  • Here are my 2 cents:

    Our database is more or less the same size as yours. We are running SAP on it.

    These are the steps I take when I have a maintenance window:

    - Do a full backup

    - Switch the recovery model to bulk-logged (bulk operations such as index rebuilds are then only minimally logged; the logging may well be your bottleneck)

    - Start the reindex with the biggest tables

    - Regularly truncate the log

    - At the end, switch the recovery model back to full (after you have made another full backup)

    My experience is that only the first maintenance run takes this long; the following runs (as long as you don't wait months in between) are faster.

    What I would recommend is to do a full DBREINDEX once, within a maintenance window as described here, and then run INDEXDEFRAG regularly between maintenance windows. The advantage of INDEXDEFRAG is that you can run it online (preferably at night).
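
    Roughly, the maintenance-window steps above look like this (database, table, and backup path names are placeholders):

    ```sql
    -- SQL Server 2000 syntax.
    BACKUP DATABASE CheckImageDB TO DISK = 'F:\Backup\CheckImageDB_before.bak'

    ALTER DATABASE CheckImageDB SET RECOVERY BULK_LOGGED

    -- Start with the biggest tables; '' and 90 mean all indexes, 90% fill factor.
    DBCC DBREINDEX ('BigTable1', '', 90)
    DBCC DBREINDEX ('BigTable2', '', 90)

    -- Keep the log from filling up between tables. TRUNCATE_ONLY breaks the
    -- log backup chain, which is why another full backup follows at the end.
    BACKUP LOG CheckImageDB WITH TRUNCATE_ONLY

    ALTER DATABASE CheckImageDB SET RECOVERY FULL

    BACKUP DATABASE CheckImageDB TO DISK = 'F:\Backup\CheckImageDB_after.bak'
    ```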

    I hope that helps

    Bye

    Gabor

  • declanm,

    The images are not being stored in the database. Pointers to the images, which reside on a separate logical drive, are stored in the database.

    Because the database is not a home-grown, custom database, I may be limited as to what I can do. However, I plan to discuss database design issues (such as federation) with the vendor.
