High CPU usage after a backup

  • The CPU usage on our server holds fairly steady at 60% throughout the day. We've noticed it climbs to 90% during a backup job, and it stays at 90% even after the job is complete.

    The backup job takes 30 seconds to a minute to complete, but CPU usage stays at 90% afterwards; according to Task Manager, most of it belongs to the "sqlservr.exe" process. The job updates the statistics and backs up the transaction log.
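
    For what it's worth, the job is roughly equivalent to something like this (a sketch only; the database name and backup path are placeholders, and the actual job may be built as a maintenance plan instead):

        -- Hypothetical maintenance job for illustration only.
        USE MyDB
        GO
        -- Refresh optimizer statistics on every user table.
        EXEC sp_updatestats
        GO
        -- Back up the transaction log, overwriting the previous file.
        BACKUP LOG MyDB
            TO DISK = 'D:\Backups\MyDB_log.bak'
            WITH INIT
        GO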

    The "really" strange thing I've found, is that by just running Enterprise Manager and accessing the Properties dialog of the server (I don't need to toggle any settings), will cause CPU usage to go back down to it's normal 60%.  This seems to work whether I'm doing this on the server via Remote Desktop, or using my local Enterprise Manager to connect to the server remotely.

    Has anyone observed this occurring before?

    We're running SQL Server 2000 Standard Edition SP3 (I know we should be on SP4) on Windows Server 2003. The server is a 2.8 GHz Xeon with 2 GB of RAM.

    The database is at 1 GB with 4 GB allocated; the transaction log is at 2 GB with 7 GB allocated.

  • I'd recommend running SQL Server Profiler to see whether any other jobs run after the backup and are still using those resources. If not, run Windows Performance Monitor to see whether a Windows process is consuming the CPU.
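
    A quick way to check from the SQL side is to query sysprocesses while the CPU is pegged; something like this (a sketch for SQL Server 2000, where spids at or below 50 are system processes):

        -- Show user sessions ordered by cumulative CPU time.
        SELECT spid, status, loginame, hostname, cmd, cpu, physical_io
        FROM master..sysprocesses
        WHERE spid > 50
        ORDER BY cpu DESC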

    Our backups run in the middle of the night, so we're not around to notice if it's doing the same thing on ours.

    Brandie Tarvin, MCITP Database Administrator
    LiveJournal Blog: http://brandietarvin.livejournal.com/
    On LinkedIn, Google+, and Twitter.
    Freelance Writer: Shadowrun. Latchkeys: Nevermore, Latchkeys: The Bootleg War, and Latchkeys: Roscoes in the Night are now available on Nook and Kindle.

  • Backup verification, perhaps? If the job verifies the backup after writing it, that step could account for the continued activity.

    Thank you,
    David Russell
    Any Cloud, Any Database, Oracle since 1982
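
    If the job does include a verification step, it would look something like this (the path is a placeholder); RESTORE VERIFYONLY rereads the entire backup file, which extends the job's disk and CPU activity past the backup itself:

        -- Verify the backup just written (illustrative path only).
        RESTORE VERIFYONLY
            FROM DISK = 'D:\Backups\MyDB_log.bak'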

  • This is most likely due to paging and a long disk queue.

    If you're running on local disks, consider moving to a SAN, or at least adding more disks.

    The easiest way to tell is to run PerfMon and watch the Avg. Disk Queue Length counter for each physical disk separately. The lower, the better. Microsoft's guidance is that local disks shouldn't be a problem as long as the queue length stays below 2 plus the number of disks.

    If you're running on a SAN or other high-end storage array, it should never be above 1.

    A common cause is storing the backup on a data or log drive. Others are putting the page file on the same disk as a log or data file, or storing a log or data file on the system drive.

    By the way, 60% normally? Is this server doing OLTP, OLAP/reporting, or both? If OLTP, why so high? Run Profiler and look for anything with high reads or high duration (500 ms or more), as in the sketch below.
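
    If you save the trace to a file, you can pull the worst offenders back out with fn_trace_gettable (the trace path is a placeholder; in SQL Server 2000, Duration is reported in milliseconds):

        -- Long-running statements from a saved Profiler trace.
        SELECT TextData, Duration, Reads, Writes, StartTime
        FROM ::fn_trace_gettable('D:\Traces\PostBackup.trc', DEFAULT)
        WHERE Duration > 500
        ORDER BY Duration DESC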

    Good luck!

    A Brown
