Runs slower on the High Perf. Machine?

  • I'm inserting data into a temp table from one other temp table and three other tables. The select statement has a couple of joins and several where clauses; it gets ugly (and it's not mine). The statement runs OK on my machine: Profiler shows 294k reads and 494 writes. My machine has one P3 and 400 MB of RAM allocated to SQL Server. My customer's machine has twin Xeons and 3.5 GB of RAM for SQL Server. The same statement running on the same DB (I gave them a backup of the DB from my machine) takes 4x as long, and Profiler indicates 6 million+ reads and 30 writes. No other users are in the system and no other apps are running. I've looked at the execution plan on both servers and they appear to be identical. Task Manager shows about 2 GB of RAM in use and 50-57% CPU usage. Why so many more reads and fewer writes on my customer's machine?

  • I guess the statement running on your client's machine may be executed in parallel because the machine has two CPUs. The default SQL Server setting for 'max degree of parallelism' is 0, which means SQL Server itself decides, based on the statistics it has, whether and how widely to parallelize a query. Maybe you could just change the configuration to disable parallel execution.

    You may also need to update the statistics on the database if you haven't done it.
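    Both suggestions can be sketched in T-SQL (a hypothetical sketch for SQL Server 2000; the table name is a placeholder):

    ```sql
    -- 'max degree of parallelism' is an advanced option, so expose it first.
    EXEC sp_configure 'show advanced options', 1;
    RECONFIGURE;
    -- 1 = never generate parallel plans; 0 (the default) = use all available CPUs.
    EXEC sp_configure 'max degree of parallelism', 1;
    RECONFIGURE;

    -- Refresh optimizer statistics on the tables involved (placeholder name):
    UPDATE STATISTICS dbo.SomeTable WITH FULLSCAN;
    ```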

  • I saw a similar situation at a previous job. We upgraded to an eight-processor machine, and database performance degraded by sixty-something percent (we always compared against baseline stats gathered prior to the upgrade). In profiling it, we found that the I/O cost of recombining the parallel streams was greater than the cost of the original query itself.

    We set the server to a much higher cost threshold before using parallelism and profiled the server again. By repeating this several times, we found the "sweet spot" that gave us the best of both worlds.

    This may not apply to your case, because you say the execution plans are identical, and I'm sure you've probably done this already, but did you examine the costs of each step in the plan for your comparison? You should be able to identify the exact area of the problem by comparing those numbers.
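    The "sweet spot" tuning above, and the per-step comparison, can be sketched like this (the threshold value 25 is purely illustrative):

    ```sql
    -- Raise the estimated cost (default 5 cost units) at which the optimizer
    -- will even consider a parallel plan.
    EXEC sp_configure 'show advanced options', 1;
    RECONFIGURE;
    EXEC sp_configure 'cost threshold for parallelism', 25;
    RECONFIGURE;

    -- To compare actual numbers rather than just the graphical plan, turn
    -- these on in Query Analyzer before running the statement on each server:
    SET STATISTICS IO ON;
    SET STATISTICS TIME ON;
    ```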

    If you're not going to help, please move to the side, because if you're not helping the situation, you're hurting it....

  • A couple of ideas that might be affecting this as well. First, how many times did you run the query that took 4x as long? Initially the data will be on the drives and nothing in memory, so there will be no cache to hit for those items. Run it 4 or 5 times at least to give the cache a chance to take hold, then wait a day or two and try a cold run before you consider the comparison.

    When was the last time statistics were manually updated, indexes rebuilt or defragmented, and DBCC UPDATEUSAGE run to make sure the DB is as clean as possible?
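    On SQL Server 2000 that housekeeping pass might look like this (table name is a placeholder):

    ```sql
    UPDATE STATISTICS dbo.SomeTable WITH FULLSCAN;  -- refresh optimizer stats
    DBCC DBREINDEX ('dbo.SomeTable');               -- rebuild the table's indexes
    DBCC UPDATEUSAGE (0);                           -- fix page/row counts in the current DB
    ```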

    Also, were AWE enabled and the /3GB switch added to boot.ini, if this is SQL 2000 on Windows 2000 Advanced Server? (There are equivalent settings for SQL 7 and NT 4, but I can't remember them off the top of my head.) If not, there can be memory issues involved if they aren't controlled some other way.
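    For reference, on SQL Server 2000 Enterprise the AWE side is a server option (a restart is required for it to take effect), while /3GB goes in the OS boot entry:

    ```sql
    EXEC sp_configure 'show advanced options', 1;
    RECONFIGURE;
    EXEC sp_configure 'awe enabled', 1;
    RECONFIGURE;
    -- boot.ini: append " /3GB" to the Windows 2000 Advanced Server boot line.
    ```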

    You can also have a hard-drive bottleneck or some other hardware issue. Consider the differences in where things are located. If the database works fine on a single-CPU machine with less RAM, but that machine has RAM with a higher bus speed, and the customer's hard drives are in a RAID 5 array as opposed to no RAID or RAID 0, you have to add the cost of parity writes; maybe the drives even spin at a slower speed or have a smaller on-disk cache. The problem is identifying the potential minuses, but like I said, consider where the data lives and how it moves through the system, and compare all the components.

    However, with everything I said here, I agree it is probably max degree of parallelism or the cost threshold, as suggested in the previous posts.

    "Don't roll your eyes at me. I will tape them in place." (Teacher on Boston Public)

  • First of all, I want to thank those of you who responded.

    Some additional info: we tested the report that calls the SP in question on another machine in the customer's environment. That machine was almost exactly like my own: single CPU, 512 MB RAM. The report still took 4+ minutes, where it takes only 62 seconds on my machine. At that point one of our other developers rewrote the entire SP, and now it runs in 45 seconds in our customer's environment. So far the only discrepancy between our customer's machine and mine is that they are on SQL Server SP2 and my machine is still on SP1 (I thought I had already installed SP2, but apparently not). I will be installing SP2 sometime today and then I'll re-test.
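    An easy way to confirm the service-pack level on each server:

    ```sql
    SELECT @@VERSION;                        -- full version string
    SELECT SERVERPROPERTY('ProductLevel');   -- e.g. 'SP1', 'SP2'
    ```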

  • Glad to hear you found a solution. I'd almost bet that if the database was moved by attaching it, the SP was reusing a compiled execution plan stored from when it ran on your machine, and that plan wasn't as happy in the customer's environment due to the differences. Definitely let us know if the code shows any performance changes between SP1 and SP2, though.
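    If a stale cached plan is the suspect, it can be flushed (the procedure name is a placeholder):

    ```sql
    -- Mark the SP for recompilation on its next execution:
    EXEC sp_recompile 'dbo.usp_ReportProc';
    -- Or, more heavy-handed, clear the whole procedure cache (test servers only):
    DBCC FREEPROCCACHE;
    ```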

    "Don't roll your eyes at me. I will tape them in place." (Teacher on Boston Public)
