July 19, 2006 at 11:57 am
hi,
our server has 4 CPUs.
One of my stored procedures used to run on more than one processor (10 to 13) and took about 7 minutes. Now it suddenly runs on a single processor and is taking much more time. Other than this, everything looks fine on the server.
I even added OPTION (MAXDOP 4), but it still uses only one processor. The server is running SQL Server 2000 Standard Edition + SP3.
Any suggestions please?
Thanks
July 19, 2006 at 1:13 pm
What is the max degree of parallelism setting on the server itself?
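For anyone following along, here is a quick sketch of how to check that instance-wide setting from Query Analyzer (it is an advanced option, so it has to be made visible first):

```sql
-- Make advanced options visible so the parallelism setting can be read
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;

-- run_value of 0 means "use all available processors";
-- any other value caps the degree of parallelism for every query
EXEC sp_configure 'max degree of parallelism';
```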
July 19, 2006 at 1:24 pm
It is at the default of 0.
July 20, 2006 at 1:11 am
Since you state that you are using Standard Edition, how much memory is available to the query?
From BOL: Is there enough memory available to execute the query in parallel?
If memory is low, the optimizer cannot execute queries in parallel.
You really should upgrade to Enterprise Edition for more than 2GB of memory use by SQL Server 2000.
Is more than one processor configured for SQL Server to use?
Andy
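Both of Andy's points can be checked from Query Analyzer. A rough sketch (the memory cap for Standard Edition is 2 GB, as noted above):

```sql
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;

-- How much memory is SQL Server allowed to use?
-- Standard Edition will not go beyond 2 GB regardless of this value.
EXEC sp_configure 'max server memory (MB)';

-- A non-default affinity mask can restrict which CPUs SQL Server
-- schedules on; 0 means all processors are available.
EXEC sp_configure 'affinity mask';
```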
July 20, 2006 at 9:32 am
If the optimiser doesn't consider that the query needs a parallel plan, it will not use one, no matter how hard you try!!
[font="Comic Sans MS"]The GrumpyOldDBA[/font]
www.grumpyolddba.co.uk
http://sqlblogcasts.com/blogs/grumpyolddba/
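To expand on that: the optimiser only even considers a parallel plan when the estimated cost of the serial plan exceeds the "cost threshold for parallelism" setting (default 5). A MAXDOP hint caps parallelism but cannot force it. A quick way to check the threshold:

```sql
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;

-- A parallel plan is only considered when the estimated serial cost
-- exceeds this value (default 5); cheaper plans stay serial
-- regardless of any OPTION (MAXDOP n) hint.
EXEC sp_configure 'cost threshold for parallelism';
```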
July 23, 2006 at 5:48 pm
Ummm.. Post the code for the offending proc?
--Jeff Moden
Change is inevitable... Change for the better is not.
July 27, 2006 at 10:09 am
A word of warning - on SP3, parallel queries can be very unpredictable in the consistency of the results they return. I have seen a single query, run repeatedly, returning different results each time.
See the following KB article:
July 27, 2006 at 6:02 pm
THAT is very true... there's a hot fix for it (although I don't remember the #, check the MS website). In fact, if parallelism is spawned and you don't have the hot fix or SP4, and the indexes are just right, you can find yourself in pretty deep Kimchie, because things like "AND somecolumn IS NULL" may be ignored and you will inadvertently update everything in a table - like we did on a 1.2 million row customer table... and then did nightly processing against it, and that's when we also found out that our backups hadn't worked in 10 days... .
Thankfully, we don't have any of those problems anymore, and the DBAs/hardware folks are all wearing the proverbial "Scarlet Letter" pretty well. It's also when we created code standards (especially expected rowcounts on data cleanups) and began enforcing them with the not-so-proverbial "Iron Fist". Needless to say, a lot of great changes came out of the incident, but it took 40 people several days to back out the abhorrent change from bits and pieces of tables that were not affected.
--Jeff Moden
Change is inevitable... Change for the better is not.
July 28, 2006 at 3:40 am
Hmm, I must have been lucky never to encounter this. Mind you, the 818 patch is very elderly now, so I wouldn't be running a box without it.
btw, I doubt we're helping with the original post, which reported that the query refused to run in parallel < grin >
[font="Comic Sans MS"]The GrumpyOldDBA[/font]
www.grumpyolddba.co.uk
http://sqlblogcasts.com/blogs/grumpyolddba/
July 28, 2006 at 6:36 am
How true, how true... I'm still waiting for Ravi to post some code and maybe give us a hint as to the rowcounts in the tables.
Ravi, while we're waiting, you should take a look and see if anyone added new indexes to your tables (or something similar) which may be tricking the optimizer out of running in parallel. Seemingly harmless changes to the code will do the same thing.
--Jeff Moden
Change is inevitable... Change for the better is not.
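A sketch of how to follow that advice on SQL 2000 ('dbo.YourTable' is a placeholder for one of the tables the proc touches):

```sql
-- List the indexes on a suspect table; a newly added index can
-- change the plan the optimizer picks.
EXEC sp_helpindex 'dbo.YourTable';

-- Check when statistics were last updated; stale statistics can
-- also mislead the optimizer's cost estimate away from a parallel plan.
SELECT name,
       STATS_DATE(id, indid) AS stats_last_updated
FROM   sysindexes
WHERE  id = OBJECT_ID('dbo.YourTable');
```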