November 12, 2012 at 3:00 am
Artoo22, to answer you, the databases have the same indexes. To prove this to myself today (at a loss with this one), I again backed the database up only 3 hours ago, re-restored it, and rebuilt all the indexes again.
I see what you're saying, but I just don't have a logical explanation for it. They are 100% like for like in terms of data, schema and structure.
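For anyone following along, a rebuild/stats refresh of that sort would look roughly like this (dbo.Article is just an example table; run it per table or script it across the database):
-- Illustrative only: rebuild all indexes on one table, then refresh its statistics.
ALTER INDEX ALL ON dbo.Article REBUILD;
-- Column statistics can be refreshed separately with a full scan:
UPDATE STATISTICS dbo.Article WITH FULLSCAN;
-- Or refresh statistics across the whole database:
EXEC sp_updatestats;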
November 12, 2012 at 3:19 am
Can you run this on both servers?
SELECT OBJECT_NAME(s.object_id) AS TableName,
       s.name AS StatsName,
       s.auto_created,
       s.user_created,
       STATS_DATE(s.object_id, s.stats_id) AS StatsDate,
       c.name AS ColumnName
FROM sys.stats s
INNER JOIN sys.stats_columns sc
    ON s.object_id = sc.object_id
    AND s.stats_id = sc.stats_id
INNER JOIN sys.columns c
    ON s.object_id = c.object_id
    AND sc.column_id = c.column_id
ORDER BY s.object_id,
         s.name,
         c.column_id
November 12, 2012 at 3:51 am
OK, run it on each database:
1. for the SQL 2005
2. for the SQL 2008 R2
November 12, 2012 at 5:36 am
November 12, 2012 at 1:33 pm
I now have 8 of them spread over the log and data drives at the moment.
They were all on the RAID 1 OS drive before, and with that configuration I also saw the same sort of issues, hence my desperation in spreading them around.
November 13, 2012 at 12:29 am
How many cores do you have? It should be enough to have one or two tempdb data files on the data drives, and one tempdb log file on the log drive.
Disable autogrowth on the tempdb data files and expand them to equal sizes.
For the SQL Server I also usually set max degree of parallelism to 1, under advanced server settings.
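Roughly what that looks like in T-SQL, as a sketch only (the logical file names and sizes are placeholders for your environment):
-- Pre-size the tempdb data files equally and disable autogrowth (FILEGROWTH = 0).
ALTER DATABASE tempdb MODIFY FILE (NAME = tempdev,  SIZE = 4096MB, FILEGROWTH = 0);
ALTER DATABASE tempdb MODIFY FILE (NAME = tempdev2, SIZE = 4096MB, FILEGROWTH = 0);
-- Server-wide max degree of parallelism (an advanced option).
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'max degree of parallelism', 1;
RECONFIGURE;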
Do you have any joins in your query which have clustered or covered indexes? Can you align the index fields to have the same sequence as the fields you use for the joins?
Example: SELECT * FROM table1 LEFT JOIN table2 ON table1.a = table2.a AND table1.b = table2.b
Index: fields a,b
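As a sketch, an index keyed in that same order would look something like this (the table and column names are from the example above, purely illustrative):
-- Key columns in the same order as the join predicate (a, b).
CREATE NONCLUSTERED INDEX IX_table2_a_b ON dbo.table2 (a, b);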
I had queries running in seconds on SQL 2005 which then took many minutes on SQL 2008 R2, just because the query and index were not aligned.
November 13, 2012 at 6:34 am
carsten.kunz 37074 (11/13/2012)
For the SQL Server I also usually set max degree of parallelism to 1, under advanced server settings.
Why do you do such a thing?
There are no special teachers of virtue, because virtue is taught by the whole community.
--Plato
November 13, 2012 at 4:28 pm
opc.three (11/13/2012)
carsten.kunz 37074 (11/13/2012)
For the SQL Server I also usually set max degree of parallelism to 1, under advanced server settings.
Why do you do such a thing?
I have mostly OLTP systems, not much data warehousing or reporting. It gives me more consistent query execution times and plans, and so far there has really been no need for it.
November 14, 2012 at 12:05 am
carsten.kunz 37074 (11/13/2012)
opc.three (11/13/2012)
carsten.kunz 37074 (11/13/2012)
For the SQL Server I also usually set max degree of parallelism to 1, under advanced server settings.
Why do you do such a thing?
I have mostly OLTP systems, not much data warehousing or reporting. It gives me more consistent query execution times and plans, and so far there has really been no need for it.
If that has been your experience on the instances you manage, then that is fair enough. I asked because, taken out of context, your comment about "usually setting Max Degree of Parallelism to 1", if read as a general rule for setting up any new instance, could be seen as starting off on the wrong foot in many cases. Thank you for posting back.
Another option to consider, in cases where you have databases with hybrid workloads, consisting mostly of queries that will not benefit from parallelism but also containing some more complicated queries that could, is to either leave Max Degree of Parallelism at 0 or set it to a value greater than 1 but less than the number of logical CPUs, and, alongside either of those changes, to raise the Cost Threshold for Parallelism.
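As a rough sketch only (the values 4 and 50 below are placeholders, not recommendations):
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
-- Cap parallelism below the number of logical CPUs (or leave it at 0)...
EXEC sp_configure 'max degree of parallelism', 4;
-- ...and raise the cost threshold so only genuinely expensive plans go parallel.
EXEC sp_configure 'cost threshold for parallelism', 50;
RECONFIGURE;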
There are no special teachers of virtue, because virtue is taught by the whole community.
--Plato
November 14, 2012 at 12:48 pm
I can't tell from the query plans or your posts if your temp table has a primary key. If not, create a primary key when declaring the table and perhaps that will make a difference. If you do have a primary key, then just chalk up this post as my 2 cents. FWIW. 🙂
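Something along these lines, if it helps (the table and column names are made up):
-- Illustrative temp table with an inline primary key.
CREATE TABLE #Results
(
    ArticleID INT NOT NULL PRIMARY KEY,
    Title     NVARCHAR(200) NULL
);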
November 15, 2012 at 9:25 pm
Hi Brad
Something to check. I notice all your joins use NOLOCK. There is a bug in SP2 which causes high CPU usage on queries that use NOLOCK against a table with a BLOB, but the CPU usage is not visible against the query. We've just spent a painful time hunting it down. It's fixed in CU3. Just thought I'd mention it, as we saw queries running inexplicably slowly when we applied SP2, and you seem to be selecting from tables with images.
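A quick way to test it is to run the same join with and without the hint and compare CPU and duration; something like this, where the column names are placeholders:
-- Hinted version (illustrative table/column names):
SELECT a.ArticleID, a.Title
FROM dbo.Article AS a WITH (NOLOCK)
INNER JOIN dbo.TaxonContentLink AS tcl WITH (NOLOCK)
    ON tcl.ArticleID = a.ArticleID;
-- Same query with the hints removed (default READ COMMITTED):
SELECT a.ArticleID, a.Title
FROM dbo.Article AS a
INNER JOIN dbo.TaxonContentLink AS tcl
    ON tcl.ArticleID = a.ArticleID;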
Cheers
Roddy
November 15, 2012 at 9:37 pm
Hi Roddy,
that's helpful. While the CPU usage is low on the query, it does jump up a little. If this server were loaded, it might actually have a serious impact. I am going to look into this a little more.
Thanks.
Brad
November 15, 2012 at 9:42 pm
Wow, you're up early. At least it's 14:30 in the afternoon in Oz. May well not be the issue but good to eliminate it. Just removing the nolocks should prove it one way or the other.
Cheers
Roddy
November 15, 2012 at 9:52 pm
Yeah, I am in AU, SYD - it's only 4:00 PM here.
Cheers
November 18, 2012 at 5:01 pm
Hi Brad,
I'm in Brisbane. Still scratching my head on this; coffee doesn't seem to have helped. I think various things are red herrings, though.
As you have pointed out throughout the post, the schema and indexes have not changed, and you have rebuilt indexes, stats, etc.
The original query is a serial one so MAXDOP is irrelevant.
I'm not convinced in any way that tempdb is the issue. If you had an I/O issue there, it would make the query slow, but it would not cause the query plan to change completely during optimisation, since the optimiser does not know about any I/O issue. You could prove this by removing the actual insert and just running the select. If it is still slow, it's not a tempdb issue. Also turn on SET STATISTICS IO and SET STATISTICS TIME and see what they say (see the snippet below). Likewise, in fact, for my theory about the bug: it wouldn't affect the optimiser. Our plans stayed the same, they just ran slowly.
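Concretely, something like this, where the SELECT is a placeholder for the body of the slow query:
SET STATISTICS IO ON;
SET STATISTICS TIME ON;
-- ...run just the SELECT portion of the slow query here, without the INSERT...
SET STATISTICS IO OFF;
SET STATISTICS TIME OFF;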
However, I notice on the extreme right of the original plan, where it joins TaxonContentLink to Article, there is a "constant scan" which creates 3 expressions (Expr1150, Expr1151, Expr1149). This looks to me like it relates to the functions which are called (funcTaxonListTagsForWidget, etc.). This constant scan does not appear in the slow query, so it looks like the functions are being handled differently. Try commenting them out to see what happens. If it runs fine, then the issue lies in the functions not optimising in the same way on the new server.
Just a few observations to try to eliminate red herrings and maybe narrow it down to a specific table or line of code which 2008 R2 may optimise differently.
Cheers
Roddy