August 29, 2006 at 12:15 pm
>>The good news is the rowset returned isn't out of order any more.
I would be concerned about the quality of the vendor's application if rowset order depends on the query plan.
It sounds like the common situation where a programmer relies on the clustered index for ordering and omits an explicit ORDER BY from the SQL query.
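To illustrate (the table and column names here are made up, not from the vendor's schema), the first query below may happen to come back in clustered index order on some plans, but only the second one actually guarantees the order:
-- order is NOT guaranteed; it just happens to follow the clustered index on some plans
SELECT OrderID, OrderDate FROM Orders WHERE CustomerID = '001'
-- the only way to guarantee the order is an explicit ORDER BY
SELECT OrderID, OrderDate FROM Orders WHERE CustomerID = '001' ORDER BY OrderID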
Not that this helps with your problem any.
August 29, 2006 at 12:56 pm
SQL Server is now at build 2187. No difference.
I noticed an index I had created on the existing server that seems to be useless; the execution plan is scanning it instead of the primary key. I can try other functions with the index removed and see whether that harms the other procedures. When I took out the extraneous index, the INDEX SCAN became a CLUSTERED INDEX SCAN (though the columns it retrieves appear to match the columns of the primary key exactly, so I can't figure out why it has to scan, or whether that's even an issue).
Should I reindex the affected table? The Clustered Index is a Primary Key on this particular table.
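If reindexing is worth a try, I'm assuming the SQL Server 2000 way is something along these lines (the table name and fill factor here are just placeholders):
-- check fragmentation first
DBCC SHOWCONTIG ('MyTable')
-- rebuild all indexes on the table, including the clustered primary key, with a 90% fill factor
DBCC DBREINDEX ('MyTable', '', 90)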
August 30, 2006 at 7:10 am
Another question. On the existing server, the RAID-5 array is the E: drive, and the database is located on E:\Microsoft SQL Server\MSSQL\Data (both trans log and data file)
On the new server, the RAID-5 array is the D: drive, and the database is located on the root of D:\. I can move it if anyone thinks it might make a difference.
If so, what is the easiest way to do that?
August 30, 2006 at 8:02 am
The key here is SCAN. A clustered index scan is effectively a table scan, and it's the absolute last thing you want in your query, so unless you're just doing a SELECT * from your tables, your indexes are not being used. As I said, you probably need a useful index, and as before the I/O difference would be useful - get this from Profiler.
If there is no ORDER BY in the query, the actual order of the result set is effectively arbitrary (influenced by usable indexes within the joins). Multithreading (parallelism) may sometimes have adverse effects, but it will NOT change a seek to a scan!
OK, your procs and collations:
Check the collation of the two databases (in Properties) - are they the same? Do the same for the master database (a query to check this is sketched below, after the SET options).
Procs: script out the problem proc from both your databases (right-click on the proc, Options, Script, Preview) and check the first two options in the script. They should be
SET QUOTED_IDENTIFIER ON
GO
SET ANSI_NULLS ON
GO
in both
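If it's easier than the GUI, a minimal sketch for checking the collations from Query Analyzer (assuming your application database is called AppDB - substitute the real name):
-- compare the collation of the application database and master on each server
SELECT DATABASEPROPERTYEX('AppDB', 'Collation') AS AppDbCollation,
       DATABASEPROPERTYEX('master', 'Collation') AS MasterCollation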
[font="Comic Sans MS"]The GrumpyOldDBA[/font]
www.grumpyolddba.co.uk
http://sqlblogcasts.com/blogs/grumpyolddba/
August 30, 2006 at 8:14 am
The Collation name is the same for the app databases and master databases in both systems. For the record, they are:
SQL_Latin1_General_CP1_CI_AS
The screen in question is not using stored procedures in this case; I checked Profiler yesterday. Neither system has QUOTED_IDENTIFIER or ANSI_NULLS set on by default. If you ran an ad hoc query, would both of these options need to be turned on?
I turned on Hyperthreading and restarted the computer yesterday and it didn't seem to help or hurt things, so until I see otherwise, I'll just leave it turned on. I believe MAXDOP is now 4, as I reset the setting to "Use All Available Processors"
What events should I look for in the profiler when I run a trace tonight to check I/O?
August 30, 2006 at 10:04 pm
Brent,
Colin kind of alluded to this, but have you tried attaching a copy of the database rather than restoring the DB backup? A backup doesn't necessarily preserve every bit at the file level. Basically, you 1) detach from the old server, 2) copy the mdf and ldf files, 3) attach to the new server. Look up the "sp_detach_db" and "sp_attach_db" topics in BOL for usage. If you detach using the @skipchecks option then no stats are updated (which could alter query plans). Note the 16-file limit when re-attaching. Anyway, it might be worth a try if you have time.
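As a rough sketch (the database name and file paths are placeholders - use your own):
-- on the old server: detach without running UPDATE STATISTICS
EXEC sp_detach_db @dbname = 'AppDB', @skipchecks = 'true'
-- copy AppDB.mdf and AppDB_log.ldf across, then on the new server:
EXEC sp_attach_db @dbname = 'AppDB',
    @filename1 = 'D:\AppDB.mdf',
    @filename2 = 'D:\AppDB_log.ldf'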
James Stover, McDBA
August 31, 2006 at 2:22 am
No, the settings are not important as such; I was just working through a checklist.
[font="Comic Sans MS"]The GrumpyOldDBA[/font]
www.grumpyolddba.co.uk
http://sqlblogcasts.com/blogs/grumpyolddba/
August 31, 2006 at 2:26 am
So it's embedded SQL then? How have you compared query plans?
[font="Comic Sans MS"]The GrumpyOldDBA[/font]
www.grumpyolddba.co.uk
http://sqlblogcasts.com/blogs/grumpyolddba/
August 31, 2006 at 6:12 am
I used Profiler on both systems and ran the same screen, with the same parameters, on both. When I sent the vendor the traces, he noticed that for the same screen request the two servers' execution plans were different. One was running a Clustered Index Seek on the primary key of one of the tables (current system), and the other was doing an Index Scan on a non-clustered index on the same table (new system), despite the fact that both had the primary key on the same columns.
August 31, 2006 at 8:19 am
Hmmm, so you included the plan data in the trace then? Well, you should be able to extract the SQL that runs the query from Profiler, run it in Query Analyzer, and examine the graphical plan with STATISTICS IO, which will give a nice indication of the problem. You can also include the reads, writes and duration in Profiler, and the plan (in Profiler) will also give you a breakdown of the parts of the query.
I'd be tempted to throw the problem back at the vendor to find a solution, as it's a third-party application, and also ask whether they've tested against SP4 of SQL Server.
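For the STATISTICS IO part, something like this in Query Analyzer - the SELECT here is just a placeholder for whatever statement you extract from the trace:
SET STATISTICS IO ON
GO
-- paste the extracted statement here in place of this placeholder query
SELECT * FROM MyTable WHERE MyColumn = 'some value'
GO
SET STATISTICS IO OFF
GO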
[font="Comic Sans MS"]The GrumpyOldDBA[/font]
www.grumpyolddba.co.uk
http://sqlblogcasts.com/blogs/grumpyolddba/
August 31, 2006 at 8:52 am
It looks to me like the most intensive part of the trace is this statement:
exec sp_execute 7, N'001', N'01', N'007207BK S'
When I go back and look at the prepared SQL Statement that builds this sp (I'm guessing it's a temporary SP), it looks like this:
declare @P1 int
set @P1=7
exec sp_prepare @P1 output, N'@P0 nvarchar(4000),@P1 nvarchar(4000),@P2 nvarchar(4000)', N'SELECT B2BEW, B2DAT, B2PLT, B2RNR, B2CTN, B2LO1, B2INVC, B2QUAL, B2CHA, B2LPL, B2MGES, B2BEL, B2BLA, B2BBE, B2BBN, B2WRT, B2KDL, B2VNR,B2BNR, B2PLT, B2KTR, B2ANR FROM TXMYBW00 WHERE B2BNR = @P0 AND B2PLT = @P1 AND B2ANR = @P2 ', 1
select @P1
The Reads count is 121114, for a duration of 1813.
The thing is, I cannot find the execution plan for this pair in Profiler. If I take the SQL statement and put the following into Query Analyzer:
SELECT B2BEW, B2DAT, B2PLT, B2RNR, B2CTN, B2LO1, B2INVC, B2QUAL, B2CHA, B2LPL, B2MGES, B2BEL, B2BLA, B2BBE, B2BBN, B2WRT, B2KDL, B2VNR,B2BNR, B2PLT, B2KTR, B2ANR FROM TXMYBW00 WHERE B2BNR = '001' AND B2PLT = '01' AND B2ANR = '007207BK S'
The plan shows a Clustered Index Seek and the result set comes up immediately.
I look at this:
declare @P1 int
set @P1=8
exec sp_prepare @P1 output, N'@P0 nvarchar(4000),@P1 nvarchar(4000),@P2 nvarchar(4000)', N'SELECT * FROM TXUYKD00 WHERE KBNR = @P0 AND KPLT = @P1 AND KKDN = @P2 ORDER BY KBNR, KPLT, KKDN ', 1
select @P1
The program calls multiple iterations of the statement, such as:
exec sp_execute 8, N'001', N'01', N'0024940018'
Each one of these shows Clustered Index Scan in Profiler. If I take this into Query Analyzer and run it, the Execution Plan in QA shows a Clustered Index Seek and is very fast.
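For what it's worth, I assume that to compare like for like I would need to run the prepared form in Query Analyzer as well, something along these lines (parameter values copied from the trace above):
declare @P1 int
-- prepare the statement exactly as the driver does, with nvarchar(4000) parameters
exec sp_prepare @P1 output, N'@P0 nvarchar(4000),@P1 nvarchar(4000),@P2 nvarchar(4000)', N'SELECT * FROM TXUYKD00 WHERE KBNR = @P0 AND KPLT = @P1 AND KKDN = @P2 ORDER BY KBNR, KPLT, KKDN ', 1
-- execute it with the same parameter values seen in the trace, then release the handle
exec sp_execute @P1, N'001', N'01', N'0024940018'
exec sp_unprepare @P1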
I must be doing this wrong.
September 1, 2006 at 7:19 am
Cool! It's fun extracting all this stuff, isn't it? In Profiler you need to choose Events > Performance > Show Plan Statistics, and in the data columns make sure you include the Binary Data column.
You'll get a load of stuff, by the way. You might want to check out the proc cache and see what the SET options are, or at least attempt to capture the SET options for the connection string. It's unusual (but something I've experienced), but changes to SET options can influence how a query runs. I had a similar issue (with a couple of stored procs) which ran awful from the app but were perfect from QA: they did clustered index scans where they did seeks in QA, and the difference was several hundred thousand I/Os versus a couple of hundred.
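A rough way to peek at the procedure cache and the SET options each plan was compiled with - just a sketch, and setopts is a bitmap you would have to decode:
-- SQL Server 2000: cached plans that touch the problem table, with the SET options bitmap
SELECT cacheobjtype, objtype, usecounts, setopts, sql
FROM master.dbo.syscacheobjects
WHERE sql LIKE '%TXMYBW00%'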
You might want to create a couple of secondary indexes for a test (composite/compound indexes on all three columns):
B2BNR, B2PLT, B2ANR
B2ANR, B2PLT, B2BNR
and see what happens.
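Something like this - the index names are just made up for the test:
-- test composite indexes on TXMYBW00; drop them again if they don't help
CREATE INDEX IX_TXMYBW00_Test1 ON TXMYBW00 (B2BNR, B2PLT, B2ANR)
CREATE INDEX IX_TXMYBW00_Test2 ON TXMYBW00 (B2ANR, B2PLT, B2BNR)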
[font="Comic Sans MS"]The GrumpyOldDBA[/font]
www.grumpyolddba.co.uk
http://sqlblogcasts.com/blogs/grumpyolddba/
September 1, 2006 at 8:01 am
Not sure how to check the connection string; it's a JDBC-connected app using the trial version of the JSQLConnect 4.0 JDBC driver. Is there a setting there I need to check?
I added the index like you said, and the query changed to Index Scan on that index (I thought it was a great idea, personally, but it just scanned the NCL index).
September 6, 2006 at 8:27 am
Anyone?? I need assistance, and I'm trying to learn SQL Server administration at the same time. I can't believe that SQL Server looks at an index covering the same columns as the query (according to Profiler) and still thinks it has to scan that index. If I have to make an adjustment, so be it, but I need someone to tell me what to look for to find that adjustment.
September 6, 2006 at 12:04 pm
Sorry, there's no magic pill. I've been doing this for around 14 years now, and performance and optimisation are so much a case of "it depends". Sadly, most of us learn by experience, so it's difficult to pass on the know-how. I try my best, but as you've discovered, it's not easy.
Connection strings issue SET commands, and these can sometimes (rarely) have an adverse effect on performance, so it's always something to check: run up Profiler, connect from your app, and log all the SET commands issued by the connection.
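As a supplementary cross-check (not a substitute for the Profiler trace): if you can push an ad hoc statement down the same connection from the app, DBCC USEROPTIONS will list the SET options in effect for that connection:
-- returns the SET options currently in effect for this connection
DBCC USEROPTIONS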
The basic calculation that decides seeks vs scans is driven by the number of locks and pages (part of the cost element); a scan will usually need far fewer locks than a seek that returns multiple rows, and it's calculated as a percentage of the table size. Secondary indexes have to be very selective, generally around 95%, to be used; you could work out the selectivity of the first column of the various indexes.
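A rough way to work out that selectivity for the leading column - just a sketch against the table mentioned earlier in the thread:
-- percentage of distinct values in the leading column of the index
SELECT COUNT(DISTINCT B2BNR) * 100.0 / COUNT(*) AS LeadingColumnSelectivityPct
FROM TXMYBW00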
[font="Comic Sans MS"]The GrumpyOldDBA[/font]
www.grumpyolddba.co.uk
http://sqlblogcasts.com/blogs/grumpyolddba/