performance question

  • I have been trying to track down a performance issue, as I'm seeing the SQL statement below with high TotalCPUTime_ms and TotalDuration_ms.

    When I look at the actual table, the character fields used in the delete (design, subDesign, ...) are much shorter than the nvarchar(4000) parameters. Would SQL be spending time doing that conversion and compare during the delete?

    Thanks.. hope I asked correctly what I'm seeing...

    (@1 nvarchar(4000),@2 nvarchar(4000),@3 tinyint,@4 nvarchar(4000),@5 nvarchar(4000),@6 tinyint,@7 tinyint)
    DELETE [designError] WHERE [design]=@1 AND [subDesign]=@2 AND [designAlternative]=@3 AND [layerCode]=@4 AND [object]=@5 AND [position]=@6 AND [operation]=@7
  • One would think any programming language will do a character-by-character compare when dealing with character fields. Someone once told me, "Searching on numeric data is always faster." I don't know your data, but is it possible to add some type of key in place of the 4000-character fields? I have a similar database; in my case it contains Campaigns and Sub Campaigns. I created a table for Campaigns and one for Sub Campaigns. Then I have a main table where I have the IDs and the variable detail data. I have 175 million rows and most queries return in less than a minute.
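The lookup-table approach described above might be sketched like this (all table and column names here are hypothetical, not the poster's actual schema):

```sql
-- Hypothetical sketch of the Campaign / Sub Campaign normalization described above.
CREATE TABLE dbo.Campaign (
    CampaignID   int IDENTITY(1,1) PRIMARY KEY,
    CampaignName nvarchar(4000) NOT NULL
);

CREATE TABLE dbo.SubCampaign (
    SubCampaignID   int IDENTITY(1,1) PRIMARY KEY,
    SubCampaignName nvarchar(4000) NOT NULL
);

-- The main table stores only the narrow integer keys plus the detail data,
-- so searches and joins compare 4-byte ints instead of long character values.
CREATE TABLE dbo.CampaignDetail (
    CampaignID    int NOT NULL REFERENCES dbo.Campaign (CampaignID),
    SubCampaignID int NOT NULL REFERENCES dbo.SubCampaign (SubCampaignID),
    DetailData    nvarchar(1000) NULL
);
```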

  • The designError table is small and there are no keys or indexes. I can't see the code that's issuing the call for the delete (packaged software). I can't change any table design; I just inherited this and am trying to see what can be done to improve performance.

    Thanks.

  • Bruin wrote:

    The designError table is small and there are no keys or indexes. I can't see the code that's issuing the call for the delete (packaged software). I can't change any table design; I just inherited this and am trying to see what can be done to improve performance.

    Thanks.

    High values of a metric like CPU and duration do not necessarily indicate an issue.  Simply put, the system is working.

    Can you show the design of the table?  What you may be seeing are implicit conversions.

    As an example, @1 is declared as nvarchar(4000).  What is the datatype of the column "design"?  If it's anything but nvarchar(4000), an implicit conversion is occurring.

    This code will find implicit conversions. It's pretty intrusive and will likely take a long time to run to completion.  If you see a significant number of rows returned, stop the execution and start digging in.

    Chances are good that correcting a few items will correct most of the implicit conversions.  As an example, there was a "location" variable contained in many of the stored procs in one of our systems.  Originally, the column was one type, and at some point it changed to a different type.  Copy-and-paste programming propagated this everywhere.  We scripted out all of the procs, did a find-and-replace, and re-deployed them.  The implicit conversions went away.

    SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED;

    DECLARE @dbname SYSNAME;
    SET @dbname = QUOTENAME(DB_NAME());

    WITH XMLNAMESPACES
        (DEFAULT 'http://schemas.microsoft.com/sqlserver/2004/07/showplan')
    SELECT
        stmt.value('(@StatementText)[1]', 'varchar(max)') AS SQLStatement,
        t.value('(ScalarOperator/Identifier/ColumnReference/@Schema)[1]', 'varchar(128)') AS SchemaName,
        t.value('(ScalarOperator/Identifier/ColumnReference/@Table)[1]', 'varchar(128)') AS TableName,
        t.value('(ScalarOperator/Identifier/ColumnReference/@Column)[1]', 'varchar(128)') AS ColumnName,
        ic.DATA_TYPE AS ConvertFrom,
        ic.CHARACTER_MAXIMUM_LENGTH AS ConvertFromLength,
        t.value('(@DataType)[1]', 'varchar(128)') AS ConvertTo,
        t.value('(@Length)[1]', 'int') AS ConvertToLength,
        qp.query_plan
    FROM sys.dm_exec_cached_plans AS cp
    CROSS APPLY sys.dm_exec_query_plan(cp.plan_handle) AS qp
    CROSS APPLY qp.query_plan.nodes('/ShowPlanXML/BatchSequence/Batch/Statements/StmtSimple') AS batch(stmt)
    CROSS APPLY stmt.nodes('.//Convert[@Implicit="1"]') AS n(t)
    JOIN INFORMATION_SCHEMA.COLUMNS AS ic
        ON  QUOTENAME(ic.TABLE_SCHEMA) = t.value('(ScalarOperator/Identifier/ColumnReference/@Schema)[1]', 'varchar(128)')
        AND QUOTENAME(ic.TABLE_NAME)   = t.value('(ScalarOperator/Identifier/ColumnReference/@Table)[1]', 'varchar(128)')
        AND ic.COLUMN_NAME             = t.value('(ScalarOperator/Identifier/ColumnReference/@Column)[1]', 'varchar(128)')
    WHERE t.exist('ScalarOperator/Identifier/ColumnReference[@Database=sql:variable("@dbname")][@Schema!="[sys]"]') = 1;

    Michael L John
    If you assassinate a DBA, would you pull a trigger?
    To properly post on a forum:
    http://www.sqlservercentral.com/articles/61537/

  • A DELETE operation has to find the appropriate rows to delete. That means it has to look at the values. If there are no indexes, it has to scan every single value, comparing them all. Getting one, or more, indexes in support of the query is how you'll speed things up. Key is going to be getting the right columns in the right order.

    Also, are all the columns being compared varchar? If not, that's going to lead to issues since an implicit conversion has to occur and that can negatively impact statistics use, which negatively impacts index use.
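For example, an index keyed on the DELETE's equality predicate columns would let SQL Server seek to the matching rows instead of scanning the table. The column order below is only a guess; the most selective columns should lead, and only testing against the real data can confirm that:

```sql
-- Hypothetical supporting index for the DELETE predicate.
-- Leading columns should be the most selective ones -- verify with testing.
CREATE NONCLUSTERED INDEX IX_designError_delete
    ON dbo.designError (design, subDesign, designAlternative, layerCode)
    INCLUDE ([object], position, operation);
```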

    "The credit belongs to the man who is actually in the arena, whose face is marred by dust and sweat and blood"
    - Theodore Roosevelt

    Author of:
    SQL Server Execution Plans
    SQL Server Query Performance Tuning

  • Table layout...

    CREATE TABLE [dbo].[designerror](
    [design] [nvarchar](40) NOT NULL,
    [layercode] [int] NOT NULL,
    [position] [int] NOT NULL,
    [operation] [int] NOT NULL,
    [description] [nvarchar](1000) NULL,
    [seq] [int] NOT NULL,
    [designAlternative] [int] NOT NULL,
    [subDesign] [nvarchar](120) NOT NULL,
    [subDesignAlternative] [int] NOT NULL,
    [object] [nvarchar](180) NOT NULL,
    [attribute] [nvarchar](180) NOT NULL,
    [level] [int] NOT NULL
    ) ON [PRIMARY]
    GO

    ALTER TABLE [dbo].[designerror] ADD DEFAULT ((0)) FOR [seq]
    GO

    ALTER TABLE [dbo].[designerror] ADD DEFAULT ((0)) FOR [designAlternative]
    GO

    ALTER TABLE [dbo].[designerror] ADD DEFAULT ((0)) FOR [subDesignAlternative]
    GO

    ALTER TABLE [dbo].[designerror] ADD DEFAULT ((0)) FOR [level]
  • Michael L John,

    Thanks for the script... Help me understand the output versus the table layout and why the slowness is occurring.

    Script output:

    Table - Design
    Column - design
    ConvertFrom - nvarchar
    ConvertFromLength - 40
    ConvertToLength - 240

    I have attached the SQLStatement from the script...  The Design table has 7.8 million records.

    All of the fields referenced in Design are part of the PK except attvalue.

    Thanks all for help

    INSERT INTO tbldesg
    SELECT distinct d1.design, d1.version, d1.attvalue BOMitem, d1.designAlternative, d2.attValue BOMdesignAlternative
    FROM design d1
    INNER JOIN design d2
    ON d1.design = d2.design
    AND d1.designAlternative = d2.designAlternative
    AND d1.version=d2.version
    AND d1.layerCode = d2.layerCode
    AND d1.position = d2.position
    AND d1.operation = d2.operation
    WHERE d1.attvalue IS NOT NULL
    AND d1.attribute = N'xxx'
    AND d2.attribute = N'xxxb'
    AND d1.design=N'vvvv'
    AND d1.designAlternative = 0
    AND d1.version = 0

     

  • Another piece of info: there is a non-clustered index on:

    design
    version
    class
    object

    Thanks again.

  • Is this ORM generated code? You have a bunch of parameters using nvarchar(4000), even though none of your columns are nvarchar(4000).

    The worst offender that is likely to be causing implicit conversion issues is parameter @4 nvarchar(4000) -- layercode is an int.

    @3, @6, & @7 are tinyint, but designAlternative, position, and operation are int.
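If the call could be changed, binding the parameters with the columns' actual types avoids the mismatches entirely. A sketch of the same DELETE issued with matching types (the parameter values here are made up for illustration):

```sql
-- Sketch: the DELETE with parameter types matching the table definition,
-- i.e. correct nvarchar lengths and int instead of nvarchar(4000)/tinyint.
EXEC sys.sp_executesql
    N'DELETE dbo.designError
      WHERE design = @1 AND subDesign = @2 AND designAlternative = @3
        AND layerCode = @4 AND [object] = @5 AND position = @6 AND operation = @7',
    N'@1 nvarchar(40), @2 nvarchar(120), @3 int, @4 int, @5 nvarchar(180), @6 int, @7 int',
    @1 = N'someDesign', @2 = N'someSubDesign', @3 = 0,
    @4 = 1, @5 = N'someObject', @6 = 1, @7 = 1;
```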

     

  • Correct, it's not in-house generated code.

    What solutions/fixes could be applied?

    Thanks for the responses.

     

  • Bruin wrote:

    Correct, it's not in-house generated code.

    What solutions/fixes could be applied?

    Thanks for the responses.

    Modify the ORM code.

    Create one or more meaningful indexes.

    Throw hardware at it

    Complain to the vendor. Repeatedly

    Have you downloaded Brent Ozar's First Responder Kit?  That's a good starting point, although you need to understand things before you blindly start implementing changes.  And test, test, test.

    Have you also considered bringing in a consultant?  It really seems like you may be in a bit over your head.  I'm guessing this was a problem thrown at you?

    Michael L John
    If you assassinate a DBA, would you pull a trigger?
    To properly post on a forum:
    http://www.sqlservercentral.com/articles/61537/

  • Woof!

    So, your parameters don't match the data types of the columns. I know you said you can't change the code, but it's wrong. For example, tinyint, which is what's in the parameter setting, is not the same as int.

    It looks like someone is generating ORM code but doing it incorrectly. ORM tools can use the correct data types. I'd get with the developers to fix that, right now. Ahead of performance issues. Get your ORM code correct or you're always, ALWAYS, going to have pain.

    As to where to put an index, it's going to require quite a lot of experimentation. Look through the columns and determine which is the most selective. That should then be the one you use as the first column in the index key. Now, do you make it a compound key, with more than one column? I don't know. Experimentation will give you a better answer.
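As a starting point for that experimentation, a plain distinct-count query gives a rough measure of each column's selectivity; the column with the most distinct values is usually the best candidate for the leading index key:

```sql
-- Rough selectivity check over the DELETE's predicate columns.
SELECT  COUNT(DISTINCT design)            AS design_vals,
        COUNT(DISTINCT subDesign)         AS subDesign_vals,
        COUNT(DISTINCT designAlternative) AS designAlternative_vals,
        COUNT(DISTINCT layerCode)         AS layerCode_vals,
        COUNT(DISTINCT [object])          AS object_vals,
        COUNT(DISTINCT position)          AS position_vals,
        COUNT(DISTINCT operation)         AS operation_vals
FROM dbo.designError;
```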

    "The credit belongs to the man who is actually in the arena, whose face is marred by dust and sweat and blood"
    - Theodore Roosevelt

    Author of:
    SQL Server Execution Plans
    SQL Server Query Performance Tuning

  • They did throw this at me... What indexes could be applied to help?

    They did throw hardware at it, which is the wrong approach, and it only slightly improved overall response.

  • Bruin wrote:

    They did throw this at me... What indexes could be applied to help?

    They did throw hardware at it, which is the wrong approach, and it only slightly improved overall response.

    More hardware may only make bad code run more frequently.  Looks like that's what happened.   Also, the classic "put an index on it" is probably not the first step.

    Is SQL configured properly?  Memory, disk configuration, MAXDOP, cost threshold, and so forth.  What kind of a disk subsystem is this running on?  What is the block size of the disks that contain data, log and tempdb?  Power settings on the server? Network latency? Is it virtual? If so, what SCSI adaptor are you using?  Have you over-allocated resources on the host?  Does it balloon memory?  These are things you can control without code.

    What about maintenance?  Are statistics up to date?
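Statistics freshness can be checked with something like the following (sys.dm_db_stats_properties is available from SQL Server 2008 R2 SP2 / 2012 SP1 onward):

```sql
-- Last update date and modification counter for each statistic on the table.
SELECT  s.name AS stat_name,
        sp.last_updated,
        sp.rows,
        sp.modification_counter
FROM sys.stats AS s
CROSS APPLY sys.dm_db_stats_properties(s.object_id, s.stats_id) AS sp
WHERE s.object_id = OBJECT_ID(N'dbo.designError');
```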

    Michael L John
    If you assassinate a DBA, would you pull a trigger?
    To properly post on a forum:
    http://www.sqlservercentral.com/articles/61537/

  • It would be really helpful if you could include an ACTUAL Execution Plan (as opposed to an Estimated Execution Plan).  See the article at the second link in my signature line below.

    Also, could you run the following code so that we can see what the state of your table is for things like page density, segment size, and a wad of other information?

     SELECT * FROM sys.dm_db_index_physical_stats(DB_ID(N'insert db name here'), OBJECT_ID(N'designError'), NULL, NULL, 'DETAILED');

    Disclaimer... I don't know if any of that will help us help you but, without it, we won't know.

    --Jeff Moden


    RBAR is pronounced "ree-bar" and is a "Modenism" for Row-By-Agonizing-Row.
    First step towards the paradigm shift of writing Set Based code:
    ________Stop thinking about what you want to do to a ROW... think, instead, of what you want to do to a COLUMN.

    Change is inevitable... Change for the better is not.


    Helpful Links:
    How to post code problems
    How to Post Performance Problems
    Create a Tally Function (fnTally)
