logical_reads

  • Is 1,689,532,605 a lot? :ermm:

My query appears to be stuck. It's a stored procedure that has been called from an SSIS package that was kicked off by SQL Agent.

The SQL Guy @ blogspot

@SeanPearceSQL

About Me

The most reads I ever got was around 1B, and it crashed the server. So unless you're talking about a 1 TB+ database, then yes, you may have a problem.

    What does the package do?

  • Ninja's_RGR'us (4/8/2011)


    What does the package do?

    Runs 4 procs then exports the data into text files. It is stuck on the 4th proc.

    It started at 08:00 (it's 14:45 now) and reads are now at 1,843,869,579.

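A quick way to tell whether those reads are still climbing because the query is doing work, rather than sitting blocked, is to poll the running request in the DMVs. A sketch (the session id 53 is hypothetical; substitute the SPID running the proc):

```sql
-- Check whether the session is actively reading or just waiting
SELECT r.session_id, r.status, r.command, r.wait_type, r.wait_time,
       r.blocking_session_id, r.logical_reads, r.cpu_time
FROM sys.dm_exec_requests AS r
WHERE r.session_id = 53;  -- hypothetical SPID
```

If `logical_reads` keeps increasing between polls and `blocking_session_id` is 0, the query is working, just slowly.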

How many GBs of data are you exporting to disk?

    Are you reading and writing on the same HD?

  • Ninja's_RGR'us (4/8/2011)


How many GBs of data are you exporting to disk?

    Are you reading and writing on the same HD?

    I haven't reached the export yet. It is loading the result table that will be exported to disk when the proc is finished.

    Here is the pseudo code for the statement that is "stuck".

INSERT INTO ResultTable
SELECT t.field1_varchar_255
    ,t.field2_varchar_255
    ,r.field1_int
    ,r.field2_datetime
    ,tkr.field1_varchar_255
FROM #temp t
INNER JOIN Table1 r -- rows = 53,909,104 & data = 4,584,088 KB
    ON r.FK = t.ID
INNER JOIN Table2 aar -- rows = 198,426,511 & data = 19,389,648 KB
    ON r.ID = aar.ID
    AND aar.TYPE IN (4010,4110,4109,4120,2116,4087,4113,4347,4105,4010,4348,4345,4349)
INNER JOIN #temp2 ls -- rows = 60,056,519 & data = 1,953,000 KB
    ON ls.FK = aar.ID
    AND ls.TYPE = 10002
LEFT JOIN Table3 tkr -- rows = 1,604 & data = 88 KB
    ON tkr.TYPE1 = aar.TYPE
    AND tkr.TYPE2 = 'MyType'
WHERE r.field2_datetime >= CONVERT(varchar(7), DATEADD(mm, -1, GETDATE()), 121) + '-01' -- 1st of previous month
ORDER BY Field1
    ,Field2
    ,Field3
OPTION (MAXDOP 1)
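As an aside, the varchar round-trip in that date filter can be replaced with pure date arithmetic, which keeps the predicate typed as datetime. A sketch, not from the original proc:

```sql
-- First day of the previous month, computed without a varchar conversion:
-- months since 1900-01-01, minus one, added back to the base date
DECLARE @FirstOfPrevMonth datetime;
SET @FirstOfPrevMonth = DATEADD(mm, DATEDIFF(mm, 0, GETDATE()) - 1, 0);
-- e.g. WHERE r.field2_datetime >= @FirstOfPrevMonth
```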


Why are you using maxdop 1? Other than divide and conquer, I don't have much else to offer (a bit outside my normal tuning table sizes).

  • Ninja's_RGR'us (4/8/2011)


    Why are you using maxdop 1?

    Not sure. It's not my query and the developer came running to me to perform some magic trick :unsure:

It's probably there due to multiple threads causing excessive CXPACKET waits.


Ok, let's call in Paul Randal's magic "where does it hurt" button:

Run this and post back the results:

WITH Waits AS
(SELECT
    wait_type,
    wait_time_ms / 1000.0 AS WaitS,
    (wait_time_ms - signal_wait_time_ms) / 1000.0 AS ResourceS,
    signal_wait_time_ms / 1000.0 AS SignalS,
    waiting_tasks_count AS WaitCount,
    100.0 * wait_time_ms / SUM (wait_time_ms) OVER() AS Percentage,
    ROW_NUMBER() OVER(ORDER BY wait_time_ms DESC) AS RowNum
FROM sys.dm_os_wait_stats
WHERE wait_type NOT IN (
    'CLR_SEMAPHORE', 'LAZYWRITER_SLEEP', 'RESOURCE_QUEUE', 'SLEEP_TASK',
    'SLEEP_SYSTEMTASK', 'SQLTRACE_BUFFER_FLUSH', 'WAITFOR', 'LOGMGR_QUEUE',
    'CHECKPOINT_QUEUE', 'REQUEST_FOR_DEADLOCK_SEARCH', 'XE_TIMER_EVENT', 'BROKER_TO_FLUSH',
    'BROKER_TASK_STOP', 'CLR_MANUAL_EVENT', 'CLR_AUTO_EVENT', 'DISPATCHER_QUEUE_SEMAPHORE',
    'FT_IFTS_SCHEDULER_IDLE_WAIT', 'XE_DISPATCHER_WAIT', 'XE_DISPATCHER_JOIN', 'BROKER_EVENTHANDLER',
    'TRACEWRITE', 'FT_IFTSHC_MUTEX', 'SQLTRACE_INCREMENTAL_FLUSH_SLEEP',
    'BROKER_RECEIVE_WAITFOR', 'ONDEMAND_TASK_QUEUE', 'DBMIRROR_EVENTS_QUEUE',
    'DBMIRRORING_CMD', 'BROKER_TRANSMITTER', 'SQLTRACE_WAIT_ENTRIES',
    'SLEEP_BPOOL_FLUSH', 'SQLTRACE_LOCK')
)
SELECT
    W1.wait_type AS WaitType,
    CAST (W1.WaitS AS DECIMAL(14, 2)) AS Wait_S,
    CAST (W1.ResourceS AS DECIMAL(14, 2)) AS Resource_S,
    CAST (W1.SignalS AS DECIMAL(14, 2)) AS Signal_S,
    W1.WaitCount AS WaitCount,
    CAST (W1.Percentage AS DECIMAL(4, 2)) AS Percentage,
    CAST ((W1.WaitS / W1.WaitCount) AS DECIMAL(14, 4)) AS AvgWait_S,
    CAST ((W1.ResourceS / W1.WaitCount) AS DECIMAL(14, 4)) AS AvgRes_S,
    CAST ((W1.SignalS / W1.WaitCount) AS DECIMAL(14, 4)) AS AvgSig_S
FROM Waits AS W1
INNER JOIN Waits AS W2 ON W2.RowNum <= W1.RowNum
GROUP BY W1.RowNum, W1.wait_type, W1.WaitS, W1.ResourceS, W1.SignalS, W1.WaitCount, W1.Percentage
HAVING SUM (W2.Percentage) - W1.Percentage < 95; -- percentage threshold
GO
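Keep in mind that sys.dm_os_wait_stats is cumulative since the last restart, so old waits can swamp what's happening right now. To measure just this run, you can clear the counters first and re-run the query after a few minutes (clearing is server-wide, so it affects anyone else sampling the DMV):

```sql
-- Reset cumulative wait statistics (server-wide; use with care)
DBCC SQLPERF ('sys.dm_os_wait_stats', CLEAR);
```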

  • Results attached


  • The most I can do from this point forward is send you this link.

    This kind of problem is not my cup of tea.

    http://download.microsoft.com/download/4/7/a/47a548b9-249e-484c-abd7-29f31282b04d/Performance_Tuning_Waits_Queues.doc

Called in for help... they should be getting here soon.

    Good luck.

  • I would be tempted to rewrite the query as follows

INSERT INTO ResultTable
SELECT t.field1_varchar_255
    ,t.field2_varchar_255
    ,r.field1_int
    ,r.field2_datetime
    ,tkr.field1_varchar_255
FROM Table2 aar
INNER JOIN Table1 r ON aar.ID = r.ID
INNER JOIN #temp t ON r.FK = t.ID
INNER JOIN #temp2 ls ON aar.ID = ls.FK
LEFT JOIN Table3 tkr ON aar.TYPE = tkr.TYPE1
    AND tkr.TYPE2 = 'MyType' -- kept in the ON clause to preserve the LEFT JOIN
WHERE r.field2_datetime >= CONVERT(varchar(7), DATEADD(mm, -1, GETDATE()), 121) + '-01' -- 1st of previous month
AND aar.TYPE IN (4010,4110,4109,4120,2116,4087,4113,4347,4105,4010,4348,4345,4349)
AND ls.TYPE = 10002

The only reason I dropped the ORDER BY is that it can be expensive and hold up inserts. You might also find it more efficient to sort the data when extracting to file, but that's just a personal preference.

Based on the data volumes, what are the estimated row counts you're expecting?

    _________________________________________________________________________
    SSC Guide to Posting and Best Practices

  • From the link Ninja posted:

PAGEIOLATCH_x: If this is significant in percentage, it typically suggests disk IO subsystem issues. Check disk counters.

    35% wait seems like it may be significant.

    Is your tempdb on a different physical drive than your user db?
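One way to check whether the IO subsystem (and tempdb placement) is the bottleneck is per-file latency from the virtual file stats DMV. A sketch; the "high latency" threshold is a rough rule of thumb, not an absolute:

```sql
-- Average read/write stall per database file; high values point at slow disks
SELECT DB_NAME(vfs.database_id) AS database_name,
       mf.physical_name,
       vfs.io_stall_read_ms / NULLIF(vfs.num_of_reads, 0) AS avg_read_stall_ms,
       vfs.io_stall_write_ms / NULLIF(vfs.num_of_writes, 0) AS avg_write_stall_ms
FROM sys.dm_io_virtual_file_stats(NULL, NULL) AS vfs
JOIN sys.master_files AS mf
  ON mf.database_id = vfs.database_id
 AND mf.file_id = vfs.file_id
ORDER BY avg_read_stall_ms DESC;
```

Sustained averages well above ~20 ms for data files usually mean the disks (or a shared spindle layout) are struggling.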

  • Jason-299789 (4/8/2011)


    I would be tempted to rewrite the query as follows

INSERT INTO ResultTable
SELECT t.field1_varchar_255
    ,t.field2_varchar_255
    ,r.field1_int
    ,r.field2_datetime
    ,tkr.field1_varchar_255
FROM Table2 aar
INNER JOIN Table1 r ON aar.ID = r.ID
INNER JOIN #temp t ON r.FK = t.ID
INNER JOIN #temp2 ls ON aar.ID = ls.FK
LEFT JOIN Table3 tkr ON aar.TYPE = tkr.TYPE1
    AND tkr.TYPE2 = 'MyType' -- kept in the ON clause to preserve the LEFT JOIN
WHERE r.field2_datetime >= CONVERT(varchar(7), DATEADD(mm, -1, GETDATE()), 121) + '-01' -- 1st of previous month
AND aar.TYPE IN (4010,4110,4109,4120,2116,4087,4113,4347,4105,4010,4348,4345,4349)
AND ls.TYPE = 10002

The only reason I dropped the ORDER BY is that it can be expensive and hold up inserts. You might also find it more efficient to sort the data when extracting to file, but that's just a personal preference.

Based on the data volumes, what are the estimated row counts you're expecting?

    I will try that, thanks. I agree with dropping the order by, I was going to do that. I am currently running the procedure on QA up to that point so I know what rowcount we are dealing with in the temp tables. I will produce the execution plan when I get there and I will also try your version.


I forgot to ask if the temp tables were indexed, as that could improve the performance considerably.
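For example, a clustered index on the join key of each temp table (column names assumed from the pseudo code above) gives the optimizer something to seek or merge on instead of scanning:

```sql
-- Hypothetical: index the temp tables on their join keys before the big INSERT
CREATE CLUSTERED INDEX IX_temp_ID ON #temp (ID);
CREATE CLUSTERED INDEX IX_temp2_FK_TYPE ON #temp2 (FK, TYPE);
```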

