Tempdb GROWS

  • 2cams (5/7/2010)


    This is my first tempdb issue, so I'm grabbing whatever I can implement easily to help with it.

    Please don't take this the wrong way...

    Beware the easy out. If the vendor doesn't think you've done your research properly, they'll throw it back in your face, especially if they twig to the idea that you're running random queries you might not understand.

    The point has been made on this thread, several times, about checking for I/O. Every time you respond, you don't say anything about I/O. You talk about number of times a query was run and how long it takes. You do not mention, in your replies, the answers to questions others have asked or comment on how these suggestions worked or didn't work.

    If I were a vendor and you responded like that to me, I would tell you to go back and look at those items and refuse to fix my code until you had proven beyond a shadow of a doubt that you actually had those hard numbers and understood what they meant.

    So do yourself a favor. When you follow Grant's suggestion and take that data to the vendor, make sure you include those numbers and make sure you've done enough research to answer their questions. The instant you hesitate, they will claim "You're not sure of anything. You're just guessing. Go away." and your chance to fix the issue vanishes.
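    For example, something along these lines (just a sketch, not code from this thread) would give hard per-file I/O numbers and stall times for tempdb to put in front of the vendor; database_id 2 is always tempdb:

        -- Sketch: cumulative I/O and stall figures per tempdb file since the last restart.
        SELECT DB_NAME(vfs.database_id) AS database_name,
               mf.name AS logical_file_name,
               vfs.num_of_reads,
               vfs.num_of_bytes_read,
               vfs.io_stall_read_ms,
               vfs.num_of_writes,
               vfs.num_of_bytes_written,
               vfs.io_stall_write_ms
        FROM sys.dm_io_virtual_file_stats(2, NULL) AS vfs
        JOIN sys.master_files AS mf
            ON mf.database_id = vfs.database_id
           AND mf.file_id = vfs.file_id;

    The counters are cumulative, so snapshot them before and after the overnight window when tempdb grows and compare the deltas.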

    Brandie Tarvin, MCITP Database Administrator
    LiveJournal Blog: http://brandietarvin.livejournal.com/
    On LinkedIn!, Google+, and Twitter.
    Freelance Writer: Shadowrun
    Latchkeys: Nevermore, Latchkeys: The Bootleg War, and Latchkeys: Roscoes in the Night are now available on Nook and Kindle.

  • Will do. I'm going to go through the whole thread again and check all the suggestions again.

    Thanks again.

    MCITP: Database Administrator 2005
    MCTS SQL Server 2008
    MCP SQL 2012/2014
    MCSA SQL Server 2012/2014
    MCSE Data Management and Analytics

  • Brandie Tarvin (5/7/2010)


    ...enjoyable ranting & raving edited out...

    On the other hand, the DELETE statements are kinda weird. I've never seen DELETEs phrased this way and I find myself thinking there has to be a better way for them to have been coded. Don't sub-queries always get evaluated without the use of indexes, almost RBAR style?

    EDIT: I am actually looking for someone to correct me or agree with me on the above paragraph. I'm kind of brain-dead this morning and want to make sure I'm right, or understand why I'm wrong. So please, chime in on this topic.

    No, sub-queries like this should be able to make use of indexes just fine.
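    A quick hypothetical illustration (table and index names made up, not from the vendor code): with an index on the column the subquery joins or filters on, the optimizer normally turns this into a semi-join and satisfies it with index seeks rather than row-by-row evaluation.

        -- Hypothetical tables; with this index in place, the DELETE below
        -- is usually resolved with index seeks, not RBAR-style lookups.
        CREATE INDEX IX_OrderDetail_OrderID ON dbo.OrderDetail (OrderID);

        DELETE FROM dbo.OrderDetail
        WHERE OrderID IN (SELECT OrderID
                          FROM dbo.Orders
                          WHERE OrderDate < '20100101');

    The execution plan is the quick way to confirm it on the real tables.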

    "The credit belongs to the man who is actually in the arena, whose face is marred by dust and sweat and blood"
    - Theodore Roosevelt

    Author of:
    SQL Server Execution Plans
    SQL Server Query Performance Tuning

  • It seems that this statement is causing a lot of page allocations due to the delete statement. I would certainly take it back to the vendor and ask them to correct the issue from their side.

    Jason...AKA CirqueDeSQLeil
    _______________________________________________
    I have given a name to my pain...MCM SQL Server, MVP
    SQL RNNR
    Posting Performance Based Questions - Gail Shaw
    Learn Extended Events

  • Run this query and post the results:

    SELECT TOP 5 *
    FROM sys.dm_db_session_space_usage
    ORDER BY (user_objects_alloc_page_count +
              internal_objects_alloc_page_count) DESC
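    If it helps, the same DMV can be joined to sys.dm_exec_sessions to see who owns the top allocators, and the page counts converted to MB (8 KB per page). A rough sketch:

        -- Sketch: top 5 sessions by tempdb allocation, with owner info and sizes in MB.
        SELECT TOP 5
               ssu.session_id,
               es.login_name,
               es.program_name,
               (ssu.user_objects_alloc_page_count * 8) / 1024 AS user_alloc_MB,
               (ssu.internal_objects_alloc_page_count * 8) / 1024 AS internal_alloc_MB
        FROM sys.dm_db_session_space_usage AS ssu
        JOIN sys.dm_exec_sessions AS es
            ON es.session_id = ssu.session_id
        ORDER BY (ssu.user_objects_alloc_page_count
                + ssu.internal_objects_alloc_page_count) DESC;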

    Jason...AKA CirqueDeSQLeil
    _______________________________________________
    I have given a name to my pain...MCM SQL Server, MVP
    SQL RNNR
    Posting Performance Based Questions - Gail Shaw
    Learn Extended Events

  • CirquedeSQLeil (5/7/2010)


    It seems that this statement is causing a lot of page allocations due to the delete statement. I would certainly take it back to the vendor and ask them to correct the issue from their side.

    I'm curious, Jason. What led you to this conclusion?

    Brandie Tarvin, MCITP Database Administrator
    LiveJournal Blog: http://brandietarvin.livejournal.com/
    On LinkedIn!, Google+, and Twitter.
    Freelance Writer: Shadowrun
    Latchkeys: Nevermore, Latchkeys: The Bootleg War, and Latchkeys: Roscoes in the Night are now available on Nook and Kindle.

  • Brandie Tarvin (5/7/2010)


    CirquedeSQLeil (5/7/2010)


    It seems that this statement is causing a lot of page allocations due to the delete statement. I would certainly take it back to the vendor and ask them to correct the issue from their side.

    I'm curious, Jason. What led you to this conclusion?

    A hunch. It seems to be happening lately with vendor apps. The tempdb is growing, and a delete is occurring. The query I provided would tell us whether that is true or not. My hunch is that the pages are being allocated and then cleared, but not deallocated or released from the file.
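    One way to test that hunch (a sketch; run it in tempdb while the file is large) is sys.dm_db_file_space_usage, which breaks tempdb down into user objects, internal objects, version store, and free space:

        -- Sketch: how much of tempdb is actually allocated right now, in MB (8 KB pages).
        USE tempdb;
        SELECT SUM(user_object_reserved_page_count) * 8 / 1024 AS user_objects_MB,
               SUM(internal_object_reserved_page_count) * 8 / 1024 AS internal_objects_MB,
               SUM(version_store_reserved_page_count) * 8 / 1024 AS version_store_MB,
               SUM(unallocated_extent_page_count) * 8 / 1024 AS free_MB
        FROM sys.dm_db_file_space_usage;

    If most of the file shows up as free space, the pages were released back inside the file but the file itself never shrank.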

    Jason...AKA CirqueDeSQLeil
    _______________________________________________
    I have given a name to my pain...MCM SQL Server, MVP
    SQL RNNR
    Posting Performance Based Questions - Gail Shaw
    Learn Extended Events

  • Ah, I thought it was something specific to the code that you could point me to.

    Brandie Tarvin, MCITP Database Administrator
    LiveJournal Blog: http://brandietarvin.livejournal.com/
    On LinkedIn!, Google+, and Twitter.
    Freelance Writer: Shadowrun
    Latchkeys: Nevermore, Latchkeys: The Bootleg War, and Latchkeys: Roscoes in the Night are now available on Nook and Kindle.

  • CirquedeSQLeil (5/7/2010)


    Run this query and post the results:

    SELECT TOP 5 *
    FROM sys.dm_db_session_space_usage
    ORDER BY (user_objects_alloc_page_count +
              internal_objects_alloc_page_count) DESC

    9424400
    5721100
    6721100
    620000
    520000

    The tempdb data file is currently 38.3 GB and the log file is 386 MB.

    MCITP: Database Administrator 2005
    MCTS SQL Server 2008
    MCP SQL 2012/2014
    MCSA SQL Server 2012/2014
    MCSE Data Management and Analytics

  • That seems to be an awfully low number of allocated pages. Did you run this after hours or during business hours?

    Also, did you run it while the queries you were questioning were running?

    Brandie Tarvin, MCITP Database Administrator
    LiveJournal Blog: http://brandietarvin.livejournal.com/
    On LinkedIn!, Google+, and Twitter.
    Freelance Writer: Shadowrun
    Latchkeys: Nevermore, Latchkeys: The Bootleg War, and Latchkeys: Roscoes in the Night are now available on Nook and Kindle.

  • I ran the query this morning when I saw your post.

    The time tempdb grows (or grew, I should rather say) is from 02:17 AM until 03:45 AM.

    I'll create a SQL Agent job to output this query to a file and post the results tomorrow.
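    Roughly something like this could capture that snapshot on a schedule into a table instead of a file (the table name is just a placeholder):

        -- Placeholder table; the job step snapshots the DMV with a timestamp each run.
        IF OBJECT_ID('dbo.TempdbSpaceLog') IS NULL
            CREATE TABLE dbo.TempdbSpaceLog
            (
                capture_time datetime NOT NULL,
                session_id smallint NOT NULL,
                user_objects_alloc_page_count bigint NOT NULL,
                internal_objects_alloc_page_count bigint NOT NULL
            );

        INSERT INTO dbo.TempdbSpaceLog
        SELECT TOP 5
               GETDATE(),
               session_id,
               user_objects_alloc_page_count,
               internal_objects_alloc_page_count
        FROM sys.dm_db_session_space_usage
        ORDER BY (user_objects_alloc_page_count
                + internal_objects_alloc_page_count) DESC;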

    The vendor thought we were complaining about a performance issue, which there was initially. We re-indexed the database and updated the stats; performance is no longer an issue, but since then tempdb has been growing.

    MCITP: Database Administrator 2005
    MCTS SQL Server 2008
    MCP SQL 2012/2014
    MCSA SQL Server 2012/2014
    MCSE Data Management and Analytics

  • 2cams (5/10/2010)


    The vendor thought we were complaining about a performance issue, which there was initially. We re-indexed the database and updated the stats

    Re-indexed as in recreated the ones that existed, or created a whole new slew of indexes that weren't previously in the database or the code?

    Brandie Tarvin, MCITP Database Administrator
    LiveJournal Blog: http://brandietarvin.livejournal.com/
    On LinkedIn!, Google+, and Twitter.
    Freelance Writer: Shadowrun
    Latchkeys: Nevermore, Latchkeys: The Bootleg War, and Latchkeys: Roscoes in the Night are now available on Nook and Kindle.

  • We recreated the current indexes; we didn't add any new ones.
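    Meaning something along these lines per table (placeholder table name; nothing new was defined, only existing indexes rebuilt and stats refreshed):

        -- Placeholder table name; rebuild the indexes that already exist and refresh stats.
        ALTER INDEX ALL ON dbo.SomeVendorTable REBUILD;
        UPDATE STATISTICS dbo.SomeVendorTable WITH FULLSCAN;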

    MCITP: Database Administrator 2005
    MCTS SQL Server 2008
    MCP SQL 2012/2014
    MCSA SQL Server 2012/2014
    MCSE Data Management and Analytics

  • I got up at 2 AM to start the trace. I'll post the top 5 queries by reads and by writes (where TextData IS NOT NULL) from the trace table, which is 1 GB in size and covers 02:08 AM to 03:50 AM.

    The query with the 5th most reads comes first; I'll post the writes later ...

    CPU = 3625, Reads = 282907, Writes = 360, Duration = 15563476

    SELECT MAX(Percentile) AS Percentile
    FROM (SELECT TOP 95 PERCENT Availability AS Percentile FROM ResponseTime
          WHERE (DateTime >= '04/01/2010 00:00:00' AND DateTime <= '04/30/2010 23:59:59')
          ORDER BY Availability ASC) DERIVEDTBL

    = 5th

    ------------------------------------------

    CPU = 20436, Reads = 681814, Writes = 1385, Duration = 54365234

    ALTER PROCEDURE [dbo].[swsp_NetFlowSummaryCollapseQuery](
        @SourceTable nvarchar(30),
        @DestinationTable nvarchar(30),
        @StartInterval int,
        @EndInterval int,
        @ResultingInterval int)
    AS
    BEGIN
        --SET NOCOUNT ON;
        DECLARE @Script nvarchar(4000)
        SET @Script = N'
        INSERT INTO '+@DestinationTable+'
        SELECT
            '+cast(@ResultingInterval AS varchar)+',
            MIN(StartTime) AS StartTime,
            NodeID,
            SourceIPSort,
            SourcePort,
            DestIPSort,
            DestPort,
            InterfaceIDRx,
            InterfaceIDTx,
            Protocol,
            SUM(TotalPackets),
            SUM(TotalBytes),
            ToS
        FROM '+@SourceTable+' WITH(NOLOCK)
        WHERE Interval BETWEEN '+cast(@StartInterval AS varchar)+' AND '+cast(@EndInterval AS varchar)+'
        GROUP BY NodeID, SourceIPSort, SourcePort, DestIPSort, DestPort,
            InterfaceIDRx, InterfaceIDTx, Protocol, ToS;'
        EXEC sp_executesql @Script
    END

    = 4th

    ---------------------------------------------

    CPU = 10264, Reads = 1049368, Writes = 4001, Duration = 23986328

    ALTER PROCEDURE [dbo].[apm_dbm_APM_PortEvidence_DetailToHourly]
        @date datetime
    AS
    BEGIN
        -- SET NOCOUNT ON added to prevent extra result sets from
        -- interfering with SELECT statements.
        SET NOCOUNT ON;

        declare @err int
        declare @retainDate datetime
        set @retainDate = dbo.apm_GetRetainDetailDate(GetUtcDate())

        if exists (SELECT * FROM tempdb..sysobjects WHERE name LIKE '#port_DetailToHourly_retain%')
            drop table #port_DetailToHourly_retain
        if exists (SELECT * FROM tempdb..sysobjects WHERE name LIKE '#port_DetailToHourly%')
            drop table #port_DetailToHourly

        --select data which is not aggregated
        SELECT
            csd.ComponentID,
            csd.ID as ComponentStatusID,
            ped.[PortNumber],
            ped.[PortDescription],
            ped.[Type],
            csd.[TimeStamp]
        INTO #port_DetailToHourly
        FROM APM_ComponentStatus_Detail AS csd
        INNER JOIN APM_Component c on csd.ComponentID = c.ID
        LEFT JOIN APM_PortEvidence_Detail ped on csd.ID = ped.ComponentStatusID
        WHERE (csd.Archive = 0) AND (csd.[TimeStamp] < @retainDate) AND dbo.apm_GetComponentEvidenceType(c.[ComponentType]) = 2

        -- select all calculated data for retain into #port_DetailToHourly_retain temp table
        SELECT
            a.[ComponentStatusID],
            b.[PortNumber],
            b.[PortDescription],
            b.[Type],
            a.[MinResponceTime],
            a.[AvgResponceTime],
            a.[MaxResponcetime],
            a.[MinStatisticData],
            a.[AvgStatisticData],
            a.[MaxStatisticData],
            a.[ErrorCode],
            a.[RecordCount],
            a.[Archive]
        INTO #port_DetailToHourly_retain
        FROM
        (
            SELECT
                MAX(csd.[TimeStamp]) as [LastTimeStamp],
                csd.ComponentID,
                MIN(ped.[ComponentStatusID]) as [ComponentStatusID],
                MIN(ped.[ResponceTime]) AS [MinResponceTime],
                AVG(ped.[ResponceTime]) AS [AvgResponceTime],
                MAX(ped.[ResponceTime]) AS [MaxResponceTime],
                MIN(ped.[StatisticData]) AS [MinStatisticData],
                AVG(ped.[StatisticData]) AS [AvgStatisticData],
                MAX(ped.[StatisticData]) AS [MaxStatisticData],
                0 AS [ErrorCode],
                SUM(ped.[RecordCount]) as [RecordCount],
                1 AS Archive
            FROM APM_PortEvidence_Detail AS ped
            INNER JOIN APM_ComponentStatus_Detail csd on ped.ComponentStatusID = csd.ID
            WHERE (ped.Archive = 0)
            AND (csd.[TimeStamp] < @retainDate)
            GROUP BY csd.ComponentID, dbo.DateAndHourOnly(csd.[TimeStamp])
        ) AS a INNER JOIN #port_DetailToHourly b ON a.ComponentID = b.ComponentID AND a.[LastTimestamp] = b.[TimeStamp]

        --Insert evidence data
        INSERT INTO APM_PortEvidence_Hourly ([ComponentStatusID],[PortNumber],[PortDescription],[Type],[MinResponceTime],[AvgResponceTime],[MaxResponceTime],[MinStatisticData],[AvgStatisticData],[MaxStatisticData],[ErrorCode],[RecordCount],[Archive])
        SELECT * FROM #port_DetailToHourly_retain

        --calculate and insert corresponding ComponentStatus
        INSERT INTO APM_ComponentStatus_Hourly ([ID],[ApplicationID],[ComponentID],[Availability],[PercentAvailability],[TimeStamp],[RecordCount],[Archive])
        SELECT
            MIN(csd.[ID]),
            csd.[ApplicationID],
            csd.[ComponentID],
            0 as [Availability],
            AVG(csd.[PercentAvailability]) AS [PercentAvailability],
            dbo.DateAndHourOnly(csd.[TimeStamp]) as [TimeStamp],
            SUM(csd.[RecordCount]),
            1 AS [Archive]
        FROM APM_ComponentStatus_Detail AS csd
        WHERE (csd.Archive = 0)
        AND (csd.[ID] IN (SELECT DISTINCT [ComponentStatusID] FROM #port_DetailToHourly))
        GROUP BY ComponentID, dbo.DateAndHourOnly([TimeStamp]), [ApplicationID]

        select @err = @@error if @err <> 0 return @err

        --Delete from evidences
        DELETE FROM APM_PortEvidence_Detail
        WHERE ([Archive] = 0) AND [ComponentStatusID] IN (SELECT DISTINCT [ComponentStatusID] FROM #port_DetailToHourly)

        --Delete from statuses
        DELETE FROM APM_ComponentStatus_Detail
        WHERE ([Archive] = 0)
        AND ([ID] IN (SELECT DISTINCT [ComponentStatusID] FROM #port_DetailToHourly))

        DROP TABLE #port_DetailToHourly
        DROP TABLE #port_DetailToHourly_retain
    END

    = 3rd

    --------------------------------------

    CPU = 8469, Reads = 1098815, Writes = 315, Duration = 20233398

    ALTER PROCEDURE [dbo].[apm_dbm_APM_ProcessEvidence_DetailToHourly]
        @date datetime
    AS
    BEGIN
        -- SET NOCOUNT ON added to prevent extra result sets from
        -- interfering with SELECT statements.
        SET NOCOUNT ON;

        declare @err int
        declare @retainDate datetime
        set @retainDate = dbo.apm_GetRetainDetailDate(GetUtcDate())

        if exists (SELECT * FROM tempdb..sysobjects WHERE name LIKE '#process_DetailToHourly%')
            drop table #process_DetailToHourly
        if exists (SELECT * FROM tempdb..sysobjects WHERE name LIKE '#process_DetailToHourly_retain%')
            drop table #process_DetailToHourly_retain

        --select data which is not aggregated
        SELECT
            csd.ComponentID,
            csd.[ID] AS ComponentStatusID,
            csd.[TimeStamp],
            ped.Name
        INTO #process_DetailToHourly
        FROM APM_ComponentStatus_Detail AS csd
        INNER JOIN APM_Component c on csd.ComponentID = c.ID
        LEFT JOIN APM_ProcessEvidence_Detail ped on csd.ID = ped.ComponentStatusID
        WHERE (csd.Archive = 0) AND (csd.[TimeStamp] < @retainDate) AND dbo.apm_GetComponentEvidenceType(c.[ComponentType]) = 1

        -- select all data for retain into #process_DetailToHourly_retain temp table
        SELECT
            a.[ComponentStatusID],
            b.[Name],
            a.[MinPercentCPU],
            a.[MaxPercentCPU],
            a.[AvgPercentCPU],
            a.[MinPercentMemory],
            a.[MaxPercentMemory],
            a.[AvgPercentMemory],
            a.[MinMemoryUsed],
            a.[MaxMemoryUsed],
            a.[AvgMemoryUsed],
            a.[MinPercentVirtualMemory],
            a.[MaxPercentVirtualMemory],
            a.[AvgPercentVirtualMemory],
            a.[MinVirtualMemoryUsed],
            a.[MaxVirtualMemoryUsed],
            a.[AvgVirtualMemoryUsed],
            a.[MinInstanceCount],
            a.[MaxInstanceCount],
            a.[AvgInstanceCount],
            a.[ErrorCode],
            a.[RecordCount],
            a.[Archive]
        INTO #process_DetailToHourly_retain
        from(
            --calculate
            SELECT
                csd.ComponentID,
                MAX(csd.[TimeStamp]) AS [LastTimestamp],
                MIN(ped.[ComponentStatusID]) as [ComponentStatusID],
                MIN(ped.[PercentCPU]) as [MinPercentCPU],
                MAX(ped.[PercentCPU]) as [MaxPercentCPU],
                AVG(ped.[PercentCPU]) as [AvgPercentCPU],
                MIN(ped.[PercentMemory]) as [MinPercentMemory],
                MAX(ped.[PercentMemory]) as [MaxPercentMemory],
                AVG(ped.[PercentMemory]) as [AvgPercentMemory],
                MIN(ped.[MemoryUsed]) as [MinMemoryUsed],
                MAX(ped.[MemoryUsed]) as [MaxMemoryUsed],
                AVG(ped.[MemoryUsed]) as [AvgMemoryUsed],
                MIN(ped.[PercentVirtualMemory]) as [MinPercentVirtualMemory],
                MAX(ped.[PercentVirtualMemory]) as [MaxPercentVirtualMemory],
                AVG(ped.[PercentVirtualMemory]) as [AvgPercentVirtualMemory],
                MIN(ped.[VirtualMemoryUsed]) as [MinVirtualMemoryUsed],
                MAX(ped.[VirtualMemoryUsed]) as [MaxVirtualMemoryUsed],
                AVG(ped.[VirtualMemoryUsed]) as [AvgVirtualMemoryUsed],
                MIN(ped.[InstanceCount]) as [MinInstanceCount],
                MAX(ped.[InstanceCount]) as [MaxInstanceCount],
                AVG(ped.[InstanceCount]) as [AvgInstanceCount],
                0 as [ErrorCode],
                SUM(ped.[RecordCount]) as [RecordCount],
                1 as [Archive]
            FROM APM_ProcessEvidence_Detail AS ped
            INNER JOIN APM_ComponentStatus_Detail csd on ped.ComponentStatusID = csd.ID
            WHERE (ped.Archive = 0)
            AND (csd.[TimeStamp] < @retainDate)
            GROUP BY csd.ComponentID, dbo.DateAndHourOnly(csd.[TimeStamp])
        ) as a
        --and join with latest name in timestamp aggregation group
        INNER JOIN #process_DetailToHourly b ON (a.ComponentID = b.ComponentID AND a.[LastTimestamp] = b.[TimeStamp])

        --Insert hourly data
        INSERT INTO APM_ProcessEvidence_Hourly ([ComponentStatusID],[Name],[MinPercentCPU],[MaxPercentCPU],[AvgPercentCPU],[MinPercentMemory],
            [MaxPercentMemory],[AvgPercentMemory],[MinMemoryUsed],[MaxMemoryUsed],[AvgMemoryUsed],[MinPercentVirtualMemory],
            [MaxPercentVirtualMemory],[AvgPercentVirtualMemory],[MinVirtualMemoryUsed],[MaxVirtualMemoryUsed],
            [AvgVirtualMemoryUsed],[MinInstanceCount],[MaxInstanceCount],[AvgInstanceCount],[ErrorCode],[RecordCount],[Archive])
        SELECT * from #process_DetailToHourly_retain

        --calculate and insert corresponding ComponentStatus
        INSERT INTO APM_ComponentStatus_Hourly ([ID],[ApplicationID],[ComponentID],[Availability],[PercentAvailability],[TimeStamp],[RecordCount],[Archive])
        SELECT
            MIN(csd.[ID]),
            csd.[ApplicationID],
            csd.[ComponentID],
            0 as [Availability],
            AVG(csd.[PercentAvailability]) AS [PercentAvailability],
            dbo.DateAndHourOnly(csd.[TimeStamp]) as [TimeStamp],
            SUM(csd.[RecordCount]),
            1 AS [Archive]
        FROM APM_ComponentStatus_Detail AS csd
        WHERE (csd.Archive = 0)
        AND (csd.[ID] IN (SELECT DISTINCT [ComponentStatusID] FROM #process_DetailToHourly))
        GROUP BY ComponentID, dbo.DateAndHourOnly([TimeStamp]), [ApplicationID]

        select @err = @@error if @err <> 0 return @err

        --Delete from evidences
        DELETE FROM APM_ProcessEvidence_Detail
        WHERE ([Archive] = 0) AND [ComponentStatusID] IN (SELECT DISTINCT [ComponentStatusID] FROM #process_DetailToHourly)

        --Delete from statuses
        DELETE FROM APM_ComponentStatus_Detail
        WHERE ([Archive] = 0)
        AND ([ID] IN (SELECT DISTINCT [ComponentStatusID] FROM #process_DetailToHourly))

        DROP TABLE #process_DetailToHourly
        DROP TABLE #process_DetailToHourly_retain
    END

    = 2nd

    ----------------------------------------------------

    CPU = 4849421, Reads = 7103857, Writes = 8, Duration = 3835915039

    ALTER PROCEDURE [dbo].[dbm_TrapMessages_DeleteStale]
        @date datetime
    AS
    BEGIN
        SET NOCOUNT ON;

        DECLARE @ChunkSize as Int
        DECLARE @CatchCount int;
        DECLARE @msg VARCHAR(max), @sev INT, @st INT
        DECLARE @RowCountVar Int

        Set @ChunkSize = 10000
        SET @CatchCount = 0;

        SET IMPLICIT_TRANSACTIONS OFF
        SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED
        SET ROWCOUNT @ChunkSize

        NextChunk:
        BEGIN TRY
            DELETE Traps FROM (SELECT TOP 10000 TrapID FROM Traps WHERE DateTime < @date) AS T1 WHERE Traps.TrapID=T1.TrapID
            SELECT @RowCountVar = @@ROWCOUNT
            DELETE TrapVarBinds FROM (SELECT TrapVarBinds.TrapID FROM TrapVarBinds LEFT JOIN Traps ON TrapVarBinds.TrapID=Traps.TrapID WHERE Traps.TrapID IS NULL) AS T1 WHERE(TrapVarBinds.TrapID = T1.TrapID)
            SET @CatchCount = 0; --Null catch count
        END TRY
        BEGIN CATCH
            SET @CatchCount = @CatchCount + 1; --Increment catch count
            IF (@CatchCount = 3) --If catch count reaches this value, re-raise the exception
            BEGIN
                SELECT @msg = ERROR_MESSAGE(), @sev = ERROR_SEVERITY(), @st = ERROR_STATE()
                RAISERROR (@msg, @sev, @st)
            END
            GOTO NextChunk;
        END CATCH

        IF @RowCountVar = @ChunkSize GOTO NextChunk

        SET ROWCOUNT 0

        INSERT INTO Events ( EventTime, Acknowledged, Message, EventType, NetworkNode, NetObjectID )
        VALUES ( GetDate(), 1, 'All Trap Messages older than ' + convert(varchar(50), @date) + ' were deleted during nightly database maintenance' , 1000 , 0 , 0 )
    END

    = 1st place, the query that I picked up earlier being the main problem ... most reads

    I'll post the top writes a little later.

    MCITP: Database Administrator 2005
    MCTS SQL Server 2008
    MCP SQL 2012/2014
    MCSA SQL Server 2012/2014
    MCSE Data Management and Analytics

  • Several of those queries are loading data into temporary tables and then loading the same data into permanent tables. Why? Can't they just run the query to move the data into permanent tables? I don't know if that's the principal cause of tempdb growth, but it sure isn't helping. Why write and read the data twice?
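    A stripped-down, hypothetical version of the pattern (table and column names made up) makes the point:

        -- The pattern in those procs: stage the rows in tempdb, then copy them again.
        DECLARE @retainDate datetime
        SET @retainDate = GETUTCDATE()

        SELECT col1, col2
        INTO #staging
        FROM dbo.DetailTable
        WHERE [TimeStamp] < @retainDate

        INSERT INTO dbo.HourlyTable (col1, col2)
        SELECT col1, col2 FROM #staging

        -- The same rows could go straight into the permanent table,
        -- touching tempdb far less:
        INSERT INTO dbo.HourlyTable (col1, col2)
        SELECT col1, col2
        FROM dbo.DetailTable
        WHERE [TimeStamp] < @retainDate

    (In the real procs the temp tables also feed the later DELETEs, so it wouldn't be a one-line change, but the double write is still the tempdb cost being pointed at here.)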

    "The credit belongs to the man who is actually in the arena, whose face is marred by dust and sweat and blood"
    - Theodore Roosevelt

    Author of:
    SQL Server Execution Plans
    SQL Server Query Performance Tuning
