How to manage temporary tables

  • Thanks, guys.

    @eric, wow, a 1GB temp file! I have 4 files at 500MB and the log is at 50MB.

    Excuse my question, but during a full backup, is anything written to the transaction log?

    Thanks

    KY

  • I've seen dozens of SQL Servers with tempdb set to initial sizes of 8MB data and 1MB log, with autogrowth rates of 1MB and no maxsize set.  That must cause thousands of fragments on disk, so it seems crazy to me.  I mostly use about 4 data files and a log file of 1GB.  Another mistake is not setting a maxsize on these files.  Several times I've seen servers brought to a halt hunting for more disk space because a developer wrote a Cartesian join query.  Perhaps some DBAs don't know file sizes can be set for tempdb and assume it uses model's sizes.
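
    For what it's worth, a minimal sketch of pre-sizing tempdb along those lines. The logical names tempdev and templog are the installer defaults, so check tempdb.sys.database_files for yours, and treat the sizes here as placeholders:

    -- Pre-size tempdb so it doesn't autogrow in tiny increments,
    -- and cap it so a runaway query can't fill the drive.
    ALTER DATABASE tempdb MODIFY FILE
        (NAME = tempdev, SIZE = 1GB, FILEGROWTH = 512MB, MAXSIZE = 8GB);
    ALTER DATABASE tempdb MODIFY FILE
        (NAME = templog, SIZE = 1GB, FILEGROWTH = 512MB, MAXSIZE = 8GB);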

  • hurricaneDBA - Thursday, April 27, 2017 11:51 AM

    Thanks guys @eric wow a 1GB temp file I have 4 files at 500MB and the log is at 50MB. Excuse my question but during a full backup is anything written to the TL? Thanks KY

    The transaction log is read and then truncated. When the log is truncated, the inactive portion is marked as reusable. The log file itself is not automatically shrunk, meaning the file will never automatically reduce in size, if that's what you're expecting. But you don't want the log file to shrink anyway.
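
    If you want to see the difference between the log file's size and the space actually in use inside it, a quick sketch (sys.dm_db_log_space_usage needs SQL Server 2012 or later):

    DBCC SQLPERF(LOGSPACE);   -- size and % used for every database log
    -- Or, for the current database only:
    SELECT total_log_size_in_bytes / 1048576 AS LogSizeMB,
           used_log_space_in_bytes / 1048576 AS UsedLogMB
    FROM sys.dm_db_log_space_usage;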

    "Do not seek to follow in the footsteps of the wise. Instead, seek what they sought." - Matsuo Basho

    Yeah, I already knew that its size doesn't shrink and that the log truncates, but I will investigate why the size jumped from 100MB to 2GB overnight when, according to the end user, there are no connections or activity on the database. There is one stored procedure run every day, and that's what is shown above. It uses temporary tables, so that could be a reason, but I need to prove it.
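
    For reference, one quick way to help prove it: sys.databases reports why each log can't yet be truncated (the database name below is a placeholder):

    SELECT name, log_reuse_wait_desc   -- e.g. NOTHING, LOG_BACKUP, ACTIVE_TRANSACTION
    FROM sys.databases
    WHERE name = N'YourDatabase';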

    Anyway, we'll see what happens by tomorrow. I'll provide an update, but I'll change the log file size to 1GB with 1GB increments, keeping the max size at 5GB, and see what happens.

    Thanks guys I wish you all a great week

    KY

  • hurricaneDBA - Thursday, April 27, 2017 12:57 PM

    Yeah I already knew that it's size doesn't shrink and the logs truncate but I will investigate why the size jumped from 100mb to 2gb overnight when there are no connections or activity on the database according to the end user. There was one stored procedure being run every day and that's what is shown above. Hence it uses temporary tables so that could be a reason but I need to prove this. Anyways we'll see what happens by tomorrow. I'll provide an update but I'll change the log file size to 1GB with 1GB increments keeping the max size at 5gb and see what happens. Thanks guys, I wish you all a great week. KY

    How much available disk storage do you have? Unless this is SQL Server Express Edition running on your laptop or an end user's desktop, 5 GB is a very small max size.

    Scheduled maintenance operations, like index rebuilds and reorganizations, can use tempdb space and drive log file growth in both the user database and tempdb. You can query the default trace to see who caused file growth events, on which files, and when:

    SELECT
        te.name AS event_name,
        tr.DatabaseName,
        tr.FileName,
        (tr.IntegerData * 8) / 1024 AS GrowthMB,
        tr.LoginName,
        tr.StartTime,
        tr.EndTime
    FROM sys.fn_trace_gettable(CONVERT(nvarchar(255), (SELECT value FROM sys.fn_trace_getinfo(0) WHERE property = 2)), 0) tr
    INNER JOIN sys.trace_events te ON tr.EventClass = te.trace_event_id
    WHERE tr.EventClass IN
    (
        92, -- Data File Auto Grow
        93, -- Log File Auto Grow
        94, -- Data File Auto Shrink
        95  -- Log File Auto Shrink
    )
    ORDER BY tr.EndTime DESC;

    "Do not seek to follow in the footsteps of the wise. Instead, seek what they sought." - Matsuo Basho

  • You know, I remember that the application support guy is running reorganization and index maintenance jobs. I'll check the jobs tomorrow to see what he is running, but my guess is this could be the reason for the logs being so full.
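
    For reference, msdb's job history shows what ran and when; a sketch (the step_id = 0 rows are the overall job outcomes):

    SELECT j.name AS job_name,
           h.run_date,       -- int, yyyymmdd
           h.run_time,       -- int, hhmmss
           h.run_duration,   -- int, hhmmss
           h.run_status      -- 1 = succeeded
    FROM msdb.dbo.sysjobs AS j
    INNER JOIN msdb.dbo.sysjobhistory AS h ON h.job_id = j.job_id
    WHERE h.step_id = 0
    ORDER BY h.run_date DESC, h.run_time DESC;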

    This is our production server, and the space capacity for logs is, I think, 10GB, so I can't use more than that unless I ask for more space.

    I'll set the TL to 5GB to test it first, then I'll see if I can get more space from the systems team.

    Thanks for the query; I'll run it tomorrow as well and let you know.

    KY

  • hurricaneDBA - Thursday, April 27, 2017 1:58 PM

    You know I remember that the application support guy is running reorganization and index maintenance jobs I'll check the jobs tomorrow to see what he is running but my guess is this could be the reason for the logs being so full. This is our production server and the space capacity for logs I think are 10GB so I can't use more than that unless I ask for more space. I'll set the TL to 5GB to test it first then I'll see if I can get more space from the systems team. Thanks for the query I'll run it tomorrow also and let you know. KY

    Heh... you should also try to convince your application support guy that "running reorganization and index maintenance jobs" is largely a waste of time.  Please see any article by any of Brent Ozar's group in the following Google search.  I thought they were drinking bong water through 2 straws, but I tried it to prove them wrong, and they're not wrong.  I haven't done index maintenance on my production box since 17 Jan 2016 (more than a year now), and performance actually got noticeably better over the first 3 months.  The important thing is to rebuild stats.
    https://www.google.com/#q=Should+I+worry+about+index+fragmentation
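
    If the index maintenance goes away, keeping statistics current still matters; a hedged sketch (the table name is a placeholder):

    -- Refresh statistics without rebuilding indexes:
    UPDATE STATISTICS dbo.YourBigTable WITH FULLSCAN;   -- placeholder table name
    -- or, more broadly, across the whole database:
    EXEC sys.sp_updatestats;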

    --Jeff Moden


    RBAR is pronounced "ree-bar" and is a "Modenism" for Row-By-Agonizing-Row.
    First step towards the paradigm shift of writing Set Based code:
    ________Stop thinking about what you want to do to a ROW... think, instead, of what you want to do to a COLUMN.

    Change is inevitable... Change for the better is not.


    Helpful Links:
    How to post code problems
    How to Post Performance Problems
    Create a Tally Function (fnTally)

  • Hi Everyone
    I checked, and it seems there are another 5 stored procedures which run daily and use temporary tables; below are 2 of them. Hence my original question: what should I set the tempdb file and log sizes to?
    I already have 4 data files of 500MB each and the log file at 50MB.
    Shall I increase the file sizes to 1GB each, or add 4 more files of 500MB each?
    Also, should the log file be 5GB, or should I increase the number of log files? (A query for inspecting the current tempdb layout appears after the second procedure below.)

    CREATE TABLE #routedata
    (
        [X_COORD] [numeric](20, 10) NOT NULL,
        [Y_COORD] [numeric](20, 10) NOT NULL,
        [MEASURE] [numeric](13, 3) NOT NULL,
        [ROUTE_GUID] [uniqueidentifier] NOT NULL,
        PRIMARY KEY (X_COORD, Y_COORD, ROUTE_GUID)
    );

    CREATE INDEX NCL_routedata_ROUTE_GUID_MEASURE
        ON #routedata (ROUTE_GUID, MEASURE) INCLUDE (X_COORD, Y_COORD);

    INSERT #routedata
        (X_COORD, Y_COORD, MEASURE, ROUTE_GUID)
    SELECT c.X_COORD,
           c.Y_COORD, /*c.Z_COORD,*/
           MEASURE = AVG(sp.MEASURE),
           sp.ROUTE_GUID
    FROM CENTERLINE_GEO_CROSS_REF cgcr,
         COORDINATE c,
         LOCATION l,
         STATION_POINT sp
    WHERE cgcr.COORDINATE_GUID = c.COORDINATE_GUID
      AND c.LOCATION_GUID = l.LOCATION_GUID
      AND l.LOCATION_GUID = sp.LOCATION_GUID
      AND (
            /* In this part all of the sections from the Qatar network are filtered. */
            sp.route_guid = '{E229D6A1-8D52-40E0-A74E-BC52F2F2FDC7}' OR
            sp.route_guid = '{AF6A7163-1078-408E-ADFE-BE868385B231}' OR
            sp.route_guid = '{0BB9DE41-E47D-4445-A7CB-355F288B31A5}' OR
            sp.route_guid = '{2FA49F74-E482-4CB5-9F3C-6A19D46FADB2}' OR
            sp.route_guid = '{AC9DEF31-8827-420E-A704-9F5626E60995}' OR
            sp.route_guid = '{017B3B2C-CCB4-437D-94D5-A130975A1CB2}' OR
            sp.route_guid = '{32D27E0D-49F7-4AE3-A139-C4D32D27199E}' OR
            sp.route_guid = '{D246CFA0-079C-45EE-9C44-F0AB174901BD}' OR
            sp.route_guid = '{ACF13846-E357-4694-8491-A98E539E612F}' OR
            sp.route_guid = '{7940CCD0-6737-4701-A76A-42DB04EDFD2A}' OR
            sp.route_guid = '{C4CE82B3-BB62-4E0C-9125-5E1CF1CC361B}'
          )
    GROUP BY X_COORD, Y_COORD, ROUTE_GUID;

    WITH routedata AS
    (
        SELECT X_COORD, Y_COORD, MEASURE, ROUTE_GUID
        FROM #routedata
    ),
    pointdata AS
    (
        SELECT SP.ROUTE_GUID,
               ER.EVENT_GUID,
               SP.MEASURE,
               /* Previous coordinate plus difference between next and previous times distance ratio = interpolated coordinate */
               (PrevCoord.X_COORD + (NextCoord.X_COORD - PrevCoord.X_COORD) * distance_ratio) InterpX,
               (PrevCoord.Y_COORD + (NextCoord.Y_COORD - PrevCoord.Y_COORD) * distance_ratio) InterpY
        FROM dbo.EVENT_RANGE AS ER
        LEFT JOIN dbo.STATION_POINT AS SP ON ER.STATION_GUID_BEGIN = SP.STATION_GUID
        /* Find the next and previous point on the route to interpolate the coordinates of the current measure */
        OUTER APPLY (SELECT TOP 1 *
                     FROM routedata RD
                     WHERE RD.MEASURE <= SP.MEASURE
                       AND RD.ROUTE_GUID = SP.ROUTE_GUID
                     ORDER BY MEASURE DESC) AS PrevCoord
        OUTER APPLY (SELECT TOP 1 *
                     FROM routedata RD
                     WHERE RD.MEASURE >= SP.MEASURE
                       AND RD.ROUTE_GUID = SP.ROUTE_GUID
                     ORDER BY MEASURE) AS NextCoord
        /* (current measure minus previous) / (next measure minus previous); if the divisor is 0, NULLIF/ISNULL substitutes 1 to avoid a divide-by-zero error */
        CROSS APPLY (SELECT (SP.MEASURE - PrevCoord.MEASURE) / ISNULL(NULLIF((NextCoord.MEASURE - PrevCoord.MEASURE), 0), 1) distance_ratio) AS dr
        WHERE STATION_GUID_BEGIN = STATION_GUID_END
    ),
    linedata AS
    (
        /* Find start and end of a route. MIN/MAX used because we have one entry per event (start or end of a line) */
        SELECT SP.ROUTE_GUID,
               ER.EVENT_GUID,
               MIN(SP.MEASURE) AS LineStart,
               MAX(SP.MEASURE) AS LineEnd
        FROM dbo.STATION_POINT AS SP
        INNER JOIN dbo.EVENT_RANGE AS ER ON SP.STATION_GUID = ER.STATION_GUID_BEGIN
                                         OR SP.STATION_GUID = ER.STATION_GUID_END
        WHERE STATION_GUID_BEGIN <> STATION_GUID_END
        GROUP BY SP.ROUTE_GUID, ER.EVENT_GUID
    ),
    interpdata AS
    (
        SELECT DISTINCT
               (LineStartPrevCoord.X_COORD + (LineStartNextCoord.X_COORD - LineStartPrevCoord.X_COORD) * distance_ratio_LineStart) StartInterpX,
               (LineStartPrevCoord.Y_COORD + (LineStartNextCoord.Y_COORD - LineStartPrevCoord.Y_COORD) * distance_ratio_LineStart) StartInterpY,
               --ISNULL((LineStartPrevCoord.Z_COORD + (LineStartNextCoord.Z_COORD - LineStartPrevCoord.Z_COORD) * distance_ratio_LineStart), 0) StartInterpZ,
               (LineEndPrevCoord.X_COORD + (LineEndNextCoord.X_COORD - LineEndPrevCoord.X_COORD) * distance_ratio_LineEnd) EndInterpX,
               (LineEndPrevCoord.Y_COORD + (LineEndNextCoord.Y_COORD - LineEndPrevCoord.Y_COORD) * distance_ratio_LineEnd) EndInterpY,
               --ISNULL((LineEndPrevCoord.Z_COORD + (LineEndNextCoord.Z_COORD - LineEndPrevCoord.Z_COORD) * distance_ratio_LineEnd), 0) EndInterpZ,
               LineStart,
               LineEnd,
               LD.ROUTE_GUID,
               EVENT_GUID
        FROM linedata LD
        /* Find previous and next coordinates for line start and line end to interpolate the coordinates of a LineStart and LineEnd */
        OUTER APPLY (SELECT TOP 1 *
                     FROM routedata RD
                     WHERE RD.MEASURE <= LineStart
                       AND RD.ROUTE_GUID = LD.ROUTE_GUID
                     ORDER BY RD.MEASURE DESC) AS LineStartPrevCoord
        OUTER APPLY (SELECT TOP 1 *
                     FROM routedata RD
                     WHERE RD.MEASURE >= LineStart
                       AND RD.ROUTE_GUID = LD.ROUTE_GUID
                     ORDER BY RD.MEASURE) AS LineStartNextCoord
        OUTER APPLY (SELECT TOP 1 *
                     FROM routedata RD
                     WHERE RD.MEASURE >= LineEnd
                       AND RD.ROUTE_GUID = LD.ROUTE_GUID
                     ORDER BY MEASURE) AS LineEndNextCoord
        OUTER APPLY (SELECT TOP 1 *
                     FROM routedata RD
                     WHERE RD.MEASURE <= LineEnd
                       AND RD.ROUTE_GUID = LD.ROUTE_GUID
                     ORDER BY MEASURE DESC) AS LineEndPrevCoord
        /* Same divide-by-zero guard as above, once for LineStart and once for LineEnd */
        CROSS APPLY (SELECT (LD.LineStart - LineStartPrevCoord.MEASURE) / ISNULL(NULLIF((LineStartNextCoord.MEASURE - LineStartPrevCoord.MEASURE), 0), 1) distance_ratio_LineStart) AS drls
        CROSS APPLY (SELECT (LD.LineEnd - LineEndPrevCoord.MEASURE) / ISNULL(NULLIF((LineEndNextCoord.MEASURE - LineEndPrevCoord.MEASURE), 0), 1) distance_ratio_LineEnd) AS drle
    ),
    prepdata AS
    (
        /* Create the valid geometry data - interpolated start and end coordinates plus a comma-separated list of route coordinates between these two points.
           - ISNULL guards the list in case there are no points between the interpolated start and end
           - Z coordinates are ignored here */
        SELECT DISTINCT
               ID.*,
               'LINESTRING(' + CAST(StartInterpX AS varchar(20)) + ' ' + CAST(StartInterpY AS varchar(20)) + ' '
                   + ISNULL(CAST('' /*StartInterpZ*/ AS varchar(20)), '0') + ' ' + CAST(LineStart AS varchar(20)) + ','
                   + ISNULL(STUFF((SELECT ',' + CAST(X_COORD AS varchar(20)) + ' ' + CAST(Y_COORD AS varchar(20)) + /*' ' + ISNULL(CAST(Z_COORD AS varchar(20)), '0') +*/ ' ' + CAST(MEASURE AS varchar(20))
                                   FROM routedata RD
                                   WHERE ID.ROUTE_GUID = RD.ROUTE_GUID
                                     AND RD.MEASURE > ID.LineStart
                                     AND RD.MEASURE < ID.LineEnd
                                   ORDER BY RD.ROUTE_GUID, RD.MEASURE
                                   FOR XML PATH('')), 1, 1, '') + ',', '')
                   + CAST(EndInterpX AS varchar(20)) + ' ' + CAST(EndInterpY AS varchar(20)) + ' '
                   + ISNULL(CAST('' /*EndInterpZ*/ AS varchar(20)), '0') + ' ' + CAST(LineEnd AS varchar(20)) + ')' geomdata
        FROM interpdata ID
    )
    SELECT ROUTE_GUID,
           EVENT_GUID,
           MEASURE AS EventStart,
           MEASURE AS EventEnd,
           geography::Point(InterpY, InterpX, @refsystem) AS GeomData,
           CAST(1 AS bit) Point
    INTO #PointsAndLines
    FROM pointdata
    WHERE InterpX IS NOT NULL
    UNION ALL
    SELECT ROUTE_GUID,
           EVENT_GUID,
           LineStart AS EventStart,
           LineEnd AS EventEnd,
           geography::STGeomFromText(geomdata, @refsystem) AS GeomData,
           CAST(0 AS bit) Point
    FROM prepdata;

    DROP TABLE #routedata;

    --------------------------------------------------------------------------
    --DECLARE @Route_GUID uniqueidentifier,
    --        @Event_Guid uniqueidentifier,
    --        @geomdata varchar(max),
    --        @geo geography;
    ----DECLARE @refsystem int = 4326;
    ----SELECT * INTO #t2 FROM #tmp
    --WHILE EXISTS (SELECT * FROM #tmp)
    --BEGIN
    --    SELECT TOP 1 @Route_GUID = Route_GUID,
    --                 @EVENT_GUID = EVENT_GUID,
    --                 --@EventStart = EventStart,
    --                 --@EventEnd = EventEnd,
    --                 --geography::STGeomFromText(geomdata, @refsystem) AS GeomData,
    --                 @geomdata = geomdata
    --    FROM #tmp
    --    PRINT 'Route_GUID:'
    --    PRINT @Route_GUID
    --    PRINT 'EVENT_GUID:'
    --    PRINT @EVENT_GUID
    --    PRINT '@geomdata:'
    --    PRINT @geomdata
    --    SELECT @Geo = geography::STGeomFromText(@geomdata, @refsystem)
    --    --PRINT @Geo
    --    DELETE FROM #tmp WHERE Route_GUID = @ROUTE_GUID AND Event_GUID = @Event_GUID
    --END
    --------------------------------------------------------------------------

    /* Push the line and point data into separate tables (we can only perform a single insert from the CTE above, so we use a temporary table as a clipboard) */
    INSERT INTO dbo.GEOM_EVENT_POINT
    SELECT ROUTE_GUID,
           EVENT_GUID,
           EventStart,
           EventEnd,
           GeomData
    FROM #PointsAndLines
    WHERE Point = 1;

    INSERT INTO dbo.GEOM_EVENT_LINE
    SELECT ROUTE_GUID,
           EVENT_GUID,
           EventStart,
           EventEnd,
           GeomData
    FROM #PointsAndLines
    WHERE Point = 0;

    DROP TABLE #PointsAndLines;

    PRINT 'Done';

    ####################################

    ALTER PROCEDURE [dbo].[BuildFreeSpan]
    AS
    BEGIN
        /* Read the PipeInteg sections into a temporary table */
        SELECT PS.Id,
               PS.TechnicalArea
        INTO #TempPipeintegSection
        FROM [ROPIMS].[PipeInteg].[Section] AS PS;

        PRINT N'Section table from ROPIMS are loaded';

        /* Check whether the final table already exists */
        IF NOT EXISTS (SELECT * FROM information_schema.tables WHERE TABLE_NAME = 'Tbl_FreeSpan')
            CREATE TABLE Tbl_FreeSpan
            (
                ID uniqueidentifier,
                Network_Name nvarchar(50),
                Line_GUID uniqueidentifier,
                Section_GUID uniqueidentifier,
                xxKeyxx nvarchar(50),
                xxValuexx int
            );

        PRINT N'Tbl_FreeSpan is created.';

        DELETE FROM Tbl_FreeSpan;

        PRINT N'Tbl_FreeSpan is erased.';

        /* Count free spans and write them into a temporary table */
        SELECT NEWID() AS ID,
               PS.TechnicalArea AS Network_Name,
               SP.LINE_GUID,
               SP.ROUTE_GUID AS Section_GUID,
               'Free_Spans' AS 'xxKeyxx',
               ISNULL(COUNT(FS.FREE_SPAN_GUID), 0) AS 'xxValuexx'
        INTO #TempFreeSpan
        FROM #TempPipeintegSection AS PS
        INNER JOIN [dbo].Station_Point AS SP
            ON PS.Id = SP.ROUTE_GUID
        LEFT JOIN [dbo].[EVENT_RANGE] AS ER
            ON ER.STATION_GUID_BEGIN = SP.STATION_GUID AND ER.STATION_GUID_END = SP.STATION_GUID
        LEFT JOIN [ROPIMS].[PipeInteg].[FreeSpan_Screening] AS FS
            ON ER.EVENT_GUID = FS.FREE_SPAN_GUID
        LEFT JOIN [ROPIMS].[PipeInteg].[FreeSpan_Fatigue] AS FAT
            ON FS.Free_Span_GUID = FAT.Free_Span_GUID
        LEFT JOIN [ROPIMS].[PipeInteg].[FreeSpan_ULS_Check] AS ULS
            ON ULS.Free_Span_GUID = FS.Free_Span_GUID
        WHERE (FS.result = 'Ok' AND ULS.result = 'NotOk') OR
              (FS.result = 'NotOk' AND FAT.FatigueAssessment = 'Ok' AND ULS.result = 'NotOk') OR
              (FS.result = 'NotOk' AND FAT.FatigueAssessment = 'NotOk')
        GROUP BY SP.LINE_GUID,
                 SP.ROUTE_GUID,
                 PS.TechnicalArea;

        /* Write the calculated results into ROPODS */
        INSERT INTO Tbl_FreeSpan
        SELECT ID, Network_Name, Line_GUID, Section_GUID, xxKeyxx, xxValuexx
        FROM #TempFreeSpan;

        PRINT N'Data is loaded into Tbl_FreeSpan.';

        /* Drop the temporary tables */
        DROP TABLE #TempPipeintegSection, #TempFreeSpan;

        PRINT N'Temporary tables are removed.';
    END

    Do let me know what to set the sizes to.
    Have a good day
    KY
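
    As promised above, a sketch for inspecting the current tempdb file layout before deciding on new sizes (sizes in sys.database_files are in 8KB pages):

    SELECT name,
           type_desc,
           size * 8 / 1024 AS SizeMB,
           CASE WHEN is_percent_growth = 1
                THEN CAST(growth AS varchar(10)) + '%'
                ELSE CAST(growth * 8 / 1024 AS varchar(10)) + ' MB'
           END AS Growth,
           CASE max_size WHEN -1 THEN 'Unlimited'
                ELSE CAST(max_size * 8 / 1024 AS varchar(10)) + ' MB'
           END AS MaxSize
    FROM tempdb.sys.database_files;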

  • Hi Eric
    I ran your script, and this is what I got (keeping the username hidden; sorry, I can't include the name).
    Kindly note that I don't have TL backup jobs after 4pm, as the application isn't used except between 7 and 3, but the log file is huge, as you can see below: growth starts at 12am, which might correlate with the stored procedures I mentioned earlier.

    event_name DatabaseName FileName GrowthMB StartTime EndTime
    Log File Auto Grow ROPODS Demo_ROPODS_log 196 2017-04-28 00:34:13.993 2017-04-28 00:34:14.453
    Log File Auto Grow ROPODS Demo_ROPODS_log 178 2017-04-28 00:34:01.457 2017-04-28 00:34:01.860
    Log File Auto Grow tempdb templog 6 2017-04-28 00:33:47.297 2017-04-28 00:33:47.420
    Log File Auto Grow tempdb templog 5 2017-04-28 00:33:38.527 2017-04-28 00:33:38.550
    Log File Auto Grow tempdb templog 5 2017-04-28 00:33:32.007 2017-04-28 00:33:32.037
    Log File Auto Grow ROPODS Demo_ROPODS_log 162 2017-04-28 00:04:54.753 2017-04-28 00:04:55.167
    Log File Auto Grow ROPODS Demo_ROPODS_log 147 2017-04-28 00:04:49.770 2017-04-28 00:04:50.110
    Log File Auto Grow ROPODS Demo_ROPODS_log 134 2017-04-28 00:04:44.620 2017-04-28 00:04:44.950
    Log File Auto Grow ROPODS Demo_ROPODS_log 122 2017-04-28 00:04:40.373 2017-04-28 00:04:40.643
    Log File Auto Grow ROPODS Demo_ROPODS_log 110 2017-04-28 00:04:32.627 2017-04-28 00:04:32.887
    Log File Auto Grow ROPODS Demo_ROPODS_log 100 2017-04-28 00:04:28.200 2017-04-28 00:04:28.417
    Log File Auto Grow ROPODS Demo_ROPODS_log 91 2017-04-28 00:04:26.193 2017-04-28 00:04:26.410
    Log File Auto Grow ROPODS Demo_ROPODS_log 83 2017-04-28 00:04:24.267 2017-04-28 00:04:24.447
    Log File Auto Grow ROPODS Demo_ROPODS_log 75 2017-04-28 00:04:22.277 2017-04-28 00:04:22.447
    Log File Auto Grow ROPODS Demo_ROPODS_log 68 2017-04-28 00:04:20.877 2017-04-28 00:04:21.030
    Log File Auto Grow ROPODS Demo_ROPODS_log 62 2017-04-28 00:04:17.020 2017-04-28 00:04:17.160
    Log File Auto Grow ROPODS Demo_ROPODS_log 56 2017-04-28 00:04:08.420 2017-04-28 00:04:08.553
    Log File Auto Grow ROPODS Demo_ROPODS_log 51 2017-04-28 00:04:03.173 2017-04-28 00:04:03.297
    Log File Auto Grow ROPODS Demo_ROPODS_log 47 2017-04-28 00:03:57.070 2017-04-28 00:03:57.170
    Log File Auto Grow ROPODS Demo_ROPODS_log 42 2017-04-28 00:03:51.953 2017-04-28 00:03:52.073
    Log File Auto Grow ROPODS Demo_ROPODS_log 38 2017-04-28 00:03:49.213 2017-04-28 00:03:49.307
    Log File Auto Grow ROPODS Demo_ROPODS_log 35 2017-04-28 00:03:44.627 2017-04-28 00:03:44.703
    Log File Auto Grow ROPODS Demo_ROPODS_log 32 2017-04-28 00:03:40.440 2017-04-28 00:03:40.533
    Log File Auto Grow ROPODS Demo_ROPODS_log 29 2017-04-28 00:03:37.060 2017-04-28 00:03:37.150
    Log File Auto Grow ROPODS Demo_ROPODS_log 26 2017-04-28 00:03:32.810 2017-04-28 00:03:32.890
    Log File Auto Grow ROPODS Demo_ROPODS_log 24 2017-04-28 00:03:28.983 2017-04-28 00:03:29.043
    Log File Auto Grow ROPODS Demo_ROPODS_log 21 2017-04-28 00:03:25.890 2017-04-28 00:03:25.957
    Log File Auto Grow ROPODS Demo_ROPODS_log 19 2017-04-28 00:03:21.227 2017-04-28 00:03:21.307
    Log File Auto Grow ROPODS Demo_ROPODS_log 18 2017-04-28 00:03:19.657 2017-04-28 00:03:19.707
    Log File Auto Grow ROPODS Demo_ROPODS_log 16 2017-04-28 00:03:18.330 2017-04-28 00:03:18.377
    Log File Auto Grow ROPODS Demo_ROPODS_log 15 2017-04-28 00:03:16.090 2017-04-28 00:03:16.127
    Log File Auto Grow ROPODS Demo_ROPODS_log 13 2017-04-28 00:03:11.283 2017-04-28 00:03:11.323
    Log File Auto Grow ROPODS Demo_ROPODS_log 12 2017-04-28 00:03:10.780 2017-04-28 00:03:10.820
    Log File Auto Grow ROPODS Demo_ROPODS_log 11 2017-04-28 00:03:10.287 2017-04-28 00:03:10.323
    Log File Auto Grow ROPODS Demo_ROPODS_log 10 2017-04-28 00:03:09.460 2017-04-28 00:03:09.557
    Log File Auto Grow ROPIMS Demo_ROPIMS_log 110 2017-04-28 00:02:54.820 2017-04-28 00:02:55.073
    Log File Auto Grow ROPIMS Demo_ROPIMS_log 100 2017-04-28 00:02:47.643 2017-04-28 00:02:47.917
    Log File Auto Grow ROPIMS Demo_ROPIMS_log 91 2017-04-28 00:02:42.313 2017-04-28 00:02:42.520
    Log File Auto Grow ROPIMS Demo_ROPIMS_log 83 2017-04-28 00:02:31.213 2017-04-28 00:02:31.427
    Log File Auto Grow ROPIMS Demo_ROPIMS_log 75 2017-04-28 00:02:26.247 2017-04-28 00:02:26.480
    Log File Auto Grow ROPIMS Demo_ROPIMS_log 68 2017-04-28 00:02:21.433 2017-04-28 00:02:21.580
    Log File Auto Grow ROPIMS Demo_ROPIMS_log 62 2017-04-28 00:02:15.843 2017-04-28 00:02:16.017
    Log File Auto Grow ROPIMS Demo_ROPIMS_log 56 2017-04-28 00:02:11.337 2017-04-28 00:02:11.497
    Log File Auto Grow ROPIMS Demo_ROPIMS_log 51 2017-04-28 00:02:08.203 2017-04-28 00:02:08.400
    Log File Auto Grow ROPIMS Demo_ROPIMS_log 47 2017-04-28 00:02:00.447 2017-04-28 00:02:00.560
    Log File Auto Grow ROPIMS Demo_ROPIMS_log 42 2017-04-28 00:01:54.410 2017-04-28 00:01:54.593
    Log File Auto Grow ROPIMS Demo_ROPIMS_log 38 2017-04-28 00:01:45.767 2017-04-28 00:01:45.880
    Log File Auto Grow ROPIMS Demo_ROPIMS_log 35 2017-04-28 00:01:41.607 2017-04-28 00:01:41.703
    Log File Auto Grow ROPIMS Demo_ROPIMS_log 32 2017-04-28 00:01:38.663 2017-04-28 00:01:38.843
    Log File Auto Grow ROPIMS Demo_ROPIMS_log 29 2017-04-28 00:01:35.297 2017-04-28 00:01:35.397
    Log File Auto Grow ROPIMS Demo_ROPIMS_log 26 2017-04-28 00:01:32.920 2017-04-28 00:01:32.997
    Log File Auto Grow ROPIMS Demo_ROPIMS_log 24 2017-04-28 00:01:31.570 2017-04-28 00:01:31.730
    Log File Auto Grow ROPIMS Demo_ROPIMS_log 21 2017-04-28 00:01:30.353 2017-04-28 00:01:30.527
    Log File Auto Grow ROPIMS Demo_ROPIMS_log 19 2017-04-28 00:01:28.640 2017-04-28 00:01:28.777
    Log File Auto Grow ROPIMS Demo_ROPIMS_log 18 2017-04-28 00:01:26.420 2017-04-28 00:01:26.477
    Log File Auto Grow ROPIMS Demo_ROPIMS_log 16 2017-04-28 00:01:24.703 2017-04-28 00:01:24.810
    Log File Auto Grow ROPIMS Demo_ROPIMS_log 15 2017-04-28 00:01:23.373 2017-04-28 00:01:23.470
    Log File Auto Grow ROPIMS Demo_ROPIMS_log 13 2017-04-28 00:01:22.410 2017-04-28 00:01:22.460
    Log File Auto Grow ROPIMS Demo_ROPIMS_log 12 2017-04-28 00:01:21.320 2017-04-28 00:01:21.383
    Log File Auto Grow ROPIMS Demo_ROPIMS_log 11 2017-04-28 00:01:20.693 2017-04-28 00:01:20.727
    Log File Auto Grow ROPIMS Demo_ROPIMS_log 10 2017-04-28 00:01:19.630 2017-04-28 00:01:19.670
    Log File Auto Grow ROCOMMON Demo_ROCOMMON_log 19 2017-04-27 18:39:27.047 2017-04-27 18:39:27.173

    thanks
    KY

  • Set the tempdb data and log files to the size they need to be.  That'll be the size they've grown to, so don't shrink them.  Don't create extra log files - there is no performance benefit.  Only create extra data files if you're seeing IAM page latch contention.  If you're doing index maintenance overnight, don't stop backing up the log at 4pm.  And, as I and others have already said, fix the code if it's in your power to do so.
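
    For "IAM page latch contention," a rough sketch of what to look for while the workload runs (resource_description is dbid:fileid:pageid, and database_id 2 is tempdb):

    SELECT session_id, wait_type, wait_duration_ms, resource_description
    FROM sys.dm_os_waiting_tasks
    WHERE wait_type LIKE 'PAGELATCH%'
      AND resource_description LIKE '2:%';   -- waits on tempdb pages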

    John

  • Thanks, John.
    I don't have the power to change the code, but I will email the vendor to ask for changes if possible.
    Now I will set the log file size for the specific databases to the size they grow to, which is 2GB, and I'll make the tempdb log file 5GB as mentioned, and see what happens.
    I will also move the TL backup jobs to run during the evening and see what results.
    Thank you
    KY

  • Hi Everyone
    Just to give an update: I disabled all the stored procedures which were running, and to my surprise the log file still grew, to 2GB for one database and 1GB for another, when I ran the TL backups this morning at 7am.
    This doesn't make any sense, as we aren't running any queries or jobs or anything between 5pm and 7am.
    I checked the error log and found this:

    AppDomain xxxxxxxxxxxxxxxx is marked for unload due to memory pressure.
    04/29/2017 00:00:29,spid18s,Unknown,This instance of SQL Server has been using a process ID of 1612 since 4/11/2017 4:01:24 PM (local) 4/11/2017 1:01:24 PM (UTC). This is an informational message only; no user action is required.
    I will check whether any jobs run from the application side, but how can I check from the DB side whether any applications are accessing the database after 5pm?
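
    For reference, a sketch of one way to answer that from the DB side: sys.dm_exec_sessions lists current connections, and capturing it on a schedule after 5pm would show who connects:

    SELECT session_id, login_name, host_name, program_name,
           login_time, last_request_end_time
    FROM sys.dm_exec_sessions
    WHERE is_user_process = 1;   -- exclude system sessions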

    thanks
    have a good weekend
    KY

  • hurricaneDBA - Saturday, April 29, 2017 3:35 AM

    Hi Everyone
    Just to give an update i disabled all the stored procedures which were running and to my surprise the log file still grew to 2GB for 1 database and 1GB for another when i ran the TL backups this morning at 7am
    This doesnt make any sense as we arent running any queries or jobs or anything between 5pm and 7am
    I checked in the error log and i found this error:

    AppDomain xxxxxxxxxxxxxxxx is marked for unload due to memory pressure.
    04/29/2017 00:00:29,spid18s,Unknown,This instance of SQL Server has been using a process ID of 1612 since 4/11/2017 4:01:24 PM (local) 4/11/2017 1:01:24 PM (UTC). This is an informational message only; no user action is required.
    I will check if there are any applications job which run from the application side but how can i check from the DB side if there are any applications accessing the database after 5pm?

    thanks
    have a good weekend
    KY

    Don't have time to read the entire thread, but I am just not getting why you are concerned with 2GB and 1GB log files. 

    As for finding what is running, start a simple Profiler trace to local disk on the server, capturing just the RPC:Completed and SQL:BatchCompleted events. Easy, peasy, very lightweight.
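
    On SQL Server 2012 or later, an Extended Events session is an even lighter alternative; a sketch (the session name and file path are placeholders):

    CREATE EVENT SESSION WhoRunsOvernight ON SERVER
    ADD EVENT sqlserver.rpc_completed,
    ADD EVENT sqlserver.sql_batch_completed
    ADD TARGET package0.event_file (SET filename = N'C:\Traces\WhoRunsOvernight.xel');

    ALTER EVENT SESSION WhoRunsOvernight ON SERVER STATE = START;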

    Best,
    Kevin G. Boles
    SQL Server Consultant
    SQL MVP 2007-2012
    TheSQLGuru on googles mail service

  • I was once troubleshooting an ETL process where tempdb kept filling up the data drive; in that case they had all the data and log files on a single 100 GB drive. It was a Friday afternoon, so I asked for a new TB-sized drive dedicated just to the tempdb data and log files, thinking that whatever they were doing going forward, this should suffice. So I monitored the process over the weekend (it was the only thing running), and to my horror I watched it drag on for more than 24 hours and then on into Sunday, until the sucker finally filled up the new tempdb drive. As it turns out, they were doing a non-indexed Cartesian join between two large tables, and the resulting hash join simply spooled over a TB worth of temp data before collapsing. I don't know if any amount of disk space could have supported that query in its original form. So it's always important to look at the query execution plan and confirm whether it's doing something that can be optimized.
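
    While something like that is running, a sketch for spotting which session is consuming tempdb (page counts are in 8KB pages):

    SELECT session_id,
           SUM(internal_objects_alloc_page_count) * 8 / 1024 AS internal_MB,  -- sorts, hash spills, spools
           SUM(user_objects_alloc_page_count) * 8 / 1024 AS user_MB           -- #temp tables, table variables
    FROM sys.dm_db_task_space_usage
    GROUP BY session_id
    ORDER BY internal_MB DESC;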

    "Do not seek to follow in the footsteps of the wise. Instead, seek what they sought." - Matsuo Basho

  • From what I've read, yes, you should configure tempdb to handle the load on the server.  That's a truism that applies to any SQL Server.

    However, I believe the key is in the code.  Give me a beefy server and I'll probably be able to come up with a way to cause pain.  You need to figure out everything that's running and when, then fix the problems.
