Performance of system tables when loading views and procedures.

  • I have a database with about 2,000,000 objects in it. Yes, I know it's a lot, but this is what I have to work with.

    About 150,000 tables

    About 150,000 views

    About 150,000 procedures

    When we load a new version of a view or a procedure, it takes much longer than it does in a database with a tenth of the objects.

    Generally this isn't an issue, but when we are uploading new versions of all the views and procedures it takes far too long.

    Does anyone know how I can improve the performance of this upgrade?

  • ...uploading new versions of all the views and procedures...

    Why are you doing this, out of interest?

    The absence of evidence is not evidence of absence
    - Martin Rees
    The absence of consumable DDL, sample data and desired results is, however, evidence of the absence of my response
    - Phil Parkin

  • The product I work on has a set of objects per business entity, and some customers have 200+ business entities. When a customer moves from one version of our product to another we have to migrate the data from one version of the tables to the new one. All views, procedures, and triggers that have changed (this time they all have changes) also have to be reloaded.

  • Have you resolved the issue yet?

    If not, can you check as an example

    sp_spaceused 'sys.sysschobjs'

    In my prod DB that table has 5000 rows and is 3 MB.

    If you have 2 million objects, that is roughly 400 times as many rows, so I am guessing your system table (including indexes) is around 1.2 GB.

    Could be that joins are occurring in the background between large system tables.

    The system tables should be in memory, but at your scale they may not be, so you may need to preload them before the deployment to prevent random cold reads from disk; see the sketch after the query below.

    To check for other tables which could be pain points, see which of the system tables have the highest row counts, as below.

    select top 10 s.name + '.' + o.name, *
    from sys.partitions i
    inner join sys.objects o on o.object_id = i.object_id
    inner join sys.schemas s on s.schema_id = o.schema_id
    where s.name = 'sys' and i.index_id = 1
    order by rows desc
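
    If it does come to preloading, here is a rough sketch of what I have in mind. You cannot read the sys.sys% base tables directly outside the DAC, but scanning the catalog views built on top of them should pull much the same pages into the buffer pool; the view-to-base-table mapping in the comments is only my best guess.

    -- Rough cache-warming sketch: scan the catalog views so the pages of the
    -- underlying system base tables are in the buffer pool before the deployment.
    SELECT COUNT(*) FROM sys.objects;                               -- sys.sysschobjs
    SELECT COUNT(*) FROM sys.columns;                               -- sys.syscolpars
    SELECT COUNT(*) FROM sys.index_columns;                         -- sys.sysiscols
    SELECT COUNT(*) FROM sys.indexes;                               -- sys.sysidxstats
    SELECT SUM(DATALENGTH(m.definition)) FROM sys.sql_modules AS m; -- sys.sysobjvalues (module definitions, LOB)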

    Also, which tool was being used for deployments of the 2 million changes?

    Could be the tool is overwhelmed.

    Catch-all queries done right
    Gail Shaw's Performance Blog

  • Thanks for your reply, much appreciated. Here are the details of the system data.

    sp_spaceused 'sys.sysschobjs'
    Rows = 2155329
    Reserved = 899744 KB
    Data = 318088 KB
    Index = 574984 KB
    Unused = 6672 KB

    Details about the other tables:

    sys.syscolpars - rows 21423310 - reserved 47560 KB
    sys.sysmultiobjrefs - rows 16586992 - reserved 21648 KB
    sys.sysrscols - rows 4497657 - reserved 3720 KB
    sys.sysschobjs - rows 2155329 - reserved 11808 KB
    sys.sysobjvalues - rows 2131434 - reserved 54360 KB
    sys.syssoftobjrefs - rows 1396279 - reserved 2952 KB
    sys.syssingleobjrefs - rows 1271440 - reserved 1232 KB
    sys.sysiscols - rows 952050 - reserved 1232 KB
    sys.sysidxstats - rows 548805 - reserved 2120 KB
    sys.sysallocunits - rows 508128 - reserved 968 KB

    Is there a way that I can force the system tables to always be in memory?

    Apart from selecting all the data into a temp table, is there a better way to preload the system tables into memory?

    The tool for applying the SQL scripts to the database is an in-house one, and it seems to perform OK; we run three threads, which from our testing seems to be the optimal number. Do you have any suggestions?

    Thanks for your reply.

  • I'm including Eirikur Eiriksson's reply to a different thread as that has a great example of how warming the cache works.

    So try doing this for the largest system tables.

    Remember, when warming the cache, to touch not only the table but also all of its indexes; it won't help if the table is cached but the engine still has to do random physical reads when updating the indexes.

    You may need to add a query hint to force index usage.
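
    For an ordinary user table, that might look something like the sketch below (the table and index names are invented purely for illustration); for the system base tables you would have to go through the catalog views instead, since they cannot be queried directly outside the DAC.

    -- Hypothetical example: warm the clustered index and a nonclustered index
    -- before the big operation, forcing each index with a hint so neither is skipped.
    SELECT COUNT(*) FROM dbo.SomeBigTable WITH (INDEX(1));                    -- clustered index
    SELECT COUNT(*) FROM dbo.SomeBigTable WITH (INDEX(IX_SomeBigTable_Col1)); -- nonclustered index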

    If warming the cache doesn't work, then you may need advice from the Monsters lurking on this forum, as this topic would be beyond many of us mere peons.

    Eirikur Eiriksson (11/1/2016)


    In most cases we will see up to 2 times better performance when applying this method; more commonly the difference is greater. Here is an example, and the results speak for themselves.

    😎

    Simple test harness

    USE TEEST;
    GO
    SET NOCOUNT ON;
    DBCC FREEPROCCACHE WITH NO_INFOMSGS;

    --/* -- UNCOMMENT THIS LINE TO SKIP THE RECREATION OF THE TEST DATA SET
    --DECLARE @SAMPLE_SIZE INT = 10;
    IF OBJECT_ID('dbo.TBL_TEST_DELETE') IS NOT NULL DROP TABLE dbo.TBL_TEST_DELETE;
    CREATE TABLE dbo.TBL_TEST_DELETE
    (
        TD_ID INT IDENTITY(1,1) NOT NULL CONSTRAINT PK_DBO_TBL_TEST_DELETE_TD_ID PRIMARY KEY CLUSTERED
       ,TD_TS DATETIME NOT NULL CONSTRAINT DFLT_DBO_TBL_TEST_DELETE_TD_TS DEFAULT (GETDATE())
       ,TD_C001 VARCHAR(4000) NOT NULL
       ,TD_C002 VARCHAR(4000) NOT NULL
       ,TD_C003 VARCHAR(4000) NOT NULL
       ,TD_C004 VARCHAR(4000) NOT NULL
       ,TD_C005 VARCHAR(4000) NOT NULL
       ,TD_C006 VARCHAR(4000) NOT NULL
       ,TD_C007 VARCHAR(4000) NOT NULL
       ,TD_C008 VARCHAR(4000) NOT NULL
       ,TD_C009 VARCHAR(4000) NOT NULL
       ,TD_C010 VARCHAR(4000) NOT NULL
    );
    DECLARE @SAMPLE_SIZE INT = 100000;
    ;WITH T(N) AS (SELECT N FROM (VALUES (0),(0),(0),(0),(0),(0),(0),(0),(0),(0)) X(N))
    , NUMS(N) AS (SELECT TOP(@SAMPLE_SIZE) ROW_NUMBER() OVER (ORDER BY @@VERSION) AS N
                  FROM T T1,T T2,T T3,T T4,T T5,T T6,T T7,T T8,T T9)
    INSERT INTO dbo.TBL_TEST_DELETE(TD_TS,TD_C001,TD_C002,TD_C003,TD_C004,TD_C005,TD_C006,TD_C007,TD_C008,TD_C009,TD_C010)
    SELECT
        DATEADD(SECOND,NM.N,GETDATE())
       ,REPLICATE('S',4000)
       ,REPLICATE('A',4000)
       ,REPLICATE('M',4000)
       ,REPLICATE('P',4000)
       ,REPLICATE('L',4000)
       ,REPLICATE('E',4000)
       ,REPLICATE('D',4000)
       ,REPLICATE('A',4000)
       ,REPLICATE('T',4000)
       ,REPLICATE('A',4000)
    FROM NUMS NM;
    --*/

    CHECKPOINT 1;
    DBCC DROPCLEANBUFFERS WITH NO_INFOMSGS;
    DBCC FREESYSTEMCACHE ( 'ALL' ) WITH NO_INFOMSGS;
    RAISERROR('COLD DELETE 1
    -----------------------------------------------------------------------------------------------------',0,0) WITH NOWAIT;
    SET STATISTICS IO,TIME ON;
    DELETE X
    FROM dbo.TBL_TEST_DELETE X
    WHERE X.TD_ID < 10001;
    SET STATISTICS IO,TIME OFF;

    CHECKPOINT 1;
    DBCC DROPCLEANBUFFERS WITH NO_INFOMSGS;
    DBCC FREESYSTEMCACHE ( 'ALL' ) WITH NO_INFOMSGS;
    RAISERROR('HOT DELETE
    -----------------------------------------------------------------------------------------------------',0,0) WITH NOWAIT;
    SET STATISTICS IO,TIME ON;
    DECLARE @TD_ID INT;
    DECLARE @TD_TS DATETIME;
    DECLARE @TD_C001 VARCHAR(4000);
    DECLARE @TD_C002 VARCHAR(4000);
    DECLARE @TD_C003 VARCHAR(4000);
    DECLARE @TD_C004 VARCHAR(4000);
    DECLARE @TD_C005 VARCHAR(4000);
    DECLARE @TD_C006 VARCHAR(4000);
    DECLARE @TD_C007 VARCHAR(4000);
    DECLARE @TD_C008 VARCHAR(4000);
    DECLARE @TD_C009 VARCHAR(4000);
    DECLARE @TD_C010 VARCHAR(4000);
    SELECT
        @TD_ID = TD.TD_ID
       ,@TD_TS = TD.TD_TS
       ,@TD_C001 = TD.TD_C001
       ,@TD_C002 = TD.TD_C002
       ,@TD_C003 = TD.TD_C003
       ,@TD_C004 = TD.TD_C004
       ,@TD_C005 = TD.TD_C005
       ,@TD_C006 = TD.TD_C006
       ,@TD_C007 = TD.TD_C007
       ,@TD_C008 = TD.TD_C008
       ,@TD_C009 = TD.TD_C009
       ,@TD_C010 = TD.TD_C010
    FROM dbo.TBL_TEST_DELETE TD
    WHERE TD.TD_ID < 20001;
    DELETE X
    FROM dbo.TBL_TEST_DELETE X
    WHERE X.TD_ID < 20001;
    SET STATISTICS IO,TIME OFF;

    CHECKPOINT 1;
    DBCC DROPCLEANBUFFERS WITH NO_INFOMSGS;
    DBCC FREESYSTEMCACHE ( 'ALL' ) WITH NO_INFOMSGS;
    RAISERROR('COLD DELETE 2
    -----------------------------------------------------------------------------------------------------',0,0) WITH NOWAIT;
    SET STATISTICS IO,TIME ON;
    DELETE X
    FROM dbo.TBL_TEST_DELETE X
    WHERE X.TD_ID < 30001;
    SET STATISTICS IO,TIME OFF;

    Output

    COLD DELETE 1
    -----------------------------------------------------------------------------------------------------
    SQL Server parse and compile time:
    CPU time = 155 ms, elapsed time = 306 ms.
    Table 'TBL_TEST_DELETE'. Scan count 1, logical reads 90367, physical reads 462, read-ahead reads 6223, lob logical reads 90000, lob physical reads 6756, lob read-ahead reads 0.
    SQL Server Execution Times:
    CPU time = 1125 ms, elapsed time = 7132 ms.

    HOT DELETE
    -----------------------------------------------------------------------------------------------------
    SQL Server parse and compile time:
    CPU time = 345 ms, elapsed time = 1219 ms.
    Table 'TBL_TEST_DELETE'. Scan count 1, logical reads 12620, physical reads 1, read-ahead reads 2666, lob logical reads 90000, lob physical reads 5620, lob read-ahead reads 0.
    SQL Server Execution Times:
    CPU time = 297 ms, elapsed time = 3144 ms.
    SQL Server parse and compile time:
    CPU time = 0 ms, elapsed time = 1 ms.
    Table 'TBL_TEST_DELETE'. Scan count 1, logical reads 107747, physical reads 0, read-ahead reads 0, lob logical reads 90000, lob physical reads 0, lob read-ahead reads 0.
    SQL Server Execution Times:
    CPU time = 906 ms, elapsed time = 1810 ms.

    COLD DELETE 2
    -----------------------------------------------------------------------------------------------------
    SQL Server parse and compile time:
    CPU time = 391 ms, elapsed time = 1272 ms.
    Table 'TBL_TEST_DELETE'. Scan count 1, logical reads 108950, physical reads 123, read-ahead reads 1248, lob logical reads 90000, lob physical reads 5620, lob read-ahead reads 0.
    SQL Server Execution Times:
    CPU time = 1250 ms, elapsed time = 7753 ms.

    Catch-all queries done right
    Gail Shaw's Performance Blog

  • keith.gilbert-1064677 (10/6/2016)


    The product I work on has a set of objects per business entity, and some customers have 200+ business entities. When a customer moves from one version of our product to another we have to migrate the data from one version of the tables to the new one. All views, procedures, and triggers that have changed (this time they all have changes) also have to be reloaded.

    Wouldn't it be a heck of a lot tidier if you used one database per version?

    โ€œWrite the query the simplest way. If through testing it becomes clear that the performance is inadequate, consider alternative query forms.โ€ - Gail Shaw

    For fast, accurate and documented assistance in answering your questions, please read this article.
    Understanding and using APPLY, (I) and (II) Paul White
    Hidden RBAR: Triangular Joins / The "Numbers" or "Tally" Table: What it is and how it replaces a loop Jeff Moden

  • Creating a new database and then creating 200,000 objects and transferring the data would take longer than updating the existing database.

    Does anyone know whether you can force the system tables to always be in memory / cache?

  • Has warming the cache improved the deployment time in your test environment?

    I would only look for a more permanent solution once I had proven that caching was the problem.

    Catch-all queries done right
    Gail Shaw's Performance Blog

  • keith.gilbert-1064677 (11/4/2016)


    Creating a new database and then creating 200,000 objects and transferring the data would take longer than updating the existing database.

    Does anyone know whether you can force the system tables to always be in memory / cache?

    If you were creating the database from scratch, I would agree. However, you could create a "Restore From" database image (potentially one per version) that already contains everything in the previous version, and restoring that would be much faster. Then apply the change scripts you already have to run against the current version and voila! You have a new database version that you can move your clients to. If you are running a service, it is very easy to point your customers to the proper database via code, authentication, XML, WCF, etc.
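
    Something along these lines is what I mean; every database name, logical file name, and path below is invented purely for illustration:

    -- Hypothetical sketch: restore a pre-built "version N" base image for a customer,
    -- then run the usual change scripts against it rather than creating 200,000 objects from scratch.
    RESTORE DATABASE CustomerX_v5
    FROM DISK = N'D:\BaseImages\Product_v5_Base.bak'
    WITH MOVE N'Product_Data' TO N'D:\Data\CustomerX_v5.mdf',
         MOVE N'Product_Log' TO N'L:\Log\CustomerX_v5.ldf',
         STATS = 10;
    -- ...apply the customer-specific change scripts here, then repoint the
    -- application/service connection at CustomerX_v5.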

    I would imagine the amount of testing and debugging of errors takes up a huge amount of time for a database that size. Using the restore method would save a HUGE amount of time and hassle. Also, does your company ever remove "older" versions? That would also let you reduce the size of your database and make all of this easier. Another thought is to use multiple servers and maybe load-balancing software like F5 to direct customers to the server(s) that are upgraded... just a thought. I wish you much luck!
