Help with tuning a troublesome index

  • To set it up, here's some DDL

    table

    CREATE TABLE [performance].[ramStat]
    (
        [createDate] [datetime] NULL,
        [objectName] [varchar](150) NULL,
        [counterName] [varchar](150) NULL,
        [counterValue] [int] NULL,
        [counterValueMb] [decimal](14, 2) NULL,
        [counterDescription] [varchar](255) NULL
    )
    ON [PRIMARY]

    index

    CREATE CLUSTERED INDEX [ci_ramStat] ON [performance].[ramStat]
    (
        [createDate] DESC,
        [counterName] ASC
    )
    WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, SORT_IN_TEMPDB = OFF, IGNORE_DUP_KEY = OFF,
          DROP_EXISTING = OFF, ONLINE = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON, FILLFACTOR = 75)
    ON [PRIMARY]

    sample data

    INSERT INTO performance.ramStat
        (createDate, objectName, counterName, counterValue, counterValueMb, counterDescription)
    VALUES
        ('2011-02-28 09:38:00.037','MSSQL$INTERNAL2:Buffer Manager','Database pages','155196','1212.47','Number of pages in the buffer pool with database content.'),
        ('2011-02-28 09:38:00.037','MSSQL$INTERNAL2:Buffer Manager','Free pages','3653','28.54','Total number of pages on all free lists.'),
        ('2011-02-28 09:38:00.037','MSSQL$INTERNAL2:Memory Manager','Target Server Memory','3180888','3106.00','Total amount of dynamic memory the server can consume. (Virtual memory reserved by the OS for SQL Server.)'),
        ('2011-02-28 09:38:00.037','MSSQL$INTERNAL2:Buffer Manager','Total pages','387045','3023.79','Number of pages in the buffer pool (includes database, free, and stolen pages).'),
        ('2011-02-28 09:38:00.037','MSSQL$INTERNAL2:Memory Manager','Total Server Memory','3096360','3023.00','The committed memory from the buffer pool. (Not the total amount of memory in use by SQL Server, but the amount committed to SQL Server.)'),
        ('2011-02-28 09:37:00.033','MSSQL$INTERNAL2:Buffer Manager','Database pages','155179','1212.34','Number of pages in the buffer pool with database content.'),
        ('2011-02-28 09:37:00.033','MSSQL$INTERNAL2:Buffer Manager','Free pages','3903','30.49','Total number of pages on all free lists.'),
        ('2011-02-28 09:37:00.033','MSSQL$INTERNAL2:Memory Manager','Target Server Memory','3096360','3023.00','Total amount of dynamic memory the server can consume. (Virtual memory reserved by the OS for SQL Server.)'),
        ('2011-02-28 09:37:00.033','MSSQL$INTERNAL2:Buffer Manager','Total pages','387045','3023.79','Number of pages in the buffer pool (includes database, free, and stolen pages).'),
        ('2011-02-28 09:37:00.033','MSSQL$INTERNAL2:Memory Manager','Total Server Memory','3096360','3023.00','The committed memory from the buffer pool. (Not the total amount of memory in use by SQL Server, but the amount committed to SQL Server.)')

    As you can see in the data, there are 5 counters that are queried every minute. At this point, there are about 700k records in the table.

    The main query that runs against the table selects data filtered by createDate then counterName.
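
    For reference, the main query is shaped roughly like this (a sketch only; the filter values below are hypothetical, since only the filter columns are described above):

    -- Hypothetical example of the main query: filter on createDate, then counterName.
    -- The date range and counter name are assumptions for illustration.
    SELECT createDate, counterName, counterValue, counterValueMb
    FROM performance.ramStat
    WHERE createDate >= '2011-02-28 00:00:00'
      AND createDate <  '2011-03-01 00:00:00'
      AND counterName = 'Total Server Memory';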

    The index is scheduled to reorganize at 5% fragmentation and rebuild at 15% on a nightly basis. It's usually around 5% fragmented by the time the reorg runs. The problem is that when the reorg happens, it fills my transaction log (599 MB).

    Does anyone see any improvements that would reduce log growth when the reorg runs?
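
    For context, the nightly maintenance decision looks roughly like this (a sketch only, using the 5%/15% thresholds stated above; the actual maintenance job is not shown here):

    -- Sketch of the nightly check: reorganize at >= 5% fragmentation, rebuild at >= 15%.
    -- The thresholds come from the description above; everything else is illustrative.
    DECLARE @frag float;

    SELECT @frag = avg_fragmentation_in_percent
    FROM sys.dm_db_index_physical_stats(DB_ID(), OBJECT_ID('performance.ramStat'), NULL, NULL, 'LIMITED')
    WHERE index_id = 1;  -- the clustered index ci_ramStat

    IF @frag >= 15
        ALTER INDEX ci_ramStat ON performance.ramStat REBUILD;
    ELSE IF @frag >= 5
        ALTER INDEX ci_ramStat ON performance.ramStat REORGANIZE;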

    Some additional information: this is the approximate physical size of the table.

    reserved space: 314,888 KB
    data space: 306,528 KB
    index size: 8,304 KB
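
    (Figures like these can be pulled with sp_spaceused, which is presumably where they came from:)

    -- Reports reserved, data, and index sizes for the table.
    EXEC sp_spaceused 'performance.ramStat';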


  • This data is not normalized, and it will end up being very FAT, containing a lot of repetitive, bloated data that will hurt your performance once you have 700K+ rows in it. That means extra I/O and CPU time for your queries. You should also avoid NULLable columns! (Unless they can genuinely contain a NULL; often you can just use a 0 value to indicate non-valid data.)

    If you want performance with that many rows, your table structures should look like this (a sample query against this layout is sketched after the definitions):

    CREATE TABLE [performance].[ramStat]
    (
        [createDate] [datetime] NOT NULL,
        [objectID] smallint NOT NULL,     -- or int, if you expect many objects
        [counterID] smallint NOT NULL,    -- or int, if you expect many counters
        [counterValue] int NOT NULL,
        [counterValueMb] [decimal](14, 2) NULL
    )

    -- clustered index on createDate only
    CREATE CLUSTERED INDEX [ci_ramStat] ON [performance].[ramStat] ([createDate])

    CREATE TABLE [performance].[objects]
    (
        [objectID] smallint NOT NULL PRIMARY KEY,
        [objectName] [varchar](150) NULL
    )

    CREATE TABLE [performance].[counters]
    (
        [counterID] smallint NOT NULL PRIMARY KEY,  -- or int
        [counterName] [varchar](150) NULL,
        [counterDescription] [varchar](255) NULL
    )
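
    With that layout, the date/counter query mentioned earlier would join back to the lookup table, roughly like this (a sketch; the filter values are hypothetical):

    -- Sketch of the main query against the normalized layout (hypothetical filter values).
    SELECT r.createDate, c.counterName, r.counterValue, r.counterValueMb
    FROM performance.ramStat AS r
    JOIN performance.counters AS c ON c.counterID = r.counterID
    WHERE r.createDate >= '2011-02-28 00:00:00'
      AND r.createDate <  '2011-03-01 00:00:00'
      AND c.counterName = 'Total Server Memory';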


  • You are right. It is not normalized as far as the objectName, counterName, and description are concerned. This data is populated by pulling from a DMV, which is why I didn't bother normalizing the counter and object information. Explicitly making the fields NOT NULL is also a good idea.

    Good call
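
    For reference, here is a minimal sketch of the kind of DMV pull that could populate this table, assuming sys.dm_os_performance_counters is the source (the actual collection query is not shown in this thread):

    -- Assumed collection sketch: buffer manager page counters from sys.dm_os_performance_counters.
    -- Page counters are converted to MB at 8 KB per page; the Memory Manager counters (reported
    -- in KB) would divide by 1024.0 instead. The description column is left blank here because
    -- the DMV does not expose one.
    INSERT INTO performance.ramStat
        (createDate, objectName, counterName, counterValue, counterValueMb, counterDescription)
    SELECT
        GETDATE(),
        RTRIM(object_name),
        RTRIM(counter_name),
        cntr_value,
        cntr_value * 8 / 1024.0,
        ''
    FROM sys.dm_os_performance_counters
    WHERE object_name LIKE '%Buffer Manager%'
      AND counter_name IN ('Database pages', 'Free pages', 'Total pages');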


  • All I can say is that if you don't want to normalize it, you will have to live with the performance issues.

