Headache on a date range query

  • Hi,

    I have one table (partitioned by year) with 57 M rows and 30 GB of data + 7 GB of index.

    The application executes two different queries.

    The table has multiple columns, but three matter here: idaccount, dateExecute, dateEffect.

    The first query filters on multiple idaccount values and a range on dateExecute.

    The second query filters on multiple idaccount values and a range on dateEffect.

    I have created a clustered index on (idaccount, dateExecute, dateEffect).

    The first query is, logically, faster than the second.

    Since there is no way to derive dateEffect from dateExecute (mathematically), how can I improve the second query without losing good performance on the first?

    I have already tried creating a nonclustered index on (idaccount, dateEffect), and on dateEffect alone.

    Thank you for your assistance.
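    For reference, a sketch of the kind of index being discussed; the table name (accounttable) and partition scheme name (schemeYear) are placeholders, since the post does not give the real ones:

    ```sql
    -- Hypothetical names: accounttable and schemeYear are not from the post.
    -- Partitioning the index on dateEffect (rather than on the table's
    -- partition column) lets SQL Server eliminate partitions for the
    -- dateEffect range query as well; dateEffect must be an index key
    -- or included column for this to be allowed.
    create nonclustered index ix_account_dateeffect
        on accounttable (idaccount, dateEffect)
        on schemeYear (dateEffect);
    ```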

  • It seems that if you create a nonclustered index on (idaccount, dateEffect), partitioned using your partitioning scheme, you can gain some benefit.

    I made a simple test, and the presence of such an index improved the performance of the test loop by about 50%. It is also worth looking at the other columns in your queries - is the length of the data in the rows comparable in both cases?

    use master
    go

    create database testind
    go

    use testind
    go

    create partition function fnpart(datetime) as range left
    for values ('20000101', '20010101', '20020101', '20030101', '20040101',
                '20050101', '20060101', '20070101', '20080101', '20090101')

    create partition scheme schpart as partition fnpart all to ([primary])

    create table testtable(
        id int identity(1, 1),
        execdate datetime not null,
        refdate datetime not null,
        chardata char(400)
    ) on schpart(execdate)
    go

    set nocount on
    declare @dt datetime
    set @dt = '20000101'
    declare @i int
    set @i = 0
    while @i < 10 * 366 * 24
    begin
        insert testtable (execdate, refdate, chardata)
            values (@dt, @dt, convert(varchar, @i) + ': ' + convert(varchar(20), @dt, 120))
        set @dt = dateadd(hour, 1, @dt)
        set @i = @i + 1
        if @i % 1000 = 0
            print convert(varchar(20), @dt, 120)
    end
    go

    create clustered index ix1 on testtable(id, execdate, refdate)
        on schpart(execdate)

    --this is for testing - if this index is not created,
    --the following loop takes 30 ms on average, otherwise 18 ms
    --drop index ix2 on testtable
    create nonclustered index ix2 on testtable(id, refdate) on schpart(refdate)
    go

    create table exectimes(exectimems int not null)
    go

    declare @begin datetime
    declare @msecs int
    declare @id int, @chardata char(400)
    declare @runcounter int

    delete exectimes
    set @runcounter = 0
    while @runcounter < 100
    begin
        set @begin = current_timestamp
        select @id = id, @chardata = chardata from testtable
            where execdate >= '20031230' and execdate < '20040102'
        select @id = id, @chardata = chardata from testtable
            where refdate >= '20031230' and refdate < '20040102'
        set @msecs = datediff(millisecond, @begin, current_timestamp)
        insert exectimes(exectimems) values (@msecs)
        set @runcounter = @runcounter + 1
    end

    select @msecs = avg(exectimems) from exectimes
    print 'Queries took ' + convert(varchar, @msecs) + ' ms'
    go

    use master
    go

    drop database testind
    go

    ...and your only reply is slàinte mhath

  • You left out a very important piece of information here: the average number of rows per idaccount. If it is large, then SQL Server may not use a nonclustered index. If it is small, then the clustered index you have should suffice. Also note that you are carrying 8 bytes per index row for each of the two datetime fields, in addition to the idaccount length. I sure hope that idaccount isn't a large varchar!

    Best,
    Kevin G. Boles
    SQL Server Consultant
    SQL MVP 2007-2012
    TheSQLGuru on googles mail service
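
    One way to check the distribution mentioned above (a sketch; the table name accounttable is a placeholder, since the thread never names the real table):

    ```sql
    -- Hypothetical table name. Average and maximum rows per idaccount:
    select avg(cnt * 1.0) as avg_rows_per_account,
           max(cnt)       as max_rows_per_account
    from (select count(*) as cnt
          from accounttable
          group by idaccount) t;
    ```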
