Poor performance

  • Just my 2 cents.

    Don't remove the clustered index! Drop all non-clustered indexes and then truncate the table.

    Load the data sorted according to the clustered index definition. This way, your data does not have to be reorganized to match the clustered index as it is loaded.
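
    For illustration only, a minimal sketch of such a sorted load (the table and column names are hypothetical, and I'm assuming the rows come from a staging table):

    -- dbo.FactSales is assumed to have a clustered index on (SaleDate, SaleID)
    INSERT INTO dbo.FactSales WITH (TABLOCK)
           (SaleDate, SaleID, CustomerID, Amount)
    SELECT SaleDate, SaleID, CustomerID, Amount
    FROM   dbo.Staging_FactSales
    ORDER  BY SaleDate, SaleID   -- same order as the clustered index definition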

    Try to avoid updates after your data has been loaded. If an update occurs and the row no longer fits on the page, page splits occur, which is bad for performance at update time, i.e. during your refresh window.

    I hope you've been able to shift tempdb to a separate disk set (RAID 0 or RAID 10). You'll gain at index-create time, during sorts, and so on.

    Johan

    Learn to play, play to learn !

    Don't drive faster than your guardian angel can fly ...
    but keeping both feet on the ground won't get you anywhere :w00t:

    - How to post Performance Problems
    - How to post data/code to get the best help

    - How to prevent a sore throat after hours of presenting ppt

    press F1 for solution, press shift+F1 for urgent solution 😀

    Need a bit of Powershell? How about this

    Who am I ? Sometimes this is me but most of the time this is me

  • I have heard the phrase "nothing has changed, but it just got slow all at once" many many times. I have always found something that has changed in these cases.

    Slow progressive performance over extended times does happen, but not significant change all at once without a change.

    My point about something being pegged is that there has to be a bottleneck somewhere. Index rebuilds are PAINFUL. If that is happening and your box is idle, then you have blocking.

    I heard you say that this runs at night and the procedures are called sequentially. Any new triggers? Something as simple as an off-hours report accidentally scheduled during this time window? Set up Perfmon on the system; something has to be the bottleneck. 50-150 isn't great, but I have seen worse. Given that your system is RAID 5 (which is less than ideal, to say the least), I/O might be the problem, but I would have expected that to degrade over time. Is it possible that you lost a drive? Are you getting any hardware errors?

    The reason I asked about SET NOCOUNT ON: if this were a networked box, I would say a network card set to half duplex rather than full duplex is a common issue.

    Once we find the bottleneck we might be able to help; the hard part now is finding it.
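
    If you're on SQL Server 2005 or later, the wait statistics are a cheap first look before you even set up Perfmon. A rough sketch (the numbers are cumulative since the last restart, so treat it as a starting point, not a diagnosis):

    -- top waits on the instance; high PAGEIOLATCH_* points at the disk subsystem,
    -- LCK_M_* points at blocking
    SELECT TOP 10 wait_type, waiting_tasks_count, wait_time_ms
    FROM   sys.dm_os_wait_stats
    WHERE  wait_type NOT IN ('LAZYWRITER_SLEEP', 'SQLTRACE_BUFFER_FLUSH', 'SLEEP_TASK')
    ORDER  BY wait_time_ms DESC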

  • Thanks ALZDBA. I'm not sure what the benefit of sorting the data by the fields in the clustered index is, because if the clustered index is not dropped the data will be sorted automatically anyway. But I will give it a go.

    With regard to updates after the insert is done: I need to do them because the data comes from lots of other tables where it may or may not exist, which would otherwise mean a very large OUTER JOIN query.

    When you mentioned page splits, that could be the issue. Do you know how I can check for page splits? I can then monitor them... I think that may be what is causing the problem.

    Pete

    Peter Gadsby
    Business Intelligence Consultant
    www.hgconsult.co.uk

  • Thanks Bob,

    The run time has been slowly increasing, although recently it jumped from 10 hours to 13 hours (a year ago it ran in 6 hours).

    Triggers, yuck... I avoid those, as they convert a set-based query into a row-based one, which slows the system down dramatically, especially in a data warehouse environment.

    I ran the Dell hardware diagnostics as I thought that the hardware might be an issue, but it didn't report any problems. I have also trawled through the event log looking for errors. None found.

    I am the only person that maintains this server, so I know nothing has changed on this machine!

    You are right about RAID 5. I am going to try a machine configured with RAID 0+1 soon to see if that solves the issue.

    With regard to the network settings, I checked that it is 100 Mb full duplex, but as these processes run on the server and not across the network, I don't think this would be the problem.

    SET NOCOUNT ON is set in all the stored procedures.

    Do you know how to find out how much memory SQL Server is actually using? I know how to find how much is allocated (about 90% of the total memory). Then I can work out whether or not to increase it.

    Pete

    Peter Gadsby
    Business Intelligence Consultant
    www.hgconsult.co.uk

  • For what it's worth - according to several MS case studies - the best performance during a big ETL load like that is achieved when all indexes EXCEPT the clustered index are dropped. If you drop all indexes including the clustered, load the data (100GB) and THEN apply the clustered index, your 100GB gets rewritten, since it has to organize the data.

    Apply the clustered index FIRST, then load the data.

    Also - you must be KILLING your RAID 5 interface with that much random writing. If your processor levels are low - I suspect your disk subsystem is plowing. You need to find a way to avoid doing "random" inserts across all stripes as it's trying to organize the data.

    If at all possible - load the data in chunks that represent SEQUENTIAL chunks of the clustered index. It won't matter how small the chunks are - if they're "spread" across the clustered index - you're continuously org'ing and reorg'ing the internal data structure, on disks that penalize your random writes severely. At the very least - if you're forced to bring this stuff into some intermediate table, apply a non-clustered index to the intermediate table that encompasses what would be your clustered index on the "real" table, and do your inserts with a WITH (INDEX(blah)...) hint, so as to "walk" the clustered index values.
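
    Purely as an illustration, a rough sketch of that kind of chunked, key-ordered load (all object names are made up):

    -- staging table gets a non-clustered index matching the target's clustered key
    CREATE NONCLUSTERED INDEX IX_Staging_LoadKey ON dbo.Staging (LoadKey)

    DECLARE @lo int, @hi int, @chunk int
    SELECT @lo = MIN(LoadKey), @hi = MAX(LoadKey) FROM dbo.Staging
    SET @chunk = 100000

    WHILE @lo <= @hi
    BEGIN
        -- each pass walks one contiguous slice of the clustered key
        INSERT INTO dbo.Fact (LoadKey, Col1, Col2)
        SELECT LoadKey, Col1, Col2
        FROM   dbo.Staging WITH (INDEX(IX_Staging_LoadKey))
        WHERE  LoadKey >= @lo AND LoadKey < @lo + @chunk
        ORDER  BY LoadKey

        SET @lo = @lo + @chunk
    END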

    That many writes on RAID-5 is going to suck your performance, period. Is it really necessary to drop everything every day? Is that really the best way? You haven't described enough for anyone other than you to make that decision, but if you need to continue to do that - I'd invest in RAID 10, and lots of it.

    ----------------------------------------------------------------------------------
    Your lack of planning does not constitute an emergency on my part...unless you're my manager...or a director and above...or a really loud-spoken end-user..All right - what was my emergency again?

  • Peter Gadsby (10/29/2007)


    Thanks ALZDBA. I'm not sure what the benefit of sorting the data by the fields in the clustered index is, because if the clustered index is not dropped the data will be sorted automatically anyway. But I will give it a go.

    The data will indeed be sorted automatically - by using page splits to allocate new pages and stuff the rows into them.

    You can avoid these splits, if you present the data sorted according to the clustered index.

    Every little saving can have exponential effects with larger systems :hehe:

    When you mentioned page splits, that could be the issue. Do you know how I can check for page splits? I can then monitor them... I think that may be what is causing the problem.

    There is a "Page Splits/sec" Perfmon counter.
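
    If you're on SQL Server 2005 or later you can also read that counter from T-SQL rather than Perfmon. A small sketch (the value is cumulative since the instance started, so sample it twice and take the difference):

    SELECT object_name, counter_name, cntr_value
    FROM   sys.dm_os_performance_counters
    WHERE  counter_name = 'Page Splits/sec'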

    Johan

    Learn to play, play to learn !

    Don't drive faster than your guardian angel can fly ...
    but keeping both feet on the ground won't get you anywhere :w00t:

    - How to post Performance Problems
    - How to post data/code to get the best help

    - How to prevent a sore throat after hours of presenting ppt

    press F1 for solution, press shift+F1 for urgent solution 😀

    Need a bit of Powershell? How about this

    Who am I ? Sometimes this is me but most of the time this is me

  • Thanks Matt. Most of these tables don't have clustered indexes; just a few do.

    BTW - the 100 GB refers to the total size of the database, not a specific table. The largest tables, TRANSACTIONS and ACCOUNT_BALANCES, are around 12.5 million rows each, and those tables are incrementally loaded so they don't cause an issue.

    The ETL is loading about 50 odd tables...

    Peter Gadsby
    Business Intelligence Consultant
    www.hgconsult.co.uk

  • Peter Gadsby (10/29/2007)


    The run time has been slowly increasing, although recently it jumped from 10 hours to 13 hours (a year ago it ran in 6 hours).

    6-13 hours over a year is an understandable degradation. Given that, I would agree with all of the comments that we need to look at improving the process overall.

    As was mentioned we are in the dark here, because we have yet to determine what the bottleneck is. Have you profiled the process? I recall mention that the time was spent mostly on the index rebuilds (which is why we are concentrating on that aspect).

    The suggestion of dropping all indexes except the clustered one, truncating the table and then starting your load is a great first change to the process.
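
    A bare-bones sketch of that sequence for a single table (the names are made up; note the clustered index is never touched):

    -- 1. drop only the non-clustered indexes
    DROP INDEX Fact.IX_Fact_Customer
    DROP INDEX Fact.IX_Fact_Product

    -- 2. empty the table; the clustered index definition stays in place
    TRUNCATE TABLE dbo.Fact

    -- 3. reload, ideally presenting the rows in clustered-key order
    INSERT INTO dbo.Fact (LoadKey, CustomerID, ProductID, Amount)
    SELECT LoadKey, CustomerID, ProductID, Amount
    FROM   dbo.Staging
    ORDER  BY LoadKey

    -- 4. recreate the non-clustered indexes once the data is in
    CREATE NONCLUSTERED INDEX IX_Fact_Customer ON dbo.Fact (CustomerID)
    CREATE NONCLUSTERED INDEX IX_Fact_Product  ON dbo.Fact (ProductID)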

    Just to give you an idea of my system performance: I have a 50 GB system with 8 GB of RAM and 4 hyper-threaded (8 logical) processors, and rebuilding a 1 GB table took about 40 seconds, but it spiked my test system up to 40% CPU load.

    For a better idea of memory usage, look into the Sysinternals tools (now owned by Microsoft).
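
    On your memory question: if the instance is SQL Server 2005 or later, the performance-counter DMV gives a quick answer too (these are the standard Memory Manager counters):

    -- "Total" = what SQL Server is using now, "Target" = what it would like to use
    SELECT counter_name, cntr_value / 1024 AS memory_mb
    FROM   sys.dm_os_performance_counters
    WHERE  object_name LIKE '%Memory Manager%'
      AND  counter_name IN ('Total Server Memory (KB)', 'Target Server Memory (KB)')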

  • Matt Miller (10/29/2007)


    For what it's worth - according to several MS case studies - the best performance during a big ETL load like that is achieved when all indexes EXCEPT the clustered index are dropped.

    Matt, could you please provide a link or two? Just as a reference for anyone else looking.

  • Antares686 (10/29/2007)


    Matt Miller (10/29/2007)


    For what it's worth - according to several MS case studies - the best performance during a big ETL load like that is achieved when all indexes EXCEPT the clustered index are dropped.

    Matt, could you please provide a link or two? Just as a reference for anyone else looking.

    After looking at the multiple links I had - they all seem to be paraphrasing various portions (mix and match) of this ONE case study....

    http://www.microsoft.com/technet/prodtechnol/sql/2000/maintain/incbulkload.mspx#E3CAC

    All of the various test scenarios are described in there.

    ----------------------------------------------------------------------------------
    Your lack of planning does not constitute an emergency on my part...unless you're my manager...or a director and above...or a really loud-spoken end-user..All right - what was my emergency again?

  • Peter Gadsby (10/29/2007)


    Triggers, yuck... I avoid those, as they convert a set-based query into a row-based one, which slows the system down dramatically, especially in a data warehouse environment.

    This just shows how poor and limited your T-SQL knowledge is.

    Another confirmation to my point about bad design.

    _____________
    Code for TallyGenerator

  • Sergiy, I don't mind constructive criticism, but why bother posting if you can't help me with the issue I have posted?

    Do explain why you think a trigger is good when loading 1 million rows into a table.

    If a trigger is defined for insert, then every row that is inserted into the table will fire the trigger, which will drastically reduce performance.

    Peter Gadsby
    Business Intelligence Consultant
    www.hgconsult.co.uk

  • Peter Gadsby (10/30/2007)


    If a trigger is defined for insert, then every row that is inserted into the table will fire the trigger, which will drastically reduce performance.

    Sergiy's point is that triggers aren't executed for each row of an insert (unless you're doing inserts 1 row at a time) but are executed once for the insert statement, and affect however many rows were inserted.

    Only badly written triggers are row-based operations.
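
    A quick way to see that for yourself (throwaway table, hypothetical names): the trigger below fires once for the whole INSERT, no matter how many rows it carries.

    CREATE TABLE dbo.TriggerDemo (id int NOT NULL)
    GO
    CREATE TRIGGER trg_TriggerDemo_ins ON dbo.TriggerDemo
    FOR INSERT
    AS
        -- runs once per INSERT statement; "inserted" holds every affected row
        DECLARE @rows int
        SELECT @rows = COUNT(*) FROM inserted
        PRINT 'Trigger fired once for ' + CAST(@rows AS varchar(10)) + ' row(s)'
    GO
    INSERT INTO dbo.TriggerDemo (id)
    SELECT 1 UNION ALL SELECT 2 UNION ALL SELECT 3
    -- prints: Trigger fired once for 3 row(s)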

    Gail Shaw
    Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
    SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability

    We walk in the dark places no others will enter
    We stand on the bridge and no one may pass
  • Hi Gail,

    Thanks for the clarification... But looking at the simple example in BOL:

    USE pubs
    IF EXISTS (SELECT name FROM sysobjects
               WHERE name = 'employee_insupd' AND type = 'TR')
        DROP TRIGGER employee_insupd
    GO
    CREATE TRIGGER employee_insupd
    ON employee
    FOR INSERT, UPDATE
    AS
    /* Get the range of level for this job type from the jobs table. */
    DECLARE @min_lvl tinyint,
            @max_lvl tinyint,
            @emp_lvl tinyint,
            @job_id smallint
    SELECT @min_lvl = min_lvl,
           @max_lvl = max_lvl,
           @emp_lvl = i.job_lvl,
           @job_id = i.job_id
    FROM employee e
         INNER JOIN inserted i ON e.emp_id = i.emp_id
         JOIN jobs j ON j.job_id = i.job_id
    IF (@job_id = 1) AND (@emp_lvl <> 10)
    BEGIN
        RAISERROR ('Job id 1 expects the default level of 10.', 16, 1)
        ROLLBACK TRANSACTION
    END
    ELSE
    IF NOT (@emp_lvl BETWEEN @min_lvl AND @max_lvl)
    BEGIN
        RAISERROR ('The level for job_id:%d should be between %d and %d.',
                   16, 1, @job_id, @min_lvl, @max_lvl)
        ROLLBACK TRANSACTION
    END

    Although the trigger is not called for every row inserted, but once at the end of the insert, it will still generate another read against the table it is checking for each record in the inserted pseudo-table.

    Perhaps I didn't explain myself enough on the previous message.

    The above is why I would avoid using triggers in a data warehouse environment, because you are usually dealing with millions of rows instead of a few at a time.

    Pete

    Peter Gadsby
    Business Intelligence Consultant
    www.hgconsult.co.uk

  • The above shows that there are crappy programmers in MS as well.

    It's a good example of how NOT to write triggers.
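
    For contrast, one possible set-based version of the same check (just a sketch against the pubs tables): it validates every inserted or updated row in a single pass instead of pulling one row into scalar variables.

    CREATE TRIGGER employee_insupd
    ON employee
    FOR INSERT, UPDATE
    AS
        /* reject the whole batch if ANY row falls outside its job's level range */
        IF EXISTS (SELECT 1
                   FROM inserted i
                   JOIN jobs j ON j.job_id = i.job_id
                   WHERE (i.job_id = 1 AND i.job_lvl <> 10)
                      OR (i.job_id <> 1 AND i.job_lvl NOT BETWEEN j.min_lvl AND j.max_lvl))
        BEGIN
            RAISERROR ('One or more rows have a job level outside the allowed range for the job.', 16, 1)
            ROLLBACK TRANSACTION
        END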

    _____________
    Code for TallyGenerator
