Help with a very large table (over 500 million rows)

  • I am having trouble working with a big table in my warehouse, which has around 550 million rows. All of this data is required and none of it can be moved to historical tables.

    My problem is that I need to support some reports based on this table. Because these reports are not meant for end users but for certain production users, we have a separate database for them. We dump the data from the original huge table into this separate database according to the business requirements (joining with other tables, summing, and grouping).

    The issue is that this dumping process takes more than 25 hours. Is there any way I can change the architecture, or add something to the current one, to speed the dumping process up?

    Please just point me in the right direction.

  • You can use replication.
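
    Something along these lines would publish the big table with transactional replication, so the reporting server stays continuously in sync instead of being rebuilt in bulk. This is only a rough sketch: the database, publication, and table names are placeholders, and setting up the distributor, snapshot agent, and the subscription on the reporting server is not shown.

    -- Mark the warehouse database for publishing (names are hypothetical)
    EXEC sp_replicationdboption
        @dbname  = N'Warehouse',
        @optname = N'publish',
        @value   = N'true';

    -- Create a transactional publication
    EXEC sp_addpublication
        @publication = N'BigTablePub',
        @repl_freq   = N'continuous',   -- transactional rather than snapshot
        @status      = N'active';

    -- Add the 550-million-row table as an article in the publication
    EXEC sp_addarticle
        @publication   = N'BigTablePub',
        @article       = N'BigTable',
        @source_owner  = N'dbo',
        @source_object = N'BigTable';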


    N 56°04'39.16"
    E 12°55'05.25"

  • Are you doing a complete refresh of the data every time? If so, you might want to consider getting an incremental process in place.
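
    If the big table has something like a modified date or an ever-increasing key, a watermark-driven load is one way to make it incremental: remember how far you got last time and only pull the rows that changed since then. The table, column, and database names below are assumptions, not your actual schema.

    DECLARE @LastLoaded datetime;

    -- Watermark saved by the previous run (hypothetical control table)
    SELECT @LastLoaded = LastLoadedDate
    FROM   ReportDB.dbo.LoadWatermark;

    -- Copy only the rows added or changed since the last run into a staging table
    INSERT INTO ReportDB.dbo.BigTable_Delta (ID, CustomerID, OrderDate, Amount, ModifiedDate)
    SELECT  ID, CustomerID, OrderDate, Amount, ModifiedDate
    FROM    Warehouse.dbo.BigTable
    WHERE   ModifiedDate > @LastLoaded;

    -- The existing join/sum/group logic then runs against the small delta
    -- instead of all 550 million rows.

    -- Move the watermark forward for the next run
    UPDATE ReportDB.dbo.LoadWatermark
    SET    LastLoadedDate = GETDATE();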

    Structurally, another way to speed this up would be to use partitioning so that you're getting more disk drives and controllers involved in the action. This will speed up the loads and the reads.
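
    In SQL Server that means a partition function and scheme, with the partitions spread over separate filegroups on different drives. The boundary values, filegroups, and columns below are only placeholders to show the shape of it.

    -- Partition the big table by year on a date column (example boundaries)
    CREATE PARTITION FUNCTION pf_BigTableByYear (datetime)
    AS RANGE RIGHT FOR VALUES ('2007-01-01', '2008-01-01', '2009-01-01');

    -- Map each partition to its own filegroup so more drives share the I/O
    CREATE PARTITION SCHEME ps_BigTableByYear
    AS PARTITION pf_BigTableByYear
    TO (FG_Archive, FG_2007, FG_2008, FG_2009);

    -- Create (or rebuild) the table on the partition scheme
    CREATE TABLE dbo.BigTable
    (
        ID        bigint   NOT NULL,
        OrderDate datetime NOT NULL,
        Amount    money    NOT NULL
        -- ... remaining columns ...
    )
    ON ps_BigTableByYear (OrderDate);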

    "The credit belongs to the man who is actually in the arena, whose face is marred by dust and sweat and blood"
    - Theodore Roosevelt

    Author of:
    SQL Server Execution Plans
    SQL Server Query Performance Tuning
