December 17, 2010 at 2:08 pm
Hi All
how does the size of the table affect INSERT performance?
Let's say I have 2 identical tables with the same indexes. Table A has 3 million records and table B has 60.
Will the performance be better on table A?
Thanks in advance.
December 17, 2010 at 3:01 pm
It depends... Sorry, but it does.
Initially inserts into the 60-row table will probably be faster, but I can think of several factors that matter:
1. Table width, how many columns, how wide are the columns.
2. How many rows fit on a page.
3. Straight inserts or a mix of inserts and updates (possibly causing page splits)
4. Number of indexes.
5. How many rows inserted/updated.
6. Whether the insert also queries the table, e.g. a NOT EXISTS check.
7. Last time the tables were reindexed/stats updated.
At some point the 60-row table will perform worse, and then MUCH worse, because its indexes will have stale statistics as the table grows. I would say WELL before 1M rows.
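To illustrate factor 6 and the stale-statistics point: a guarded insert reads the target table on every run, and the optimizer's plan for that read depends on statistics that go out of date as the table grows. A sketch (table and column names here are hypothetical, for illustration only):

```sql
-- Typical guarded insert that reads the target table first
INSERT INTO dbo.TableB (Id, Val)
SELECT s.Id, s.Val
FROM dbo.Staging AS s
WHERE NOT EXISTS (SELECT 1 FROM dbo.TableB AS b WHERE b.Id = s.Id);

-- Refresh statistics so the optimizer sees the table's real size
UPDATE STATISTICS dbo.TableB WITH FULLSCAN;
```

If the table started tiny, the optimizer may keep choosing a plan suited to a tiny table long after it has grown, which is why the "60-row" table can suddenly perform much worse.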
So as a simple question, no, we can't tell you for sure.. It depends.
CEWII
December 17, 2010 at 3:14 pm
It all depends on your INSERT statement:
how many rows you are inserting, and
whether the tables have default-value columns.
For better results, run the insert and look at the actual execution plan, or write the query and check the estimated execution plan.
Elaborate your question to get more help!
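For example, you can capture I/O and timing statistics around a test insert and compare the numbers before and after adding an index (table and column names here are made up for illustration):

```sql
SET STATISTICS IO ON;
SET STATISTICS TIME ON;

-- Hypothetical test insert
INSERT INTO dbo.TableA (Col1, Col2)
VALUES (1, 'test');

SET STATISTICS IO OFF;
SET STATISTICS TIME OFF;

-- In SSMS, Ctrl+L displays the estimated execution plan,
-- and Ctrl+M includes the actual plan with the next execution.
```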
Regards,
Pavan Srirangam.
December 17, 2010 at 8:50 pm
pavan_srirangam (12/17/2010)
Elaborate your question more to get help!
I have this huge table. The SELECT performance is very bad, so I was thinking of adding more indexes, which of course will affect INSERT performance. So I was wondering whether dividing that huge table into smaller partitions would give me some room to add additional indexes.
For some reason I thought it would be cheaper for the system to maintain small indexes.
December 17, 2010 at 8:56 pm
Elliott Whitlow (12/17/2010)
So as a simple question, no, we can't tell you for sure.. It depends.
CEWII
Thanks.
I was looking for a rule of thumb, but clearly it's not that simple.
How would it change the answer if I used bulk inserts?
December 17, 2010 at 9:39 pm
Again, no rule of thumb here. However, switching the recovery model to BULK_LOGGED would help you a lot, since bulk operations can then be minimally logged.
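A minimal sketch of that approach (the database name is hypothetical; note that point-in-time restore is limited across minimally logged operations, so take log backups before and after the switch):

```sql
-- Switch to BULK_LOGGED before the large load...
ALTER DATABASE MyDatabase SET RECOVERY BULK_LOGGED;

-- ... run the BULK INSERT / bcp / SELECT INTO load here ...

-- ...then switch back and take a log backup.
ALTER DATABASE MyDatabase SET RECOVERY FULL;
```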
December 17, 2010 at 10:04 pm
I have this huge table. The SELECT performance is very bad, so I was thinking of adding more indexes, which of course will affect INSERT performance. So I was wondering whether dividing that huge table into smaller partitions would give me some room to add additional indexes.
For some reason I thought it would be cheaper for the system to maintain small indexes.
Whenever you insert a row, the system has to insert it into each index in the proper logical order. If there is no room on the particular page where the row needs to go, the system has to do a page split. This is true whether the table has 100 rows or 10,000,000.
Partitioning large tables will almost certainly help SELECT performance, as will creating proper covering indexes. However, partitioning a table doesn't create additional room. Reserving space in your indexes by setting the FILLFACTOR can make index inserts less likely to cause page splits... for a while. Then you have to rebuild your indexes to reserve space again.
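For example, rebuilding an index with free space reserved on each page looks like this (index and table names are hypothetical):

```sql
-- Rebuild the index leaving ~20% free space per page,
-- so subsequent inserts are less likely to cause page splits.
ALTER INDEX IX_TableA_Col1 ON dbo.TableA
REBUILD WITH (FILLFACTOR = 80);
```

The reserved space fills up as rows come in, which is why the rebuild has to be repeated periodically.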
__________________________________________________
Against stupidity the gods themselves contend in vain. -- Friedrich Schiller
Stop, children, what's that sound? Everybody look what's going down. -- Stephen Stills
December 18, 2010 at 12:29 am
As a rule of thumb, follow these guidelines:
Low Write Tables (100-1 read to write ratio): 100% fillfactor
High Write Tables (where writes exceed reads): 50%-70% fillfactor
Everything In-Between: 80%-90% fillfactor.
You may have to experiment to find the optimum fillfactor for your particular application. Don't assume that a low fillfactor is always better than a high one. While page splits are reduced with a low fillfactor, it also increases the number of pages SQL Server has to read during queries, which reduces performance. And not only is I/O overhead increased with too low a fillfactor, it also affects your buffer cache: as data pages are read from disk into the buffer, the entire page (including the empty space) is moved in. So the lower the fillfactor, the more pages have to be moved into SQL Server's buffer cache, which leaves less room for other important data pages to reside at the same time, and that can reduce performance.
Also remember that if you don't specify a fillfactor, the default fillfactor is 0, which SQL Server treats the same as 100 (pages filled completely).
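Applying the guidelines above, index creation might look like this (index, table, and column names are hypothetical):

```sql
-- Read-mostly lookup table: pack pages fully
CREATE NONCLUSTERED INDEX IX_Lookup_Code
ON dbo.Lookup (Code)
WITH (FILLFACTOR = 100);

-- Write-heavy table: leave ~30% free space per page
CREATE NONCLUSTERED INDEX IX_Orders_CustomerId
ON dbo.Orders (CustomerId)
WITH (FILLFACTOR = 70);
```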
--------------------------------------------------------------------------------------------------
I am just an another naive wannabe DBA trying to learn SQL Server