June 19, 2003 at 2:26 am
I know there are performance issues if a table has too many rows, but I have not heard of any performance impacts if a table has too many columns, especially large varchar and nvarchar columns. Does anyone know of any performance issues when a table has far more columns than the rest of the tables in the database?
Patrick Quinn
June 19, 2003 at 2:48 am
The maximum row size in SQL Server is 8,060 bytes, so the cumulative size of all the columns in a row cannot exceed 8,060 bytes. Within that limit, I don't see significant performance degradation just from having a lot of columns.
But still, if your table has too many columns you should look at its design. Try normalizing it. It's always better to keep related data in separate tables instead of trying to club everything into one, which can lead to data inconsistency, data redundancy, and other problems.
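As a rough illustration of that normalization advice, here is a minimal sketch of splitting repeating columns out of one wide table into a related table (all table and column names here are made up for the example):

```sql
-- Instead of one wide table like:
--   Customer(CustomerID, Name, Phone1, Phone2, Phone3, ...)
-- keep the repeating phone columns in their own table:
CREATE TABLE Customer (
    CustomerID int PRIMARY KEY,
    Name       nvarchar(100) NOT NULL
);

CREATE TABLE CustomerPhone (
    CustomerID int NOT NULL REFERENCES Customer(CustomerID),
    PhoneNo    varchar(20)  NOT NULL,
    PRIMARY KEY (CustomerID, PhoneNo)
);
```

This removes the artificial limit of three phone numbers and avoids storing NULLs in unused PhoneN columns.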
HTH
June 19, 2003 at 3:41 am
I don't know how much overhead is associated with SQL handling a table with 3000 columns instead of 20, but if your rows are mostly full (close to 8K), the disk I/O required is going to increase. SQL reads pages and extents (8 pages) at a time, so if you can get 500 rows on a page that makes it a lot faster than if there is one row on a page. The other part is that regardless of the number of columns, bring back as few as you can. Just pushing a lot of columns x rows across the network will slow things down.
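To make the "bring back as few columns as you can" point concrete, here is a hedged sketch (table and column names are hypothetical):

```sql
-- Drags every column across the network, whether the client needs it or not:
-- SELECT * FROM Orders;

-- Prefer an explicit list of only the columns the client actually uses:
SELECT OrderID, CustomerID, OrderDate
FROM Orders;
```

With an explicit column list, wide varchar/nvarchar columns the client never displays are not read or transmitted at all, and a covering index on the listed columns becomes possible.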
Andy
June 24, 2003 at 4:18 am
As far as performance is concerned, it will degrade as columns are added...
If one table has 200 columns and the data in each row comes to 4K, then since one page can hold a maximum of 8K of data, each page will hold 2 records, and to read 1,000 records SQL Server has to read at least 1000/2 = 500 pages. If instead only twenty columns are there and 20 records fit on a page, then 1000/20 = 50 page reads are required, so a scan is much faster. For updates, the more columns, the bigger the problem: if an update adds about 1K of data to each row, a lot of page splits will happen because there is no spare space left in any of the pages.
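The page arithmetic above, plus one way to blunt the page-split problem, can be sketched like this (the 8,060-byte figure is the maximum row size per page; the index and table names are hypothetical):

```sql
-- Rows per 8 KB page and page reads needed to scan 1,000 rows:
SELECT 8060 / 4000 AS RowsPerPage_Wide,          -- 2 rows of ~4 KB fit
       8060 / 400  AS RowsPerPage_Narrow,        -- 20 rows of ~400 bytes fit
       1000 / (8060 / 4000) AS PageReads_Wide,   -- 500 pages for 1,000 rows
       1000 / (8060 / 400)  AS PageReads_Narrow; -- 50 pages for 1,000 rows

-- If updates grow rows, leaving free space per page with FILLFACTOR
-- reduces page splits (at the cost of a larger index):
CREATE INDEX IX_Orders_CustomerID
ON Orders (CustomerID)
WITH (FILLFACTOR = 80);  -- leave ~20% of each leaf page empty
```

FILLFACTOR only applies when the index is created or rebuilt, so it has to be combined with periodic index maintenance to keep that free space available.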
Cheers..
Prakash
Prakash Heda
Lead DBA Team - www.sqlfeatures.com
Video sessions on Performance Tuning and SQL 2012 HA
June 24, 2003 at 4:32 am
Hi Patrick,
I've never experienced performance issues with 'large' tables as such, but once your row design goes beyond about 4KB, only one row can be stored per page. The rest of each page is wasted disk space.
When SQL Server needs to update data in a row, it loads the entire page containing that row into memory (along with any other rows stored on the same page).
Normally SQL Server can load one page and update several rows on it in one go, but if each page can only hold a single row because of its size, SQL Server must load a separate page into memory for every row it updates. Memory fills up faster, SQL Server must resort to reading more data from disk, and performance will surely suffer.
Cheers,
Frank
--
Frank Kalis
Microsoft SQL Server MVP
Webmaster: http://www.insidesql.org/blogs
My blog: http://www.insidesql.org/blogs/frankkalis/