October 20, 2007 at 8:28 am
Heh... yeah... I forgot about that. Well said...
--Jeff Moden
Change is inevitable... Change for the better is not.
October 21, 2007 at 8:20 pm
Hi rudy
Can you please tell me how this affects performance?
--------Got clustered primary keys on big transactional tables?
binu john
October 22, 2007 at 1:40 am
Won't be any effect if the Clustered key is on an IDENTITY column.
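For anyone wondering what that looks like in practice, a minimal sketch (the table and column names are made up): with the clustered primary key on the IDENTITY column, every new row goes to the end of the index, so the inserts themselves don't cause page splits.

-- Hypothetical table: the clustered PK is an ever-increasing IDENTITY,
-- so every INSERT is appended to the right-most page of the B-tree.
CREATE TABLE dbo.Orders
(
    OrderID    INT IDENTITY(1,1) NOT NULL,
    CustomerID INT NOT NULL,
    OrderDate  DATETIME NOT NULL DEFAULT (GETDATE()),
    CONSTRAINT PK_Orders PRIMARY KEY CLUSTERED (OrderID)
);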
--Jeff Moden
Change is inevitable... Change for the better is not.
October 22, 2007 at 2:25 am
Jeff, can you please tell me: if I create a clustered index on a primary key (not an identity) column on a table that has a good amount of writes, how is it going to affect performance? (In comparison to a clustered index created on an identity column.)
Anyway, since on a clustered index the leaf level is the data itself, I believe it wouldn't be an issue... :cool:
binu john
October 22, 2007 at 5:29 am
If your clustered index is on a column that doesn't always increment (unlike an identity or date-inserted column), then inserts cause page splits. Page splits cause increased IOs and index fragmentation. Fragmentation causes slower reads, especially on scans and clustered index lookups, and higher IOs on reads.
(slower reads lead to anger, anger leads to the dark... sorry, wrong forum. :D)
Defragmenting a clustered index on a large table is a time consuming operation that you don't want to do too often.
Of course, it depends on the amount of inserts you get, the column you put the cluster on, etc, etc.
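If you want to see how bad it has got on your own tables, something like this shows the fragmentation of a clustered index on SQL 2005 and up (the table name is a placeholder):

-- Check fragmentation of the clustered index (index_id = 1) on one table.
-- 'dbo.YourBigTable' is a placeholder.
SELECT index_id,
       avg_fragmentation_in_percent,
       page_count
FROM   sys.dm_db_index_physical_stats
       (DB_ID(), OBJECT_ID('dbo.YourBigTable'), 1, NULL, 'LIMITED');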
Gail Shaw
Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability
October 22, 2007 at 6:40 am
Heh... "the footbone's connected to the... shinbone... the shinbone's connected to the... kneebone..."
--Jeff Moden
Change is inevitable... Change for the better is not.
October 22, 2007 at 11:34 am
GilaMonster (10/22/2007)
If your clustered index is on a column that doesn't always increment (unlike an identity or date-inserted column), then inserts cause page splits. Page splits cause increased IOs and index fragmentation. Fragmentation causes slower reads, especially on scans and clustered index lookups, and higher IOs on reads (slower reads lead to anger, anger leads to the dark... sorry, wrong forum. :D)
Defragmenting a clustered index on a large table is a time consuming operation that you don't want to do too often.
Of course, it depends on the amount of inserts you get, the column you put the cluster on, etc, etc.
...with fill factor @ 100%. You know - Fill Factor (the new reality show on ABC?), or is that Phil Factor (the "old" poster on SSC?)...hehe
With the right fill factor and appropriate, regular reorgs - non-identity clustered inserts can perform faster than identity inserts (since they'd hit more slices/data pages and then the writes would be "spread out").
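For example, something like this (the table/index names and the 80% figure are purely illustrative, not a recommendation):

-- Rebuild the clustered index leaving 20% free space per leaf page,
-- so out-of-order inserts have somewhere to go before a split is needed.
ALTER INDEX PK_Customers ON dbo.Customers
    REBUILD WITH (FILLFACTOR = 80);

-- Then reorganize on a regular schedule to tidy up light fragmentation.
ALTER INDEX PK_Customers ON dbo.Customers REORGANIZE;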
----------------------------------------------------------------------------------
Your lack of planning does not constitute an emergency on my part...unless you're my manager...or a director and above...or a really loud-spoken end-user..All right - what was my emergency again?
October 22, 2007 at 6:50 pm
That is it, Matt.
If we can maintain an index with the proper fill factor and reorgs, an identity or non-identity column wouldn't be a performance bottleneck... ;)
But I don't think I can live without a clustered index on a big table. :P
binu john
October 22, 2007 at 8:57 pm
Of course you need to have a clustered index - you just don't need the clustered index to be based on something sequential, like an identity field.
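In T-SQL terms, something along these lines (hypothetical names) - the primary key stays for uniqueness, it just isn't the clustering key:

-- Nonclustered PK on the identity; cluster on something non-sequential
-- that the queries actually range over (CustomerID here is just an example).
CREATE TABLE dbo.SalesOrders
(
    SalesOrderID INT IDENTITY(1,1) NOT NULL,
    CustomerID   INT NOT NULL,
    OrderDate    DATETIME NOT NULL,
    CONSTRAINT PK_SalesOrders PRIMARY KEY NONCLUSTERED (SalesOrderID)
);

CREATE CLUSTERED INDEX CIX_SalesOrders_CustomerID
    ON dbo.SalesOrders (CustomerID, SalesOrderID);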
----------------------------------------------------------------------------------
Your lack of planning does not constitute an emergency on my part...unless you're my manager...or a director and above...or a really loud-spoken end-user..All right - what was my emergency again?
October 22, 2007 at 11:39 pm
I know... I'm a sick man... I usually have a primary key and an alternate key... except for a relatively static state or country reference table, I don't like natural keys. More of a personal preference than anything concrete... Every time someone suggests that I use a natural key, or even a key fabricated from parts of natural columns in something like a Customer table, I just lose it 😛
--Jeff Moden
Change is inevitable... Change for the better is not.
October 23, 2007 at 12:31 am
Matt Miller (10/22/2007)
...with fill factor @ 100%.
No. With any fill factor. Of course, the lower the fill factor the more out-of-order inserts you can have without getting major fragmentation, but a fill factor of less than 100% does not mean you're safe.
I had one table with the cluster on a GUID (worst possible case). Fill factor set to 60%, rebuilds every week (because it took too long to do in the evenings), and the fragmentation would reach 90% within 2 days.
I moved the cluster to the date-inserted column and I haven't had to rebuild the table in over a month. Fragmentation last time I checked was 4%.
With the right fill factor and appropriate, regular reorgs - non-identity clustered inserts can perform faster than identity inserts (since they'd hit more slices/data pages and then the writes would be "spread out").
I recall someone doing a measurement of that. The speed of inserts only started to differ somewhere around 1000 inserts/sec. Insert hotspots haven't been a problem except on very high-usage systems since SQL 6.5.
If you have an insert rate that high, then by all means have a non-increasing cluster (or maybe no cluster at all, depending).
If you don't have inserts that high, then you'll be doing more IOs than necessary during an insert as you'll have to fetch quite a few more pages of the table than with an increasing cluster key. You'll also have more writes either by the lazy writer or the checkpoint process as more pages are dirty.
As always, test it out on your specific system and see what works best.
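For anyone curious, a rough before-and-after sketch of what I'm describing (made-up names; the real table was a lot wider):

-- Worst case: clustering on a random GUID scatters inserts all over the index.
CREATE TABLE dbo.EventLog_Guid
(
    EventID   UNIQUEIDENTIFIER NOT NULL DEFAULT (NEWID()),
    EventData VARCHAR(500) NOT NULL,
    CONSTRAINT PK_EventLog_Guid PRIMARY KEY CLUSTERED (EventID)
);

-- What the table was changed to: clustering on the insert date keeps new rows
-- at the end of the index, so fragmentation stays low.
CREATE TABLE dbo.EventLog_Date
(
    EventID      UNIQUEIDENTIFIER NOT NULL DEFAULT (NEWID()),
    DateInserted DATETIME NOT NULL DEFAULT (GETDATE()),
    EventData    VARCHAR(500) NOT NULL,
    CONSTRAINT PK_EventLog_Date PRIMARY KEY NONCLUSTERED (EventID)
);
CREATE CLUSTERED INDEX CIX_EventLog_Date ON dbo.EventLog_Date (DateInserted);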
Gail Shaw
Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability
December 10, 2007 at 2:46 pm
Hi all,
Just a wild guess - some time ago there was a virus that attacked SQL Servers with default admin passwords. The typical picture after a server got infected was very high resource consumption.
Regards
Sergej
December 17, 2008 at 9:02 pm
Is "SET CPU_UTILISATION LOW" the query to set the CPU utilisation low?
Can you help me?
When we are taking a backup of the SQL Server database, the CPU utilisation is high and the application performs slowly. What can I do?
Is there any query or option to set the CPU utilisation (high, low, medium)?
December 18, 2008 at 12:05 am
karthikaug18 (12/17/2008)
Is "SET CPU_UTILISATION LOW" the query to set the CPU utilisation low?
Um, no. SQL will take what CPU it needs.
When we are taking a backup of the SQL Server database, the CPU utilisation is high and the application performs slowly. What can I do?
Don't run full backups during business hours.
Optimise the app's queries so that they need less CPU and hence are affected less by the backups.
Is there any query or option to set the CPU utilisation (high, low, medium)?
Yes, on SQL 2008.
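Presumably that's the Resource Governor Gail means. A rough sketch of how it could be used to soft-cap CPU for a particular application (all pool, group, function and application names here are made up, and the cap only kicks in when there is CPU contention - it does not throttle backup I/O):

USE master;
GO
-- Pool and workload group that limit CPU for low-priority sessions.
CREATE RESOURCE POOL LowCpuPool WITH (MAX_CPU_PERCENT = 20);
CREATE WORKLOAD GROUP LowCpuGroup USING LowCpuPool;
GO
-- Classifier: route sessions from a hypothetical application into that group.
CREATE FUNCTION dbo.rgClassifier() RETURNS SYSNAME
WITH SCHEMABINDING
AS
BEGIN
    DECLARE @grp SYSNAME = N'default';
    IF APP_NAME() = N'ReportingApp'
        SET @grp = N'LowCpuGroup';
    RETURN @grp;
END;
GO
ALTER RESOURCE GOVERNOR WITH (CLASSIFIER_FUNCTION = dbo.rgClassifier);
ALTER RESOURCE GOVERNOR RECONFIGURE;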
Gail Shaw
Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability
December 18, 2008 at 1:07 am
Just some additional thought: if you have an index on an identity column and a massive amount of inserts, you might face contention during index maintenance, since all inserts are made at the right-hand side of the B-tree and hit the same page frequently. If you have a natural key that is not as sequential, you might achieve better results in this case (if the inserts hit different index pages).
Of course this is a special case, and in most cases this should not cause any concerns.
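A small sketch of the trade-off (hypothetical tables): the first design funnels every insert onto the last page of the clustered index, the second spreads concurrent inserts across many pages at the cost of fragmentation.

-- Hot spot: every insert lands on the right-most page of the clustered index.
CREATE TABLE dbo.AuditHot
(
    AuditID BIGINT IDENTITY(1,1) NOT NULL,
    Payload VARCHAR(200) NOT NULL,
    CONSTRAINT PK_AuditHot PRIMARY KEY CLUSTERED (AuditID)
);

-- Spread out: clustering on a non-sequential key (a session GUID here) sends
-- concurrent inserts to different pages, easing page contention but
-- increasing fragmentation.
CREATE TABLE dbo.AuditSpread
(
    AuditID     BIGINT IDENTITY(1,1) NOT NULL,
    SessionGUID UNIQUEIDENTIFIER NOT NULL,
    Payload     VARCHAR(200) NOT NULL,
    CONSTRAINT PK_AuditSpread PRIMARY KEY NONCLUSTERED (AuditID)
);
CREATE CLUSTERED INDEX CIX_AuditSpread ON dbo.AuditSpread (SessionGUID, AuditID);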
Best Regards,
Chris Büttner