August 17, 2005 at 9:50 am
Hello everybody,
First of all, excuse me for my poor English.
I need some helpful advice to resolve a question and a few doubts.
Let me explain the matter.
My company has a huge table in a critical SQL 2000 production database. We are talking about almost 13,000,000 rows.
Somebody thought that partitioning the table would be necessary, both for easier management and to improve the overall performance of queries against it.
On the other hand, developers and users tell me that they think the present performance of queries over the table is good, and they have no complaints on this point. They also say that they build their queries using the primary key in WHERE conditions, and the table has two more indexes for daily querying purposes.
The primary key is defined this way:
CREATE TABLE [dbo].[o_desnew] (
    [clav_id] [int] IDENTITY (1, 1) NOT NULL,
    -- ... other columns omitted ...
)
GO

ALTER TABLE [dbo].[o_desnew] ADD
    CONSTRAINT [pk_o_desnew] PRIMARY KEY NONCLUSTERED
    (
        [clav_id]
    ) WITH FILLFACTOR = 90
GO
I've tried, for example, to make a table partition (with another table) using an IDENTITY field, but it seems that is not possible, or at least I couldn't manage to do it.
Another choice is to partition the table on another field... perhaps a datetime? The table has a NOT NULL datetime field that could be selected for partitioning.
I think that if we partition, we will have to give up the IDENTITY field anyway (and perhaps the current good performance), so we would have to change all the SQL code that queries that table... oops, developers won't be happy with this. I don't mind doing so if we finally get some real advantage from the partitioning, but I want to be sure we are doing the right thing before taking a single step in this direction.
Does anyone have any ideas or suggestions?
I can publish more info about the case if needed.
Thanks.
Thanks.
August 17, 2005 at 10:07 am
Here's a link from Books Online that discusses partitioning:
http://msdn.microsoft.com/library/default.asp?url=/library/en-us/optimsql/cm_fedserv_27eb.asp
I don't think that partitioning a table with 1.3 million rows is worth it.
If the table is going to continue to grow, eventually to tens or hundreds of millions of rows, then it would be a viable option.
Your question about the primary key depends entirely on the partitioning key, that is, on how you want to partition.
I have created a partition scheme based on time: all new records are inserted into the first table, then SQL jobs/DTS packages move the data down to the second and third tables based on how old the records are. The identity value was assigned on insert into the first table and kept as the row moved down.
But say you want to partition the data based on a value and insert directly into any of the partitions. Then you'll have to be creative about assigning an identity. If you want to keep an integer key, you either have to insert into the primary (first) table and then move the row to the correct one (a trigger is a good way to do this), or your primary key could be a GUID: you can insert into any table and the id will still be unique.
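As a rough sketch of the GUID route (all table and column names here are made up, not taken from the original post), each member table can default its key to NEWID(), so any partition accepts inserts without coordinating identity ranges:

```sql
-- Hypothetical member table: a uniqueidentifier key defaulted to
-- NEWID() is unique across all partitions with no coordination.
-- Note the NONCLUSTERED primary key; clustering on a random GUID
-- tends to fragment the table badly on insert.
CREATE TABLE [dbo].[o_desnew_part1] (
    [row_id] [uniqueidentifier] NOT NULL DEFAULT NEWID(),
    [fecha] [datetime] NOT NULL,
    CONSTRAINT [pk_o_desnew_part1] PRIMARY KEY NONCLUSTERED ([row_id])
)
GO
```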
In any case you should then create a partitioned view that covers all these tables. Just check out the link above.
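For illustration, a SQL 2000 local partitioned view over a datetime column might look like the sketch below. All table, view, and column names are hypothetical. For the view to be updatable, the partitioning column must carry the CHECK constraints, must be part of each member table's primary key, and the member tables cannot use IDENTITY:

```sql
-- Hypothetical member tables partitioned on a NOT NULL datetime
-- column [fecha], one table per year.
CREATE TABLE [dbo].[o_desnew_2004] (
    [clav_id] [int] NOT NULL,
    [fecha] [datetime] NOT NULL
        CHECK ([fecha] >= '20040101' AND [fecha] < '20050101'),
    CONSTRAINT [pk_o_desnew_2004] PRIMARY KEY ([clav_id], [fecha])
)
GO

CREATE TABLE [dbo].[o_desnew_2005] (
    [clav_id] [int] NOT NULL,
    [fecha] [datetime] NOT NULL
        CHECK ([fecha] >= '20050101' AND [fecha] < '20060101'),
    CONSTRAINT [pk_o_desnew_2005] PRIMARY KEY ([clav_id], [fecha])
)
GO

-- The view unions the members.  With the CHECK constraints in place,
-- the optimizer can skip member tables that cannot satisfy a WHERE
-- clause on [fecha] (partition elimination).
CREATE VIEW [dbo].[v_o_desnew]
AS
SELECT [clav_id], [fecha] FROM [dbo].[o_desnew_2004]
UNION ALL
SELECT [clav_id], [fecha] FROM [dbo].[o_desnew_2005]
GO
```

Queries then go against [dbo].[v_o_desnew] instead of the base table, which is the code change the original poster was worried about.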
Hope this helps
August 18, 2005 at 5:51 am
Hi,
I feel that partitioning into filegroups is a better way to improve performance. Initially, a new filegroup should be created for the production database, and the indexes should be separated onto that secondary filegroup. This will definitely increase the performance of update queries.
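A minimal sketch of that idea (the database name, file path, and index name below are assumptions, not taken from the thread):

```sql
-- Add a secondary filegroup and a data file to hold indexes.
ALTER DATABASE [MyProdDB] ADD FILEGROUP [IX_FG]
GO

ALTER DATABASE [MyProdDB]
ADD FILE (
    NAME = 'MyProdDB_ix1',
    FILENAME = 'D:\SQLData\MyProdDB_ix1.ndf',
    SIZE = 500MB
) TO FILEGROUP [IX_FG]
GO

-- Rebuild an existing nonclustered index onto the new filegroup, so
-- index I/O is separated from the table's data I/O.
CREATE INDEX [ix_o_desnew_fecha]
ON [dbo].[o_desnew] ([fecha])
WITH DROP_EXISTING
ON [IX_FG]
GO
```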
regards,
Vinod (DBA)
August 18, 2005 at 12:54 pm
Just a couple of things... First, 13 million rows (not 1.3 million) potentially warrants partitioning. Second, a GUID as a primary key will give you uniqueness; however, if the underlying index is clustered (as is the default!), insert performance will vary greatly, from fantastic to abysmal.
Regards,
Rudy Komacsar
Senior Database Administrator
"Ave Caesar! - Morituri te salutamus."