September 28, 2021 at 3:27 pm
Is there a T-SQL way to add the following to an existing table that already has data in it?
PRIMARY KEY CLUSTERED
(
[CALL_ID] ASC,
[DATE] ASC,
[TIME] ASC,
[CALL_TYPE] ASC,
[DOMAIN] ASC
)WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
) ON [PRIMARY] TEXTIMAGE_ON [PRIMARY]
September 28, 2021 at 5:14 pm
Yeah, you can use the ALTER TABLE ... ADD CONSTRAINT clause.
ALTER TABLE MYTABLE ADD CONSTRAINT PK_MYTABLE PRIMARY KEY CLUSTERED (…..)
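Spelled out against the columns and options from the original post, it would look something like this (a minimal sketch; dbo.MYTABLE and PK_MYTABLE are placeholder names):
-- dbo.MYTABLE and PK_MYTABLE are placeholders; the column list and
-- index options are taken from the original post.
ALTER TABLE dbo.MYTABLE
ADD CONSTRAINT PK_MYTABLE
    PRIMARY KEY CLUSTERED ([CALL_ID] ASC, [DATE] ASC, [TIME] ASC, [CALL_TYPE] ASC, [DOMAIN] ASC)
    WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF,
          ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON)
    ON [PRIMARY];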
September 28, 2021 at 5:29 pm
Can this be done with a populated table with 24MM rows and 130 fields? I keep getting errors that tempdb is full. I tried copying the table, making the change, and then repopulating the copy, but I get a constraint violation.
September 28, 2021 at 6:22 pm
Can this be done with a populated table with 24MM rows and 130 fields? I keep getting errors that tempdb is full. I tried copying the table, making the change, and then repopulating the copy, but I get a constraint violation.
Well, if the data in the table doesn't currently comply with the key, you will have to clean it up in some way. That will be a problem regardless of how you try to add the key.
September 28, 2021 at 6:25 pm
Can this be done with a populated table with 24MM rows and 130 fields? I keep getting errors that tempdb is full. I tried copying the table, making the change, and then repopulating the copy, but I get a constraint violation.
When you create an index, you will need at least as much space in tempdb as the size of the index. In this case, a clustered index IS the table, so that's going to be fairly large.
When you get the constraint violation, you have data that violates the primary key. Likely there are duplicate rows based upon the values in these 5 columns.
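A quick way to check is to count rows per candidate key (a sketch; dbo.MYTABLE is a placeholder, the columns come from the original post):
-- Any combination returned here appears more than once and will
-- block the primary key until it is cleaned up.
SELECT [CALL_ID], [DATE], [TIME], [CALL_TYPE], [DOMAIN], COUNT(*) AS RowsPerKey
FROM dbo.MYTABLE          -- placeholder table name
GROUP BY [CALL_ID], [DATE], [TIME], [CALL_TYPE], [DOMAIN]
HAVING COUNT(*) > 1;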
Michael L John
If you assassinate a DBA, would you pull a trigger?
To properly post on a forum:
http://www.sqlservercentral.com/articles/61537/
September 29, 2021 at 1:15 pm
What @michael-2 says.
The reason you need space in tempdb for the index is that the data must be sorted before it can be written out to the index. So, yeah, you need a lot more space for what you're trying to do.
But yes, fundamentally, you can do it. You just have to address what Michael says.
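If you want a rough idea of the space involved before you run it, something like this can help (a sketch; sp_spaceused and sys.dm_db_file_space_usage are standard system objects, dbo.MYTABLE is a placeholder):
-- Rough size of the existing table: building the clustered index needs
-- at least this much working space, plus room for the sort.
EXEC sp_spaceused N'dbo.MYTABLE';   -- placeholder table name

-- Free space currently available in the tempdb data files (in MB).
SELECT SUM(unallocated_extent_page_count) * 8 / 1024 AS free_space_mb
FROM tempdb.sys.dm_db_file_space_usage;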
"The credit belongs to the man who is actually in the arena, whose face is marred by dust and sweat and blood"
- Theodore Roosevelt
Author of:
SQL Server Execution Plans
SQL Server Query Performance Tuning
September 29, 2021 at 11:59 pm
When you create an index, you will need at least as much space in tempdb as the size of the index. In this case, a clustered index IS the table, so that's going to be fairly large.
I'm pretty sure that's not true when SORT_IN_TEMPDB = OFF, and it does default to OFF.
... AND that example is specifically for converting a 200MB HEAP to a Clustered Index with an 80% Fill Factor. (Note that 200MB/80% = 250MB).
FROM THE LINK ABOVE:
3. Determine additional temporary disk space for sorting.
Space requirements are shown for sorting in tempdb (with SORT_IN_TEMPDB set to ON) and sorting in the target location (with SORT_IN_TEMPDB set to OFF).
a. When SORT_IN_TEMPDB is set to ON, tempdb must have sufficient disk space to hold the largest index (1 million * 200 bytes ~ 200 MB). Fill factor is not considered in the sorting operation.
Additional disk space (in the tempdb location) equal to the Configure the index create memory Server Configuration Option value = 2 MB.
Total size of temporary disk space with SORT_IN_TEMPDB set to ON ~ 202 MB.
b. When SORT_IN_TEMPDB is set to OFF (default), the 250 MB of disk space already considered for the new index in step 2 is used for sorting.
Additional disk space (in the target location) equal to the Configure the index create memory Server Configuration Option value = 2 MB.
Total size of temporary disk space with SORT_IN_TEMPDB set to OFF = 2 MB.
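For what it's worth, you can say explicitly where the sort happens when you add the constraint (a sketch; the table and constraint names are placeholders):
-- SORT_IN_TEMPDB = OFF (the default) sorts in the target filegroup instead
-- of tempdb, at the cost of extra space in the user database.
ALTER TABLE dbo.MYTABLE
ADD CONSTRAINT PK_MYTABLE
    PRIMARY KEY CLUSTERED ([CALL_ID], [DATE], [TIME], [CALL_TYPE], [DOMAIN])
    WITH (SORT_IN_TEMPDB = OFF);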
--Jeff Moden
Change is inevitable... Change for the better is not.
September 30, 2021 at 12:17 am
Can this be done with a populated table with 24MM rows and 130 fields? I keep getting errors that tempdb is full. I tried copying the table, making the change, and then repopulating the copy, but I get a constraint violation.
Are you, by any chance, using SET IDENTITY_INSERT ON ???
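For context: if the copy has an IDENTITY column and you want to keep the original values while repopulating it, the insert has to be wrapped like this (a sketch; table and column names are placeholders, and the remaining 100+ columns are elided):
-- Needed only if the target table has an IDENTITY column and you want to
-- preserve the original values; only one table per session can have it ON.
SET IDENTITY_INSERT dbo.MYTABLE_COPY ON;

INSERT INTO dbo.MYTABLE_COPY ([CALL_ID], [DATE], [TIME], [CALL_TYPE], [DOMAIN] /* , ...remaining columns */)
SELECT [CALL_ID], [DATE], [TIME], [CALL_TYPE], [DOMAIN] /* , ...remaining columns */
FROM dbo.MYTABLE;

SET IDENTITY_INSERT dbo.MYTABLE_COPY OFF;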
--Jeff Moden
Change is inevitable... Change for the better is not.
September 30, 2021 at 1:32 pm
Ah... two questions also come to mind.
Don't forget my question about SET IDENTITY_INSERT ON above this.
--Jeff Moden
Change is inevitable... Change for the better is not.
September 30, 2021 at 2:15 pm
Can you post the table definition?
Your original post lists the columns you want included in your clustered primary key, and that looks like a very wide clustering key. Best practices recommend that the clustering key be narrow, because the clustering key is part of the key for every non-clustered index. A wide clustering key also increases the storage required for the table, because more pages are required for the intermediate level(s) of the index, which aren't required for a heap.
It may be that those columns are the right columns for the primary key (a business key that uniquely identifies a row) but not the right columns for the clustering key (more of a performance key that sorts the data). Ideally the clustering key is unique in itself, because otherwise SQL has to make it unique by adding a uniqueifier (int) to the clustering key, but that doesn't mean it is the best primary key.
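To illustrate, one common pattern is a narrow clustered index on a surrogate, with the wide business key still enforced as a nonclustered primary key (a sketch; dbo.MYTABLE, SurrogateID, and the index/constraint names are all hypothetical):
-- Narrow, ever-increasing clustering key on a hypothetical surrogate column
-- (for example an IDENTITY added for this purpose).
CREATE CLUSTERED INDEX CIX_MYTABLE ON dbo.MYTABLE (SurrogateID);

-- The wide business key is still enforced, but as a nonclustered primary key,
-- so it is not carried into every other nonclustered index.
ALTER TABLE dbo.MYTABLE
ADD CONSTRAINT PK_MYTABLE
    PRIMARY KEY NONCLUSTERED ([CALL_ID], [DATE], [TIME], [CALL_TYPE], [DOMAIN]);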
Jack Corbett
Consultant - Straight Path Solutions
Check out these links on how to get faster and more accurate answers:
Forum Etiquette: How to post data/code on a forum to get the best help
Need an Answer? Actually, No ... You Need a Question
September 30, 2021 at 3:01 pm
Can you post the table definition?
Your original post lists the columns you want included in your clustered primary key, and that looks like a very wide clustering key. Best practices recommend that the clustering key be narrow, because the clustering key is part of the key for every non-clustered index. A wide clustering key also increases the storage required for the table, because more pages are required for the intermediate level(s) of the index, which aren't required for a heap.
It may be that those columns are the right columns for the primary key (a business key that uniquely identifies a row) but not the right columns for the clustering key (more of a performance key that sorts the data). Ideally the clustering key is unique in itself, because otherwise SQL has to make it unique by adding a uniqueifier (int) to the clustering key, but that doesn't mean it is the best primary key.
To add to this, will any of the values in the 5 columns change once they are inserted?
Michael L John
If you assassinate a DBA, would you pull a trigger?
To properly post on a forum:
http://www.sqlservercentral.com/articles/61537/