August 2, 2006 at 9:13 am
Hi,
I'm hoping that someone can provide a solution to a problem that I have. I am trying to truncate a table (let's call it A), but it is being referenced by a foreign key from another table (let's call it B). I disabled all constraints on table B (ALTER TABLE B NOCHECK CONSTRAINT ALL). When I tried to truncate table A, I still received the same error message: can't truncate because it is being referenced by a foreign key. It seems that the only way I can truncate table A is to first drop the foreign key constraint from table B. Is this correct? Any suggestions?
Thanks,
ken8iron
August 2, 2006 at 10:03 am
Hello Ken,
You need to physically remove the foreign key constraint before you can truncate the table; TRUNCATE TABLE fails even when the constraint is only disabled, because the table must not be referenced by a foreign key at all. After truncation you need to re-create the constraint.
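For example, with the A and B tables from the original post (the constraint and column names here are made up for illustration):
--Drop the foreign key on the child table
ALTER TABLE B DROP CONSTRAINT FK_B_A
--Now the parent table can be truncated
TRUNCATE TABLE A
--Re-create the constraint afterwards
ALTER TABLE B ADD CONSTRAINT FK_B_A FOREIGN KEY (A_id) REFERENCES A (A_id)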
Lucky
August 3, 2006 at 7:21 am
Is there a way to get all the DDL for all constraints on a table? Then I could drop all the constraints on the table, run the truncate, and then recreate the constraints... OR should I not even bother and just DELETE?
August 3, 2006 at 7:39 am
To get the FK constraints related to a specific table, use the proc sp_fkeys:
USE pubs
EXEC sp_fkeys titles
results (the columns are PKTABLE_QUALIFIER, PKTABLE_OWNER, PKTABLE_NAME, PKCOLUMN_NAME, FKTABLE_QUALIFIER, FKTABLE_OWNER, FKTABLE_NAME, FKCOLUMN_NAME, KEY_SEQ, UPDATE_RULE, DELETE_RULE, FK_NAME, PK_NAME, DEFERRABILITY):
pubs | dbo | titles | title_id | pubs | dbo | roysched | title_id | 1 | 1 | 1 | FK__roysched__title___0DAF0CB0 | UPKCL_titleidind | 7 |
pubs | dbo | titles | title_id | pubs | dbo | sales | title_id | 1 | 1 | 1 | FK__sales__title_id__0BC6C43E | UPKCL_titleidind | 7 |
pubs | dbo | titles | title_id | pubs | dbo | titleauthor | title_id | 1 | 1 | 1 | FK__titleauth__title__060DEAE8 | UPKCL_titleidind | 7 |
Lowell
August 3, 2006 at 7:42 am
I did see a script posted on this site that generated a script for all the foreign key (or was it primary key?) constraints in a database. You could try searching for it. Unfortunately, it didn't work if the constraints acted on composite keys. I gave up on my own attempt to get it working!
If your foreign key constraint names all begin with "FK_", for example, then you could start off with something like this:
SELECT CONSTRAINT_NAME, TABLE_NAME, COLUMN_NAME
FROM INFORMATION_SCHEMA.CONSTRAINT_COLUMN_USAGE
WHERE CONSTRAINT_NAME LIKE 'FK[_]%' --bracket the underscore; a bare _ is a single-character wildcard in LIKE
but then you'd still need to script them out one by one using Query Analyzer.
Even if you used DELETE FROM, you'd still run into problems if the constraints weren't created with cascading delete, or if the child table has more than one column referencing the parent table.
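In other words, without ON DELETE CASCADE you'd have to empty the child table yourself before touching the parent. A minimal sketch with the A/B names from the original post:
--Delete child rows first so nothing in B still references A
DELETE FROM B
--Now the parent can be emptied
DELETE FROM A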
John
August 7, 2006 at 5:08 am
Inspired by Lowell's suggestion to use sp_fkeys, I have finally come up with a script that I hope will work in all situations. Here it is:
SET NOCOUNT ON
--Create temp table to hold FK information.
--May need to increase varchar sizes if you have composite
--keys with many columns or long names
CREATE TABLE #FKs (PKTABLE_QUALIFIER sysname NULL, PKTABLE_OWNER sysname,
PKTABLE_NAME sysname, PKCOLUMN_NAME VARCHAR(256), FKTABLE_QUALIFIER sysname NULL,
FKTABLE_OWNER sysname, FKTABLE_NAME sysname, FKCOLUMN_NAME VARCHAR(256),
KEY_SEQ smallint, UPDATE_RULE smallint, DELETE_RULE smallint,
FK_NAME sysname NULL, PK_NAME sysname NULL, DEFERRABILITY smallint)
--Populate the table.
--The STUFF function in here gets rid of the [dbo]. from the front of the table name
EXEC sp_MSforeachtable 'DECLARE @table sysname
SET @table=CAST(STUFF(''?'', 1, 6, '''') AS sysname)
INSERT INTO #FKs EXEC (''sp_fkeys '' + @table)'
--Create temp table to contain cascade actions
CREATE TABLE #Cascade (Number tinyint, CascadeAction VARCHAR(9))
--Populate the table
INSERT INTO #Cascade
SELECT 0, 'CASCADE' UNION
SELECT 1, 'NO ACTION'
--Concatenate cols of composite keys into one row
DECLARE @maxseq smallint
DECLARE @i smallint
SET @i = 1
SELECT @maxseq = MAX(KEY_SEQ) FROM #FKs
WHILE @i < @maxseq
BEGIN
UPDATE f1
SET f1.PKCOLUMN_NAME = f1.PKCOLUMN_NAME + ', ' + f2.PKCOLUMN_NAME,
f1.FKCOLUMN_NAME = f1.FKCOLUMN_NAME + ', ' + f2.FKCOLUMN_NAME
FROM #FKs f1 JOIN #FKs f2
ON f1.KEY_SEQ = f2.KEY_SEQ - @i
AND f1.FK_NAME = f2.FK_NAME
AND f1.KEY_SEQ = 1
SET @i = @i + 1
END
--Generate the script
SELECT 'ALTER TABLE ' + f.FKTABLE_NAME + '
ADD CONSTRAINT ' + f.FK_NAME + ' FOREIGN KEY (' + f.FKCOLUMN_NAME + ')
REFERENCES ' + f.PKTABLE_NAME + ' (' + f.PKCOLUMN_NAME + ')
ON DELETE ' + cd.CascadeAction + '
ON UPDATE ' + cu.CascadeAction
FROM #FKs f
JOIN #Cascade cu
ON f.UPDATE_RULE = cu.Number
JOIN #Cascade cd
ON f.DELETE_RULE = cd.Number
WHERE f.KEY_SEQ = 1
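Since the point of all this is to truncate, you would also want the matching DROP statements, and #FKs already contains everything needed. A sketch following the same pattern as the script above:
--Generate the drop script (run these before truncating; run the ADDs after reloading)
SELECT 'ALTER TABLE ' + f.FKTABLE_NAME + ' DROP CONSTRAINT ' + f.FK_NAME
FROM #FKs f
WHERE f.KEY_SEQ = 1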
August 7, 2006 at 5:17 am
I guess I don't get it... if Table A is referenced by Table B, it would appear that Table A is a parent of Table B. Why would you want to truncate Table A to begin with? And, unless you correctly repopulate Table A, re-creating the FKs on Table B will surely result in an error, the exception being, of course, if Table B is empty.
Again, why are you truncating Table A? The answer may lead us to the correct solution...
--Jeff Moden
Change is inevitable... Change for the better is not.
August 8, 2006 at 2:23 pm
He's probably also truncating Table B.
You can have Enterprise Manager generate the drop and add foreign key constraint statements with a little fiddling with the "Generate SQL Script..." feature. Then just replace the DROP TABLE commands with TRUNCATE TABLE.
August 8, 2006 at 6:25 pm
If that's the case... I'd just drop both tables and the related keys and start them over... but, unless it's for repopulating tables for testing purposes, I'd still like to know why Ken needs to do this.
--Jeff Moden
Change is inevitable... Change for the better is not.
August 8, 2006 at 9:23 pm
Great feedback. Thanks everyone.
Here's the reason why I want to use truncate:
We have a database refresh job that runs weekly. The job basically deletes all rows from all tables, then reloads the data into the tables. I wanted to replace the DELETE statements with TRUNCATE TABLE to try to improve performance. Some of the tables are huge, and I thought truncate would be a lot faster than delete. There are foreign key constraints on the tables, so the tables must be emptied in a specific order, and loaded in the reverse order. For example: if table A is the parent of table B (i.e. table B has foreign keys to table A), then table B must be deleted before table A. When loading, table A must be loaded before table B. You get the picture, right?
Anyway, it seems from Jeff's feedback that it may be more efficient to just drop the tables, recreate the tables and constraints, then load the data... probably loading the data with the constraints disabled, to improve performance.
Better yet, it may be even faster to just drop the database, recreate it, and then load the data with the constraints disabled.
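Disabling and re-enabling the constraints around the load would look something like this (just a sketch; B stands in for each child table here):
--Disable constraint checking before the load
ALTER TABLE B NOCHECK CONSTRAINT ALL
--...load the data...
--Re-enable afterwards; WITH CHECK makes SQL Server revalidate the existing rows
--so the constraints are trusted again
ALTER TABLE B WITH CHECK CHECK CONSTRAINT ALL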
What do you think is the best, and fastest way?
Thanks,
Ken
August 8, 2006 at 9:35 pm
One thing that I forgot to mention: the data that we load is grouped by year. That is, we get a set of data for 2003, 2004, 2005, 2006, etc.
The job purges the database (with DELETEs), then it loads the data for 2003 into temp tables. Then it takes the data from the temp tables and loads it into the destination tables, using a query as follows:
INSERT INTO DestinationTable
SELECT * FROM #TempTable
WHERE SomePrimaryKey NOT IN (SELECT SomePrimaryKey FROM DestinationTable)
Once the data for a particular year has finished loading, the remaining sets of data are loaded as follows:
1. drops and recreates the temp tables
2. load the data into the temp tables
3. load the data from the temp tables into the destination tables using the query above.
I've tried using NOT EXISTS statements instead, but it didn't make much difference in terms of speed.
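For reference, the NOT EXISTS form of the load query would look something like this (a sketch using the same placeholder names as above):
INSERT INTO DestinationTable
SELECT t.*
FROM #TempTable t
WHERE NOT EXISTS (SELECT 1 FROM DestinationTable d
WHERE d.SomePrimaryKey = t.SomePrimaryKey)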
The total job runs for approx. 5 hours.
And in case anyone is interested in what sort of data we're loading, it's Chrome data (automotive data from Chrome), which lists all car models, styles, etc.
If anyone has suggestions on how to improve performance, I would like to hear about them.
Thanks,
Ken
August 8, 2006 at 10:16 pm
Now... that's why I was asking... you're doing it the hard way.
If you have a SAN storage unit (or similar), check with the network guys and find out how to do a clone or a snapshot instead. Our 200 gig production database only takes 14 minutes to flip into the reporting database... ALL data, entities, and other objects are 100% intact. Also, check with the DBA (if that's not you); he may know even if the network guys don't.
Right now you're doing it the hard and very slow way...
--Jeff Moden
Change is inevitable... Change for the better is not.