November 28, 2002 at 4:39 am
Hi,
I have a table of contacts with a column called contact_id. This column has an Identity Seed of 1 and an increment of 1. This works great. However, now that I've designed my database, I would like to delete all of my test data and set the seed back to 1. If I add more records after I've deleted the test data, it starts counting from where it left off!
How do I do this?
Thanks
Mike
November 28, 2002 at 5:01 am
Drop and re-create the table
or
DBCC CHECKIDENT ('table name', RESEED, seed value)
e.g.
DBCC CHECKIDENT ('MyTable', RESEED, 0)
November 28, 2002 at 5:12 am
Doing a truncate table will also do it.
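For example, reusing the MyTable name from the reply above (a quick sketch, assuming nothing references the table with a foreign key):
-- removes every row and puts the identity back to its original seed
TRUNCATE TABLE MyTable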
Andy
November 28, 2002 at 5:13 am
Works great.
Thanks for your help. Greatly appreciated.
Mike
December 3, 2002 at 10:10 pm
TRUNCATE deletes all rows and resets the identity counter.
If you don't want to drop and recreate your table, use DBCC CHECKIDENT() as listed above to reset the seed after you delete all the rows.
TRUNCATE has the advantage of not filling up your txn log. If your row count is small, no big deal.
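As a rough sketch of the DELETE-then-reseed route (again borrowing the MyTable name from the earlier reply):
-- remove the test data (logged row by row)
DELETE FROM MyTable
-- reseed to 0 so the next row inserted gets identity value 1
DBCC CHECKIDENT ('MyTable', RESEED, 0)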
December 4, 2002 at 2:22 am
I was wondering why my transaction log is twice the size of the data file!
Are there any issues with adding 20,000 records into a database, taking them out and then adding them back in a number of times?
I presume the statistics and indexes maintain themselves on each import?
Thanks, Mike
December 4, 2002 at 3:16 am
Each time you run your import, SQL Server will write the transactions to your transaction log, and so your log will continue to grow until you back up either the transaction log or the database.
If you aren't interested in actually recovering the data, you could set the recovery mode of your database to 'Simple', or you could back up the transaction log with the 'TRUNCATE_ONLY' option.
The indexes will maintain themselves, but unless you have the 'auto update statistics' option set on, your statistics will not maintain themselves. In this case you will need to manually update the statistics.
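For example, a minimal sketch using a made-up database name MyDB and the contacts table (ALTER DATABASE ... SET RECOVERY is SQL 2000 syntax; on SQL 7 you would use sp_dboption instead):
-- keep the log from growing between backups (SQL 2000)
ALTER DATABASE MyDB SET RECOVERY SIMPLE
-- or discard the logged transactions without keeping a backup of them
BACKUP LOG MyDB WITH TRUNCATE_ONLY
-- refresh statistics by hand if 'auto update statistics' is off
UPDATE STATISTICS Contacts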
December 6, 2002 at 4:03 am
The TRUNCATE command looks like the ideal one to use; however, it won't delete anything if the table is referenced by any FOREIGN KEYs!
If I delete the data and re-establish the identity seed back to the start, is there an easy way of clearing the transaction log? I presume the actual data file will reduce in size on the DELETE command.
Thanks
Mike
December 6, 2002 at 4:48 am
To clear the transaction log just run 'BACKUP LOG dbname WITH TRUNCATE_ONLY'.
As to the actual database space, SQL Server doesn't always re-use space freed by a DELETE command (SQL 2000 is much better at this than SQL 7). If you haven't got the 'Auto Shrink' option set on the database, you may have to manually shrink the database using the 'DBCC SHRINKDATABASE(dbname)' command.
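Run in that order it would look something like this (MyDB is just a placeholder database name):
-- throw away the logged transactions without taking a real log backup
BACKUP LOG MyDB WITH TRUNCATE_ONLY
-- then hand the freed space back to the operating system
DBCC SHRINKDATABASE (MyDB)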
December 6, 2002 at 6:54 am
quote:
I was wondering why my transaction log is twice the size of the data file!
Are there any issues in adding 20,000 records into a database, taking them out and then adding them back in a number of times?
I presume the statistics and indexes maintain themselves on each import?
Thanks, Mike
This all depends on how you do your import and on the settings of the database. If you use INSERT, the rows will be logged and the TL will grow even if the database is set to Simple Recovery in 2000 (or 'trunc. log on chkpt.' in 7), but with those settings the transactions are marked and removed from the TL at each checkpoint. If you run BULK INSERT or use BCP, the transactions are minimally logged provided you have the Bulk-Logged recovery model set in 2000 (or 'select into/bulkcopy' on in 7); without those settings the transactions are fully logged and will cause the TL to grow. As well, when you remove rows through DELETE, every row is fully logged, whereas TRUNCATE logs only the deallocation of the table's data pages (SQL Server uses internal pointers to know the offset of the data within the file).
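For illustration only, a minimal bulk-load sketch (the database name, table name and file path are invented for the example):
-- allow minimal logging of bulk operations (SQL 2000 syntax)
ALTER DATABASE MyDB SET RECOVERY BULK_LOGGED
-- load the rows from a flat file instead of row-by-row INSERTs
BULK INSERT Contacts FROM 'C:\import\contacts.txt'
WITH (FIELDTERMINATOR = ',', ROWTERMINATOR = '\n')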
If the TL has grown and contains transactions, use BACKUP LOG dbname WITH TRUNCATE_ONLY to flush them and follow with DBCC SHRINKFILE to shrink the log in 2000. If you are using SQL 7 there is a known issue with the virtual logs within the TL file; you will find a script on this site, sp_force_shrink_log, to clear the virtual logs so they release and allow the file to shrink. If size is a concern, set a maximum growth size for the file and monitor it regularly. Also, to stop file growth causing IO contention: if the maximum space you will allow is acceptable to have in use, set the file to that size up front and turn off automatic growth; it will have better performance overall. Just keep in mind that if you stop growth and the file hits the end during a transaction, that transaction will stop and roll back.
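As a sketch of that last point (the logical log file name and the sizes are only examples):
-- shrink the log file down to roughly 100 MB (SQL 2000)
DBCC SHRINKFILE (MyDB_Log, 100)
-- or fix the log at the size you are prepared to give it and stop autogrow
ALTER DATABASE MyDB MODIFY FILE (NAME = MyDB_Log, SIZE = 500MB, FILEGROWTH = 0)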