delete

  • How can I delete all the data from a database with just one command? I just want to make my database empty, without any data in it except the schema.

  • There's no way to delete all the data with one simple command.

     

    But you can use Enterprise Manager to generate a SQL script of all objects: tables, indexes, constraints, etc. It will give you one large DDL script, and of course no data.

    Then you can execute that on any server and have a nice clean DB.

     

  • I think if I can get all the table names with a script, then I can do a DELETE FROM for each table. How can I get just the table names from a DB?
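
    For reference, one way to get just the user table names (a sketch against the SQL Server 2000-era catalog; INFORMATION_SCHEMA.TABLES is the portable alternative):

    SELECT name FROM sysobjects WHERE xtype = 'U' ORDER BY name -- 'U' = user table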

  • Take a look at this thread:

    Truncate All Tables

    http://www.sqlteam.com/forums/topic.asp?TOPIC_ID=65341

     

  • I get the following message for most of the tables when I run the query listed in the link provided:

    Warning: The table 'empcontact' has been created but its maximum row size (17468) exceeds the maximum number of bytes per row (8060). INSERT or UPDATE of a row in this table will fail if the resulting row length exceeds 8060 bytes.

    Warning: The table 'empact' has been created but its maximum row size (17727) exceeds the maximum number of bytes per row (8060). INSERT or UPDATE of a row in this table will fail if the resulting row length exceeds 8060 bytes.

  • Those errors could not have come from the script at that link.

    The errors are coming from some CREATE TABLE statements that you must have added.

     

  • OK.

    I truncated all the data in my DB using:

    -- disable referential integrity

    EXEC sp_MSForEachTable 'ALTER TABLE ? NOCHECK CONSTRAINT ALL'

    GO

    EXEC sp_MSForEachTable 'TRUNCATE TABLE ?'

    GO

    -- enable referential integrity again

    EXEC sp_MSForEachTable 'ALTER TABLE ? CHECK CONSTRAINT ALL'

    GO

    but still the space is not freed; moreover, the database is occupying more space on the disk than before.
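
    One caveat with the script above: TRUNCATE TABLE fails outright on any table referenced by a FOREIGN KEY constraint, even while the constraints are disabled, so a plain DELETE pass is the usual fallback for those tables, e.g.:

    EXEC sp_MSForEachTable 'DELETE FROM ?'
    GO

    DELETE is fully logged, though, so it can grow the transaction log considerably.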

  • You probably need to run UPDATEUSAGE on all your tables.


    Live to Throw
    Throw to Live
    Will Summers

  • I meant to say:

    You probably need to run UPDATEUSAGE on all your tables to recover the free space.

    I always run UPDATESTATS and UPDATEUSAGE after every TRUNCATE.
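
    In case the exact commands help, a minimal sketch (assuming SQL Server 2000, run in the database that was just truncated):

    DBCC UPDATEUSAGE (0) -- correct the space-usage counts in sysindexes for the current database
    EXEC sp_updatestats -- refresh statistics on all user tables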


    Live to Throw
    Throw to Live
    Will Summers

  • >>> ...but still the space is not freed... <<<

    Your transaction log may have grown, but the space used is definitely marked as unused; it's just not released to the OS. Files won't shrink automatically, if that is what you are expecting.

    How are you checking for the used space?
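
    A quick way to check, and to release the space explicitly if that is what you are after (a sketch; mydb is a placeholder database name):

    EXEC sp_spaceused -- database size and unallocated space
    DBCC SHRINKDATABASE (mydb) -- release unused space in the files back to the OS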


    * Noel

  • Here is a script to see the table sizes in your database (the database to be checked needs to be selected before running it):

    select
        so.id as [OBJECT_ID],
        CONVERT(CHAR(30), so.name) as [OBJECT_NAME],
        coalesce(j_rows.rows, 0) as [ROWCOUNT],
        coalesce(j_ru.sum_reserved, 0) * cast(m.low as dec) / 1024 as [RESERVED (KB)],
        d.data * cast(m.low as dec) / 1024 / 1024 as [DATA (MB)],
        (coalesce(j_ru.sum_used, 0) - d.data) * cast(m.low as dec) / 1024 / 1024 as [INDEX (MB)],
        (coalesce(j_ru.sum_reserved, 0) - coalesce(j_ru.sum_used, 0)) * cast(m.low as dec) / 1024 as [UNUSED (KB)]
    from
        sysobjects so
        -- row counts
        left join sysindexes j_rows on j_rows.indid < 2 and j_rows.id = so.id
        -- reserved and used pages per table
        left join ( select id, sum(reserved) as sum_reserved, sum(used) as sum_used
                    from sysindexes where indid in (0, 1, 255) group by id ) j_ru on j_ru.id = so.id
        -- data pages (in-row plus text/image)
        left join ( select j_dpages.id, coalesce(j_dpages._sum, 0) + coalesce(j_used._sum, 0) as data
                    from ( select id, sum(dpages) as _sum from sysindexes where indid < 2 group by id ) j_dpages
                    left join ( select id, sum(used) as _sum from sysindexes where indid = 255 group by id ) j_used
                        on j_used.id = j_dpages.id ) d on d.id = so.id
        -- page size in bytes
        inner join master.dbo.spt_values m on m.number = 1 and m.type = 'E'
    where OBJECTPROPERTY(so.id, N'IsUserTable') = 1
    order by [DATA (MB)] DESC, [ROWCOUNT] ASC


    Live to Throw
    Throw to Live
    Will Summers

  • I ran the above script, but my result looks like this (the columns are OBJECT_ID, OBJECT_NAME, ROWCOUNT, RESERVED (KB), DATA (MB), INDEX (MB), UNUSED (KB)):

    1301579675 mapilink                       0 0.000000 0.000000000 0.000000000 0.000000

    1317579732 matter                         0 0.000000 0.000000000 0.000000000 0.000000

    1349579846 matter2                        0 0.000000 0.000000000 0.000000000 0.000000

    1381579960 matter3                        0 0.000000 0.000000000 0.000000000 0.000000

    1413580074 message                        0 0.000000 0.000000000 0.000000000 0.000000

    1445580188 notes                          0 0.000000 0.000000000 0.000000000 0.000000

    1477580302 otldtl                         0 0.000000 0.000000000 0.000000000 0.000000

    1493580359 otlhdr                         0 0.000000 0.000000000 0.000000000 0.000000

    1525580473 pcmatter                       0 0.000000 0.000000000 0.000000000 0.000000

    1557580587 phase                          0 0.000000 0.000000000 0.000000000 0.000000

    1589580701 phone                          0 0.000000 0.000000000 0.000000000 0.000000

    1621580815 popuid                         0 0.000000 0.000000000 0.000000000 0.000000

  • It looks like there is no problem; all the tables are empty.

     

  • Truncate statements are not logged, so I don't believe the t-log would be affected by this. If those sp's had done DELETEs, however, it would most definitely increase the t-log size. Just do a truncate log statement to shrink it down. I'm not sure of the exact syntax, but if I recall it's something like:

    backup log mydatabase with truncate_only


    Cheers,

    Ben Sullins
    bensullins.com
    Beer is my primary key...

  • >>> truncate statements are not logged ... <<<

    Beg to differ... well, slightly.

    TRUNCATE statements *are* MINIMALLY logged. They are logged; the page deallocations are simply the only thing recorded in the log.

    Also, ...WITH TRUNCATE_ONLY will be deprecated, so you should start looking into setting up a proper maintenance schedule.
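
    For example, a minimal sketch of a regular log backup instead (the database name and path are placeholders):

    BACKUP LOG mydatabase TO DISK = 'C:\Backups\mydatabase_log.trn'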

    Cheers,


    * Noel
