Table size limitation on SQL update task

  • I have two tables that need data updated, deleted, and/or added.  One table has about 100,000 records and everything works fine.  The other has over 600,000 records, and although all the tasks appear to succeed, the balancing reports suggest there may be a limit on the number of records that can be manipulated at one time.  Is that true?  Since both tables contain historical information, we can't just truncate and append, unless I dump the data into a temporary table, truncate the main table, add the new data, and then load everything back in.  That seems like a waste of resources to me.  Is there anything else I can do?
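
    Just to show the shape of the workaround I'm describing, it would look roughly like this (table names are made up for illustration):

        SELECT * INTO #Staging FROM dbo.MainHistory;             -- dump history to a temp table
        TRUNCATE TABLE dbo.MainHistory;                          -- clear the main table
        INSERT INTO dbo.MainHistory SELECT * FROM dbo.DailyFeed; -- add the new data
        INSERT INTO dbo.MainHistory SELECT * FROM #Staging;      -- load everything back in
        DROP TABLE #Staging;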

  • If you post your table structures and the queries you are running, you might get a better response than "It depends..."

    Certainly SQL Server has no problem updating 600,000 records; depending on hardware and on table and index structure, it might just take a little while.
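
    If sheer volume ever does become a concern, you can also break a big update into chunks.  A rough sketch (table and column names are invented):

        SET ROWCOUNT 50000;                     -- touch at most 50,000 rows per pass
        WHILE 1 = 1
        BEGIN
            UPDATE dbo.BigTable
            SET Balance = NewBalance
            WHERE Balance <> NewBalance;        -- only rows still needing the change

            IF @@ROWCOUNT = 0 BREAK;            -- nothing left to update
        END
        SET ROWCOUNT 0;                         -- back to unlimited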

    --------------------
    Colt 45 - the original point and click interface

  • If you were getting no results at all, I would suspect a resource limitation, batch size, bad design, etc.  Since you are getting an incorrect result, however, I would expect a data or logic problem.  Any triggers?  Have you examined the records causing the balance problem?  Try changing your update query into a select query for some of the problem records and see whether you get what's expected.
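
    For example, if the update is written like the first statement below, the second shows exactly which rows it would touch (table and column names are hypothetical):

        -- the update as it runs
        UPDATE t
        SET t.Amount = s.Amount
        FROM dbo.MainTable t
        JOIN dbo.DailyFeed s ON s.AcctID = t.AcctID;

        -- the same logic as a select, to inspect the rows first
        SELECT t.AcctID, t.Amount AS OldAmount, s.Amount AS NewAmount
        FROM dbo.MainTable t
        JOIN dbo.DailyFeed s ON s.AcctID = t.AcctID;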

    RandyHelpdesk: Perhaps I'm not the only one that does not know what you are doing. 😉

  • It's also not impossible that a developer, after testing, forgot to remove a TOP clause in a select statement or a SET ROWCOUNT statement.
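
    Either one silently caps the number of rows affected.  For example (hypothetical table name):

        SELECT TOP 1000 * FROM dbo.MainTable;   -- a leftover TOP limits the result set

        SET ROWCOUNT 1000;                      -- a leftover ROWCOUNT limits every
        UPDATE dbo.MainTable SET Flag = 1;      -- following statement, updates included
        SET ROWCOUNT 0;                         -- 0 restores normal behaviour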

    RandyHelpdesk: Perhaps I'm not the only one that does not know what you are doing. 😉

  • In response to everyone:

    1. We decided to dump, truncate and append, but I foresee a problem with this once we become bigger and more record-intensive.

    2. There are no triggers on this table - I upsized it from an Access 2K database.  By triggers (and please excuse my ignorance), are you referring to primary keys?  That's all I've added to the tables.

    3. When I use Query Analyzer, I don't see any issues with any of the update/append/delete queries I'm trying to use, except that I had to switch two append queries around to avoid duplicate entries.  Even with that in place, the 600,000-row table still had data that didn't delete properly and several entries that weren't updated as expected (I've put a sketch of the misbehaving delete after this list).

    4. I looked at the mismatched data and I can see what the rows are, but they give me no clue as to why they are there.  I would need to fix them manually every day, and that's why we went to dump, truncate and append - that way, the data balances.

    5. No, I didn't put TOP in anywhere (I'm the developer).  When I open the table it returns all the rows.  But thanks for the suggestion.
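
    And the sketch promised under item 3: rerunning the delete's criteria as a select shows which rows survived when they shouldn't have (a simplified sketch, not my real schema):

        -- the delete, roughly
        DELETE m
        FROM dbo.MainHistory m
        JOIN dbo.PurgeList p ON p.AcctID = m.AcctID;

        -- afterwards, any survivors show up here
        SELECT m.*
        FROM dbo.MainHistory m
        JOIN dbo.PurgeList p ON p.AcctID = m.AcctID;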

  • A trigger is a stored T-SQL procedure attached to a table that executes automatically in response to inserts, updates, or deletes.  It is not related to keys or indexes.
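
    For example, a trigger on your table would have been created with something like this (names are made up):

        CREATE TRIGGER trg_MainHistory_Audit
        ON dbo.MainHistory
        AFTER INSERT, UPDATE, DELETE
        AS
        BEGIN
            -- fires automatically after any insert, update, or delete on the table
            INSERT INTO dbo.AuditLog (ChangedAt) VALUES (GETDATE());
        END

    You can list any triggers on a table with sp_helptrigger 'dbo.MainHistory'.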

    Every table should have a primary key, even if you have to add an identity column.  If you have a set of columns that is unique, those columns are a good candidate for a PK.  Without a key or any index, performance will be very bad.
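
    For example, to add a surrogate key to an existing table (hypothetical name):

        ALTER TABLE dbo.MainHistory
            ADD RowID int IDENTITY(1,1) NOT NULL
                CONSTRAINT PK_MainHistory PRIMARY KEY CLUSTERED;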

    It sounds like you have a problem with the queries.  I can't say what without seeing them or understanding the process.  Perhaps your queries need to distinguish the existing records from the new records being inserted.
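
    For instance, an append query that skips rows already present would look something like this (names invented):

        INSERT INTO dbo.MainHistory (AcctID, Amount)
        SELECT d.AcctID, d.Amount
        FROM dbo.DailyFeed d
        WHERE NOT EXISTS (SELECT 1
                          FROM dbo.MainHistory m
                          WHERE m.AcctID = d.AcctID);   -- only rows not already there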

    RandyHelpdesk: Perhaps I'm not the only one that does not know what you are doing. 😉
