January 18, 2007 at 4:41 pm
Hi,
I need to create tables with a fixed number of rows, let's say 50. When a new row of data is added, the oldest entry disappears. I don't care what happens to the old data; I just need the latest 50.
It would be nice if this could be done without too much overhead. If you can think of more than one way of doing this it would be helpful.
The tables will be used by a web server that will display operational "dashboard" data from various systems.
Thank you,
Barry
January 18, 2007 at 4:48 pm
You've got a couple of options here. Either create a stored procedure that will 1) check the current row count, 2) delete the oldest row if needed, and 3) insert the new row; or use an INSTEAD OF INSERT trigger. The amount of overhead will be determined by the volume of INSERTs you are expecting. I would go the stored procedure route personally, but the trigger is an option as well.
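A minimal sketch of the stored-procedure approach, following the three steps above. The table name, column names, and the procedure name are assumptions for illustration:

```sql
-- Assumed table: dbo.Dashboard (EntryID INT IDENTITY PRIMARY KEY, EntryData VARCHAR(100))
CREATE PROCEDURE dbo.InsertDashboardEntry
    @EntryData VARCHAR(100)
AS
BEGIN
    SET NOCOUNT ON;

    -- 1. Check the current row count
    IF (SELECT COUNT(*) FROM dbo.Dashboard) >= 50
        -- 2. Delete the oldest row to make room
        DELETE FROM dbo.Dashboard
        WHERE EntryID = (SELECT MIN(EntryID) FROM dbo.Dashboard);

    -- 3. Insert the new row
    INSERT INTO dbo.Dashboard (EntryData)
    VALUES (@EntryData);
END
```

Note that this only enforces the 50-row limit for inserts that go through the procedure.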
January 18, 2007 at 5:06 pm
The SP route will not stop another SP, written by another developer, from inserting more than 50 rows.
A trigger is clearly the best option here.
But not INSTEAD OF; use a plain AFTER trigger.
You need to keep whichever 50 rows are the latest in the table AFTER the INSERT. So an AFTER trigger is more logical, and simpler to code, in this case.
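A minimal sketch of such an AFTER trigger (table, column, and trigger names are assumptions):

```sql
-- Assumed table: dbo.Dashboard (EntryID INT IDENTITY PRIMARY KEY, EntryData VARCHAR(100))
CREATE TRIGGER trg_Dashboard_KeepLatest50
ON dbo.Dashboard
AFTER INSERT
AS
BEGIN
    SET NOCOUNT ON;

    -- After any insert, keep only the 50 most recent rows
    DELETE FROM dbo.Dashboard
    WHERE EntryID NOT IN (
        SELECT TOP 50 EntryID
        FROM dbo.Dashboard
        ORDER BY EntryID DESC
    );
END
```

Because it fires after every INSERT statement regardless of which procedure issued it, this also handles multi-row inserts correctly.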
_____________
Code for TallyGenerator
January 18, 2007 at 5:11 pm
Yes, Sergiy is correct in saying that a SP will not stop the ability to insert more than 50 rows. I was assuming that the SP would be used for ALL inserts into your table. Sergiy is also correct in that an AFTER trigger would be easier in this case.
January 19, 2007 at 3:33 pm
Thank you, everyone.
Just to be clear, you are saying I should use an AFTER trigger to run a stored procedure after an INSERT.
The SP will check if the table has >50 rows, and if it does, it will delete the excess.
Thanks again,
Barry
January 19, 2007 at 4:40 pm
Actually, a trigger is a stored procedure.
It's just invoked in a different way.
_____________
Code for TallyGenerator
January 19, 2007 at 5:30 pm
Barry,
You probably already thought of this, but you will need to somehow designate the 51st row so you delete the correct row. If you use some variation of TOP 50 with a join, you are going to delete whatever SQL Server thinks is the 51st row, not necessarily what you think it is.
Luck, Dave
January 19, 2007 at 10:50 pm
You might be able to do the count thing with a RowNum IDENTITY column... have the trigger delete...
WHERE RowNum < IDENT_CURRENT('table_name')-49
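Wrapped into a complete trigger, that delete might look like the following sketch (table, column, and trigger names are assumptions):

```sql
-- Assumed table: dbo.table_name (RowNum INT IDENTITY PRIMARY KEY, ...)
CREATE TRIGGER trg_table_name_Trim
ON dbo.table_name
AFTER INSERT
AS
BEGIN
    SET NOCOUNT ON;

    -- Delete everything more than 49 identity values behind the latest one
    DELETE FROM dbo.table_name
    WHERE RowNum < IDENT_CURRENT('table_name') - 49;
END
```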
--Jeff Moden
Change is inevitable... Change for the better is not.
January 19, 2007 at 11:52 pm
Jeff, there is a rule. It sounds a little bold, but it actually works:
If there is a hardcoded value, there is an error.
That doesn't mean I won't tolerate an iteration step of 1 hardcoded somewhere, or won't accept the number of weekdays hardcoded as 7; but then I understand that the solution is limited to the current calendar, and I hope that won't change in my lifetime. So I can put up with that error.
In your code there is a hardcoded value, "49".
So, there is an error.
Should I tell you what kind of error it is?
_____________
Code for TallyGenerator
January 20, 2007 at 12:06 am
No "error"... Think of it as an "interation step" "that I hope will never be changed in my lifetime"...
...unless you'd rather see...
WHERE RowNum <= IDENT_CURRENT('table_name')-50
--Jeff Moden
Change is inevitable... Change for the better is not.
January 20, 2007 at 2:47 am
No, this is an error.
This will keep the 50 most recently inserted records about some events, not the records for the 50 most recent events.
Feel the difference?
_____________
Code for TallyGenerator
January 20, 2007 at 8:54 am
Got a code example for this particular "keep last 50" problem so we can see what you're talking about? I think you're talking about pre-assigning constants as variables, which also makes the constants self-documenting if the variables are named correctly. Before you start: variables named like @One (as you've used in other posts) or @Fifty also constitute an "error", as they don't expose what the constant is being used for.
--Jeff Moden
Change is inevitable... Change for the better is not.
January 20, 2007 at 1:12 pm
Deleting based on the latest identity value does not take into account the possibility that rows in the latest 50 have been deleted, or that the identity seed was incremented without rows being inserted because of a rollback or failed insert.
DELETE FROM MyTable
WHERE MyTableID NOT IN (
    SELECT TOP 50 a.MyTableID
    FROM MyTable a
    ORDER BY a.MyTableID DESC
)
January 20, 2007 at 9:15 pm
In the presence of an appropriate index, DClark's method and the following method both work at about the same speed and both do the same thing functionally...
DELETE FROM MyTable
WHERE MyTableID <= IDENT_CURRENT('MyTable')-50
However, without an index (it sometimes happens), the method above works more than 4 times faster.
But I don't think that's what Sergiy is getting at... I think he's suggesting the following...
DECLARE @RowsToKeep INT
SET @RowsToKeep = 50
DELETE FROM MyTable
WHERE MyTableID <= IDENT_CURRENT('MyTable')-@RowsToKeep
On that note, if it were a larger proc and the "50" were used more than once, I'd absolutely agree (well, almost). But it isn't and it isn't, so it just doesn't matter for this particular example.
I said "well, almost" because, if you really want to do it right, it really ought to be done where you don't have to change the code at all to change the setting. The settings should be in a configuration table and read when needed.
DECLARE @RowsToKeep INT
SELECT @RowsToKeep = Setting
FROM dbo.Configuration WITH (NOLOCK)
WHERE SettingName = 'FixedTableSize'
DELETE FROM MyTable
WHERE MyTableID <= IDENT_CURRENT('MyTable')-@RowsToKeep
But for a forum example? I really didn't expect to get my butt kicked over using a constant by someone who has used them in a good number of his own postings.
--Jeff Moden
Change is inevitable... Change for the better is not.
January 20, 2007 at 11:33 pm
"In the presence of an appropriate index, DClark's method and the following method both work at about the same speed and both do the same thing functionally...
DELETE FROM MyTable
WHERE MyTableID <= IDENT_CURRENT('MyTable')-50
"
As I stated in my prior post, they are not functionally the same. The IDENT_CURRENT function returns the last value generated for a table, not the last value in the table. Other values in the range of > IDENT_CURRENT('MyTable')-50 may also not be in the table, so the DELETE statements above could leave the table with fewer than 50 rows, or even empty.
I agree that the query I posted could perform worse, but sometimes that is the cost required to achieve the specified results. However, I doubt that there will be a noticeable difference in performance on a table that only has a few more than 50 rows.
Using a trigger for this is probably a bad idea. Why is it important to limit the number of rows in the table like this? Also, if you insert 60 rows into the table with one statement, you will immediately delete at least 10 of them.
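To illustrate why IDENT_CURRENT can't be trusted here: a rolled-back insert consumes an identity value without leaving a row behind. A small demonstration (table name is hypothetical):

```sql
CREATE TABLE dbo.GapDemo (ID INT IDENTITY(1,1), Val CHAR(1));

INSERT INTO dbo.GapDemo (Val) VALUES ('a');  -- ID = 1

BEGIN TRANSACTION;
INSERT INTO dbo.GapDemo (Val) VALUES ('b');  -- ID = 2 is consumed...
ROLLBACK;                                     -- ...but the row is gone;
                                              -- the identity seed is NOT rolled back

INSERT INTO dbo.GapDemo (Val) VALUES ('c');  -- ID = 3

-- IDENT_CURRENT('dbo.GapDemo') now reports 3, but the table holds only
-- IDs 1 and 3. Any "IDENT_CURRENT - N" arithmetic therefore counts the
-- phantom ID 2 and can leave fewer than N rows in the table.
```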