March 10, 2004 at 3:20 am
hi,
I have an inventory table.
tblInventory
<SCHEMA>
storeId
productId
quantity
</SCHEMA>
Now, I'd like to archive this table so that I can check what the inventory was on a given date. I am not really sure how to do this, or if there is a better way.
Should I perhaps:
1) perform a "select *" on the inventory
2) copy the records to a new table that is the same as tblInventory, except that it has a timestamp + storeId as a primary key.
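A minimal sketch of step 2, assuming a history table (here called tblInventoryHistory, a hypothetical name) with the same columns as tblInventory plus a SnapshotDate column:

```sql
-- Assumed history table: tblInventoryHistory(SnapshotDate, storeId, productId, quantity),
-- with (SnapshotDate, storeId, productId) as the primary key.
-- Run this on a schedule (e.g. nightly) to capture a point-in-time snapshot.
INSERT INTO tblInventoryHistory (SnapshotDate, storeId, productId, quantity)
SELECT GETDATE(), storeId, productId, quantity
FROM tblInventory
```

Checking the inventory on a given date is then just a SELECT on tblInventoryHistory filtered by SnapshotDate.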
All advice and resource suggestions greatly appreciated.
Thanks,
yogi
March 10, 2004 at 5:21 am
The way I usually do this in an OLTP system is by creating an inventory transaction table. That way one can ascertain inventory balances at any point in time, plus you have the added benefit of auditability.
If this is purely a DSS system, then your denormalized method would be fine.
--Jonathan
March 10, 2004 at 5:55 am
great,
I'll look into "oltp" and the "inventory transaction table"
thanks Jonathan.
March 10, 2004 at 7:07 am
Sorry, I guess that's jargon.
OLTP is OnLine Transaction Processing and DSS is Decision Support System. Although I think the term DSS is now out of vogue and has been replaced by Data Mart, Data Warehouse, or OLAP (OnLine Analytical Processing). The typical accounting system is both, of course, but one usually optimizes for the OLTP processes (inserts, updates, deletes).
The inventory transaction table would probably be a change to your schema and would require rewriting every process that updates inventory to insert a row in the transaction table. For your historic reporting, you would also need a table with baseline quantities, which is a natural consequence of the physical count process. Here's an example:
CREATE TABLE InvTrans(
    TranID int IDENTITY PRIMARY KEY NONCLUSTERED,
    ProductID smallint REFERENCES Inventory,
    StoreID smallint REFERENCES Stores,
    TranType tinyint REFERENCES InvTranTypes,
    Quantity int NOT NULL,
    UserID smallint REFERENCES Users,
    TranDate datetime NOT NULL DEFAULT GETDATE())

CREATE CLUSTERED INDEX ix_c_InvTrans_StoreProd ON InvTrans(StoreID, ProductID)
The quantity would be negative for any transaction where inventory is relieved and positive for any transaction where inventory is increased.
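Given that sign convention, the balance at a point in time is the baseline quantity plus the sum of all transactions between the count date and the date of interest. A sketch of that query, assuming a baseline table named InvBaseline(StoreID, ProductID, Quantity, CountDate) (a hypothetical name for the physical-count table mentioned above):

```sql
-- Inventory on hand per store/product as of a given date.
DECLARE @AsOfDate datetime
SET @AsOfDate = '2004-03-01'

SELECT b.StoreID, b.ProductID,
       b.Quantity + COALESCE(SUM(t.Quantity), 0) AS QuantityOnHand
FROM InvBaseline b
LEFT JOIN InvTrans t
  ON  t.StoreID   = b.StoreID
  AND t.ProductID = b.ProductID
  AND t.TranDate  >  b.CountDate   -- only transactions after the physical count
  AND t.TranDate  <= @AsOfDate
GROUP BY b.StoreID, b.ProductID, b.Quantity
```

This assumes one baseline row per store/product; if you keep multiple counts, you would first pick the latest CountDate that is on or before @AsOfDate.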
If this level of auditability is beyond your needs, and the time granularity can be increased, then your method of a balance history table would certainly be easier to implement.
--Jonathan
March 10, 2004 at 7:51 am
Hi Jonathan,
Thanks for going into so much detail. I've done a bit of digging and will probably be back at some point with progress + questions.
cheers
yogi
March 11, 2004 at 7:27 am
Just spent 20 minutes on a writeup here, and when posting the message it didn't take, so I'm testing this out before I rewrite the whole thing again...
please disregard, sorry.
March 11, 2004 at 7:42 am
Ok, this post probably won't be as good as the one that just got lost to cyberspace, but I'll try to sum up the main points:
I typically attack this issue in one of the following ways:
1. The "archive table" approach, as you mentioned. Basically, you create insert,update, and delete triggers on your inventory table and then populate your "archive" table as all inserts. Then the "archive table" includes includes a column to determine whether the action was insert,update, or delete. This is all pretty standard stuff. However, I would also recommend that you at least consider adding the following columns to the table as well:
* APP_NAME(): This will tell you whether the action originated from SQL Query Analyzer, a web form, a custom application, etc. Of course, the application must set the application name in the connection string, but that is commonly done these days, and I've found that many applications do it.
* NT/SQL user name: Depending on how you set up security in SQL Server, this can become a very handy column for telling you who did it. If you don't use that granularity in SQL Server, try capturing your application-level user name (independent of SQL Server).
* DATETIME: obviously
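The trigger approach above could be sketched roughly as follows. Table and column names here (tblInventoryArchive, the StoreID/ProductID/Quantity columns) are assumptions; adapt them to your schema:

```sql
-- Archive table capturing every change, with the audit columns suggested above.
CREATE TABLE tblInventoryArchive(
    StoreID     smallint NOT NULL,
    ProductID   smallint NOT NULL,
    Quantity    int      NOT NULL,
    Action      char(1)  NOT NULL,                        -- 'I', 'U', or 'D'
    AppName     nvarchar(128) NOT NULL DEFAULT APP_NAME(),
    UserName    nvarchar(128) NOT NULL DEFAULT SUSER_SNAME(),
    ArchiveDate datetime NOT NULL DEFAULT GETDATE())
GO
CREATE TRIGGER trg_Inventory_Audit ON tblInventory
FOR INSERT, UPDATE, DELETE
AS
BEGIN
    -- Inserts and updates: the new values are in the inserted pseudo-table.
    INSERT tblInventoryArchive(StoreID, ProductID, Quantity, Action)
    SELECT i.StoreID, i.ProductID, i.Quantity,
           CASE WHEN EXISTS (SELECT * FROM deleted) THEN 'U' ELSE 'I' END
    FROM inserted i

    -- Deletes: rows appear only in the deleted pseudo-table.
    INSERT tblInventoryArchive(StoreID, ProductID, Quantity, Action)
    SELECT d.StoreID, d.ProductID, d.Quantity, 'D'
    FROM deleted d
    WHERE NOT EXISTS (SELECT * FROM inserted)
END
```

The defaults on AppName, UserName, and ArchiveDate mean the trigger itself only has to supply the row values and the action code.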
I would consider the architecture very carefully if you utilize this solution. First, be careful how you design the indexing strategy: since the table will see 100% inserts, will also be subject to some intense select operations, and will grow very big, you will run into a catch-22 with respect to performance in the database.
I would also carefully consider where this table needs to live. It might be worth storing it in another database or on another server (for various reasons). That might require some dynamic SQL, which is less desirable, or you could use an intermediate table and DTS the results to another server. Just things to consider in your design...
2. You may also consider using one-way transactional replication. There are many benefits to this approach, including reduced contention from the aforementioned indexing catch-22, and probably improved performance if the subscriber is on a different server than the publisher. However, the drawbacks include: a) the learning curve. If you haven't used transactional replication before, there's a fair amount to understand before you can just "turn it on". b) Since you will be doing a little more work with the data as it moves from source to destination, you might need to consider stored procedure replication, or stored procedure invocation through replication, to do all your custom work on the subscriber end.
Just a couple of things to consider...
March 11, 2004 at 3:24 pm
hi Grasshopper,
I've been stung a few times writing my posts directly on the site and then somehow losing them. I generally now write them in Notepad and copy them over.
Anyway, I appreciate the time you've taken on your post.
The more things thrown into the mix, the merrier for all. I'll be back.
cheers,
yogi