March 12, 2002 at 9:21 am
The company I work for is currently looking at ways to improve performance, and one approach is for COM+ data objects to gather all related data before posting it to the database. To do this, the data objects need to maintain referential integrity, which in turn means they need to generate IDs. Without going too deep into the database ID retrieval process I have written, I need to know how (or if) I can write a server-based application that retrieves lists of IDs for tables from the database and populates a collection. The data objects that need to maintain referential integrity can then request IDs from the ID object so database inserts can be done in bulk. This approach also reduces the number of database calls, so performance improves even further.
March 12, 2002 at 10:10 am
You can do this easily, but the issue will be maintaining integrity between multiple calls. If objA calls your IDServer for 10 IDs, the IDServer needs to be sure it does not allow objB to simultaneously request an ID and get one of the same values. You can serialize the server to handle this, but it will take some doing. Alternatively, you can maintain an IDs table in the database and let the ID server increment it with one call, then hand the appropriate values to the objects.
BTW, I'm not saying it's the best way, but the identity property in SQL Server handles this quite well.
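For example (a throwaway illustration; the table and column names are made up):

-- IDENTITY hands out sequential values automatically
CREATE TABLE Customers (
    CustomerID int IDENTITY(1,1) NOT NULL PRIMARY KEY,
    CustomerName varchar(100) NOT NULL
)

-- No ID supplied; SQL Server assigns the next value
INSERT INTO Customers (CustomerName) VALUES ('Acme Corp')
SELECT SCOPE_IDENTITY()  -- the value just assigned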
Steve Jones
March 12, 2002 at 10:28 am
Thanks for the prompt response.
What we are trying to do is minimise calls to the database. I have written an ID retrieval process for the database; the reason is that we don't use identity columns because of a merged, distributed data environment. The process we are trying to implement is:
ObjID calls the database ID retrieval process and populates its collection with IDs (a different set for each user table). Then ObjA, ObjB, etc. call ObjID to get IDs, reducing the number of database calls. Once all the data objects have gathered their data along with IDs, everything is submitted to the database as one batch, reducing the number of database calls once again. For an OLTP environment with, say, 100+ transactions per second, reducing the number of database calls significantly improves performance.
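As a rough illustration of the kind of batch we want to send (table names, column layouts and ID values are all made up):

-- One round trip: parent and child rows arrive with IDs already
-- assigned by ObjID, so referential integrity holds inside the batch
BEGIN TRANSACTION

INSERT INTO Customers (CustomerID, CustomerName)
VALUES ('ABCD100', 'Acme Corp')

INSERT INTO Orders (OrderID, CustomerID, OrderDate)
VALUES ('ABCE200', 'ABCD100', GETDATE())

INSERT INTO OrderLines (OrderLineID, OrderID, Qty)
VALUES ('ABCF300', 'ABCE200', 5)

COMMIT TRANSACTION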
My next problem: how do you configure this across load-balanced COM+ boxes?
March 12, 2002 at 10:53 am
Well, if your objID is controlling access, then it can still reduce calls by storing the IDs in SQL.
For example, suppose you have a Customers table. In the IDs table, you store "ABCD100" as the next ID to be used. objID calls SQL Server and notes it needs 1000 IDs, so it updates the IDs table to set the next Customers ID to "ABCD1100", and then it disconnects. BEFORE it hands out an ID, it populates a collection with IDs ABCD100 to ABCD1099. It can then hand these out.

If there is another objID on a load-balanced server, it will connect, perhaps at the same time as the first objID, but locking in SQL Server will prevent it from getting an ID value until the update from the first objID is complete. This object will then see "ABCD1100" as the next ID, update it to "ABCD2100", populate its collection with "ABCD1100" to "ABCD2099", and hand those out.

If you are concerned about keeping things sequential, be aware that if a box fails, you will have issues with managing the missing IDs.
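A sketch of what that might look like in T-SQL, assuming an IDs table shaped something like this (all names and sizes are illustrative, with the prefix kept separate from the numeric part so the next block can be computed with arithmetic):

CREATE TABLE IDs (
    TableName sysname NOT NULL PRIMARY KEY,
    Prefix char(4) NOT NULL,
    NextNumber int NOT NULL
)
GO

CREATE PROCEDURE ReserveIDBlock
    @TableName sysname,
    @BlockSize int,
    @Prefix char(4) OUTPUT,
    @FirstNumber int OUTPUT
AS
    -- A single atomic UPDATE: the variable assignments read the
    -- pre-update value while the column is advanced, so a second
    -- objID on another server blocks until this commits and then
    -- sees the new starting point
    UPDATE IDs
    SET @Prefix = Prefix,
        @FirstNumber = NextNumber,
        NextNumber = NextNumber + @BlockSize
    WHERE TableName = @TableName
GO

objID would then call something like:

DECLARE @Prefix char(4), @First int
EXEC ReserveIDBlock 'Customers', 1000, @Prefix OUTPUT, @First OUTPUT
-- populate the collection with @Prefix + @First .. @First + 999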
Does this make sense?
Steve Jones
March 12, 2002 at 11:01 am
Thanks Steve
Sequential IDs are not an issue, as all IDs are internal and used to maintain integrity. I am going to try this approach now. Thanks for all your help.
Cheers
Deon
March 12, 2002 at 11:27 am
I use uniqueidentifiers to avoid the server trip; they're easy to generate on the client or middle tier, and guaranteed unique.
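For example (table and column names are illustrative):

CREATE TABLE Customers (
    CustomerID uniqueidentifier NOT NULL
        DEFAULT NEWID() PRIMARY KEY,
    CustomerName varchar(100) NOT NULL
)

-- Generated server-side via the default, or pass in a GUID produced
-- in the middle tier (e.g. CoCreateGuid in COM+), no server trip needed
INSERT INTO Customers (CustomerName) VALUES ('Acme Corp')
INSERT INTO Customers (CustomerID, CustomerName)
VALUES ('6F9619FF-8B86-D011-B42D-00C04FC964FF', 'Acme Corp 2')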
Andy
March 12, 2002 at 11:36 am
Thanks, and I will consider using GUIDs as a next step. The reason we can't adopt them now is that we use a 6-character prefix on our IDs, made up of a server, database, and table location, with a number concatenated to it. Although using GUIDs seems like a sensible solution, the amount of work required to restructure all our user tables, along with the data conversion routines, makes it unfeasible.
I have to admit, I wish I had been faced with this problem when we were still in the design phase.
Thanks anyway.