November 1, 2004 at 3:14 pm
This is our setup: let's call the database MerchTran. We get daily data for all our merchants in one large file. We also have many tables (equal to the number of merchants in our system) that contain merchant transaction data for the past year. Each merchant has one table, so the total number of tables equals the number of merchants that have been active in our system over the past year. I have a stored procedure that reads data from this big file (let's call it Extract) and inserts each merchant's records into the respective table. Records per merchant vary from one to a few hundred every day, so we are not talking about a lot of data - something like 200,000 records per day.
The stored procedure works like this:
It gets a distinct list of all merchants in the file for the day and inserts the merchant IDs into a temporary table. It then opens a cursor, selects the merchant IDs from this temporary table one by one, and inserts each merchant's data into the respective merchant table. If no table exists for a merchant, it creates a new table on the fly.
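Roughly, the procedure's skeleton looks like this (a simplified sketch - #Merchants, Extract, and the MerchTran_<ID> naming are stand-ins for the real object names):

CREATE TABLE #Merchants (MerchantID INT PRIMARY KEY)

INSERT INTO #Merchants (MerchantID)
SELECT DISTINCT MerchantID FROM Extract

DECLARE @MerchantID INT
DECLARE merch_cur CURSOR FOR SELECT MerchantID FROM #Merchants

OPEN merch_cur
FETCH NEXT FROM merch_cur INTO @MerchantID
WHILE @@FETCH_STATUS = 0
BEGIN
    -- dynamic SQL here: create MerchTran_<ID> if it is missing, then
    -- INSERT ... SELECT this merchant's rows from Extract into it
    FETCH NEXT FROM merch_cur INTO @MerchantID
END
CLOSE merch_cur
DEALLOCATE merch_cur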
My issue is (and thanks for the patience!) that this entire process takes over 30 minutes, which seems like a lot to me. This is running on a four-processor Pentium 4 Dell server with 2 GB of memory.
Question to all the gurus out there: can I do away with using cursors and still insert each merchant's records into their respective table, or am I stuck with having to use cursors? And if I do have to use cursors, is there any way to speed up the process?
Thanks in advance!
Vishal
November 1, 2004 at 4:00 pm
It sounds like you need to give more information. Are you using single-row processing (the cursor) to accomplish some kind of flow control for updates versus inserts?
If not, I would start to look at "dynamic" SQL (it is my limited understanding that I will get pounded for using that term, so I apologize in advance...). What is nice about this is that you can still use flow control, but you can also perform inserts and updates without knowing ahead of time which table or field you are filling (which also means it needs thorough testing). It will probably help in this circumstance if your various merchant tables have similar designs.
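To illustrate the general idea (a rough, untested sketch - the MerchTran_<ID> naming, the Extract table, and the column list are placeholders for whatever your real schema uses):

DECLARE @MerchantID INT, @tbl SYSNAME, @sql VARCHAR(2000)
SET @MerchantID = 42 -- example value

SET @tbl = 'MerchTran_' + CONVERT(VARCHAR(10), @MerchantID)

-- flow control: create the merchant's table on the fly only if it is missing
IF NOT EXISTS (SELECT * FROM sysobjects WHERE name = @tbl AND xtype = 'U')
    EXEC ('CREATE TABLE ' + @tbl + ' (TranDate DATETIME, Amount MONEY)')

-- build and run the insert without knowing the table name ahead of time
SET @sql = 'INSERT INTO ' + @tbl
         + ' SELECT TranDate, Amount FROM Extract'
         + ' WHERE MerchantID = ' + CONVERT(VARCHAR(10), @MerchantID)
EXEC (@sql)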
I can give you an example I made for a guy if you would like to try this approach - I just have to find the darn thing...
I wasn't born stupid - I had to study.
November 1, 2004 at 4:05 pm
Is there any reason why you don't have a single table for all the merchants, containing a 'MerchantID' field to identify which merchant each row belongs to? Then you could do your upload straight into it, without the need for creating tables on the fly and so on. The whole structure would become much more coherent if you were to do so.
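For instance (column names invented purely for illustration):

-- One table for everything; MerchantID identifies the merchant
CREATE TABLE MerchantTransaction (
    MerchantID INT NOT NULL,
    TranDate   DATETIME NOT NULL,
    Amount     MONEY NOT NULL
)

-- The daily load then becomes a single set-based insert
INSERT INTO MerchantTransaction (MerchantID, TranDate, Amount)
SELECT MerchantID, TranDate, Amount FROM Extract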
Regards
Phil
The absence of evidence is not evidence of absence
- Martin Rees
The absence of consumable DDL, sample data and desired results is, however, evidence of the absence of my response
- Phil Parkin
November 2, 2004 at 3:07 am
Hi,
If it helps - yes, you can do away with cursors and still step through everything:
DECLARE @pk INT, @MaxPK INT

SET @pk = (SELECT MIN(PK) FROM #Values)
SET @MaxPK = (SELECT MAX(PK) FROM #Values)

WHILE @pk <= @MaxPK
BEGIN
    --Do whatever you are doing for this key

    -- step to the next key; returns NULL (ending the loop) when none remain
    SET @pk = (SELECT MIN(PK) FROM #Values WHERE PK > @pk)
END
Though I'm not too certain what good it will do you in this scenario...
Have you checked the execution plan to see where the biggest costs are?
Steve
We need men who can dream of things that never were.
November 2, 2004 at 3:28 am
May I ask why you have a separate table for each merchant?
--
Frank Kalis
Microsoft SQL Server MVP
Webmaster: http://www.insidesql.org/blogs
My blog: http://www.insidesql.org/blogs/frankkalis/
November 2, 2004 at 6:49 am
If you are inserting into a table and creating it on the fly, then I assume you are already using dynamic SQL or have an IF statement for each merchant.
I agree with Steve, check where the performance is worst.
Is the input table indexed? If not, then this is probably where the time is taken. For example, if you have 200,000 rows and approximately 1-100 rows per merchant, then you have approximately 2,000 merchants! That means a table scan of 200,000 rows, 2,000 times (once for each merchant).
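If that is the case, an index on the merchant key of the input table should turn each of those scans into a seek, e.g. (table and column names assumed from your description):

-- Assumed names: substitute your actual input table and key column
CREATE INDEX IX_Extract_MerchantID ON Extract (MerchantID)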
Far away is close at hand in the images of elsewhere.
Anon.
November 2, 2004 at 4:54 pm
Thanks for all the replies. Here are answers to some questions raised:
For Farrell Keough's question: yes, I do use dynamic SQL here.
For Phil Parkin and Frank Kalis: to comply with certain Agency certifications we HAVE to use one table for each merchant. Don't ask, but that's the way they work!
For David Burrows and more info: I create a table for a merchant on the fly only if needed. All these tables (including the source table) have appropriate indexes. The select from the source table for the cursor takes under 1 second. I don't know what else to do!
November 3, 2004 at 2:04 am
How many merchants are there?
How long does the whole process take? (you state > 30 mins)
Based on my last post, if there are 2,000 merchants and it takes 30 minutes, then that is only 0.9 seconds per merchant, so the problem may just be volume.
In cases like this I would output the date/time between each step and see which takes the longest. I think you will find that each merchant extract is reasonably quick, but the aggregate of doing so many will take time.
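Something as simple as this around each step will show where the time goes:

-- Timestamp each step; style 121 includes milliseconds
PRINT 'Before merchant loop: ' + CONVERT(VARCHAR(30), GETDATE(), 121)
-- ... run the step ...
PRINT 'After merchant loop:  ' + CONVERT(VARCHAR(30), GETDATE(), 121)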
Far away is close at hand in the images of elsewhere.
Anon.