Execute SQL over multiple connections?

  • I need to execute a few SQL statements against several databases.

    Here's what I do currently, but it will be difficult to maintain:

    1. I have three Execute SQL tasks in the control flow that have the exact same text but different connections. They create a table for the data.

    2. They are all predecessors of a Data Flow task, connected by precedence constraints. The data flow has a Multicast and a Conditional Split to determine where the rows go. These are large amounts of data, so I don't want to run the source query twice.

    3. Repeat of step 1, but to create the indexes. (Simplified examples of both statements are below.)
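
    To make that concrete, the statement in each task is roughly like this (the real tables have more columns; these names are simplified):

    -- Step 1 tasks: identical text, three different connection managers
    CREATE TABLE dbo.WeeklySales
    (
        SalesKey int           NOT NULL,
        Market   varchar(20)   NOT NULL,
        Amount   decimal(18,2) NOT NULL
    );

    -- Step 3 tasks: again identical text, three connections
    ALTER TABLE dbo.WeeklySales
        ADD CONSTRAINT PK_WeeklySales PRIMARY KEY CLUSTERED (SalesKey);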

    The reason I want to change this is that I'll be doing it for 20-30 tables, and it will get difficult to maintain, not to mention ugly. I've thought of using a Foreach Loop but have not been able to get that to work. Creating the PKs may be an issue with that approach, because I'd like the loads to run in parallel, but it would still be nice to get it to work. Also, three databases is the current requirement, but that may change.

    I have not been able to find any samples of this and don't have any other ideas. Any help is greatly appreciated.

  • This sounds like a job for a stored procedure: you could cursor through the list of databases that need the table and build the tables with a dynamic SQL statement, substituting in the database name (rough sketch below).

    When a sproc can do it better, I use a sproc.

    Don't know of any way to change the connection string of a connection manager on the fly.
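
    Something along these lines; the database list table and the table definition are just placeholders:

    DECLARE @db sysname, @sql nvarchar(max);

    -- dbo.TargetDatabases is a made-up table holding the mart database names
    DECLARE dbs CURSOR LOCAL FAST_FORWARD FOR
        SELECT DatabaseName FROM dbo.TargetDatabases;

    OPEN dbs;
    FETCH NEXT FROM dbs INTO @db;

    WHILE @@FETCH_STATUS = 0
    BEGIN
        -- build the CREATE TABLE with the database name substituted in
        SET @sql = N'CREATE TABLE ' + QUOTENAME(@db) + N'.dbo.WeeklySales
        (
            SalesKey int           NOT NULL,
            Market   varchar(20)   NOT NULL,
            Amount   decimal(18,2) NOT NULL
        );';
        EXEC (@sql);

        FETCH NEXT FROM dbs INTO @db;
    END

    CLOSE dbs;
    DEALLOCATE dbs;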

  • Just to clarify - you have a single (large) source file that is directed to one of 20-30 tables in multiple databases, depending on some attributes of the source data? So the nature of the source data determines both the destination db and the table to be updated?

    What is the purpose behind creating the tables? Why not just use a permanent table?

    Phil

    The absence of evidence is not evidence of absence
    - Martin Rees
    The absence of consumable DDL, sample data and desired results is, however, evidence of the absence of my response
    - Phil Parkin

    The problem we have is that the source system does not capture changes. Therefore, we receive the entire data file each week. We drop and recreate the tables each time. They are permanent tables. We receive one large file for this, but we have three data marts it goes into: global, domestic, and international. Global is the union of domestic and international.

    I'm new to SSIS, but what I've read suggests that populating all the tables while the data is in memory gives the best performance. The sproc gives me maintainability, but it would have to read the data again, which would reduce performance.

    I currently have it working with three Execute SQL tasks side by side for the drops and creates, then again for the index creation. Some loads have more processing. The data flow in the middle then populates all three destinations with a Multicast and a Conditional Split. What I've described above happens 20-30 times, and the packages are big and ugly. Plus, we have more than three data marts, and as I go, some of these may be larger. (I'm new to this project, so I don't know what the whole system is like.) This is also a maintenance problem. A data type just changed, and the table creates are scattered all over the place and all need to be changed. It's just a pain, and I'd rather not have duplicate code if I can avoid it (something like the sketch below for the DDL is what I'm picturing).
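
    If the sproc only handles the DDL, the data flow in the middle can stay as it is and the file is still read only once. Roughly what I'm picturing, with made-up database and column names:

    CREATE PROCEDURE dbo.RebuildWeeklySales
        @Step varchar(10)   -- 'create' before the data flow, 'index' after it
    AS
    BEGIN
        DECLARE @ddl nvarchar(max), @sql nvarchar(max), @db sysname;

        IF @Step = 'create'
            SET @ddl = N'
                IF OBJECT_ID(N''{db}.dbo.WeeklySales'') IS NOT NULL
                    DROP TABLE {db}.dbo.WeeklySales;
                CREATE TABLE {db}.dbo.WeeklySales
                (
                    SalesKey int           NOT NULL,
                    Market   varchar(20)   NOT NULL,
                    Amount   decimal(18,2) NOT NULL
                );';
        ELSE
            SET @ddl = N'
                ALTER TABLE {db}.dbo.WeeklySales
                    ADD CONSTRAINT PK_WeeklySales PRIMARY KEY CLUSTERED (SalesKey);';

        -- the mart list would come from a config table eventually; hard-coded here
        DECLARE dbs CURSOR LOCAL FAST_FORWARD FOR
            SELECT name
            FROM (VALUES (N'GlobalMart'), (N'DomesticMart'), (N'InternationalMart')) AS d(name);

        OPEN dbs;
        FETCH NEXT FROM dbs INTO @db;

        WHILE @@FETCH_STATUS = 0
        BEGIN
            -- substitute the database name and run the same DDL in each mart
            SET @sql = REPLACE(@ddl, N'{db}', QUOTENAME(@db));
            EXEC (@sql);
            FETCH NEXT FROM dbs INTO @db;
        END

        CLOSE dbs;
        DEALLOCATE dbs;
    END

    Then one Execute SQL task calls it with 'create' before the data flow and another calls it with 'index' afterwards, so a data type change gets edited in exactly one place.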

    Thanks for your help.
