Schema changed after the target table was created. Rerun the Select Into query

  • Hello

I am using SELECT INTO a temporary table as part of an ETL package. There are multiple packages running in parallel, some of which disable indexes and re-enable them after the table has been loaded.

We have seen the following error a few times: 'Schema changed after the target table was created. Rerun the Select Into query.'

    Googling this returns mostly information about triggers causing the problem, but we do not have any triggers.

I can probably get around this issue by making the load more sequential, but I would like to understand what events can cause this error.

    Thanks

    Michael

  • Michael

    What kind of table are you creating - temp table, global temp table, or table in a user database? Are there multiple instances of the same package running at the same time creating the same table?

    John

  • Hi John

    Thanks for the reply,

    I am using a temp table created inside a stored procedure. There should only be one instance of this sp running at a time.

    Michael

  • Michael

    The only thing that I can think of is that there's another piece of code somewhere that's creating a temp table with the same name. You can run a trace and look for code being executed that includes the temp table name. If it's not that, you could try creating the table with a CREATE TABLE statement instead of SELECT INTO. Even if that doesn't solve the problem, it may throw up an error message that is more helpful than the one you're currently getting. Please confirm - your table name starts with "#" and not "##", doesn't it?
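If it helps with the trace, one quick alternative is to search the plan cache for any cached statement that mentions the temp table name. This is just a sketch; substitute your actual temp table name for the placeholder:

    -- Sketch: find cached statements referencing the temp table name.
    -- #YourTempTable is a placeholder - use the real name here.
    SELECT st.text
    FROM sys.dm_exec_cached_plans AS cp
    CROSS APPLY sys.dm_exec_sql_text(cp.plan_handle) AS st
    WHERE st.text LIKE N'%#YourTempTable%';

This only finds statements that are still in the plan cache, so a server-side trace is still the more reliable option.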

    John

I thought the issue might be related to the altering of the indexes; that would count as a schema change.

Some of which disable indexes and re-enable them after the table has been loaded.

So if process #1 selects from the table, process #2 disables indexes, and then process #1 tries to do something to the table again, would that cause the issue?

    Lowell


    --help us help you! If you post a question, make sure you include a CREATE TABLE... statement and INSERT INTO... statement into that table to give the volunteers here representative data. with your description of the problem, we can provide a tested, verifiable solution to your question! asking the question the right way gets you a tested answer the fastest way possible!

So if process #1 selects from the table, process #2 disables indexes, and then process #1 tries to do something to the table again, would that cause the issue?

Once the data is in the temporary table, it doesn't matter whether the indexes are disabled on the source table, I guess...

I'm thinking it has to do with the target tables, and not anything to do with the temp tables.

    I read his comment as: reading data into temp tables, manipulating it, dropping indexes on the target tables, loading data from temp to target, then re-enabling indexes.

    So if one process alters the destination/target table, it might affect other processes that do not expect the target table to have any schema changes.

    Lowell


  • Hey Guys,

Good points here. The table is a temp table starting with #.

We had come to the conclusion that it was disabling/enabling the indexes which was causing the problem, but when we tried to recreate the problem by doing an insert to a test #temp table while disabling and re-enabling the indexes on the source table, the insert completed fine, confusing us even more 😀

    Would changing the index on a source table, during the insert to the #temp, cause a 'schema change'?

Cheers 🙂

  • MichaelMontgomery (6/7/2012)


    We had come to the conclusion that it was disabling/enabling the indexes which was causing the problem but we tried to create the same problem doing an insert to a test #temp table, disabling and re enabling the indexes on the source table, but the insert completes fine. Confusing us even more 😀

    Are you certain you don't have any DDL triggers (not DML triggers)? Run the following queries in the database where the error occurs:

    SELECT
        t.name,
        t.type_desc,
        t.create_date,
        t.modify_date,
        t.is_ms_shipped,
        t.is_disabled
    FROM sys.triggers AS t
    WHERE t.parent_class_desc = N'DATABASE';

    SELECT
        st.name,
        st.parent_class_desc,
        st.type_desc,
        st.create_date,
        st.modify_date,
        st.is_ms_shipped,
        st.is_disabled
    FROM sys.server_triggers AS st;

    Aside from that, can you provide a clearer explanation? I have read your posts a few times now, and I'm still not sure whether you are using SELECT INTO to *create* a temporary table (named with a single # mark) or something else. There are a few peculiarities around #temp tables, mostly for backwards compatibility, particularly where one procedure uses a #temp table created outside its scope. Add in transactions and a mix of DDL and DML, and keeping things like cached plans consistent gets very complex very quickly.

  • Hi Guys

Thanks for the input again. I have run the script and no triggers exist on the database or server. When you say cached plans, could something related to that be causing the schema change?

    SELECT
        column1,
        column2,
        column3,
        column4
    INTO #TempTable
    FROM SourceTable1 AS s1
    INNER JOIN SourceTable2 AS s2
        ON s1.id = s2.id;

    ...where SourceTable1 and SourceTable2 could possibly have indexes disabled and/or enabled for data inserts during the population of #TempTable.

Hopefully this makes it a little clearer. I know it's not easy to understand simply from my description, so thanks for persevering 🙂

    Michael

  • MichaelMontgomery (6/8/2012)


    I have run the script and no triggers exist on the database or server...Hopefully this makes it a little clearer.

    That helps a lot, thanks. The cause of your occasional problems is that SELECT...INTO is a bit of an unusual statement: it executes in two parts. You can't see this directly in the query plan, but the engine first inspects the query to determine the schema of the table that will be created. It then creates the table, compiles a query plan to insert rows to the new table, and then executes the query. By the way, the two main steps (create, then insert) are part of the reason that SELECT...INTO creates the table even if an error occurs:

    -- Divide by zero...
    SELECT *
    INTO #Banana
    FROM sys.columns AS c
    WHERE 1/0 = 50;

    -- ...but the table is still created
    SELECT * FROM #Banana;
    DROP TABLE #Banana;

    There is a small window between creating the table and starting to perform the insert, where the source tables in the query are not protected by schema stability locks. If a concurrent process happens to change the schema of one of the source tables, after the target has been created, but before the insert starts, error 539 ("Schema changed after the target table was created. Rerun the Select Into query.") is raised.

    In your case, the concurrent disabling and rebuilding of non-clustered indexes on the source tables occasionally hits this window and causes the error (disabling or rebuilding an index is a schema change). So, you need to plan for the possibility of a 539 error when using SELECT...INTO with source tables that are visible to other processes. You can use a TRY...CATCH block with a retry if there's no open transaction you need to protect (error 539 dooms the current transaction). If that works for you, something like this would work:

    Retry:
    BEGIN TRY
        -- For retry
        IF OBJECT_ID(N'tempdb..#Temp', N'U') IS NOT NULL
            DROP TABLE #Temp;

        SELECT
            s2.col2,
            s2.col1
        INTO #Temp
        FROM dbo.Source1 AS s1
        JOIN dbo.Source2 AS s2 ON
            s2.col1 = s1.col1;
    END TRY
    BEGIN CATCH
        IF ERROR_NUMBER() = 539 GOTO Retry;
        -- Handle other errors
    END CATCH;

    Otherwise, we need to hold some sort of lock on the source table(s) across both parts of the SELECT...INTO statement to ensure the schema stays stable. Just adding a locking or isolation hint on its own will not help, because the target table creation happens within a system transaction, not the implicit user transaction provided by the statement itself. An explicit transaction is definitely required as well. One option is to use an explicit transaction with a held shared lock on the tables you need to protect:

    BEGIN TRANSACTION;

    SELECT
        s2.col2,
        s2.col1
    INTO #Temp
    FROM dbo.Source1 AS s1 WITH (REPEATABLEREAD, TABLOCK)
    JOIN dbo.Source2 AS s2 WITH (REPEATABLEREAD, TABLOCK) ON
        s2.col1 = s1.col1;

    COMMIT TRANSACTION;

    There are many other locking/isolation level options (and other query tricks) with their own advantages and disadvantages. Let us know if you have specific concurrency needs for the source tables while the SELECT...INTO is running, beyond what I have covered above.
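For example (just a sketch; the resource name is made up), an application lock can serialize the SELECT...INTO against the index maintenance without taking table locks at all, provided the index-maintenance jobs are changed to acquire the same applock first:

    BEGIN TRANSACTION;

    -- Serialize against index maintenance; the maintenance jobs must
    -- acquire the same lock resource before disabling/rebuilding indexes.
    EXEC sys.sp_getapplock
        @Resource = N'ETL_SourceTables',  -- made-up resource name
        @LockMode = N'Exclusive',
        @LockOwner = N'Transaction';

    SELECT
        s2.col2,
        s2.col1
    INTO #Temp
    FROM dbo.Source1 AS s1
    JOIN dbo.Source2 AS s2 ON
        s2.col1 = s1.col1;

    COMMIT TRANSACTION;  -- releases the applock

The trade-off is that it only works if every process that alters the source tables cooperates by taking the lock, whereas the REPEATABLEREAD/TABLOCK hints above are enforced by the engine itself.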

  • Finally it all makes sense 😀 Thanks a million for that Paul, I think we will probably try locking the source tables and wrapping it up in a single transaction.

Finally we know the reason behind it; it was very confusing trying to recreate the problem when the insert always completed in our tests.

The fact that the window between the #table creation and the insert was probably tiny, and not enough time for us to change the source table schema during our test, would explain why we could not recreate the problem.

    Thanks again for taking the time and going into so much detail!

    Cheers

    Michael

  • MichaelMontgomery (6/8/2012)


    The fact that the amount of time between the #table creation and the select was probably tiny, and not enough for us to change the source table schema for our test, would explain why we could not recreate the problem.

    Yes, exactly.

  • SQL Kiwi (6/8/2012)The cause of your occasional problems is that SELECT...INTO is a bit of an unusual statement: it executes in two parts. You can't see this directly in the query plan, but the engine first inspects the query to determine the schema of the table that will be created. It then creates the table, compiles a query plan to insert rows to the new table, and then executes the query. By the way, the two main steps (create, then insert) are part of the reason that SELECT...INTO creates the table even if an error occurs:

    -- Divide by zero...
    SELECT *
    INTO #Banana
    FROM sys.columns AS c
    WHERE 1/0 = 50;

    -- ...but the table is still created
    SELECT * FROM #Banana;
    DROP TABLE #Banana;

    There is a small window between creating the table and starting to perform the insert, where the source tables in the query are not protected by schema stability locks. If a concurrent process happens to change the schema of one of the source tables, after the target has been created, but before the insert starts, error 539 ("Schema changed after the target table was created. Rerun the Select Into query.") is raised.

    I have a similar problem, except I am not using a SELECT * INTO #TempTableY FROM [TableX] statement. My statement is more like SELECT ParticipantID INTO #TempTableY FROM [TableX].

    Other processes might change some portion of the schema of TableX while this statement is running. (The most likely process is some kind of index maintenance.) However, nothing will happen to the ParticipantID field.

    So, my question is this. Rather than use an explicit transaction to lock the source table while this statement is running, could I also solve this problem by simply not using the SELECT INTO statement? I am thinking of something along the lines of...

    CREATE TABLE #TempTableY (ParticipantID INT NOT NULL);

    INSERT #TempTableY (ParticipantID)
    SELECT ParticipantID
    FROM TableX;

    Would this avoid error 539 ("schema changed after the target table was created")?

  • CELKO (6/7/2012)


    I am using a temp table created inside a stored procedure. There should only be one instance of this sp running at a time.

The first question is how did you enforce this "one instance" rule? But the real problem is how you write code. The # and ## tables are leftover from 1970's Sybase. They mimic 1950's scratch tapes from an even earlier technology!

Back when machines were 16-bit and lucky to have a few KB of main storage, and OS software was one-pass interpreters, we put metadata prefixes on names.

    Today, we write derived table expressions and avoid the code museum. The optimizer will decide if it needs to go to disk, or SSD or whatever.

    Do you know one of the versions of the ham/brisket/ turkey and oven jokes (http://www.snopes.com/weddings/newlywed/secret.asp)? Well you are the bride with your coding!

No... you're not actually avoiding the code museum... you're just hiding it, because the system will often create temp tables in the background for things like inline views, etc. And "local" temp tables that start with a "#" need no "one instance" rule because they have a sequential binary number appended to the name to keep local temp tables from stepping on each other. The 1950s scratch tapes were a joy back then because they held only the work you needed. It was a good idea then, and it's such a good idea now that SQL Server automatically does it behind the scenes for you. It's also time-honored and very effective in "Divide'n'Conquer" methods, whether done automatically or declared explicitly.

    --Jeff Moden


    RBAR is pronounced "ree-bar" and is a "Modenism" for Row-By-Agonizing-Row.
    First step towards the paradigm shift of writing Set Based code:
    ________Stop thinking about what you want to do to a ROW... think, instead, of what you want to do to a COLUMN.

    Change is inevitable... Change for the better is not.


    Helpful Links:
    How to post code problems
    How to Post Performance Problems
    Create a Tally Function (fnTally)
