Is this possible: SELECT INTO on a specified filegroup?

  • Is something like this possible?

    select * into t2 from t1 on big_filegroup

    I’m trying to direct the new table to a specific filegroup.

  • From the 'Inserting Rows by Using SELECT INTO' topic of BOL:

    SELECT INTO does not use the partition scheme of the source table. Instead, the new table is created in the default filegroup.

  • Use the CREATE TABLE statement to put the table on the desired filegroup.
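
    For example, something along these lines (the column list and filegroup name are just placeholders for illustration):

    -- Create the table explicitly on the target filegroup, then load it.
    CREATE TABLE dbo.t2
    (
        id      INT          NOT NULL,
        payload VARCHAR(100) NULL
    ) ON big_filegroup;

    INSERT INTO dbo.t2 (id, payload)
    SELECT id, payload
    FROM dbo.t1;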

  • You could change the default file group first. Probably not a good idea to be mucking with that too regularly, but...

    ALTER DATABASE dbname
    MODIFY FILEGROUP mygroup DEFAULT;
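
    If you go this route, the full round-trip might look something like this (database, filegroup, and table names are placeholders, and remember the default filegroup is a database-wide setting):

    -- Point the default filegroup at the target, do the SELECT INTO,
    -- then put the default back where it was.
    ALTER DATABASE dbname MODIFY FILEGROUP big_filegroup DEFAULT;

    SELECT * INTO t2 FROM t1;

    ALTER DATABASE dbname MODIFY FILEGROUP [PRIMARY] DEFAULT;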

    "The credit belongs to the man who is actually in the arena, whose face is marred by dust and sweat and blood"
    - Theodore Roosevelt

    Author of:
    SQL Server Execution Plans
    SQL Server Query Performance Tuning

  • Hmm... select into is typically pretty darn fast... how does it compare to create table/insert, anybody have time to test with some real data volume?

    Joe

  • I'm sure someone will correct me if I'm wrong, but I believe that SELECT INTO is minimally logged while INSERT INTO is fully logged. If so, SELECT INTO will be faster, and the recovery model of your DB will influence any speed difference.
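
    If anyone wants to test it, a rough sketch along these lines would do (table and column names are made up; results will vary with recovery model and data volume):

    -- Time SELECT INTO vs. CREATE TABLE / INSERT INTO against the same source.
    DECLARE @start DATETIME;

    SET @start = GETDATE();
    SELECT * INTO dbo.t2_selectinto FROM dbo.t1;
    PRINT 'SELECT INTO: ' + CAST(DATEDIFF(ms, @start, GETDATE()) AS VARCHAR(20)) + ' ms';

    SET @start = GETDATE();
    CREATE TABLE dbo.t2_insert (id INT NOT NULL, payload VARCHAR(100) NULL);
    INSERT INTO dbo.t2_insert (id, payload)
    SELECT id, payload FROM dbo.t1;
    PRINT 'CREATE TABLE / INSERT: ' + CAST(DATEDIFF(ms, @start, GETDATE()) AS VARCHAR(20)) + ' ms';

    DROP TABLE dbo.t2_selectinto, dbo.t2_insert;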

  • The idea of modifying the default filegroup seems best for your criteria... though if the idea is just to move this table to another filegroup, then recreating the clustered index gives you a way to point it at the new filegroup, and that will move the table too...
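
    Something like this, assuming the table already has a clustered index (index, column, and filegroup names here are placeholders):

    -- Rebuilding the clustered index on another filegroup moves the data with it.
    CREATE CLUSTERED INDEX IX_t2_id
        ON dbo.t2 (id)
        WITH (DROP_EXISTING = ON)
        ON big_filegroup;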

    Prakash Heda
    Lead DBA Team - www.sqlfeatures.com
    Video sessions on Performance Tuning and SQL 2012 HA

  • Since SELECT INTO doesn't usually copy over indexes and modifying the default file group before & after a SELECT INTO could cause major problems later on (what if you forget to switch it back?), I recommend going with the previously suggested "Do the SELECT INTO and then create a Clustered Index on the proper filegroup" option.

    It's easy to code for, after all, and gives you what you want with a minimum of fuss.
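
    In other words, something along these lines (index, column, and filegroup names are placeholders):

    -- Let SELECT INTO land the table on the default filegroup, then move it
    -- by creating the clustered index on the filegroup you actually want.
    SELECT * INTO t2 FROM t1;

    CREATE CLUSTERED INDEX IX_t2_id
        ON t2 (id)
        ON big_filegroup;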

    Brandie Tarvin, MCITP Database Administrator
    LiveJournal Blog: http://brandietarvin.livejournal.com/
    On LinkedIn!, Google+, and Twitter.
    Freelance Writer: Shadowrun
    Latchkeys: Nevermore, Latchkeys: The Bootleg War, and Latchkeys: Roscoes in the Night are now available on Nook and Kindle.

  • Joe Clifford (1/24/2008)


    Hmm... select into is typically pretty darn fast... how does it compare to create table/insert, anybody have time to test with some real data volume?

    Joe

    Yes, the SELECT INTO is much faster (bulk logging) on a DB set to SIMPLE recovery mode.

  • Brandie Tarvin (1/25/2008)


    Since SELECT INTO doesn't usually copy over indexes and modifying the default file group before & after a SELECT INTO could cause major problems later on (what if you forget to switch it back?)

    Agreed. I already tried the modify-the-default-filegroup strategy.

    Brandie Tarvin (1/25/2008)


    I recommend going with the previously suggested "Do the SELECT INTO and then create a Clustered Index on the proper filegroup" option.

    Unfortunately these are large staging tables that are removed when my ETL is completed. If I build a clustered index, in essence it’s like copying the table twice.

  • SELECT INTO is much faster on FULL Recovery too. I've never profiled it to see if it works like TRUNCATE (minimally logged) or if it just works like Bulk Logging, though.

    I use Inserts & Select Into's all the time for data updates at my work place. When I can, I use the Select Into over the Insert for speed, both in processing and in typing. However, there are times when I'll revert to a plain old Create Table / Insert for things that Select Into doesn't do for me... Like constraints or Identities, etc.

    Brandie Tarvin, MCITP Database Administrator
    LiveJournal Blog: http://brandietarvin.livejournal.com/
    On LinkedIn!, Google+, and Twitter.
    Freelance Writer: Shadowrun
    Latchkeys: Nevermore, Latchkeys: The Bootleg War, and Latchkeys: Roscoes in the Night are now available on Nook and Kindle.

  • If these tables are that temporary, why are you even worried about what filegroup they end up on?

    If it's a space or processing issue, here's another possibility. Create a new database (same instance) with its file on the disk that holds your other secondary filegroup. Throw all your staging tables into it. Then everything is automatically created on that disk drive, and when you drop the database you don't have to worry about your primary DB being bloated after everything's done.
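
    A minimal sketch of that idea (the paths, sizes, and names are invented for illustration):

    -- Put the staging database on the drive that holds the secondary filegroup,
    -- stage into it, and drop it when the ETL is finished.
    CREATE DATABASE StagingDB
    ON PRIMARY
    (
        NAME = StagingDB_data,
        FILENAME = 'E:\SQLData\StagingDB.mdf',
        SIZE = 1GB
    )
    LOG ON
    (
        NAME = StagingDB_log,
        FILENAME = 'E:\SQLData\StagingDB_log.ldf',
        SIZE = 256MB
    );

    ALTER DATABASE StagingDB SET RECOVERY SIMPLE;

    SELECT * INTO StagingDB.dbo.t2 FROM dbo.t1;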

    Brandie Tarvin, MCITP Database Administrator
    LiveJournal Blog: http://brandietarvin.livejournal.com/
    On LinkedIn!, Google+, and Twitter.
    Freelance Writer: Shadowrun
    Latchkeys: Nevermore, Latchkeys: The Bootleg War, and Latchkeys: Roscoes in the Night are now available on Nook and Kindle.

  • I think Brandie has it nailed - use a separate database for loading/staging your ETL operations.

