March 26, 2014 at 10:25 am
I am trying to split up a large database file into several smaller database files at work. So far I've tried doing it with DBCC SHRINKFILE and the EMPTYFILE option, taking the following steps (sketched in T-SQL after the list):
1. Create a temp database file to empty the primary file into.
2. Create additional files to fill with data.
3. Resize all the empty files so each holds a fraction of the data, based on however many files I'm splitting it across.
4. Empty the temp file into the new, smaller files.
5. Drop temp file.
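In T-SQL, those steps boil down to DBCC SHRINKFILE with the EMPTYFILE option; a rough sketch, where the database name, logical file names, paths, and sizes are all placeholders:

USE MyDatabase;  -- placeholder database name
GO
-- Steps 1-2: add the temp file (and, the same way, the new permanent files)
ALTER DATABASE MyDatabase
    ADD FILE (NAME = TempData, FILENAME = N'D:\Data\TempData.ndf', SIZE = 45GB);
GO
-- Empty the primary data file; EMPTYFILE moves its pages to the other files
-- in the same filegroup (the primary file keeps system pages, so it never empties completely)
DBCC SHRINKFILE (MyDataFile, EMPTYFILE);
GO
-- Steps 4-5: later, empty the temp file into the new smaller files and drop it
DBCC SHRINKFILE (TempData, EMPTYFILE);
ALTER DATABASE MyDatabase REMOVE FILE TempData;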
This pretty much ensures that all database files end up roughly the same size and grow at the same rate. The only downside is that working with large files is extremely slow; emptying takes forever. Is there a different way of splitting data equally across all the database files for one database?
March 26, 2014 at 10:43 am
You can rebuild the clustered index or the heap for each table.
Jason...AKA CirqueDeSQLeil
_______________________________________________
I have given a name to my pain...MCM SQL Server, MVP
SQL RNNR
Posting Performance Based Questions - Gail Shaw
Learn Extended Events
March 26, 2014 at 10:57 am
Could you please elaborate a little more on how that would work?
March 26, 2014 at 11:10 am
If you have several files in the filegroup, all set to the same size and with the same growth settings, then a rebuild of the CI or heap will usually split the data evenly across the files in the filegroup.
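A minimal sketch of that setup, with all names, paths, and sizes as hypothetical placeholders. Because SQL Server's proportional-fill algorithm writes to the files in a filegroup in proportion to their free space, equally sized files with equal free space receive roughly equal shares of the rebuilt pages:

-- Secondary files added to the same filegroup, same size and growth (placeholders)
ALTER DATABASE MyDatabase
    ADD FILE (NAME = Data2, FILENAME = N'D:\Data\Data2.ndf', SIZE = 600MB, FILEGROWTH = 100MB);
ALTER DATABASE MyDatabase
    ADD FILE (NAME = Data3, FILENAME = N'D:\Data\Data3.ndf', SIZE = 600MB, FILEGROWTH = 100MB);
GO
-- Rebuilding the clustered index rewrites the table's pages, which the
-- proportional-fill algorithm then spreads across the files
ALTER INDEX ALL ON dbo.MyTable REBUILD;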
Jason...AKA CirqueDeSQLeil
_______________________________________________
I have given a name to my pain...MCM SQL Server, MVP
SQL RNNR
Posting Performance Based Questions - Gail Shaw
Learn Extended Events
March 26, 2014 at 12:05 pm
The problem with that might be that I can't set the original file smaller than the data it currently holds. The only way around that would be to create every additional file at the same size as the current file; for example, if I need to create 7 additional files and I have a 45GB database file, I could run the server out of space.
March 26, 2014 at 12:35 pm
No.
I do this quite frequently: I add files that are smaller and then rebuild the indexes.
Once the rebuild is complete you can work on shrinking the original file.
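In other words, roughly this sequence (logical file name and target size are placeholders):

-- After the rebuilds have moved the pages into the new files,
-- shrink the original file down to its share; the target size is in MB
DBCC SHRINKFILE (MyDataFile, 600);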
Jason...AKA CirqueDeSQLeil
_______________________________________________
I have given a name to my pain...MCM SQL Server, MVP
SQL RNNR
Posting Performance Based Questions - Gail Shaw
Learn Extended Events
March 26, 2014 at 12:51 pm
So, basically, add the files, run DBCC DBREINDEX for all indexes, and that's it?
March 26, 2014 at 2:34 pm
Use ALTER INDEX ... REBUILD, since you are on SQL 2008.
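For one table that would be something like the following (the table name is a placeholder); ALL rebuilds every index on the table in one statement:

ALTER INDEX ALL ON dbo.MyTable REBUILD;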
Jason...AKA CirqueDeSQLeil
_______________________________________________
I have given a name to my pain...MCM SQL Server, MVP
SQL RNNR
Posting Performance Based Questions - Gail Shaw
Learn Extended Events
March 26, 2014 at 2:44 pm
So I tried it that way: I stuck the command into a cursor and looped through all the user indexes, and it didn't seem to work. Maybe 8% of the data was moved from the main file to the additional .ndf files. This is the script I'm using to rebuild the indexes:
DECLARE @SchemaName sysname
DECLARE @TableName sysname
DECLARE @TableIndex sysname
DECLARE @sql NVARCHAR(MAX)
-- Cursor over all named indexes on user (non-system) tables
DECLARE Index_Rebuild CURSOR FORWARD_ONLY FOR
SELECT i.name AS index_name, s.name AS schema_name, o.name AS table_name
FROM sys.indexes i
JOIN sys.objects o ON i.object_id = o.object_id
JOIN sys.schemas s ON o.schema_id = s.schema_id
WHERE i.name IS NOT NULL
AND o.is_ms_shipped <> 1
OPEN Index_Rebuild
FETCH NEXT FROM Index_Rebuild INTO @TableIndex, @SchemaName, @TableName
WHILE @@FETCH_STATUS = 0
BEGIN
--DBCC DBREINDEX (@TableName, @TableIndex, 70)
-- QUOTENAME guards against odd identifiers; schema-qualify so non-dbo tables work
SET @sql = N'ALTER INDEX ' + QUOTENAME(@TableIndex) + N' ON ' + QUOTENAME(@SchemaName) + N'.' + QUOTENAME(@TableName) + N' REBUILD PARTITION = ALL WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, SORT_IN_TEMPDB = OFF, ONLINE = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON)'
EXEC (@sql)
FETCH NEXT FROM Index_Rebuild INTO @TableIndex, @SchemaName, @TableName
END
CLOSE Index_Rebuild
DEALLOCATE Index_Rebuild
March 26, 2014 at 2:50 pm
From that script it looks like your issue could be one of two things or a combination of both.
Issue #1: You are excluding all tables that are heaps (they have a NULL index name, so your WHERE clause filters them out). To move the table data, you must rebuild the clustered index on tables that have one, and rebuild the heap on tables that don't.
Issue #2: You probably have BLOB data.
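For the heaps, SQL Server 2008 added a direct rebuild syntax; a sketch with a placeholder table name:

-- Rebuilds the heap (and its nonclustered indexes), rewriting its pages
ALTER TABLE dbo.MyHeapTable REBUILD;

Note that rebuilds generally leave LOB_DATA pages where they are, which is why BLOB-heavy tables resist this kind of redistribution.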
Jason...AKA CirqueDeSQLeil
_______________________________________________
I have given a name to my pain...MCM SQL Server, MVP
SQL RNNR
Posting Performance Based Questions - Gail Shaw
Learn Extended Events
March 26, 2014 at 3:40 pm
DVSQL (3/26/2014)
I am trying to split up a large database file into several smaller database files at work.
WHY? What advantage do you believe that will bring you?
Unless you can put each file on separate physical spindles, there likely won't be any significant performance gains.
Unless you separate old, unchanging data from new data across the files, there will be no reduction in maintenance (index maintenance, backup size, etc.). The only advantage will be the minor possibility of piecemeal restores, and the backups required for those will be more complicated.
Like I asked, why do you want to do this? What advantage are you looking for?
--Jeff Moden
Change is inevitable... Change for the better is not.
March 26, 2014 at 4:20 pm
Alright, so I rebuilt the heaps as well, and the data is still not spread very evenly. Here is an example:
Database: 4.5GB
Added 7 additional files of 583MB each
After file addition:
Primary: 97% full
Each of the 7 secondaries: 1% full
After index rebuild:
Primary: 85% full
Each of the 7 secondaries: 17% full
After heap rebuild:
Primary: 71% full
Each of the 7 secondaries: 22% full
This is still a very uneven distribution. What else can be done about this?
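For reference, per-file fullness like the numbers above can be measured with something like this (a sketch; FILEPROPERTY reports used pages, and sizes are stored as 8KB pages, hence the divide-by-128 to get MB):

SELECT name,
       size / 128 AS size_mb,
       FILEPROPERTY(name, 'SpaceUsed') / 128 AS used_mb,
       CAST(100.0 * FILEPROPERTY(name, 'SpaceUsed') / size AS DECIMAL(5,1)) AS pct_full
FROM sys.database_files
WHERE type_desc = 'ROWS';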
March 26, 2014 at 4:27 pm
I have to ask again, why are you going through all of this? What advantage do you think you're getting out of all this?
--Jeff Moden
Change is inevitable... Change for the better is not.
March 26, 2014 at 4:31 pm
My superior asked me to do this. There is a slight gain in IO performance, I guess, and we are trying to squeeze absolutely everything out of it. It's not as much of a gain as splitting up tempdb, but apparently there are still gains to be had when you split up the data files as well.
March 27, 2014 at 8:20 am
8 files for a 4.5GB database is overkill. For a database that small, I would not split the files; I would look at performance tuning the code instead.
That said, it appears you have LOBs stored in your database. Is that true? I would also venture that the LOBs are the majority of your data.
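One quick way to check is to total the pages by allocation-unit type; LOB_DATA is where BLOB columns live (a sketch):

SELECT au.type_desc,
       SUM(au.total_pages) / 128 AS total_mb
FROM sys.allocation_units au
GROUP BY au.type_desc;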
Jason...AKA CirqueDeSQLeil
_______________________________________________
I have given a name to my pain...MCM SQL Server, MVP
SQL RNNR
Posting Performance Based Questions - Gail Shaw
Learn Extended Events