March 27, 2014 at 10:48 am
SQLRNNR (3/27/2014)
8 Files for a 4.5GB database is overkill. For a database that small, I would not split the files. I would look to performance tune the code. That said, it appears you have LOBs stored in your database. Is that true? I would also venture that the LOBs are the majority of your data.
4.5GB is just a test database so I could get it done quickly and see the results. The actual databases in question will be more like 50GB+. I had a different idea yesterday as well: what about adding the files, setting a maximum file size on them, and then emptying the main file into them? The way I see it, once the files stop growing, the data will stop moving over. I already tested it, and it seemed to work without any issues. Once the files were filled up, I got an error saying that no more data could be moved, and the move was complete. After that I defragmented the indexes and checked the database for integrity; both completed successfully. The last step was to turn autogrowth back on for the additional files, and that was it. It was very quick and seemed to work really well. A little crude, sure, but it seems to be very effective.
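For anyone wanting to try the same approach, here is a minimal sketch of the empty-then-cap technique described above. The database, file, and size names are made up for illustration; adjust to your own layout:

```sql
-- Add a new file with growth disabled and a hard size cap,
-- so data stops flowing into it once it fills up
ALTER DATABASE MyDB
ADD FILE (NAME = MyDB_Data2, FILENAME = 'D:\Data\MyDB_Data2.ndf',
          SIZE = 512MB, MAXSIZE = 6GB, FILEGROWTH = 0);

-- Push pages out of the primary data file into the other files
-- in the same filegroup; this errors out once the targets are full
DBCC SHRINKFILE (MyDB_Data, EMPTYFILE);

-- Afterwards, re-enable autogrowth on the new file
ALTER DATABASE MyDB
MODIFY FILE (NAME = MyDB_Data2, FILEGROWTH = 256MB);
```

Note that DBCC SHRINKFILE with EMPTYFILE only redistributes pages among files in the same filegroup, which is exactly why capping the targets stops the move where you want it.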
March 27, 2014 at 11:56 am
SQLRNNR (3/27/2014)
8 Files for a 4.5GB database is overkill. For a database that small, I would not split the files. I would look to performance tune the code. That said, it appears you have LOBs stored in your database. Is that true? I would also venture that the LOBs are the majority of your data.
BWAAA-HAAAA!!!! I misread the original notation on how big the database was and apparently thought the period was a speck on my glasses. I was thinking that splitting even a 45GB database was serious overkill.
Last night, I went back to read what Paul Randal had to say (I don't have the URL handy here at work) about doing such splits and the tests he ran. Under some pretty optimal conditions, he only got a 6% improvement using 8 files. That's a pittance compared to the 2 to 100X (read that as 100% to 9,900%) improvement that one can get with a little thoughtful tuning and a few best practices changes in the code.
I'm amazed at how much time people will spend on "optimizing hardware" without addressing the root of nearly every performance problem... performance either is or is not in the code and bad code will humble even great hardware.
--Jeff Moden
Change is inevitable... Change for the better is not.
March 31, 2014 at 6:18 am
DVSQL (3/26/2014)
So I tried it that way, stuck the command into a cursor, looped through all the user indexes, and it didn't seem to work. Maybe 8% of the data was moved from the main file to the additional .ndf files. This is the script I'm using to rebuild the indexes:
DECLARE @TableName sysname
DECLARE @TableIndex sysname
DECLARE @sql NVARCHAR(MAX)
DECLARE Index_Rebuild CURSOR FORWARD_ONLY FOR
SELECT i.name index_name, QUOTENAME(s.name) + '.' + QUOTENAME(o.name) table_name
FROM sys.indexes i
JOIN sys.objects o ON i.object_id = o.object_id
JOIN sys.schemas s ON o.schema_id = s.schema_id
WHERE i.name IS NOT NULL
AND o.is_ms_shipped <> 1
OPEN Index_Rebuild
FETCH NEXT FROM Index_Rebuild INTO @TableIndex, @TableName
WHILE @@FETCH_STATUS = 0
BEGIN
--DBCC DBREINDEX (@TableName, @TableIndex, 70)
SET @sql = 'ALTER INDEX ' + QUOTENAME(@TableIndex) + ' ON ' + @TableName + ' REBUILD PARTITION = ALL WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, SORT_IN_TEMPDB = OFF, ONLINE = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON)'
EXEC (@sql)
FETCH NEXT FROM Index_Rebuild INTO @TableIndex, @TableName
END
CLOSE Index_Rebuild
DEALLOCATE Index_Rebuild
Alongside specifying the new filegroup where you want the index to go, you'll also need the following clause to move an index to a new filegroup:
[DROP_EXISTING=ON]
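In other words, ALTER INDEX ... REBUILD always rebuilds the index on its current filegroup; to relocate it, you re-issue the CREATE INDEX with DROP_EXISTING and name the target filegroup. A sketch of what that looks like, with a hypothetical table, index, and filegroup name:

```sql
-- Recreate the existing index on a different filegroup in one operation
CREATE NONCLUSTERED INDEX IX_Orders_CustomerID
    ON dbo.Orders (CustomerID)
    WITH (DROP_EXISTING = ON, SORT_IN_TEMPDB = OFF, ONLINE = OFF)
    ON [SecondaryFG];  -- the target filegroup
```

Keep in mind that moving a clustered index this way moves the table's data pages along with it, which is usually the bulk of what you're trying to relocate.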
-----------------------------------------------------------------------------------------------------------
"Ya can't make an omelette without breaking just a few eggs" 😉