September 15, 2019 at 4:15 pm
I'm trying to move a large database (22 TB) from a standalone server to a Storage Spaces Direct (S2D) cluster. The database to be moved consists of four files, none over 16 TB. The cluster is already hosting many TB of data and is working fine. To minimize downtime, I generally restore a full backup, then restore a differential backup, then stop providing the service and restore the transaction log backups. But I can't even get the process started. When I try to execute the RESTORE statement, I get the following error message:
Msg 3257, Level 16, State 1, Line 13
There is insufficient free space on disk volume '\\ReindeerAFS3\WSPSQL\' to create the database. The database requires 21812542636032 additional free bytes, while only 17592186040320 bytes are available.
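For context, the sequence I'm trying to run is roughly the following; the backup paths and database/file names are placeholders (only the \\ReindeerAFS3\WSPSQL share is the real target):

-- 1. Full backup, left in a restoring state so later backups can be applied
--    (MOVE clauses pointing the files at \\ReindeerAFS3\WSPSQL\ omitted for brevity)
RESTORE DATABASE BigDB
FROM DISK = N'\\BackupShare\BigDB_Full.bak'
WITH NORECOVERY, STATS = 5;

-- 2. Most recent differential, still NORECOVERY
RESTORE DATABASE BigDB
FROM DISK = N'\\BackupShare\BigDB_Diff.bak'
WITH NORECOVERY, STATS = 5;

-- 3. Stop the application, take and copy the final log backups,
--    apply them, and bring the database online
RESTORE LOG BigDB
FROM DISK = N'\\BackupShare\BigDB_Log_Final.trn'
WITH RECOVERY;

It is the very first full restore that fails with the message above.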
You might note that the number reported as "available" is exactly 16 TB. The Cluster Shared Volume actually has 25 TB free; it was created specifically for this SQL Server data.
I have tried the rumored Trace Flag 3104, which is supposed to bypass the free-space check, but it does not help; the error message remains the same. I believe that trace flag was a SQL Server 2000 item.
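For what it's worth, this is roughly how I enabled it before the restore attempt:

-- Enabled globally before retrying the restore (made no difference)
DBCC TRACEON (3104, -1);
DBCC TRACESTATUS (3104);   -- confirms the flag shows as enabled
-- ... RESTORE attempt here, same Msg 3257 ...
DBCC TRACEOFF (3104, -1);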
The target for the data files is a Scale-Out File Server (\\ReindeerAFS3) share (WSPSQL). I restored small files to this share a few months ago to make sure things were working, and that went very smoothly. I can also restore this same backup to a stand-alone virtual machine that lives on the cluster, with the VHDX hosted on that same CSV, with no issues. So I'm sure that the free-space message is wrong.
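To double-check what SQL Server thinks it needs, I also pulled the file list out of the backup with something like this (the backup path here is a placeholder):

RESTORE FILELISTONLY
FROM DISK = N'\\BackupShare\BigDB_Full.bak';
-- The Size column is in bytes; no single file in the backup is over 16 TB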
Has anyone else tried to restore a large (over 16 TB) database to an S2D cluster? Is anyone storing large databases on an SOFS share? I'm using SQL Server 2014 Standard SP3 on both the source and destination servers. The cluster is all Windows Server 2019 Datacenter, fully patched. This is either a bug in SQL Server, a bug in S2D or SOFS, or I'm missing something.
Student of SQL and Golf, Master of Neither
September 16, 2019 at 6:56 pm
Just curious - but what was the allocation unit used when the drive was formatted?
Jeffrey Williams
“We are all faced with a series of great opportunities brilliantly disguised as impossible situations.”
― Charles R. Swindoll
How to post questions to get better answers faster
Managing Transaction Logs
September 16, 2019 at 7:26 pm
Good question, and I wondered the same thing. I took the defaults when I built the volumes on the S2D pool. This query shows that the block size is 4096, which used to mean a maximum volume size of 16 TB, but I have several 25 TB volumes with a 4096 block size. I'm using ReFS; I don't know if it has the same limitations that NTFS did.
Get-WmiObject Win32_Volume -ComputerName Donner | Format-Table Name, BlockSize   # shows all volumes with a 4096 block size
I can certainly understand no one file being more than 16 TB, but the volume is clearly 25 TB. I have actually "worked around" the problem by adding another SOFS share on the same CSV and restoring all 22 TB in one RESTORE statement.
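The working restore looked roughly like this; the database, file, and backup names are made up, and WSPSQL2 stands in for whatever the second share is actually called:

-- Two SOFS shares on the same CSV, with the data files spread across them
-- so neither share has to absorb more than the 16 TB SQL Server reports as free
RESTORE DATABASE BigDB
FROM DISK = N'\\BackupShare\BigDB_Full.bak'
WITH MOVE N'BigDB_Data1' TO N'\\ReindeerAFS3\WSPSQL\BigDB_Data1.mdf',
     MOVE N'BigDB_Data2' TO N'\\ReindeerAFS3\WSPSQL\BigDB_Data2.ndf',
     MOVE N'BigDB_Data3' TO N'\\ReindeerAFS3\WSPSQL2\BigDB_Data3.ndf',
     MOVE N'BigDB_Data4' TO N'\\ReindeerAFS3\WSPSQL2\BigDB_Data4.ndf',
     MOVE N'BigDB_Log'   TO N'\\ReindeerAFS3\WSPSQL2\BigDB_Log.ldf',
     NORECOVERY, STATS = 5;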
Student of SQL and Golf, Master of Neither
September 16, 2019 at 7:30 pm
It's an interesting subject. You should write an article on that.
--Jeff Moden
Change is inevitable... Change for the better is not.