January 4, 2003 at 3:30 pm
Which is more efficient: one big data file, or many smaller files (one small primary and many small secondaries)? Assume a great deal of data, enough that a single big primary data file would be maxed out. How big is too big for an .mdf? Does using the Simple recovery model affect the answer? Thanks!
January 5, 2003 at 11:39 am
Lots of questions. On the Simple recovery question: it limits your restore options. You can only restore to the last full backup, or full plus differential; there are no log backups, so no point-in-time recovery. Not recommended for production systems, but you'd have to give more info on how much data loss is acceptable. Better to post that in the backup forum.
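A minimal sketch of what that restore chain looks like under Simple recovery (the database name and backup paths here are hypothetical, just for illustration):

```sql
-- Simple recovery model: the log is truncated automatically, so there are
-- no log backups and you can only restore to the most recent full
-- (or full + differential) backup.
ALTER DATABASE Sales SET RECOVERY SIMPLE

-- Nightly full backup (path is made up)
BACKUP DATABASE Sales TO DISK = 'D:\Backups\Sales_Full.bak'

-- Midday differential; any work after this backup is lost in a crash
BACKUP DATABASE Sales TO DISK = 'D:\Backups\Sales_Diff.bak' WITH DIFFERENTIAL

-- Restore: full first with NORECOVERY, then the latest differential
RESTORE DATABASE Sales FROM DISK = 'D:\Backups\Sales_Full.bak' WITH NORECOVERY
RESTORE DATABASE Sales FROM DISK = 'D:\Backups\Sales_Diff.bak' WITH RECOVERY
```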
For the data files: more files let you spread the I/O load across physical disks, and SQL Server uses a separate thread for each file, so you get some gains. It increases complexity, though, especially in a disaster recovery scenario, since a restore to different drives needs a MOVE option for each file. It also increases the likelihood of failure: more disks, more chances one of them fails.
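A sketch of the multi-file layout being described, plus the restore complexity it brings (database name, filegroup, sizes, and drive letters are all hypothetical):

```sql
-- One small primary plus two secondary data files,
-- each placed on its own physical disk to spread the I/O load
CREATE DATABASE Sales
ON PRIMARY
    (NAME = Sales_Primary, FILENAME = 'D:\Data\Sales.mdf',  SIZE = 100MB),
FILEGROUP Data
    (NAME = Sales_Data1,   FILENAME = 'E:\Data\Sales1.ndf', SIZE = 10GB),
    (NAME = Sales_Data2,   FILENAME = 'F:\Data\Sales2.ndf', SIZE = 10GB)
LOG ON
    (NAME = Sales_Log,     FILENAME = 'G:\Logs\Sales.ldf',  SIZE = 1GB)

-- The complexity cost in a disaster: restoring to a server with a
-- different disk layout needs one MOVE clause per logical file
RESTORE DATABASE Sales FROM DISK = 'D:\Backups\Sales_Full.bak'
WITH MOVE 'Sales_Primary' TO 'X:\Data\Sales.mdf',
     MOVE 'Sales_Data1'   TO 'Y:\Data\Sales1.ndf',
     MOVE 'Sales_Data2'   TO 'Z:\Data\Sales2.ndf',
     MOVE 'Sales_Log'     TO 'Z:\Logs\Sales.ldf'
```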
Not sure of the max size. I've had 100GB single file MDFs. How much data we talking? What type of system?
Steve Jones
January 6, 2003 at 8:00 am
The customers' needs are varied, so I can't predict size, unfortunately. Mine was a theory question about efficiency: one big data file versus many small ones. There may be one or many users accessing the databases on a server at a time, and the databases may range from small to very big. It's not certain yet, but the biggest might get by with a 100 MB primary data file.
Just would like to add this: what are the prospects of Novell Netware and SQL Server 2000 working together? Apparently, MySQL will start shipping with Netware in a few months.
January 6, 2003 at 9:53 am
I'd stick with one data file. Simpler is usually better.
As for Netware, no chance. If anything, I'd like to see SQL Server on Linux, but after talking with developers at MS at PASS, no chance there either.
Steve Jones