More Memory Needed?

  • We have a server with 16 cores and 64 GB RAM in an Active/Passive cluster, of which 56 GB is assigned to SQL Server. We are planning to introduce FileTable, which uses Windows memory if I am not wrong. Do we need to increase server memory? If yes, by how much?
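
    For reference, a minimal sketch of checking the memory cap currently assigned to SQL Server (the 56 GB the question describes would show up here as roughly 57344 MB; the new value below is purely hypothetical):

    EXEC sp_configure 'show advanced options', 1;
    RECONFIGURE;
    EXEC sp_configure 'max server memory (MB)';   -- run_value is the current cap in MB

    -- If testing shows the cap should change (hypothetical new value):
    -- EXEC sp_configure 'max server memory (MB)', 53248;
    -- RECONFIGURE;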

  • You'll have to test this with realistic data to answer the question for yourself. If you have very small files and very few transactions against them, it probably won't affect memory much at all. Bigger files, more transactions, or both will have more memory impact. Without seeing a realistic load, the only advice I could offer would be to say "more", and that's not a realistic answer. Do the testing and you'll get one.
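
    During that testing, one way to watch for memory pressure is the standard process and OS memory DMVs (a minimal sketch):

    -- How much memory SQL Server is using, and whether Windows has signalled pressure
    SELECT physical_memory_in_use_kb / 1024 AS sql_memory_used_mb,
           memory_utilization_percentage,
           process_physical_memory_low,     -- 1 = OS signalled physical memory pressure
           process_virtual_memory_low
    FROM sys.dm_os_process_memory;

    -- What the OS itself has left over
    SELECT available_physical_memory_kb / 1024 AS os_available_mb,
           system_memory_state_desc
    FROM sys.dm_os_sys_memory;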

    "The credit belongs to the man who is actually in the arena, whose face is marred by dust and sweat and blood"
    - Theodore Roosevelt

    Author of:
    SQL Server Execution Plans
    SQL Server Query Performance Tuning

  • Rechana Rajan - Monday, July 10, 2017 12:16 AM

    We have a server with 16 cores and 64 GB RAM in an Active/Passive cluster, of which 56 GB is assigned to SQL Server. We are planning to introduce FileTable, which uses Windows memory if I am not wrong. Do we need to increase server memory? If yes, by how much?

    Are you under memory pressure already? What size documents are you going to have/access? Where will they be stored? 

    Personally I still love the old-school way of doing this: storing the files in the file system and only keeping a pointer in the database. I really dislike having BLOBs crush my SQL Server if I can avoid it and have a viable alternative.
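
    As an illustration of that pointer approach, a minimal sketch (the table and path are hypothetical): the file lives on the file system, and the database keeps only a pointer to it.

    CREATE TABLE dbo.Document
    (
        DocumentId  int IDENTITY(1,1) PRIMARY KEY,
        FileName    nvarchar(260) NOT NULL,
        FilePath    nvarchar(400) NOT NULL,   -- UNC path to the actual file
        UploadedAt  datetime2 NOT NULL DEFAULT SYSUTCDATETIME()
    );

    INSERT INTO dbo.Document (FileName, FilePath)
    VALUES (N'invoice-1001.pdf', N'\\fileserver\docs\2017\invoice-1001.pdf');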

    Best,
    Kevin G. Boles
    SQL Server Consultant
    SQL MVP 2007-2012
    TheSQLGuru on googles mail service

  • TheSQLGuru - Monday, July 10, 2017 1:32 PM

    Are you under memory pressure already? What size documents are you going to have/access? Where will they be stored? 

    Personally I still love the old-school way of doing this: storing the files in the file system and only keeping a pointer in the database. I really dislike having BLOBs crush my SQL Server if I can avoid it and have a viable alternative.

    FileTables are not like BLOBs stored in a database. The storage is the file system.

    Sue

  • Sue_H - Monday, July 10, 2017 2:41 PM

    FileTables are not like BLOBs stored in a database. The storage is the file system.

    Sue

    I refer you to the FileTable Documentation.

    Among many things in there are these:

    * Calls to create or change a file or directory through the Windows share are intercepted by a SQL Server component and reflected in the corresponding relational data in the FileTable.
    * FileTables can also be queried and updated through normal Transact-SQL access. They are also integrated with SQL Server management tools, and features such as backup.

    Those are things I mentioned NOT wanting: my SQL Server getting pounded with BLOB activity (especially the IO and RAM hits) AND having to maintain the data via backups, CHECKDB, etc.
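
    For reference, a minimal sketch of the T-SQL side the documentation describes (all names are hypothetical; assumes FILESTREAM is enabled on the instance and the database already has a FILESTREAM filegroup and a non-transacted access level configured):

    CREATE TABLE dbo.Documents AS FileTable
    WITH
    (
        FILETABLE_DIRECTORY = 'Documents',            -- folder name under the share
        FILETABLE_COLLATE_FILENAME = database_default
    );

    -- Files dropped into the Windows share now show up as rows:
    SELECT name, file_type, cached_file_size, is_directory
    FROM dbo.Documents;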

    Best,
    Kevin G. Boles
    SQL Server Consultant
    SQL MVP 2007-2012
    TheSQLGuru on googles mail service

  • The buffer pool is not used for FileTable file data. The tools and components access the file system using Windows APIs, and the T-SQL access goes through those APIs (I can change the registry from T-SQL as well). The files are not stored in the table itself; the table holds the metadata, properties, logical directory structure, etc.

    FILESTREAM, which FileTables are built on, is compared with BLOBs in tables and with pure file system storage in Paul Randal's whitepaper. FILESTREAM can be set up for high performance if configured correctly.
    It's no longer a simple comparison of BLOBs in the database versus files stored in isolation on the file system, and there are times when different applications and systems need the advantages of one approach or the other. And if it's a lot of smaller BLOBs, per Jim Gray's research, the performance is actually better when they are stored in the database. For anyone who reads both Randal's and Gray's papers, blanket statements usually won't apply.
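
    A minimal sketch of that file-system access path (hypothetical FileTable dbo.Documents): a row exposes a Windows path rather than streaming the content through buffer pool pages.

    SELECT name,
           file_stream.GetFileNamespacePath(1) AS full_unc_path  -- 1 = full UNC path
    FROM dbo.Documents
    WHERE is_directory = 0;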

    Sue

  • Sadly I am just overwhelmed and can't dig into this as much as I would like. I did slap together a quick demo and verified that the backup was the size of the files I put in the FileTable. For me that cuts out the vast majority of scenarios where I would feel this would be the best approach for a client solution. There are so many better ways to back up and restore a bunch of disk-based files than what SQL Server offers. I had to pull the plug there and didn't get to dig into the buffer pool usage, but I will take your word for it. I am aware that small files have been benchmarked as faster in the database. I still believe that the myriad other advantages of external storage will provide the winning solution the majority of the time.
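
    The backup-size check from that demo looks roughly like this (the database name is hypothetical; backupset is the standard msdb history table):

    BACKUP DATABASE MyDb TO DISK = N'C:\Backup\MyDb.bak';

    -- Compare the backup size against the size of the files in the FileTable
    SELECT TOP (1) database_name,
           backup_size / 1024 / 1024 AS backup_size_mb
    FROM msdb.dbo.backupset
    WHERE database_name = N'MyDb'
    ORDER BY backup_finish_date DESC;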

    Best,
    Kevin G. Boles
    SQL Server Consultant
    SQL MVP 2007-2012
    TheSQLGuru on googles mail service

  • TheSQLGuru - Monday, July 10, 2017 9:13 PM

    Sadly I am just overwhelmed and can't dig into this as much as I would like. I did slap together a quick demo and verified that the backup was the size of the files I put in the FileTable. For me that cuts out the vast majority of scenarios where I would feel this would be the best approach for a client solution. There are so many better ways to back up and restore a bunch of disk-based files than what SQL Server offers. I had to pull the plug there and didn't get to dig into the buffer pool usage, but I will take your word for it. I am aware that small files have been benchmarked as faster in the database. I still believe that the myriad other advantages of external storage will provide the winning solution the majority of the time.

    We did extensive testing when this first came out and largely came to the same conclusions. We had a few places with quite a few small files where it actually performed better, and it made sense there to sacrifice the recovery time in order to get the performance gains. However, that was the exception, not the rule. For the most part it does tend to work slower and, as you point out, it affects the recovery time pretty radically.

    I'm still where I was before. Test it locally to validate that it's going to work in your situation. In some situations it really is the better choice, just not in most.

    "The credit belongs to the man who is actually in the arena, whose face is marred by dust and sweat and blood"
    - Theodore Roosevelt

    Author of:
    SQL Server Execution Plans
    SQL Server Query Performance Tuning

  • There are a lot of cool things you can do with FILESTREAM and FileTables, though, that you can't do nicely in Windows. Full-text searching of file contents, for example, is something Windows can do if you are on the latest version or have some third-party tool, but with a full-text index you can search for common keywords a lot faster than with any Windows tool. Or if you have some data in SQL Server that you need to compare with a file on the file system, you can query the file as if it were a table.
    And some third-party software requires FILESTREAM and FileTables.
    I think it is a nifty feature, and one we are actually looking to implement as well.
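
    A minimal sketch of that full-text setup (all names are hypothetical; assumes a full-text catalog already exists and the FileTable was created with a named unique constraint on stream_id to serve as the key index):

    CREATE FULLTEXT INDEX ON dbo.Documents
    (
        name,
        file_stream TYPE COLUMN file_type   -- file_type tells full-text which filter to use
    )
    KEY INDEX UQ_Documents_StreamId         -- hypothetical named constraint on stream_id
    ON DocumentsCatalog;                    -- hypothetical full-text catalog

    -- Then keyword search across the document contents:
    SELECT name
    FROM dbo.Documents
    WHERE CONTAINS(file_stream, N'invoice');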

    Going back to the OP, I agree with Grant's original statement - only you know if you need more memory. It all depends on what is running on that system. If you have SQL Server, SSRS, SSIS, and SSAS, and you want to add FILESTREAM on top of that, you will likely have memory contention on a 64 GB server. But maybe your SSRS reports and SSIS packages are very light and SSAS is not used much, so the memory used by all three of those systems is only 1 GB and your FILESTREAM work will need under a GB; in that case you may not need more memory. Every system is different, and the only way to know for certain whether you have enough resources is trial and error.
    But I highly doubt you will ever find a DBA who would tell you "naw, you have enough memory". Memory for SQL Server is like money to me - you can never have too much.

    An interesting read/watch on FILESTREAM was done by LogicalRead at https://logicalread.com/sql-server-filestream-part1-mo01/#.WWT6W5B-scw

    The above is all just my opinion on what you should do. 
    As with all advice you find on a random internet forum - you shouldn't blindly follow it. Always test on a test server to see if there are negative side effects before making changes to live!
    I recommend you NEVER run "random code" you found online on any system you care about UNLESS you understand and can verify the code OR you don't care if the code trashes your system.
