July 15, 2015 at 7:01 pm
We are building a new environment and have got a SQL Server box with 32 GB of memory and a 32-core 2.00 GHz CPU, running SQL Server 2012. We have about 6 databases, but really just 2 of them have all the activity.
One of these two databases has about 6 logging tables that are expected to get about 30 to 50 thousand inserts an hour. We also have FILESTREAM enabled on this database.
The second database has mainly one big table that will also get a lot of logging (inserts). The table has a couple of varchar(max) columns and so will get a lot of data in those columns.
When we stress-test this server, we see very high memory usage and consistently high (over 80%) CPU usage.
Questions:
1. Will bumping memory to 64 or 128 GB help? We know we don't have many SELECTs or stored procedures doing lots of joins and selects; for those queries, more memory would mean more of that data could be served from cache rather than from disk reads. But when we mostly just do a lot of INSERTs, does having a lot of memory help, and how?
2. Will a lot of inserts drive the CPU usage so high for what seems a pretty beefy server?
3. Is there a way to do inserts in an efficient manner and reduce memory and CPU usage?
4. Does FILESTREAM add overhead to inserts?
Any tips highly appreciated.
Thanks,
Vip.
July 16, 2015 at 1:32 am
Bharatvip (7/15/2015)
Quick questions: what about the I/O subsystem, the file and disk configuration and layout, filegroups, multiple data files, etc.? What you've mentioned so far has little to do with write/insert performance.
😎
That works out to roughly 10-15 inserts per second, which is not much and shouldn't be a problem on a properly configured system.
That said, 32 GB is rather meager memory, and the size does matter. The server writes the inserts to pages in memory before they are flushed to disk, and if memory is scarce it is more likely to escalate locks.
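If you want to check whether lock escalation is actually occurring during your tests, here is a minimal sketch to run in the busy database while the load is on (counts are cumulative since the index metadata was last cached):

-- index_lock_promotion_attempt_count = escalations attempted,
-- index_lock_promotion_count = escalations that actually happened.
SELECT OBJECT_NAME(ios.object_id) AS table_name,
       ios.index_id,
       ios.index_lock_promotion_attempt_count,
       ios.index_lock_promotion_count
FROM sys.dm_db_index_operational_stats(DB_ID(), NULL, NULL, NULL) AS ios
WHERE ios.index_lock_promotion_attempt_count > 0
ORDER BY ios.index_lock_promotion_count DESC;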
July 16, 2015 at 2:57 am
Bharatvip (7/15/2015)
1. Will bumping memory to 64 or 128 GB help? We know we don't have many SELECTs or stored procedures doing lots of joins and selects; for those queries, more memory would mean more of that data could be served from cache rather than from disk reads. But when we mostly just do a lot of INSERTs, does having a lot of memory help, and how?
Probably not. Inserts are going to be more constrained by the latency on the log drive, especially if you're talking about single row inserts.
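One way to see whether log latency is the constraint is a minimal sketch from the cumulative file stats; the values accumulate from instance startup, so diff two snapshots taken around a load test for a live picture:

-- io_stall_write_ms / num_of_writes approximates ms per log write
-- (integer division, which is close enough for a rough check).
SELECT DB_NAME(vfs.database_id) AS database_name,
       mf.physical_name,
       vfs.num_of_writes,
       vfs.io_stall_write_ms / NULLIF(vfs.num_of_writes, 0) AS avg_write_ms
FROM sys.dm_io_virtual_file_stats(NULL, NULL) AS vfs
JOIN sys.master_files AS mf
  ON mf.database_id = vfs.database_id
 AND mf.file_id = vfs.file_id
WHERE mf.type_desc = 'LOG'
ORDER BY avg_write_ms DESC;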
2. Will a lot of inserts drive the CPU usage so high for what seems a pretty beefy server?
Shouldn't do.
I've seen a few hundred inserts/sec on a server with far less CPU than you have. I suggest you do some workload analysis: see what's using the CPU and see what the inserts are constrained by. Without that information, you're guessing.
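As a starting point for that analysis, a minimal sketch of the top waits since the last restart (the exclusion list below is just a handful of the usual benign wait types, not a complete one):

SELECT TOP (10)
       wait_type,
       waiting_tasks_count,
       wait_time_ms,
       signal_wait_time_ms  -- high signal waits relative to wait_time_ms point at CPU pressure
FROM sys.dm_os_wait_stats
WHERE wait_type NOT IN ('SLEEP_TASK', 'LAZYWRITER_SLEEP', 'LOGMGR_QUEUE',
                        'CHECKPOINT_QUEUE', 'REQUEST_FOR_DEADLOCK_SEARCH',
                        'XE_TIMER_EVENT', 'BROKER_TO_FLUSH',
                        'SQLTRACE_INCREMENTAL_FLUSH_SLEEP')
ORDER BY wait_time_ms DESC;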
Gail Shaw
Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability
July 16, 2015 at 7:40 am
Bharatvip (7/15/2015)
I'm inclined to agree with Eirikur... As no mention is made of your I/O subsystem or its configuration, you may well be "looking in the wrong place" to determine where you'll get the most improvement from the new box. Of all the components in a server that can impact performance, the I/O subsystem and the attached disks are the slowest, so that's usually the best place to start.
Just getting spindle separation between portions of a given workload can often make a huge difference in performance. And while SANs are now common and 10 Gb/s fibre links are often used, a SAN link is shared, so the effective bandwidth it delivers to a single host can still fall short of a dedicated local SATA channel at 6 Gb/s. This is why it's often essential to ensure that your SAN fabric has multiple paths enabled. Also, be careful how your LUN boundaries are created, or you can end up with unintended workload mixing on a given set of spindles.
With all this in play, it's rather important that you know what your existing I/O picture looks like. I/O wait stats and disk queue stats are rather useful information, along with data on I/O volume (reads/writes per unit time, tracked over long periods).
All that said, extra RAM isn't going to hurt, and for a database box, 32 GB isn't really all that beefy. The extra RAM would likely improve the resale value at the point of decommissioning after the server has been fully depreciated, and it might well extend the useful life of the server beyond its date with full depreciation. Check to see if your existing workload is showing signs of RAM constraint, and if so, you might want to err on the side of more than you need, so as to push that threshold further into the future.
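As a quick sketch, two of the usual RAM-constraint indicators, read from the performance counters SQL Server exposes (persistently low page life expectancy under load, or any non-zero Memory Grants Pending, both point to memory pressure):

SELECT counter_name, cntr_value
FROM sys.dm_os_performance_counters
WHERE (object_name LIKE '%Buffer Manager%' AND counter_name = 'Page life expectancy')
   OR (object_name LIKE '%Memory Manager%' AND counter_name = 'Memory Grants Pending');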
Steve (aka sgmunson) 🙂 🙂 🙂
Rent Servers for Income (picks and shovels strategy)
July 16, 2015 at 10:50 am
So Eirikur said that drives, filegroups and configuration would have an impact on performance.
We have 4 drives: one has the data (mdf) files, the second has the log (ldf) files, the third has backup files, and the fourth has tempdb on it.
The two databases that get most of the inserts both have their mdf files on the same drive.
We also use just one primary filegroup.
So my questions are:
1. If we create another filegroup and move maybe half of the tables receiving inserts into it, but the new filegroup is still on the same drive, will it make any difference?
2. If records are inserted into 4 different tables within a transaction, and two of the tables are in one filegroup and the other two in a different one, do we still gain performance?
3. Would it be better to move the mdf file of one of the two insert-heavy databases to the backup drive, since the backup drive sees most of its activity at night and isn't really that busy during the day?
One other thing from the original question: does FILESTREAM cause any overhead (degradation in performance)?
Thanks
Vip
July 16, 2015 at 10:59 am
Bharatvip (7/16/2015)
1. If we create another filegroup and move maybe half of the tables receiving inserts into it, but the new filegroup is still on the same drive, will it make any difference?
No. Two files on the same drive may as well be one file on the drive.
2. If records are inserted into 4 different tables within a transaction, and two of the tables are in one filegroup and the other two in a different one, do we still gain performance?
Maybe. Maybe not. Depends on what is slowing the inserts down and whether or not the filegroups are on different drives.
3. Would it be better to move the mdf file of one of the two insert-heavy databases to the backup drive, since the backup drive sees most of its activity at night and isn't really that busy during the day?
Maybe. Maybe not. Depends on what is slowing the inserts down. (However, since that's a lot of work, probably not a great idea.)
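For reference, a minimal sketch of the filegroup mechanics; the database name, file path, sizes and table here are all hypothetical, and per the above it only pays off if the new file lands on a physically separate drive:

ALTER DATABASE LoggingDB ADD FILEGROUP FG_Inserts;

ALTER DATABASE LoggingDB
ADD FILE ( NAME = 'LoggingDB_Inserts1',
           FILENAME = 'E:\SQLData\LoggingDB_Inserts1.ndf',  -- separate physical drive
           SIZE = 10GB, FILEGROWTH = 1GB )
TO FILEGROUP FG_Inserts;

-- New or rebuilt tables can then be placed on it explicitly (hypothetical table):
CREATE TABLE dbo.AuditLog
(
    LogId    bigint IDENTITY(1,1) NOT NULL PRIMARY KEY,
    LoggedAt datetime2 NOT NULL DEFAULT SYSDATETIME(),
    Payload  varchar(max) NULL
) ON FG_Inserts;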
Do some workload analysis, see what's using the CPU, see what the inserts are waiting for. Without that information, you're guessing.
Gail Shaw
Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability
July 16, 2015 at 11:56 am
Further to Gail's and Steve's replies, don't ponder the "how" before you can identify and quantify the potential problem. If there are only 10-20 inserts per second, each record would have to have quite a large footprint to put a substantial load on the server. I feel something is missing from the equation here; what is the main concern?
😎
BTW, are you building this new system/database on SQL Server 2008?
July 16, 2015 at 1:40 pm
It is on SQL Server 2012. The inserts include several large varchar(max) columns, and we want to make sure the server will handle 40,000 to 50,000 transactions an hour. During testing, things start slowing down, and that is what is causing concern.
Thanks,
Vip.
July 16, 2015 at 5:29 pm
Bharatvip (7/16/2015)
It is on SQL Server 2012. The inserts include several large varchar(max) columns, and we want to make sure the server will handle 40,000 to 50,000 transactions an hour. During testing, things start slowing down, and that is what is causing concern.
With that in mind, it's probably well worth repeating the test to gather memory usage stats as well as I/O (if you haven't already), and see if one or both might be part of the bottleneck, and then that can at least guide your remediation efforts.
Steve (aka sgmunson) 🙂 🙂 🙂
Rent Servers for Income (picks and shovels strategy)
July 17, 2015 at 2:55 am
Bharatvip (7/16/2015)
During testing, things start slowing down, and that is what is causing concern.
Then you need to identify why things are slowing down. What are the bottlenecks? What are the high resource-using queries? Can any tuning be done?
Trying to fix things when you don't know the cause of the problem will likely be a waste of time.
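For the "high resource-using queries" part, a minimal sketch against the plan cache (figures are cumulative since each plan was cached, so look at them after a test window):

SELECT TOP (10)
       qs.total_worker_time / 1000 AS total_cpu_ms,  -- total_worker_time is in microseconds
       qs.execution_count,
       qs.total_worker_time / qs.execution_count / 1000 AS avg_cpu_ms,
       SUBSTRING(st.text, (qs.statement_start_offset / 2) + 1,
                 ((CASE qs.statement_end_offset
                       WHEN -1 THEN DATALENGTH(st.text)
                       ELSE qs.statement_end_offset
                   END - qs.statement_start_offset) / 2) + 1) AS statement_text
FROM sys.dm_exec_query_stats AS qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
ORDER BY qs.total_worker_time DESC;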
Gail Shaw
Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability