December 20, 2012 at 10:23 pm
Comments posted to this topic are about the item The Load Poll
December 21, 2012 at 1:34 am
At one point, one of our old 32-bit SQL 2000 clusters was regularly getting through 8,000+ batch requests/sec at peak.
Different lines of business have different busy periods both seasonally and weekly so an average doesn't mean much.
At 8:30 in the morning on December 23rd I can see people arriving at work and starting to use our website, and the transactions/sec on the old box has jumped from 200/sec up to 787/sec.
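If you want to check your own rate, the Batch Requests/sec value in sys.dm_os_performance_counters is a cumulative total, so you sample it twice and divide by the interval. A minimal sketch, assuming VIEW SERVER STATE permission:

-- The counter is cumulative, so take two samples and divide the
-- difference by the sample window to get a per-second rate.
DECLARE @first BIGINT, @second BIGINT;
SELECT @first = cntr_value
FROM sys.dm_os_performance_counters
WHERE counter_name LIKE 'Batch Requests/sec%';
WAITFOR DELAY '00:00:10';  -- 10-second sample window
SELECT @second = cntr_value
FROM sys.dm_os_performance_counters
WHERE counter_name LIKE 'Batch Requests/sec%';
SELECT (@second - @first) / 10.0 AS batch_requests_per_sec;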
December 21, 2012 at 2:48 am
"I was taking to someone recently and this person had a large transaction load on their SQL Server."
Is that a euphemism and is that why you took to them?
(Sorry, just a joke. The typo made me think of that and it amused me so I had to share it.)
December 21, 2012 at 5:24 am
Right now, at the quietest part of the day, with probably half the company having taken the day off, our busiest SQL Server database is fluctuating between 250 and 800 batch requests per second, with regular (every few seconds) spikes up to 2,500+; it comes out to about 800 over a 30-second average.
I don't have any other metrics to hand, but when the business is using the application in anger I'd expect to be averaging about 1,500 to 2,000/s.
We also get peaks of activity during import phases, which occur throughout the day; I don't have any metrics on those, unfortunately.
CPU is between 0 and 1%, and DB I/O is less than 0.1MB/s.
I'd think that 1,000 transactions per second is actually quite low usage for a system with the number of users we have. Our server could probably handle upwards of 16,000 tps, maybe even 20k at a push.
Ben
^ That's me!
----------------------------------------
01010111011010000110000101110100 01100001 0110001101101111011011010111000001101100011001010111010001100101 01110100011010010110110101100101 011101110110000101110011011101000110010101110010
----------------------------------------
December 21, 2012 at 7:11 am
It has been at least 10 years since I have had a production SQL Server that averaged 1,000 tps or less during regular business process cycles. Currently, the main database servers for our line-of-business applications support processes that average around this. Add SharePoint Server support and JDE to that and you get a very large average tps.
This is all on our older SQL 2005 single-node and failover clusters running on x86 systems.
Our newer x64-class systems can handle a lot more than that, but we have had some issues with virtualization and using iSCSI SAN interfaces for our shared storage. 😎
December 21, 2012 at 8:32 am
Vila Restal (12/21/2012)
"I was taking to someone recently and this person had a large transaction load on their SQL Server."Is that a euphemism and is that why you took to them?
(Sorry, just a joke. The typo made me think of that and it amused me so I had to share it.)
Doh! I read that a few times, and edited this without noticing the mistake.
It's corrected.
December 21, 2012 at 8:48 am
So after having made fun of such a minor typo I feel obliged to give my figures:
I'm just looking after a small set of databases - just 100 users on a LAN.
Today isn't a good day to measure stats for it. Many people are on leave.
But anyway, peak tps today was 12!! And it's averaging about 0.2.
On a busy day I think it would be double or triple that.
Perhaps these aren't stats worth mentioning, but it's important to note that not all databases are enterprisey multi-billion-row, multi-hundred-thousand-user monsters (and it's not a competition, and even if it were I wouldn't want to win it: I don't envy those who look after them). The little ones count too.
Lastly, this method of collecting a poll isn't very efficient for you or the contributors.
Does this blog not have a survey facility or at least a thread that can be voted on?
December 21, 2012 at 9:49 am
Vila Restal (12/21/2012)
Perhaps these aren't stats worth mentioning, but it's important to note that not all databases are enterprisey multi-billion-row, multi-hundred-thousand-user monsters (and it's not a competition, and even if it were I wouldn't want to win it: I don't envy those who look after them). The little ones count too.
I totally agree. Our database that generates the most TPS actually has one Windows user or service account that connects for the services our 15 primary business users consume. These processes spawn multiple threads in multiple services running on 3+ servers. This ends up creating an average of 300-500 active database connections executing batches that do 1,500+ tps. So in a way, our average tps per application user on this database is 150-200. Not too shabby for one of several systems supported by a team of 10 people, including our CIO and first-level support. 😎
This is not something I could have dreamed about doing 10 years ago. Now it is considered slow... 😛
December 21, 2012 at 9:59 am
The environment is a rural hospital. I ran stats over the last month for all DBs under my team's purview (which excludes the data warehouse and our Electronic Health Record), and I excluded system databases and the DBA utility DB, as well as low-use times, which also eliminated some DBs since they never averaged over one transaction per second (test or DR, for example). I can't pick one time range because some servers are 24x7 and others are 8-5, depending on what they're used for. I also excluded one server that went through an application upgrade and migration, since it's not a fair representation of load.
Our highest average is 331 tps, with a peak on that DB of 772; our highest peak was 2,106, with an average of 10 for that DB. The average of the averages was 16, and the average peak was 92.
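For anyone who wants to reproduce this kind of collection: the per-database numbers come from the cumulative Transactions/sec counter. A minimal sketch (the TpsSamples table is hypothetical; an Agent job runs the INSERT every minute, and averages/peaks are computed later from the deltas between rows):

CREATE TABLE dbo.TpsSamples (
    sample_time   DATETIME NOT NULL DEFAULT GETDATE(),
    database_name SYSNAME  NOT NULL,
    trans_total   BIGINT   NOT NULL  -- cumulative counter value
);

INSERT INTO dbo.TpsSamples (database_name, trans_total)
SELECT RTRIM(instance_name), cntr_value
FROM sys.dm_os_performance_counters
WHERE counter_name LIKE 'Transactions/sec%'
  AND object_name LIKE '%:Databases%'
  AND instance_name <> '_Total';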
And thus concludes my TPS report.
December 21, 2012 at 10:58 am
SQL2012EE on Win2K8R2, active-passive, 128GB RAM, single 6-core E5-2640 @ 2.5GHz on a Dell R720, runs at:
- Transactions/sec: 13,800 peak; 1,600 average excluding peaks.
- Batches/sec: 1,170 peak; otherwise between 100 and 800 per monitored minute.
- Active sessions: 15 peak; 3/minute normal average.
- Physical I/O: 12MB/sec peak; 1MB/sec average otherwise.
- Page life expectancy: 300,000.
- Log flushes/sec: 500 peak; 350 average.
- CPU utilization: 58% max for 1 minute in every 10; otherwise under 10%.
Specifications for the machine courtesy of Glenn Berry's excellent blog posts.
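If anyone wants to pull the same counters, a minimal sketch against sys.dm_os_performance_counters (Page life expectancy is an instantaneous value; the /sec counters are cumulative and need two samples for a true rate, as in the earlier example):

SELECT RTRIM(object_name)  AS object_name,
       RTRIM(counter_name) AS counter_name,
       cntr_value
FROM sys.dm_os_performance_counters
WHERE (counter_name = 'Page life expectancy'
       AND object_name LIKE '%Buffer Manager%')
   OR counter_name LIKE 'Log Flushes/sec%';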
December 21, 2012 at 11:56 am
cfradenburg (12/21/2012)
And thus concludes my TPS report.
Ah.... yeaaahhhh... you see, we've changed to a new TPS report format, and that... that's the old format. Yeahhhh... we sent a memo out, and I'll get you a copy of it tomorrow... because, yeahhhh, we'll need you to come in on Saturday. At 9.
Most of our systems peak in the low tens and average single digits. Our few very large production systems average low to mid triple digits, with peaks in the low to mid four-digit numbers, though performance during those peaks is still quite good.
Note that of course "transactions" can be small or large, and thus don't say a whole lot by themselves.
December 21, 2012 at 11:59 am
Vila Restal (12/21/2012)
Lastly, this method of collecting a poll isn't very efficient for you or the contributors.
Does this blog not have a survey facility or at least a thread that can be voted on?
Polls don't work well here. We've tried, and they often force people into a certain answer. I was hoping to get counts and some background, just as you gave.
December 21, 2012 at 12:40 pm
Situation: a 60GB production database for a medium-sized company, 700+ web users during the day, some usage 24x7.
We've never thought about our system in terms of TPS... we just focus on CPU utilization on the database server. Our machine runs all 8 cores between 20 and 40% during the business day, and down to 1% overnight. Whenever utilization gets over 50% our users start to complain about speed, and it's almost always some explainable cause.
In the last 30 seconds we've been between 20 and 40%, and SQL says we've done 86,000 transactions... so I guess we're running about 2,800 TPS (86,000 / 30 ≈ 2,867).
We can take the system down during off hours (with advance notice) for up to an hour with no problem. More than an hour offline starts to cause real problems with the team serving our customers.
Our biggest performance pain point is trying to get big jobs done overnight. We have to sequence the backup, the export to the data warehouse, syncs with other databases, and 45 minutes' worth of database maintenance scripts so they don't run concurrently, but our overnight users always feel real system slowdowns in the depths of the night.
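One way to enforce that kind of sequencing is to chain everything as steps of a single SQL Agent job, so each piece only starts when the previous one finishes. A rough sketch; the job, step, and procedure names here are all hypothetical:

USE msdb;
EXEC dbo.sp_add_job @job_name = N'Nightly sequence';

-- @on_success_action = 3 means "go to the next step";
-- the last step uses the default "quit with success".
EXEC dbo.sp_add_jobstep @job_name = N'Nightly sequence',
    @step_name = N'Backup', @subsystem = N'TSQL',
    @command = N'EXEC dbo.usp_NightlyBackup;',        -- hypothetical proc
    @on_success_action = 3;
EXEC dbo.sp_add_jobstep @job_name = N'Nightly sequence',
    @step_name = N'Warehouse export', @subsystem = N'TSQL',
    @command = N'EXEC dbo.usp_ExportToWarehouse;',    -- hypothetical proc
    @on_success_action = 3;
EXEC dbo.sp_add_jobstep @job_name = N'Nightly sequence',
    @step_name = N'Maintenance', @subsystem = N'TSQL',
    @command = N'EXEC dbo.usp_DatabaseMaintenance;';  -- hypothetical proc

EXEC dbo.sp_add_jobserver @job_name = N'Nightly sequence';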
December 28, 2012 at 5:19 am
I'm designing a system to scale to 12,000 TPS. It involves some clever tricks, such as using Service Broker for lazy processing.
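For anyone unfamiliar with the pattern: the hot transaction just drops a message on a queue and commits, and the expensive work happens later in a background reader or activation procedure. A minimal sketch, with all object names hypothetical:

-- One-time setup: message type, contract, queues, and services.
CREATE MESSAGE TYPE [//Demo/WorkItem] VALIDATION = WELL_FORMED_XML;
CREATE CONTRACT [//Demo/WorkContract] ([//Demo/WorkItem] SENT BY INITIATOR);
CREATE QUEUE dbo.InitiatorQueue;
CREATE QUEUE dbo.TargetQueue;
CREATE SERVICE [//Demo/Initiator] ON QUEUE dbo.InitiatorQueue;
CREATE SERVICE [//Demo/Target] ON QUEUE dbo.TargetQueue ([//Demo/WorkContract]);

-- Hot path: enqueue the work and return immediately.
DECLARE @h UNIQUEIDENTIFIER;
BEGIN DIALOG CONVERSATION @h
    FROM SERVICE [//Demo/Initiator]
    TO SERVICE '//Demo/Target'
    ON CONTRACT [//Demo/WorkContract]
    WITH ENCRYPTION = OFF;
SEND ON CONVERSATION @h MESSAGE TYPE [//Demo/WorkItem] (N'<work id="1"/>');

-- Background reader: pick up and process messages lazily.
DECLARE @msg XML, @dlg UNIQUEIDENTIFIER;
WAITFOR (
    RECEIVE TOP (1) @dlg = conversation_handle,
                    @msg = CAST(message_body AS XML)
    FROM dbo.TargetQueue
), TIMEOUT 5000;
IF @dlg IS NOT NULL
BEGIN
    -- ... do the expensive work with @msg here ...
    END CONVERSATION @dlg;
END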