June 10, 2014 at 11:37 pm
Hi,
Just installed Standard Edition, put tempdb on SSD, and configured the buffer pool extension on SSD. I ran a production replay trace and it is very slow. It eats a lot of CPU too.
Updating the stats did not help.
Any ideas?
Thanks.
June 11, 2014 at 12:14 am
Run update statistics or Reindexing on the database and re-play the production load.
---------------------------------------------------
"There are only 10 types of people in the world:
Those who understand binary, and those who don't."
June 11, 2014 at 12:16 am
Is the hardware configuration of prod and test the same?
---------------------------------------------------
"There are only 10 types of people in the world:
Those who understand binary, and those who don't."
June 11, 2014 at 12:32 am
No, it is not, but I am not maxing out the CPU or I/O. Besides, the edition is different: production is SQL Server 2008 R2 Enterprise, while test is 2014 Standard with the buffer pool extension on SSD and tempdb on SSD.
June 11, 2014 at 12:34 am
Roust_m (6/10/2014)
Hi, Just installed Standard Edition, put tempdb on SSD and configured buffer pool extension to SSD. Ran some production replay trace. Very slow. Eats a lot of CPU too.
Updated the stats, did not help.
Any ideas?
Thanks.
Could you provide more information on the hw/sw setup and config?
😎
June 11, 2014 at 4:02 am
I have not noticed a general slowdown with SQL Server 2014, so some more details are needed in order to understand what's happening. That said, it's possible that the edition difference, Enterprise vs. Standard, might be leading to the issue, and I'm not even sure why based on what we know so far. Just a guess.
"The credit belongs to the man who is actually in the arena, whose face is marred by dust and sweat and blood"
- Theodore Roosevelt
Author of:
SQL Server Execution Plans
SQL Server Query Performance Tuning
June 11, 2014 at 4:08 am
Hi,
Does the database on SQL 2014 have compatibility level 120?
If so, you may want to change it to 110 or 100 and run your test again, since SQL Server 2014 uses a new cardinality estimator to build execution plans. Some queries will run faster, but others might run slower.
http://www.sqlpassion.at/archive/2014/04/22/cardinality-estimation-for-correlated-columns/ gives more info about this change.
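As a sketch of what that change looks like (the database name here is a placeholder):

```sql
-- Check the current compatibility level
SELECT name, compatibility_level
FROM sys.databases
WHERE name = N'YourDatabase';   -- placeholder name

-- Drop back to the SQL 2012 level, which also reverts to
-- the legacy (pre-2014) cardinality estimator
ALTER DATABASE [YourDatabase] SET COMPATIBILITY_LEVEL = 110;
```

Run the replay again afterwards and compare the timings before drawing conclusions.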
Kind regards,
Peter.
June 11, 2014 at 4:12 am
Perhaps the new cardinality estimator in SQL 2014 is not handling your database well. You can disable it for a specific query by using a trace flag, or you can use the former cardinality estimator for the whole database by using ALTER DATABASE Compatibility Level (Transact-SQL) to set the database compatibility level to 110.
(see http://msdn.microsoft.com/en-us/library/dn600374%28v=sql.120%29.aspx)
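For the per-query route, the trace flag in question is 9481, which forces the pre-2014 cardinality estimator for one statement without touching the database compatibility level (table and column names below are just placeholders; QUERYTRACEON requires sysadmin rights):

```sql
-- Force the legacy (pre-2014) cardinality estimator for this query only
SELECT o.OrderID, o.OrderDate
FROM dbo.Orders AS o              -- placeholder table
WHERE o.CustomerID = 42
OPTION (QUERYTRACEON 9481);
```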
Edit: I see peter.roothans just mentioned the same...
June 11, 2014 at 5:23 am
If the OP has just restored the database from SQL 2008 R2 to SQL 2014, then the compatibility mode (and therefore cardinality estimation model) will remain as 100, ie the original one. You have to change it explicitly if you want to take advantage of it.
I think you really need to install SQL 2008 R2 same edition on the same box, configure tempdb on SSD, for a comparable run, a baseline. You also need to check some of the usual suspects, eg SQL Server RAM is capped, tempdb is presized not autogrowing, your db log is not autogrowing etc for starters.
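A quick way to check a couple of those usual suspects (a sketch; swap in your own database name):

```sql
-- Is SQL Server's memory capped, and at what value?
SELECT name, value_in_use
FROM sys.configurations
WHERE name = N'max server memory (MB)';

-- Are tempdb and the replay database presized, or autogrowing in small steps?
SELECT DB_NAME(database_id) AS db_name, name AS file_name,
       size * 8 / 1024 AS size_mb, growth, is_percent_growth
FROM sys.master_files
WHERE database_id IN (2, DB_ID(N'YourDatabase'));  -- 2 = tempdb; placeholder db name
```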
Buffer Pool Extension on SSD is only really going to help if you are memory-bound. Do you think that is the case?
July 22, 2014 at 12:12 pm
+1
I understand my issue to be a corner case unless I hear otherwise, but SQL Server 2014 Enterprise in a VM on a MacBook Pro is significantly faster than on a 6-core 3.8 GHz Intel i7 workstation.
The scenario is doing parallel inserts using .NET 4.5 and the Task Parallel Library. I'm building a tool to upload OpenStreetMap data from XML, and I fully expected the workstation to be faster, even if the program is inserting inefficiently. I get the fact that I should be using a bulk insert or SSIS (per the MS SQL performance guidance paper), and that using SqlBulkCopy with a table lock and trace flag 610 enabled would be one of a few much faster options.
So now you know that I know inserting this way will not achieve the best results. I'm posting to say that SQL Server (or perhaps the .NET Framework 4.5 rather than SQL Server itself) is behaving slower on a much faster machine, and I think the issue in this thread is related.
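For anyone following along, a hedged sketch of the faster path mentioned above (the target table and file path are placeholders; trace flag 610 enables minimally logged inserts into indexed tables on SQL 2008 through 2014):

```sql
-- Enable minimally logged inserts into b-tree targets (instance-wide, until restart)
DBCC TRACEON (610, -1);

-- Bulk load with a table lock so the insert qualifies for minimal logging
BULK INSERT dbo.node                 -- placeholder target table
FROM 'C:\osm\nodes.csv'              -- placeholder data file
WITH (TABLOCK, FIELDTERMINATOR = ',', ROWTERMINATOR = '\n');
```

SqlBulkCopy from .NET gets the same minimal-logging benefit when the table-lock option is used.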
Workstation is a custom build with the following hardware
32 GB 1600 MHz RAM
Asus P9X79 Pro motherboard
6 Vertex 3 SSD's on an LSI 9260 RAID controller striped (32K size) in a RAID 0
Model Name: MacBook Pro
Model Identifier: MacBookPro8,3
Processor Name: Intel Core i7
Processor Speed: 2.4 GHz
Number of Processors: 1
Total Number of Cores: 4
L2 Cache (per Core): 256 KB
L3 Cache: 6 MB
Memory: 16 GB
Vertex 3 SSD (500 MB/s)
The total number of inserts in 54 (ish) minutes on the laptop was 6,305,264, which works out to 1,946 inserts a second. I didn't write down the workstation results because I didn't let it run long enough, since inserts were only a few hundred per second.
-- 51785 in 54 minutes
select count(id) from member;
-- 0 in 30 minutes
-- 4901 in 54 minutes
select count(id) from relation;
-- 0 in 30 minutes
-- 333243 in 54 minutes
select count(id) from way;
-- 57827 in 30 minutes
-- 3195585 in 54 minutes
select count(Id) from Tag;
-- 1304603 in 30 minutes
-- 2719750 in 54 minutes
SELECT count([Id]) FROM [dbo].[node]
July 28, 2014 at 4:04 am
How are you tuning your BPE and server Max Memory? A BPE can turn poor performance to acceptable if you are starved of memory, but it is not a substitute for having enough memory.
Make sure that Max Memory is set correctly. A reasonable starting point is 80% of server memory, then tune to get the best value for your environment. Make sure you only refer to the physical memory on your server and do not include the page file size. Let your server run for a few hours dealing with real workload before starting your tuning, otherwise you may base your decisions on the wrong figures.
Even at the 80% level you should be getting an idea about how much memory SQL Server would ideally need by comparing Target Server Memory (as shown by Performance Monitor) against Total Server Memory. If the target memory is significantly higher than total memory, then you are going to suffer performance issues until you put more memory on to the server.
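Those two counters can be read straight from a DMV, for example:

```sql
-- Compare how much memory SQL Server wants (Target) against what it has (Total)
SELECT counter_name, cntr_value / 1024 AS value_mb
FROM sys.dm_os_performance_counters
WHERE counter_name IN (N'Target Server Memory (KB)',
                       N'Total Server Memory (KB)');
```

If Target stays well above Total after the server has been running under real load, that is the signal to add memory.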
If pages are going into the BPE, how is this affecting overall IO for your server? You can test this by measuring performance with the BPE on and with it off. Hopefully you will see an improvement with the BPE turned on. However, it can take some time for the BPE to become fully effective. On our BI server we reckon it takes the best part of 2 days from SQL startup before the BPE is being used to its maximum extent.
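The on/off comparison itself is a one-line configuration change each way (the file path and size here are placeholders; the size must be at least as large as max server memory):

```sql
-- Turn the buffer pool extension on (placeholder path and size)
ALTER SERVER CONFIGURATION
SET BUFFER POOL EXTENSION ON
    (FILENAME = N'E:\SSDCACHE\bpe.bpe', SIZE = 64 GB);

-- ...and off again for the comparison run
ALTER SERVER CONFIGURATION SET BUFFER POOL EXTENSION OFF;

-- How many pages are currently sitting in the extension?
SELECT COUNT(*) AS bpe_pages
FROM sys.dm_os_buffer_descriptors
WHERE is_in_bpool_extension = 1;
```

Turning the BPE off discards the cached pages, so allow the warm-up time described above before comparing the "on" run.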
If you do find your BPE is taking a lot of your server IO then you should consider adding more RAM.
As an example, our move to SQL2014 in January coincided with some new BI extracts that significantly increased the IO demand on the server. We started out by specifying a 400GB BPE on a SSD, and found a big improvement but still had performance worse than our users wanted. The BPE was peaking at 75MB/Sec IO with a normal load of about 40MB/Sec, while our spinning disks were peaking at 10MB/Sec with normal load in the KB/Sec range. The BPE never got beyond 75% full and took about 90% of all IO. We then added another 64GB memory and got performance that kept our users satisfied. With the extra memory, our BPE was now peaking at under 10MB/Sec with normal load in the KB/Sec range, and our spinning disks now very rarely go out of KB/Sec IO. Our conclusion was that a BPE on SSD can be a useful tool in tuning a server, but that nothing beats physical memory when it comes to getting the best performance.
Original author: https://github.com/SQL-FineBuild/Common/wiki/ 1-click install and best practice configuration of SQL Server 2019, 2017, 2016, 2014, 2012, 2008 R2, 2008 and 2005.
When I give food to the poor they call me a saint. When I ask why they are poor they call me a communist - Archbishop Hélder Câmara