December 26, 2010 at 1:51 pm
Our primary production database is SQL 2005 Enterprise 64-bit on a Server 2008 R2 active/passive cluster. I'd like to propose (sell) an upgrade to SQL 2008 to management. There will have to be very strong reasons if this is going to get into our budget. And now I see the 2011 "Denali" release is starting its debut.
A brief description of how we do things may eliminate some advantages of upgrading (or vice versa). Backups are done by our Systems group with CommVault iDataAgents, so native backups or restores are rarely done. Very few stored procedures are used because an ORM generates the SQL statements.
I don't believe anything in the database is encrypted (not compressed either), but as our SAS 70 audits have become more intense this could change. Our primary performance issue is high average IO read times (50-200 ms), so we're hoping to get an in-depth review of everything between the data and our NetApp. Memory is 64 GB, but we're finding Server 2008 R2 uses a lot of that for file cache (about 15 GB on average), so a move to 128 GB of memory will happen soon.
The ORM-generated SQL is parameterized, but there is some question as to how well this is being done. We have missing indexes, unused indexes, and some duplication of index columns. The tables and indexes have been split out of the original large data file, so there are now about 16 data files spread over 8 LUNs -- more such splitting should be done. Tempdb consists of 8 files of equal size on one NetApp drive -- this also needs to be split out or put on local solid-state drives.
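For anyone wanting to see the same picture on their own box, something like the following uses only standard system views (the database name is a placeholder) to show the current file layout, and sketches capping SQL Server's memory so the OS file cache doesn't squeeze the buffer pool:

```sql
-- Show data/log file layout for tempdb and a user database
SELECT DB_NAME(database_id) AS db, name, physical_name,
       size * 8 / 1024 AS size_mb            -- size is in 8 KB pages
FROM sys.master_files
WHERE database_id IN (2, DB_ID('YourDb'));   -- 2 = tempdb; 'YourDb' is a placeholder

-- Cap SQL Server memory (example value for a 64 GB box; tune for your workload)
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'max server memory (MB)', 53248;
RECONFIGURE;
```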
December 26, 2010 at 4:56 pm
All of the reasons that I'm going to list are here:
Backup Compression.
Change Data Capture.
Central Management Servers.
Extended Events.
IntelliSense (some love it; others loathe it).
Policy-based management.
Query Multiple Servers simultaneously from SSMS.
Resource Management.
Color coding of the status bar based on the server (however, it must be done in two places).
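Of the features above, backup compression is probably the easiest to demo to management: it's a one-keyword change (Enterprise-only in 2008; Standard gets it in 2008 R2). A sketch with placeholder names:

```sql
-- Compress a single backup (database name and path are placeholders)
BACKUP DATABASE YourDb
TO DISK = N'X:\Backups\YourDb.bak'
WITH COMPRESSION, CHECKSUM, STATS = 10;

-- Or make compression the server-wide default for all backups
EXEC sp_configure 'backup compression default', 1;
RECONFIGURE;
```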
Wayne
Microsoft Certified Master: SQL Server 2008
Author - SQL Server T-SQL Recipes
December 26, 2010 at 7:35 pm
Indianrock (12/26/2010)
Our primary production database is SQL 2005 Enterprise 64-bit on a Server 2008 R2 active/passive cluster. I'd like to propose (sell) an upgrade to SQL 2008 to management. ... Tempdb consists of 8 files of equal size on one NetApp drive -- this also needs to be split out or put on local solid state drives.
It sounds more like you have some disk hardware problems and, perhaps, some code performance challenges, especially since you've brought up "ORM-generated SQL". My recommendation would be to skip 2008 altogether and spend some money on training people how to write code in T-SQL.
No... I'm not making this recommendation lightly, sarcastically, or in jest. It was a while back, but one of the companies I've worked for in the past made the costly jump from Standard Edition to Enterprise Edition and bought a (at the time) "monster" server and some very expensive high-speed SAN systems. It was basically all for naught because the performance of the code only increased by about 5%, if at all, and, trust me, the code was really performance challenged. If they had spent the money on developer training and incentives instead of hardware and licenses that made no difference, they would have been way ahead of the game instead of where they still are after all these years... still fighting performance problems with performance-challenged code and ORM-generated code. Even some decent in-house training would go a long way.
One of the courses that should be offered is for managers and especially "C"-level company officers, and it should be titled "If you want it real bad, you'll normally get it that way. Plan better." :-)
--Jeff Moden
Change is inevitable... Change for the better is not.
December 27, 2010 at 7:05 am
I agree with Jeff, although I will say that SQL 2008 seems to manage memory better, especially in SSAS.
W2008 R2 - this OS also offers some better memory management.
Jeff's comments on the version of SQL (Standard or Enterprise) - we know why we run Enterprise: some of the features that we use are available only in this edition. You are the best one to go through the feature list for SQL 2008 and SQL 2008 R2, compare it with what you are using in 2005, and see what else is offered in the newer versions. And some of the cost difference might be swayed by whether you have Software Assurance.
No one mentioned Spatial... new to 2008, enhanced in R2.
2008 seemed to offer some performance improvements over 2005, and R2 was more of a BI release.
Because I work in a data warehouse group, my choice to upgrade was based on some of the new features.
It would have been much different if we only used SQL server.
So it depends - do some research. And maybe think of a proof of concept project to understand more first.
I wouldn't expect a big performance boost just from an upgrade (if that is what you're looking for).
Greg E
December 27, 2010 at 7:36 am
Unfortunately I think dropping the ORM would take an act of Congress. So far we don't have a data warehouse, but it is being discussed. No use of Analysis or Reporting Services. No use of log shipping, mirroring, or replication, or even native SQL backups.
We seem to have a development group that tends to think of SQL Server as a dumb shoebox to store things in. They prefer to write their own monitoring tools, as well as their own job scheduler, rather than use much of anything that comes with SQL Server. The argument against stored procedures is an old one, so no need to resurrect that here.
The former Systems manager was a big fan of what I call hardware/snap disaster recovery solutions rather than anything from SQL Server or Microsoft. So there seems to be a bias against learning anything about SQL Server in that realm also.
I'll do some research about SQL 2008 advantages, but it may just take the application falling on its face to get much movement. We have about 30 million records now with another 15-20 million to be added over the next year.
December 27, 2010 at 10:13 am
Just from my experience on 2005, once you start having performance issues in the configuration we were running (SQL and SSAS on the same box), it would spiral downhill very rapidly.
That might not apply in your case. But could if you start using more features.
Since you have no data warehouse presently, another thing that comes to mind: you might possibly look at overall DB design, especially if someone is looking towards a data warehouse. What works well for transactional DBs might leave something to be desired for data warehousing.
One other feature with both W2008 R2 and SQL 2008 / R2 - you might find it easier to zero in on performance bottlenecks.
Each exposes some additional views.
Since we went with the R2 on both the SQL side and the OS side at the same time, all I can say is that it was helpful for us.
Good Luck!
Greg E
December 27, 2010 at 12:06 pm
Indianrock (12/27/2010)
Unfortunately I think dropping ORM would take an act of Congress. ... We have about 30 million records now with another 15-20 million to be added over the next year.
The problem with trying to throw hardware at SQL Server performance issues is that faster hardware will usually give only marginal improvement unless it is targeted at a specific problem. People often rush into unnecessary hardware upgrades when what is really needed is something as simple as some new indexes or some query tuning. Adding a high-performance RAID array will not help if you are not having IO problems. Adding memory will not help if you already have plenty. Adding more CPU power will not help if you already have plenty of processing power.
The only way to really improve it is to look at exactly what is performing badly, and fix it by adding indexes, tuning the query, etc. It's usually a faster and cheaper solution, too. We had an SQL application that was having performance problems and our CIO got everyone worked up with a crazy plan to split the database over multiple servers. While he was holding emergency planning sessions to put this plan into motion, I used Profiler to identify 4 stored procedures that were consuming 95% of the system CPU time. We spent about 4 hours optimizing these procedures, reduced the CPU workload by 90%, and eliminated the performance problems.
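You don't even need a Profiler trace for that first step on 2005 and up; a sketch using the standard DMVs to surface the heaviest CPU consumers in the plan cache:

```sql
-- Top 5 cached statements by total CPU time (SQL 2005+)
SELECT TOP (5)
    qs.total_worker_time / 1000 AS total_cpu_ms,
    qs.execution_count,
    SUBSTRING(st.text, qs.statement_start_offset / 2 + 1,
        (CASE qs.statement_end_offset
              WHEN -1 THEN DATALENGTH(st.text)
              ELSE qs.statement_end_offset
         END - qs.statement_start_offset) / 2 + 1) AS stmt
FROM sys.dm_exec_query_stats AS qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
ORDER BY qs.total_worker_time DESC;
```

Note the DMVs only cover what's currently in the plan cache, so check them during a representative workload window.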
If the development team is unwilling to deal with SQL Server performance issues, then that is probably the upgrade you should be looking into: Developers who know what they are doing with SQL Server.
December 27, 2010 at 5:30 pm
A couple of items missed in the recommendations might help out:
Data Compression
Page Compression
Either (or both) of the above options could reduce the storage requirements and also improve IO performance. However, you really need to test these thoroughly - as they could have a negative effect on your application.
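Before rebuilding anything, 2008 ships a stored procedure to estimate the savings per object, which makes the testing Jeffrey mentions much cheaper. A sketch with hypothetical table names:

```sql
-- Estimate page-compression savings for one table (2008+, Enterprise Edition)
EXEC sp_estimate_data_compression_savings
    @schema_name = 'dbo',
    @object_name = 'YourBigTable',       -- placeholder name
    @index_id = NULL,                    -- NULL = all indexes
    @partition_number = NULL,
    @data_compression = 'PAGE';

-- If the estimate looks good, rebuild with compression enabled
ALTER TABLE dbo.YourBigTable REBUILD WITH (DATA_COMPRESSION = PAGE);
```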
Other considerations:
New data types (e.g. date, time, HierarchyID, etc...)
New functionality (e.g. MERGE - instead of separate Update/Insert statements)
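MERGE collapses the usual "update if it exists, otherwise insert" pattern into a single statement; a minimal sketch with made-up table names:

```sql
-- 2008+ upsert in one statement (dbo.Target / dbo.Source are placeholders)
MERGE dbo.Target AS t
USING dbo.Source AS s
    ON t.id = s.id
WHEN MATCHED THEN
    UPDATE SET t.val = s.val
WHEN NOT MATCHED BY TARGET THEN
    INSERT (id, val) VALUES (s.id, s.val);
```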
And I am going to say a bit about using NetApp Snap as a DR solution. First, unless SnapManager has been improved, there are just way too many problems. The recommended solution is to actually snap your databases at least every hour, with a transaction log backup every half-hour (or 15 minutes). And because you cannot run both a snap backup and a transaction log backup at the same time, you have to schedule the transaction log backups outside the window for the snaps. And forget about using the verify option on clusters - it won't work. Not to mention using the verify option on a production database, which will block all other operations until it has completed.
Frankly - I uninstalled it on those servers and reverted to native backups.
Jeffrey Williams
"We are all faced with a series of great opportunities brilliantly disguised as impossible situations."
- Charles R. Swindoll
How to post questions to get better answers faster
Managing Transaction Logs
December 27, 2010 at 7:17 pm
I'd like to hear more about NetApp Snap. At this point I'm not sure what is being planned for DR, except mention of some form of replicating the DB to a colo site and backing that up.
December 27, 2010 at 9:53 pm
Indianrock (12/27/2010)
I'd like to hear more about NetApp Snap. At this point I'm not sure what is being planned for DR, except mention of some form of replicating the DB to a colo site and backing that up.
Read what Jeffrey wrote in his final sentence above. He uninstalled it and went back to native backups.
I also see that (it seems) you're ignoring most of the advice given, which, of course, is up to you. But most of the advice given so far has nothing to do with upgrades and little to nothing to do with new hardware or software. I have to say it again... spend the money on training your people. It'll go a whole lot further than any other money you can spend until SP1 of 2k11 comes out. :-) Heh... and that also depends on the "color" of the money, I suppose. Lots of companies can't move money from a hardware budget to a training budget. If that's true then, yeah... buy a fire-breathing EMC SAN with all the goodies and a twin for the DR so you can snapshot a whole server's worth of databases to the DR twin in minutes. :hehe: It may not help your code run any faster, though. Performance-challenged code usually can't really be fixed by better hardware.
And the colo site is likely fine for the near term for DR. :-)
--Jeff Moden
Change is inevitable... Change for the better is not.
December 28, 2010 at 6:04 am
Thanks Jeff. I'm taking serious note of all being said here, and passing it to our Corporate DBA for review also. I'm not in a position to dictate how our company decides to handle these things. I just give my input when asked, but based on previous experience I'm not very optimistic about dropping the ORM.
If hardware and configuration changes don't solve things, it may take a significant "failure" of some sort (a decision not to convert certain large legacy clients to this new system) before serious change is made. One hindrance is having all clients' data in one large database. This is an application design "requirement", but in fact some of the large clients could be put on their own systems if there was no other choice.
Anyway, I appreciate all the input.
December 28, 2010 at 6:53 pm
Indianrock (12/28/2010)
Thanks Jeff. I'm taking serious note of all being said here, and passing it to our Corporate DBA for review also. ... Anyway, I appreciate all the input.
Ooooohh... no... I'm not saying to drop the use of ORMs... I'm saying that you really have to watch it, especially if someone uses it to do an "aggressive get" to return data from dozens of tables.
Also... I don't usually see having all the clients' data in one large database as a hindrance unless some really oddball security is required to keep their data separate. In fact, I usually find benefit in such a thing because it's easier to report on, etc., etc. The big problems normally occur when you go to use such data. While ORMs are a great replacement for C.R.U.D. and other (usually "single record") "Let/Get" types of things, people will often make the mistake of trying to do aggregates and other reporting using the same methods they use for OLTP. They're two totally different worlds, with the biggest difference being the number of rows "touched" by either process.
--Jeff Moden
Change is inevitable... Change for the better is not.
December 29, 2010 at 7:44 am
Yes, I'm sure we're seeing some queries that belong in a data warehouse. Our "business" side has even insisted that clients be allowed to do "contains/like"-type searches, where they look for records where the account number "contains" "345", for example.
There is a limit to the number of rows that can be returned to the GUI, but I think it's done like:
select top 5000 yada yada yada from tableA join tableB join tableC where yada yada yada
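Those "contains" searches are likely a big part of the IO pain: a leading wildcard defeats any index seek. A sketch of the difference, with a hypothetical table and column:

```sql
-- Can seek an index on AccountNumber (trailing wildcard only)
SELECT TOP (5000) AccountNumber
FROM dbo.Accounts
WHERE AccountNumber LIKE '345%';

-- Forces a scan: the leading wildcard makes the predicate non-sargable
SELECT TOP (5000) AccountNumber
FROM dbo.Accounts
WHERE AccountNumber LIKE '%345%';
```

If the business truly needs substring matching, that's a case for full-text indexing or a warehouse-side solution rather than LIKE against the OLTP tables.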
Example of ORM-generated SQL. This is one of our worst in terms of logical reads. (I've "obfuscated" some of this using "Anonymous".)
(list of parameters sent with the SQL)
SELECT Anonymous02.Anonymous_ID AS Anonymous02_Anonymous_ID0, yada yada FROM Anonymous AS Anonymous02 INNER JOIN (PROPERTY AS AnonymousProperty13 INNER JOIN (ERROR AS FollowUpDataError14 INNER JOIN WORK_QUEUE_ITEM AS ResolveFollowUpInvalidDataError15 ON FollowUpDataError14.ERROR_ID=ResolveFollowUpInvalidDataError15.ERROR_ID) ON AnonymousProperty13.PROPERTY_ID=FollowUpDataError14.Anonymous_PROPERTYPROPERTY_ID) ON Anonymous02.Anonymous_ID=AnonymousProperty13.Anonymous_ID WHERE ((ResolveFollowUpInvalidDataError15.Anonymous_PROCESS_STATUS = @DerivedTable01_Anonymous_PROCESS_STATUS20 AND Anonymous02.CLIENT_ID = @DerivedTable01_CLIENT_ID41 AND ResolveFollowUpInvalidDataError15.SECURED_ORGANIZATIONORGANIZATION_ID = @DerivedTable01_SECURED_ORGANIZATIONORGANIZATION_ID62) AND ((ResolveFollowUpInvalidDataError15.CONCRETE_TYPE IN ( @DerivedTable01_73, @DerivedTable01_84, @DerivedTable01_95))))) AS ScalarQueryTable) AND (Anonymous02.Anonymous_OBJECT_STATUS = @DerivedTable01_Anonymous_OBJECT_STATUS36 AND Anonymous02.Anonymous_DATE_TIME IS NULL AND AnonymousProperty13.EEE_DATE_TIME IS NULL AND Anonymous02.RECOVERY_STATUS IS NULL AND Anonymous02.Anonymous_ID NOT IN (SELECT ServicedAnonymous02_Anonymous_ID50 FROM (SELECT ServicedAnonymous02.Anonymous_ID AS ServicedAnonymous02_Anonymous_ID50 FROM SERVICED_Anonymous_GROUP_ITEM AS ServicedAnonymous02 INNER JOIN (SERVICE_REQUEST AS AnonymousGroupRequest13 CROSS JOIN Anonymous_AGENCY_REQUEST_TRANSACTION_TYPE AS ServiceRequestTransactionType14) ON ServicedAnonymous02.Anonymous_GROUP_REQUESTSERVICE_REQUEST_ID=AnonymousGroupRequest13.SERVICE_REQUEST_ID WHERE ((AnonymousGroupRequest13.CLIENT_ID = @DerivedTable01_CLIENT_ID37 AND (ServiceRequestTransactionType14.REQUEST_TRANSACTION_TYPE = AnonymousGroupRequest13.REQUEST_TRANSACTION_TYPE AND ServiceRequestTransactionType14.REQUEST_TYPE IN ( @DerivedTable01_REQUEST_TYPE78, @DerivedTable01_REQUEST_TYPE89, @DerivedTable01_REQUEST_TYPE910, @DerivedTable01_REQUEST_TYPE1011, 
@DerivedTable01_REQUEST_TYPE1112) AND AnonymousGroupRequest13.Anonymous_PROCESS_STATUS = @DerivedTable01_Anonymous_PROCESS_STATUS1313 OR (AnonymousGroupRequest13.REQUEST_TRANSACTION_TYPE = @DerivedTable01_REQUEST_TRANSACTION_TYPE1514))) AND ((ServicedAnonymous02.CONCRETE_TYPE IN ( @DerivedTable01_1615)) AND (AnonymousGroupRequest13.CONCRETE_TYPE IN ( @DerivedTable01_1716, @DerivedTable01_1817, @DerivedTable01_1918, @DerivedTable01_2019, @DerivedTable01_2120, @DerivedTable01_2221, @DerivedTable01_2322, @DerivedTable01_2423, @DerivedTable01_2524, @DerivedTable01_2625, @DerivedTable01_2726, @DerivedTable01_2827, @DerivedTable01_2928, @DerivedTable01_3029, @DerivedTable01_3130, @DerivedTable01_3231, @DerivedTable01_3332, @DerivedTable01_3433, @DerivedTable01_3534, @DerivedTable01_3635, @DerivedTable01_3736, @DerivedTable01_3837, @DerivedTable01_3938, @DerivedTable01_4039, @DerivedTable01_4140, @DerivedTable01_4241, @DerivedTable01_4342, @DerivedTable01_4443, @DerivedTable01_4544, @DerivedTable01_4645, @DerivedTable01_4746, @DerivedTable01_4847, @DerivedTable01_4948))))) AS ScalarQueryTable) AND (Anonymous02.CLIENT_ID = @DerivedTable01_CLIENT_ID1149 AND (Anonymous02.Anonymous_DATE >= @DerivedTable01_Anonymous_DATE1350) AND (Anonymous02.SUB_CATEGORY IN ( @DerivedTable01_SUB_CATEGORY1551)) AND (Anonymous02.CATEGORY = @DerivedTable01_CATEGORY1752) AND ((AnonymousProperty13.EXPECTED_Anonymous_STATE_ABBR IN ( @DerivedTable01_EXPECTED_Anonymous_STATE_ABBR1953, @DerivedTable01_EXPECTED_Anonymous_STATE_ABBR2054, @DerivedTable01_EXPECTED_Anonymous_STATE_ABBR2155, @DerivedTable01_EXPECTED_Anonymous_STATE_ABBR2256))))) AND (DateDiff(Day, DateAdd(Day, @DerivedTable01_2357, DateAdd(Day, @DerivedTable01_2458, Anonymous02.Anonymous_DATE)), GETDATE()) >= @DerivedTable01_3059)) AND ((AnonymousProperty13.CONCRETE_TYPE IN ( @DerivedTable01_3160))))