May 6, 2016 at 1:34 pm
orjan.franzen (5/6/2016)
Having to drop a unique index to increase performance can be done easily with a generated script, whether it's a primary key with a clustered index or a unique clustered index. However, rebuilding unique clustered indexes is a clear indication of bad physical design and can be avoided by analysing structure, volumes and search needs correctly from the beginning...
Yes, it's easy to code; it's the performance hit of making the change that I'm talking about. Constraints are often vastly more costly to change than indexes.
Changing the clustered index can indeed stem from a poor initial choice -- most often from the horrific assumption many people make of automatically/by default using an identity column as the clustering key -- or it can be because table usage has changed over time. Either way, since the clustered index is the most critical index for performance, it's much better to be able to correct it than to be locked in because a unique constraint is so much more difficult to change than a plain unique index.
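To make the cost difference concrete, here's a rough sketch with made-up Orders/OrderLines names (none of these objects are from the article):

-- Plain unique clustered index: one statement swaps in a new clustering key.
CREATE UNIQUE CLUSTERED INDEX UX_Orders
    ON dbo.Orders (CustomerID, OrderID)
    WITH (DROP_EXISTING = ON);  -- rebuilds in place; a changed key definition is allowed

-- Same key enforced as a PRIMARY KEY constraint: DROP_EXISTING cannot change
-- a constraint's definition, so every referencing FK has to come off first.
ALTER TABLE dbo.OrderLines DROP CONSTRAINT FK_OrderLines_Orders;  -- repeat per FK
ALTER TABLE dbo.Orders     DROP CONSTRAINT PK_Orders;             -- drops its index too
CREATE UNIQUE CLUSTERED INDEX UX_Orders
    ON dbo.Orders (CustomerID, OrderID);                          -- the new clustering key
ALTER TABLE dbo.Orders ADD CONSTRAINT PK_Orders
    PRIMARY KEY NONCLUSTERED (OrderID);                           -- keep the PK, nonclustered now
ALTER TABLE dbo.OrderLines ADD CONSTRAINT FK_OrderLines_Orders
    FOREIGN KEY (OrderID) REFERENCES dbo.Orders (OrderID);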
SQL DBA,SQL Server MVP(07, 08, 09) "It's a dog-eat-dog world, and I'm wearing Milk-Bone underwear." "Norm", on "Cheers". Also from "Cheers", from "Carla": "You need to know 3 things about Tortelli men: Tortelli men draw women like flies; Tortelli men treat women like flies; Tortelli men's brains are in their flies".
May 6, 2016 at 1:45 pm
Sounds like good news!
But... as I said before, indexing and performance problems are often the result of a bad database structure and can in most cases be avoided with a better structure.
It's like using effective machinery to patch cracks in a house's walls and keep them from widening, instead of making the foundation solid from the beginning...
May 6, 2016 at 6:07 pm
John Hick-456673 (5/6/2016)
Creating indexes for data in other environments that does not conform to a production-like spread just does not make any sense and accomplishes nothing.
Yes, it does - it depends on the usage of the data. If the environment is for reporting, the report queries may have entirely different needs than the UI application has. Same thing for analysis.
John Hick
Hmm, wouldn't you want reporting and analysis on data that came from production? It may have a different schema for warehousing etc., but the contents would be the same as production, no?
Hakim Ali
www.sqlzen.com
May 6, 2016 at 9:14 pm
hakim.ali (5/6/2016)
John Hick-456673 (5/6/2016)
Creating indexes for data in other environments that does not conform to a production-like spread just does not make any sense and accomplishes nothing.
Yes, it does - it depends on the usage of the data. If the environment is for reporting, the report queries may have entirely different needs than the UI application has. Same thing for analysis.
John Hick
Hmm, wouldn't you want reporting and analysis on data that came from production? It may have a different schema for warehousing etc., but the contents would be the same as production, no?
Not when you get the data via transactional replication; the data and schema are identical, but not necessarily the procs or the functions.
May 6, 2016 at 11:23 pm
Of course, an OLAP system has a different schema, implementing a star or snowflake structure, and normally these schemas are implemented separately from each other.
I started this discussion talking about normalized OLTP systems. An OLAP system is normally updated periodically from one or more OLTP systems; OLAP systems are designed for queries, not for transaction-based updates.
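For example, a minimal star schema might look like this (purely illustrative names; loaded in batches from the OLTP source rather than updated transactionally):

CREATE TABLE dbo.DimDate (
    DateKey      int  NOT NULL PRIMARY KEY,   -- e.g. 20160506
    CalendarDate date NOT NULL
);

CREATE TABLE dbo.DimProduct (
    ProductKey  int           NOT NULL PRIMARY KEY,  -- surrogate key
    ProductName nvarchar(100) NOT NULL,
    Category    nvarchar(50)  NOT NULL               -- flattened, not normalized
);

CREATE TABLE dbo.FactSales (
    DateKey     int   NOT NULL REFERENCES dbo.DimDate (DateKey),
    ProductKey  int   NOT NULL REFERENCES dbo.DimProduct (ProductKey),
    Quantity    int   NOT NULL,
    SalesAmount money NOT NULL
);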
May 9, 2016 at 6:48 am
Thanks for the good article.
May 9, 2016 at 12:00 pm
The article does give some insight into the DMVs used to analyze indexes - that is helpful.
I would say that using the ideas presented and then sending the top 10 index recommendations in an email alert on a regular basis (as sketched below) could be quite helpful for keeping any index maintenance strategy moving forward. However, I would never automate the creation of the indexes [EDIT for clarity: Automating deployment of a tested index is fine]. If you are, then I would consider it an outlier instance at best, and in no way a typical deployment scenario.
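A rough sketch of the kind of query such an alert could be built on, using the missing-index DMVs; the ranking formula here is just one common convention, and the output is meant for human review, not for automatic CREATE INDEX:

-- Top 10 missing-index suggestions by estimated impact, for review only.
SELECT TOP (10)
       d.statement                  AS table_name,
       d.equality_columns,
       d.inequality_columns,
       d.included_columns,
       s.user_seeks + s.user_scans  AS times_needed,
       s.avg_total_user_cost * s.avg_user_impact
           * (s.user_seeks + s.user_scans) AS estimated_improvement
FROM sys.dm_db_missing_index_group_stats AS s
JOIN sys.dm_db_missing_index_groups      AS g ON g.index_group_handle = s.group_handle
JOIN sys.dm_db_missing_index_details     AS d ON d.index_handle       = g.index_handle
ORDER BY estimated_improvement DESC;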
In a typical deployment scenario with DEV -> QA -> STAGE -> UAT -> PROD instances for example:
The idea of each instance having different indexes outside of a develop -> test -> push-to-prod strategy does not make any viable sense AT ALL. Nothing goes to PROD that has not been documented, developed/scripted, tested, and change-logged first.
DMVs get reset after the services are restarted, a cluster failover, etc. As others have said, what about those annual or quarterly queries, whose indexes you are now dropping because your cluster failed over, your server was rebooted, or your services restarted because you added a trace flag or for any other reason?
Also, if you are going to automate this activity (but please don't), you MUST calculate space as part of the equation. This is a good way to blow out your space and instantly have a lot more problems than you were trying to solve. It is simply a very risky idea. Imagine automatically adding an index to a 900-million-row table - that is simply a very bad idea. You always want eyes on anything going to prod that could even remotely cause issues. Not to mention that you also need some strategy to determine the best FILEGROUP to place the index on. You may need to move the FILEGROUP to a new volume, or sometimes it might be best to create a new one. Way too many factors for an automated process.
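At minimum, something like this would have to run first - a sketch of a per-file free-space check for the current database, against which the new index's estimated size could be weighed:

-- Size and free space per file, so a new index can be weighed against
-- the target filegroup before it is created.
SELECT f.name         AS logical_file,
       fg.name        AS filegroup_name,
       f.size / 128.0 AS size_mb,                    -- 8 KB pages -> MB
       f.size / 128.0
         - CAST(FILEPROPERTY(f.name, 'SpaceUsed') AS int) / 128.0 AS free_mb
FROM sys.database_files AS f
LEFT JOIN sys.filegroups AS fg                       -- NULL for the log file
       ON fg.data_space_id = f.data_space_id
ORDER BY fg.name, f.name;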
Besides the fact that automating random changes is just a bad idea, there are simply too many factors not considered, some too difficult to automate. But like I said, using the tools presented here to generate an automated email report, to spur and assist with deeper index analysis and planning, could be a very good thing. That is where the automation should occur [EDIT for clarity: Automating deployment of a tested index is fine also], not with the actual creation of the indexes. Be sure to include in the report when the SQL service was last restarted so you have better context:
-- Instance start time; every DMV counter resets when the service starts.
SELECT sqlserver_start_time FROM sys.dm_os_sys_info;
That is my take anyway - I hope some find it helpful - your take may be different and that is OK.
May 10, 2016 at 4:13 am
Hey again,
I was wondering if you could give an idea of the scope/scale this is rolled out to, and the resources available to manage it? I.e., could you cope with manual best practice if you didn't have anything automated?
The reason I ask is that there are some very good arguments here on why not to do things, but sometimes practicalities get in the way of the ideal.
Case in point: an old requirement at a friend's place of work was regular live rollouts to over 33k servers across the planet (for web-facing services), within time windows measured in hours. This was over a decade ago. Total resource to do it: him. He scripted things to hell and back, and it worked well. No time to check different servers to see how they might cope under their various loads!
On the flip side, you might have one nice dedicated server in a building, a dedicated admin, dedicated developers and DB developers, lots of time, well-structured processes in place, etc., and people who can hone everything to perfection.
Or you could work in an environment where management refuses to provide any test environments, and the developer/DBA/DevOps-style person has to manage development and testing on the live server (assuming there is space) and has zero time for ongoing DB monitoring and tuning due to the 10 other projects that needed doing by yesterday.
We all know the realities of the real world vary a great deal 🙂 Risk vs reward balance.