July 24, 2008 at 1:33 am
If it's 64 bit, you should really set the max memory setting for SQL. I've seen too many cases of runaway memory allocation.
If you're using Enterprise Edition, make sure that the SQL service account has permission to lock pages in memory. If you're not sure, check the error log: among the startup messages there will be a note about whether or not SQL is using locked pages for the buffer pool. Locking pages means that the OS can't move SQL's memory to the swap file.
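For anyone following along, the two settings mentioned here look roughly like this; the memory value is purely illustrative and should be sized to the server's RAM and workload:

```sql
-- Illustrative only: cap the buffer pool so SQL can't starve the OS
-- (6144 MB is just an example for an 8 GB box)
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'max server memory (MB)', 6144;
RECONFIGURE;

-- Search the current error log's startup messages for the locked-pages note
EXEC sp_readerrorlog 0, 1, 'locked pages';
```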
Gail Shaw
Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability
July 24, 2008 at 8:16 am
After tinkering with indexes in our test system, I wasn't seeing a huge performance increase. Whether I used standard rule-of-thumb indexing or created indexes I thought would be appropriate, there wasn't much in it, no matter how I tried to optimize things.
I respectfully submit that someone with years of training and experience may well be able to develop an indexing strategy much better than you are able to do, so be careful in thinking that indexing won't help out here. Effective performance analysis and tuning is part science, part art and part experience. The latter is VERY important! Said person would also not find it at all curious that SELECT statements hitting a reasonable fraction of a table actually take a table lock to do so. 🙂
As for nolock, here is a good blog post on that: http://blogs.msdn.com/sqlcat/archive/2007/02/01/previously-committed-rows-might-be-missed-if-nolock-hint-is-used.aspx.
Note that you can both miss COMMITTED rows and read them more than once!! It is no longer just an issue of reading uncommitted transactional states. When knowledge of this hit several years ago, everyone was stunned that it was possible, including people at Microsoft.
Best,
Kevin G. Boles
SQL Server Consultant
SQL MVP 2007-2012
TheSQLGuru on googles mail service
July 24, 2008 at 6:01 pm
Thanks Gail. I appreciate your advice. I will get the network guys onto this.
I've started investigating the TempDB and its greater role in SQL 2005 especially. There are some good articles out there about "best practice" with the TempDB in SQL 2005, and some code to take snapshots of what TempDB is up to as far as coping with volume throughout the day.
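One such snapshot query, for anyone curious, uses the SQL 2005 DMV that breaks TempDB space down by what it's being used for:

```sql
-- How TempDB space is currently allocated, in KB (pages are 8 KB each)
SELECT
    SUM(user_object_reserved_page_count)     * 8 AS user_objects_kb,
    SUM(internal_object_reserved_page_count) * 8 AS internal_objects_kb,
    SUM(version_store_reserved_page_count)   * 8 AS version_store_kb,
    SUM(unallocated_extent_page_count)       * 8 AS free_kb
FROM tempdb.sys.dm_db_file_space_usage;
```

Run on a schedule, it gives a rough profile of TempDB demand through the day.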
It's been so many years since I've actually had to do any admin on SQL Server databases (and I was spoiled by a really well-set-up Oracle database for about three years recently) that I think I've really lost touch with where SQL Server is these days as far as configuration goes, so it demands a bigger look!
Our TempDB is currently located on the C: drive and set to grow automatically, but it has not yet gone over its 8 MB initial file size. Obviously all this would change dramatically if we switched to read-committed row-versioning. I like the idea of moving it to its own drive in a RAID setup. There are two disks currently in the server and the user DBs are on the D: drive, so things would probably be pretty evenly spread for the size and use of our current database at the moment.
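For reference, moving TempDB to another drive is just a metadata change (the paths below are made up; the files actually move at the next service restart):

```sql
-- Point the TempDB data and log files at the new location
ALTER DATABASE tempdb
    MODIFY FILE (NAME = tempdev, FILENAME = 'E:\SQLData\tempdb.mdf');
ALTER DATABASE tempdb
    MODIFY FILE (NAME = templog, FILENAME = 'E:\SQLLog\templog.ldf');
```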
But going forward, I'm going to be prodding the network guys to look into this with me and reconfigure this to make things so they will not suddenly come crashing down on us!
I've recently been doing a lot of development on the SQL Server, and there are a number of items that will be utilizing the TempDB much more than previously, too, so I suppose it's worth keeping this in mind also.
Thanks again!
July 24, 2008 at 6:21 pm
TheSQLGuru (7/24/2008)
I respectfully submit that someone with years of training and experience may well be able to develop an indexing strategy much better than you are able to do, so be careful in thinking that indexing won't help out here. Effective performance analysis and tuning is part science, part art and part experience. The latter is VERY important! Said person would also not find it at all curious that SELECT statements hitting a reasonable fraction of a table actually take a table lock to do so. 🙂
I agree with you that experience must be a big part of index tuning. I have worn a lot of hats in the past 15 years, and more of them have been programming and T-SQL hats than SQL Admin hats, so I wouldn't doubt I'm less experienced in this area than many, and I certainly wouldn't say that it's not worth trying to tune indexes to try and improve performance, when you have the time and impetus to do so.
However, on the occasions when I have taken a look at index tuning over the past 15 years (and there have been several occasions, in several versions of SQL Server), I have always ended up drawing the same conclusion: the "rules of thumb" you are generally told about how best to optimise indexes (i.e., the theory) don't have much to do with reality. I have on several occasions added clustered indexes to tables where some idiot had stupidly REMOVED them in favour of nonclustered ones, only to find that this actually SLOWED data retrieval significantly. And yes, the clustered index was usually placed on a primary key ID field which also matched the date order of the records we were retrieving, and should be an appropriate place for such an index... and the indexes and pages were rebuilt so they were effectively defragged.
Perhaps the modest size of most of the DBs I've worked with over the years contributes to this, as I usually get the feeling that there is not enough data in these systems for indexes to really have much of a chance to impact performance. I don't know, 3 million records in a table with 30 or so fields doesn't seem all that massive to me compared to what some of you might see out there, and if it is, then I'm in favour of going back and working with Oracle again instead, as it can handle such things without a fuss!
But I digress. My point is, yes, I agree that I don't have the MOST experience with indexes, but I can read and follow instructions, and on the several occasions I have worked to optimize indexes, I have found relatively little benefit in bothering to do so. I have more often found that (as Gail has rightly pointed out to me) sloppy DB structure and SQL Server setup, and lots of massive, complex, nested queries that group and UNION data, contribute to such problems far more often than indexes do, and are a great deal more noticeable as a problem. So at this point, after having run some tests on the indexes in our larger tables to see what they are up to and whether there were any blatant problems with them, and finding that not to be the case, tuning the indexes in this particular situation has become rather low on the priority list.
TheSQLGuru (7/24/2008)
As for nolock, here is a good blog post on that: http://blogs.msdn.com/sqlcat/archive/2007/02/01/previously-committed-rows-might-be-missed-if-nolock-hint-is-used.aspx.
Note that you can both miss COMMITTED rows and read them more than once!! It is no longer just an issue of reading uncommitted transactional states. When knowledge of this hit several years ago, everyone was stunned that it was possible, including people at Microsoft.
I've read a lot of posts where DBAs claim the sky is going to fall and the world will end if we use NOLOCK. I have also read nearly as many posts by DBAs who seem to know their stuff as well as the first group who swear by NOLOCK, despite the known anomalies.
As I have said previously, in our case, if we have a missing record or two, or we see a record twice, it is not such a big deal, as the users are using this particular form as a guide to follow up on overdue jobs, and nothing else.
If a job is duplicated or missing for a short time (and they are refreshing this frequently, which was one of the problems with the tablelocks) it is absolutely no problem.
If it was going to bring the DB crashing down, that would be another matter, but I have heard no reports of this nor experienced any since implementing the code with NOLOCK, so I really don't understand what all the panic is about.
If I come to be proven wrong in the coming weeks, I'll certainly let you all know. Gail has given me some good advice, and for now the NOLOCK stays until I can sort out our disk configuration, where our TempDB sits, and how much space and memory it generally uses, so we can think about moving to read-committed row-versioning instead. If we can get to that point, it will suit our user app much better, since it prefers this style of optimistic locking, and then the NOLOCK hints can be removed without a problem.
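For completeness, the eventual switch being described here is a single setting, though it needs exclusive access to the database and moves the version-store load onto TempDB (the database name below is hypothetical):

```sql
-- Readers then see committed row versions instead of taking shared locks
ALTER DATABASE OurJobsDb SET READ_COMMITTED_SNAPSHOT ON;
```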
July 25, 2008 at 12:23 am
sharon.bender (7/24/2008)
I don't know, 3 million records in a table with 30 or so fields doesn't seem all that massive to me compared to what some of you might see out there, and if it is, then I'm in favour of going back and working with Oracle again instead, as it can handle such things without a fuss!
So can SQL, with the appropriate indexes and efficient code.
3 million isn't much. I have tables with a hundred times that in them and queries on those tables perform very well.
But I digress. My point is, yes, I agree that I wouldn't have the MOST experience with Indexes, but I can read and follow instructions and on the several occasions I have worked to optimize indexes, I have found relatively little benefit in bothering to do so.
While fixing bad config helps, appropriate indexes improve performance by orders of magnitude, and I'm not exaggerating.
I've been doing perf tuning for a large bank for going on three years now, and the combination of fixing poorly written code and creating useful indexes has provided massive performance improvements.
Respectfully, if the indexes you created didn't help, then either the code was written in such a way that SQL couldn't use the index, or the index was not appropriate for the query.
Did you check the execution plans for the queries before and after the index creation?
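For anyone wanting to make that before/after comparison, a minimal harness looks something like this (the query and names are illustrative):

```sql
-- Show logical reads and CPU/elapsed time in the Messages tab;
-- run once before and once after creating the index and compare,
-- ideally with "Include Actual Execution Plan" on in Management Studio
SET STATISTICS IO ON;
SET STATISTICS TIME ON;

SELECT COUNT(*)
FROM dbo.Jobs                     -- hypothetical table
WHERE StatusDate >= '20080101';   -- hypothetical predicate being tuned
```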
Gail Shaw
Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability
July 25, 2008 at 12:55 am
GilaMonster (7/25/2008)
Respectfully, if the indexes you created didn't help then either the code was written in such a way that SQL couldn't use the index or the index was not appropriate for the query. Did you check the execution plans for the queries before and after the index creation?
Of course. That's exactly what I was doing. I was able to introduce some parallelism into the execution plans by tweaking various things, and in some cases removing non-clustered indexes that would rarely if ever be used improved things, if anything, but the trade-off was minimal.
In our two largest tables, my boss TOOK OFF the clustered indexes from the primary keys when he created them way back when. I have no idea why. They are unique integers, and serve as the identifier for the "job"... thus, as they increase the date increases, so they're probably the best shot as a primary, clustered index for most of the querying we would be doing on these tables. There are two or three other fields we use quite frequently which would be good candidates for non-clustered indexes, but that's about it.
When I put the clustered indexes back on the primary keys, that's when performance DEFINITELY decreased, which I found rather shocking.
I'll fiddle some more when I have some time, but at the moment my boss wants me to troubleshoot deadlocks on an as-needed basis and fix up query syntax first. Before I was here, he had put a lot of LIKE operators in his code to deal with our client system, rather than taking the time to construct queries on the fly so he wouldn't have to use them. It's a bit messy in there, so it makes sense to clean up from the top down.
It really gives me an itchy trigger finger to see rather massive tables in there without clustered indexes on them... it's just WRONG. But since my initial performance testing didn't show any real gain from putting them back on, I'm holding off for now. I will get to it in good time, once I can figure out the best clustered and non-clustered candidates and prove a performance gain.
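For later readers, the kind of change being weighed up here would look something like this; the table and column names are made up:

```sql
-- Cluster on the ever-increasing integer key...
ALTER TABLE dbo.Jobs
    ADD CONSTRAINT PK_Jobs PRIMARY KEY CLUSTERED (JobID);

-- ...and index the handful of frequently filtered columns separately
CREATE NONCLUSTERED INDEX IX_Jobs_StatusDate
    ON dbo.Jobs (StatusDate);
```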
July 25, 2008 at 7:16 am
When I put the clustered indexes back on the primary keys, that's when performance DEFINITELY decreased, which I found rather shocking.
1) What was the fill factor used for the PK?
2) Does this table receive many updates, and if so, could data be larger after the update than before? If so, page splits could be a culprit here. Have/do you monitor for fragmentation?
3) Was there a different clustered index that had to be removed when you shifted the CI to the PK?
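On the fragmentation question in point 2, a check along these lines (SQL 2005 syntax; the table name is illustrative) shows both the fragmentation level and the fill factor in one pass:

```sql
-- Fragmentation and fill factor for every index on the table
SELECT  i.name,
        ips.index_type_desc,
        ips.avg_fragmentation_in_percent,
        ips.page_count,
        i.fill_factor
FROM sys.dm_db_index_physical_stats(
         DB_ID(), OBJECT_ID('dbo.Jobs'), NULL, NULL, 'LIMITED') AS ips
JOIN sys.indexes AS i
    ON  i.object_id = ips.object_id
    AND i.index_id  = ips.index_id;
```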
Best,
Kevin G. Boles
SQL Server Consultant
SQL MVP 2007-2012
TheSQLGuru on googles mail service