January 10, 2009 at 1:51 pm
Jeff Moden (1/10/2009)
... I usually expect that parallelism kicking in will only make things faster, but after seeing what parallel processing did to this rather simple lookup, I may have to do my final performance checking on the targeted production servers themselves.
This is definitely recommended, Jeff. Parallelism is a big bugaboo for SQL Server. Usually it helps, but sometimes it hurts. And sometimes it hurts a lot.
And I have yet to hear of any rhyme or reason behind it, so yeah, performance testing on the actual target configuration is pretty critical. And if you cannot test on it, then at least monitor it.
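If you need to pin a particular query to a serial plan while you compare runs, the usual levers are a MAXDOP hint or the server-wide cost threshold. Just a sketch of the general idea; the table name below is a placeholder:

-- Force a serial plan for this one statement (placeholder table name)
SELECT COUNT(*)
FROM dbo.TestTable
OPTION (MAXDOP 1);

-- Or raise the server-wide cost threshold so cheap plans stay serial
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'cost threshold for parallelism', 25;
RECONFIGURE;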
[font="Times New Roman"]-- RBarryYoung[/font], [font="Times New Roman"] (302)375-0451[/font] blog: MovingSQL.com, Twitter: @RBarryYoung[font="Arial Black"]
Proactive Performance Solutions, Inc. [/font][font="Verdana"] "Performance is our middle name."[/font]
January 10, 2009 at 3:27 pm
Lynn Pettis (1/10/2009)
GilaMonster (1/10/2009)
Lynn Pettis (1/10/2009)
I need to get it to the Dean of Electrical Engineering and Computer Science at Colorado Tech here in Colorado Springs. I am trying to get a part-time position as an adjunct professor and this is critical. I already talked to him yesterday for a few minutes and I really want to do this.
Good luck. If you get it, will you be able to use the Professor title?
You know, I don't know? Maybe.
Heh... WOW... Professor Maybe... champion of the "It Depends" technique. 😛
Just kidding a bit. I'm tickled to death for you. Forget the title, the word "Professor" has a pretty nice ring to it. Which field would the professorship be for? Computer Science in general or something a bit more specific?
Man, this is awesome... when you get the chance, you gotta tell us about it. Congrats, Lynn.
--Jeff Moden
Change is inevitable... Change for the better is not.
January 10, 2009 at 3:35 pm
Lynn Pettis (1/10/2009)
It also could be due to disk fragmentation.
Ordinarily, I'd say that's a definite possibility... but not this time. I know for a fact that my desktop box continuously lives in the realm of less than 2% fragmentation and that my MDF/LDF files are never fragmented. Further, this table was built and then the clustered index was added, virtually guaranteeing that the table's fragmentation level is slim to none and slim just left the building.
Nope... I don't believe it's the fragmentation on your dev box that caused 14 million logical reads, because my box had the exact same number. I think someone may have hit the "run SQL fast" button on whatever marvelous disk controller and hard disk system you have on your home box, because that's the only place where the number of logical reads was less than the number of rows in the table (4.5 million).
--Jeff Moden
Change is inevitable... Change for the better is not.
January 10, 2009 at 4:03 pm
Actually, Jeff, I'm talking about physical fragmentation of the database file itself, not the table in the database. Two completely separate fragmentations here. I'll have to go take a look.
January 10, 2009 at 4:10 pm
Then, maybe not. I ran an analysis on the drive and the report didn't show any fragmentation on the DB files, just the backup and export files we are currently creating from our Stage/ODS databases.
Any ideas on what I should look at to see what else it may be, like specific counters and such?
January 10, 2009 at 5:09 pm
Lynn Pettis (1/10/2009)
Actually, Jeff, I'm talking about physical fragmentation of the database file itself, not the table in the database. Two completely separate fragmentations here. I'll have to go take a look.
I was actually talking about 4 types of fragmentation... physical and logical fragmentation on the hard disk (i.e., the state of the actual MDF/LDF files at the OS level), plus the physical (clustered index) and logical condition of the other indexes.
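For the index side of that, something like this is the quick check I'd run (just a sketch; it assumes the current database and a made-up table name):

-- Fragmentation of every index on one table (placeholder object name)
SELECT  i.name AS index_name,
        ips.index_type_desc,
        ips.avg_fragmentation_in_percent,
        ips.page_count
FROM    sys.dm_db_index_physical_stats(DB_ID(), OBJECT_ID(N'dbo.TestTable'),
                                       NULL, NULL, 'LIMITED') AS ips
JOIN    sys.indexes AS i
        ON  i.object_id = ips.object_id
        AND i.index_id  = ips.index_id;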
After closer examination of the results you posted, the MAXDOP 1 run ran within the tolerances that I think any decent machine should be able to achieve... the first run, which apparently suffered from parallelism, was a real surprise to me and that's why I said it looks like your dev box may have a problem. Like I said, I no longer believe that to be the case.
The interesting part is how relatively few logical reads your home machine had to make... it must have an absolutely awesome amount of both memory and disk cache, not to mention lightning-fast throughput.
--Jeff Moden
Change is inevitable... Change for the better is not.
January 11, 2009 at 1:56 am
Lynn Pettis (1/10/2009)
Actually, Jeff, I'm talking about physical fragmentation of the database file itself, not the table in the database. Two completely seperate fragmentations here. I'll have to go take a look.
But that's not going to cause higher logical IOs. Physical, maybe, but logical IOs are page reads from memory. By that point the data's already off disk and, as such, disk properties are no longer relevant.
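The easy way to watch that split is STATISTICS IO: logical reads are pages touched in the buffer pool, physical reads are the ones that actually had to come off disk. A minimal sketch, with a placeholder table name:

SET STATISTICS IO ON;

SELECT COUNT(*)
FROM dbo.TestTable;   -- placeholder table name

-- The Messages tab then reports something along the lines of:
-- Table 'TestTable'. Scan count 1, logical reads ..., physical reads ..., read-ahead reads ...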
Gail Shaw
Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability
January 11, 2009 at 2:07 am
Jeff Moden (1/10/2009)
Lynn Pettis (1/10/2009)
Actually, Jeff, I'm talking about physical fragmentation of the database file itself, not the table in the database. Two completely separate fragmentations here. I'll have to go take a look.
I was actually talking about 4 types of fragmentation... physical and logical fragmentation on the hard disk (i.e., the state of the actual MDF/LDF files at the OS level), plus the physical (clustered index) and logical condition of the other indexes.
After closer examination of the results you posted, the MAXDOP 1 run ran within the tolerances that I think any decent machine should be able to achieve... the first run, which apparently suffered from parallelism, was a real surprise to me and that's why I said it looks like your dev box may have a problem. Like I said, I no longer believe that to be the case.
The interesting part is how relatively few logical reads your home machine had to make... it must have an absolutely awesome amount of both memory and disk cache, not to mention lightning-fast throughput.
Actually, Jeff, the article you read gave you a glimpse at my machine. Three year old 3.0 GHz P4 hyperthreaded CPU, 2 GB RAM, 2 SATA disk drives (each a master on its controller).
January 11, 2009 at 9:18 am
Hmmmm... is there a way for you to turn off the hyperthreading long enough to run another MAXDOP 1 test? I've got a feeling and would like to find out if I'm right...
--Jeff Moden
Change is inevitable... Change for the better is not.
January 11, 2009 at 9:53 am
Beating accepted. 😀
I just remember seeing the solution in his book. It's not something I played with a lot myself, mainly because I long ago stopped worrying about gaps in an identity chain.
Great test.
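For anyone who does still need to hunt the gaps down, the usual shape of the query is something like this (just a sketch, not necessarily the book's version; table and column names are made up):

-- Report the first missing value after each gap in an identity column
SELECT  t.ID + 1 AS GapStart
FROM    dbo.SomeTable AS t
WHERE   NOT EXISTS (SELECT 1
                    FROM   dbo.SomeTable AS t2
                    WHERE  t2.ID = t.ID + 1)
  AND   t.ID < (SELECT MAX(ID) FROM dbo.SomeTable);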
"The credit belongs to the man who is actually in the arena, whose face is marred by dust and sweat and blood"
- Theodore Roosevelt
Author of:
SQL Server Execution Plans
SQL Server Query Performance Tuning
January 11, 2009 at 10:03 am
Jeff Moden (1/11/2009)
Hmmmm... is there a way for you to turn off the hyperthreading long enough to run another MAXDOP 1 test? I've got a feeling and would like to find out if I'm right...
Actually, I was sort of thinking along the same lines. I'm not sure how to turn off hyperthreading, but I could put the OPTION (MAXDOP 1) on the test code and give that a try. It may be a while; the wife is on the warpath and I have things to get done around the house.
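A quick way to see what SQL Server thinks it has before and after any BIOS change, just as a sketch (using the cpu_count and hyperthread_ratio columns of sys.dm_os_sys_info):

-- Logical CPU count and hyperthread ratio as SQL Server sees them
SELECT cpu_count, hyperthread_ratio
FROM   sys.dm_os_sys_info;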
January 12, 2009 at 6:24 am
Grant Fritchey (1/11/2009)
Beating accepted. 😀
I just remember seeing the solution in his book. It's not something I played with a lot myself, mainly because I long ago stopped worrying about gaps in an identity chain.
Great test.
Man, do I ever agree with that... the only time I've done such a thing for real is when a project I was working on had preallocated membership numbers and they wanted to assign them randomly.
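The general shape of that random assignment, just as a sketch (the table, column, and flag here are all invented for illustration):

-- Pick one unassigned, preallocated number at random
SELECT TOP (1) MemberNumber
FROM   dbo.PreallocatedNumbers
WHERE  Assigned = 0
ORDER BY NEWID();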
--Jeff Moden
Change is inevitable... Change for the better is not.