May 28, 2007 at 6:43 am
Rob,
This is kind of like the SQL equivalent of a House episode. Nothing against Access developers, but I often find that these databases are built by people with business knowledge first and SQL design knowledge second.
Some of these responses have me wondering why people are asking you to re-index the tables, etc. No offense, but they are asking you to operate on the patient based on the symptoms without having an idea of what the problem is.
Regarding the slowness: the first thing I think you should look at is SQL Profiler. Profiler will tell you quite a bit about performance and potentially what is actually happening.
More than likely you have some large tables that are being table-scanned due to missing or incorrect indexes, or locks on objects that are causing performance degradation. You may just have a poorly designed database to start with.
Certainly, the first thing you can do is just run some of the views you find listed to get an idea of performance as well.
Just google "SQL Profiler" and "execution plan" and you'll get an idea of what to start looking for and how to attack the problem.
May 28, 2007 at 1:23 pm
I have recruited a local database designer to assist me with this. SQL Profiler looks like a powerful tool, but you wouldn't put a grenade launcher into someone's hands unless you were confident in their ability to use it properly, would you? I can read how to use SQL Profiler and could probably gather some useful data. I'm not sure what to look at, but I have found some decent articles on the topic.
So, then what? I have all this data to look at, but I have no way to interpret the data and certainly no way to make recommendations on correcting whatever wrongs there may be therein.
I have found that, by using Google and/or by contacting my favorite forums, I can find out pretty much what I need to know to get through a networking or OS issue. However, it is clear that, without the proper background, you can't just read a few articles about SQL Server and its tools and then troubleshoot a performance issue.
I've watched a lot of TV crime dramas, but I still can't defend you if you are accused of something...
I will keep updating this thread as I move through this issue - watching me learn and try things might be amusing for you. As much as I have appreciated your input throughout, it has taught me more about my inability to properly deal with this than it has about actually doing so.
This stuff is very interesting, but I just don't have the time or perhaps even the aptitude to learn enough to be effective. My hat's off to those of you who can make a living doing this - you must have pretty twisted minds!
May 29, 2007 at 10:58 am
Rob,
Don't worry. Profiler isn't a grenade. @=) You can't kill SQL by using it unless your system is so screwed up that it's hardly running at all and from what I understand, it's only one app that's the problem.
When running Profiler, you should be aware that it will add a little bit of extra slowdown to the system, but it is essential for finding things like long-running queries. So you might as well just buckle down and use it. I recommend using the following in Profiler:
Save your trace to a file or a table in a different database than the one you're profiling. I prefer to save it to a local file. Keeps my database contention down to a minimum. Though others might advise saving the trace to a file on the server. This is a preference issue. Also, Events, Columns and Filters tend to be a matter of preference, but the most commonly used (and most helpful to you in this scenario) would be the following:
Events to monitor: Security Audit (Audit Login, Audit Logout), Sessions (ExistingConnection), T-SQL (SQL:BatchCompleted, SQL:BatchStarting).
Columns to monitor: TextData, LoginName, CPU, Duration, SPID, StartTime, EndTime.
Filters: put Duration on GreaterThanOrEqual 10000 (Duration is in milliseconds here, so that's 10 seconds).
The above should give you a list of which logins might be causing the slowness as well as which queries, functions and stored procedures might be the issue. This is helpful because it'll tell you whether it's the application code that's causing the problem as opposed to any database issues. I'm betting it's the app code or a bunch of poorly written procs/functions created for and used by the app. Anyway, run Profiler for an hour during business hours and have the end users call or email you with the precise times they experience slowdowns. Correlate the two once your trace is stopped and this should help you come closer to pinning the problem down. If you don't see anything with a Duration of 10 seconds or longer (i.e., if the entire Duration column is blank or NULL), bring your Duration filter down to 8 seconds, then 5 seconds. Anything below 5 seconds shouldn't be causing major issues.
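As an aside, if the Profiler GUI proves awkward, the same trace can also be scripted server-side. This is only a sketch of the SQL 2000 sp_trace_* approach (the output path is hypothetical, and the numeric IDs shown are the standard ones for SQL:BatchCompleted and the columns listed above):

```sql
-- Sketch: server-side trace roughly equivalent to the Profiler setup above (SQL 2000).
-- The output path is hypothetical; adjust for your server. '.trc' is appended automatically.
DECLARE @TraceID int, @maxfilesize bigint, @duration bigint
SET @maxfilesize = 50   -- MB per trace file
EXEC sp_trace_create @TraceID OUTPUT, 0, N'C:\Traces\SlowQueries', @maxfilesize

-- Event 12 = SQL:BatchCompleted; turn on the columns of interest
EXEC sp_trace_setevent @TraceID, 12, 1,  1   -- TextData
EXEC sp_trace_setevent @TraceID, 12, 11, 1   -- LoginName
EXEC sp_trace_setevent @TraceID, 12, 18, 1   -- CPU
EXEC sp_trace_setevent @TraceID, 12, 13, 1   -- Duration
EXEC sp_trace_setevent @TraceID, 12, 12, 1   -- SPID
EXEC sp_trace_setevent @TraceID, 12, 14, 1   -- StartTime
EXEC sp_trace_setevent @TraceID, 12, 15, 1   -- EndTime

-- Filter: Duration (column 13) >= 10000 ms; operator 4 = greater than or equal
SET @duration = 10000
EXEC sp_trace_setfilter @TraceID, 13, 0, 4, @duration

-- Start the trace (status 1 = start; 0 = stop, 2 = close and delete the definition)
EXEC sp_trace_setstatus @TraceID, 1
SELECT @TraceID   -- note this ID so you can stop the trace later
```

Stopping it later is the same call with status 0, then 2, using the saved trace ID.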
After all that, if you still can't find the issue, add Deadlock events to your Profiler trace to see if you're getting contention from that problem. Deadlocks are when connections are fighting each other for resources and one connection gets killed because it lost the fight.
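A related technique (a common SQL 2000 practice, not something from the posts above) is to have the server write deadlock details to its error log, which saves re-running a trace when deadlocks only happen occasionally:

```sql
-- Sketch: log deadlock participants and resources to the SQL Server error log (SQL 2000).
-- The -1 makes the trace flag apply to all connections, not just the current one.
DBCC TRACEON (1204, -1)
-- ...later, to turn it back off:
-- DBCC TRACEOFF (1204, -1)
```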
Let us know what you find with Profiler and we can help you further with any questions you have on it.
May 29, 2007 at 12:23 pm
I suspect it is the Workload Governor in MSDE that is killing your performance. You can test this by running the following in a SQL window (during a busy time):
DBCC CONCURRENCYVIOLATION
If the third line has non-zero numbers, then you have concurrency violations. The numbers on the 2nd line show the number of concurrent operations over the limit. For example, if you have 400 under number 1, you had 400 violations over the limit of 8 (8 for MSDE 2000, 5 for older versions).
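For what it's worth, the command also accepts arguments; it's an undocumented DBCC specific to MSDE / SQL Server 2000 (removed along with the workload governor in SQL 2005), so treat this as a sketch:

```sql
-- Show the current concurrency-violation counters (MSDE / SQL Server 2000 only)
DBCC CONCURRENCYVIOLATION
-- Reset the counters so you can measure a fresh busy period:
DBCC CONCURRENCYVIOLATION (RESET)
```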
If you are over the governor limit, then things will be slow and an upgrade to true SQL Server would help lots.
Good luck
May 29, 2007 at 1:30 pm
I recommended going through the 'short-list' as opposed to diving right into Profiler because this will give him the most bang for the least amount of work. You are correct, we are asking him to operate without knowing what the problem is, but isn't it worth making sure the indexes are optimally packed with current stats before spending hours on Profiler? At this point, you don't even know that it is SQL Server do you? What if it is CPU? What if it is disk? You are asking someone with a huge learning curve to start out profiling? See this post for backup: http://www.sqlservercentral.com/forums/shwmessage.aspx?forumid=65&messageid=361072#bm362205
Profiler is a great tool, but why not eliminate the easy stuff before spending the time with profiler when the learning curve for Rob will be so high? Just like when you have a PC problem, do you pull the motherboard out and test all of the electrical components and busses or do you first make sure the box is plugged into the wall?
Rob,
1. Read the linked post.
2. Follow the SQL Short list
3. Eliminate your server from being the performance bottleneck as previously described.
4. Run Profiler.
May 29, 2007 at 2:12 pm
Because I was tired of being a total dunce, I decided to play with this stuff a bit on the bench instead of going to a client's site and learning on live data.
I loaded SQL Server 2000 on a Windows 2000 SP4 server I have running in Virtual PC. So far, so good. I was hoping to use some information I found in an article on the SQL Profiler tool: it listed a .vbs file to create a dummy load on the Northwind database so you could read the resulting trace. It wouldn't run. Messed with that for an hour and said to hell with it.
Then I decided to try the suggestion of bnordberg and try:
"running the following in a SQL window (during a busy time): DBCC CONCURRENCYVIOLATION"
After about a half-hour of trying to find out what a "sql window" is, I finally decided what he meant was to run osql at a command prompt. (If that conclusion was wrong, it's just another indication of how my day is going...) So I screwed around with that for a while, but I keep getting:
"Login failed for user 'Administrator'. Reason: Not associated with a trusted
SQL Server connection."
I found a number of documents (all useless) on providing rights to the user, changing the local security policy, changing the connection to named pipes and changing authentication to sql and windows.
So, I wasted an afternoon messing with this stuff and am still unable to run a simple command-line statement.
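[Editor's note: that particular error usually means the connection attempted SQL Server authentication against a server set to Windows-only authentication. A sketch of the two osql variants, with a hypothetical server name:]

```shell
# Trusted (Windows) connection -- usually what you want on a default install:
osql -S MYSERVER -E -Q "SELECT @@VERSION"

# SQL authentication -- only works if the server is in "SQL and Windows" mode:
osql -S MYSERVER -U sa -P yourpassword -Q "SELECT @@VERSION"
```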
You folks can surely understand why I am reluctant to visit a client's site and ask him to allow me to run some test commands on his server when I have no idea how to even start up a SQL Window.
Granted, I haven't read the posting that John has suggested yet, but if it's anything like any of the other reading I've done (and that's a LOT by now!), it will assume I know such things as how to open a command prompt, and whether or not I can run such and such a command during production hours or whether to schedule some down time.
I have often successfully resolved a networking or OS issue that I had never seen before, figuring that I could work through it given my experience. However, without some sort of SQL Server 101 (which I can't find ANYWHERE) to go through first, I can only assume that each and every trip to my client's office to perform SQL testing will end up with me not accomplishing anything and looking like a dope in the process. Books Online is a great reference, but like everything else out there, you can't just start at page 1 and read through until you have enough information to do what you want - it assumes prior knowledge that I just don't have.
No, I'm not giving up, I am just venting a bit before I take a deep breath and invest another day or two in learning how to get to a &^$? command prompt, run a few commands and overcome the next roadblock which will be child's play to you pros, but painfully time-consuming to work through for a neophyte.
As always, thanks for your input - I know it's going to be helpful - if I don't jump off a bridge first!
May 29, 2007 at 3:35 pm
It sounds like you need some help. The other consultant wanted to re-build the system from the ground up, but you should be able to find someone who can just help you troubleshoot the darn thing. Feel free to send me a private message if you want to discuss getting my help outside of SSC. In the mean time, keep reading.
May 29, 2007 at 10:36 pm
After about a half-hour of trying to find out what a "sql window" is, I finally decided what he meant was to run osql at a command prompt. (If that conclusion was wrong, it's just another indication of how my day is going...) So I screwed around with that for a while, but I keep getting:
Yeah... that conclusion is wrong... they meant for you to run it from Query Analyzer... if you don't know what that is, write back...
--Jeff Moden
Change is inevitable... Change for the better is not.
May 30, 2007 at 12:19 am
The MSDE box was a test box, not the production box. If you read the first post, he is using SQL 2000 SP4 as the production db.
Gail Shaw
Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability
May 30, 2007 at 12:22 am
Rob, sounds like you need some on-site help. Whereabouts are you based?
Gail Shaw
Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability
May 30, 2007 at 1:31 am
Now, wouldn't it be an absolute hoot if they were looking at the MSDE box in Query Analyzer or EM thinking that it was the production box? Not so uncommon a problem with folks new to SQL Server... I've seen it twice (maybe 3 times if this turns out to be that problem)
--Jeff Moden
Change is inevitable... Change for the better is not.
May 30, 2007 at 6:03 am
This post is getting quite long, so you may have missed it: as GilaMonster points out, the production machine has never had MSDE installed on it. So there's no workload governor in the mix.
At one point, the database designer moved the database to an XP Pro workstation, installed the MSDE on it, instructed the client to have only 5 users on it at a time and then "proved" that the problem wasn't database design. They then stated that the only proper way to run their product was on a separate server.
My client isn't cheap, but he's no fool either. If I could show him that their suggestion has merit, he'll fork over the cash for a separate box. However, running a database successfully for a week with 5 users on one box doesn't prove that the other box has a problem when trying to run it with 20 users.
As a recap: This database is less than 800 MB in size, and has 20 users during peak usage times. My client has used this company's products for years, but has had performance issues since they went from an Access database to SQL, about 18 months ago. The performance started a bit slow, but became a problem (20 to 30 second screen change times) over time. The performance isn't consistent - it's never fast, but goes through various degrees of slow. Even now, it's not that bad - usable but still rather sluggish. Lots of user complaints, however, as they dispatch truck drivers and are often under brutal time constraints.
More background: It's an HP ML350 G3 server with mirrored SCSI drives (hardware RAID 1). It has 4 GB of RAM. The OS is Small Business Server 2003. The vendor figures Exchange 2003 SP2 is eating up all the RAM, but I have yet to see the system with less than 800 MB of available RAM, and it's often more.
I have engaged the help of a local database designer to assist me, along with the assistance I have been getting from this forum. Unfortunately, he has been very busy lately, and hasn't been as responsive as I'd like. I have checked in with my client - he isn't particularly frantic over this (yet). He has confidence that I will help him get to the bottom of the problem.
I am just trying to come through for him - you gotta appreciate it when a client puts so much faith in you, and you feel somewhat inadequate when it's not your forte. Clients figure - hey, it's a computer, you MUST be able to fix it!
However, I never sell expertise I don't have. He knows this isn't my field, but I am still his computer "go-to" guy. I told him I'd help him find a solution to the problem, and I will. I figured I'd run a few command-line commands, read a Knowledge Base document or two and, Voila! Instant DBA! Doesn't work that way, does it?
May 30, 2007 at 6:22 am
Unfortunately, no. It doesn't work that way.
RAID 1 config? That might be part of your problem. With RAID 1, every write has to be committed to both disks, which can slow writes down compared to a single drive, particularly without a good write cache. Open up Query Analyzer and run 'sp_who2' (without the single quotes) during the busiest time of the day. Do you see a lot of "LAZY WRITER" commands? If so, that's a clue it could be disk I/O on the RAID 1. Also, check the size of the paging file. We recently had a similar issue where our paging file simply wasn't big enough. Even on a multi-GB memory system, we had to set ours to 1.5 times the RAM so we could get our system working at peak efficiency.
A good way to test is get a server with a similar config, but use a single disk or a different flavor of RAID. Do NOT break the mirror on the production box to test!!! Anyway, you get all the users on the test box doing their thing and if the app is faster, you know it's the RAID that's slowing you down.
If you cannot do the above, check Windows PerfMon for disk queue lengths and I/O issues. If that looks okay, then go back to sp_who2. Look at the logins to see what / who is connected to what databases (dbName column). Check the BlkBy column to see if there are any numbers in there. BlkBy means "Blocked By". If that column is all nulls, then you're not having locking issues. If you see a bunch of numbers (or even 1 number) in that column, that means that particular SPID is locking out all other queries. If you establish that the application is blocking itself or other users, you can use this as Ammo when you or your client calls the company that wrote the app.
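If scanning the sp_who2 grid by eye gets tedious, the same blocking information can be pulled straight from sysprocesses. This is a sketch against the SQL 2000 system table; the column names are real, the filter is just one way to slice it:

```sql
-- Sketch: list only the sessions that are currently blocked (SQL 2000).
-- 'blocked' holds the SPID of the blocker; 0 means the session is not blocked.
SELECT spid, blocked AS BlkBy, loginame, DB_NAME(dbid) AS dbName,
       waittime, lastwaittype, cmd
FROM master..sysprocesses
WHERE blocked <> 0
```

An empty result set during a slow period is itself useful information: it points you away from locking and toward the app code or the hardware.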
And lastly, If you see nothing but NULLS in the BlkBy column, I highly recommend you run Profiler as I suggested earlier. It could be the way the Vendor set up SQL Server. It could be the application code itself. Or it could be procs / functions in the server.
May 30, 2007 at 6:33 am
Having SQL Server on the same machine as Exchange Server is not a recommended practice.
See if you can find out how much memory SQL is using and how much exchange is using.
I'd recommend using Performance Monitor for this. The counters you want are under the Process performance object. Have a look at Working Set and Private Bytes for the sqlservr process and whatever Exchange's main executable is.
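If PerfMon isn't handy, SQL Server's side of the picture is also visible from Query Analyzer via the sysperfinfo system table (SQL 2000; the LIKE filter is just a convenience so you don't have to match the counter names exactly):

```sql
-- Sketch: how much memory SQL Server thinks it is using vs. wants (SQL 2000).
-- Values are in KB.
SELECT counter_name, cntr_value
FROM master..sysperfinfo
WHERE object_name LIKE '%Memory Manager%'
  AND counter_name LIKE '%Server Memory%'
```

This only covers SQL Server's half; you'll still need PerfMon (or Task Manager) to see what Exchange is consuming.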
Gail Shaw
Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability
May 30, 2007 at 6:41 am
Yikes! I totally missed the Exchange reference. (Good catch, Gail!) That too could be a huge part of your problem. I've never heard of anyone successfully having Exchange cohabitate with SQL Server on the same box. They fight for resources, which slows everything down.