Are the posted questions getting worse?

  • Am I the only one who likes to actually look at the regular server tools to verify things are working and get a handle on what my day will be like? Or does anyone else feel like there's something to be said about just taking a few minutes out of the start of the day to make sure everything looks good?

    No. My big bugbear at my last employer was that the head of IT (not having a technical background) didn't understand why I was looking at the servers when there was an automated email showing if any jobs failed, free disk space, etc. I think any automated response is a high-level indicator of how things are; there may be a more nuanced picture when looking in detail.

    -------------------------------
    Posting Data Etiquette - Jeff Moden
    Smart way to ask a question
    There are naive questions, tedious questions, ill-phrased questions, questions put after inadequate self-criticism. But every question is a cry to understand (the world). There is no such thing as a dumb question. ― Carl Sagan
    I would never join a club that would allow me as a member - Groucho Marx

  • Brandie Tarvin wrote:

    A few years ago, when I was getting ready for maternity leave, we hired a contractor to fill my position. While I was training them, I showed them how I start my morning (logging into all servers, checking the Job Activity Monitors on each, etc.) and their first complaint was "Why isn't this automated?". I suppose that's a valid complaint, but I explained my reasoning for doing it and told them if they had a better way, to feel free to come up with something.

    So they built a program to scan the logs of several servers and output an email to the DBA team with the 25 most important daily jobs and their status. The email includes job name, date it last ran, and a little "light" to indicate "Done," "Running," and "Failed." Which was kind of neat but I never really liked it for some reason.

    This morning I'm going through the Job Activity Monitors (because I'm awake earlier than the job runs) and as I discover a new job that failed and doesn't have notifications set up, it hits me why the contractor's automation bugs me so much. It doesn't give me a true feeling of the state of the server jobs. It doesn't auto-add new jobs, it doesn't cover all jobs, and the last run date is so small on it that I don't notice it half the time and might mistake a successful run from 3 days ago for a successful overnight run if I'm not paying attention.

    Am I the only one who likes to actually look at the regular server tools to verify things are working and get a handle on what my day will be like? Or does anyone else feel like there's something to be said about just taking a few minutes out of the start of the day to make sure everything looks good?

    When I was responsible for data safety, I developed a job which backed up all databases on the server.

    The first step was to scan the list of databases in the master database and compare it to the list saved in a user table in a "DBA" database. That table contained the set of backup parameters defined for every database. If there was a database on the server not mentioned in that table, the job added it with a "default" set of backup parameters.

    Of course, we could edit those parameters at any stage. But at least no database would go missing from regular backups, and no database would grow an extremely large TRN file because some junior developer had forgotten about LOG backups.
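
    A minimal T-SQL sketch of that registration step, assuming a hypothetical DBA.dbo.BackupConfig table for the per-database parameters (all object and column names here are illustrative, not the original implementation):

    -- Sketch: register any database missing from the config table with
    -- "default" backup parameters (daily full backup in this example).
    INSERT INTO DBA.dbo.BackupConfig (DatabaseName, BackupType, FrequencyMinutes)
    SELECT d.name, N'FULL', 1440
    FROM master.sys.databases AS d
    WHERE d.name <> N'tempdb'              -- tempdb cannot be backed up
      AND d.state_desc = N'ONLINE'
      AND NOT EXISTS (SELECT 1
                      FROM DBA.dbo.BackupConfig AS bc
                      WHERE bc.DatabaseName = d.name);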

    That's what your contractor forgot to include in their automation: a scan for the full list of active jobs.
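
    Such a scan could start from msdb so that coverage is every enabled job rather than a hand-picked 25; a hedged sketch:

    -- Sketch: last outcome of every enabled Agent job.
    SELECT j.name,
           h.run_date,                      -- int, yyyymmdd format
           CASE h.run_status WHEN 0 THEN 'Failed'
                             WHEN 1 THEN 'Succeeded'
                             WHEN 2 THEN 'Retry'
                             WHEN 3 THEN 'Canceled'
           END AS last_outcome
    FROM msdb.dbo.sysjobs AS j
    OUTER APPLY (SELECT TOP (1) jh.run_date, jh.run_status
                 FROM msdb.dbo.sysjobhistory AS jh
                 WHERE jh.job_id = j.job_id
                   AND jh.step_id = 0       -- step 0 = overall job outcome
                 ORDER BY jh.run_date DESC, jh.run_time DESC) AS h
    WHERE j.enabled = 1;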

    And you may guess - I don't like to log in to servers to verify that everything is working. I prefer to look at things which are NOT working.

    "Mr. Corleone is a man who insists on hearing bad news immediately."

    _____________
    Code for TallyGenerator

  • Sergiy wrote:

    And you may guess - I don't like to log in to servers to verify that everything is working. I prefer to look at things which are NOT working.

    "Mr. Corleone is a man who insists on hearing bad news immediately."

    I look at it all, though. When I'm on the JAM, I can instantly see failures, but I also look at jobs that have been executing too long (stuff that SHOULDN'T be running at the time I'm examining the JAM but is), jobs that may have been disabled and should have run but didn't, and jobs that ran that shouldn't have run (things that need to be disabled or suddenly have a different schedule).

    Automated jobs just don't give me that picture.

    Brandie Tarvin, MCITP Database Administrator
    LiveJournal Blog: http://brandietarvin.livejournal.com/
    On LinkedIn!, Google+, and Twitter.
    Freelance Writer: Shadowrun
    Latchkeys: Nevermore, Latchkeys: The Bootleg War, and Latchkeys: Roscoes in the Night are now available on Nook and Kindle.

  • Brandie Tarvin wrote:

    I look at it all, though. When I'm on the JAM, I can instantly see failures, but I also look at jobs that have been executing too long (stuff that SHOULDN'T be running at the time I'm examining the JAM but is), jobs that may have been disabled and should have run but didn't, and jobs that ran that shouldn't have run (things that need to be disabled or suddenly have a different schedule).

    Automated jobs just don't give me that picture.

    Well, actually, there is nothing amongst your criteria which could not possibly be programmed. It's just a matter of formalising definitions like "running too long".
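
    For illustration only, "running too long" could be formalised against msdb along these lines (the flat 60-minute threshold is an arbitrary assumption; a per-job baseline built from job history would be smarter):

    -- Sketch: jobs still executing in the current Agent session
    -- beyond a fixed threshold.
    SELECT j.name,
           ja.start_execution_date,
           DATEDIFF(MINUTE, ja.start_execution_date, GETDATE()) AS minutes_running
    FROM msdb.dbo.sysjobactivity AS ja
    JOIN msdb.dbo.sysjobs AS j ON j.job_id = ja.job_id
    WHERE ja.session_id = (SELECT MAX(session_id) FROM msdb.dbo.syssessions)
      AND ja.start_execution_date IS NOT NULL
      AND ja.stop_execution_date IS NULL
      AND DATEDIFF(MINUTE, ja.start_execution_date, GETDATE()) > 60;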

    You may even raise an alarm or trigger some actions (or additional checks) when tempdb grows beyond expected limits, a log file grows too fast, or an identity column has used more than a certain percentage of its capacity.
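
    The identity check in particular is easy to script; a rough sketch, with an assumed 80% threshold and only the integer types covered:

    -- Sketch: identity columns that have consumed most of their type's range.
    WITH id_usage AS (
        SELECT OBJECT_NAME(ic.object_id) AS table_name,
               ic.name                   AS column_name,
               CONVERT(float, ic.last_value) * 100.0 /
               CASE t.name WHEN 'tinyint'  THEN 255.0
                           WHEN 'smallint' THEN 32767.0
                           WHEN 'int'      THEN 2147483647.0
                           WHEN 'bigint'   THEN 9223372036854775807.0
               END AS pct_used
        FROM sys.identity_columns AS ic
        JOIN sys.types AS t ON t.user_type_id = ic.system_type_id
        WHERE ic.last_value IS NOT NULL
    )
    SELECT table_name, column_name, pct_used
    FROM id_usage
    WHERE pct_used > 80.0
    ORDER BY pct_used DESC;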

    You can make those automated jobs quite intelligent. Enough to let you have a vacation. 🙂
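
    Even the "disabled job that should have run" case from the list above reduces to a join; a hedged sketch:

    -- Sketch: disabled jobs that still have an enabled schedule,
    -- i.e. jobs that "should have run but didn't".
    SELECT j.name, s.name AS schedule_name
    FROM msdb.dbo.sysjobs AS j
    JOIN msdb.dbo.sysjobschedules AS js ON js.job_id = j.job_id
    JOIN msdb.dbo.sysschedules AS s ON s.schedule_id = js.schedule_id
    WHERE j.enabled = 0
      AND s.enabled = 1;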

    _____________
    Code for TallyGenerator

  • Sergiy wrote:

    Brandie Tarvin wrote:

    A few years ago, when I was getting ready for maternity leave, we hired a contractor to fill my position. While I was training them, I showed them how I start my morning (logging into all servers, checking the Job Activity Monitors on each, etc.) and their first complaint was "Why isn't this automated?". I suppose that's a valid complaint, but I explained my reasoning for doing it and told them if they had a better way, to feel free to come up with something.

    So they built a program to scan the logs of several servers and output an email to the DBA team with the 25 most important daily jobs and their status. The email includes job name, date it last ran, and a little "light" to indicate "Done," "Running," and "Failed." Which was kind of neat but I never really liked it for some reason.

    This morning I'm going through the Job Activity Monitors (because I'm awake earlier than the job runs) and as I discover a new job that failed and doesn't have notifications set up, it hits me why the contractor's automation bugs me so much. It doesn't give me a true feeling of the state of the server jobs. It doesn't auto-add new jobs, it doesn't cover all jobs, and the last run date is so small on it that I don't notice it half the time and might mistake a successful run from 3 days ago for a successful overnight run if I'm not paying attention.

    Am I the only one who likes to actually look at the regular server tools to verify things are working and get a handle on what my day will be like? Or does anyone else feel like there's something to be said about just taking a few minutes out of the start of the day to make sure everything looks good?

    When I was responsible for data safety, I developed a job which backed up all databases on the server.

    The first step was to scan the list of databases in the master database and compare it to the list saved in a user table in a "DBA" database. That table contained the set of backup parameters defined for every database. If there was a database on the server not mentioned in that table, the job added it with a "default" set of backup parameters.

    Of course, we could edit those parameters at any stage. But at least no database would go missing from regular backups, and no database would grow an extremely large TRN file because some junior developer had forgotten about LOG backups.

    That's what your contractor forgot to include in their automation: a scan for the full list of active jobs.

    And you may guess - I don't like to log in to servers to verify that everything is working. I prefer to look at things which are NOT working.

    "Mr. Corleone is a man who insists on hearing bad news immediately."

    Gotta agree with Sergiy here. Heck, I just generalize the premise to see which option I like best.

    So I have a program that is malfunctioning.

    Do I:

    [ ] Reason out why the program is malfunctioning, and endeavor to fix the malfunction?

    or

    [ ] Delete the program with the intent of performing the program's actions manually?

    OK, I'm not saying anything about anybody else, but I know which option I personally would choose.


  • Stuart Davies wrote:

    Am I the only one who likes to actually look at the regular server tools to verify things are working and get a handle on what my day will be like? Or does anyone else feel like there's something to be said about just taking a few minutes out of the start of the day to make sure everything looks good?

    No. My big bugbear at my last employer was that the head of IT (not having a technical background) didn't understand why I was looking at the servers when there was an automated email showing if any jobs failed, free disk space, etc. I think any automated response is a high-level indicator of how things are; there may be a more nuanced picture when looking in detail.

    Regular server tools - perfmon is a great tool, especially when it's tweaked.

    Regarding what the contractor did during your maternity leave, I must give them kudos for taking the initiative. At least they didn't show up and do nothing like I've seen so many people do. They tried, though they should have handled things like including all jobs, making sure the job notifies someone on failure (or completion), and defining what the max run time was.

    That said, you're correct in that nothing will give you a feel for how things are better than looking at the details. This is one of the reasons we all have different ways to look at things - from the general to the specific.

  • Sean Lange wrote:

    Jonathan AC Roberts wrote:

    Sean Lange wrote:

    So we have this third-party shipping system that is...well...user hostile at best. To add insult to injury for the users, the system has been suffering increased performance problems over the last several months. I opened a ticket with the vendor in July and have not received anything resembling a resolution. Just empty hope and increased irritation from our shipping department. Well, yesterday I decided enough was enough and I was going to try to tackle this thing myself. I have access to the database and all the folders where their software resides. I discovered the source of the issue and fixed it myself. But the response I got from the shipping department is the kind of encouragement that people in our position need from time to time.

    Whatever you did, it is an insane difference. Orders are now running in seconds instead of long minutes. Sweet!!

    Did you find some indexes were missing?

    Haha, I was actually initially thinking something along those lines. But this software is...well...interesting. It turned out that they were storing an archive copy of the XML in a file for every single package. In addition, they were also saving the dynamic SQL that they generated for every single package. This folder had over 3.5 million files in it. No wonder it was so damn slow. Windows has to make sure every file name is unique, and with that many to parse through, of course it is slow. I renamed the folder and created a new one with the same name. I found 3 other folders with the same kind of craziness going on. Today I am writing some scheduled tasks to clear this insanity out of these folders. Gotta love third-party software that decimates a system like that. Sheesh!!! And since Ed Wagner will ask, no, this isn't our ERP doing this. It is another choice piece of vendor work.

    And here I thought you just added NOLOCK hints on every query. 😀

    Luis C.
    General Disclaimer:
    Are you seriously taking the advice and code from someone from the internet without testing it? Do you at least understand it? Or can it easily kill your server?

    How to post data/code on a forum to get the best help: Option 1 / Option 2
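
    For what it's worth, the scheduled cleanup Sean mentions could be sketched as a SQL Agent CmdExec job step; the job name, folder path, and 7-day retention below are all assumptions, not his actual implementation:

    -- Sketch only: a cleanup job step that purges archive files older
    -- than 7 days. The job still needs a schedule (sp_add_jobschedule)
    -- and a target server (sp_add_jobserver) before it will run.
    EXEC msdb.dbo.sp_add_job
         @job_name = N'Purge shipping XML archive';
    EXEC msdb.dbo.sp_add_jobstep
         @job_name  = N'Purge shipping XML archive',
         @step_name = N'Delete files older than 7 days',
         @subsystem = N'CmdExec',
         @command   = N'forfiles /p "D:\ShipApp\Archive" /m *.* /d -7 /c "cmd /c del @path"';
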
  • Luis Cazares wrote:

    Sean Lange wrote:

    Jonathan AC Roberts wrote:

    Sean Lange wrote:

    So we have this third-party shipping system that is...well...user hostile at best. To add insult to injury for the users, the system has been suffering increased performance problems over the last several months. I opened a ticket with the vendor in July and have not received anything resembling a resolution. Just empty hope and increased irritation from our shipping department. Well, yesterday I decided enough was enough and I was going to try to tackle this thing myself. I have access to the database and all the folders where their software resides. I discovered the source of the issue and fixed it myself. But the response I got from the shipping department is the kind of encouragement that people in our position need from time to time.

    Whatever you did, it is an insane difference. Orders are now running in seconds instead of long minutes. Sweet!!

    Did you find some indexes were missing?

    Haha, I was actually initially thinking something along those lines. But this software is...well...interesting. It turned out that they were storing an archive copy of the XML in a file for every single package. In addition, they were also saving the dynamic SQL that they generated for every single package. This folder had over 3.5 million files in it. No wonder it was so damn slow. Windows has to make sure every file name is unique, and with that many to parse through, of course it is slow. I renamed the folder and created a new one with the same name. I found 3 other folders with the same kind of craziness going on. Today I am writing some scheduled tasks to clear this insanity out of these folders. Gotta love third-party software that decimates a system like that. Sheesh!!! And since Ed Wagner will ask, no, this isn't our ERP doing this. It is another choice piece of vendor work.

    And here I thought you just added NOLOCK hints on every query. 😀

    He did that already...twice. Then he used it to run payroll. 😀

  • Ed Wagner wrote:

    Luis Cazares wrote:

    Sean Lange wrote:

    Jonathan AC Roberts wrote:

    Sean Lange wrote:

    So we have this third-party shipping system that is...well...user hostile at best. To add insult to injury for the users, the system has been suffering increased performance problems over the last several months. I opened a ticket with the vendor in July and have not received anything resembling a resolution. Just empty hope and increased irritation from our shipping department. Well, yesterday I decided enough was enough and I was going to try to tackle this thing myself. I have access to the database and all the folders where their software resides. I discovered the source of the issue and fixed it myself. But the response I got from the shipping department is the kind of encouragement that people in our position need from time to time.

    Whatever you did, it is an insane difference. Orders are now running in seconds instead of long minutes. Sweet!!

    Did you find some indexes were missing?

    Haha, I was actually initially thinking something along those lines. But this software is...well...interesting. It turned out that they were storing an archive copy of the XML in a file for every single package. In addition, they were also saving the dynamic SQL that they generated for every single package. This folder had over 3.5 million files in it. No wonder it was so damn slow. Windows has to make sure every file name is unique, and with that many to parse through, of course it is slow. I renamed the folder and created a new one with the same name. I found 3 other folders with the same kind of craziness going on. Today I am writing some scheduled tasks to clear this insanity out of these folders. Gotta love third-party software that decimates a system like that. Sheesh!!! And since Ed Wagner will ask, no, this isn't our ERP doing this. It is another choice piece of vendor work.

    And here I thought you just added NOLOCK hints on every query. 😀

    He did that already...twice. Then he used it to run payroll. 😀

    Yeah I added NOLOCK to every hint and then added READ UNCOMMITTED as well just to make sure. 😛

    _______________________________________________________________

    Need help? Help us help you.

    Read the article at http://www.sqlservercentral.com/articles/Best+Practices/61537/ for best practices on asking questions.

    Need to split a string? Try Jeff Moden's splitter http://www.sqlservercentral.com/articles/Tally+Table/72993/.

    Cross Tabs and Pivots, Part 1 – Converting Rows to Columns - http://www.sqlservercentral.com/articles/T-SQL/63681/
    Cross Tabs and Pivots, Part 2 - Dynamic Cross Tabs - http://www.sqlservercentral.com/articles/Crosstab/65048/
    Understanding and Using APPLY (Part 1) - http://www.sqlservercentral.com/articles/APPLY/69953/
    Understanding and Using APPLY (Part 2) - http://www.sqlservercentral.com/articles/APPLY/69954/

  • Sean Lange wrote:

    Ed Wagner wrote:

    Luis Cazares wrote:

    Sean Lange wrote:

    Jonathan AC Roberts wrote:

    Sean Lange wrote:

    So we have this third-party shipping system that is...well...user hostile at best. To add insult to injury for the users, the system has been suffering increased performance problems over the last several months. I opened a ticket with the vendor in July and have not received anything resembling a resolution. Just empty hope and increased irritation from our shipping department. Well, yesterday I decided enough was enough and I was going to try to tackle this thing myself. I have access to the database and all the folders where their software resides. I discovered the source of the issue and fixed it myself. But the response I got from the shipping department is the kind of encouragement that people in our position need from time to time.

    Whatever you did, it is an insane difference. Orders are now running in seconds instead of long minutes. Sweet!!

    Did you find some indexes were missing?

    Haha, I was actually initially thinking something along those lines. But this software is...well...interesting. It turned out that they were storing an archive copy of the XML in a file for every single package. In addition, they were also saving the dynamic SQL that they generated for every single package. This folder had over 3.5 million files in it. No wonder it was so damn slow. Windows has to make sure every file name is unique, and with that many to parse through, of course it is slow. I renamed the folder and created a new one with the same name. I found 3 other folders with the same kind of craziness going on. Today I am writing some scheduled tasks to clear this insanity out of these folders. Gotta love third-party software that decimates a system like that. Sheesh!!! And since Ed Wagner will ask, no, this isn't our ERP doing this. It is another choice piece of vendor work.

    And here I thought you just added NOLOCK hints on every query. 😀

    He did that already...twice. Then he used it to run payroll. 😀

    Yeah I added NOLOCK to every hint and then added READ UNCOMMITTED as well just to make sure. 😛

    Something one couldn't make up: it took me almost two years to remove all of the several thousand NOLOCK and READ UNCOMMITTED directives from a SaaS platform's code (1M+ lines of code).

    😎

    I was asked: "Why do you bother if it works?" My response: "Does it?"
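
    A first pass at finding those directives can be scripted; a rough sketch, run per database (plain string matching, so expect false positives in comments and string literals):

    -- Sketch: inventory stored modules still containing NOLOCK
    -- or READ UNCOMMITTED.
    SELECT OBJECT_SCHEMA_NAME(m.object_id) AS schema_name,
           OBJECT_NAME(m.object_id)        AS module_name
    FROM sys.sql_modules AS m
    WHERE m.definition LIKE '%NOLOCK%'
       OR m.definition LIKE '%READ UNCOMMITTED%';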

  • I need assistance with a minor, but irritating, problem here. SQL Alias issue. Assistance would be appreciated if anyone has time.

    Brandie Tarvin, MCITP Database Administrator
    LiveJournal Blog: http://brandietarvin.livejournal.com/
    On LinkedIn!, Google+, and Twitter.
    Freelance Writer: Shadowrun
    Latchkeys: Nevermore, Latchkeys: The Bootleg War, and Latchkeys: Roscoes in the Night are now available on Nook and Kindle.

  • Grant/Steve/anyone else, just received the email about PASS legacy moving to RedGate. Well done to any/all of you who made that happen!

    -------------------------------------------------------------------------------------------------------------------------------------
    Please follow Best Practices For Posting On Forums to receive quicker and higher quality responses

  • jonathan.crawford wrote:

    Grant/Steve/anyone else, just received the email about PASS legacy moving to RedGate. Well done to any/all of you who made that happen!

    I got the same one. Well done and my compliments to whoever was involved and got it done.

    Does that mean you're going to get another couple of drives to store all the past presentations and content? That's got to be some serious space.

  • jonathan.crawford wrote:

    Grant/Steve/anyone else, just received the email about PASS legacy moving to RedGate. Well done to any/all of you who made that happen!

    Cheers!

    Wasn't me, but I'm pretty excited about it.

    "The credit belongs to the man who is actually in the arena, whose face is marred by dust and sweat and blood"
    - Theodore Roosevelt

    Author of:
    SQL Server Execution Plans
    SQL Server Query Performance Tuning
