May 15, 2009 at 2:42 pm
Hi All,
I want to stop the SQL Server service from a job. How can I do that?
Thanks.
May 15, 2009 at 3:08 pm
Call an OS command to run "NET STOP MSSQLSERVER" (or the equivalent service name for a named instance).
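For reference, the commands such a job step would run are below (MyInstance is a hypothetical instance name):
NET STOP MSSQLSERVER
NET STOP MSSQL$MyInstance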
But that's kind of a scary thing to put in a job. What are you trying to accomplish?
May 16, 2009 at 11:24 am
Our transaction logs are growing because of very large transactions from the application. We have notified the application team many times to break up their transactions, but they are still researching it. They say they can't afford more disk space for the transaction log drive. We have a shrink-log job which runs every hour after the transaction log backups, but that doesn't buy much, because there are many uncommitted transactions in the log files and we can't shrink them. We have notified them that we will need to stop the SQL Server service if the log grows beyond 95% of the disk size limit. We have set up an alert to notify us, but as a further step we are planning to stop the SQL Server service if the database logs go over 95% of disk space.
This is my seniors' decision and they want me to follow it.
May 16, 2009 at 5:14 pm
Sorry to hear that. It's a very tough spot to be in. Stopping data services may get their attention, for sure. It still sounds kind of scary to me, though.
May 16, 2009 at 7:46 pm
You said you have an alert to notify the DBAs when the log file grows beyond 95%; you can add a response to that alert.
The response would be to execute a job.
The first step of that job will be to execute a bat file on the OS.
The bat file contains:
NET STOP MSSQLSERVER
Note: MSSQLSERVER is the service name if it is the default instance.
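As a rough sketch, assuming the bat file is saved as C:\Scripts\stop_sql.cmd and the job is named Stop SQL Server (both names are hypothetical), the job and its CmdExec step could be created like this:
EXEC msdb.dbo.sp_add_job @job_name = N'Stop SQL Server';    -- hypothetical job name
EXEC msdb.dbo.sp_add_jobstep
    @job_name = N'Stop SQL Server',
    @step_name = N'Run NET STOP batch file',
    @subsystem = N'CmdExec',
    @command = N'C:\Scripts\stop_sql.cmd';                  -- the bat file containing NET STOP MSSQLSERVER
EXEC msdb.dbo.sp_add_jobserver @job_name = N'Stop SQL Server', @server_name = N'(local)';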
~ IM.
May 16, 2009 at 8:54 pm
How frequently are you running log backups? A transaction long enough to require the t-log to grow that much just doesn't sound right. Transactions that long should produce crippling amounts of blocking.
I would suggest looking at your recovery model versus your log backup schedule, plus your log growth settings.
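For example, a quick way to see where things stand (a generic sketch, nothing environment-specific):
-- Recovery model and what is currently preventing log truncation (SQL 2005 and later)
SELECT name, recovery_model_desc, log_reuse_wait_desc FROM sys.databases;
-- Log size and percent of log space used for every database
DBCC SQLPERF(LOGSPACE);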
David
May 16, 2009 at 8:55 pm
Instead of stopping SQL Server, there are other options you can take.
Before the next run of the data load (the one that grows the log file until it fills the disk), ensure you have taken a log backup. Then truncate the log. The command for SQL Server 2005 would be:
BACKUP LOG <database_name> WITH NO_LOG
For SQL 2000, it would be BACKUP LOG <database_name> WITH TRUNCATE_ONLY
This will break the log backup chain, but you can recreate / reschedule it. Once the log is cleared, shrink it to a size you want; the command for that would be:
DBCC SHRINKFILE('<log_file_name>', <target_size_in_MB>)
Alternatively, you can keep the same value for the file size - say, 12000 MB. Once the log file is shrunk (or if you decide to keep the file size the same), restrict the max size of the .LDF file to that limit. You can do that from the database properties.
This approach will ensure the log will never fill up the disk drive, because the log will not grow beyond the limit.
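A sketch of that sequence, assuming a hypothetical database SalesDB whose log file is logically named SalesDB_log:
USE SalesDB;
-- Shrink the log file down to about 12000 MB
DBCC SHRINKFILE('SalesDB_log', 12000);
-- Cap the log file at 12000 MB so it can never fill the drive
ALTER DATABASE SalesDB MODIFY FILE (NAME = SalesDB_log, MAXSIZE = 12000MB);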
Now, if a user transaction fills up the log, the database (not the whole server) will become inaccessible. The only way out of that will be to clear the log. To cater for this situation, you can do one more thing.
If you are using SQL 2005, create an alert for the database. For the alert:
a) Choose the type "SQL Server performance condition alert".
b) Choose "SQLServer:Databases" as the object (for a named instance it appears as "MSSQL$<InstanceName>:Databases").
c) Choose "Percent Log Used" as the counter.
d) Choose the database name as the instance.
e) Choose "Alert if counter rises above" and specify a percentage value (say 90).
So the alert fires when the log is 90% used.
Now create a job. The job will have a single step - the BACKUP LOG...NO_LOG command. Specify this job as the "response" to the alert. Also, ensure the alert notifies you as an operator.
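A rough T-SQL sketch of that setup (the database name LogDemo and the job and alert names are hypothetical; for a named instance the performance object would be MSSQL$<InstanceName>:Databases):
-- Job whose single step clears the log (NO_LOG breaks the backup chain, as noted above)
EXEC msdb.dbo.sp_add_job @job_name = N'Clear LogDemo log';
EXEC msdb.dbo.sp_add_jobstep
    @job_name = N'Clear LogDemo log',
    @step_name = N'BACKUP LOG WITH NO_LOG',
    @subsystem = N'TSQL',
    @command = N'BACKUP LOG LogDemo WITH NO_LOG;';
EXEC msdb.dbo.sp_add_jobserver @job_name = N'Clear LogDemo log', @server_name = N'(local)';
-- Alert on "Percent Log Used" for that database, with the job as its response
EXEC msdb.dbo.sp_add_alert
    @name = N'LogDemo log 90 percent used',
    @performance_condition = N'SQLServer:Databases|Percent Log Used|LogDemo|>|90',
    @job_name = N'Clear LogDemo log';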
May 16, 2009 at 9:00 pm
Correction...I am sure I typed the DBCC SHRINKFILE command, don't know why it did not show up - but here it is again:
DBCC SHRINKFILE('<log_file_name>', <file_size_in_MB>)
May 17, 2009 at 4:05 am
You can also shut down SQL Server without having to shell out to the operating system and use "NET STOP".
In T-SQL, the SHUTDOWN command will accomplish this.
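For example (SHUTDOWN requires membership in the sysadmin or serveradmin fixed server role):
-- Stops SQL Server after checkpointing every database
SHUTDOWN;
-- Or stop immediately, without waiting for checkpoints
-- SHUTDOWN WITH NOWAIT;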
May 18, 2009 at 6:23 am
Sadequl Hussain (5/16/2009)
Instead of stopping SQL Server, there are other options you can take. Before the next run of the data load (the one that grows the log file until it fills the disk), ensure you have taken a log backup. Then truncate the log. The command for SQL Server 2005 would be:
BACKUP LOG <database_name> WITH NO_LOG
For SQL 2000, it would be BACKUP LOG <database_name> WITH TRUNCATE_ONLY
No, these are not options at all; that's a wrong piece of advice. TRUNCATE_ONLY will break the backup chain, and that's a disaster if you are aiming for five nines (99.999%) of availability. It has also been removed in SQL Server 2008.
There are better options. What is your backup schedule for t-logs? Do you have frequent commits in your code? And did you think of setting the recovery model to bulk-logged while you have bulk operations running against your table?
This will break the log backup chain, but you can recreate / reschedule it. Once the log is cleared, shrink it to a size you want; the command for that would be:
It's not that easy to rebuild once the backup chain is broken, is it? Also, think again before you shrink the log file: it needs space to grow, and if it needs the space it will just grow again... frequent shrinking and growing of the log file will cause fragmentation.
DBCC SHRINKFILE('<log_file_name>', <target_size_in_MB>)
Alternatively, you can keep the same value for the file size - say, 12000 MB. Once the log file is shrunk (or if you decide to keep the file size the same), restrict the max size of the .LDF file to that limit. You can do that from the database properties.
OMG! Restrict the log file from growing?? Why???
This approach will ensure the log will never fill up the disk drive, because the log will not grow beyond the limit.
True, but at the cost of business downtime... are you ready for that?
Now, if a user transaction fills up the log, the database (not the whole server) will become inaccessible. The only way out of that will be to clear the log. To cater for this situation, you can do one more thing.
If you are using SQL 2005, create an alert for the database. For the alert:
a) Choose the type "SQL Server performance condition alert".
b) Choose "SQLServer:Databases" as the object (for a named instance it appears as "MSSQL$<InstanceName>:Databases").
c) Choose "Percent Log Used" as the counter.
d) Choose the database name as the instance.
e) Choose "Alert if counter rises above" and specify a percentage value (say 90).
So the alert fires when the log is 90% used.
Now create a job. The job will have a single step - the BACKUP LOG...NO_LOG command. Specify this job as the "response" to the alert. Also, ensure the alert notifies you as an operator.
Very wrong !!!!!!!!!
You should think constructively and look for ways to manage your t-log efficiently rather than going for shortcuts that will cause huge problems in the future.
What is your recovery model set to? If it is full, do you really need it to be? If you do, then what is your backup schedule, and how frequently do you take your t-log backups? If there are any bulk operations, did you consider changing the recovery model to bulk-logged?
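A minimal sketch of that last idea (the database name LogDemo and the backup path are hypothetical):
-- Switch to bulk-logged only for the duration of the bulk operation
ALTER DATABASE LogDemo SET RECOVERY BULK_LOGGED;
-- ... run the bulk insert / index rebuild here ...
-- Switch back to full and take a log backup straight away
ALTER DATABASE LogDemo SET RECOVERY FULL;
BACKUP LOG LogDemo TO DISK = N'D:\Backups\LogDemo_log.trn';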
May 18, 2009 at 7:12 am
I have to agree with Krishna's points.
Looking at your transaction log backup schedule and recovery model should be your starting point.
David
May 18, 2009 at 10:05 am
Not to belabor the point, but I think the OP stated that the requirement to stop SQL Server comes from his management. As much as I agree with tracking down the root causes of the issue and getting t-log management under control, let's not get this guy fired for not doing what they're asking him to do. Even if it's stupid.
May 18, 2009 at 10:15 am
Agreed, but every poster needs to evaluate the information provided and implement what is appropriate for their environment and company.
IMHO, someone else might see this post in the future and use it for their own purposes, so a full and fair debate of solutions and opinions is important.
David
May 18, 2009 at 10:33 am
Jeremy Brown (5/18/2009)
Not to belabor the point, but I think the OP stated that the requirement to stop SQL Server comes from his management. As much as I agree with tracking down the root causes of the issue and getting t-log management under control, let's not get this guy fired for not doing what they're asking him to do. Even if it's stupid.
I also agree. However, if a better solution can be found instead of just shutting down SQL Server, it should be provided, and the OP should be given enough information so that he can go back to management with alternatives that may not be as severe as just shutting down SQL Server.
May 18, 2009 at 10:37 am
Agree completely, David. Just making the point for the OP, looking out for him too.
I get asked to do stupid things all the time by clients and managers who refuse to listen to anything else. So for his benefit, I encourage him to use the methods others have mentioned here to find and resolve the issue, but don't push your luck. And if you work in that kind of environment... get out... fast, but on your own terms.