April 17, 2015 at 12:00 am
Comments posted to this topic are about the item Monitoring After Deployment
April 17, 2015 at 1:46 am
Steve Jones - SSC Editor (4/16/2015)
... I've been most wary, and worried, when the changes were out of my control, and I had no chance to review things......
Have you ever heard the term "control freak"?? 😉
The most difficult aspect is nailing down who is responsible for what, and making certain that nothing falls between the cracks. I'm not the network admin, but if floor tiles need to be ripped up and a new cable laid, then yes, I'm there to get my hands dirty; I'm not there to monitor/design/administer the network. Likewise, if a third party comes in without discussing the upgrade, then fine, I take a backup. They break it, they fix it, or I restore it. I cannot and will not be made to feel responsible for others' mistakes; that path leads to too much stress. Been there, got the t-shirt.
Regarding monitoring pre/post installation, I tend to make it a given and keep the rest of my workload on non-critical stuff, just in case I have to drop everything. Nothing causes more issues than trying to juggle too many critical issues at the same time.
April 17, 2015 at 7:06 am
The volume of calls being logged by our helpdesk is a reliable way of monitoring if a change has been successful or not :hehe:
April 17, 2015 at 8:11 am
Below is a script I use to quickly zip through every database, looking for objects that have been created or modified in the past 24 hours. In addition to running this following a deployment, it's also one of the first scripts I turn to as a first responder to a performance or troubleshooting incident.
if object_id('tempdb..#changed') is not null drop table #changed;
create table #changed
(
    databasename varchar(180) not null
  , schema_name  varchar(180) not null
  , parent_name  varchar(180) null
  , object_name  varchar(180) not null
  , type_desc    varchar(180) not null
  , create_date  datetime null
  , modify_date  datetime null
  , diff         int null
  , primary key ( databasename, schema_name, object_name, type_desc )
);
insert into #changed
exec sp_msforeachdb
'
use [?];
select
      db_name() as databasename
    , schema_name(schema_id) as schema_name
    , object_name(parent_object_id) as parent_name
    , name as object_name
    , type_desc
    , create_date
    , modify_date
    , datediff( hour, modify_date, getdate() ) as diff
from sys.objects
where substring(name,1,1) != ''#''
  and substring(isnull(object_name(parent_object_id),''''),1,1) != ''#''
  and datediff( hour, modify_date, getdate() ) <= 24;
';
select * from #changed order by diff;
"Do not seek to follow in the footsteps of the wise. Instead, seek what they sought." - Matsuo Basho
April 17, 2015 at 8:23 am
Some ALTER DATABASE operations can clear the buffer and procedure cache, so that can also explain a brief period of higher physical reads and CPU immediately following a deployment.
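A rough way to observe this is to snapshot the plan cache and buffer pool before the deployment and again afterward. This is only a sketch using standard DMVs (the exact numbers will vary by instance, and a drop doesn't by itself prove the deployment was the cause):

```sql
-- Plan cache: count of cached plans and total size.
-- Run before and after the deployment and compare.
select
      count(*) as cached_plans
    , sum(cast(size_in_bytes as bigint)) / 1024 / 1024 as cache_mb
from sys.dm_exec_cached_plans;

-- Buffer pool pages per database; a flushed cache shows up
-- as a sharp drop here, followed by heavy physical reads.
select
      db_name(database_id) as database_name
    , count(*) * 8 / 1024 as buffer_mb
from sys.dm_os_buffer_descriptors
group by database_id
order by buffer_mb desc;
```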
"Do not seek to follow in the footsteps of the wise. Instead, seek what they sought." - Matsuo Basho
April 17, 2015 at 10:50 am
Eric M Russell (4/17/2015)
Below is a script I use to quickly zip through every database, looking for objects that have been created or modified in the past 24 hours. In addition to running this following a deployment, it's also one of the first scripts I turn to as a first responder to a performance or troubleshooting incident....
I like your code here, Eric; it's going straight into my tools folder.
-----
[font="Arial"]Knowledge is of two kinds. We know a subject ourselves or we know where we can find information upon it. --Samuel Johnson[/font]
April 17, 2015 at 11:09 am
Wayne West (4/17/2015)
Eric M Russell (4/17/2015)
Below is a script I use to quickly zip through every database, looking for objects that have been created or modified in the past 24 hours. In addition to running this following a deployment, it's also one of the first scripts I turn to as a first responder to a performance or troubleshooting incident....
I like your code here, Eric; it's going straight into my tools folder.
If anyone knows of a way to also include indexes and the last date/time the index was rebuilt or reorganized, then that would make it even better.
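Not a complete answer, but one possible starting point: ALTER INDEX ... REBUILD updates the index's statistics, so STATS_DATE can serve as a rough proxy for the last rebuild time. (REORGANIZE does not update statistics, so reorganized indexes won't show up here; this is a hedged sketch, not an exact audit.)

```sql
-- Indexes whose statistics were updated in the past 24 hours;
-- stats_date approximates the last REBUILD (or stats update).
select
      schema_name(o.schema_id) as schema_name
    , o.name as table_name
    , i.name as index_name
    , stats_date(i.object_id, i.index_id) as stats_last_updated
from sys.indexes i
join sys.objects o on o.object_id = i.object_id
where o.is_ms_shipped = 0
  and i.index_id > 0
  and stats_date(i.object_id, i.index_id) >= dateadd(hour, -24, getdate())
order by stats_last_updated desc;
```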
"Do not seek to follow in the footsteps of the wise. Instead, seek what they sought." - Matsuo Basho
April 17, 2015 at 11:45 am
Wouldn't it make sense to use DDL triggers to capture all schema changes at that point and log them to some DB to which the normal users can't make any updates? We did that and gen'd a report that we got regularly to show the unauthorized changes, back when that was allowed for the shared dev server. Now all of those changes have to come out of source control.
I'd be interested in hearing a bit more about the performance monitoring aspects, though I still think the majority of those should be caught before hitting prod if at all possible. 🙂
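As a minimal sketch of the DDL-trigger approach described above (the table and trigger names here are hypothetical, and a real implementation would log to a separate, locked-down database rather than a local table):

```sql
-- Hypothetical audit table for captured DDL events.
create table dbo.DDLAudit
(
    event_time   datetime      not null default getdate()
  , login_name   sysname       not null
  , event_type   nvarchar(100) not null
  , object_name  nvarchar(256) null
  , tsql_command nvarchar(max) null
);
go
-- Database-scoped DDL trigger; extend the event list as needed.
create trigger trg_AuditDDL
on database
for create_table, alter_table, drop_table,
    create_procedure, alter_procedure, drop_procedure,
    create_view, alter_view, drop_view
as
begin
    declare @e xml = eventdata();
    insert into dbo.DDLAudit ( login_name, event_type, object_name, tsql_command )
    values
    (
        @e.value('(/EVENT_INSTANCE/LoginName)[1]', 'sysname'),
        @e.value('(/EVENT_INSTANCE/EventType)[1]', 'nvarchar(100)'),
        @e.value('(/EVENT_INSTANCE/ObjectName)[1]', 'nvarchar(256)'),
        @e.value('(/EVENT_INSTANCE/TSQLCommand/CommandText)[1]', 'nvarchar(max)')
    );
end;
```

Unlike polling sys.objects, this also records who dropped an object and the exact statement that was run.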
April 17, 2015 at 12:04 pm
Peter Schott (4/17/2015)
Wouldn't it make sense to use DDL triggers to capture all schema changes at that point and log them to some DB to which the normal users can't make any updates? We did that and gen'd a report that we got regularly to show the unauthorized changes, back when that was allowed for the shared dev server. Now all of those changes have to come out of source control.
I'd be interested in hearing a bit more about the performance monitoring aspects, though I still think the majority of those should be caught before hitting prod if at all possible. 🙂
DDL triggers would be a better solution, because they would also identify dropped objects in addition to other details. However, I sometimes have to troubleshoot issues on servers I have no past affiliation with. I'm not the DBA, just the go-to guy when there is an all-hands-on-deck issue with one of the 100 database servers somewhere in the enterprise, and querying sys.objects works generically in all cases.
"Do not seek to follow in the footsteps of the wise. Instead, seek what they sought." - Matsuo Basho
April 18, 2015 at 5:52 am
1. Performance counters (OS & MSSQL in combination)
2. Red Gate's SQL Response, later, Monitor
3. SQL Server alerts, notifications, jobs, and checks inspired by Brent Ozar's stuff (I know, it's a wild idea using SQL Server itself as a monitoring tool)
4. SQL Server MDW (another wild idea)
5. Nagios / OpCenter
6. http://sqltouch.blogspot.ca/2014/03/free-real-time-sql-server-performance.html
7. Handy queries, DMVs and stored procedures
8. SQL Server Audits based on lightweight extended events
April 18, 2015 at 5:53 am
Eric's code seems cool; got to try it!
April 20, 2015 at 8:26 am
Yet Another DBA (4/17/2015)
Steve Jones - SSC Editor (4/16/2015)
... I've been most wary, and worried, when the changes were out of my control, and I had no chance to review things...
Have you ever heard the term "control freak"?? 😉
Certainly I have displayed those characteristics at times.
The issues for me have often been upgrades, whether from internal developers or third parties. Far too often their quality control is poor, or the outage required is beyond what they've informed me of.
It's not my fault, but I get some of the blame and have to pick up the pieces. There's no reason for me to be blasé about a patch when I've had plenty of them require me to work into the night and cancel personal plans when they failed.
April 20, 2015 at 9:06 am
Getting post-deployment metrics sounds like a great idea that we'll think about and see if we can come up with something. We do the old tried-and-true method of practice deployments: check data, test, test, test. Then when we actually deploy, we do some shotgun testing and call it good. When the staff comes in the next morning, they do the real test. If something is wrong, we fix it.
April 21, 2015 at 5:24 am
Steve Jones - SSC Editor (4/20/2015)
...
Certainly I have displayed those characteristics at times.
The issues for me have often been upgrades, whether from internal developers or third parties. Far too often their quality control is poor, or the outage required is beyond what they've informed me of.
It's not my fault, but I get some of the blame and have to pick up the pieces. There's no reason for me to be blasé about a patch when I've had plenty of them require me to work into the night and cancel personal plans when they failed.
Yep, know that one. 🙁
Just come from a meeting where we were talking about a migration. The usual blank faces when I ask at what point we would roll back if things don't go as they should. And again the usual comment of "don't need to test this because it's been done a couple of times on other projects."
Oh well, here comes another wasted weekend :crazy:
April 27, 2015 at 6:16 am
Yet Another DBA (4/21/2015)
...
Yep, know that one. 🙁
Just come from a meeting where we were talking about a migration. The usual blank faces when I ask at what point we would roll back if things don't go as they should. And again the usual comment of "don't need to test this because it's been done a couple of times on other projects."
Oh well, here comes another wasted weekend :crazy:
I was about to suggest that collaboration was the greatest tool for this scenario. I guess that sometimes we don't have access to all the tools.
Gaz
-- Stop your grinnin' and drop your linen...they're everywhere!!!