November 9, 2004 at 6:02 am
That's right - hundreds!
We are looking for an easier (and more time-saving) way to administer lots and lots of SQL Servers, with huge numbers of databases.
I can't help but wonder if someone somewhere has the same problem - this takes too much time in Enterprise Manager, and sometimes (too often) the administration of the SQL Servers suffers simply for lack of time.
Is there some software which is suited for this purpose?
November 9, 2004 at 6:49 am
Don't know if this is a good tool for you, but try out SQL Stripes. I have used it with several SQL Servers, though not as many as hundreds.
November 9, 2004 at 8:41 am
Hi,
When you are talking hundreds of servers, what are your company's policies on the server setup? I mean, do they have similar hardware, a similar volume setup, SQL installed in the same path... or are we talking about a big variety of hardware, software, custom setups and the whole spectrum of SQL versions and instances?
Yelena
Regards, Yelena Varsha
November 9, 2004 at 9:52 am
I am just interested to know what company or project you work with.
How many DBAs do you have to handle hundreds of SQL Servers? And are we talking actual SQL Servers, or just the databases...
Not trying to be smart - I have heard of people running 30, 40, 50 SQL Servers, but never hundreds...
Tajammal Butt
November 9, 2004 at 10:19 am
I did this for JD Edwards - about a hundred servers, thousands of databases. We had Patrol (BMC) for performance monitoring, but only on select servers. The big issue there was price: it's too expensive to buy software for that many servers.
I'm a home-grown guy; we built standard processes that allowed each server to manage itself. The maint plans don't quite cut it, so we added our own processes, as part of our build process, to get each server gathering its own information and running standard jobs. We had a central monitoring server that gathered the information each day from all servers and provided a central report to the DBAs, but if it was down, EACH SERVER ran independently as far as administration is concerned.
You have to build things that allow times, dates, etc. to change per server. We used a standard database (dba_admin) on EVERY server. This contained the same tables on each server, but the data could be different: we could read a time from a table, or a list of databases on which to run processes. Andre has written an interesting piece on scheduling (http://www.sqlservercentral.com/columnists/AVigneau/unifieddatabasetoolkitscheduling.asp) and Eli wrote a nice one on backup strategy (http://www.sqlservercentral.com/columnists/eleiba/implementingaflexiblebackupstrategy.asp) that are similar to what I've built in the past.
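As a rough sketch of the idea (the names below are made up for illustration, not the actual dba_admin schema), the per-server config tables could be as simple as:

-- Hypothetical sketch of the standard per-server admin database; all
-- table and column names are illustrative, not the actual dba_admin schema.
CREATE DATABASE dba_admin
GO
USE dba_admin
GO
-- One row per standard process; identical table on every server, data may differ.
CREATE TABLE dbo.ProcessSchedule (
ProcessName varchar(50) NOT NULL PRIMARY KEY,
RunTime char(5) NOT NULL, -- e.g. '02:00', read by the job at run time
IsEnabled bit NOT NULL DEFAULT 1
)
GO
-- Which databases a given process should run against on this server.
CREATE TABLE dbo.ProcessDatabase (
ProcessName varchar(50) NOT NULL,
DatabaseName sysname NOT NULL,
CONSTRAINT PK_ProcessDatabase PRIMARY KEY (ProcessName, DatabaseName)
)
GO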
November 10, 2004 at 1:19 am
No similarity between the SQL Servers here - oh no.
There is a big variety of server hardware, the operating system varies between NT4/W2000/W2003, the volume setup can be anything from just C: to 5-6 different volumes with RAID, and SQL is not always installed in the same path either. Some SQL clusters too.
It's enough to get me tired even just thinking about it.
These SQL Servers not only belong to the company I work for (one of the three largest in Scandinavia), but also to the many customers we have. That's why there are so many.
And we are only 4 DBAs.
November 10, 2004 at 1:32 am
Hi Steve,
we do monitor all the servers with CA TNG Unicenter. That works perfectly, by the way. So the DBA on 24/7 call gets the call when something happens to one of the servers and takes care of business.
What we really would like is to make checking the jobs and logs quicker. Right now it takes quite some time to surf through all these servers in Enterprise Manager.
I'll take a look at the articles you linked to.
Thanks very much - to all of you.
November 10, 2004 at 4:47 am
Hi
You can monitor the servers automatically by generating daily reports. I was part of a four-man team that monitored 260-odd servers with about 2300 user databases. We did this by using isql to run scripts, plus a scheduled DTS job against the servers, which saved the results into CSV files that we then imported and manipulated via DTS. The report was mailed to us on a daily basis and we used it a lot.
Write your own custom scripts for monitoring disk space, DB free space, changes, and failed jobs, and sort the results accordingly. It's fun, and you'll use the system tables often.
If you want to know more, just shout... will be happy to oblige.
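A rough sketch of two such checks on SQL Server 2000 (xp_fixeddrives is undocumented, and the 1 GB threshold is just an example):

-- Free disk space per drive (xp_fixeddrives is undocumented but present on SQL 2000)
CREATE TABLE #drives (drive char(1), mb_free int)
INSERT INTO #drives EXEC master.dbo.xp_fixeddrives
SELECT drive, mb_free FROM #drives WHERE mb_free < 1024 -- flag drives under 1 GB
DROP TABLE #drives

-- Job steps that failed since yesterday (run_status 0 = failed, run_date is yyyymmdd)
SELECT j.name, h.step_name, h.run_date, h.run_time, h.message
FROM msdb.dbo.sysjobhistory h
JOIN msdb.dbo.sysjobs j ON j.job_id = h.job_id
WHERE h.run_status = 0
AND h.run_date >= CONVERT(char(8), DATEADD(day, -1, GETDATE()), 112)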
November 10, 2004 at 6:21 am
newbie...... (Charl) wrote: "you can monitor the servers automatically"
The original question is/was "how to administer"!
Grasshopper...... (Jonny):
I also have to administer 160-170 SQL Servers (several hundred databases), but we do it centrally, in one place in The Netherlands, for Europe.
Administering can be done using a good setup within one SQL Server console (if necessary, use an MSX server to administer jobs - see BOL).
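For reference, enlisting a target server under an MSX master comes down to roughly one call per target (the master's name below is a placeholder; see sp_msx_enlist in BOL for the full syntax):

-- Run on each target server: enlist it under the MSX master server
-- ('MYMSXSERVER' is a placeholder name).
EXEC msdb.dbo.sp_msx_enlist N'enlist', N'MYMSXSERVER'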
Monitoring is something different. The SQL Server console does not provide any quick-to-read monitoring tool, so you have to create it yourself. As newbie just said, use a pre-scheduled script (DTS or CMD) and report to one specific database.
For an easy-to-view monitoring tool I can suggest Spotlight on SQL Server from Quest, but you can examine only one database at a time.
Monitoring your systems at the OS level is a job for our network guys, and they use "PentaSafe" right now. It will be replaced by MOM within several months.
Furthermore, be sure your network architecture is good and fast (fiber optic).
Gather a crew around you and delegate tasks... you cannot do it on your own!
Introduce a good method for administering your changes on all these databases, using a specially trained change manager (highly recommended).
More questions? Please reply (privately).
Guus Kramer
The Netherlands
November 10, 2004 at 7:07 am
Good catch - administration is a pain. We typically set servers up into 3 groups. The first group is lower impact, fewer people; that's where we test patches and hotfixes. Sometimes we could script a deployment, sometimes not. When we couldn't, it was us running through each server in a remote KVM session or terminal server session and applying a dozen patches on a Friday night.
The 2nd group was the normal deployment, with the vast number of servers. Applying SP3 to 100 servers isn't fun, but 2 of us across one long night could do it, usually running 5-10 at a time. Have checklists handy, server lists spread out amongst the group, and one person coordinating. The old paper-and-pen checkoff works well.
The last group was a small exception group for people that hadn't tested, apps not certified, whatever.
Normal admin - fixing backups, restores, etc. We had a call queue to manage workflow; most things run well, and it's not often that a large number of backups fail. By monitoring you can catch them quickly (always a few a week) and you don't get behind.
Other admin (reindexing, etc.) was scripted and scheduled. Each server had its own schedule, which was rolled up to a central reporting server. If you catch issues inside a day, admin gets much smoother. New scripts are rolled out in the same way as patches.
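The scheduled reindex piece can be as simple as a cursor over the user tables, along these lines (a rough SQL 2000 sketch, assuming dbo-owned tables and keeping the existing fillfactor):

-- Rough per-database reindex sketch (SQL 2000): rebuild all indexes on
-- every user table; schedule it per server with its own times.
DECLARE @tbl sysname
DECLARE tbls CURSOR FOR
SELECT name FROM sysobjects WHERE type = 'U' ORDER BY name
OPEN tbls
FETCH NEXT FROM tbls INTO @tbl
WHILE @@FETCH_STATUS = 0
BEGIN
DBCC DBREINDEX (@tbl, '', 0) -- '' = all indexes, 0 = keep the existing fillfactor
FETCH NEXT FROM tbls INTO @tbl
END
CLOSE tbls
DEALLOCATE tbls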
November 10, 2004 at 12:09 pm
I'm a home-grown guy myself. Everything happens by server groups (Prod, Dev, Test, etc.) and sub-groups. All maint jobs and scripts deploy from a single administration server. All servers are responsible for running their own jobs. A central server gathers data every hour for most stuff, and daily for other stuff, to catch failed maint jobs and track growth trends and the like. I haven't found a single tool or suite of tools that does a better job than the home-grown stuff we have here. We funnel requests through a weekly lead DBA who is also on call. We have a small environment - 30 production servers and double that for test and dev. The home-grown stuff is working so well that one DBA is enough to handle the workload.
Wes
November 10, 2004 at 3:49 pm
In our company we have about a few hundred servers and 4 production DBAs. At the same time we have developers who take care of the code.
We manage all our servers through linked servers, where one server is designated for collecting data about the others. It took some time to put it all together, but once it's in place it is very convenient. We run quite a number of jobs every day to determine the health of the other servers: disk space, suspect databases, job failures, backups, blocking, counter logs, and whether a server or database has been added/created. At the same time we use SiteScope and HP OpenView, which all complement each other.
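As a rough sketch of one such daily check (LINKEDSRV below is a placeholder linked server name), flagging remote databases with no full backup in the last 24 hours could look like:

-- Hypothetical probe over a linked server ('LINKEDSRV' is a placeholder):
-- remote databases with no full backup ('D') finishing in the last 24 hours.
SELECT d.name
FROM LINKEDSRV.master.dbo.sysdatabases d
LEFT JOIN LINKEDSRV.msdb.dbo.backupset b
ON b.database_name = d.name AND b.type = 'D'
WHERE d.name <> 'tempdb'
GROUP BY d.name
HAVING MAX(b.backup_finish_date) IS NULL
OR MAX(b.backup_finish_date) < DATEADD(hour, -24, GETDATE())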
November 11, 2004 at 1:52 am
Some tips to help monitor all the active servers.
A) Set up a linked server to every active server from your maintenance server (a minimal example follows this list).
B) Write a script (SP) to retrieve every database on each server and put them into a temp table.
C) Read this temp table (database by database), gather all the info you want, write it to a user table on your maintenance server, and report from it.
D) Schedule this SP and schedule the report.
Optional:
E) Retrieve (system) log info as well (use the "xp_" procedures).
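A minimal sketch of step A ('REMOTESRV' is a placeholder, and the login mapping is kept to the simplest case):

-- Register one remote SQL Server as a linked server on the maintenance server
EXEC sp_addlinkedserver @server = N'REMOTESRV', @srvproduct = N'SQL Server'
-- Map logins: connect using the caller's own credentials
EXEC sp_addlinkedsrvlogin @rmtsrvname = N'REMOTESRV', @useself = 'true'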
Easy to set up, and it will give you an enormous amount of info about the status of every database on your LAN/WAN.
Hope this helps... more info wanted? Please contact me.
Guus Kramer
The Netherlands
*****************************************
Example of retrieving the backup status:
CREATE procedure sp_GetBackupHistory
@prsd_ServerName char(30)
AS
--If the DbBackupHistory table does NOT exist, create it. Otherwise, clear the rows previously gathered for this server.
if not exists (select name from DBAInfo..sysobjects where name = 'DbBackupHistory')
BEGIN
CREATE TABLE DBAInfo.dbo.DbBackupHistory (
c_DbName [varchar] (128) COLLATE SQL_Latin1_General_CP1_CI_AS NULL ,
c_ServerName [varchar] (30) COLLATE SQL_Latin1_General_CP1_CI_AS NULL ,
c_BackupEndTime [datetime] NULL ,
c_Succeeded [bit] NULL ,
c_Duration [int] NULL ,
c_Message [nvarchar] (600) COLLATE SQL_Latin1_General_CP1_CI_AS NULL
) ON [PRIMARY]
END
ELSE
BEGIN
-- assumes the linked server name passed in matches the server_name values gathered below
DELETE FROM DBAInfo.dbo.DbBackupHistory WHERE c_ServerName = @prsd_ServerName
END
-- The next statement retrieves the latest backup time for each active database on the parsed server and inserts the
-- values into the local DbBackupHistory table.
exec ('insert into DBAInfo.dbo.DbBackupHistory (c_DbName, c_ServerName, c_BackupEndTime)
select database_name COLLATE SQL_Latin1_General_CP1_CI_AS, server_name COLLATE SQL_Latin1_General_CP1_CI_AS, max(end_time)
from ' +@prsd_ServerName +'.msdb.dbo.sysdbmaintplan_history
where activity = ''Backup database''
and server_name != ''(local)''
and database_name in
(select name COLLATE SQL_Latin1_General_CP1_CI_AS from ' +@prsd_ServerName +'.master.dbo.sysdatabases)
group by database_name, server_name')
-- For each row retrieved by the above statement, additional details are then gathered and updated into the
-- local table (matching on server name as well, so rows gathered from other servers are untouched).
exec ('update DBAInfo.dbo.DbBackupHistory
set c_Succeeded = HistTab.succeeded,
c_Duration = HistTab.duration,
c_Message = HistTab.message COLLATE SQL_Latin1_General_CP1_CI_AS
from ' +@prsd_ServerName +'.msdb.dbo.sysdbmaintplan_history HistTab
where DBAInfo.dbo.DbBackupHistory.c_DbName = HistTab.database_name COLLATE SQL_Latin1_General_CP1_CI_AS
and DBAInfo.dbo.DbBackupHistory.c_ServerName = HistTab.server_name COLLATE SQL_Latin1_General_CP1_CI_AS
and DBAInfo.dbo.DbBackupHistory.c_BackupEndTime = HistTab.end_time')
GO
*************************************************************
November 11, 2004 at 6:07 am
Actually, I don't use linked servers at all - I use osql and bcp. Every server has a management database on it that houses all the tables and stored procs every server needs. I have a central management server that has a list of all servers and their type in a table. The management server has several jobs that go to each server, run a set of scripts to generate data into the management database, and then the data gets bcp'ed out and picked up by the central management server. This server also polls the network for new servers, attempts to categorize them based on name (this is where standards come into play), and then gathers data about the server. All automated, so I don't have to mess with it every time a server comes online.
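Roughly, one collection step could boil down to something like this (server name, database, file paths and table names are all placeholders for illustration):

-- Hypothetical shape of one collection step, run from the central server:
-- execute a standard script on the target via osql, then pull the results
-- out of its management database with bcp. All names/paths are placeholders.
EXEC master.dbo.xp_cmdshell 'osql -E -S TARGETSRV -d MgmtDB -i C:\scripts\collect_space.sql -o C:\logs\TARGETSRV_space.log'
EXEC master.dbo.xp_cmdshell 'bcp MgmtDB.dbo.SpaceResults out C:\feeds\TARGETSRV_space.csv -c -t, -T -S TARGETSRV'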
Wes