June 16, 2009 at 7:48 am
1.) Individual servers have a remote server link to a central repository and update the repository with the server statistics (failed jobs, disk space, etc.)
2.) The central repository has a link to all participating servers and initiates/runs stored procedures on the participating servers, logging the data to the central repository.
3.) The central repository has a link to all participating servers and queries data that was collected locally on the remote servers into their own repositories. (Note: either the central or the remote server will have to be set up to delete data once it has been moved, to get rid of redundant data.)
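For model #2, the pull could look something like the sketch below: the central server reaches through a linked server into the remote instance's msdb to collect failed-job history. The linked server name `REMOTE1` and the central table `CentralRepo.dbo.FailedJobs` are placeholders, not anything from the thread.

```sql
-- Model #2 sketch (runs on the central server):
-- pull failed-job outcomes from a participating server via a linked server.
-- [REMOTE1] and CentralRepo.dbo.FailedJobs are hypothetical names.
INSERT INTO CentralRepo.dbo.FailedJobs (ServerName, JobName, RunDate, RunStatus)
SELECT 'REMOTE1', j.name, h.run_date, h.run_status
FROM [REMOTE1].msdb.dbo.sysjobhistory AS h
JOIN [REMOTE1].msdb.dbo.sysjobs AS j
  ON j.job_id = h.job_id
WHERE h.run_status = 0    -- 0 = failed
  AND h.step_id = 0;      -- step_id 0 is the overall job outcome row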
I am curious to know what model you think is best!
Thanks!
Skaar
June 16, 2009 at 9:14 am
#2 and #3 are almost the same, and either will make sense in terms of manageability:
I have a central repository server with linked servers to all the other servers, and I run queries/SSIS packages on this central server to go out and fetch info from all of these servers into one central location. I just create a temp table to dump the info on each server locally and then pull it into the central server/db.
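That stage-locally-then-pull pattern could be sketched roughly as below. All object names (`MonitorDB.dbo.StatsStaging`, `CentralRepo.dbo.ServerStats`, the linked server `[REMOTE1]`) are placeholders for illustration, not names from the thread.

```sql
-- Step 1 (runs on each remote server, e.g. from a scheduled job):
-- stage the local statistics in a staging table.
INSERT INTO MonitorDB.dbo.StatsStaging (ServerName, CollectedAt, FailedJobs)
SELECT @@SERVERNAME, GETDATE(),
       (SELECT COUNT(*)
        FROM msdb.dbo.sysjobhistory
        WHERE run_status = 0 AND step_id = 0);

-- Step 2 (runs on the central server): pull the staged rows across.
INSERT INTO CentralRepo.dbo.ServerStats (ServerName, CollectedAt, FailedJobs)
SELECT ServerName, CollectedAt, FailedJobs
FROM [REMOTE1].MonitorDB.dbo.StatsStaging;

-- Step 3: clear the staging table, addressing the redundant-data
-- concern raised in the original question.
DELETE FROM [REMOTE1].MonitorDB.dbo.StatsStaging;
```

Wrapping steps 2 and 3 in one transaction (or an SSIS package per server) would keep the pull and the cleanup from drifting apart.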
From my personal experience, it is easier to manage the linked servers and the data in this environment, so #2 and #3 are good options.
Maninder
www.dbanation.com
June 16, 2009 at 9:22 am
That was my thought also - are you using one login to identify the process?