June 13, 2017 at 9:24 am
Hello,
I read somewhere that linked servers will be deprecated, like database mirroring. I use my production data for a lot of reports, but I don't want users hitting my production server; it is also what our production lines run on. So I create report tables and keep them updated through linked servers.
With that going away, what are our options? Do we cluster everything and use secondary replicas for reporting? I am already doing this for one project, and I was wondering if this is the way we should build report tables going forward: cluster the production servers and create secondary replicas for reporting. That keeps the databases on two different servers up to date and in sync. I didn't think log shipping and/or replication would be the way to go.
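For reference, the one project I mentioned looks roughly like this. It is only a sketch; the server, group, and database names are made up, and it assumes SQL Server 2012+ Enterprise with a Windows failover cluster and database mirroring endpoints already in place:

-- Run on the primary. All names (ProdAG, SQLPROD1/2, ReportsDB) are invented.
CREATE AVAILABILITY GROUP ProdAG
FOR DATABASE ReportsDB
REPLICA ON
    N'SQLPROD1' WITH (
        ENDPOINT_URL = N'TCP://SQLPROD1.corp.local:5022',
        AVAILABILITY_MODE = SYNCHRONOUS_COMMIT,
        FAILOVER_MODE = AUTOMATIC),
    N'SQLPROD2' WITH (
        ENDPOINT_URL = N'TCP://SQLPROD2.corp.local:5022',
        AVAILABILITY_MODE = SYNCHRONOUS_COMMIT,
        FAILOVER_MODE = MANUAL,
        -- reporting replica: read-intent connections only
        SECONDARY_ROLE (ALLOW_CONNECTIONS = READ_ONLY));

The reports then connect to the secondary with ApplicationIntent=ReadOnly and never touch the primary.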
Thanks for your thoughts!
Tony.
Things will work out. Get back up, change some parameters and recode.
June 13, 2017 at 10:22 am
I think that's partially incorrect.
What I read is that specifying the SQLOLEDB provider for a linked server is deprecated and must be replaced with the SQL Server Native Client (SQLNCLI) provider instead.
From the deprecated features table:
Category: Linked servers
Deprecated feature: Specifying the SQLOLEDB provider for linked servers
Replacement: SQL Server Native Client (SQLNCLI)
Feature name: SQLOLEDB for linked servers
https://docs.microsoft.com/en-us/sql/database-engine/deprecated-database-engine-features-in-sql-server-2016
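So the fix is just to recreate any affected linked server with the newer provider. A minimal sketch, with made-up server names:

-- recreate an existing linked server using SQLNCLI instead of SQLOLEDB
-- 'RemoteSrv' and the data source are invented for the example
EXEC sp_dropserver @server = N'RemoteSrv', @droplogins = 'droplogins';

EXEC sp_addlinkedserver
    @server     = N'RemoteSrv',
    @srvproduct = N'',
    @provider   = N'SQLNCLI',           -- SQL Server Native Client
    @datasrc    = N'SQLPROD2\REPORTS';  -- target instance (invented)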
Lowell
June 13, 2017 at 12:54 pm
I actually saw another article that made it look like linked servers as a whole would be deprecated. However, it may have just been referring to remote servers.
Thanks for the clarification.
Tony
Things will work out. Get back up, change some parameters and recode.
June 13, 2017 at 2:50 pm
Where I work, we are working on removing linked servers where possible.
We are replacing them with a combination of Service Broker and SSIS.
We have our data warehouse which holds all of our reporting data. The data in the warehouse is refreshed based on end user need.
For most things, it is a nightly refresh. We have company downtime most nights, so having the nightly refresh start at 2:00 AM is fine as long as it completes before 6:00 AM. This is entirely through SSIS.
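In case it helps picture it: assuming the package runs from a SQL Agent job (the job name here is invented), the schedule is just a daily 2:00 AM entry:

-- hypothetical Agent schedule for the nightly SSIS load
EXEC msdb.dbo.sp_add_jobschedule
    @job_name          = N'DW Nightly Refresh',  -- invented name
    @name              = N'Nightly 2 AM',
    @freq_type         = 4,       -- daily
    @freq_interval     = 1,       -- every day
    @active_start_time = 020000;  -- 2:00 AM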
For some other things, we do hourly refreshes during company uptime. This results in a small window (currently just over 1 minute, but we are looking at optimizing that soon) in which end users might get no results from a report or application. The 1-minute window covers the entire SSIS load, which loads a lot of tables; each individual table is only empty for a few seconds at most.
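One option we are looking at for shrinking that window: load into a staging table and repoint a synonym in one transaction, so readers never see an empty table. A rough sketch with invented names (DROP SYNONYM IF EXISTS needs SQL Server 2016+):

-- after the hourly load fills dbo.Orders_StageB, swap readers over to it
BEGIN TRAN;
    DROP SYNONYM IF EXISTS rpt.Orders;
    CREATE SYNONYM rpt.Orders FOR dbo.Orders_StageB;
COMMIT;

Reports query rpt.Orders, and the next load fills dbo.Orders_StageA and swaps back.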
Then comes the "real time" data. The stuff we cannot wait the hour for the data to be refreshed. For that we use service broker. Any of the "real time" tables get inserted, deleted or updated, an XML message gets sent out to the data warehouse. The data warehouse takes the XML, parses it to determine the operation performed, and then either inserts, updates or deletes the data in the data warehouse tables.
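The sending side looks roughly like the trigger below. It is only a sketch: every object, service, and contract name is invented, and it assumes the message type, contract, queues, and services are already set up.

-- hypothetical trigger on one of the "real time" tables
CREATE TRIGGER trg_Orders_ToWarehouse
ON dbo.Orders
AFTER INSERT, UPDATE, DELETE
AS
BEGIN
    SET NOCOUNT ON;

    -- build one XML message describing the operation and the rows
    DECLARE @msg XML =
        (SELECT CASE WHEN EXISTS (SELECT 1 FROM inserted)
                      AND EXISTS (SELECT 1 FROM deleted) THEN 'U'
                     WHEN EXISTS (SELECT 1 FROM inserted) THEN 'I'
                     ELSE 'D'
                END AS [@op],
                (SELECT * FROM inserted FOR XML PATH('row'), ROOT('new'), TYPE),
                (SELECT * FROM deleted  FOR XML PATH('row'), ROOT('old'), TYPE)
         FOR XML PATH('change'));

    -- one dialog per firing, for simplicity; see the note on
    -- conversation reuse further down before copying this
    DECLARE @h UNIQUEIDENTIFIER;
    BEGIN DIALOG CONVERSATION @h
        FROM SERVICE [//Prod/ChangeSenderService]
        TO SERVICE   '//DW/ChangeReceiverService'
        ON CONTRACT  [//DW/TableChangeContract]
        WITH ENCRYPTION = OFF;

    SEND ON CONVERSATION @h
        MESSAGE TYPE [//DW/TableChange] (@msg);
END;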
To ensure data consistency in the unlikely event that a Service Broker message gets stuck or lost (which has happened), we also do a nightly refresh of that data.
In our shop, Service Broker has been fairly reliable, and any time it has had issues it is usually because the DBA (i.e. me) didn't do proper error handling in the message-processing steps, like trying to insert a value into an identity column without first enabling identity insert, or violating a foreign key constraint. The odd time, messages seem to stall or take an exceptionally long time to process, but we notice that more on the test system (it has fewer resources), so it is expected.
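The processing side, with that error handling in place, is roughly the sketch below. Again, all names (queue, warehouse table, logging table) are invented, and a real activation procedure would loop on WAITFOR(RECEIVE ...) rather than handle a single message:

-- hypothetical activation procedure on the warehouse side
CREATE PROCEDURE dbo.ProcessTableChange
AS
BEGIN
    SET NOCOUNT ON;
    DECLARE @h UNIQUEIDENTIFIER, @body VARBINARY(MAX), @msg XML;

    RECEIVE TOP (1)
        @h    = conversation_handle,
        @body = message_body
    FROM dbo.ChangeReceiveQueue;

    IF @h IS NULL RETURN;  -- nothing to do
    SET @msg = CAST(@body AS XML);

    BEGIN TRY
        IF @msg.value('(/change/@op)[1]', 'CHAR(1)') = 'I'
        BEGIN
            SET IDENTITY_INSERT dbo.Orders_DW ON;  -- the identity gotcha
            INSERT INTO dbo.Orders_DW (OrderID, CustomerID)
            SELECT r.value('(OrderID)[1]',    'INT'),
                   r.value('(CustomerID)[1]', 'INT')
            FROM @msg.nodes('/change/new/row') AS x(r);
            SET IDENTITY_INSERT dbo.Orders_DW OFF;
        END
        -- 'U' and 'D' branches omitted for brevity
    END TRY
    BEGIN CATCH
        SET IDENTITY_INSERT dbo.Orders_DW OFF;
        -- log the failure and keep draining the queue instead of poisoning it
        INSERT INTO dbo.BrokerErrors (ErrorMessage, MessageBody)
        VALUES (ERROR_MESSAGE(), CAST(@msg AS NVARCHAR(MAX)));
    END CATCH;
END;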
If you do take the Service Broker route and you have a lot of activity on the tables you are monitoring, make sure you open only one conversation per table and reuse conversations where possible. Conversation reuse improved our performance drastically. In one system we average 50,000 messages per day, and when we had one conversation per message it was horrendously slow; so much so that we would have 20,000-30,000 messages queued up by the time I went home. With one conversation per table, I very rarely see more than 50 messages queued up, and when I do, they are processed by the time I re-run the SELECT on the queue.
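The reuse pattern is roughly this (invented names again): cache one dialog handle per table, and only open a new dialog when there isn't one cached yet.

-- one cached dialog per monitored table
CREATE TABLE dbo.BrokerDialogs (
    TableName SYSNAME PRIMARY KEY,
    Handle    UNIQUEIDENTIFIER NOT NULL);
GO
CREATE PROCEDURE dbo.SendTableChange
    @TableName SYSNAME,
    @msg       XML
AS
BEGIN
    SET NOCOUNT ON;
    DECLARE @h UNIQUEIDENTIFIER;

    SELECT @h = Handle FROM dbo.BrokerDialogs WHERE TableName = @TableName;

    IF @h IS NULL
    BEGIN
        BEGIN DIALOG CONVERSATION @h
            FROM SERVICE [//Prod/ChangeSenderService]
            TO SERVICE   '//DW/ChangeReceiverService'
            ON CONTRACT  [//DW/TableChangeContract]
            WITH ENCRYPTION = OFF;

        INSERT INTO dbo.BrokerDialogs (TableName, Handle)
        VALUES (@TableName, @h);
    END

    SEND ON CONVERSATION @h MESSAGE TYPE [//DW/TableChange] (@msg);
END;

(In real use you also need to evict the cached handle if the conversation errors out or is ended.)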
Is our method best practice? I am not sure, but it is what my boss prefers. He isn't a big fan of linked servers.
The above is all just my opinion on what you should do.
As with all advice you find on a random internet forum - you shouldn't blindly follow it. Always test on a test server to see if there are negative side effects before making changes to live!
I recommend you NEVER run "random code" you found online on any system you care about UNLESS you understand and can verify the code OR you don't care if the code trashes your system.
June 15, 2017 at 12:01 pm
Thank you for that. I have not looked at that type of solution, but I will take a look at that method.
The production lines are running all day, and the reports need to report on production data without hitting the production servers. I was going to use log shipping, and I actually got it to work; however, it places the secondary database in a read-only standby state, and while a log backup is being restored the reports can't run. That solution didn't seem like a good fit.
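For reference, this is the standby behaviour I ran into. Each log restore needs exclusive access to the database, so report sessions get kicked while it runs (the path and names are made up):

-- secondary is readable between restores, but not during them
RESTORE LOG ReportsDB
FROM DISK = N'\\backups\logs\ReportsDB_20170613.trn'
WITH STANDBY = N'D:\standby\ReportsDB_undo.bak';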
Thanks again for the idea.
Things will work out. Get back up, change some parameters and recode.