September 13, 2011 at 12:12 pm
Avalin
I will consider your inputs and discuss with my colleagues whether we should do this or not. It seems we should think it through many times before we do.
Unfortunately you seem to have already made up your mind, and appear frustrated because ninja and Steve don't endorse your idea.
Remember, your two databases serve different functions: one, your historic database, is for reporting/data analysis, while I assume the other is an OLTP database. The system requirements for each are different, and each utilizes resources differently.
I further assume, based on what you are writing, that the current OLTP database consists of one data file and one transaction log. You do not specify how your system is configured, which leads me to wonder whether your t-log and data file are on the same physical drives.
The size of your database is actually quite small, but it still needs room to operate properly. Adding the OLAP data to the OLTP data can be done if the server can support it. One thing, as Steve Jones alluded to, is that you would have separate data files for the OLAP portion of your database and place them on separate physical drives. You MUST also keep your tempdb separate from all of it, and even then this can still be a problem, since both systems will now be competing for tempdb, whose activity cannot be separated between OLAP and OLTP. Additionally, if something happens to either piece of the DB, ALL of your users will be down while you perform proper troubleshooting and initiate repairs.
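As a rough sketch of the file layout described above (the database name, filegroup name, and drive letter are all hypothetical), the OLAP portion would get its own filegroup on its own spindle:

```sql
-- Hypothetical example: give the OLAP data its own filegroup on a
-- separate physical drive from the OLTP data file and the t-log.
ALTER DATABASE Sales ADD FILEGROUP OLAP_FG;

ALTER DATABASE Sales ADD FILE
(
    NAME = Sales_OLAP_1,
    FILENAME = N'E:\SQLData\Sales_OLAP_1.ndf',  -- separate spindle
    SIZE = 10GB
)
TO FILEGROUP OLAP_FG;

-- Historic tables are then created on that filegroup, e.g.:
-- CREATE TABLE dbo.SalesHistory (...) ON OLAP_FG;
```

Note that this separates only the data files; as the post points out, tempdb contention between the two workloads remains.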
If you, and your users can live with your decision, then go for it. But I stand with Ninja and say it isn't worth it. You will gain nothing except the perception that life will be easier, and you will lose much more.
Steve Jimmo
Sr DBA
“If we ever forget that we are One Nation Under God, then we will be a Nation gone under." - Ronald Reagan
September 13, 2011 at 12:50 pm
I think you're getting excellent advice here. I work with a data warehouse daily and it would not work at all to have it on the production server.
If you look up data warehouse design, most of the time the warehouse is shown on its own server, not just in another database. At the very least, it is assumed that the warehouse is in its own database.
p.s., Very well said Steve. 🙂
The greatest enemy of knowledge is not ignorance, it is the illusion of knowledge. - Stephen Hawking
September 13, 2011 at 12:58 pm
To echo what others have said - you have two different databases with two different functions. I would keep them that way.
Jason...AKA CirqueDeSQLeil
_______________________________________________
I have given a name to my pain...MCM SQL Server, MVP
SQL RNNR
Posting Performance Based Questions - Gail Shaw
Learn Extended Events
September 13, 2011 at 1:51 pm
Steve Jones - SSC Editor (9/13/2011)
They could even go to something like cloud services for production data and local servers for historical data.
Steve, please enlighten me, but why would one put the archive on a local server and the current stuff on the cloud? I'd be inclined to do it the other way around, if I were to consider this option at all.
September 13, 2011 at 2:59 pm
Jan Van der Eecken (9/13/2011)
Steve Jones - SSC Editor (9/13/2011)
They could even go to something like cloud services for production data and local servers for historical data.
Steve, please enlighten me, but why would one put the archive on a local server and the current stuff on the cloud? I'd be inclined to do it the other way around, if I were to consider this option at all.
The marketing pitch of the cloud is that it's up 24/7 and can't go down (hence I said pitch and not reality).
This is what is required of OLTP.
OLAP can usually tolerate being a little offline. The reports can be late (however annoying), but lost sales are another story.
September 13, 2011 at 3:09 pm
I would keep those separate in order to restore the live DB faster if the need arises.
September 13, 2011 at 3:32 pm
Let's start another thread for a debate on that. I'll open one up in this forum.
http://www.sqlservercentral.com/Forums/Topic1174524-373-1.aspx
September 13, 2011 at 3:55 pm
I think you've been given good advice. It is often worth the effort to split historic data from production data, because the advantages of separation are both large and clear. I can't imagine a situation where it would be worth the effort to merge the two if they are already separate.
Tom
September 13, 2011 at 4:14 pm
just saying ..
Would your business clients appreciate the apps not working while someone consumes historic data and scans all your lovely large tables, pushing the regular OLTP stuff out of memory?
Johan
Learn to play, play to learn !
Dont drive faster than your guardian angel can fly ...
but keeping both feet on the ground wont get you anywhere :w00t:
- How to post Performance Problems
- How to post data/code to get the best help
- How to prevent a sore throat after hours of presenting ppt
press F1 for solution, press shift+F1 for urgent solution 😀
Need a bit of Powershell? How about this
Who am I ? Sometimes this is me but most of the time this is me
September 13, 2011 at 6:11 pm
I've seen this done both ways successfully.
My personal favourite, in many circumstances (and assuming Enterprise Edition) is same database with sensible partitioning/file group/SAN arrangements. Proper configuration can make development and maintenance much easier, while retaining the advantages of quick recovery using online partial/piecemeal restore.
Ultimately though, whether it is worth the time and effort required to consolidate an existing two-database design is a decision only you can make.
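The online partial/piecemeal restore mentioned above might look something like the following sketch (Enterprise Edition, full recovery model assumed; the database, filegroup, and backup names are all made up for illustration):

```sql
-- 1. Restore PRIMARY and the current-data filegroup first, so the
--    OLTP portion of the database comes back online quickly.
RESTORE DATABASE Sales
    FILEGROUP = 'PRIMARY', FILEGROUP = 'Current_FG'
    FROM DISK = N'X:\Backup\Sales_full.bak'
    WITH PARTIAL, NORECOVERY;
RESTORE LOG Sales
    FROM DISK = N'X:\Backup\Sales_log.trn'
    WITH RECOVERY;
-- OLTP work can resume here; only queries touching the archive wait.

-- 2. Restore the archive filegroup online while the rest is in use.
RESTORE DATABASE Sales FILEGROUP = 'Archive_FG'
    FROM DISK = N'X:\Backup\Sales_full.bak'
    WITH NORECOVERY;
RESTORE LOG Sales
    FROM DISK = N'X:\Backup\Sales_log.trn'
    WITH RECOVERY;
```

This is the recovery advantage of the single-database design: the slow part of the restore (the large archive filegroup) no longer gates how quickly production comes back.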
Paul White
SQLPerformance.com
SQLkiwi blog
@SQL_Kiwi
September 13, 2011 at 6:18 pm
SQLkiwi (9/13/2011)
I've seen this done both ways successfully.
My personal favourite, in many circumstances (and assuming Enterprise Edition) is same database with sensible partitioning/file group/SAN arrangements. Proper configuration can make development and maintenance much easier, while retaining the advantages of quick recovery using online partial/piecemeal restore.
Ultimately though, whether it is worth the time and effort required to consolidate an existing two-database design is a decision only you can make.
And the difference between a Level III and a Level II on staff is thus clearly indicated. 😎
While everyone should strive to this level of understanding, not everyone has reached it. The two-database structure is much more easily inherited by other staff, even if the OP can acquire this knowledge in a reasonable amount of time.
Sorry Paul. While I agree with you that it's probably the best way to go, I've met way too many otherwise solid Level IIs who just don't 'get' partitioning and how to properly code against it, never mind handle filegroup restores in a DR situation. It's the trade-off between code complexity for best practice and staff inheritance / common knowledge base that usually drives these to two separate databases.
Never stop learning, even if it hurts. Ego bruises are practically mandatory as you learn unless you've never risked enough to make a mistake.
For better assistance in answering your questions | Forum Netiquette
For index/tuning help, follow these directions. | Tally Tables
Twitter: @AnyWayDBA
September 13, 2011 at 6:24 pm
Evil Kraig F (9/13/2011)
And the difference between a Level III and a Level II on staff is thus clearly indicated. 😎
While everyone should strive to this level of understanding, not everyone has. The two database structure is much more easily inheritable by other staff, even if the OP can get this knowledge in a reasonable amount of time.
Sorry Paul. While I agree with you it's probably the best way to go, I've met waay to many otherwise solid IIs who just don't 'get' partitioning and how to properly code against it, nevermind handle filegroup restores in a DR situation. It's a code complexity for best practice vs. staff inheritance and common knowledgebase issue that usually drives these to two separate databases.
I don't disagree at all; though that so many professional DBAs don't 'get' partitioning (a new feature in 2005!) is a bit depressing. Luckily, it seems 'commodity' DBAs will be free to acquire alternate employment when the environments they currently 'manage' migrate to the cloud :w00t:
Paul White
SQLPerformance.com
SQLkiwi blog
@SQL_Kiwi
September 14, 2011 at 3:03 am
SQL Kiwi (9/13/2011)
I've seen this done both ways successfully.
My personal favourite, in many circumstances (and assuming Enterprise Edition) is same database with sensible partitioning/file group/SAN arrangements. Proper configuration can make development and maintenance much easier, while retaining the advantages of quick recovery using online partial/piecemeal restore.
Ultimately though, whether it is worth the time and effort required to consolidate an existing two-database design is a decision only you can make.
While I agree that, given the availability of Enterprise Edition, and provided the larger log file of the combined database (with its correspondingly longer recovery time for the active partition, compared with a separate active database) is acceptable, it will sometimes be easier to develop from the start using partitioning than using two databases. However, I find it difficult to envisage any circumstances in which converting from a system already split into two databases could make development "much easier", particularly in cases like the present one, where the archiving feature is already implemented and working. And of course, if partitioning can't be used because of the increased cost of Enterprise Edition as against Standard Edition (or perhaps Express Edition, if the thing is to be deployed for multiple customers), the development cost of un-splitting would be large and a pure waste.
Tom
September 14, 2011 at 4:05 am
Jan Van der Eecken (9/13/2011)
Steve Jones - SSC Editor (9/13/2011)
They could even go to something like cloud services for production data and local servers for historical data.
Steve, please enlighten me, but why would one put the archive on a local server and the current stuff on the cloud? I'd be inclined to do it the other way around, if I were to consider this option at all.
Based on what they're saying, the current stuff is pretty small beans. It might serve better in the cloud.
However, if you have to query both at the same time, one in the cloud, one local... well, shoot me first. That would stink.
"The credit belongs to the man who is actually in the arena, whose face is marred by dust and sweat and blood"
- Theodore Roosevelt
Author of:
SQL Server Execution Plans
SQL Server Query Performance Tuning
September 14, 2011 at 4:08 am
I'm with Paul. I've done it both ways successfully, but the only way that splitting the database really worked well was when the structure was changed in the archive system so that it was optimized for a different set of queries. If it was just identical structures on both sides, it was a huge pain to maintain that way. Although the current database was frequently very small and very responsive, the situations where you had to create cross-database joins were a pain.
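For illustration, the cross-database pain looks something like this (the database, table, and column names here are all hypothetical): every query that spans current and archived rows has to reach across both databases.

```sql
-- Hypothetical example: a report covering current and archived orders
-- must union across two databases with identical structures.
SELECT o.OrderID, o.OrderDate, o.Amount
FROM   Sales.dbo.Orders AS o
WHERE  o.OrderDate >= '2011-01-01'
UNION ALL
SELECT h.OrderID, h.OrderDate, h.Amount
FROM   SalesArchive.dbo.Orders AS h
WHERE  h.OrderDate >= '2011-01-01';
```

Every such query now carries two schemas to keep in sync, and the optimizer cannot treat the two sources as one table the way it could with a partitioned design.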
"The credit belongs to the man who is actually in the arena, whose face is marred by dust and sweat and blood"
- Theodore Roosevelt
Author of:
SQL Server Execution Plans
SQL Server Query Performance Tuning