May 20, 2011 at 8:40 am
If only it were that simple! What seems like a good idea with a couple of rows in each table turns into a nightmare five years later!
Without breaking confidentiality: imagine a ticketing system for a concert where you can book tickets for each seat. Rather than establishing a price for each seat, every time a ticket enquiry is made the price is calculated based on such things as the seat position, day of the week, name of show, time of show, how close you are to the ice creams, what colour the seat is, and so on. There are a couple of hundred procs that may or may not participate in this (every time), plus 30 permanent tables which have data placed into them and taken back out again as part of the calculations (the data can be accessed by multiple procs by passing ids between them).
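To make that concrete, here's a minimal sketch of the pattern described above (all object names are hypothetical, and the real calculation logic is stood in by a constant): one proc writes intermediate rows into a permanent table, then hands an id to the next proc, which reads the rows back out and cleans up.

    -- Permanent table used as shared scratch space (not a temp table).
    CREATE TABLE dbo.PriceCalcWork
    (
        CalcId      int          NOT NULL,  -- handed from proc to proc
        SeatId      int          NOT NULL,
        FactorName  varchar(50)  NOT NULL,  -- e.g. 'SeatPosition', 'SeatColour'
        FactorValue decimal(9,2) NOT NULL
    );
    GO

    CREATE PROCEDURE dbo.CalcSeatFactors @CalcId int, @SeatId int
    AS
    BEGIN
        -- Writes its intermediate results into the shared permanent table.
        INSERT dbo.PriceCalcWork (CalcId, SeatId, FactorName, FactorValue)
        VALUES (@CalcId, @SeatId, 'SeatPosition', 1.25);  -- stand-in for the real calculation
    END;
    GO

    CREATE PROCEDURE dbo.SumSeatPrice @CalcId int, @Price decimal(9,2) OUTPUT
    AS
    BEGIN
        -- A later proc finds "its" rows by the passed id, aggregates them,
        -- then takes the data back out again.
        SELECT @Price = SUM(FactorValue)
        FROM dbo.PriceCalcWork
        WHERE CalcId = @CalcId;

        DELETE dbo.PriceCalcWork WHERE CalcId = @CalcId;
    END;
    GO

Multiply that by a couple of hundred procs and 30 scratch tables and you have the nightmare.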
No, we don't do tickets, but it's the best way to describe the process. Sadly, you have to do this for every report too.
From the DBA's point of view, the schema and the data held don't reflect the way the data needs to be used by the system: any query with fewer than 32 table joins is a bonus! I have procs with so many statements in them that SSMS will crash if you try to show the query plan graphically. You might like to try it yourself; for me, SSMS usually gives up at around query 126, and that's probably only halfway through!
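If anyone wants to poke at a plan that big without the graphical viewer, one workaround is to ask SQL Server for the estimated plan as XML instead (standard T-SQL; dbo.MonsterProc is a hypothetical stand-in for one of my procs):

    -- With SHOWPLAN_XML ON, the next batch returns its estimated plan as an
    -- XML document instead of executing; the SET must be alone in its batch.
    SET SHOWPLAN_XML ON;
    GO
    EXEC dbo.MonsterProc;  -- hypothetical proc name: planned, not executed
    GO
    SET SHOWPLAN_XML OFF;
    GO

You still get the whole plan back; it just arrives as XML you can save and inspect later rather than something SSMS has to render in one go.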
sigh!
[font="Comic Sans MS"]The GrumpyOldDBA[/font]
www.grumpyolddba.co.uk
http://sqlblogcasts.com/blogs/grumpyolddba/
May 20, 2011 at 8:53 am
I tested this in my test environment with a 468 GB principal database using high-safety (synchronous) mirroring with automatic failover, and it worked like a charm. It took 10 seconds to create the snapshot, and the .ss file was 448 GB. If your mirror is a virtual machine, beware of the disk space; mine was a physical box.
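For anyone who wants to repeat the test, the snapshot is created on the mirror with the standard CREATE DATABASE ... AS SNAPSHOT OF syntax (database and file names below are made up; the logical NAME must match the source database's logical data file name):

    -- A database snapshot uses an NTFS sparse file, so the .ss file is created
    -- almost instantly and only consumes real space as pages in the source
    -- database change. Tools can report its size as the full source size,
    -- which is why disk space on a virtual mirror deserves a second look.
    CREATE DATABASE Principal468GB_Snap
    ON ( NAME = Principal468GB_Data,
         FILENAME = 'D:\Snapshots\Principal468GB_Data.ss' )
    AS SNAPSHOT OF Principal468GB;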
Rohan Joackhim