December 22, 2010 at 2:57 am
Hi,
We have a financial database of around 400 GB, and we do daily batch updates that take nearly 30 to 45 minutes. During this time a very important Price table is locked for half of the upload process because it has to populate calculated columns.
Management is asking for a database that is available 24/7, 365 days a year, while still taking the daily updates. How is that possible? Users should not be blocked at any point in time, and they should not get dirty reads either.
Can someone please suggest the best way of solving this problem?
Thanks
Nick
December 22, 2010 at 5:01 am
What is happening that keeps the Price table locked for so long? Price updates? Where do the changes come from?
I would consider a 2 table approach... 1 online table (current) and 1 offline table (can have updates and changes made). You will need some sort of control flag (additional table, perhaps) that tells which pricing table to use.
IF (SELECT ActiveTable FROM dbo.PriceControl) = 1   -- dbo.PriceControl and the procs are placeholder names
    EXEC dbo.usp_GetPrices_Table1;
ELSE IF (SELECT ActiveTable FROM dbo.PriceControl) = 2
    EXEC dbo.usp_GetPrices_Table2;
Something like that....
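A minimal sketch of the control table and the switch-over step; the names (dbo.PriceControl and the two price tables) are placeholders:

CREATE TABLE dbo.PriceControl (ActiveTable tinyint NOT NULL);  -- holds 1 or 2
INSERT INTO dbo.PriceControl (ActiveTable) VALUES (1);

-- after the offline table has been loaded and verified,
-- flip the flag so readers switch over:
UPDATE dbo.PriceControl SET ActiveTable = 2;

The flip itself is a single-row update, so the window where anyone waits is tiny.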
December 22, 2010 at 5:11 am
Thanks for your advice.
On the Price table the process does the following:
if a Price Date exists with a different value, it moves the data from the Main table to an archive table,
then updates the Main table with the new values,
then inserts completely new values into the Main table.
The source data comes from a different SQL database.
In your solution, when do you sync both tables? Our updates happen on a daily basis, and this would require a lot of changes at the application level.
Thanks
Nick
December 22, 2010 at 5:14 am
It depends on the process, but, if the update is done once a day, then on day 1 you update table 1 while table 2 is still in use. You then switch over to use table 1 and can then update table 2; that way both tables should always be the same.
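One way to implement that switch-over without application changes, assuming SQL Server 2005 or later, is to point the application at a synonym (all object names here are hypothetical):

-- the application only ever queries dbo.PriceCurrent
CREATE SYNONYM dbo.PriceCurrent FOR dbo.Price_1;

-- once dbo.Price_2 has been refreshed, repoint the synonym:
BEGIN TRANSACTION;
DROP SYNONYM dbo.PriceCurrent;
CREATE SYNONYM dbo.PriceCurrent FOR dbo.Price_2;
COMMIT;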
There are other options with regards to "trickle feed" of the data from the other SQL DB, etc.
December 22, 2010 at 8:35 am
Can you please let me know what the other solutions are?
What is the trickle feed?
Thanks
Nick
December 29, 2010 at 2:09 pm
Nick,
Delete unchanged records from the staging table prior to the mass update. This removes the "has price changed" logic from the Product table update.
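For example, a minimal sketch, assuming a staging table keyed the same way as the main table (dbo.PriceStaging, dbo.Price, and the key columns are placeholder names):

-- remove staging rows whose price has not changed,
-- so the mass update only touches rows that need it
DELETE s
FROM dbo.PriceStaging AS s
JOIN dbo.Price AS p
    ON p.ProductID = s.ProductID
   AND p.PriceDate = s.PriceDate
WHERE p.Price = s.Price;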
Dan
December 29, 2010 at 2:15 pm
Have you considered one of the snapshot isolation levels?
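For reference, a minimal sketch of enabling them (the database name is a placeholder); be aware that row versioning puts extra load on tempdb:

ALTER DATABASE FinanceDB SET ALLOW_SNAPSHOT_ISOLATION ON;
-- readers then opt in per session:
SET TRANSACTION ISOLATION LEVEL SNAPSHOT;

-- or change the default READ COMMITTED behaviour for every session:
ALTER DATABASE FinanceDB SET READ_COMMITTED_SNAPSHOT ON;  -- needs the DB free of other active connections to switch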
Gail Shaw
Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability
January 3, 2011 at 3:09 am
The usual easy way to fix this problem is to upsert your data in small batches so the lock times are tiny and users won't notice anything (if you have a clustered index and manage to take only the smallest locks possible).
The total run time of your job might be longer, but you won't lock everything down while you do it.
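A minimal sketch of the batching pattern, with hypothetical table names and a batch size you would tune:

DECLARE @batch int = 5000;
WHILE 1 = 1
BEGIN
    -- each pass updates at most @batch rows, so locks stay small and short-lived
    UPDATE TOP (@batch) p
       SET p.Price = s.Price
      FROM dbo.Price AS p
      JOIN dbo.PriceStaging AS s
        ON s.ProductID = p.ProductID
       AND s.PriceDate = p.PriceDate
     WHERE p.Price <> s.Price;

    IF @@ROWCOUNT = 0 BREAK;  -- nothing left to change
END;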
You could also have a look at partitioned tables (load the new data in a table and then switch it to the main table once it's ready).
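The switch itself is a metadata-only operation, so it completes almost instantly; a sketch, assuming the main table is partitioned (all names hypothetical). Note that table partitioning requires Enterprise Edition on SQL Server 2005/2008:

-- load and index dbo.PriceStage offline, with a check constraint
-- matching the boundaries of the target partition, then:
ALTER TABLE dbo.PriceStage SWITCH TO dbo.Price PARTITION 2;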
January 3, 2011 at 6:12 am
Nick123-481228 (12/22/2010)
Thanks for your advice. On the Price table the process does the following:
if a Price Date exists with a different value, it moves the data from the Main table to an archive table,
then updates the Main table with the new values,
then inserts completely new values into the Main table.
The source data comes from a different SQL database.
In your solution, when do you sync both tables? Our updates happen on a daily basis, and this would require a lot of changes at the application level.
Thanks
Nick
Gosh... I feel your pain but there's still not much anyone knows about your data or metadata. I recommend you start by taking a look at the article at the last link in my signature line below. You'll probably get better help that way. Otherwise, one guess is as good as another. Please be sure to tell us which version (Enterprise, Standard, ???) you're using, as well.
--Jeff Moden
Change is inevitable... Change for the better is not.
January 3, 2011 at 6:15 am
How about using a staging table to prepare all your new data, and only at the very last step merge the data into your target table?
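A minimal sketch using MERGE, assuming SQL Server 2008 or later and hypothetical table/column names:

MERGE dbo.Price AS tgt
USING dbo.PriceStaging AS src
   ON tgt.ProductID = src.ProductID
  AND tgt.PriceDate = src.PriceDate
WHEN MATCHED AND tgt.Price <> src.Price THEN
    UPDATE SET tgt.Price = src.Price
WHEN NOT MATCHED BY TARGET THEN
    INSERT (ProductID, PriceDate, Price)
    VALUES (src.ProductID, src.PriceDate, src.Price);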
Johan
Learn to play, play to learn !
Dont drive faster than your guardian angel can fly ...
but keeping both feet on the ground wont get you anywhere :w00t:
- How to post Performance Problems
- How to post data/code to get the best help
- How to prevent a sore throat after hours of presenting ppt
press F1 for solution, press shift+F1 for urgent solution 😀
Need a bit of Powershell? How about this
Who am I ? Sometimes this is me but most of the time this is me
January 3, 2011 at 10:16 am
Hi All,
Thanks for your replies.
We are using SQL Server Standard edition, and currently we are using a staging table.
I have brought the transfer time down to 15 minutes, but management is asking for NO downtime, not even for a few minutes... still thinking of the best way.
Thanks
Nick
January 3, 2011 at 10:43 am
You may wish to start by reading --
http://msdn.microsoft.com/en-us/library/ee523927(SQL.100).aspx.
January 3, 2011 at 10:59 am
GilaMonster (12/29/2010)
Have you considered one of the snapshot isolation levels?
January 11, 2011 at 2:20 am
Hi ..
Try to create replication for that table.
i.e.: create a new database and configure transactional replication between the tables.
Do all your bulk updates in the new DB and replicate them to the active DB table. Configure replication to run every minute, so your DB is up to date with a one-minute lag.
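A heavily simplified sketch of the publication side, assuming a distributor is already configured (all names are placeholders; a real setup also needs the snapshot and log reader agents scheduled):

USE LoadDB;
EXEC sp_replicationdboption @dbname = N'LoadDB', @optname = N'publish', @value = N'true';
EXEC sp_addpublication @publication = N'PricePub', @status = N'active';
EXEC sp_addarticle @publication = N'PricePub', @article = N'Price', @source_object = N'Price';
EXEC sp_addsubscription @publication = N'PricePub', @subscriber = N'ActiveServer', @destination_db = N'ActiveDB';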