October 15, 2011 at 4:34 am
I have a customer with a customer-facing solution containing a large amount of end-user data.
The data is archived on behalf of the end-users, so the solution is an ever-growing database.
Data in the system is dated by nature, and no end-user data changes while it is in the system, except when an end-user deletes his own data.
The end-user can place his data in views.
Views and these kinds of changes are stored in a different database (containing only this metadata).
The customer has decided to migrate the low-activity data (data older than 3-6 months) from a DB2 system on z/OS to SQL Server 2008 R2.
The current size of the database is around 7 TB, and it is expected to grow to 35-50 TB over the coming months (due to the movement of data from DB2 to SQL Server).
Currently there are 3,500,000 end-users on the system.
As an enterprise and solution architect I have been given the task of steering a project around handling the recovery of this database.
I haven't been a DBA myself for many years, and when I stopped being a DBA, databases of 10 GB were considered large on Windows and OS/2.
Adding to the long-term challenge, the customer intends to expand his business, so we could be facing growth from the current 3.5 million end-users to 7-10 million users in a relatively short timeframe.
And if his expansion succeeds, we could face a system containing 50-100 times more users and data than we have today.
The database contains a large amount of end-user data, but access to this data is very rare: 98% of accesses are to data younger than 6 months, and those accesses primarily go to DB2 on z/OS.
We use IBM Tivoli Storage Manager (TSM) for backup and recovery.
Behind TSM we use different kinds of virtual tape systems and Sun SL8500 tape libraries using T10000 tape drives.
I am looking for ideas and feedback that can help me steer this project in the right direction and make sure that I ask the specialists the right questions in our common search for a solution for the customer.
So any good ideas, feedback, links, and goodness knows what else I need are more than welcome.
October 15, 2011 at 5:21 am
* Use partitioning (based on date) for the tables holding the most data.
* Use data and backup compression to reduce the size of the database and thereby reduce the backup size and recovery time.
* Compare backup/recovery using TSM against native SQL Server 2008 compressed backup/recovery.
* Use different filegroups to store the appropriate table partitions, and use indexed views for reads.
* Schedule tasks on a regular basis to refresh the views as the data grows.
* Use database mirroring / log shipping to maintain a warm standby for disaster recovery and to restore service quickly.
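The date-based partitioning and filegroup points above can be sketched roughly like this. All object names, filegroups, and boundary dates here are hypothetical; the real layout must follow the actual schema and retention rules:

```sql
-- Date-based partition function; RANGE RIGHT so each boundary value
-- starts a new period.
CREATE PARTITION FUNCTION pfUserDataByDate (datetime)
AS RANGE RIGHT FOR VALUES ('2011-01-01', '2011-04-01', '2011-07-01', '2011-10-01');

-- Map each partition to its own filegroup so cold data can live on
-- cheaper storage and be backed up separately from active data.
CREATE PARTITION SCHEME psUserDataByDate
AS PARTITION pfUserDataByDate
TO (FG_Old, FG_2011Q1, FG_2011Q2, FG_2011Q3, FG_2011Q4);

-- Hypothetical end-user data table, page-compressed to cut size
-- (data compression requires Enterprise Edition on 2008 R2).
CREATE TABLE dbo.UserData
(
    UserId    bigint         NOT NULL,
    CreatedOn datetime       NOT NULL,
    Payload   varbinary(max) NULL
)
ON psUserDataByDate (CreatedOn)
WITH (DATA_COMPRESSION = PAGE);
```

With monthly or quarterly boundaries, new partitions are then added over time with ALTER PARTITION FUNCTION ... SPLIT RANGE as data arrives.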
These are my two cents. A better bet for you would be to hire a consultant and take it from there.
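For the warm-standby suggestion, a database mirroring session is established roughly as follows (server names and port are placeholders, and the endpoint/security setup is omitted; this is a sketch, not a full procedure):

```sql
-- On the mirror server, after restoring the database and logs WITH NORECOVERY:
ALTER DATABASE BigUserDB SET PARTNER = 'TCP://PrincipalSrv.corp.local:5022';

-- Then on the principal server:
ALTER DATABASE BigUserDB SET PARTNER = 'TCP://MirrorSrv.corp.local:5022';

-- High-performance (asynchronous) mode, to limit the latency impact
-- of mirroring a multi-TB database:
ALTER DATABASE BigUserDB SET PARTNER SAFETY OFF;
```

Note that asynchronous mirroring requires Enterprise Edition; log shipping works on Standard Edition and may be sufficient for a warm standby.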
October 15, 2011 at 5:50 am
I would look at filegroups and partitioning for your backups. It's difficult to say whether this would be suitable without knowing your applications and their logic.
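One concrete payoff of that approach, sketched with hypothetical database and filegroup names: once historical data sits in its own read-only filegroup, it only needs to be backed up once, and routine backups can cover just the read-write filegroups:

```sql
-- Freeze the historical filegroup and take a one-off filegroup backup
-- (backup compression requires Enterprise Edition on SQL 2008,
-- Standard or higher on 2008 R2):
ALTER DATABASE BigUserDB MODIFY FILEGROUP FG_Old READ_ONLY;
BACKUP DATABASE BigUserDB
    FILEGROUP = 'FG_Old'
    TO DISK = 'X:\Backup\BigUserDB_FG_Old.bak'
    WITH COMPRESSION;

-- From then on, partial backups skip all read-only filegroups:
BACKUP DATABASE BigUserDB READ_WRITE_FILEGROUPS
    TO DISK = 'X:\Backup\BigUserDB_partial.bak'
    WITH COMPRESSION;
```

At 35-50 TB, this is likely the difference between a workable backup window and an impossible one, since only the active few months of data get backed up regularly.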