September 13, 2009 at 6:54 am
hi,
We are developing a web application which will serve 1000K users simultaneously. The front end for this application is under development using ASP.NET 3.5. The application is already in production; it was developed in classic ASP, and the database is SQL Server 2005.
The overall database size is 4 GB, and there are four highly transactional tables which contain 50% of the database's data.
Now my question is: what should our database design be in order to support such a large user base? We are evaluating different strategies like
1) Clustering
2) Dividing the database into two separate databases hosted on different servers, so that requests can be distributed.
3) Placing the highly transactional tables on separate filegroups and distributing those filegroups across separate logical drives of a RAID array.
Please suggest if any other strategy could be helpful to satisfy this requirement.
September 13, 2009 at 7:07 am
chandio11 (9/13/2009)
1) Clustering
2) Dividing the database into two separate databases hosted on different servers, so that requests can be distributed.
3) Placing the highly transactional tables on separate filegroups and distributing those filegroups across separate logical drives of a RAID array.
Please suggest if any other strategy could be helpful to satisfy this requirement.
1. Clustering is a high-availability mechanism wherein if one of your nodes fails, the other node takes over from it. At any given time only one node is active.
2. If you are running Enterprise Edition, you may want to consider peer-to-peer replication, which can do load balancing for you. You need to carefully partition your data horizontally between the two servers.
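A minimal sketch of the kind of horizontal split this implies (the table, column, and key range are made up for illustration): each server owns a disjoint key range, enforced with a CHECK constraint so rows cannot land on the wrong node.

```sql
-- On server A: owns CustomerID 1 through 500000 (hypothetical range).
CREATE TABLE dbo.Orders
(
    OrderID    INT IDENTITY PRIMARY KEY,
    CustomerID INT      NOT NULL,
    OrderDate  DATETIME NOT NULL,
    -- The range constraint is what makes the horizontal partition enforceable.
    CONSTRAINT CK_Orders_Range CHECK (CustomerID BETWEEN 1 AND 500000)
);

-- On server B: identical schema, but the complementary range, e.g.
-- CONSTRAINT CK_Orders_Range CHECK (CustomerID BETWEEN 500001 AND 1000000)
```

The application (or a routing layer) then sends each request to the server that owns that customer's range, while peer-to-peer replication keeps shared reference data in sync.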
3. Keep your tables and their associated indexes in different filegroups on separate spindles or RAID arrays so that read/write throughput is maximized. Also keep your log files on separate spindles from your data files.
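A sketch of what that looks like in T-SQL (database name, file paths, sizes, and the table are all assumptions for illustration; each filegroup's file should sit on its own physical volume):

```sql
-- Create filegroups for hot tables and their indexes (hypothetical names/paths).
ALTER DATABASE MyAppDB ADD FILEGROUP FG_HotTables;
ALTER DATABASE MyAppDB ADD FILEGROUP FG_HotIndexes;

ALTER DATABASE MyAppDB ADD FILE
    (NAME = N'MyAppDB_HotData', FILENAME = N'E:\Data\MyAppDB_HotData.ndf', SIZE = 1GB)
    TO FILEGROUP FG_HotTables;
ALTER DATABASE MyAppDB ADD FILE
    (NAME = N'MyAppDB_HotIdx', FILENAME = N'F:\Index\MyAppDB_HotIdx.ndf', SIZE = 1GB)
    TO FILEGROUP FG_HotIndexes;

-- Place a hot table on one filegroup and its nonclustered index on another,
-- so table scans and index seeks hit different spindles.
CREATE TABLE dbo.OrderDetail
(
    OrderDetailID INT IDENTITY PRIMARY KEY,
    OrderID       INT NOT NULL,
    Qty           INT NOT NULL
) ON FG_HotTables;

CREATE NONCLUSTERED INDEX IX_OrderDetail_OrderID
    ON dbo.OrderDetail (OrderID) ON FG_HotIndexes;
```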
September 13, 2009 at 9:11 am
chandio11 (9/13/2009)We are developing a web application which will serve 1000K users simultaneously.
Shall we understand "1000K users simultaneously" as in one million concurrent requests to the database?
If yes... call Microsoft and ask for help.
chandio11 (9/13/2009)The overall database size is 4 GB
Easy money: pin the database in memory.
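In practice, "pinning" a database this small mostly means giving SQL Server enough dedicated memory that the buffer pool never has to evict its pages. A hedged sketch (the memory figures are assumptions for a server with, say, 8 GB of RAM; tune them for your hardware):

```sql
-- Reserve enough memory that the whole 4 GB database stays resident
-- in the buffer pool. Values below are illustrative, not recommendations.
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'min server memory (MB)', 6144;
RECONFIGURE;
EXEC sp_configure 'max server memory (MB)', 6144;
RECONFIGURE;
```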
_____________________________________
Pablo (Paul) Berzukov
Author of Understanding Database Administration available at Amazon and other bookstores.
Disclaimer: Advice is provided to the best of my knowledge, but no implicit or explicit warranties are provided. Since the advisor explicitly encourages testing any and all suggestions in a non-production test environment, the advisor should not be held liable or responsible for any actions taken based on the given advice.
September 13, 2009 at 9:42 am
The overall database size is 4 GB, and there are four highly transactional tables which contain 50% of the database's data.
Now my question is: what should our database design be in order to support such a large user base? We are evaluating different strategies like.
You need to redesign this part of your application, because in .NET 3.5 you can propagate transactions from System.Transactions code through ADO.NET transactions to SQL Server transactions; the ADO.NET transactions come with async classes, or you can use the Enterprise Library. If you choose SQL Server 2008, it implements an ANSI SQL transaction feature that allows more than one transaction at the point of connection. So I am thinking this part of your application should be rewritten to use current features to address possible transactional performance issues.
Kind regards,
Gift Peddie
September 13, 2009 at 4:35 pm
When you say support, do you mean performance, availability, or both? Clustering only helps with availability; it doesn't address performance at all. In fact, it might provide worse availability than something like mirroring, but that depends on your structure.
The performance issue is best addressed by getting faster hardware and trying to better tune your code up front. Beyond that, I'd look at replication or other architectural decisions, but I'd first look to better tune the existing code.
September 14, 2009 at 8:06 am
You're talking about a 4 GB database? As in gigabytes?
That's teeny-tiny. Having a million people connecting to it is pretty scary, but you should be able to put the entire database, including indexes, on a server with 16 GB of memory and never, ever have to read anything from disk.
Or are you talking about a 4 TB database? As in terabytes? Then you need to look into options 2 and 3, as well as worry about the overall design (not mentioned in your concerns), indexing strategies, query mechanisms, and more. As was already stated, clustering does nothing for performance. It's all about high availability.
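One way to verify the database really is being served from RAM is to look at how much of the buffer pool each database occupies (this DMV exists in SQL Server 2005 and later; each buffered page is 8 KB):

```sql
-- Buffer pool usage per database; if cached_mb approaches the database's
-- total size, reads are effectively all coming from memory, not disk.
SELECT DB_NAME(database_id)  AS database_name,
       COUNT(*) * 8 / 1024   AS cached_mb
FROM   sys.dm_os_buffer_descriptors
GROUP  BY database_id
ORDER  BY cached_mb DESC;
```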
"The credit belongs to the man who is actually in the arena, whose face is marred by dust and sweat and blood"
- Theodore Roosevelt
Author of:
SQL Server Execution Plans
SQL Server Query Performance Tuning