November 17, 2009 at 8:26 am
My company has just inherited a database solution that, for reasons beyond our control, has an ever-growing number of tables, currently over 1 million. The users have been complaining about a general slowdown in performance.
I have started with the usual Perfmon, Query Analyzer and DTA work, but I have a more general question.
Is there any immediate performance problem with having such a large number of tables? My feeling is that the main cause of the slowdown is the amount of RAM (currently 8 GB), since such a large number of tables must, I assume, put more pressure on memory than one table with a large number of rows.
Has anybody got any opinions?
Windows Server 2003 64-bit with SQL Server 2005 64-bit
Cheers
Daniel
November 17, 2009 at 8:46 am
At the very least, I would expect that to slow down metadata queries, which just about everything needs. Plus, you're going to have the execution plan cache absolutely jammed with what are probably very similar plans for very similar tables.
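Something like this would give a feel for it (just a rough sketch against the standard DMVs, nothing specific to this application) by showing how much cache is tied up in ad hoc plans versus procedures:

-- Rough sketch: how many plans are cached and how much memory they use,
-- broken down by type (Adhoc, Prepared, Proc, ...).
SELECT cp.objtype,
       COUNT(*) AS plan_count,
       SUM(cp.size_in_bytes) / 1048576.0 AS cache_mb
FROM sys.dm_exec_cached_plans AS cp
GROUP BY cp.objtype
ORDER BY cache_mb DESC;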
Of course, the first question is: what are all those tables? How different are they from one another?
- Gus "GSquared", RSVP, OODA, MAP, NMVP, FAQ, SAT, SQL, DNA, RNA, UOI, IOU, AM, PM, AD, BC, BCE, USA, UN, CF, ROFL, LOL, ETC
Property of The Thread
"Nobody knows the age of the human race, but everyone agrees it's old enough to know better." - Anon
November 17, 2009 at 8:46 am
You could try using partitioning.
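For example, if the per-company tables could ever be consolidated into a single table keyed by company, a partitioned layout might look roughly like this (Enterprise Edition only; the table, column and boundary values below are just placeholders, not anything from the actual system):

-- Hypothetical consolidation of per-company tables into one partitioned table.
CREATE PARTITION FUNCTION pf_CompanyId (int)
    AS RANGE RIGHT FOR VALUES (1000, 2000, 3000, 4000);

CREATE PARTITION SCHEME ps_CompanyId
    AS PARTITION pf_CompanyId ALL TO ([PRIMARY]);

CREATE TABLE dbo.LedgerEntry
(
    CompanyId int           NOT NULL,
    EntryNo   int           NOT NULL,
    Amount    decimal(18,2) NOT NULL,
    CONSTRAINT PK_LedgerEntry PRIMARY KEY CLUSTERED (CompanyId, EntryNo)
) ON ps_CompanyId (CompanyId);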
November 17, 2009 at 9:09 am
The first things I would look at are buffer cache hit rate and physical I/O.
If all the tables are being used, then with a page size of 8 KB and 1 million tables, that is your 8 GB of RAM gone.
What size are the records? Are the tables all the same?
On average, how many records does each table have?
This one is a bit much, but what is the life expectancy of a data page in the buffer cache?
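If it helps, both numbers can be pulled straight from the performance counter DMV, something like this (quick sketch only):

-- Buffer cache hit ratio has to be divided by its base counter to get a percentage.
SELECT [object_name], counter_name, cntr_value
FROM sys.dm_os_performance_counters
WHERE [object_name] LIKE N'%Buffer Manager%'
  AND counter_name IN (N'Buffer cache hit ratio',
                       N'Buffer cache hit ratio base',
                       N'Page life expectancy');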
November 17, 2009 at 9:13 am
Thanks for the replies
More information
Buffer cache hit ratio seems to be 90 - 96.
There is also an issue with high I/O on the data drive, which the systems team is looking at.
The solution is a Microsoft application (a financial application). The way it works is to recreate all the tables for each financial company added to the system. The way this company operates is to create a new company every 2 or 3 days, which adds 200 tables at a time. They now have over 5,000 companies, hence the 1 million tables.
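In case it's useful, this is roughly how I've been checking where the I/O stalls are (a standard DMV query, nothing specific to the application):

-- Per-file I/O stalls since the last restart; the worst offenders sort to the top.
SELECT DB_NAME(vfs.database_id) AS database_name,
       mf.physical_name,
       vfs.num_of_reads,
       vfs.io_stall_read_ms,
       vfs.num_of_writes,
       vfs.io_stall_write_ms
FROM sys.dm_io_virtual_file_stats(NULL, NULL) AS vfs
JOIN sys.master_files AS mf
  ON mf.database_id = vfs.database_id
 AND mf.file_id = vfs.file_id
ORDER BY vfs.io_stall_read_ms + vfs.io_stall_write_ms DESC;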
November 17, 2009 at 9:19 am
Daniel Wood (11/17/2009)
Thanks for the replies. More information
Buffer cache Ratio seems to be 90 - 96
There is also an issue with high IO on the data drive which the System team is looking at.
The solution is a Microsoft Navision application (a financial application). The way Navision works is to recreate all the tables for each financial company added to the system. The way this company operates is to create a new company every 2 or 3 days, which adds 200 tables at a time. They now have over 5,000 companies, hence the 1 million tables.
Why are they adding new companies every 2 or 3 days? Are these companies actual entities?
November 17, 2009 at 9:21 am
Not too sure; I think it's to do with financial/tax reasons.
November 18, 2009 at 3:20 am
I have a feeling that this may well be more a Business Process/Functional Analysis issue - rather than something to be addressed from the DB side.
Don't know about anyone else, but I have a nasty gut feeling that in order to reconcile the results of the process they're using, there's a leviathan spreadsheet lurking in the background that's some beancounter 'Excel whizz's' pride and joy. Probably of such an obfuscated and baroque construction that even the 'irreplaceable' constructor has little idea of what the heck's actually going on with it.
Then there's the overhead, in pure business terms, of having to regenerate the companies that often.
Might be politically tricky to get that agreed, though, as the Beancounter in Chief is probably going to lose staff as a result of a process amendment to something that does not require such an overhead. They only like that when it happens to other departments.
November 18, 2009 at 5:06 am
The companies are actual entities in a tax audit sense - they are creating a new company to keep revenue under a certain bracket to enable them to pay less tax and therefore make more money by skimming the 1% they save off the top.
It's not illegal or anything, it's one of those legal loophole exploit things I suppose, but in this case it's causing some serious issues with SQL Server. The question is: at what point does throwing more hardware at it become too expensive or simply not possible?
November 19, 2009 at 2:44 am
The system tables are very well tuned and SQL Server can definitely scale beyond 1 million tables. It will make using any GUI tool that enumerates tables an ongoing nightmare, but day-to-day operation should be possible. To be honest, the scale of the environment sounds massively beyond your hardware - what sort of database size are you looking at?
Doubling or tripling RAM would be a start, but I agree with everyone else that it would be a better use of time and money to re-examine your solution at the business level.
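To answer the size question, something along these lines run in the Navision database itself would give the table count and file sizes (just a sketch):

-- Table count plus data/log file sizes for the current database
-- (size is reported in 8 KB pages).
SELECT COUNT(*) AS table_count
FROM sys.tables;

SELECT name, type_desc, size * 8 / 1024 AS size_mb
FROM sys.database_files;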
November 19, 2009 at 7:56 pm
SQL may be able to handle that many tables. The problem is that these are tables being generated (if I understand correctly) through one of the Microsoft CRM products (Navision). The creation of these tables and the code-behind dynamically query the metadata - and the tables are not the most sound design (IMO). It is due to these issues that the slowness is becoming a problem.
CRM is not designed for a DBA to create the tables, procs, or indexes. CRM likes to handle all of that stuff itself - and it shouldn't.
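If the tables follow the usual Navision naming pattern of Company$TableName (an assumption on my part), something like this would show how the million tables break down per company:

-- Assumes object names look like '<Company>$<Table>'; adjust if the naming differs.
SELECT LEFT(name, CHARINDEX('$', name) - 1) AS company_name,
       COUNT(*) AS tables_for_company
FROM sys.tables
WHERE CHARINDEX('$', name) > 0
GROUP BY LEFT(name, CHARINDEX('$', name) - 1)
ORDER BY tables_for_company DESC;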
Jason...AKA CirqueDeSQLeil
_______________________________________________
I have given a name to my pain...MCM SQL Server, MVP
SQL RNNR
Posting Performance Based Questions - Gail Shaw
Learn Extended Events
November 21, 2009 at 8:54 am
Hardware is probably the only viable solution here. Lots of RAM will help to keep more pages of data in cache, but that will not get you very far if you have to access lots of those 'company' table sets regularly. CPUs will be important too. But by far the most important thing will be IO SPEED (not capacity). You need LOTS of spindles, well configured and tuned and with significant bandwidth to and from them. Or you need some SSDs.
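As a quick sanity check before spending on spindles or SSDs, the cumulative wait stats will tell you whether the box really is I/O bound (rough sketch; the exclusion list below is not exhaustive):

-- Top waits since the last restart; heavy PAGEIOLATCH_* waits point at the I/O subsystem.
SELECT TOP (10)
       wait_type,
       waiting_tasks_count,
       wait_time_ms
FROM sys.dm_os_wait_stats
WHERE wait_type NOT IN (N'SLEEP_TASK', N'LAZYWRITER_SLEEP',
                        N'SQLTRACE_BUFFER_FLUSH', N'BROKER_TASK_STOP')
ORDER BY wait_time_ms DESC;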
You are at the extreme end of the scale here - much like a client I have who has over 7000 databases on one server. There are things you need to do right to have any chance of success. I HIGHLY recommend you get a very good performance consultant in place to give your system a performance review and help get your system configured for optimal operations.
Normally when I make this recommendation to forum posters (which I do quite a bit actually since I think it is the best thing that numerous entities out there can do) I make a generic statement saying "get someone". In this case I will make an exception and say "you should consider using me" because I bring to bear the level of knowledge and experience you really need for such a configuration. I also am fortunate enough to be able to cherry pick the clients I take and your situation seems 'interesting' to me. 😎
In any case, best of luck with your system!!
Best,
Kevin G. Boles
SQL Server Consultant
SQL MVP 2007-2012
TheSQLGuru on googles mail service
November 23, 2009 at 9:51 am
Thanks for all your help, it is much appreciated.
November 23, 2009 at 12:03 pm
Depending on how much data needs to be available for everyday work, you might want to consider archiving some of it, since you must have a lot of data that is not even used. 5,000 companies / 2 per day = 2,500 days' worth of data in the same place... I doubt they need that much data online all the time. At most they might need some summary data to be available; the rest can go to an archive.
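A starting point for finding archive candidates might be the index usage DMV, something like this (it only covers activity since the last restart, and "unused" here is just my assumption of what could be archived):

-- Tables with no recorded user reads since the last restart: possible archive candidates.
SELECT t.name AS table_name
FROM sys.tables AS t
LEFT JOIN sys.dm_db_index_usage_stats AS us
  ON us.object_id = t.object_id
 AND us.database_id = DB_ID()
GROUP BY t.object_id, t.name
HAVING MAX(us.last_user_seek) IS NULL
   AND MAX(us.last_user_scan) IS NULL
   AND MAX(us.last_user_lookup) IS NULL
ORDER BY t.name;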
Regards,
Vasc
November 23, 2009 at 11:15 pm
Actually, I'd run like hell.
--Jeff Moden
Change is inevitable... Change for the better is not.