August 3, 2016 at 12:48 am
Hi all
Currently I am working on a university automation project. Our database has more than 400 tables. I am facing problems fetching huge record sets, and when many users access the database at the same time it becomes very slow (for example while publishing results or extracting large reports). Sometimes I get timeout errors. My manager suggested that we create a denormalized database and dump the data into it. Please give me suggestions on what points I should consider when creating the denormalized database. This table is only for report-extraction purposes.
Thanks,
v.s.satheesh
November 14, 2016 at 6:38 am
Hi
We would need to know more about the report requirements to offer any advice.
A starting point would be: how often and when is the report accessed, and by how many users?
How up to date does the data in the report need to be?
What are they doing with all the data? For instance, are they just exporting it into Excel for analysis?
Is the current database in use 24 hours a day, or only during office hours?
This should get you started...
November 14, 2016 at 7:43 am
vs.satheesh (8/3/2016)
Sometimes I get timeout errors.
That's likely the real problem. Such timeouts are normally because of code that has accidental many-to-many joins and folks trying to "do it all" in a single query. "Set based" code doesn't require everything to be done in a single query with a bazillion joins. "Divide'n'Conquer", "DRY" methods, and inline pre-aggregation usually negate the need for building aggregate tables that go out of date as soon as they're populated.
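For anyone reading along, here is a minimal sketch of the inline pre-aggregation idea described above. The table and column names (dbo.Students, dbo.Marks, dbo.Attendance and their columns) are made up for illustration; the point is that each detail set is reduced to one row per key before the final join, so the join cannot multiply rows the way the "one big query" version can.

-- Hypothetical tables, purely for illustration.
-- The "one big query" style below can explode into a many-to-many result
-- because Marks and Attendance both have many rows per student:
--
--   SELECT s.StudentID, s.Name, SUM(m.Marks), COUNT(a.AttendanceID)
--   FROM dbo.Students s
--   JOIN dbo.Marks      m ON m.StudentID = s.StudentID
--   JOIN dbo.Attendance a ON a.StudentID = s.StudentID
--   GROUP BY s.StudentID, s.Name;
--
-- Pre-aggregating each detail set to one row per student first keeps the
-- final join one-to-one and cheap:

WITH MarkTotals AS
(
    SELECT StudentID, TotalMarks = SUM(Marks)
    FROM dbo.Marks
    GROUP BY StudentID
),
AttendanceTotals AS
(
    SELECT StudentID, DaysPresent = COUNT(*)
    FROM dbo.Attendance
    GROUP BY StudentID
)
SELECT s.StudentID, s.Name, mt.TotalMarks, att.DaysPresent
FROM dbo.Students s
LEFT JOIN MarkTotals       mt  ON mt.StudentID  = s.StudentID
LEFT JOIN AttendanceTotals att ON att.StudentID = s.StudentID;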
--Jeff Moden
Change is inevitable... Change for the better is not.
November 14, 2016 at 8:18 am
Jeff Moden (11/14/2016)
vs.satheesh (8/3/2016)
Sometimes I get timeout errors.
That's likely the real problem. Such timeouts are normally because of code that has accidental many-to-many joins and folks trying to "do it all" in a single query. "Set based" code doesn't require everything to be done in a single query with a bazillion joins. "Divide'n'Conquer", "DRY" methods, and inline pre-aggregation usually negate the need for building aggregate tables that go out of date as soon as they're populated.
Often accidental very wide joins are caused by over-use of views, where the poor sucker using a view is a victim of someone else's bad design. While 3-way joins are reasonable, a 5-deep nest of views each doing a three-way join can expand into a 243-way join at run time. That is terrible design, the person who invented such a nest is utterly incompetent, and I have sympathy for the victims (often the victims are developers who are told by the "expert DBA" responsible for the mess that they have to use the views provided). When a too-wide join is done deliberately by someone trying to "do it all" in one query, that's not really "accidental"; describing it as "stupid" would be nearer the mark.
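To make that concrete, here is a hypothetical two-level example (made-up table and view names): each view looks like an innocent three-way join, but the second one already expands to five base tables when the optimizer unfolds it, and the growth is multiplicative with every extra level of nesting.

CREATE VIEW dbo.vStudentCourses AS   -- looks like a simple 3-table join
SELECT s.StudentID, e.CourseID, e.Term
FROM dbo.Students   s
JOIN dbo.Enrolments e ON e.StudentID = s.StudentID
JOIN dbo.Courses    c ON c.CourseID  = e.CourseID;
GO

CREATE VIEW dbo.vCourseResults AS    -- "just" 3 objects, but 5 base tables at run time
SELECT sc.StudentID, sc.CourseID, r.Grade, t.TermName
FROM dbo.vStudentCourses sc
JOIN dbo.Results r ON r.StudentID = sc.StudentID
                  AND r.CourseID  = sc.CourseID
JOIN dbo.Terms   t ON t.Term      = sc.Term;
GO

-- A report that stacks a few more views on top of vCourseResults multiplies
-- the base-table count again at every level, which is how a "3-way join"
-- quietly turns into dozens or hundreds of joins and starts timing out.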
Tom