June 20, 2013 at 10:04 am
Sai Viswanath (6/19/2013)
Hi Jan, I had raised this question with the development team as well, but they could not give me a satisfactory answer. The tables hold huge amounts of data, and that's one of the issues. Also, could you point out which part of the output has the incorrect structure?
Regards,
Sai Viswanath
The tables that have huge data - could you clarify?
What do you consider to be huge?
What does the client consider to be huge?
To me, huge might mean that I have 1TB of data in the table. And seeing the drop/create of tables to move this data around would indicate to me that you are going to experience pain in that process.
With this kind of process (drop/create of tables) - I can't see how partitioning even came into the topic of discussion.
You also now know the procedures that are executing - you can look to tune those procedures.
As for the code coming from Access or across the linked server, you can tune that too.
Jason...AKA CirqueDeSQLeil
_______________________________________________
I have given a name to my pain...MCM SQL Server, MVP
SQL RNNR
Posting Performance Based Questions - Gail Shaw
Learn Extended Events
June 20, 2013 at 10:25 am
SQLRNNR (6/20/2013)
You also now know the procedures that are executing - you can look to tune those procedures. As for the code coming from Access or across the linked server, you can tune that too.
Anything can be tuned. You just need the right fork. @=)
June 20, 2013 at 11:23 am
Brandie Tarvin (6/20/2013)
SQLRNNR (6/20/2013)
You also now know the procedures that are executing - you can look to tune those procedures. As for the code coming from Access or across the linked server, you can tune that too.
Anything can be tuned. You just need the right fork. @=)
I think I found water with that fork.
Jason...AKA CirqueDeSQLeil
_______________________________________________
I have given a name to my pain...MCM SQL Server, MVP
SQL RNNR
Posting Performance Based Questions - Gail Shaw
Learn Extended Events
June 20, 2013 at 2:29 pm
Sai Viswanath (6/19/2013)
Hi All, Thanks for the comments, and here's the code that I could capture from SQL Server Profiler when I worked with a report. The tables it refers to do not have any primary keys, nonclustered indexes are defined on almost all the columns, and one column or a group of columns will be chosen for a unique clustered index.
And they want to add partitioning to this to fix it... in 10 days. Are these tables heaps to boot?
Save yourself the headache. If they're insisting that you have 10 days to fix it with a method that doesn't fix anything of this nature, cancel the contract, citing the inability to complete within the allotted timeframe while abiding by the contractee's arbitrary rules.
After that, run away, find a bar, drink two beers. Then walk away whistling. That doesn't need partitioning; it needs a ground-level overhaul of the data flow logic and the process rebuilt from there.
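If those tables really are heaps, the first step of that overhaul is usually picking a narrow, stable key and building a unique clustered index on it. A minimal sketch - the table and column names here are made up for illustration:

```sql
-- Hypothetical heap with nonclustered indexes on nearly every column.
-- Step 1: drop the redundant nonclustered indexes first (each surviving
-- one gets rebuilt when the clustered index is created, so prune early).
DROP INDEX IX_ReportData_Col1 ON dbo.ReportData;

-- Step 2: build a unique clustered index on a narrow, stable key.
-- Every nonclustered index will carry this key, so keep it small.
CREATE UNIQUE CLUSTERED INDEX CIX_ReportData
    ON dbo.ReportData (ReportDate, ReportID);

-- Step 3: re-create only the nonclustered indexes that actual query
-- plans need, ideally with INCLUDE columns instead of one index per column.
CREATE NONCLUSTERED INDEX IX_ReportData_Customer
    ON dbo.ReportData (CustomerID)
    INCLUDE (Amount);
```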
Partitioning does basically 2 things:
1) Eases archiving
2) Simplifies some ETL manipulations, but you have to build around it from the core of the process out. You can't add it in later, and it's restrictive.
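For what it's worth, the archiving and ETL benefit in points 1 and 2 comes from partition switching - a metadata-only swap - and it only works when the partition scheme is designed in up front, which is exactly why you can't bolt it on later. A rough sketch, with made-up table and boundary values:

```sql
-- Partition by month on a date column.
CREATE PARTITION FUNCTION pfMonthly (datetime)
    AS RANGE RIGHT FOR VALUES ('2013-05-01', '2013-06-01', '2013-07-01');

CREATE PARTITION SCHEME psMonthly
    AS PARTITION pfMonthly ALL TO ([PRIMARY]);

-- The table (and its aligned indexes) must be created ON the scheme
-- from day one - this is the "build around it from the core" part.
CREATE TABLE dbo.SalesFact
(
    SaleDate datetime NOT NULL,
    SaleID   int      NOT NULL,
    Amount   money    NOT NULL
) ON psMonthly (SaleDate);

-- Archiving: switch an old partition out to an identically structured
-- staging table. Metadata-only, effectively instant, no data movement.
ALTER TABLE dbo.SalesFact
    SWITCH PARTITION 2 TO dbo.SalesFact_Archive;
```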
You'd be better off setting the entire server to default to SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED (please note: Don't do that) than implementing partitioning as a blind optimization approach.
Good luck.
Never stop learning, even if it hurts. Ego bruises are practically mandatory as you learn unless you've never risked enough to make a mistake.
For better assistance in answering your questions | Forum Netiquette
For index/tuning help, follow these directions. | Tally Tables
Twitter: @AnyWayDBA
June 21, 2013 at 1:30 am
Evil Kraig F (6/20/2013) ... SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED
The cure for all :hehe: