August 13, 2014 at 1:49 pm
Recently, I partitioned one of my largest tables into multiple monthly filegroups. The current month is attached to my "Active" table, and the older records are kept in the "Historical" table. I need an efficient way to pull records when I have a date range that can be spread across both tables.
August 13, 2014 at 1:56 pm
What about:
[Code]
SELECT .... FROM Active_Table WHERE ......
UNION ALL
SELECT .... FROM Archive_Table WHERE ......
[/Code]
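Filled in a little (the column names below are just placeholders for whatever you actually return), it might look something like this, with the same date-range filter applied to each table:
[Code]
-- Illustrative sketch only; column names are placeholders.
DECLARE @StartDate DATE = '2014-07-15',
        @EndDate   DATE = '2014-08-13';

SELECT OrderID, OrderDate, Amount
FROM   dbo.Active_Table
WHERE  OrderDate >= @StartDate
  AND  OrderDate <  DATEADD(DAY, 1, @EndDate)   -- half-open upper bound includes the whole end date
UNION ALL
SELECT OrderID, OrderDate, Amount
FROM   dbo.Archive_Table
WHERE  OrderDate >= @StartDate
  AND  OrderDate <  DATEADD(DAY, 1, @EndDate);
[/Code]
With an index on the date column in each table, both branches should be able to seek on the range.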
For best practices on asking questions, please read the following article: Forum Etiquette: How to post data/code on a forum to get the best help
August 13, 2014 at 1:58 pm
I was hoping I could avoid a UNION, but I guess it is unavoidable. Would a CTE be more effective?
August 13, 2014 at 2:00 pm
mqbk (8/13/2014)
I was hoping I could avoid a UNION, but I guess it is unavoidable. Would a CTE be more effective?
How would you use a CTE to accomplish this?
For best practices on asking questions, please read the following article: Forum Etiquette: How to post data/code on a forum to get the best help
August 13, 2014 at 2:04 pm
Not sure; that is why I asked. I guess if I did a UNION inside a CTE, it would be the same as doing it without one.
August 13, 2014 at 2:27 pm
I'm not an expert in using CTEs, but I don't see how they would help here.
For best practices on asking questions, please read the following article: Forum Etiquette: How to post data/code on a forum to get the best help
August 13, 2014 at 3:07 pm
I agree with Alvin. A CTE is nothing but chatter in this instance. If you used a CTE, it would look like this:
[Code]
WITH MyCTE AS
(
    SELECT .... FROM Active_Table WHERE ......
    UNION ALL
    SELECT .... FROM Archive_Table WHERE ......
)
SELECT .....
FROM MyCTE;
[/Code]
It produces the exact same thing but it takes a lot more typing to get there. 😀
_______________________________________________________________
Need help? Help us help you.
Read the article at http://www.sqlservercentral.com/articles/Best+Practices/61537/ for best practices on asking questions.
Need to split a string? Try Jeff Moden's splitter http://www.sqlservercentral.com/articles/Tally+Table/72993/.
Cross Tabs and Pivots, Part 1 – Converting Rows to Columns - http://www.sqlservercentral.com/articles/T-SQL/63681/
Cross Tabs and Pivots, Part 2 - Dynamic Cross Tabs - http://www.sqlservercentral.com/articles/Crosstab/65048/
Understanding and Using APPLY (Part 1) - http://www.sqlservercentral.com/articles/APPLY/69953/
Understanding and Using APPLY (Part 2) - http://www.sqlservercentral.com/articles/APPLY/69954/
August 13, 2014 at 3:18 pm
mqbk (8/13/2014)
Recently, I partitioned one of my largest tables into multiple monthly filegroups. The current month is attached to my "Active" table, and the older records are kept in the "Historical" table. I need an efficient way to pull records when I have a date range that can be spread across both tables.
You could, of course, add another table containing the key values for both historical and current records, with an indicator of which table each record is in. Identifying the records might be quicker, but the retrieval will be slower. Hence, my suggestion is to go with the UNION as previously suggested.
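Purely to illustrate the idea (the names and types below are made up, and this is not a recommendation over the UNION), such a lookup could look something like this:
[Code]
-- Hypothetical sketch of a key/location lookup; names and types are assumptions.
CREATE TABLE dbo.Record_Location
(
    RecordKey   BIGINT  NOT NULL PRIMARY KEY,
    RecordDate  DATE    NOT NULL,
    SourceTable CHAR(1) NOT NULL  -- 'A' = Active_Table, 'H' = Archive_Table
);

-- Find which table holds the rows for a given date range.
SELECT RecordKey, SourceTable
FROM   dbo.Record_Location
WHERE  RecordDate >= '2014-07-15'
  AND  RecordDate <  '2014-08-14';
[/Code]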
😎
August 13, 2014 at 7:18 pm
mqbk (8/13/2014)
Recently, I partitioned one of my largest tables into multiple monthly filegroups.
Now that you have your answer, I'm curious as to "How" you partitioned the table (Enterprise Edition partitioned table, partitioned view, or something "homegrown") and "What" benefit(s) of partitioning the table you were trying to derive.
--Jeff Moden
Change is inevitable... Change for the better is not.
August 14, 2014 at 6:10 am
Yeah, I was just wondering if a CTE could be used to improve the performance, but I see the consensus is that it will have no added value; therefore, I will not use it.
As for the question of what kind of partitioning I used: I am creating a filegroup for each month. I went with this route because they were having read and write performance issues on the table; the particular table in question was approximately 60 GB in size. The reason for that size was that they were trying to keep every transaction possible from the time the database was created. I had to implement some type of historical retention process because simple SELECT queries were a nightmare. At one time it would take nearly 15 minutes to run a SELECT for records just 30 days old. I am not attributing all of the performance improvement to the partitioning of the table, because along with it I also created better indexes and fixed the fragmentation on some of the key ones. It was actually a combination of things that helped improve performance.
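In rough terms, a filegroup-per-month layout like this is usually driven by a partition function and scheme along the lines of the sketch below (the filegroup names, boundary dates, and date column are placeholders, not my actual objects):
[Code]
-- Rough illustration only; filegroup names, boundary dates, and the column are placeholders.
-- RANGE RIGHT puts each boundary date into the partition on its right,
-- so each partition holds one calendar month.
CREATE PARTITION FUNCTION pf_MonthlyByDate (DATE)
AS RANGE RIGHT FOR VALUES ('2014-06-01', '2014-07-01', '2014-08-01');

CREATE PARTITION SCHEME ps_MonthlyByDate
AS PARTITION pf_MonthlyByDate
TO (FG_2014_05, FG_2014_06, FG_2014_07, FG_2014_08);

-- The table (or its clustered index) is then created ON ps_MonthlyByDate(TransactionDate).
[/Code]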
Also, we need a quick and clean way to handle record retention. In my opinion, it's less taxing on server resources to perform a switch to our historical table than to attempt some type of insert-and-purge process.
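As a rough sketch of what I mean by the switch (the partition number and table names below are placeholders; the tables have to have matching structure and the target partition must be empty for this to work):
[Code]
-- Illustrative only; partition number and table names are placeholders.
-- SWITCH is a metadata-only operation, which is why it is so cheap compared
-- to an INSERT ... DELETE approach. For a partition-to-partition switch the
-- two tables must be partitioned on aligned schemes.
ALTER TABLE dbo.Active_Table
SWITCH PARTITION 1 TO dbo.Historical_Table PARTITION 1;
[/Code]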
If there is a better, more efficient, and more resource-friendly way to deal with the issue, I am open to the idea.
August 14, 2014 at 7:31 am
mqbk (8/14/2014)
Yeah, I was just wondering if a CTE could be used to improve the performance, but I see the consensus is that it will have no added value; therefore, I will not use it. As for the question of what kind of partitioning I used: I am creating a filegroup for each month. I went with this route because they were having read and write performance issues on the table; the particular table in question was approximately 60 GB in size. The reason for that size was that they were trying to keep every transaction possible from the time the database was created. I had to implement some type of historical retention process because simple SELECT queries were a nightmare. At one time it would take nearly 15 minutes to run a SELECT for records just 30 days old. I am not attributing all of the performance improvement to the partitioning of the table, because along with it I also created better indexes and fixed the fragmentation on some of the key ones. It was actually a combination of things that helped improve performance.
Also, we need a quick and clean way to handle record retention. In my opinion, it's less taxing on server resources to perform a switch to our historical table than to attempt some type of insert-and-purge process.
If there is a better, more efficient, and more resource-friendly way to deal with the issue, I am open to the idea.
Understood on the filegroup thing, but I also need to know how many files there are per filegroup and what type of partitioning you used before I can make any suggestions. Also, what is the byte size and number of rows in a typical month?
It would also be nice to see one of the queries that was taking 15 minutes to run just to isolate the previous 30 days.
--Jeff Moden
Change is inevitable... Change for the better is not.