April 30, 2012 at 3:25 am
Our database is growing rapidly. After investigation we found there are a few tables holding a large volume of data. We need to partition these tables into multiple partitions by date range, year-wise. Can we add a partition dynamically later if required for a new year, or do we have to create all the partitions at the beginning?
April 30, 2012 at 4:05 am
Additional partitions can be added later, as and when required.
There is no need to create all the partitions at the beginning.
April 30, 2012 at 4:10 am
Hi, you can dynamically add new partitions. I am currently partitioning a database on the basis of year. For testing I have made two partitions, storing transactions of year 2010 and 2011, and some transactions for year 2012 are for now in the PRIMARY filegroup. As the 2012 data grows I will create a new partition dynamically. Below is an example:
-- Mark the filegroup that will hold the new partition
ALTER PARTITION SCHEME SchemeName
NEXT USED [NewFilegroupName];
-- Split the function at the new boundary; this updates the function
ALTER PARTITION FUNCTION PartitionFunctionName()
SPLIT RANGE (NewBoundaryValue);
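For completeness, the full sequence usually also includes creating the filegroup and its data file before marking it as NEXT USED. A minimal sketch of adding a 2012 partition; the database, filegroup, file path, and scheme/function names here are all made up for illustration:

```sql
-- 1. Create a filegroup and a file to hold the new year's data
ALTER DATABASE YearlyDB ADD FILEGROUP FG2012;
ALTER DATABASE YearlyDB
    ADD FILE (NAME = 'Data2012', FILENAME = 'C:\Data\Data2012.ndf')
    TO FILEGROUP FG2012;

-- 2. Tell the scheme which filegroup receives the next partition
ALTER PARTITION SCHEME PartitionYearScheme
    NEXT USED FG2012;

-- 3. Split the function at the new boundary, creating the partition
ALTER PARTITION FUNCTION PartitionYearFunction()
    SPLIT RANGE ('2012-01-01');
```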
Thanks
April 30, 2012 at 5:13 am
Thanks for the information. Can you please describe the partition function in more detail?
April 30, 2012 at 6:01 am
Hi Aakash,
I too want to create partitions, as the data in some tables is more than 50 lakh (5 million) rows.
I was about to create partitions but dropped the idea because the partitions had to be created manually.
Can you please elaborate on the automated process?
Regards
Durai Nagarajan
April 30, 2012 at 11:37 pm
Hi,
A partition function is simply what tells SQL Server which partition stores which range of values.
First we create a partition function, then we attach that function to a partition scheme. Please have a look at the simple example below.
1. CREATE PARTITION FUNCTION PartitionYearFunction_TranId (DECIMAL(19))
AS RANGE RIGHT FOR VALUES ('644026824', '646050551');
Here,
PartitionYearFunction_TranId = the name of the partition function; I am partitioning on the basis of TranId, whose datatype is DECIMAL(19).
RANGE RIGHT FOR VALUES ('644026824','646050551') = with RANGE RIGHT, each boundary value belongs to the partition on its right. So the first partition holds values up to '644026823', the second holds values from '644026824' through '646050550', and values of '646050551' and above go to the third partition (here the PRIMARY filegroup).
2. CREATE PARTITION SCHEME PartitionYearScheme
AS PARTITION PartitionYearFunction_TranId TO (CAT_FG2010, CAT_FG2011, [PRIMARY]);
This is the partition scheme to which we attach the partition function created above, and we pass it the filegroup names: (a) CAT_FG2010 stores transactions of year 2010, (b) CAT_FG2011 stores transactions of 2011, and [PRIMARY] stores the remaining transactions.
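Since the original question was about year-wise date ranges, the same pattern keyed on a datetime column looks like this. This is just a sketch; the function, scheme, table, and column names are all made up, and ALL TO ([PRIMARY]) keeps everything on one filegroup for testing:

```sql
-- Boundary values are the first day of each year; RANGE RIGHT means
-- each boundary date belongs to the partition on its right.
CREATE PARTITION FUNCTION pfYearly (DATETIME)
AS RANGE RIGHT FOR VALUES ('2010-01-01', '2011-01-01', '2012-01-01');

-- Map the four resulting partitions (pre-2010, 2010, 2011, 2012+)
-- to filegroups; ALL TO ([PRIMARY]) is fine for testing.
CREATE PARTITION SCHEME psYearly
AS PARTITION pfYearly ALL TO ([PRIMARY]);

-- Create the table on the scheme, partitioned by TranDate.
-- The clustered key must include the partitioning column.
CREATE TABLE dbo.Transactions
(
    TranId   DECIMAL(19) NOT NULL,
    TranDate DATETIME    NOT NULL,
    Amount   MONEY       NULL,
    CONSTRAINT PK_Transactions
        PRIMARY KEY CLUSTERED (TranDate, TranId)
) ON psYearly (TranDate);
```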
Please tell me if anything is confusing; I will try to resolve it.
Thanks,
Aakash
May 1, 2012 at 1:07 am
The article I refer to below was written by Kimberly Tripp (an MVP and known expert). It is best suited to SQL Server 2005, but it works well for later versions too. It covers many scenarios related to partitioning.
Partitioned Tables and Indexes in SQL Server 2005
http://msdn.microsoft.com/en-US/library/ms345146(v=SQL.90).aspx
May 1, 2012 at 5:07 am
The table has an ID column on which the clustered index is defined. There is also a field TRNdate, a datetime column with a nonclustered index. Are any index changes required to improve performance?
May 1, 2012 at 7:00 am
durai nagarajan (4/30/2012)
I too want to create partitions, as the data in some tables is more than 50 lakh (5 million) rows.
May I ask what you are trying to accomplish by partitioning the offending tables?
If the answer is anything around "because of performance issues" then table partitioning is not the solution to the issue.
What is - or should be - the purging or archive-and-purging strategy for the offending tables?
Do you expect them to grow forever?
Is this an OLTP or a DSS system?
_____________________________________
Pablo (Paul) Berzukov
Author of Understanding Database Administration available at Amazon and other bookstores.
Disclaimer: Advice is provided to the best of my knowledge, but no implicit or explicit warranties are provided. Since the advisor explicitly encourages testing any and all suggestions in a non-production test environment, the advisor should not be held liable or responsible for any actions taken based on the given advice.
May 1, 2012 at 9:08 am
Partitioning is a COMPLEX topic and I have seen LOTS of clients and forum posters get into trouble trying to implement it without truly knowing what they were doing or WHY. It was NOT built to improve performance (although it can in some cases) - it was built to help with management of VLDBs, especially ETL. Do yourself a HUGE favor and engage a qualified consultant to help you understand what you need and help get you there.
Best,
Kevin G. Boles
SQL Server Consultant
SQL MVP 2007-2012
TheSQLGuru on googles mail service
May 1, 2012 at 12:26 pm
PaulB-TheOneAndOnly (5/1/2012)
durai nagarajan (4/30/2012)
I too want to create partitions, as the data in some tables is more than 50 lakh (5 million) rows.
May I ask what you are trying to accomplish by partitioning the offending tables?
If the answer is anything around "because of performance issues" then table partitioning is not the solution to the issue.
What is - or should be - the purging or archive-and-purging strategy for the offending tables?
do you expect them to grow forever?
is this an OLTP or a DSS system?
This is just a normal table, and we have some reports built around it for each table.
I want to improve the performance of the reports.
Archive-and-purging strategy - I can't get it.
Regards
Durai Nagarajan
May 1, 2012 at 12:30 pm
durai nagarajan (5/1/2012)
PaulB-TheOneAndOnly (5/1/2012)
durai nagarajan (4/30/2012)
I too want to create partitions, as the data in some tables is more than 50 lakh (5 million) rows.
May I ask what you are trying to accomplish by partitioning the offending tables?
If the answer is anything around "because of performance issues" then table partitioning is not the solution to the issue.
What is - or should be - the purging or archive-and-purging strategy for the offending tables?
do you expect them to grow forever?
is this an OLTP or a DSS system?
This is just a normal table, and we have some reports built around it for each table.
I want to improve the performance of the reports.
Forget about partitioning; partitioning wouldn't improve performance.
To improve performance, focus on indexing strategy and the quality of the queries.
durai nagarajan (5/1/2012)
Archive-and-purging strategy - I can't get it.
... meaning, get rid of the data by either destroying it or moving it to an archive table.
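As a minimal archive-and-purge sketch (the table and column names here are hypothetical): delete the old rows and capture them into the archive table in a single statement, so the two operations cannot drift apart:

```sql
-- Delete rows older than one year and archive them atomically
DELETE FROM dbo.ActivityLog
OUTPUT deleted.LoginName, deleted.Activity, deleted.EventDate
INTO dbo.ActivityLogArchive (LoginName, Activity, EventDate)
WHERE EventDate < DATEADD(YEAR, -1, GETDATE());
```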
_____________________________________
Pablo (Paul) Berzukov
Author of Understanding Database Administration available at Amazon and other bookstores.
Disclaimer: Advice is provided to the best of my knowledge, but no implicit or explicit warranties are provided. Since the advisor explicitly encourages testing any and all suggestions in a non-production test environment, the advisor should not be held liable or responsible for any actions taken based on the given advice.
May 1, 2012 at 12:43 pm
I am currently deploying the archive strategy on some tables and changing the reports accordingly, as I am moving the data to a different DB. This is an activity-monitor table.
Since I have to pull the data based on login, activity, and datetime, and the data load is also high, the performance of the reports is not as expected despite the indexes.
Is there any other workaround apart from partitioning?
Regards
Durai Nagarajan
May 1, 2012 at 12:50 pm
Well-written code and indexes that support the queries.
Partitioning will not improve performance; it's not a magic bullet, and it can even make queries slower.
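To make that concrete for an activity-monitor report that filters by login, activity, and datetime, a supporting index might look like the sketch below. The table and column names are assumptions, not taken from the actual schema:

```sql
-- Key columns match the report's filter; INCLUDE covers the
-- selected columns so the query never touches the base table.
CREATE NONCLUSTERED INDEX IX_ActivityLog_Login_Date
ON dbo.ActivityLog (LoginName, EventDate)
INCLUDE (Activity);
```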
Gail Shaw
Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability
May 1, 2012 at 3:11 pm
durai nagarajan (5/1/2012)
I am currently deploying the archive strategy on some tables and changing the reports accordingly, as I am moving the data to a different DB. This is an activity-monitor table. Since I have to pull the data based on login, activity, and datetime, and the data load is also high, the performance of the reports is not as expected despite the indexes.
Is there any other workaround apart from partitioning?
There are almost ALWAYS ways (often very simple to implement, too!) to make queries run faster. But we cannot help you without details of the tables, data size and distribution, the query used, the actual query plan, etc. Even with all of that it may be too complex for a free forum, and you may be better off hiring a performance tuning professional to get things fixed up quickly.
Best,
Kevin G. Boles
SQL Server Consultant
SQL MVP 2007-2012
TheSQLGuru on googles mail service