March 16, 2018 at 1:23 am
Hi,
I have gone through the limitations of SQL Server in the official documentation, but could you please tell me the table size limit for a SQL Server database, or whether table size is limited at all?
Thanks,
Aneel
March 16, 2018 at 3:16 am
There is no limit to table size apart from the amount of disk space available.
There is a limit of 15,000 partitions per table or index (Enterprise edition), I believe.
The limit on the number of objects per database is just over 2 billion.
March 16, 2018 at 3:25 am
Hi,
Thanks for the reply. Actually, we are also looking at the performance aspects: can a 168 TB table be handled by SQL Server, or do we have to look for another solution? We have 9 columns: VehicleNumber, Datetime, Status, Longitude, Latitude, Place, Direction, Speed, Mileage. So can a single SQL Server table handle 168 TB of data? Looking forward to the response. We have created the required indexes and partitioning as well.
Thanks,
Aneel
March 16, 2018 at 3:42 am
That is a bit above my experience... SQL Server will happily hold a table of that size, but whether you can actually use it efficiently depends on your application, code, etc.
March 16, 2018 at 3:54 am
What hardware are you planning to use?
Will this be hosted or local?
I assume you won't be using Azure or another SaaS offering?
March 16, 2018 at 3:55 am
How are you currently using the database, and how is it performing?
How much I/O do you generate during peak usage?
What are your recovery requirements?
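If the peak I/O figure is hard to pin down, something along these lines gives a rough per-file picture (a sketch only; the counters are cumulative since the last restart, so sample twice around your peak window and take the difference):

-- Rough per-file I/O picture from the virtual file stats DMV.
-- Counters are cumulative since the last SQL Server restart.
SELECT DB_NAME(vfs.database_id)      AS database_name,
       mf.physical_name,
       vfs.num_of_reads,
       vfs.num_of_bytes_read,
       vfs.num_of_writes,
       vfs.num_of_bytes_written,
       vfs.io_stall_read_ms,
       vfs.io_stall_write_ms
FROM sys.dm_io_virtual_file_stats(NULL, NULL) AS vfs
JOIN sys.master_files AS mf
     ON mf.database_id = vfs.database_id
    AND mf.file_id     = vfs.file_id
ORDER BY vfs.num_of_bytes_read + vfs.num_of_bytes_written DESC;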
March 16, 2018 at 4:23 am
Hi,
We are using SQL Server and not any other product. We have vehicle tracking data which is generated and archived on a daily basis. The workload will be OLAP and sometimes OLTP, but mostly OLAP. The data is real time, and peak load runs from morning till evening. We need to extract statistical and data reports. The I/O generated is huge, as we receive real-time data from 100,000 units with different reporting parameters.
For the hardware, we will have a SAN and mostly VM servers. The recovery method is full/differential backups to the SAN as well as to tape.
Thanks.
Aneel
March 16, 2018 at 4:26 am
Hi,
By the way, do you have experience of handling data like this? Are you speaking from experience? Do you have any example of this sort of data?
Thanks.
Aneel
March 16, 2018 at 4:37 am
The largest database I have worked with was only 2 TB,
but I have worked mainly in OLTP environments.
I would not mix OLTP and OLAP queries on the same database at this size... have you thought about creating a DW containing all the archive data and keeping only the active data in the live database?
100,000 units seems low compared to the amount of data... for example, we had 250,000 customers in a financial banking app and only had 2 TB of data including history.
How many rows are in the main table's active partition?
You should speak to a data warehouse expert and see what they think... but seriously, you cannot keep that much data in an active database and expect a high level of performance.
Also, trying to restore the full backup would take many hours... how big is the active partition?
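If you're not sure of those numbers, a query along these lines shows rows and used space per partition (a sketch only; the table name is an assumption, not your actual schema):

-- Rows and used space per partition of the main table (name assumed).
SELECT ps.partition_number,
       ps.row_count,
       ps.used_page_count * 8 / 1024 AS used_mb
FROM sys.dm_db_partition_stats AS ps
WHERE ps.object_id = OBJECT_ID('dbo.VehiclePosition')   -- assumed table name
  AND ps.index_id IN (0, 1)                             -- heap or clustered index
ORDER BY ps.partition_number;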
March 29, 2018 at 3:46 am
That is a very large table that you intend to manage. SQL Server can definitely handle it, depending on the underlying hardware thrown at it.
You might want to consider using columnstore indexes if you are on SQL Server 2016. Of course, performance also depends on other design parameters. There is a detailed report published by Microsoft from their collaboration with Intel, demonstrating the performance of SQL Server 2016 on Intel hardware against a 100 TB database. I believe they used a single table in that experiment, which might be very helpful in your case (links below).
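As a rough idea of what that looks like, here is a minimal sketch (the table name and column types are assumptions based on the columns listed earlier, not the actual schema):

-- Assumed table shape for the vehicle tracking data.
CREATE TABLE dbo.VehiclePosition
(
    VehicleNumber   varchar(20)   NOT NULL,
    ReadingDatetime datetime2(0)  NOT NULL,
    [Status]        tinyint       NOT NULL,
    Longitude       decimal(9,6)  NOT NULL,
    Latitude        decimal(9,6)  NOT NULL,
    Place           varchar(100)  NULL,
    Direction       smallint      NULL,
    Speed           decimal(5,1)  NULL,
    Mileage         decimal(10,1) NULL
);
GO
-- Clustered columnstore: the whole table is stored column-wise, which usually
-- compresses telemetry like this heavily and speeds up scan-heavy reporting.
CREATE CLUSTERED COLUMNSTORE INDEX CCI_VehiclePosition
    ON dbo.VehiclePosition;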
https://blogs.technet.microsoft.com/dataplatforminsider/2016/06/01/sql-server-2016-is-generally-available-today/
https://www.intel.com/content/dam/www/public/us/en/documents/white-papers/microsoft-sql-database-analytics-paper.pdf
-Hari Mindi
March 29, 2018 at 6:06 am
There is practically no limit to the logical size of a SQL Server table. However, the underlying hardware, data model, and configuration will limit what is practically possible. 168 TB is very, very large, but it has been done. As someone mentioned earlier, a clustered columnstore will scale much better at this size than the traditional row-store table type. Columnstore will typically compress at a ratio of about 10:1.
Also, to keep the table manageable in terms of backups and index maintenance, I'd recommend implementing a partitioning scheme where each partition is no more than about 20 GB. For example, you can back up at the partition level and also rebuild/reorganize indexes at the single-partition level. If you partition on an incrementing key, where only the latest partition is writable and the earlier partitions are archival, then that 168 TB table essentially becomes a 20 GB table in terms of daily maintenance and backups.
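A minimal sketch of that pattern, assuming monthly partitions on the reading date (all object names, datatypes, and boundary values here are illustrative, not the actual schema):

-- Monthly partition function on the reading date (boundary dates assumed).
CREATE PARTITION FUNCTION pf_VehicleMonth (datetime2(0))
    AS RANGE RIGHT FOR VALUES ('2018-01-01', '2018-02-01', '2018-03-01');
GO
-- ALL TO ([PRIMARY]) keeps the sketch short; in practice one filegroup per
-- partition allows filegroup-level backups and piecemeal restore of the active slice.
CREATE PARTITION SCHEME ps_VehicleMonth
    AS PARTITION pf_VehicleMonth ALL TO ([PRIMARY]);
GO
CREATE TABLE dbo.VehiclePositionHistory
(
    VehicleNumber   varchar(20)   NOT NULL,
    ReadingDatetime datetime2(0)  NOT NULL,
    Speed           decimal(5,1)  NULL
) ON ps_VehicleMonth (ReadingDatetime);
GO
CREATE CLUSTERED INDEX CIX_VehiclePositionHistory
    ON dbo.VehiclePositionHistory (ReadingDatetime, VehicleNumber)
    ON ps_VehicleMonth (ReadingDatetime);
GO
-- Maintenance then touches one slice at a time instead of the whole table.
ALTER INDEX CIX_VehiclePositionHistory ON dbo.VehiclePositionHistory
    REBUILD PARTITION = 4;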
I'm curious, how did you arrive at the estimate of 168 TB?
"Do not seek to follow in the footsteps of the wise. Instead, seek what they sought." - Matsuo Basho
March 29, 2018 at 6:02 pm
Aneel Amin - Friday, March 16, 2018 3:25 AM
Hi,
Thanks for the reply. Actually, we are also looking at the performance aspects: can a 168 TB table be handled by SQL Server, or do we have to look for another solution? We have 9 columns: VehicleNumber, Datetime, Status, Longitude, Latitude, Place, Direction, Speed, Mileage. So can a single SQL Server table handle 168 TB of data? Looking forward to the response. We have created the required indexes and partitioning as well.
Thanks,
Aneel
Let's first make sure of what you mean because a lot of people make this mistake.
Do you really mean "TB" as in "terabytes" (i.e., a million million) or do you really mean "GB" as in "gigabytes" (a thousand million)? 1 TB is a 1 followed by 12 zeros. 1 GB is a 1 followed by 9 zeros.
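If the table (or a representative subset of it) already exists, a quick sanity check on the real size is something like this (the table name is an assumption):

-- Reports reserved, data, and index space for the table, in KB.
EXEC sp_spaceused N'dbo.VehiclePosition';   -- assumed table name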
--Jeff Moden
Change is inevitable... Change for the better is not.