August 26, 2019 at 6:38 pm
Hello,
I am curious about all versions of SQL, I am currently working with 2014.
I understand the benefits of pre-sizing a database so that you are not paying the auto-growth penalty during operation hours.
I also understand that for optimal performance we want enough memory to cache the entire database.
If I allocate 50 GB to an initial mdf size and I only have 10 GB of actual data, what does SQL Server try to load into memory? Let's say I have 12 GB of memory allocated to my SQL instance; does SQL Server know to load just the 10 GB of data, or will it try to allocate the full 50 GB in memory? Could doing this cause page faults?
Then what happens to new data in this 50 GB of space? If I add some records, will SQL Server spread them across pages throughout the 50 GB, causing longer seek times (like spanning data across the entire surface of a drive), or does it keep them condensed?
As this may require a very long answer, I would be happy with a pointer to a resource or a good Google search that would cover this.
Thank You,
Ron
August 26, 2019 at 9:14 pm
The 50 GB is just how much disk space the database takes up. If you only have 10 GB of that space allocated to tables and indexes, then only that 10 GB can possibly be loaded into memory.
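You can see this distinction for yourself. Here's a sketch (run it in your own database; the documented `sp_spaceused` procedure and `sys.dm_os_buffer_descriptors` DMV exist in SQL Server 2014, but your numbers will obviously differ):

```sql
-- File size vs. space actually allocated to objects:
-- database_size includes the pre-sized (mostly empty) files,
-- while reserved reflects what tables and indexes actually use.
EXEC sp_spaceused;

-- How much of each database is currently held in the buffer pool.
-- Only pages that have been read are here - empty file space never is.
SELECT DB_NAME(database_id) AS database_name,
       COUNT(*) * 8 / 1024  AS buffer_pool_mb  -- pages are 8 KB each
FROM sys.dm_os_buffer_descriptors
GROUP BY database_id
ORDER BY buffer_pool_mb DESC;
```

If you run the second query right after a restart and then again after a workload, you'll see the buffer pool fill only with the pages that were actually requested.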
If your data grew beyond your 12 GB of RAM instead of staying at 10 GB, then as long as your SQL Server Maximum Server Memory setting is configured correctly, SQL Server will manage its own memory, evicting the least recently used pages from RAM to make room for new pages as they are requested.
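To check (or cap) that setting, you can use `sp_configure` like this - a sketch only, and the 10240 value below is purely an example, not a recommendation for your server:

```sql
-- 'max server memory (MB)' is an advanced option, so expose it first.
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;

-- Show the current Maximum Server Memory setting (value is in MB).
EXEC sp_configure 'max server memory (MB)';

-- Example only: cap the buffer pool at 10 GB, leaving headroom for the OS.
-- EXEC sp_configure 'max server memory (MB)', 10240;
-- RECONFIGURE;
```

The same setting is available in SSMS under Server Properties > Memory.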
As for the unallocated space, it will be allocated to objects in extents as the objects grow, so new records stay condensed rather than being scattered across the whole file. The details of how this works are documented here:
https://docs.microsoft.com/en-us/previous-versions/sql/sql-server-2008-r2/cc280360(v=sql.105)
https://www.sqlskills.com/blogs/paul/inside-the-storage-engine-anatomy-of-an-extent/