July 20, 2016 at 11:13 am
Hello Folks,
What are the things that should be considered when provisioning a DB/instance/server? We are looking to follow best practices while also trying to get some cost savings.
* Selecting physical/virtual for a new database instance
* Adding a new database to an existing SQL instance
* Selecting the right version of SQL Server
* Adding resources to the server (CPU/memory, ...)
Please share your thoughts on this.
Thanks!
July 23, 2016 at 11:03 am
Hi,
For cost savings you should go with a virtual machine, but the CPU and memory depend on your application and database usage.
The SQL Server version also depends on the application and how critical the data is. You can select the best fit from Enterprise, Standard and the other editions of SQL Server.
For best practice, you should have the following drive configuration:
* Drive for binary files - you can allocate 5 GB for it
* Drive for the SQL data files - depends on the application requirement (let's say the size is X GB)
* Drive for the SQL log files - 50% of X GB
* Drive for tempdb - 40% of X GB (again, this depends on application usage; you can also have separate drives for the tempdb data and log files)
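As a minimal sketch of the layout above (the drive letters, sizes and database name are assumptions for illustration, with X = 200 GB):

```sql
-- Hypothetical layout: S:\ = data drive, L:\ = log drive
CREATE DATABASE AppDB
ON PRIMARY
    ( NAME = AppDB_data,
      FILENAME = 'S:\SQLData\AppDB.mdf',
      SIZE = 200GB, FILEGROWTH = 10GB )
LOG ON
    ( NAME = AppDB_log,
      FILENAME = 'L:\SQLLogs\AppDB.ldf',
      SIZE = 100GB, FILEGROWTH = 5GB );  -- roughly 50% of the data size
```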
Apart from this, you should have two service accounts to run the SQL Server services: one for the SQL Server service and one for the Agent service.
After installation you should enable the DAC login on the system.
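If you do need to intervene (for example to allow the DAC over the network rather than just locally), the usual switch is:

```sql
-- Allow remote admin connections (the DAC is local-only by default)
EXEC sp_configure 'remote admin connections', 1;
RECONFIGURE;
```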
Create the logins required by the application team with the appropriate rights.
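A minimal sketch of such a login, assuming a hypothetical database `AppDB` and login name `AppSvc` (the rights granted here are just an example; tailor them to what the application actually needs):

```sql
-- Hypothetical application login with minimal rights
CREATE LOGIN AppSvc WITH PASSWORD = 'StrongPasswordHere!';  -- or FROM WINDOWS for a domain account
USE AppDB;
CREATE USER AppSvc FOR LOGIN AppSvc;
ALTER ROLE db_datareader ADD MEMBER AppSvc;
ALTER ROLE db_datawriter ADD MEMBER AppSvc;
```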
Hope this helps. 🙂
July 23, 2016 at 5:33 pm
There's a lot of debate to be had regarding the benefits of physical vs virtual for SQL Server. Personally I'd choose virtual 95% of the time. You will get slightly better performance from a physical server, but given the advances in technology the difference will be minor. There are many benefits to virtualisation, including the speed of spinning up a new VM compared to a physical server, easier resource allocation/monitoring, cost, and easier migration if you ever need to move it. The remaining 5% for me would be an application that is very resource intensive, where you might benefit from the slight performance increase a physical server provides.
The version and resources depend very much on your requirements. If you require the extra features of Enterprise then you'll need it; otherwise go for Standard. Disk space will very much depend on how big your files are. However, make sure you have enough for potential future growth. I would however suggest keeping your data, logs and tempDB on separate physical drives.
In terms of memory you can never really have too much. SQL Server is very greedy when it comes to memory, and it's a good idea to give it as much as you can spare. The ideal would be to have as much memory as the total size of all your data files, but depending on the size and function of the databases this might not be realistic, or might be overkill. Either way, be generous.
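Whatever figure you settle on, it's worth capping it explicitly so SQL Server leaves room for the OS. A sketch, assuming a hypothetical 32 GB server with 4 GB held back (adjust both numbers for your host):

```sql
-- Cap SQL Server at 28 GB, leaving headroom for the OS and other processes
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'max server memory (MB)', 28672;
RECONFIGURE;
```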
As for the database itself, firstly make sure you name it something logical and easy to understand; I'd suggest something like "ApplicationName_Environment". Follow the standard rule of giving the minimal amount of access necessary for a user to complete their work, and set up a new schema for security purposes, along with roles to control these permissions. Also make sure to set the sizing properly (size needed + 20% is a good starting point) and keep on top of it as it grows.
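The schema-plus-roles idea can be sketched like this, assuming a hypothetical `CRM_Production` database with an `app` schema (names and the exact permissions granted are illustrative only):

```sql
-- Dedicated schema plus a role to hold the permissions
USE CRM_Production;
GO
CREATE SCHEMA app AUTHORIZATION dbo;
GO
CREATE ROLE app_readwrite;
GRANT SELECT, INSERT, UPDATE, DELETE ON SCHEMA::app TO app_readwrite;
-- Then add application users to the role rather than granting rights directly
```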
I'd suggest getting a standard 'new instance' and 'new database' document created and make sure anyone fulfilling one of these tasks follows it. It will help keep your entire estate coherent and ensure nothing is missed.
Good luck.
July 24, 2016 at 7:17 am
R.KU.PA_DBA (7/23/2016)
Drive for binary files - you can allocate 5 GB for it
Bear in mind this is per instance; if you're planning multiple instances per server, the binary drive size will need to reflect this.
R.KU.PA_DBA (7/23/2016)
Drive for tempdb - 40% of X GB (again, this depends on application usage; you can also have separate drives for the tempdb data and log files)
This is not an exact science of specifying 40% pro rata; it really depends on the type of system you're deploying.
Specifically, use of AlwaysOn availability groups with readable secondaries, or of snapshot isolation or read committed snapshot isolation, will generally push up the required tempdb size.
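One way to act on that is to presize tempdb up front rather than relying on autogrow. A sketch, assuming a hypothetical dedicated T:\ drive and four equally sized data files (file count and sizes are illustrative; match them to your workload):

```sql
-- Presize tempdb on a dedicated drive; tempdev/templog are the default logical names
ALTER DATABASE tempdb MODIFY FILE (NAME = tempdev, SIZE = 20GB);
ALTER DATABASE tempdb ADD FILE (NAME = tempdev2, FILENAME = 'T:\TempDB\tempdev2.ndf', SIZE = 20GB);
ALTER DATABASE tempdb ADD FILE (NAME = tempdev3, FILENAME = 'T:\TempDB\tempdev3.ndf', SIZE = 20GB);
ALTER DATABASE tempdb ADD FILE (NAME = tempdev4, FILENAME = 'T:\TempDB\tempdev4.ndf', SIZE = 20GB);
ALTER DATABASE tempdb MODIFY FILE (NAME = templog, SIZE = 10GB);
```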
R.KU.PA_DBA (7/23/2016)
After installation you should enable the DAC login on the system.
DAC is enabled by default; only on a clustered instance do you need to intervene manually.
Gavin Fuller (7/23/2016)
You will get slightly better performance from a physical server, but given the advances in technology the difference will be minor.
This is not completely true and I wouldn't rely on it. Virtualisation has an overhead (allow 10-15%), and the more the host is oversubscribed, the worse things will get.
Gavin Fuller (7/23/2016)
I would however suggest keeping your data, logs and tempDB on separate physical drives.
In a virtual infrastructure this can be a futile exercise, especially if your virtual disks are on the same datastore!
Also bear in mind that virtual disks do not handle random I/O too well. You can use Raw Device Mappings for the virtual disk, but this makes management harder for VI admins.
Gavin Fuller (7/23/2016)
The ideal would be to have as much memory as the total size of all your data files, but depending on the size and function of the databases this might not be realistic, or might be overkill. Either way, be generous.
It's more realistic to provide RAM equivalent to the size of the largest object in your largest database, plus an extra allowance for the procedure cache, extended procedure execution and CLR.
Your data file drive may be 1TB in size but your databases may only consume 600GB; you may also have extra spare for index maintenance tasks, etc.
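To size against the largest object rather than the drive, something like this (run per database) gives a starting figure:

```sql
-- Approximate size of the largest object in the current database
SELECT TOP (1)
       OBJECT_NAME(ps.object_id)          AS object_name,
       SUM(ps.used_page_count) * 8 / 1024 AS used_mb  -- 8 KB pages to MB
FROM sys.dm_db_partition_stats AS ps
GROUP BY ps.object_id
ORDER BY used_mb DESC;
```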
-----------------------------------------------------------------------------------------------------------
"Ya can't make an omelette without breaking just a few eggs" 😉