Limits in Containers
Docker gives you the ability to control a container’s access to CPU, memory, and network and disk I/O using resource constraints, sometimes called limits. You define limits as parameters when creating containers. In its default configuration, a container has no resource constraints on its access to the host operating system’s resources. This post will look at how to configure resource constraints in Docker and at how SQL Server sees the resources when CPU and memory resource constraints are in place.
In this post, we will focus on CPU and memory limits using the --cpus <value> and --memory <value> parameters. These provide the essential resource controls for access to the CPU and memory available on the host. If you want to dive deep into the various types of resource constraints available in Docker, check out this page https://docs.docker.com/config/containers/resource_constraints/. The goal of this post is to introduce the SQL Server DBA to resource constraints in containers.
CPU Limits
CPU Limits control access to the CPU cycles available on the host system. If no CPU Limits are defined, the container will have full access to all of the CPU cycles across all CPUs available. A CPU Limit controls a container’s access to the CPU by influencing the host operating system’s scheduler.
To create a container with a CPU limit, use the --cpus <value> parameter. The scheduler provides the running container access to at most the number of CPUs specified in the --cpus parameter, balancing the workload across all of the CPUs in the system. Processes inside the container can see all of the CPUs in the host but cannot consume more CPU than defined in the CPU limit.
For example, in a system with four CPUs, when you define a CPU limit on a container such as --cpus=1, the processes running in the container will see all four CPUs and will be able to consume CPU cycles on all four CPUs, but in total they cannot consume more than one CPU’s worth of cycles from the system.
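One way to see this in action (a sketch, assuming a Linux host using cgroup v1, where Docker expresses --cpus as a CFS quota over a 100ms period) is to read the cgroup values from inside a throwaway container:

```shell
# Start a throwaway container with a 1-CPU limit and inspect the CFS quota.
# On a cgroup v1 host, --cpus=1 shows up as a quota of 100000 over a
# period of 100000 microseconds (i.e., one CPU's worth of each 100ms slice).
docker run --rm --cpus=1 ubuntu:20.04 \
  sh -c 'cat /sys/fs/cgroup/cpu/cpu.cfs_quota_us /sys/fs/cgroup/cpu/cpu.cfs_period_us; nproc'
```

On a cgroup v2 host the equivalent value lives in /sys/fs/cgroup/cpu.max instead. Note that nproc still reports all of the host’s CPUs — the quota limits consumption, not visibility.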
Memory Limits
On the memory side of things, when no memory limit is specified in a container’s configuration, all of the memory from the host operating system is available to the container.
To create a container with a memory limit, use the --memory <value> parameter. When defined, the container can use only the amount of memory specified by the limit.
For example, in a system with 16GB of memory, when you define a memory limit on a container such as --memory 4GB, the processes running in the container will see just 4GB of memory available.
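As with the CPU limit, you can read the memory limit back from the cgroup inside a throwaway container (a sketch, again assuming a cgroup v1 host):

```shell
# Start a throwaway container with a 4GB memory limit and read it back
# from the cgroup. On a cgroup v1 host the limit appears in
# memory.limit_in_bytes (4GB = 4294967296 bytes).
docker run --rm --memory 4GB ubuntu:20.04 \
  cat /sys/fs/cgroup/memory/memory.limit_in_bytes
```

On a cgroup v2 host, look at /sys/fs/cgroup/memory.max instead.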
Using SQL Server with CPU and Memory Limits in Docker
In this section, we will start up some SQL Server containers in two configurations: first without limits, then with limits. We will then examine how SQL Server sees the underlying hardware and configures itself based on that hardware configuration. The system I am using for testing is an Ubuntu Linux virtual machine configured with 4 CPUs and 16GB RAM, running Docker as the container runtime.
First, let’s start up a container with no resource constraints in place.
docker run \
  --env 'ACCEPT_EULA=Y' \
  --env 'MSSQL_SA_PASSWORD=S0methingS@Str0ng!' \
  --name 'sqldemo1' \
  --hostname 'sqldemo1' \
  --publish 31433:1433 \
  --detach mcr.microsoft.com/mssql/server:2019-CU9-ubuntu-18.04
And once that SQL Server container is up and running, let’s query the CPU and memory configuration of the container with this code.
sqlcmd -S localhost,31433 -U sa \
  -Q "SELECT cpu_count, scheduler_count, physical_memory_kb / 1024 / 1024 as [MemoryGB] FROM sys.dm_os_sys_info" \
  -P 'S0methingS@Str0ng!'
…and below is the output of that query. With no resource constraints in place, the container has access to all four CPUs; SQL Server creates four schedulers and sees 12GB of RAM.
cpu_count scheduler_count MemoryGB
----------- --------------- --------------------
4 4 12
You might be thinking…but Anthony…you told me that when there are no resource constraints/limits in place for memory, the container (and its processes) should see all of the memory…and it does. This container is a Linux-based container running SQL Server on Linux. And SQL Server on Linux, due to the architecture of SQLPAL, in its default configuration sees 80% of the RAM from the host OS. 80% of 16GB is approximately 12.8GB, which the integer division in our query displays as 12. The container does have access to all 16GB of RAM; it’s just that SQL Server on Linux can access only 80% of the available memory. Although often not needed, this setting can be changed by using mssql-conf.
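The 80% behavior is easy to sanity-check with a quick calculation. A minimal sketch — sql_visible_mb is just a hypothetical helper, and 80% is SQLPAL’s default, adjustable via the memory.memorylimitmb setting in mssql-conf:

```shell
# Estimate how much memory SQL Server on Linux will see by default:
# 80% of the memory visible to the container (integer math, in MB).
sql_visible_mb() {
  local total_mb=$1
  echo $(( total_mb * 80 / 100 ))
}

sql_visible_mb 16384   # 16GB host, no limit   -> 13107 MB (~12.8GB)
sql_visible_mb 4096    # 4GB container limit   -> 3276 MB  (~3.2GB)
```

The second result lines up with what we will see later from the container started with --memory 4GB.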
To examine resource consumption from the container’s perspective, you can use docker stats. In the output below, the MEM USAGE / LIMIT column shows the container’s current memory usage and the configured limit…it’s almost 16GB, which is the total amount of memory available on the host, since there is no limit currently configured on this container. So even though SQL Server can use only about 12GB, the container is providing 16GB.
CONTAINER ID NAME CPU % MEM USAGE / LIMIT MEM % NET I/O BLOCK I/O PIDS
09f3b75cbb5d sqldemo1 2.35% 798.6MiB / 15.64GiB 4.99% 940kB / 251kB 25MB / 90.1MB 165
Now, let’s look at how to create containers with resource constraints.
Let’s start another container with a 4GB memory limit and a 1 CPU limit using the parameters --memory 4GB and --cpus 1.
docker run \
  --env 'ACCEPT_EULA=Y' \
  --env 'MSSQL_SA_PASSWORD=S0methingS@Str0ng!' \
  --name 'sqldemo2' \
  --hostname 'sqldemo2' \
  --memory 4GB \
  --cpus 1 \
  --publish 31434:1433 \
  --detach mcr.microsoft.com/mssql/server:2019-CU9-ubuntu-18.04
Now that CPU and memory limits are in place, let’s again query the CPU and memory configuration as SQL Server sees things, and now we get an interesting story.
sqlcmd -S localhost,31434 -U sa \
  -Q "SELECT cpu_count, scheduler_count, physical_memory_kb / 1024.0 / 1024.0 as [MemoryGB] FROM sys.dm_os_sys_info" \
  -P 'S0methingS@Str0ng!'
In the output below, the CPU limit on this container is one, yet SQL Server sees all of the CPUs and creates a scheduler for each CPU it sees. A CPU limit influences how the host operating system schedules the container’s processes on the host’s CPUs; it does not limit access to or impact the visibility of the CPUs. With a CPU limit of one, SQL Server worker threads can be scheduled onto any of the CPUs in the system, but in total they cannot consume more than one CPU’s worth of cycles from the system. On the memory side of things, with a 4GB memory limit in place, SQL Server sees approximately 3.2GB of memory, or 80% of the 4GB memory limit.
cpu_count scheduler_count MemoryGB
----------- --------------- -----------------------------------
4 4 3.200195312500
Setting Container Limits at Runtime
In addition to defining resource constraints at container startup, you can set them while the container is up and running. For example, if you need to constrain access to CPU for a SQL Server that’s burning up too much CPU, you can set a limit using docker update sqldemo2 --cpus .5. This changes the CPU limit for the container to .5, limiting its access to at most half a CPU’s worth of cycles available on the host. You can change the memory setting the same way, but SQL Server will not see the change at runtime; you will need to restart the SQL Server process for it to see the hardware change.
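A sketch of a runtime memory change against the sqldemo2 container from earlier — note that on many systems docker update requires --memory-swap to be adjusted alongside --memory when a swap limit is in effect:

```shell
# Lower the memory limit on the running container to 2GB.
# --memory-swap is set to the same value so no additional swap is allowed.
docker update sqldemo2 --memory 2GB --memory-swap 2GB

# SQL Server won't see the new memory limit until its process restarts.
docker restart sqldemo2
```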
Why Does This Matter?
SQL Server and Memory Limits
Running containers without resource constraints isn’t a good thing. Think about a scenario where you have several containers running on a system. They will compete for the resources on the system. Of particular concern is memory. On Linux systems, if the operating system determines the system is running out of memory, processes will be killed to protect the overall health of the system. And one of those killed processes might be one of your SQL Server containers. Let’s dig a little deeper on that topic…
Suppose you create a container running SQL Server with no memory limit set and the instance configuration max server memory isn’t set to a reasonable value. SQL Server in its default configuration will consume nearly all available memory, filling up its caches. In this scenario, if you’re running more than one SQL Server container, each will think it has access to all of the memory available in the system, and they will start competing for memory. Eventually, the host will run out of memory, and the OOM killer will do its thing and start killing processes.
The best way to protect against this is to set memory limits on containers, giving each container an appropriate amount of memory for the workload required (and don’t forget that SQL Server on Linux will see only 80% of that), and to set max server memory in each of your instances.
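As a sketch, for the 4GB-limited sqldemo2 container above you might cap max server memory at something like 2560MB, leaving headroom inside the ~3.2GB that SQLPAL exposes — the exact value is an assumption here and depends on your workload:

```shell
# Cap max server memory at 2560MB inside the 4GB-limited container (sqldemo2).
# 'max server memory (MB)' is an advanced option, so enable those first.
sqlcmd -S localhost,31434 -U sa -P 'S0methingS@Str0ng!' \
  -Q "EXEC sp_configure 'show advanced options', 1; RECONFIGURE;
      EXEC sp_configure 'max server memory (MB)', 2560; RECONFIGURE;"
```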
SQL Server and CPU Limits
On the CPU side, things are a little less sinister. The operating system’s scheduler will balance the workload across all of the CPUs. And in a multi-container scenario, it will continue to do this, distributing the workload across all of the CPUs available on the host. The primary use of CPU limits is to constrain the workload of a particular container when that’s needed to protect the overall performance of the host.
But your SQL Server workload might expect a specific performance profile based on the CPU topology, and it might not get that performance out of a container with CPU limits in place. SQL Server creates a Scheduler for each CPU it sees and will try to schedule Tasks (via Workers, Threads, and Schedulers) onto all of the CPUs available in the system. In our scenario above, with a CPU limit of one on a system with four CPUs, for serial plans SQL Server has four Schedulers available to assign multiple Tasks to, thinking it has access to all four CPUs when it really has the clock cycles of just one. The workload will bottleneck on CPU. For plans that go parallel, the query optimizer thinks it has access to all four CPUs and creates plans with a DOP of 4 (if MAXDOP on the instance is the default, which is 0); this too will assign Tasks to all four CPUs and bottleneck on CPU. In both scenarios the CPU bottlenecks will surface as CPU waits. For parallel plans, the solution is to set MAXDOP to a value less than or equal to the limit defined on the container, or lower if that is appropriate for your workload. But what should you do for the serial workload? I’m still researching this.
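Setting MAXDOP to match the CPU limit is an instance-level change; a sketch against the one-CPU sqldemo2 container from earlier:

```shell
# Set the instance-level max degree of parallelism to 1 to match --cpus 1.
# 'max degree of parallelism' is an advanced option, so enable those first.
sqlcmd -S localhost,31434 -U sa -P 'S0methingS@Str0ng!' \
  -Q "EXEC sp_configure 'show advanced options', 1; RECONFIGURE;
      EXEC sp_configure 'max degree of parallelism', 1; RECONFIGURE;"
```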
Now there’s another wrinkle here that I haven’t been able to dig into yet, and that’s NUMA. My theory is that the NUMA topology will still be exposed to SQL Server and it will create memory nodes accordingly. I’ll work on that blog post sometime in the near future.
So I encourage you to always define resource constraints for your containers with the appropriate amount of resources needed to run the workload, specifically CPU and memory. Setting resource constraints will provide stability for the overall system, especially in multi-container scenarios. The key idea here is understanding how the resource constraints impact your workload and how SQL Server sees the underlying hardware from your host.
What about running SQL Server in Kubernetes?
The same theory applies to running SQL Server in Kubernetes because Kubernetes starts containers on Nodes in a Cluster. You may have heard me say that at a conference or two in the past 🙂 In the Pod Spec below, there is a limits field where I’ve defined limits for CPU and memory. In this configuration, the container running in the Pod will have access to at most one CPU worth of CPU cycles and only 4GB of RAM. Also in this Pod Spec is a request. In Kubernetes, a Request is a guaranteed amount of resources. For a Pod to start, the amount of resources defined in its request must be available on a Node in the cluster. So for the Pod Spec below, a Node in the cluster must have at least one CPU worth of CPU cycles and at least 2GB of RAM available for the Pod to start.
containers:
- name: mssql
  image: 'mcr.microsoft.com/mssql/server:2019-CU9-ubuntu-18.04'
  resources:
    requests:
      cpu: 1
      memory: 2Gi
    limits:
      cpu: 1
      memory: 4Gi
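Once the Pod is running, you can read the configured requests and limits back with kubectl — a sketch, assuming the Pod is named mssql:

```shell
# Print the resources stanza Kubernetes recorded for the first container
# in the mssql Pod (requests and limits).
kubectl get pod mssql \
  -o jsonpath='{.spec.containers[0].resources}{"\n"}'
```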
When working with SQL Server in Kubernetes, it is best to always define limits and requests, ensuring that the SQL Server running in the Pod has access to the minimum amount of resources it needs and is limited in the amount of resources it can access. Configuring limits and requests will ensure that your workload shares the resources available on your cluster’s Nodes fairly with the other applications.
For more details on Kubernetes limits and requests, check out the following resources:
- My Blog Post Memory Settings for Running SQL Server in Kubernetes
- Kubernetes docs on Managing Container Resources
- My Pluralsight courses Configuring and Managing Kubernetes Storage and Scheduling and Managing the API Server and Pods
The post Container Limits and SQL Server appeared first on Centino Systems Blog.