{
	"id": "4ac425dd-3ef8-4ee8-bb60-e9704cab9b9d",
	"created_at": "2026-04-06T00:12:25.876727Z",
	"updated_at": "2026-04-10T03:20:31.8521Z",
	"deleted_at": null,
	"sha1_hash": "84c9b016fb44e21d5738ddad61213ee950018c68",
	"title": "Running containers",
	"llm_title": "",
	"authors": "",
	"file_creation_date": "0001-01-01T00:00:00Z",
	"file_modification_date": "0001-01-01T00:00:00Z",
	"file_size": 206302,
	"plain_text": "Running containers
By Docker Inc
Published: 2026-03-23 · Archived: 2026-04-05 14:20:29 UTC
Docker runs processes in isolated containers. A container is a process that runs on a host. The host may be local
or remote. When you execute docker run, the container process that runs is isolated: it has its own file
system, its own networking, and its own isolated process tree, separate from the host.
This page details how to use the docker run command to run containers.
A docker run command takes the following form:
docker run [OPTIONS] IMAGE [COMMAND] [ARG...]
The docker run command must specify an image reference to create the container from.
Image references
The image reference is the name and version of the image. You can use the image reference to create or run a
container based on an image.
docker run IMAGE[:TAG][@DIGEST]
docker create IMAGE[:TAG][@DIGEST]
An image tag is the image version, which defaults to latest when omitted. Use the tag to run a container from a
specific version of an image. For example, to run version 24.04 of the ubuntu image: docker run ubuntu:24.04.
Image digests
Images using the v2 or later image format have a content-addressable identifier called a digest. As long as the
input used to generate the image is unchanged, the digest value is predictable.
The following example runs a container from the alpine image with the
sha256:9cacb71397b640eca97488cf08582ae4e4068513101088e9f96c9814bfda95e0 digest:
docker run alpine@sha256:9cacb71397b640eca97488cf08582ae4e4068513101088e9f96c9814bfda95e0
Options
[OPTIONS] let you configure options for the container. For example, you can give the container a name (--name),
or run it as a background process (-d). You can also set options to control things like resource constraints
and networking.
Commands and arguments
https://docs.docker.com/engine/reference/run/#entrypoint-default-command-to-execute-at-runtime
Page 1 of 17
You can use the [COMMAND] and [ARG...] 
positional arguments to specify commands and arguments for the
container to run when it starts up. For example, you can specify sh as the [COMMAND], combined with the -i
and -t flags, to start an interactive shell in the container (if the image you select has an sh executable on PATH).
Depending on your Docker system configuration, you may be required to preface the docker run
command with sudo. To avoid having to use sudo with the docker command, your system
administrator can create a Unix group called docker and add users to it. For more information about
this configuration, refer to the Docker installation documentation for your operating system.
When you start a container, the container runs in the foreground by default. If you want to run the container in the
background instead, you can use the --detach (or -d) flag. This starts the container without occupying your
terminal window.
While the container runs in the background, you can interact with the container using other CLI commands. For
example, docker logs lets you view the logs for the container, and docker attach brings it to the foreground.
For more information about docker run flags related to foreground and background modes, see:
docker run --detach: run container in background
docker run --attach: attach to stdin, stdout, and stderr
docker run --tty: allocate a pseudo-tty
docker run --interactive: keep stdin open even if not attached
For more information about re-attaching to a background container, see docker attach.
You can identify a container in three ways:
Identifier type: Example value
UUID long identifier: f78375b1c487e03c9438c729345e54db9d20cfa2ac1fc3494b6eb60872e74778
UUID short identifier: f78375b1c487
Name: evil_ptolemy
The UUID identifier is a random ID assigned to the container by the daemon.
The daemon generates a random string name for containers automatically. 
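The background and foreground modes described above can be sketched as follows; the container name web and the nginx:alpine image are illustrative, not part of the original examples:

```shell
# Start a container in the background (detached) with a custom name.
docker run -d --name web nginx:alpine

# Inspect its output without attaching to it.
docker logs web

# Bring the container back to the foreground.
docker attach web
```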
You can also define a custom name
using the --name flag. Defining a name can be a handy way to add meaning to a container. If you specify a
name, you can use it when referring to the container in a user-defined network. This works for both background
and foreground Docker containers.
A container identifier is not the same thing as an image reference. The image reference specifies which image to
use when you run a container. You can't run docker exec nginx:alpine sh to open a shell in a container based
on the nginx:alpine image, because docker exec expects a container identifier (name or ID), not an image.
While the image used by a container is not an identifier for the container, you can find out the IDs of containers
using an image with the --filter flag. For example, the following docker ps command gets the IDs of all
running containers based on the nginx:alpine image:
docker ps -q --filter ancestor=nginx:alpine
For more information about using filters, see Filtering.
Containers have networking enabled by default, and they can make outgoing connections. If you're running
multiple containers that need to communicate with each other, you can create a custom network and attach the
containers to the network.
When multiple containers are attached to the same custom network, they can communicate with each other using
the container names as a DNS hostname. For example, you could create a custom network named my-net with
docker network create my-net, then pass --network my-net to docker run for each container that should join it.
For more information about container networking, see Networking overview.
By default, the data in a container is stored in an ephemeral, writable container layer. Removing the container also
removes its data. If you want to use persistent data with containers, you can use filesystem mounts to store the
data persistently on the host system. 
Filesystem mounts can also let you share data between containers and the
host.
Docker supports two main categories of mounts:
Volume mounts
Bind mounts
Volume mounts are great for persistently storing data for containers, and for sharing data between containers. Bind
mounts, on the other hand, are for sharing data between a container and the host.
You can add a filesystem mount to a container using the --mount flag for the docker run command.
The following sections show basic examples of how to create volumes and bind mounts. For more in-depth
examples and descriptions, refer to the storage section of the documentation.
Volume mounts
To create a volume mount, pass the --mount flag with source and target parameters.
The --mount flag takes two parameters in this case: source and target. The value for the source
parameter is the name of the volume. The value of target is the mount location of the volume inside the
container. Once you've created the volume, any data you write to the volume is persisted, even if you stop or
remove the container.
The target must always be an absolute path, such as /src/docs. An absolute path starts with a / (forward
slash). Volume names must start with an alphanumeric character, followed by a-z0-9, _ (underscore), .
(period) or - (hyphen).
Bind mounts
To create a bind mount, pass the --mount flag with a bind type and the source and target paths.
In this case, the --mount flag takes three parameters. A type (bind), and two paths. The source path is the
location on the host that you want to bind mount into the container. The target path is the mount destination
inside the container.
By default, bind mounts require the source path to exist on the daemon host. If the source path doesn't exist, an
error is returned. 
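Minimal sketches of the two mount types described above; the volume name my-vol, the host path /tmp/data, and the container path /app are illustrative:

```shell
# Volume mount: Docker creates the named volume my-vol if it doesn't exist yet.
docker run --mount source=my-vol,target=/app ubuntu:24.04

# Bind mount: mount the host directory /tmp/data at /app inside the container.
docker run --mount type=bind,source=/tmp/data,target=/app ubuntu:24.04
```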
To create the source path on the daemon host if it doesn't exist, use the bind-create-src option.
Bind mounts are read-write by default, meaning that you can both read and write files to and from the mounted
location from the container. Changes that you make, such as adding or editing files, are reflected on the host
filesystem.
The exit code from docker run gives information about why the container failed to run or why it exited. The
following sections describe the meanings of different container exit code values.
125
Exit code 125 indicates that the error is with the Docker daemon itself.
126
Exit code 126 indicates that the specified container command can't be invoked, for example when the command
is a non-executable path such as /etc.
127
Exit code 127 indicates that the container command can't be found.
Other exit codes
Any exit code other than 125, 126, and 127 represents the exit code of the provided container command.
The operator can also adjust the performance parameters of the container:
-m, --memory="": Memory limit (format: <number>[<unit>]). Number is a positive integer. Unit can be
one of b, k, m, or g. Minimum is 6M.
--memory-swap="": Total memory limit (memory + swap, format: <number>[<unit>]). Number is a
positive integer. Unit can be one of b, k, m, or g.
--memory-reservation="": Memory soft limit (format: <number>[<unit>]). Number is a positive integer. Unit
can be one of b, k, m, or g.
--kernel-memory="": Kernel memory limit (format: <number>[<unit>]). Number is a positive integer. Unit
can be one of b, k, m, or g. Minimum is 4M.
-c, --cpu-shares=0: CPU shares (relative weight)
--cpus=0.000: Number of CPUs. 
Number is a fractional number. 0.000 means no limit.
--cpu-period=0: Limit the CPU CFS (Completely Fair Scheduler) period
--cpuset-cpus="": CPUs in which to allow execution (0-3, 0,1)
--cpuset-mems="": Memory nodes (MEMs) in which to allow execution (0-3, 0,1). Only effective on
NUMA systems.
--cpu-quota=0: Limit the CPU CFS (Completely Fair Scheduler) quota
--cpu-rt-period=0: Limit the CPU real-time period, in microseconds. Requires the parent cgroup to be set and
cannot be higher than the parent. Also check rtprio ulimits.
--cpu-rt-runtime=0: Limit the CPU real-time runtime, in microseconds. Requires the parent cgroup to be set and
cannot be higher than the parent. Also check rtprio ulimits.
--blkio-weight=0: Block IO weight (relative weight); accepts a weight value between 10 and 1000.
--blkio-weight-device="": Block IO weight (relative device weight, format: DEVICE_NAME:WEIGHT)
--device-read-bps="": Limit read rate from a device (format: <device-path>:<number>[<unit>]). Number is
a positive integer. Unit can be one of kb, mb, or gb.
--device-write-bps="": Limit write rate to a device (format: <device-path>:<number>[<unit>]). Number is a
positive integer. Unit can be one of kb, mb, or gb.
--device-read-iops="": Limit read rate (IO per second) from a device (format: <device-path>:<number>).
Number is a positive integer.
--device-write-iops="": Limit write rate (IO per second) to a device (format: <device-path>:<number>).
Number is a positive integer.
--oom-kill-disable=false: Whether to disable the OOM Killer for the container or not.
--oom-score-adj=0: Tune the container's OOM preferences (-1000 to 1000)
--memory-swappiness="": Tune a container's memory swappiness behavior. 
Accepts an integer between 0 and 100.
--shm-size="": Size of /dev/shm. The format is <number><unit>. Number must be greater than 0.
Unit is optional and can be b (bytes), k (kilobytes), m (megabytes), or g (gigabytes).
If you omit the unit, the system uses bytes. If you omit the size entirely, the system uses 64m.
User memory constraints
We have four ways to set user memory usage:
memory=inf, memory-swap=inf (default): There is no memory limit for the container. The container can use as
much memory as needed.
memory=L<inf, memory-swap=inf (specify memory and set memory-swap as -1): The container is not allowed
to use more than L bytes of memory, but can use as much swap as is needed
(if the host supports swap memory).
memory=L<inf, memory-swap=2*L (specify memory without memory-swap): The container is not allowed to use
more than L bytes of memory; swap plus memory usage is double of that.
memory=L<inf, memory-swap=S<inf, L<=S (specify both memory and memory-swap): The container is not
allowed to use more than L bytes of memory; swap plus memory usage is limited by S.
Examples:
docker run -it ubuntu:24.04 /bin/bash
We set nothing about memory; the processes in the container can use as much memory and swap
memory as they need.
docker run -it -m 300M --memory-swap -1 ubuntu:24.04 /bin/bash
We set a memory limit and disable the swap memory limit; the processes in the container can use 300M of
memory and as much swap memory as they need (if the host supports swap memory).
docker run -it -m 300M ubuntu:24.04 /bin/bash
We set the memory limit only; the processes in the container can use 300M of memory and 300M of swap
memory. By default, the total virtual memory size (--memory-swap) is set to double the memory limit; in this
case, memory + swap would be 2*300M, so processes can use 300M of swap memory as well.
docker run -it -m 300M --memory-swap 1G ubuntu:24.04 /bin/bash
We set both memory and swap memory, so the processes in the container can use 300M memory and 700M 
swap memory.
Memory reservation is a kind of memory soft limit that allows for greater sharing of memory. Under normal
circumstances, containers can use as much of the memory as needed and are constrained only by the hard limits
set with the -m / --memory option. When memory reservation is set, Docker detects memory contention or low
memory and forces containers to restrict their consumption to the reservation limit.
Always set the memory reservation value below the hard limit, otherwise the hard limit takes precedence. A
reservation of 0 is the same as setting no reservation. By default (without a reservation set), memory reservation is
the same as the hard memory limit.
Memory reservation is a soft-limit feature and does not guarantee the limit won't be exceeded. Instead, the feature
attempts to ensure that, when memory is heavily contended for, memory is allocated based on the reservation
hints/setup.
The following example limits the memory (-m) to 500M and sets the memory reservation to 200M:
docker run -it -m 500M --memory-reservation 200M ubuntu:24.04 /bin/bash
Under this configuration, when the container consumes more than 200M but less than 500M of memory, the next
system memory reclaim attempts to shrink container memory below 200M.
The following example sets the memory reservation to 1G without a hard memory limit:
docker run -it --memory-reservation 1G ubuntu:24.04 /bin/bash
The container can use as much memory as it needs. The memory reservation setting ensures the container doesn't
consume too much memory for a long time, because every memory reclaim shrinks the container's consumption to
the reservation.
By default, the kernel kills processes in a container if an out-of-memory (OOM) error occurs. To change this
behaviour, use the --oom-kill-disable option. Only disable the OOM killer on containers where you have also
set the -m/--memory option. 
If the -m flag is not set, this can result in the host running out of memory, and the kernel may then need to kill
the host's system processes to free memory.
The following example limits the memory to 100M and disables the OOM killer for this container:
docker run -it -m 100M --oom-kill-disable ubuntu:24.04 /bin/bash
The following example illustrates a dangerous way to use the flag:
docker run -it --oom-kill-disable ubuntu:24.04 /bin/bash
The container has unlimited memory, which can cause the host to run out of memory and require killing system
processes to free memory. The --oom-score-adj parameter can be changed to select the priority of the
containers that will be killed when the system is out of memory, with negative scores making them less likely to be
killed, and positive scores more likely.
Kernel memory constraints
Kernel memory is fundamentally different from user memory, as kernel memory can't be swapped out. The
inability to swap makes it possible for the container to block system services by consuming too much kernel
memory. Kernel memory includes:
stack pages
slab pages
sockets memory pressure
tcp memory pressure
You can set a kernel memory limit to constrain these kinds of memory. For example, every process consumes
some stack pages. By limiting kernel memory, you can prevent new processes from being created when the kernel
memory usage is too high.
Kernel memory is never completely independent of user memory. Instead, you limit kernel memory in the context
of the user memory limit. Assume "U" is the user memory limit and "K" the kernel limit. There are three possible
ways to set limits:
U != 0, K = inf (default): This is the standard memory limitation mechanism already present before using kernel
memory. Kernel memory is completely ignored.
U != 0, K < U: Kernel memory is a subset of the user memory. 
This setup is useful in deployments where the
total amount of memory per cgroup is overcommitted. Overcommitting kernel memory limits
is definitely not recommended, since the box can still run out of non-reclaimable memory. In
this case, you can configure K so that the sum of all groups is never greater than the total
memory. Then, freely set U at the expense of the system's service quality.
U != 0, K > U: Kernel memory charges are also fed to the user counter, and reclamation is triggered for
the container for both kinds of memory. This configuration gives the admin a unified view of
memory. It is also useful for people who just want to track kernel memory usage.
Examples:
docker run -it -m 500M --kernel-memory 50M ubuntu:24.04 /bin/bash
We set both memory and kernel memory; the processes in the container can use 500M of memory in total, and of
this 500M, at most 50M can be kernel memory.
docker run -it --kernel-memory 50M ubuntu:24.04 /bin/bash
We set kernel memory without -m, so the processes in the container can use as much memory as they want, but
they can only use 50M of kernel memory.
Swappiness constraint
By default, a container's kernel can swap out a percentage of anonymous pages. To set this percentage for a
container, specify a --memory-swappiness value between 0 and 100. A value of 0 turns off anonymous page
swapping. A value of 100 sets all anonymous pages as swappable. If you don't set --memory-swappiness, the
memory swappiness value is inherited from the parent.
For example, you can set:
docker run -it --memory-swappiness=0 ubuntu:24.04 /bin/bash
Setting the --memory-swappiness option is helpful when you want to retain the container's working set and to
avoid swapping performance penalties.
By default, all containers get the same proportion of CPU cycles. 
This proportion can be modified by changing the
container's CPU share weighting relative to the weighting of all other running containers.
To modify the proportion from the default of 1024, use the -c or --cpu-shares flag to set the weighting to 2 or
higher. If 0 is set, the system ignores the value and uses the default of 1024.
The proportion only applies when CPU-intensive processes are running. When tasks in one container are idle,
other containers can use the left-over CPU time. The actual amount of CPU time will vary depending on the
number of containers running on the system.
For example, consider three containers, where one has a cpu-share of 1024 and the two others have a cpu-share
setting of 512. When processes in all three containers attempt to use 100% of the CPU, the first container
receives 50% of the total CPU time. If you add a fourth container with a cpu-share of 1024, the first container
only gets 33% of the CPU. The remaining containers receive 16.7%, 16.7% and 33% of the CPU.
On a multi-core system, the shares of CPU time are distributed over all CPU cores. Even if a container is limited
to less than 100% of CPU time, it can use 100% of each individual CPU core.
For example, consider a system with more than three cores. If you start one container {C0} with -c=512
running one process, and another container {C1} with -c=1024 running two processes, this can result in the
following division of CPU shares:
PID 100, container {C0}, CPU 0: 100% of CPU0
PID 101, container {C1}, CPU 1: 100% of CPU1
PID 102, container {C1}, CPU 2: 100% of CPU2
CPU period constraint
The default CPU CFS (Completely Fair Scheduler) period is 100ms. We can use --cpu-period to set the period
of CPUs to limit the container's CPU usage. 
Usually, --cpu-period is used together with --cpu-quota.
Examples:
docker run -it --cpu-period=50000 --cpu-quota=25000 ubuntu:24.04 /bin/bash
If there is 1 CPU, this means the container can get 50% CPU worth of run-time every 50ms.
In addition to using --cpu-period and --cpu-quota for setting CPU period constraints, it is possible to specify
--cpus with a float number to achieve the same purpose. For example, if there is 1 CPU, then --cpus=0.5 will
achieve the same result as setting --cpu-period=50000 and --cpu-quota=25000 (50% CPU).
The default value for --cpus is 0.000, which means there is no limit.
For more information, see the CFS documentation on bandwidth limiting.
Cpuset constraint
We can set the CPUs in which to allow execution for containers.
Examples:
docker run -it --cpuset-cpus="1,3" ubuntu:24.04 /bin/bash
This means processes in the container can be executed on cpu 1 and cpu 3.
docker run -it --cpuset-cpus="0-2" ubuntu:24.04 /bin/bash
This means processes in the container can be executed on cpu 0, cpu 1 and cpu 2.
We can set the memory nodes (mems) in which to allow execution for containers. This is only effective on
NUMA systems.
Examples:
docker run -it --cpuset-mems="1,3" ubuntu:24.04 /bin/bash
This example restricts the processes in the container to only use memory from memory nodes 1 and 3.
docker run -it --cpuset-mems="0-2" ubuntu:24.04 /bin/bash
This example restricts the processes in the container to only use memory from memory nodes 0, 1 and 2.
CPU quota constraint
The --cpu-quota flag limits the container's CPU usage. The default 0 value allows the container to take 100% of
a CPU resource (1 CPU). The CFS (Completely Fair Scheduler) handles resource allocation for executing
processes and is the default Linux scheduler used by the kernel. Set this value to 50000 to limit the container to
50% of a CPU resource. For multiple CPUs, adjust the --cpu-quota as necessary. For more information, see the
CFS documentation on bandwidth limiting.
Block IO bandwidth (Blkio) constraint
By default, all containers get the same proportion of block IO bandwidth (blkio). This proportion is 500. 
To modify this proportion, change the container's blkio weight relative to the weighting of all other running
containers using the --blkio-weight flag.
The blkio weight setting is only available for direct IO. Buffered IO is not currently supported.
The --blkio-weight flag can set the weighting to a value between 10 and 1000. For example, the commands
below create two containers with different blkio weights:
docker run -it --name c1 --blkio-weight 300 ubuntu:24.04 /bin/bash
docker run -it --name c2 --blkio-weight 600 ubuntu:24.04 /bin/bash
If you then do direct block IO in both containers at the same time, you'll find that the proportion of IO time is the
same as the proportion of the blkio weights of the two containers.
The --blkio-weight-device="DEVICE_NAME:WEIGHT" flag sets a specific device weight. The
DEVICE_NAME:WEIGHT is a string containing a colon-separated device name and weight. For example, to set
the /dev/sda device weight to 200:
docker run -it --blkio-weight-device "/dev/sda:200" ubuntu:24.04
If you specify both --blkio-weight and --blkio-weight-device, Docker uses the --blkio-weight as the
default weight and uses --blkio-weight-device to override this default with a new value on a specific device.
The following example uses a default weight of 300 and overrides this default on /dev/sda, setting that weight
to 200:
docker run -it --blkio-weight 300 --blkio-weight-device "/dev/sda:200" ubuntu:24.04
The --device-read-bps flag limits the read rate (bytes per second) from a device. For example, you can create a
container and limit its read rate to 1mb per second from /dev/sda.
The --device-write-bps flag limits the write rate (bytes per second) to a device. For example, you can create a
container and limit its write rate to 1mb per second to /dev/sda.
Both flags take limits in the <device-path>:<limit>[unit] format. Both read and write rates must be a positive
integer. 
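The two device rate limits just described can be sketched as follows; /dev/sda is an illustrative device path:

```shell
# Limit reads from /dev/sda to 1 MB per second.
docker run -it --device-read-bps /dev/sda:1mb ubuntu:24.04

# Limit writes to /dev/sda to 1 MB per second.
docker run -it --device-write-bps /dev/sda:1mb ubuntu:24.04
```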
You can specify the rate in kb (kilobytes), mb (megabytes), or gb (gigabytes).
The --device-read-iops flag limits the read rate (IO per second) from a device. For example, this command
creates a container and limits the read rate to 1000 IO per second from /dev/sda:
docker run -it --device-read-iops /dev/sda:1000 ubuntu:24.04
The --device-write-iops flag limits the write rate (IO per second) to a device. For example, this command
creates a container and limits the write rate to 1000 IO per second to /dev/sda:
docker run -it --device-write-iops /dev/sda:1000 ubuntu:24.04
Both flags take limits in the <device-path>:<limit> format. Both read and write rates must be a positive integer.
By default, the docker container process runs with the supplementary groups looked up for the specified user. If
you want to add more groups to that list, you can use the --group-add flag.
--cap-add: Add Linux capabilities
--cap-drop: Drop Linux capabilities
--privileged: Give extended privileges to this container
--device=[]: Allows you to run devices inside the container without the --privileged flag.
By default, Docker containers are "unprivileged" and cannot, for example, run a Docker daemon inside a Docker
container. This is because by default a container is not allowed to access any devices, but a "privileged" container
is given access to all devices (see the documentation on cgroups devices).
The --privileged flag gives all capabilities to the container. When the operator executes docker run
--privileged, Docker enables access to all devices on the host, and reconfigures AppArmor or SELinux to allow
the container nearly all the same access to the host as processes running outside containers on the host. Use this
flag with caution. For more information about the --privileged flag, see the docker run reference.
If you want to limit access to a specific device or devices you can use the --device flag. 
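For example, the --device flag can grant access to a single host device without --privileged; /dev/snd is an illustrative device path:

```shell
# Expose only the host's sound devices inside the container.
docker run -it --device=/dev/snd ubuntu:24.04 /bin/bash
```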
It allows you to specify
one or more devices that will be accessible within the container.
By default, the container is able to read, write, and mknod these devices. This can be overridden using
a third :rwm set of permission options in each --device flag.
In addition to --privileged, the operator can have fine-grained control over the capabilities using --cap-add and
--cap-drop. By default, Docker has a default list of capabilities that are kept. The following table lists the Linux
capability options which are allowed by default and can be dropped.
AUDIT_WRITE: Write records to the kernel auditing log.
CHOWN: Make arbitrary changes to file UIDs and GIDs (see chown(2)).
DAC_OVERRIDE: Bypass file read, write, and execute permission checks.
FOWNER: Bypass permission checks on operations that normally require the file system UID
of the process to match the UID of the file.
FSETID: Don't clear set-user-ID and set-group-ID permission bits when a file is modified.
KILL: Bypass permission checks for sending signals.
MKNOD: Create special files using mknod(2).
NET_BIND_SERVICE: Bind a socket to internet domain privileged ports (port numbers less than 1024).
NET_RAW: Use RAW and PACKET sockets.
SETFCAP: Set file capabilities.
SETGID: Make arbitrary manipulations of process GIDs and supplementary GID list.
SETPCAP: Modify process capabilities.
SETUID: Make arbitrary manipulations of process UIDs.
SYS_CHROOT: Use chroot(2), change root directory.
The next table shows the capabilities which are not granted by default and may be added.
AUDIT_CONTROL: Enable and disable kernel auditing; change auditing filter rules; retrieve
auditing status and filtering rules.
AUDIT_READ: Allow reading the audit log via multicast netlink 
socket.
BLOCK_SUSPEND: Allow preventing system suspends.
BPF: Allow creating BPF maps, loading BPF Type Format (BTF) data, retrieving
JITed code of BPF programs, and more.
CHECKPOINT_RESTORE: Allow checkpoint/restore related operations. Introduced in kernel 5.9.
DAC_READ_SEARCH: Bypass file read permission checks and directory read and execute permission
checks.
IPC_LOCK: Lock memory (mlock(2), mlockall(2), mmap(2), shmctl(2)).
IPC_OWNER: Bypass permission checks for operations on System V IPC objects.
LEASE: Establish leases on arbitrary files (see fcntl(2)).
LINUX_IMMUTABLE: Set the FS_APPEND_FL and FS_IMMUTABLE_FL i-node flags.
MAC_ADMIN: Allow MAC configuration or state changes. Implemented for the Smack LSM.
MAC_OVERRIDE: Override Mandatory Access Control (MAC). Implemented for the Smack
Linux Security Module (LSM).
NET_ADMIN: Perform various network-related operations.
NET_BROADCAST: Make socket broadcasts, and listen to multicasts.
PERFMON: Allow privileged system performance and observability operations using
perf_events, i915_perf and other kernel subsystems.
SYS_ADMIN: Perform a range of system administration operations.
SYS_BOOT: Use reboot(2) and kexec_load(2): reboot and load a new kernel for later
execution.
SYS_MODULE: Load and unload kernel modules.
SYS_NICE: Raise process nice value (nice(2), setpriority(2)) and change the nice value for
arbitrary processes.
SYS_PACCT: Use acct(2), switch process accounting on or off.
SYS_PTRACE: Trace arbitrary processes using ptrace(2).
SYS_RAWIO: Perform I/O port operations (iopl(2) and ioperm(2)).
SYS_RESOURCE: Override resource limits.
SYS_TIME: Set the system clock (settimeofday(2), stime(2), adjtimex(2)); set the real-time
(hardware) clock.
SYS_TTY_CONFIG: Use vhangup(2); 
employ various privileged ioctl(2) operations on virtual
terminals.
SYSLOG: Perform privileged syslog(2) operations.
WAKE_ALARM: Trigger something that will wake up the system.
Further reference information is available on the capabilities(7) - Linux man page, and in the Linux kernel source
code.
Both flags support the value ALL, so to allow a container to use all capabilities except for MKNOD:
docker run --cap-add=ALL --cap-drop=MKNOD ubuntu:24.04
The --cap-add and --cap-drop flags accept capabilities to be specified with a CAP_ prefix. The following
examples are therefore equivalent:
docker run --cap-add SYS_ADMIN ubuntu:24.04
docker run --cap-add CAP_SYS_ADMIN ubuntu:24.04
For interacting with the network stack, instead of using --privileged you should use --cap-add=NET_ADMIN to
modify the network interfaces.
To mount a FUSE based filesystem, you need to combine both --cap-add and --device, for example by adding
the SYS_ADMIN capability and the /dev/fuse device.
The default seccomp profile adjusts to the selected capabilities, in order to allow use of facilities allowed by
the capabilities, so you should not have to adjust it.
When you build an image from a Dockerfile, or when committing it, you can set a number of default parameters
that take effect when the image starts up as a container. When you run an image, you can override those defaults
using flags for the docker run command:
Default entrypoint
Default command and options
Expose ports
Environment variables
Healthcheck
User
Working directory
Default command and options
The command syntax for docker run supports optionally specifying commands and arguments to the container's
entrypoint, represented as [COMMAND] and [ARG...] in the following synopsis:
docker run [OPTIONS] IMAGE [COMMAND] [ARG...]
This command is optional because whoever created the IMAGE may have already provided a default COMMAND,
using the Dockerfile CMD instruction. 
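For instance, assuming an image whose Dockerfile sets a CMD (ubuntu:24.04's default command is /bin/bash), a positional COMMAND replaces it; this is a sketch:

```shell
# Run the image's default command (/bin/bash for ubuntu:24.04).
docker run -it ubuntu:24.04

# Override the default CMD with a different command.
docker run ubuntu:24.04 cat /etc/os-release
```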
When you run a container, you can override that CMD instruction just by specifying a new COMMAND.\r\nIf the image also specifies an ENTRYPOINT, then the CMD or COMMAND gets appended as arguments to the ENTRYPOINT.\r\nDefault entrypoint\r\nThe entrypoint refers to the default executable that's invoked when you run a container. A container's entrypoint is defined using the Dockerfile ENTRYPOINT instruction. It's similar to specifying a default command in that it determines what runs when the container starts, but the difference is that you need to pass an explicit flag to override the entrypoint, whereas you can override default commands with positional arguments. The entrypoint defines a container's default behavior, with the idea that when you set an entrypoint you can run the container as if it were that binary, complete with default options, and you can pass in more options as commands. But there are cases where you may want to run something else inside the container. This is when overriding the default entrypoint at runtime comes in handy, using the --entrypoint flag for the docker run command.\r\nThe --entrypoint flag expects a string value, representing the name or path of the binary that you want to invoke when the container starts. The following example shows you how to run a Bash shell in a container that has been set up to automatically run some other binary (like /usr/bin/redis-server ):\r\nThe following examples show how to pass additional parameters to the custom entrypoint, using the positional command arguments:\r\nYou can reset a container's entrypoint by passing an empty string, for example:\r\nPassing --entrypoint clears out any default command set on the image. That is, any CMD instruction in the Dockerfile used to build it.\r\nExposed ports\r\nBy default, when you run a container, none of the container's ports are exposed to the host. 
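Returning to the entrypoint overrides described above, the lost examples might look like the following sketch (the image name example/redis is illustrative, not from this page):

```shell
# Run a Bash shell instead of the image's configured entrypoint
docker run -it --entrypoint /bin/bash example/redis

# Pass additional parameters to the overridden entrypoint as positional arguments
docker run -it --entrypoint /bin/bash example/redis -c "ls -l"

# Reset the entrypoint with an empty string (also clears any default CMD)
docker run -it --entrypoint "" example/redis bash
```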
This means you won't be able to access any ports that the container might be listening on. To make a container's ports accessible from the host, you need to publish the ports.\r\nYou can start the container with the -P or -p flags to expose its ports:\r\nThe -P (or --publish-all ) flag publishes all the exposed ports to the host. Docker binds each exposed port to a random port on the host.\r\nThe -P flag only publishes port numbers that are explicitly flagged as exposed, either using the Dockerfile EXPOSE instruction or the --expose flag for the docker run command.\r\nThe -p (or --publish ) flag lets you explicitly map a single port or range of ports in the container to the host.\r\nThe port number inside the container (where the service listens) doesn't need to match the port number published on the outside of the container (where clients connect). For example, inside the container an HTTP service might be listening on port 80. At runtime, the port might be bound to 42800 on the host. To find the mapping between the host ports and the exposed ports, use the docker port command.\r\nEnvironment variables\r\nDocker automatically sets some environment variables when creating a Linux container. Docker doesn't set any environment variables when creating a Windows container.\r\nThe following environment variables are set for Linux containers:\r\nVariable Value\r\nHOME Set based on the value of USER\r\nHOSTNAME The hostname associated with the container\r\nPATH Includes popular directories, such as /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin\r\nTERM xterm if the container is allocated a pseudo-TTY\r\nAdditionally, you can set any environment variable in the container by using one or more -e flags. 
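A sketch of the -e flag in use (variable names and image are illustrative):

```shell
# Set a new variable and override one of the defaults listed above
docker run -e DEPLOY_ENV=staging -e TERM=dumb alpine env
```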
You can even override the variables mentioned above, or variables defined using a Dockerfile ENV instruction when building the image.\r\nIf you name an environment variable without specifying a value, the current value of the named variable on the host is propagated into the container's environment:\r\nHealthchecks\r\nThe following flags for the docker run command let you control the parameters for container healthchecks:\r\nOption Description\r\n--health-cmd Command to run to check health\r\n--health-interval Time between running the check\r\n--health-retries Consecutive failures needed to report unhealthy\r\n--health-timeout Maximum time to allow one check to run\r\n--health-start-period Start period for the container to initialize before starting the health-retries countdown\r\n--health-start-interval Time between running the check during the start period\r\n--no-healthcheck Disable any container-specified HEALTHCHECK\r\nExample:\r\nThe health status is also displayed in the docker ps output.\r\nUser\r\nThe default user within a container is root (uid = 0). You can set a default user to run the first process with the Dockerfile USER instruction. When starting a container, you can override the USER instruction by passing the -u option.\r\nThe following examples are all valid:\r\nIf you pass a numeric user ID, it must be in the range of 0-2147483647. If you pass a username, the user must exist in the container.\r\nWorking directory\r\nThe default working directory for running binaries within a container is the root directory ( / ). The default working directory of an image is set using the Dockerfile WORKDIR instruction. 
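For the -u flag discussed above, valid forms include user, user:group, uid, and uid:gid; a sketch (names and IDs illustrative):

```shell
docker run -u daemon alpine id
docker run -u 1000:1000 alpine id
```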
You can override the default working directory for an image using the -w (or --workdir ) flag for the docker run command:\r\nIf the directory doesn't already exist in the container, it's created.\r\nSource: https://docs.docker.com/engine/reference/run/#entrypoint-default-command-to-execute-at-runtime",
	"extraction_quality": 1,
	"language": "EN",
	"sources": [
		"MITRE"
	],
	"references": [
		"https://docs.docker.com/engine/reference/run/#entrypoint-default-command-to-execute-at-runtime"
	],
	"report_names": [
		"#entrypoint-default-command-to-execute-at-runtime"
	],
	"threat_actors": [],
	"ts_created_at": 1775434345,
	"ts_updated_at": 1775791231,
	"ts_creation_date": 0,
	"ts_modification_date": 0,
	"files": {
		"pdf": "https://archive.orkl.eu/84c9b016fb44e21d5738ddad61213ee950018c68.pdf",
		"text": "https://archive.orkl.eu/84c9b016fb44e21d5738ddad61213ee950018c68.txt",
		"img": "https://archive.orkl.eu/84c9b016fb44e21d5738ddad61213ee950018c68.jpg"
	}
}