Remove references to obsolete engine versions
Signed-off-by: Sebastiaan van Stijn <[email protected]>
thaJeztah committed Oct 26, 2020
1 parent 43ce72c commit 2ce808e
Showing 42 changed files with 253 additions and 334 deletions.
2 changes: 1 addition & 1 deletion .github/ISSUE_TEMPLATE.md
@@ -17,7 +17,7 @@
### Project version(s) affected

<!-- If this problem only affects specific versions of a project (like Docker
-Engine 1.13), tell us here. The fix may need to take that into account. -->
+Engine 19.03), tell us here. The fix may need to take that into account. -->

### Suggestions for a fix

6 changes: 3 additions & 3 deletions _includes/content/compose-extfields-sub.md
@@ -9,7 +9,7 @@ your Compose file and their name start with the `x-` character sequence.
> of service, volume, network, config and secret definitions.
```yaml
-version: '3.4'
+version: "{{ site.compose_file_v3 }}"
x-custom:
items:
- a
@@ -35,7 +35,7 @@ logging:
You may write your Compose file as follows:
```yaml
-version: '3.4'
+version: "{{ site.compose_file_v3 }}"
x-logging:
&default-logging
options:
@@ -56,7 +56,7 @@ It is also possible to partially override values in extension fields using
the [YAML merge type](https://yaml.org/type/merge.html). For example:
```yaml
-version: '3.4'
+version: "{{ site.compose_file_v3 }}"
x-volumes:
&default-volume
driver: foobar-storage
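For context on the merge-type override this hunk touches, the surrounding docs page completes the example roughly as follows (a sketch, not part of this commit; the service and volume names are illustrative):

```yaml
version: "3.4"
x-volumes:
  &default-volume
  driver: foobar-storage

services:
  web:
    image: myapp/web:latest
    volumes: ["vol1", "vol2"]

volumes:
  vol1: *default-volume
  vol2:
    <<: *default-volume
    name: volume02
```

Here `vol1` reuses the anchored definition as-is, while `vol2` merges it in with `<<:` and adds a `name` key on top.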
2 changes: 1 addition & 1 deletion _includes/kubernetes-mac-win.md
@@ -141,7 +141,7 @@ Docker has created the following demo app that you can deploy to swarm mode or
to Kubernetes using the `docker stack deploy` command.

```yaml
-version: '3.3'
+version: "{{ site.compose_file_v3 }}"

services:
web:
2 changes: 1 addition & 1 deletion compose/aspnet-mssql-compose.md
@@ -95,7 +95,7 @@ configure this app to use our SQL Server database, and then create a
> base 10 digits and/or non-alphanumeric symbols.
```yaml
-version: "3"
+version: "{{ site.compose_file_v3 }}"
services:
web:
build: .
26 changes: 15 additions & 11 deletions compose/networking.md
@@ -12,23 +12,27 @@ container for a service joins the default network and is both *reachable* by
other containers on that network, and *discoverable* by them at a hostname
identical to the container name.

-> **Note**: Your app's network is given a name based on the "project name",
+> **Note**
+>
+> Your app's network is given a name based on the "project name",
> which is based on the name of the directory it lives in. You can override the
> project name with either the [`--project-name` flag](reference/overview.md)
> or the [`COMPOSE_PROJECT_NAME` environment variable](reference/envvars.md#compose_project_name).
For example, suppose your app is in a directory called `myapp`, and your `docker-compose.yml` looks like this:

-    version: "3"
-    services:
-      web:
-        build: .
-        ports:
-          - "8000:8000"
-      db:
-        image: postgres
-        ports:
-          - "8001:5432"
+```yaml
+version: "{{ site.compose_file_v3 }}"
+services:
+  web:
+    build: .
+    ports:
+      - "8000:8000"
+  db:
+    image: postgres
+    ports:
+      - "8001:5432"
+```
When you run `docker-compose up`, the following happens:

42 changes: 23 additions & 19 deletions compose/rails.md
@@ -68,25 +68,29 @@ one's Docker image (the database just runs on a pre-made PostgreSQL image, and
the web app is built from the current directory), and the configuration needed
to link them together and expose the web app's port.

-    version: '3'
-    services:
-      db:
-        image: postgres
-        volumes:
-          - ./tmp/db:/var/lib/postgresql/data
-        environment:
-          POSTGRES_PASSWORD: password
-      web:
-        build: .
-        command: bash -c "rm -f tmp/pids/server.pid && bundle exec rails s -p 3000 -b '0.0.0.0'"
-        volumes:
-          - .:/myapp
-        ports:
-          - "3000:3000"
-        depends_on:
-          - db

->**Tip**: You can use either a `.yml` or `.yaml` extension for this file.
+```yaml
+version: "{{ site.compose_file_v3 }}"
+services:
+  db:
+    image: postgres
+    volumes:
+      - ./tmp/db:/var/lib/postgresql/data
+    environment:
+      POSTGRES_PASSWORD: password
+  web:
+    build: .
+    command: bash -c "rm -f tmp/pids/server.pid && bundle exec rails s -p 3000 -b '0.0.0.0'"
+    volumes:
+      - .:/myapp
+    ports:
+      - "3000:3000"
+    depends_on:
+      - db
+```
+> **Tip**
+>
+> You can use either a `.yml` or `.yaml` extension for this file.

### Build the project

8 changes: 4 additions & 4 deletions config/containers/live-restore.md
@@ -7,10 +7,10 @@ redirect_from:
---

By default, when the Docker daemon terminates, it shuts down running containers.
-Starting with Docker Engine 1.12, you can configure the daemon so that
-containers remain running if the daemon becomes unavailable. This functionality
-is called _live restore_. The live restore option helps reduce container
-downtime due to daemon crashes, planned outages, or upgrades.
+You can configure the daemon so that containers remain running if the daemon
+becomes unavailable. This functionality is called _live restore_. The live restore
+option helps reduce container downtime due to daemon crashes, planned outages,
+or upgrades.

> **Note**
>
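For context on the live-restore paragraph above (a sketch, not part of this commit): the option is typically enabled in the daemon configuration file, `/etc/docker/daemon.json`:

```json
{
  "live-restore": true
}
```

After editing the file, the configuration can be reloaded without restarting the daemon (for example by sending it `SIGHUP`), which is what makes the option useful for upgrades.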
20 changes: 8 additions & 12 deletions config/containers/resource_constraints.md
@@ -167,9 +167,8 @@ by viewing `/proc/<PID>/status` on the host machine.
By default, each container's access to the host machine's CPU cycles is unlimited.
You can set various constraints to limit a given container's access to the host
machine's CPU cycles. Most users use and configure the
-[default CFS scheduler](#configure-the-default-cfs-scheduler). In Docker 1.13
-and higher, you can also configure the
-[realtime scheduler](#configure-the-realtime-scheduler).
+[default CFS scheduler](#configure-the-default-cfs-scheduler). You can also
+configure the [realtime scheduler](#configure-the-realtime-scheduler).

### Configure the default CFS scheduler

@@ -180,31 +179,28 @@ the container's cgroup on the host machine.

| Option | Description |
|:-----------------------|:-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
-| `--cpus=<value>` | Specify how much of the available CPU resources a container can use. For instance, if the host machine has two CPUs and you set `--cpus="1.5"`, the container is guaranteed at most one and a half of the CPUs. This is the equivalent of setting `--cpu-period="100000"` and `--cpu-quota="150000"`. Available in Docker 1.13 and higher. |
-| `--cpu-period=<value>` | Specify the CPU CFS scheduler period, which is used alongside `--cpu-quota`. Defaults to 100000 microseconds (100 milliseconds). Most users do not change this from the default. If you use Docker 1.13 or higher, use `--cpus` instead. |
-| `--cpu-quota=<value>` | Impose a CPU CFS quota on the container. The number of microseconds per `--cpu-period` that the container is limited to before throttled. As such acting as the effective ceiling. If you use Docker 1.13 or higher, use `--cpus` instead. |
+| `--cpus=<value>` | Specify how much of the available CPU resources a container can use. For instance, if the host machine has two CPUs and you set `--cpus="1.5"`, the container is guaranteed at most one and a half of the CPUs. This is the equivalent of setting `--cpu-period="100000"` and `--cpu-quota="150000"`. |
+| `--cpu-period=<value>` | Specify the CPU CFS scheduler period, which is used alongside `--cpu-quota`. Defaults to 100000 microseconds (100 milliseconds). Most users do not change this from the default. For most use-cases, `--cpus` is a more convenient alternative. |
+| `--cpu-quota=<value>` | Impose a CPU CFS quota on the container. The number of microseconds per `--cpu-period` that the container is limited to before throttled. As such acting as the effective ceiling. For most use-cases, `--cpus` is a more convenient alternative. |
| `--cpuset-cpus` | Limit the specific CPUs or cores a container can use. A comma-separated list or hyphen-separated range of CPUs a container can use, if you have more than one CPU. The first CPU is numbered 0. A valid value might be `0-3` (to use the first, second, third, and fourth CPU) or `1,3` (to use the second and fourth CPU). |
| `--cpu-shares` | Set this flag to a value greater or less than the default of 1024 to increase or reduce the container's weight, and give it access to a greater or lesser proportion of the host machine's CPU cycles. This is only enforced when CPU cycles are constrained. When plenty of CPU cycles are available, all containers use as much CPU as they need. In that way, this is a soft limit. `--cpu-shares` does not prevent containers from being scheduled in swarm mode. It prioritizes container CPU resources for the available CPU cycles. It does not guarantee or reserve any specific CPU access. |

If you have 1 CPU, each of the following commands guarantees the container at
most 50% of the CPU every second.

-**Docker 1.13 and higher**:
-
```bash
docker run -it --cpus=".5" ubuntu /bin/bash
```

-**Docker 1.12 and lower**:
+Which is the equivalent to manually specifying `--cpu-period` and `--cpu-quota`;

```bash
$ docker run -it --cpu-period=100000 --cpu-quota=50000 ubuntu /bin/bash
```
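The equivalence this hunk states between `--cpus` and the `--cpu-period`/`--cpu-quota` pair is just a ratio; a quick sketch, not part of this commit:

```python
def effective_cpus(cpu_quota_us: int, cpu_period_us: int) -> float:
    """CPU share granted by a CFS quota/period pair, which --cpus expresses directly."""
    return cpu_quota_us / cpu_period_us

# --cpu-period=100000 --cpu-quota=50000 is the same as --cpus=".5"
print(effective_cpus(50_000, 100_000))   # 0.5
# --cpu-period=100000 --cpu-quota=150000 is the same as --cpus="1.5"
print(effective_cpus(150_000, 100_000))  # 1.5
```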

### Configure the realtime scheduler

-In Docker 1.13 and higher, you can configure your container to use the
-realtime scheduler, for tasks which cannot use the CFS scheduler. You need to
+You can configure your container to use the realtime scheduler, for tasks which
+cannot use the CFS scheduler. You need to
[make sure the host machine's kernel is configured correctly](#configure-the-host-machines-kernel)
before you can [configure the Docker daemon](#configure-the-docker-daemon) or
[configure individual containers](#configure-individual-containers).
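For context on the realtime-scheduler hunk above, a hedged sketch of the daemon-side settings (values illustrative, in microseconds; not part of this commit):

```json
{
  "cpu-rt-period": 1000000,
  "cpu-rt-runtime": 950000
}
```

This reserves up to 950 ms of realtime runtime per 1-second period for containers; individual containers then request their share with `docker run --cpu-rt-runtime`.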
8 changes: 3 additions & 5 deletions config/pruning.md
@@ -144,9 +144,8 @@ for more examples.
## Prune everything

The `docker system prune` command is a shortcut that prunes images, containers,
-and networks. In Docker 17.06.0 and earlier, volumes are also pruned. In Docker
-17.06.1 and higher, you must specify the `--volumes` flag for
-`docker system prune` to prune volumes.
+and networks. Volumes are not pruned by default, and you must specify the
+`--volumes` flag for `docker system prune` to prune volumes.

```bash
$ docker system prune
@@ -159,8 +158,7 @@ WARNING! This will remove:
Are you sure you want to continue? [y/N] y
```

-If you are on Docker 17.06.1 or higher and want to also prune volumes, add
-the `--volumes` flag:
+To also prune volumes, add the `--volumes` flag:

```bash
$ docker system prune --volumes
5 changes: 2 additions & 3 deletions develop/develop-images/multistage-build.md
@@ -6,9 +6,8 @@ redirect_from:
- /engine/userguide/eng-image/multistage-build/
---

-Multi-stage builds are a new feature requiring Docker 17.05 or higher on the
-daemon and client. Multistage builds are useful to anyone who has struggled to
-optimize Dockerfiles while keeping them easy to read and maintain.
+Multistage builds are useful to anyone who has struggled to optimize Dockerfiles
+while keeping them easy to read and maintain.

> **Acknowledgment**:
> Special thanks to [Alex Ellis](https://twitter.com/alexellisuk) for granting
3 changes: 1 addition & 2 deletions docker-for-mac/index.md
@@ -481,5 +481,4 @@ After you have successfully authenticated, you can access your organizations and
[Docker CLI Reference Guide](../engine/api/index.md){: target="_blank" rel="noopener" class="_"}.

* Check out the blog post, [What’s New in Docker 17.06 Community Edition
-(CE)](https://blog.docker.com/2017/07/whats-new-docker-17-06-community-edition-ce/){:
-target="_blank" rel="noopener" class="_"}.
+(CE)](https://blog.docker.com/2017/07/whats-new-docker-17-06-community-edition-ce/){: target="_blank" rel="noopener" class="_"}.
12 changes: 5 additions & 7 deletions engine/install/index.md
@@ -89,10 +89,10 @@ and **nightly**:

Year-month releases are made from a release branch diverged from the master
branch. The branch is created with format `<year>.<month>`, for example
-`18.09`. The year-month name indicates the earliest possible calendar
+`19.03`. The year-month name indicates the earliest possible calendar
month to expect the release to be generally available. All further patch
-releases are performed from that branch. For example, once `v18.09.0` is
-released, all subsequent patch releases are built from the `18.09` branch.
+releases are performed from that branch. For example, once `v19.03.0` is
+released, all subsequent patch releases are built from the `19.03` branch.

### Test

@@ -114,10 +114,8 @@ format:
where the time is the commit time in UTC and the final suffix is the prefix
of the commit hash, for example `0.0.0-20180720214833-f61e0f7`.

-These builds allow for testing from the latest code on the master branch.
-
-> **Note:**
-> No qualifications or guarantees are made for the nightly builds.
+These builds allow for testing from the latest code on the master branch. No
+qualifications or guarantees are made for the nightly builds.
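The nightly version format described in this hunk can be sketched as follows (an illustrative helper, not part of this commit):

```python
from datetime import datetime, timezone

def nightly_version(commit_time: datetime, commit_hash: str) -> str:
    """Compose a nightly version string: 0.0.0-<UTC commit time>-<hash prefix>."""
    stamp = commit_time.astimezone(timezone.utc).strftime("%Y%m%d%H%M%S")
    return f"0.0.0-{stamp}-{commit_hash[:7]}"

# Reproduces the example given in the docs text:
print(nightly_version(datetime(2018, 7, 20, 21, 48, 33, tzinfo=timezone.utc),
                      "f61e0f7abc123"))  # 0.0.0-20180720214833-f61e0f7
```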

## Support

6 changes: 2 additions & 4 deletions engine/security/apparmor.md
@@ -10,10 +10,8 @@ administrator associates an AppArmor security profile with each program. Docker
expects to find an AppArmor policy loaded and enforced.

Docker automatically generates and loads a default profile for containers named
-`docker-default`. On Docker versions `1.13.0` and later, the Docker binary generates
-this profile in `tmpfs` and then loads it into the kernel. On Docker versions
-earlier than `1.13.0`, this profile is generated in `/etc/apparmor.d/docker`
-instead.
+`docker-default`. The Docker binary generates this profile in `tmpfs` and then
+loads it into the kernel.

> **Note**: This profile is used on containers, _not_ on the Docker Daemon.
12 changes: 6 additions & 6 deletions engine/security/certificates.md
@@ -23,12 +23,12 @@ A custom certificate is configured by creating a directory under
`/etc/docker/certs.d` using the same name as the registry's hostname, such as
`localhost`. All `*.crt` files are added to this directory as CA roots.

-> **Note**:
-> As of Docker 1.13, on Linux any root certificates authorities are merged
-> with the system defaults, including as the host's root CA set. On prior
-versions of Docker, and on Docker Enterprise Edition for Windows Server,
-> the system default certificates are only used when no custom root certificates
-> are configured.
+> **Note**
+>
+> On Linux any root certificates authorities are merged with the system defaults,
+> including the host's root CA set. If you are running Docker on Windows Server,
+> or Docker Desktop for Windows with Windows containers, the system default
+> certificates are only used when no custom root certificates are configured.
The presence of one or more `<filename>.key/cert` pairs indicates to Docker
that there are custom certificates required for access to the desired
14 changes: 6 additions & 8 deletions engine/security/trust/index.md
@@ -86,12 +86,13 @@ The following image depicts the various signing keys and their relationships:

![Content Trust components](images/trust_components.png)

-> **WARNING**:
+> **WARNING**
+>
> Loss of the root key is **very difficult** to recover from.
->Correcting this loss requires intervention from [Docker
->Support](https://support.docker.com) to reset the repository state. This loss
->also requires **manual intervention** from every consumer that used a signed
->tag from this repository prior to the loss.
+> Correcting this loss requires intervention from [Docker
+> Support](https://support.docker.com) to reset the repository state. This loss
+> also requires **manual intervention** from every consumer that used a signed
+> tag from this repository prior to the loss.
{:.warning}

You should back up the root key somewhere safe. Given that it is only required
@@ -101,9 +102,6 @@ read how to [manage keys for DCT](trust_key_mng.md).

## Signing Images with Docker Content Trust

-> **Note:**
-> This applies to Docker Community Engine 17.12 and newer.
Within the Docker CLI we can sign and push a container image with the
`$ docker trust` command syntax. This is built on top of the Notary feature
set, more information on Notary can be found [here](/notary/getting_started/).
14 changes: 6 additions & 8 deletions engine/swarm/admin_guide.md
@@ -239,9 +239,8 @@ you demote or remove a manager.
## Back up the swarm

Docker manager nodes store the swarm state and manager logs in the
-`/var/lib/docker/swarm/` directory. In 1.13 and higher, this data includes the
-keys used to encrypt the Raft logs. Without these keys, you cannot restore the
-swarm.
+`/var/lib/docker/swarm/` directory. This data includes the keys used to encrypt
+the Raft logs. Without these keys, you cannot restore the swarm.

You can back up the swarm using any manager. Use the following procedure.

@@ -377,11 +376,10 @@ balance across the swarm. When new tasks start, or when a node with running
tasks becomes unavailable, those tasks are given to less busy nodes. The goal
is eventual balance, with minimal disruption to the end user.
-In Docker 1.13 and higher, you can use the `--force` or `-f` flag with the
-`docker service update` command to force the service to redistribute its tasks
-across the available worker nodes. This causes the service tasks to restart.
-Client applications may be disrupted. If you have configured it, your service
-uses a [rolling update](swarm-tutorial/rolling-update.md).
+You can use the `--force` or `-f` flag with the `docker service update` command
+to force the service to redistribute its tasks across the available worker nodes.
+This causes the service tasks to restart. Client applications may be disrupted.
+If you have configured it, your service uses a [rolling update](swarm-tutorial/rolling-update.md).
If you use an earlier version and you want to achieve an even balance of load
across workers and don't mind disrupting running tasks, you can force your swarm