**Is this a BUG REPORT or FEATURE REQUEST?**:
/kind bug
**Description**
When podman stops a systemd container with bind mounts, it leaves behind a lot of cgroup debris. The debris accumulates with each start/stop cycle and prevents the container from starting a third time.
**Steps to reproduce the issue:**

Note: to get working bind mounts at all I currently have to run `mount --make-private /tmp` first; otherwise oci-systemd-hook cannot move the mount to the overlay ("Cannot move mount from /tmp/ocitmp.XXXX to .../merged/run", projectatomic/oci-systemd-hook#92). This is on Fedora 28. A quick way to check the propagation setting is sketched after the steps below.

1. Create a systemd-based fedora:28 container:

```
podman create --name bobby_silver -v /srv/docker/volumes/podman/home:/home:z --env container=podman --entrypoint=/sbin/init --stop-signal=RTMIN+3 fedora:28
```

2. Start and stop the container three times:

```
podman start bobby_silver
podman stop bobby_silver
podman start bobby_silver
podman stop bobby_silver
podman start bobby_silver
podman stop bobby_silver
```
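How I check the propagation of /tmp before step 1 (assuming /tmp is its own mount, as with Fedora's default tmpfs /tmp):

```
# show the current mount propagation of /tmp (shared vs. private)
findmnt -o TARGET,PROPAGATION /tmp

# make it private if it is still shared
mount --make-private /tmp
```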
**Describe the results you received:**
After the first start/stop cycle there is cgroup debris, and it keeps nesting with every further cycle:

```
cgroup on /sys/fs/cgroup/systemd/libpod_parent/libpod-conmon-cd8be22a52efaed7e2790d2eb3421c00542c3eb9763bfe715c3ad23647c419e0/cd8be22a52efaed7e2790d2eb3421c00542c3eb9763bfe715c3ad23647c419e0 type cgroup (rw,nosuid,nodev,noexec,relatime,seclabel,xattr,name=systemd)
cgroup on /sys/fs/cgroup/systemd/libpod_parent/libpod-conmon-cd8be22a52efaed7e2790d2eb3421c00542c3eb9763bfe715c3ad23647c419e0/cd8be22a52efaed7e2790d2eb3421c00542c3eb9763bfe715c3ad23647c419e0 type cgroup (rw,nosuid,nodev,noexec,relatime,seclabel,xattr,name=systemd)
cgroup on /sys/fs/cgroup/systemd/libpod_parent/libpod-conmon-cd8be22a52efaed7e2790d2eb3421c00542c3eb9763bfe715c3ad23647c419e0/cd8be22a52efaed7e2790d2eb3421c00542c3eb9763bfe715c3ad23647c419e0/libpod_parent/libpod-conmon-cd8be22a52efaed7e2790d2eb3421c00542c3eb9763bfe715c3ad23647c419e0/cd8be22a52efaed7e2790d2eb3421c00542c3eb9763bfe715c3ad23647c419e0 type cgroup (rw,nosuid,nodev,noexec,relatime,seclabel,xattr,name=systemd)
```

The third start then fails:

```
unable to start container "bobby_silver": container create failed: container_linux.go:348: starting container process caused "process_linux.go:402: container init caused \"rootfs_linux.go:58: mounting \\\"cgroup\\\" to rootfs \\\"/var/lib/containers/storage/overlay/52f7959a1a8a171b2c8aee587ea81c964e84130681444f0ff03b3202804a91cb/merged\\\" at \\\"/sys/fs/cgroup\\\" caused \\\"stat /sys/fs/cgroup/systemd/libpod_parent/libpod-conmon-cd8be22a52efaed7e2790d2eb3421c00542c3eb9763bfe715c3ad23647c419e0/cd8be22a52efaed7e2790d2eb3421c00542c3eb9763bfe715c3ad23647c419e0/libpod_parent/libpod-conmon-cd8be22a52efaed7e2790d2eb3421c00542c3eb9763bfe715c3ad23647c419e0/cd8be22a52efaed7e2790d2eb3421c00542c3eb9763bfe715c3ad23647c419e0/libpod_parent/libpod-conmon-cd8be22a52efaed7e2790d2eb3421c00542c3eb9763bfe715c3ad23647c419e0/cd8be22a52efaed7e2790d2eb3421c00542c3eb9763bfe715c3ad23647c419e0: no such file or directory\\\"\""
```
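For reference, a rough (untested) sketch of how the leftover mounts could be cleared by hand; field 3 of the `mount` output is the mount point, and sorting in reverse unmounts the deepest paths first:

```
# list the leftover libpod cgroup mounts
mount | grep libpod-conmon

# unmount them, deepest paths first, so parents are unmounted last
mount | grep libpod-conmon | awk '{print $3}' | sort -r | while read -r m; do
  umount "$m"
done
```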
Journal from the failed third start:

```
May 06 10:37:05 podman.localdomain kernel: IPv6: ADDRCONF(NETDEV_UP): eth0: link is not ready
May 06 10:37:05 podman.localdomain audit: ANOM_PROMISCUOUS dev=vethd814e8bb prom=256 old_prom=0 auid=1050 uid=0 gid=0 ses=3
May 06 10:37:05 podman.localdomain kernel: IPv6: ADDRCONF(NETDEV_UP): vethd814e8bb: link is not ready
May 06 10:37:05 podman.localdomain kernel: IPv6: ADDRCONF(NETDEV_CHANGE): vethd814e8bb: link becomes ready
May 06 10:37:05 podman.localdomain kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready
May 06 10:37:05 podman.localdomain kernel: cni0: port 2(vethd814e8bb) entered blocking state
May 06 10:37:05 podman.localdomain kernel: cni0: port 2(vethd814e8bb) entered disabled state
May 06 10:37:05 podman.localdomain kernel: device vethd814e8bb entered promiscuous mode
May 06 10:37:05 podman.localdomain kernel: cni0: port 2(vethd814e8bb) entered blocking state
May 06 10:37:05 podman.localdomain kernel: cni0: port 2(vethd814e8bb) entered forwarding state
May 06 10:37:05 podman.localdomain NetworkManager[1159]: <info> [1525574225.9736] device (vethd814e8bb): carrier: link connected
May 06 10:37:05 podman.localdomain NetworkManager[1159]: <info> [1525574225.9747] manager: (vethd814e8bb): new Veth device (/org/freedesktop/NetworkManager/Devices/12)
May 06 10:37:05 podman.localdomain systemd-udevd[18311]: link_config: autonegotiation is unset or enabled, the speed and duplex are not writable.
May 06 10:37:05 podman.localdomain systemd-udevd[18311]: Could not generate persistent MAC address for vethd814e8bb: No such file or directory
May 06 10:37:05 podman.localdomain audit: NETFILTER_CFG table=nat family=2 entries=89
May 06 10:37:05 podman.localdomain audit: NETFILTER_CFG table=nat family=2 entries=91
May 06 10:37:05 podman.localdomain audit: NETFILTER_CFG table=nat family=2 entries=92
May 06 10:37:05 podman.localdomain audit: NETFILTER_CFG table=nat family=2 entries=93
May 06 10:37:05 podman.localdomain audit: NETFILTER_CFG table=filter family=2 entries=155
May 06 10:37:06 podman.localdomain conmon[18363]: conmon cd8be22a52efaed7e279 <ninfo>: about to waitpid: 18364
May 06 10:37:06 podman.localdomain kernel: SELinux: mount invalid. Same superblock, different security settings for (dev mqueue, type mqueue)
May 06 10:37:06 podman.localdomain oci-systemd-hook[18389]: systemdhook <error>: cd8be22a52ef: pid not found in state: Success
May 06 10:37:06 podman.localdomain conmon[18363]: conmon cd8be22a52efaed7e279 <error>: Failed to create container: exit status 1
May 06 10:37:06 podman.localdomain audit: NETFILTER_CFG table=filter family=2 entries=156
May 06 10:37:06 podman.localdomain audit: NETFILTER_CFG table=nat family=2 entries=94
May 06 10:37:06 podman.localdomain audit: NETFILTER_CFG table=nat family=2 entries=96
May 06 10:37:06 podman.localdomain audit: NETFILTER_CFG table=nat family=2 entries=94
May 06 10:37:06 podman.localdomain audit: NETFILTER_CFG table=nat family=2 entries=96
May 06 10:37:06 podman.localdomain audit: NETFILTER_CFG table=nat family=10 entries=78
May 06 10:37:06 podman.localdomain audit: NETFILTER_CFG table=nat family=10 entries=80
May 06 10:37:06 podman.localdomain kernel: cni0: port 2(vethd814e8bb) entered disabled state
May 06 10:37:06 podman.localdomain audit: ANOM_PROMISCUOUS dev=vethd814e8bb prom=0 old_prom=256 auid=1050 uid=0 gid=0 ses=3
May 06 10:37:06 podman.localdomain kernel: device vethd814e8bb left promiscuous mode
May 06 10:37:06 podman.localdomain kernel: cni0: port 2(vethd814e8bb) entered disabled state
May 06 10:37:06 podman.localdomain NetworkManager[1159]: <info> [1525574226.1560] device (vethd814e8bb): released from master device cni0
May 06 10:37:06 podman.localdomain gnome-shell[3393]: Removing a network device that was not added
May 06 10:37:06 podman.localdomain gnome-shell[2067]: Removing a network device that was not added
May 06 10:37:06 podman.localdomain audit: NETFILTER_CFG table=nat family=2 entries=94
May 06 10:37:06 podman.localdomain audit: NETFILTER_CFG table=nat family=2 entries=93
May 06 10:37:06 podman.localdomain audit: NETFILTER_CFG table=nat family=2 entries=91
```
**Describe the results you expected:**
The container starts and stops without any issue on every cycle.
**Additional information you deem important (e.g. issue happens only occasionally):**
**Output of `podman version`:**

```
# podman version
Version: 0.5.2-dev
Go Version: go1.10.1
OS/Arch: linux/amd64
```
**Output of `podman info`:**

```
host:
  MemFree: 17688948736
  MemTotal: 33667493888
  SwapFree: 0
  SwapTotal: 0
  arch: amd64
  cpus: 8
  hostname: podman.localdomain
  kernel: 4.16.5-300.fc28.x86_64
  os: linux
  uptime: 10h 3m 7.91s (Approximately 0.42 days)
insecure registries:
  registries: []
registries:
  registries:
  - docker.io
  - registry.fedoraproject.org
  - quay.io
  - registry.access.redhat.com
store:
  ContainerStore:
    number: 4
  GraphDriverName: overlay
  GraphOptions:
  - overlay.override_kernel_check=true
  GraphRoot: /var/lib/containers/storage
  GraphStatus:
    Backing Filesystem: xfs
    Native Overlay Diff: "true"
    Supports d_type: "true"
  ImageStore:
    number: 2
  RunRoot: /var/run/containers/storage
```
**Additional environment details (AWS, VirtualBox, physical, etc.):**
* physical
* Fedora 28