
CGroup Handling Enhancements #507


Closed · wants to merge 8 commits

Conversation

@mheon (Member) commented Mar 16, 2018

Add validation for CGroup parents passed into libpod. If we are using the systemd CGroup manager, the basename must end in .slice; if we are using cgroupfs, the path cannot be a systemd .slice cgroup. This is based on the CRI-O validation code located at https://github.com/kubernetes-incubator/cri-o/blob/master/server/sandbox_run.go#L379-L400

Also, add our CGroup path to the OCI spec so it is passed into runc. When the container is deleted, runc will clean up the configured CGroups, solving our problem of CGroup scopes being left around after containers exit.

Closes: #497
Closes: #496
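
For reference, a minimal sketch of what wiring the CGroup path into the generated OCI spec looks like using the runtime-spec Go types; the parent value and the "libpod-<id>" naming here are illustrative, not the exact libpod code:

package main

import (
	"fmt"

	spec "github.com/opencontainers/runtime-spec/specs-go"
)

// setCgroupsPath fills in Linux.CgroupsPath on the OCI spec so that runc
// creates the container's cgroups under the given parent and removes them
// again on delete. The path layout here is illustrative.
func setCgroupsPath(s *spec.Spec, cgroupParent, ctrID string) {
	if s.Linux == nil {
		s.Linux = &spec.Linux{}
	}
	s.Linux.CgroupsPath = fmt.Sprintf("%s/libpod-%s", cgroupParent, ctrID)
}

func main() {
	s := &spec.Spec{Version: spec.Version}
	setCgroupsPath(s, "/libpod_parent", "deadbeef")
	fmt.Println(s.Linux.CgroupsPath) // /libpod_parent/libpod-deadbeef
}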

@mheon (Member, Author) commented Mar 16, 2018

@baude If you get a chance, can you validate that this doesn't break our stats code? I'm pretty sure it's still working, but you've worked with it more than I have.

@mheon force-pushed the cgroup_handling branch from 49215b6 to 02cdb6f on March 16, 2018 02:55
case SystemdCgroupsManager:
if ctr.config.CgroupParent == "" {
ctr.config.CgroupParent = SystemdDefaultCgroupParent
} else if len(ctr.config.CgroupParent) < 6 || !strings.HasSuffix(path.Base(ctr.config.CgroupParent), ".slice") {
Review comment (Member):

I know just enough about cgroups to be dangerous. Should ".slice" here be "system.slice"? Couldn't a string like cgroups.slice satisfy this check? I'm not sure you'd want that. @rhatdan?

Reply (Member, Author):

I'm pretty sure that anything with .slice is systemd-managed, which is why we want to avoid those when using cgroupfs - but I'm not 100% sure. For what it's worth, this code is basically identical to CRI-O's validation.

Reply (Member):

Yes .slice means systemd.
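
For context, a minimal sketch of both validation branches being discussed, mirroring the CRI-O logic; the error wording and the exact cgroupfs check are assumptions, not the final libpod code:

package main

import (
	"errors"
	"fmt"
	"path"
	"strings"
)

// SystemdDefaultCgroupParent matches the default used in this PR.
const SystemdDefaultCgroupParent = "system.slice"

// validateCgroupParent sketches the CRI-O-style checks: with the systemd
// manager the parent's basename must end in ".slice"; with cgroupfs a
// systemd-style ".slice" parent is rejected.
func validateCgroupParent(parent string, systemdManager bool) (string, error) {
	if systemdManager {
		if parent == "" {
			return SystemdDefaultCgroupParent, nil
		}
		if len(parent) < 6 || !strings.HasSuffix(path.Base(parent), ".slice") {
			return "", errors.New("systemd cgroup manager requires a cgroup parent ending in .slice")
		}
		return parent, nil
	}
	if strings.HasSuffix(path.Base(parent), ".slice") {
		return "", errors.New("cgroupfs cgroup manager cannot use a .slice cgroup parent")
	}
	return parent, nil
}

func main() {
	// A *.service parent is rejected by the systemd branch, which matches
	// the behaviour discussed further down in this thread.
	_, err := validateCgroupParent("/machine.slice/redis.service", true)
	fmt.Println(err)
}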

@mheon (Member, Author) commented Mar 16, 2018

Seeing some issues with the cgroup parent and with a 'podman run' flag. Chasing them down.

@mheon (Member, Author) commented Mar 16, 2018

Fixed CGroup parent. Unclear if the 'podman run' issue is actually a bug or a flake.

@mheon (Member, Author) commented Mar 16, 2018

bot, retest this please

2 similar comments ("bot, retest this please") followed on Mar 16, 2018.

@mheon force-pushed the cgroup_handling branch from 889fd39 to 7a76579 on March 16, 2018 18:15
@nathwill (Contributor) commented Mar 16, 2018

is this going to break running a container under podman from a systemd unit like the one below, or would we just need to delegate differently?

[Unit]
Description=CRI-O container: %p

[Service]
Type=forking
ExecStartPre=-/bin/podman stop %p
ExecStartPre=-/bin/podman rm %p
ExecStart=/bin/podman run --detach --net=host --volume=/data:/data --log-driver=json-file --log-opt=path=/var/log/redis.log \
    --cidfile=/var/run/%p.crio --cgroup-parent=/machine.slice/%p.service \
    --name=%p redis:3.2 
ExecStop=/bin/podman stop %p
ExecStop=/bin/podman rm %p
Restart=always
Slice=machine.slice

[Install]
WantedBy=multi-user.target

i've been using --cgroup-parent=/machine.slice/%p.service to make sure all contained processes end up in the service unit's cgroup for tracking (e.g. systemctl status), and for ensuring full termination on container stop.

it's working pretty well so far:

[root@default-centos-7 ~]# systemd-cgls --no-pager /machine.slice/redis.service
/machine.slice/redis.service:
└─libpod-conmon-3594f3290709d0a764d307c4d4f95e62f17e83fd9c2ff2b3717209100ec44d37
  ├─3868 /usr/libexec/podman/conmon -c 3594f3290709d0a764d307c4d4f95e62f17e83fd9c2ff2b3717209100ec44d37 -u 3594f3290709d0a764d307c4d4f95e6...
  └─3594f3290709d0a764d307c4d4f95e62f17e83fd9c2ff2b3717209100ec44d37
    └─3877 redis-server *:6379

i'm using machine.slice in anticipation of hooks support landing, so units like the above can be used with oci-register-machine to fully integrate the contained processes as services/machines with the host, and to apply resource constraints that prioritize system stuff (system.slice) over "workload" (machine.slice) services.

@mheon (Member, Author) commented Mar 16, 2018

@nathwill I've been digging around under the hood, and I'm fairly certain our cgroup handling for systemd-managed cgroups is very wrong right now - we're inserting ourselves into their hierarchy without telling systemd. It seems to be working, but it is definitely not the right way of doing things and is very fragile.

This patch as written would break using the systemd cgroup hierarchy in an effort to fix cgroupfs, but I think we can get both working with a bit more code. I'll mark this WIP and take a stab at it on Monday.

Be aware that you'll probably need to modify the configuration file or set a flag to use the systemd cgroup manager after these changes land, though. The config change should be a one-liner, but we should call it out in the manpage/readme.

@mheon changed the title from "CGroup Handling Enhancements" to "WIP: CGroup Handling Enhancements" on Mar 16, 2018
@mheon (Member, Author) commented Mar 21, 2018

An update on this: I've enabled proper CGroup handling via the runc systemd cgroup manager, based on the CRI-O code for the same. However, CRI-O places Conmon and the container's own processes in two separate CGroups. This doesn't seem like a particularly nice way of doing things (we get everything under the same CGroup with cgroupfs), so I'm going to talk to the CRI-O devs to see if there is a reason things are done that way.

@mheon (Member, Author) commented Mar 21, 2018

bot, retest this please

1 similar comment ("bot, retest this please") followed on Mar 21, 2018.

@nathwill (Contributor) commented Mar 22, 2018

fwiw, i tested this out, and things seem to work as intended 👏.

i was also able to get it working from a systemd service unit, but wanted to share a couple of observations on the experience.

cgroupfs: this PR doesn't seem to have broken the method i had been using above (i.e. using cgroupfs as the cgroup-manager and specifying the cgroup-parent as e.g. /machine.slice/redis.service). this seems to work quite well, though it sounds as though perhaps it's not intended to? (more questions on this at the end)

systemd: when specifying systemd as the cgroup-manager, i'm not able to specify the systemd-generated service cgroup (e.g. /machine.slice/redis.service) as the cgroup-parent, since --cgroup-manager=systemd only takes --cgroup-parent arguments like xxxxx.slice.

when specifying just a slice as the parent, podman creates libpod-*.scope cgroups as children of the specified slice, rather than of the systemd-generated service cgroup, as below:

├─machine.slice
│ ├─libpod-0411a2a82311fbc65b137ca550d5f7e6b6863bc734759be2f515b0b67062d49a.scope
│ │ └─19587 redis-server *:637
│ ├─libpod-conmon-0411a2a82311fbc65b137ca550d5f7e6b6863bc734759be2f515b0b67062d49a.scope
│ │ └─19579 /usr/libexec/podman/conmon -s -c 0411a2a82311fbc65b137ca550d5f7e6b6863bc734759be2f515b0b67062d49a -u 0411a2a82311fbc65b137ca550d5
│ └─redis.service
│   └─19564 /bin/podman --cgroup-manager=systemd run --net=host --volume=/data:/data --log-driver=json-file --log-opt=path=/var/log/redis.log

as a result, the conmon/container processes are untracked by the service unit, and in order for the service to start successfully, you have to do one of:

  • use Type=oneshot & RemainAfterExit=yes in the service unit
  • skip --detach so that podman run is tracked as the MainPid
  • move podman run to ExecStartPre and use podman wait as the ExecStart.

which is more like the docker run experience i'm trying to get away from.

in short, --cgroup-manager=systemd actually ends up feeling like it integrates less with systemd than --cgroup-manager=cgroupfs. i know this is probably down to runc rather than podman, but thought i'd share my impressions in case they're useful.

my preference given the above would be to continue using cgroupfs. if the sole concern with cgroupfs is:

we're inserting ourself into their hierarchy without telling systemd

would specifying Delegate=yes in the service unit reasonably be expected to resolve the risk of using cgroupfs?

thanks in advance for your time and consideration! 🙏

@mheon (Member, Author) commented Mar 22, 2018

@nathwill I was talking with some of the CRI-O team, and it appears we're missing some CGroup generation logic. We should be grouping the two scopes (conmon and the rest of the container) under a service we create under the parent (an individual .service for containers outside a pod, or a shared one for containers in a pod). The current implementation I pushed is missing that step (creating the service). Would this work better for what you're doing? You'd lose the ability to name the .service you drop in machine.slice (we'd presumably generate it procedurally), but it would otherwise act fairly similarly to what you're doing now with cgroupfs.
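
Roughly, the grouping being described would produce a hierarchy like this (names illustrative, not the final naming scheme):

machine.slice
└─libpod-<generated>.service          (created per container, or shared per pod)
  ├─libpod-conmon-<ctr id>.scope      (conmon)
  └─libpod-<ctr id>.scope             (the container's own processes)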

@mheon (Member, Author) commented Mar 22, 2018

Also, this still has some test failures related to kernel memory limits, specifically on CentOS. Need to look into that.

@nathwill (Contributor) commented Mar 22, 2018

You'd lose the ability to name the .service you drop in machine.slice

That would definitely break how I'm using it, as the service cgroup name is prescribed by systemd from the unit name, and is relied on heavily by systemd for tracking the service processes.

i totally understand the need for grouping in the pod use case, and wouldn't expect this to work for grouping unit file-launched containers into the same pod, but it'd be super helpful to have an option to specify the service cgroup name for non-pod containers, so that podman can still run containers from service units while all the resulting container processes stay tracked by systemd.

if the plan is to put all containers into pods by default, could the service cgroup be more directly based on the pod name when provided (e.g. podman --cgroup-manager=systemd run --pod redis --cgroup-parent=machine.slice redis:3.2 => /machine.slice/redis.service cgroup path)?


// SystemdDefaultCgroupParent is the cgroup parent for the systemd cgroup
// manager in libpod
const SystemdDefaultCgroupParent = "system.slice"
@nathwill (Contributor) commented Mar 22, 2018:

any particular reason for using system.slice here? my understanding from the reading i've done is that machine.slice is the preferred target for containers ("machines" here meaning processes launched from rootfs tarballs like OCI or nspawn supported formats, rather than normal "contained" system/user processes).

my worry with a system.slice default is namespace conflicts with system services running under system.slice: e.g. if my pod scheduler is running etcd.service on the host and receives an etcd pod as part of an application deployment, do they both end up under /system.slice/etcd.service? since machine.slice is unpopulated by default and intended as a home for containers, it seems like a preferable default cgroup parent, unless i'm missing something?

edit: i should add, the above is just a matter of curiosity, as i've seen several container runtimes do this and haven't understood the reasoning for ignoring machine.slice.

@mheon (Member, Author) commented Mar 22, 2018

Hm. We can probably wire up an option to allow setting the service name (or potentially detect if there is a .service in the cgroup parent?). I'll look into this more.

@rhatdan (Member) commented Mar 23, 2018

@mrunalp PTAL

@rh-atomic-bot (Collaborator):

☔ The latest upstream changes (presumably 3f5da4d) made this pull request unmergeable. Please resolve the merge conflicts.

@nathwill (Contributor) commented Apr 5, 2018

fwiw, in the latest podman, using --conmon-pidfile when running podman containers as systemd services is working quite well, even without specifying --cgroup-parent. hopefully that simplifies things a bit :)

@mheon force-pushed the cgroup_handling branch from b0d66e6 to 3018f75 on April 9, 2018 19:50
@rh-atomic-bot (Collaborator):

☔ The latest upstream changes (presumably 39a7a77) made this pull request unmergeable. Please resolve the merge conflicts.

@rhatdan (Member) commented Apr 30, 2018

@mheon needs a rebase.

@mheon (Member, Author) commented May 7, 2018

Alright. This should fix cgroupfs CGroup handling, which is increasingly becoming a problem. Systemd-based cgroups are still broken, but can be fixed later. I'm going to try and push this through today.

@mheon force-pushed the cgroup_handling branch from d5409b7 to 88a74f8 on May 9, 2018 20:33
@mheon (Member, Author) commented May 9, 2018

Cleanup code implemented. Assuming tests are green, this is finally good for review and merge.

Systemd cgroup manager code is next.

@mheon (Member, Author) commented May 9, 2018

Alright, this is finally ready. @rhatdan PTAL

@TomSweeneyRedHat (Member) left a comment:

LGTM, but would like @rhatdan to take a look see.

@@ -23,6 +23,9 @@ has the capability to debug pods/images created by crio.
**--help, -h**
Print usage statement

**--cgroup-manager**
Review comment (Member):

A blog showing how this works would probably be nice once this hits the streets.

@aalba6675 commented May 10, 2018

I am still seeing doubled/tripled/quadrupled cgroup scopes with a container + bind mounts.

By the second or third start/stop cycle there are 86(!) mounts of this form but only 4 are distinct mounts.
cgroup.zip

mount reports 86 cgroup mounts on paths of the form

**/ctr/**
**/ctr/**/ctr/**
**/ctr/**/ctr/**/ctr/**
**/ctr/**/ctr/**/ctr/**/ctr/**

Of these 86 mounts only 4 are unique.

cgroup on /sys/fs/cgroup/systemd/libpod_parent/libpod-cd8be22a52efaed7e2790d2eb3421c00542c3eb9763bfe715c3ad23647c419e0/ctr/libpod_parent/libpod-cd8be22a52efaed7e2790d2eb3421c00542c3eb9763bfe715c3ad23647c419e0/ctr/libpod_parent/libpod-cd8be22a52efaed7e2790d2eb3421c00542c3eb9763bfe715c3ad23647c419e0/ctr/libpod_parent/libpod-cd8be22a52efaed7e2790d2eb3421c00542c3eb9763bfe715c3ad23647c419e0/ctr type cgroup (rw,nosuid,nodev,noexec,relatime,seclabel,xattr,name=systemd)
cgroup on /sys/fs/cgroup/systemd/libpod_parent/libpod-cd8be22a52efaed7e2790d2eb3421c00542c3eb9763bfe715c3ad23647c419e0/ctr/libpod_parent/libpod-cd8be22a52efaed7e2790d2eb3421c00542c3eb9763bfe715c3ad23647c419e0/ctr/libpod_parent/libpod-cd8be22a52efaed7e2790d2eb3421c00542c3eb9763bfe715c3ad23647c419e0/ctr type cgroup (rw,nosuid,nodev,noexec,relatime,seclabel,xattr,name=systemd)
cgroup on /sys/fs/cgroup/systemd/libpod_parent/libpod-cd8be22a52efaed7e2790d2eb3421c00542c3eb9763bfe715c3ad23647c419e0/ctr/libpod_parent/libpod-cd8be22a52efaed7e2790d2eb3421c00542c3eb9763bfe715c3ad23647c419e0/ctr type cgroup (rw,nosuid,nodev,noexec,relatime,seclabel,xattr,name=systemd)
cgroup on /sys/fs/cgroup/systemd/libpod_parent/libpod-cd8be22a52efaed7e2790d2eb3421c00542c3eb9763bfe715c3ad23647c419e0/ctr type cgroup (rw,nosuid,nodev,noexec,relatime,seclabel,xattr,name=systemd)

@mheon (Member, Author) commented May 10, 2018

@aalba6675 I strongly suspect the bug there (and the required fix) are in oci-systemd-hook. Our internal CGroup handling follows a strict pattern for generating supported paths and never appends paths onto themselves like this. Additionally, we don't directly mount CGroups into containers; that must be requested by something else (and the only thing I know of that does this is oci-systemd-hook).
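
For illustration, the generated paths follow a fixed two-level pattern like the one visible in the lscgroup output later in this thread; this is a sketch with a hypothetical helper name, not the actual libpod function:

package main

import "fmt"

// cgroupPaths sketches the fixed layout: a shared parent, one cgroup per
// container, and separate "conmon" and "ctr" children. Nothing here ever
// re-appends a generated path onto itself.
func cgroupPaths(ctrID string) (ctr, conmon string) {
	base := fmt.Sprintf("/libpod_parent/libpod-%s", ctrID)
	return base + "/ctr", base + "/conmon"
}

func main() {
	ctr, conmon := cgroupPaths("cd8be22a")
	fmt.Println(ctr)    // /libpod_parent/libpod-cd8be22a/ctr
	fmt.Println(conmon) // /libpod_parent/libpod-cd8be22a/conmon
}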

@rhatdan (Member) commented May 10, 2018

@aalba6675 could you try without using a container that runs init, and see if the issue goes away?

I believe there is a bug in runc's OCI hooks handling: it is not calling oci-systemd-hook in the post-stop phase. This is why the cgroups are being left behind.

@aalba6675 commented May 10, 2018

@rhatdan - yes, it goes away: running a /bin/bash container with the fedora:28 image, there is no cgroup cruft left behind.

BTW - sorry for spamming the projectatomic repos with all these issues. I have a few non-orchestrated workflows (i.e. not suitable for k8s) which I'm trying to move to #nobigfatdaemons; these workflows require systemd and bind mounts.

There are also no mounts of type cgroup on /sys/fs/cgroup/systemd/libpod_parent/....

When the container is stopped, all the libpod cgroups are cleaned up.

Running a /bin/bash fedora:28 container:

## nothing in /sys/fs/cgroup/systemd/libpod_parent
## during podman run
$ mount | grep cgroup 
tmpfs on /sys/fs/cgroup type tmpfs (ro,nosuid,nodev,noexec,seclabel,mode=755)
cgroup2 on /sys/fs/cgroup/unified type cgroup2 (rw,nosuid,nodev,noexec,relatime,seclabel,nsdelegate)
cgroup on /sys/fs/cgroup/systemd type cgroup (rw,nosuid,nodev,noexec,relatime,seclabel,xattr,name=systemd)
cgroup on /sys/fs/cgroup/freezer type cgroup (rw,nosuid,nodev,noexec,relatime,seclabel,freezer)
cgroup on /sys/fs/cgroup/cpuset type cgroup (rw,nosuid,nodev,noexec,relatime,seclabel,cpuset)
cgroup on /sys/fs/cgroup/hugetlb type cgroup (rw,nosuid,nodev,noexec,relatime,seclabel,hugetlb)
cgroup on /sys/fs/cgroup/net_cls,net_prio type cgroup (rw,nosuid,nodev,noexec,relatime,seclabel,net_cls,net_prio)
cgroup on /sys/fs/cgroup/cpu,cpuacct type cgroup (rw,nosuid,nodev,noexec,relatime,seclabel,cpu,cpuacct)
cgroup on /sys/fs/cgroup/blkio type cgroup (rw,nosuid,nodev,noexec,relatime,seclabel,blkio)
cgroup on /sys/fs/cgroup/pids type cgroup (rw,nosuid,nodev,noexec,relatime,seclabel,pids)
cgroup on /sys/fs/cgroup/devices type cgroup (rw,nosuid,nodev,noexec,relatime,seclabel,devices)
cgroup on /sys/fs/cgroup/memory type cgroup (rw,nosuid,nodev,noexec,relatime,seclabel,memory)
cgroup on /sys/fs/cgroup/perf_event type cgroup (rw,nosuid,nodev,noexec,relatime,seclabel,perf_event)

lscgroup during podman run:

lscgroup | grep libpod
freezer:/libpod_parent
freezer:/libpod_parent/libpod-e4e3499ef3ee298b9cf5fdf66ee37e4866eb810e2eb648652b68366fc8835fd1
freezer:/libpod_parent/libpod-e4e3499ef3ee298b9cf5fdf66ee37e4866eb810e2eb648652b68366fc8835fd1/conmon
freezer:/libpod_parent/libpod-e4e3499ef3ee298b9cf5fdf66ee37e4866eb810e2eb648652b68366fc8835fd1/ctr
cpuset:/libpod_parent
cpuset:/libpod_parent/libpod-e4e3499ef3ee298b9cf5fdf66ee37e4866eb810e2eb648652b68366fc8835fd1
cpuset:/libpod_parent/libpod-e4e3499ef3ee298b9cf5fdf66ee37e4866eb810e2eb648652b68366fc8835fd1/conmon
cpuset:/libpod_parent/libpod-e4e3499ef3ee298b9cf5fdf66ee37e4866eb810e2eb648652b68366fc8835fd1/ctr
hugetlb:/libpod_parent
hugetlb:/libpod_parent/libpod-e4e3499ef3ee298b9cf5fdf66ee37e4866eb810e2eb648652b68366fc8835fd1
hugetlb:/libpod_parent/libpod-e4e3499ef3ee298b9cf5fdf66ee37e4866eb810e2eb648652b68366fc8835fd1/conmon
hugetlb:/libpod_parent/libpod-e4e3499ef3ee298b9cf5fdf66ee37e4866eb810e2eb648652b68366fc8835fd1/ctr
net_cls,net_prio:/libpod_parent
net_cls,net_prio:/libpod_parent/libpod-e4e3499ef3ee298b9cf5fdf66ee37e4866eb810e2eb648652b68366fc8835fd1
net_cls,net_prio:/libpod_parent/libpod-e4e3499ef3ee298b9cf5fdf66ee37e4866eb810e2eb648652b68366fc8835fd1/conmon
net_cls,net_prio:/libpod_parent/libpod-e4e3499ef3ee298b9cf5fdf66ee37e4866eb810e2eb648652b68366fc8835fd1/ctr
cpu,cpuacct:/libpod_parent
cpu,cpuacct:/libpod_parent/libpod-e4e3499ef3ee298b9cf5fdf66ee37e4866eb810e2eb648652b68366fc8835fd1
cpu,cpuacct:/libpod_parent/libpod-e4e3499ef3ee298b9cf5fdf66ee37e4866eb810e2eb648652b68366fc8835fd1/conmon
cpu,cpuacct:/libpod_parent/libpod-e4e3499ef3ee298b9cf5fdf66ee37e4866eb810e2eb648652b68366fc8835fd1/ctr
blkio:/libpod_parent
blkio:/libpod_parent/libpod-e4e3499ef3ee298b9cf5fdf66ee37e4866eb810e2eb648652b68366fc8835fd1
blkio:/libpod_parent/libpod-e4e3499ef3ee298b9cf5fdf66ee37e4866eb810e2eb648652b68366fc8835fd1/conmon
blkio:/libpod_parent/libpod-e4e3499ef3ee298b9cf5fdf66ee37e4866eb810e2eb648652b68366fc8835fd1/ctr
pids:/libpod_parent
pids:/libpod_parent/libpod-e4e3499ef3ee298b9cf5fdf66ee37e4866eb810e2eb648652b68366fc8835fd1
pids:/libpod_parent/libpod-e4e3499ef3ee298b9cf5fdf66ee37e4866eb810e2eb648652b68366fc8835fd1/conmon
pids:/libpod_parent/libpod-e4e3499ef3ee298b9cf5fdf66ee37e4866eb810e2eb648652b68366fc8835fd1/ctr
devices:/libpod_parent
devices:/libpod_parent/libpod-e4e3499ef3ee298b9cf5fdf66ee37e4866eb810e2eb648652b68366fc8835fd1
devices:/libpod_parent/libpod-e4e3499ef3ee298b9cf5fdf66ee37e4866eb810e2eb648652b68366fc8835fd1/conmon
devices:/libpod_parent/libpod-e4e3499ef3ee298b9cf5fdf66ee37e4866eb810e2eb648652b68366fc8835fd1/ctr
memory:/libpod_parent
memory:/libpod_parent/libpod-e4e3499ef3ee298b9cf5fdf66ee37e4866eb810e2eb648652b68366fc8835fd1
memory:/libpod_parent/libpod-e4e3499ef3ee298b9cf5fdf66ee37e4866eb810e2eb648652b68366fc8835fd1/conmon
memory:/libpod_parent/libpod-e4e3499ef3ee298b9cf5fdf66ee37e4866eb810e2eb648652b68366fc8835fd1/ctr
perf_event:/libpod_parent
perf_event:/libpod_parent/libpod-e4e3499ef3ee298b9cf5fdf66ee37e4866eb810e2eb648652b68366fc8835fd1
perf_event:/libpod_parent/libpod-e4e3499ef3ee298b9cf5fdf66ee37e4866eb810e2eb648652b68366fc8835fd1/conmon
perf_event:/libpod_parent/libpod-e4e3499ef3ee298b9cf5fdf66ee37e4866eb810e2eb648652b68366fc8835fd1/ctr

@mheon (Member, Author) commented May 11, 2018

The lack of mounts is expected - we only need those for containers running systemd as PID 1 (systemd will want to manage its own cgroups within the container). Normal containers don't need to change resource constraints, so we don't need the mounts there.

This seems to confirm it's a hook, though. @rhatdan's theory that this is a post-stop hook failing to fire and clean things up makes more sense than my initial theory of a bug in the hook itself, but we can't say for sure without looking into it more.
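
For reference, cleanup hooks like oci-systemd-hook are registered in the OCI spec's poststop list, which the runtime is expected to run when the container is deleted; the hook path and arguments below are illustrative assumptions:

package main

import (
	"fmt"

	spec "github.com/opencontainers/runtime-spec/specs-go"
)

// withPoststopHook sketches how a poststop cleanup hook would be wired into
// the OCI spec. If the runtime never fires the poststop stage, whatever the
// hook set up (such as extra cgroup mounts) is left behind.
func withPoststopHook(s *spec.Spec, hookPath string) {
	if s.Hooks == nil {
		s.Hooks = &spec.Hooks{}
	}
	s.Hooks.Poststop = append(s.Hooks.Poststop, spec.Hook{
		Path: hookPath,
		Args: []string{hookPath, "poststop"},
	})
}

func main() {
	s := &spec.Spec{Version: spec.Version}
	withPoststopHook(s, "/usr/libexec/oci/hooks.d/oci-systemd-hook")
	fmt.Println(len(s.Hooks.Poststop)) // 1
}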

@baude (Member) commented May 11, 2018

LGTM

@baude (Member) commented May 11, 2018

@rh-atomic-bot r+

@rh-atomic-bot (Collaborator):

📌 Commit eb4c75a has been approved by baude

@rh-atomic-bot (Collaborator):

⌛ Testing commit eb4c75a with merge 177c27e...

rh-atomic-bot pushed 7 commits referencing this pull request on May 11, 2018, each signed off by Matthew Heon <[email protected]> (Closes: #507; Approved by: baude). One of the commit messages notes:

Until we get Systemd cgroup manager working, this will cause a validation error.
@rh-atomic-bot (Collaborator):

☀️ Test successful - status-papr
Approved by: baude
Pushing 177c27e to master...

github-actions bot locked this pull request as resolved and limited conversation to collaborators on Sep 27, 2023.