These are my notes on configuring services with Podman on RHEL and related OSes with SELinux enabled, using a compute instance in Oracle Cloud running Oracle Linux 8. The information presented below is readily available elsewhere – see references – but the intent of this opus is to condense it into palatable chunks that serve as a reasonably quick answer to the question “How do I get this container running on my instance?”, without spending hours reading pages and pages of documentation.

System configuration

To install podman, follow the documentation for your OS. For Oracle Linux this boils down to1:

sudo dnf module install container-tools:ol8
sudo dnf install podman-docker
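With podman-docker installed, the docker command becomes a thin shim over podman. A quick sanity check (assuming both packages installed successfully) is to compare versions:

```shell
# Both commands are served by podman, so the reported versions should match
podman --version
docker --version
```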

The second package is not strictly necessary, but it is handy to have the docker command for compatibility.

We will be running containers rootless, as a regular user, and therefore we need to allow processes launched by our user to persist after logout2:

sudo loginctl enable-linger opc
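To confirm that lingering took effect, a check along these lines should work:

```shell
# Shows Linger=yes once enable-linger has been applied
loginctl show-user --property=Linger opc
```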

While we’re at it, let’s also set the timezone, so that logs make more sense:

timedatectl list-timezones
sudo timedatectl set-timezone America/Los_Angeles


If SELinux is in enforcing mode (as it should be), a few things need to be done:

  1. For systemd to be able to manage containers, add the container_manage_cgroup permission3:
    sudo setsebool -P container_manage_cgroup on
  2. Allow podman to relabel the content of the directories to be mapped into the container4. We’ll discuss this in the next section.
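The boolean (and the current SELinux mode) can be verified with:

```shell
getenforce                            # expect: Enforcing
getsebool container_manage_cgroup     # expect: container_manage_cgroup --> on
```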

Podman and systemd

As an example, we’ll set up two containers:

  1. An awesome uptime monitor, “Uptime Kuma”, which runs services as root inside the container
  2. A “Unifi Controller” container, which allows specifying the UID and GID of the user to run the container as.

Uptime Kuma

This is the suggested docker run command from the project’s page:

docker run \
    -d \
    --restart=always \
    -p 3001:3001 \
    -v uptime-kuma:/app/data \
    --name uptime-kuma \
    louislam/uptime-kuma:1

We want to transform it somewhat:

  1. We don’t want to launch the container immediately, just create it: we only need it to generate the systemd files, and we’ll destroy it afterwards.
  2. We also don’t want to specify any restart policy, this will be handled by systemd.
  3. Lastly, we need to label the container appropriately to facilitate auto-updates.
  4. We probably don’t want a docker volume, and instead, we’ll use a folder in the current user’s home.

    mkdir ~/uptime-kuma
    podman create \
        --label "io.containers.autoupdate=registry" \
        --name uptime-kuma \
        -p 3001:3001 \
        -v /home/opc/uptime-kuma:/app/data:Z \
        docker.io/louislam/uptime-kuma:1


    1. Note the Z attribute added to the mount. It tells podman to re-label the content of the directory to match the label inside the container; otherwise, the container won’t be able to access the mount. The options are a comma-separated list, so if you are already passing some options, such as -v /home:/mnt/readonly:ro, you would just add it like so: -v /home:/mnt/readonly:ro,Z4
    2. We are labeling the container with the io.containers.autoupdate=registry flag to tell podman whether and how to update it.

The container now exists, but is not running. Let’s create systemd service description files, enable, and start the container:

mkdir -p ~/.config/systemd/user
podman generate systemd \
    --new \
    --name uptime-kuma \
    --restart-policy=always \
    > ~/.config/systemd/user/container-uptime-kuma.service
podman rm uptime-kuma

systemctl --user enable container-uptime-kuma.service
systemctl --user start container-uptime-kuma.service
systemctl --user status container-uptime-kuma.service

A few things to note here:

  • The --new5 flag. With that flag, the container will be created when the service starts and destroyed when the service stops. We want this to facilitate automatic updates when the upstream image changes. Without it, systemd would expect the container to already exist and would only tell podman to start and stop it; there would be no way to make it start from a new image.
  • The default location where systemd expects user service definitions is ~/.config/systemd/user/. It may be different on your OS.
  • Pass --user argument to all systemd calls. If you forget, it will ask you for a root password; this will serve as a reminder.
  • The default timeout for start and stop seems to be 70 seconds, and if --stop-timeout argument is specified – it’s 60 + whatever is specified, which is weird. So we don’t specify any.
  • Enabling the service creates a symlink for systemd to start it automatically on system start, and then we start it in-place.
  • Delete the container before starting the service: the systemd wrapper will attempt to create a new one with the same name.
  • Do not start/stop the container with podman anymore – use systemctl. Otherwise, systemctl will get confused.
  • The status command is purely informational.
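If anything goes wrong, the wrapped container’s logs end up in the user journal and can be inspected the usual systemd way:

```shell
# Follow logs of the user service (add --since "10 min ago" to narrow down)
journalctl --user -u container-uptime-kuma.service -f
```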

Once the container has started, look at the contents of the ~/uptime-kuma folder:

$ ls -lZ uptime-kuma
total 11884
-rwxr-xr-x. 1 opc opc system_u:object_r:container_file_t:s0:c30,c974 4681728 Dec 25 20:58 kuma.db
-rwxr-xr-x. 1 opc opc system_u:object_r:container_file_t:s0:c30,c974   61440 Dec 22 00:22 kuma.db.bak0
-rwxr-xr-x. 1 opc opc system_u:object_r:container_file_t:s0:c30,c974 3043328 Dec 24 12:07 kuma.db.bak20221224200743
  1. The processes inside Uptime Kuma run as root. However, the owner of these files on the host is the current user – opc. That is not surprising, of course. If we look from the container’s point of view, we will see those files as owned by root:

     $ podman unshare ls -l uptime-kuma
     total 11900
     -rwxr-xr-x. 1 root root 4698112 Dec 25 21:23 kuma.db
     -rwxr-xr-x. 1 root root   61440 Dec 22 00:22 kuma.db.bak0
     -rwxr-xr-x. 1 root root 3043328 Dec 24 12:07 kuma.db.bak20221224200743

    podman unshare is a handy little utility for running commands in the modified namespace. We can use it to look up the mapping in the /proc/self/uid_map file:

     $ id opc
     uid=1000(opc) gid=1000(opc) groups=1000(opc),4(adm),190(systemd-journal)
     $ podman unshare cat /proc/self/uid_map
           0       1000          1
           1     100000      65536

    The user 0 is mapped to us (user 1000), and users between 1 and 65536 are mapped to 100000 + userID - 1. We’ll see a better illustration in the next chapter, Unifi Controller.

    So, is it ok to run the stuff inside the container as root? See an excellent discussion here6.

  2. Note the container_file_t label that was added by podman as instructed by the Z flag.
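The mapping arithmetic from /proc/self/uid_map can be sketched in plain shell. The numbers below are taken from the map shown above; the container UID is a hypothetical example value:

```shell
# Each uid_map line is: <container_start> <host_start> <length>.
# A container UID in [container_start, container_start + length) maps to
#   host_uid = host_start + (container_uid - container_start)
container_start=1
host_start=100000
container_uid=1000            # hypothetical UID inside the container
host_uid=$(( host_start + container_uid - container_start ))
echo "$host_uid"              # prints 100999
```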

Unifi Controller

Now let’s do the same thing with the unifi-controller container, with minor tweaks. Just like before, here is the recommended command line from the Docker Hub page:

docker run -d \
  --name=unifi-controller \
  -e PUID=1000 \
  -e PGID=1000 \
  -e TZ=Europe/London \
  -e MEM_LIMIT=1024 \
  -e MEM_STARTUP=1024 \
  -p 8443:8443 \
  -p 3478:3478/udp \
  -p 10001:10001/udp \
  -p 8080:8080 \
  -p 1900:1900/udp \
  -p 8843:8843 \
  -p 8880:8880 \
  -p 6789:6789 \
  -p 5514:5514/udp  \
  -v <path to data>:/config \
  --restart unless-stopped \
  lscr.io/linuxserver/unifi-controller:latest

Here the container developers expect the container to start as root, and then launch the actual application using the passed PUID and PGID of 1000 – which is the same as our default opc user. Or is it?

Transforming the command (label for updates; create, not run; tweak timezone, and add Z flag to mounts) and starting the service:

mkdir ~/unifi
podman create \
  --label "io.containers.autoupdate=registry" \
  --name=unifi-controller \
  -e PUID=1000 \
  -e PGID=1000 \
  -e TZ=America/Los_Angeles \
  -e MEM_LIMIT=2048 \
  -e MEM_STARTUP=1024  \
  -p 8443:8443 \
  -p 3478:3478/udp \
  -p 10001:10001/udp \
  -p 8080:8080 \
  -p 1900:1900/udp  \
  -p 8843:8843  \
  -p 8880:8880  \
  -p 6789:6789  \
  -p 5514:5514/udp  \
  -v /home/opc/unifi:/config:Z \
  lscr.io/linuxserver/unifi-controller:latest

podman generate systemd --new --name unifi-controller --restart-policy=always > ~/.config/systemd/user/container-unifi-controller.service
podman rm unifi-controller

systemctl --user enable container-unifi-controller.service
systemctl --user start container-unifi-controller.service
systemctl --user status container-unifi-controller.service

We expect the data files created by the container to be owned by user 1000, as understood by the container. Indeed, the files are owned by user 1000, just as we requested via the container environment variables:

$ podman unshare ls -ln unifi
total 0
drwxr-xr-x. 6 1000 1000 176 Dec 25 21:07 data
drwxr-xr-x. 3 1000 1000  77 Dec 21 23:34 logs
drwxr-xr-x. 3 1000 1000  62 Dec 25 21:07 run

Which user actually owns them on our host?

$ ls -ln unifi
total 0
drwxr-xr-x. 6 100999 100999 176 Dec 25 21:07 data
drwxr-xr-x. 3 100999 100999  77 Dec 21 23:34 logs
drwxr-xr-x. 3 100999 100999  62 Dec 25 21:07 run

Reasonable, according to the mapping:

$ podman unshare cat /proc/self/uid_map
         0       1000          1
         1     100000      65536


With all the prerequisites in place, all that is left to do is enable and start the auto-update timer:

systemctl --user enable podman-auto-update.timer
systemctl --user start podman-auto-update.timer
systemctl --user status podman-auto-update.timer

By default, the version check is performed daily; if a new container image is available in the registry, it will be downloaded and the container service restarted. See ~/.config/systemd/user/ for customizations.
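Before trusting the timer, the update check can be exercised by hand; on reasonably recent podman versions, --dry-run reports which containers would be updated without touching them:

```shell
# Check for pending image updates without applying them
podman auto-update --dry-run
# Apply any pending updates immediately instead of waiting for the timer
podman auto-update
```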


Firewall configuration

In the context of the chosen examples, the next logical step is to allow the mapped ports through the firewall – some of them permanently, some temporarily until further application configuration is done (such as configuring a Cloudflare Tunnel for Uptime Kuma, and registering the Unifi Controller with a UniFi cloud account).

Oracle Linux has firewalld enabled by default, so this task is trivial:

sudo firewall-cmd \
    --permanent \
    --add-port=8080/tcp

sudo firewall-cmd \
    --add-port=8443/tcp \
    --add-port=8080/tcp \
    --add-port=3478/udp
Instance hostname

Edit /etc/oci-hostname.conf and set PRESERVE_HOSTINFO=2. Then set the hostname:

$ sudo hostnamectl set-hostname MYHOSTNAME


Cheat sheet

# Install podman
sudo dnf module install container-tools:ol8
# Enable linger
sudo loginctl enable-linger opc
# Set timezone
sudo timedatectl set-timezone America/Los_Angeles
# Allow systemd to manage containers
sudo setsebool -P container_manage_cgroup on

# Create a container (label, Z)
podman create \
       --label "io.containers.autoupdate=registry" \
       --name mycontainer \
       ... \
       -v xxx:yyy:Z \
       registry/image:tag

# Generate systemd config
mkdir -p ~/.config/systemd/user
podman generate systemd \
  --new \
  --name mycontainer \
  --restart-policy=always \
  > ~/.config/systemd/user/container-mycontainer.service

# Remove the container 
podman rm mycontainer

# Enable, start, and check the container service:
systemctl --user enable container-mycontainer.service
systemctl --user start container-mycontainer.service
systemctl --user status container-mycontainer.service
# Enable and start auto-updater.
systemctl --user enable podman-auto-update.timer
systemctl --user start podman-auto-update.timer


  1. Installing Podman

  2. Configure user space processes to continue after logout

  3. Setting SELinux Permissions for Container

  4. Dealing with user namespaces and SELinux on rootless containers

  5. podman generate systemd --new

  6. Running rootless Podman as a non-root user