
69-dm-lvm.rules conflicts with 63-md-raid-arrays.rules #94

Closed
mrechte opened this issue Nov 7, 2022 · 1 comment


mrechte commented Nov 7, 2022

Hello,

mdadm provides a 63-md-raid-arrays.rules file (listed below) which is supposed to start the mdmonitor service. However, the service is not started, because 69-dm-lvm.rules resets the SYSTEMD_READY flag.

Thanks

cat /usr/lib/udev/rules.d/63-md-raid-arrays.rules 
# do not edit this file, it will be overwritten on update

SUBSYSTEM!="block", GOTO="md_end"

# handle md arrays
ACTION!="add|change", GOTO="md_end"
KERNEL!="md*", GOTO="md_end"

# partitions have no md/{array_state,metadata_version}, but should not
# for that reason be ignored.
ENV{DEVTYPE}=="partition", GOTO="md_ignore_state"

# container devices have a metadata version of e.g. 'external:ddf' and
# never leave state 'inactive'
ATTR{md/metadata_version}=="external:[A-Za-z]*", ATTR{md/array_state}=="inactive", GOTO="md_ignore_state"
TEST!="md/array_state", ENV{SYSTEMD_READY}="0", GOTO="md_end"
ATTR{md/array_state}=="clear*|inactive", ENV{SYSTEMD_READY}="0", GOTO="md_end"
ATTR{md/sync_action}=="reshape", ENV{RESHAPE_ACTIVE}="yes"
LABEL="md_ignore_state"

IMPORT{program}="/usr/bin/mdadm --detail --no-devices --export $devnode"
ENV{DEVTYPE}=="disk", ENV{MD_NAME}=="?*", SYMLINK+="disk/by-id/md-name-$env{MD_NAME}", OPTIONS+="string_escape=replace"
ENV{DEVTYPE}=="disk", ENV{MD_UUID}=="?*", SYMLINK+="disk/by-id/md-uuid-$env{MD_UUID}"
ENV{DEVTYPE}=="disk", ENV{MD_DEVNAME}=="?*", TAG+="systemd", SYMLINK+="md/$env{MD_DEVNAME}"
ENV{DEVTYPE}=="partition", ENV{MD_NAME}=="?*", SYMLINK+="disk/by-id/md-name-$env{MD_NAME}-part%n", OPTIONS+="string_escape=replace"
ENV{DEVTYPE}=="partition", ENV{MD_UUID}=="?*", SYMLINK+="disk/by-id/md-uuid-$env{MD_UUID}-part%n"
ENV{DEVTYPE}=="partition", ENV{MD_DEVNAME}=="*[^0-9]", SYMLINK+="md/$env{MD_DEVNAME}%n"
ENV{DEVTYPE}=="partition", ENV{MD_DEVNAME}=="*[0-9]", SYMLINK+="md/$env{MD_DEVNAME}p%n"


IMPORT{builtin}="blkid"
OPTIONS+="link_priority=100"
OPTIONS+="watch"
ENV{ID_FS_USAGE}=="filesystem|other|crypto", ENV{ID_FS_UUID_ENC}=="?*", SYMLINK+="disk/by-uuid/$env{ID_FS_UUID_ENC}"
ENV{ID_FS_USAGE}=="filesystem|other", ENV{ID_PART_ENTRY_UUID}=="?*", SYMLINK+="disk/by-partuuid/$env{ID_PART_ENTRY_UUID}"
ENV{ID_FS_USAGE}=="filesystem|other", ENV{ID_FS_LABEL_ENC}=="?*", SYMLINK+="disk/by-label/$env{ID_FS_LABEL_ENC}"

ENV{MD_LEVEL}=="raid[1-9]*", ENV{SYSTEMD_WANTS}+="mdmonitor.service"

# Tell systemd to run mdmon for our container, if we need it.
ENV{MD_LEVEL}=="raid[1-9]*", ENV{MD_CONTAINER}=="?*", PROGRAM="/usr/bin/readlink $env{MD_CONTAINER}", ENV{MD_MON_THIS}="%c"
ENV{MD_MON_THIS}=="?*", PROGRAM="/usr/bin/basename $env{MD_MON_THIS}", ENV{SYSTEMD_WANTS}+="mdmon@%c.service"
ENV{RESHAPE_ACTIVE}=="yes", PROGRAM="/usr/bin/basename $env{MD_MON_THIS}", ENV{SYSTEMD_WANTS}+="mdadm-grow-continue@%c.service"

LABEL="md_end"


prajnoha commented Feb 1, 2023

I've removed the SYSTEMD_READY variable setting in 69-dm-lvm.rules - the way we do LVM autoactivation changed some time ago and we don't actually need to handle SYSTEMD_READY anymore in our udev rules. Thanks for the report.

https://sourceware.org/git/?p=lvm2.git;a=commit;h=e7c8a825061d57efaffad80667873fa8d68d31ab
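
For anyone wanting to confirm the behaviour on their own system, a quick check (using /dev/md0 purely as an example device name):

# What udev recorded for the array - SYSTEMD_READY=0 here means systemd
# considers the device unit not ready, so SYSTEMD_WANTS is never acted on.
udevadm info /dev/md0 | grep -E 'SYSTEMD_READY|SYSTEMD_WANTS'

# Whether the corresponding device unit actually pulls in the monitor service.
systemctl show -p Wants dev-md0.device
systemctl status mdmonitor.service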

prajnoha closed this as completed Feb 1, 2023
jollaitbot pushed a commit to sailfishos-mirror/lvm2 that referenced this issue Feb 1, 2023:
Since 67722b3, we have a new mechanism
to run the autoactivation from udev. With this change, we also replaced
the way the LVM autoactivation service is instantiated - instead of
setting the SYSTEMD_WANTS udev variable (which systemd read and then
instantiated the service), we're now directly instantiating the
transient 'lvm-activate-<vgname>' service by calling systemd-run.

As such, we don't need to bother with setting the SYSTEMD_READY variable
for foreign devices anymore (in this case, MD and loop devices on top of
which there's a PV).

Before, we set the SYSTEMD_READY variable to make sure that the SYSTEMD_WANTS
is applied correctly - the service instantiation was edge-triggered by
flipping the SYSTEMD_READY from 0 to 1 and at the same time having the
SYSTEMD_WANTS variable set to the service name to instantiate. We're
using systemd-run now so this condition does not apply anymore.

Also, it was not completely correct to set SYSTEMD_READY for foreign
devices because there might be cases where this could cause issues,
see also lvmteam/lvm2#94.
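
The new mechanism described in the commit message boils down to a rule of roughly this shape - a loose sketch reconstructed from the commit message, not the verbatim shipped rule; binary paths and the exact pvscan options vary between lvm2 versions and distributions:

# 69-dm-lvm.rules (new autoactivation, paraphrased sketch): pvscan reports
# whether a VG became complete on this uevent, and systemd-run then starts a
# transient lvm-activate-<vgname> service that performs the activation.
IMPORT{program}="/usr/sbin/lvm pvscan --cache --listvg --checkcomplete --vgonline --udevoutput --autoactivation event $env{DEVNAME}"
ENV{LVM_VG_NAME_COMPLETE}=="?*", RUN+="/usr/bin/systemd-run --no-block --property DefaultDependencies=no --unit lvm-activate-$env{LVM_VG_NAME_COMPLETE} /usr/sbin/lvm vgchange -aay --autoactivation event $env{LVM_VG_NAME_COMPLETE}"

The key point for this issue: activation no longer depends on systemd seeing SYSTEMD_READY flip from 0 to 1 on the device, so 69-dm-lvm.rules no longer needs to touch SYSTEMD_READY on MD or loop devices at all, and 63-md-raid-arrays.rules can pull in mdmonitor.service as intended.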