
RAID5 segment fault when extended #33

Open
Arnefar opened this issue Jun 12, 2020 · 2 comments
Arnefar commented Jun 12, 2020

I get a segmentation fault when trying to use lvextend. I know this is probably not something anyone would normally do, but a segfault is always bad.
The same thing works with a striped LV when adding a linear segment with lvextend -i1.

Using loop device I tried out:

  vgcreate vgtest /dev/loop1 /dev/loop2 /dev/loop3
  lvcreate --type raid5 -l100%FREE -n lvtest vgtest
  vgextend vgtest /dev/loop4

All good
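For completeness, the loop devices used above can be backed by sparse files. A minimal setup sketch (the file paths and 100 MiB size are illustrative, not taken from the original report):

```shell
# Create four 100 MiB sparse backing files and attach them as loop devices.
# Attaching requires root; fixed device names like /dev/loop1 assume those
# devices are free -- `losetup -f <file>` would pick the first free one instead.
for i in 1 2 3 4; do
    truncate -s 100M /tmp/pv$i.img
    sudo losetup /dev/loop$i /tmp/pv$i.img
done
```

Afterwards the devices can be detached with `sudo losetup -d /dev/loopN` and the backing files deleted.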

Now:

  lvextend -i1 -l+100%FREE vgtest/lvtest
  Segmentation fault (core dumped)

Bad

dmesg
[40709.266188] traps: lvextend[45557] general protection fault ip:55a7b9278748 sp:7ffd89b6e720 error:0 in lvm[55a7b91b2000+19e000]

My system info

uname -a
Linux arnefar-X570-UD 5.4.0-33-generic #37-Ubuntu SMP Thu May 21 12:53:59 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux

lsb_release -a
Description:	Ubuntu 20.04 LTS
Release:	20.04
Codename:	focal
lvextend --version
  LVM version:     2.03.07(2) (2019-11-30)
  Library version: 1.02.167 (2019-11-30)
  Driver version:  4.41.0
  Configuration:   ./configure --build=x86_64-linux-gnu --prefix=/usr --includedir=${prefix}/include --mandir=${prefix}/share/man --infodir=${prefix}/share/info --sysconfdir=/etc --localstatedir=/var --disable-silent-rules --libdir=${prefix}/lib/x86_64-linux-gnu --libexecdir=${prefix}/lib/x86_64-linux-gnu --runstatedir=/run --disable-maintainer-mode --disable-dependency-tracking --exec-prefix= --bindir=/bin --libdir=/lib/x86_64-linux-gnu --sbindir=/sbin --with-usrlibdir=/usr/lib/x86_64-linux-gnu --with-optimisation=-O2 --with-cache=internal --with-device-uid=0 --with-device-gid=6 --with-device-mode=0660 --with-default-pid-dir=/run --with-default-run-dir=/run/lvm --with-default-locking-dir=/run/lock/lvm --with-thin=internal --with-thin-check=/usr/sbin/thin_check --with-thin-dump=/usr/sbin/thin_dump --with-thin-repair=/usr/sbin/thin_repair --enable-applib --enable-blkid_wiping --enable-cmdlib --enable-dmeventd --enable-dbus-service --enable-lvmlockd-dlm --enable-lvmlockd-sanlock --enable-lvmpolld --enable-notify-dbus --enable-pkgconfig --enable-readline --enable-udev_rules --enable-udev_sync

kergon commented Jun 12, 2020 via email


Arnefar commented Jun 12, 2020

All right, first some info:

#lvs vgtest -o name,vg_name,size,segtype,segsize,devices
  LV     VG     LSize   Type  SSize   Devices                                                 
  lvtest vgtest 184,00m raid5 184,00m lvtest_rimage_0(0),lvtest_rimage_1(0),lvtest_rimage_2(0)

Output of the last part of lvextend -vvvv

#lvextend -vvvv vgtest/lvtest -i1 -l+100%FREE
.....
20:54:54.526153 lvextend[3413] metadata/lv_manip.c:1266  Stack vgtest/lvtest:0[2] on LV vgtest/lvtest_rmeta_2:0.
20:54:54.526160 lvextend[3413] metadata/lv_manip.c:818  Adding vgtest/lvtest:0 as an user of vgtest/lvtest_rmeta_2.
20:54:54.526168 lvextend[3413] metadata/lv_manip.c:1266  Stack vgtest/lvtest:0[2] on LV vgtest/lvtest_rimage_2:0.
20:54:54.526176 lvextend[3413] metadata/lv_manip.c:818  Adding vgtest/lvtest:0 as an user of vgtest/lvtest_rimage_2.
20:54:54.526211 lvextend[3413] toollib.c:2010  Running command for VG vgtest 1rCtmj-FPUc-zWK5-aqQi-H9yr-ia5y-L8OFMC
20:54:54.526223 lvextend[3413] activate/dev_manager.c:810  Getting device info for vgtest-lvtest [LVM-1rCtmjFPUczWK5aqQiH9yria5yL8OFMCURp8fFPVmuGO0Q1JnNjd35Adj4cB96eh].
20:54:54.526236 lvextend[3413] device_mapper/ioctl/libdm-iface.c:1853  dm info  LVM-1rCtmjFPUczWK5aqQiH9yria5yL8OFMCURp8fFPVmuGO0Q1JnNjd35Adj4cB96eh [ noopencount flush ]   [16384] (*1)
20:54:54.526250 lvextend[3413] metadata/lv_manip.c:5148  Converted 100%FREE into at most 24 physical extents.
20:54:54.526260 lvextend[3413] metadata/lv_manip.c:5486  New size for vgtest/lvtest: 70. Existing logical extents: 46 / physical extents: 72.
20:54:54.526273 lvextend[3413] format_text/archiver.c:140  Archiving volume group "vgtest" metadata (seqno 4).
20:54:54.526643 lvextend[3413] metadata/lv_manip.c:5556  Extending logical volume vgtest/lvtest to up to 280,00 MiB
20:54:54.526652 lvextend[3413] metadata/lv_manip.c:4262  Adding segment of type raid5 to LV lvtest.
20:54:54.526667 lvextend[3413] metadata/lv_manip.c:3571  Adjusted allocation request to 24 logical extents. Existing size 46. New size 70.
20:54:54.526681 lvextend[3413] metadata/pv_map.c:53  Allowing allocation on /dev/loop13 start PE 0 length 24
20:54:54.526690 lvextend[3413] metadata/lv_manip.c:3279  Trying allocation using contiguous policy.
20:54:54.526699 lvextend[3413] metadata/lv_manip.c:2878  Areas to be sorted and filled sequentially.
20:54:54.526705 lvextend[3413] metadata/lv_manip.c:2790  Still need up to 24 total extents from 24 remaining (0 positional slots):
20:54:54.526714 lvextend[3413] metadata/lv_manip.c:2794    1 (1 data/0 parity) parallel areas of 24 extents each
20:54:54.526723 lvextend[3413] metadata/lv_manip.c:2797    0 mirror logs of 0 extents each
20:54:54.526731 lvextend[3413] metadata/lv_manip.c:2449  Considering allocation area 0 as /dev/loop13 start PE 0 length 24 leaving 0.
20:54:54.526740 lvextend[3413] metadata/lv_manip.c:2033  Allocating parallel area 0 on /dev/loop13 start PE 0 length 24.
Segmentation fault (core dumped)

Hope that's enough info.
