lvm cache volumes / flushing blocks #30

Closed
rmalchow opened this issue May 12, 2020 · 1 comment

@rmalchow

I have a cached LV that I want to extend. As far as I know, I have to remove the caching, extend the LV, and then add the caching back afterwards.
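
(Roughly the sequence I expect to need; the pool name, PV, and sizes below are just placeholders based on my setup:)

  lvconvert --uncache vgraid/lvraid          # flush dirty blocks and drop the cache pool
  lvextend -L +1T vgraid/lvraid              # grow the now-uncached LV (size is only an example)
  lvcreate --type cache-pool -L 1T -n lvraid_cache_data vgraid /dev/mapper/cache_data
  lvconvert --type cache --cachepool vgraid/lvraid_cache_data vgraid/lvraid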

however:

lvconvert --uncache /dev/vgraid/lvraid

results in an endless stream of "flushing NNN blocks" messages and never finishes; the same happens with other commands (such as lvremove vgraid/cache_data). Using lvs, I can see the following:

   LV                        VG     Attr       LSize    Pool                Origin         Data%  Meta%  Move Log Cpy%Sync Convert Devices                                                                    CacheTotalBlocks CacheUsedBlocks  CacheDirtyBlocks Type       CacheMode   
  [lvol0_pmspare]           vgraid ewi-------    2.00g                                                                            /dev/mapper/cache_data(359138)                                                                                                linear                 
  lvraid                    vgraid Cwi-aoC---   14.55t [lvraid_cache_data] [lvraid_corig] 99.99  0.58            99.32            lvraid_corig(0)                                                                      999340           999339           992494 cache      writethrough
  [lvraid_cache_data]       vgraid Cwi---C---    1.37t                                    99.99  0.58            99.32            lvraid_cache_data_cdata(0)                                                           999340           999339           992494 cache-pool writethrough
  [lvraid_cache_data_cdata] vgraid Cwi-ao----    1.37t                                                                            /dev/mapper/cache_data(0)                                                                                                     linear                 
  [lvraid_cache_data_cmeta] vgraid ewi-ao----    2.00g                                                                            /dev/mapper/cache_meta(0)                                                                                                     linear                 
  [lvraid_corig]            vgraid rwi-aoC---   14.55t                                                           100.00           lvraid_corig_rimage_0(0),lvraid_corig_rimage_1(0),lvraid_corig_rimage_2(0)                                                    raid5                  
  [lvraid_corig_rimage_0]   vgraid iwi-aor---   <7.28t                                                                            /dev/mapper/sda1_crypt(1)                                                                                                     linear                 
  [lvraid_corig_rimage_1]   vgraid iwi-aor---   <7.28t                                                                            /dev/mapper/sdb1_crypt(1)                                                                                                     linear                 
  [lvraid_corig_rimage_2]   vgraid iwi-aor---   <7.28t                                                                            /dev/mapper/sdc1_crypt(1)                                                                                                     linear                 
  [lvraid_corig_rmeta_0]    vgraid ewi-aor---    4.00m                                                                            /dev/mapper/sda1_crypt(0)                                                                                                     linear                 
  [lvraid_corig_rmeta_1]    vgraid ewi-aor---    4.00m                                                                            /dev/mapper/sdb1_crypt(0)                                                                                                     linear                 
  [lvraid_corig_rmeta_2]    vgraid ewi-aor---    4.00m                                                                            /dev/mapper/sdc1_crypt(0)                                                                                                     linear                 
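
(For reference, the extra cache columns above come from an lvs invocation along these lines; the exact field names may differ slightly between lvm2 versions:)

  lvs -a -o +devices,segtype,cache_total_blocks,cache_used_blocks,cache_dirty_blocks vgraid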

One interesting thing I noticed is that the cache_data LV is "NOT available":

  --- Logical volume ---
  Internal LV Name       lvraid_cache_data
  VG Name                vgraid
  LV UUID                3R19cf-WsX8-LJHA-4L9U-ytbw-x1Zo-YfU1Pl
  LV Write Access        read/write
  LV Creation host, time nasenhase, 2019-10-11 20:41:55 +0000
  LV Pool metadata       lvraid_cache_data_cmeta
  LV Pool data           lvraid_cache_data_cdata
  LV Status              NOT available
  LV Size                1.37 TiB
  Current LE             359138
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto

How can I resolve this, short of copying everything off? When I do a vgcfgbackup, I get this:
vgraid_orig.txt

And if I modify it to this:
vgraid_modified.txt

the origin volume mounts fine, but it is missing data ... so apparently, the blocks the cache wants to flush are important after all.
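
(The metadata round-trip was roughly the following, i.e. editing the backup by hand to detach the cache and then forcing it back in, which I realize is risky:)

  vgcfgbackup -f vgraid_orig.txt vgraid      # dump the current VG metadata
  # hand-edit a copy into vgraid_modified.txt, then write it back
  vgcfgrestore -f vgraid_modified.txt --force vgraid
  vgchange -ay vgraid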

What can I do to resolve this?

@zkabelac
Contributor

zkabelac commented Feb 1, 2023

A few updates: recent versions of lvm2 (>= 2.03.12) support resizing cached LVs.
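
With such a version, the LV can be extended directly while the cache stays attached, e.g. (the size here is only an example):

  lvextend -L +1T vgraid/lvraid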

If the uncaching does not proceed, it is likely a problem on the 'origin volume' side: most likely there is a write error, so the cache cannot be flushed back onto the origin storage, and at the moment that blocks further uncaching.

If this is your issue, please open a new issue/bug report and also provide the kernel trace (dmesg) from your system.
