LVM cache cannot be removed if cache volume is lost #35
Comments
dmesg helped out here, the --cachesettings do not seem to work properly:
I ran into a similar issue. In my case the cache drive was a USB SSD that was unplugged while the system was running, and now when I try to uncache I get this:
I am on Ubuntu 22.04 running LVM:
Bumping this up. The good thing is that it was a writethrough cache, so I hope my data is OK. Is there any way to remove the cache from an LV in that situation?
Is the SSD itself in trouble (i.e. does it fail with read errors)? It's interesting that you've managed to get a dirty cache in writethrough mode. Recovery in this case can be non-trivial, as the kernel target is a bit 'dumb' and cannot skip problematic parts of the device.

To get out of this situation:

1. Activate the cache origin and the cache data and metadata LVs in 'component activation' mode. This brings up all the devices separately in read-only mode; just activate every sub-LV of your cached LV individually with `lvchange -ay ...`.
2. Run `dmsetup table` and grab the table line for your original cached device.
3. Use the `cache_writeback` tool from the device-mapper persistent-data tools package to rescue as many blocks as possible.
4. Once you've rescued the maximum number of blocks, deactivate everything, and then you can forcibly remove/detach the caching device from your cached LV with `lvremove --force vgname/cachepoolname`.

It's a somewhat awkward solution for this case, which should be improved on the kernel side as well as on the user-space side. If you run into any trouble with the advice in this message, it's always better to ask before doing irreversible damage.
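The steps above can be sketched roughly as follows. This is a hedged sketch, not a tested recipe: `lvmgroup/disk` and `cachepool` are placeholder names taken from the example later in this issue, the sub-LV names (`disk_corig`, `cachepool_cdata`, `cachepool_cmeta`) follow LVM's usual naming but should be confirmed with `lvs -a`, and the `/dev/mapper/...` paths assume the default device-mapper naming:

```shell
# Component activation: bring up every hidden sub-LV of the cached LV
# individually, read-only. Confirm the exact names first with `lvs -a`.
lvchange -ay lvmgroup/disk_corig
lvchange -ay lvmgroup/cachepool_cdata
lvchange -ay lvmgroup/cachepool_cmeta

# Grab the table line for the original cached device, for reference.
dmsetup table

# Write back as many dirty blocks as possible from cache to origin.
# cache_writeback ships in the device-mapper-persistent-data
# (a.k.a. thin-provisioning-tools) package.
cache_writeback \
    --metadata-device /dev/mapper/lvmgroup-cachepool_cmeta \
    --origin-device   /dev/mapper/lvmgroup-disk_corig \
    --fast-device     /dev/mapper/lvmgroup-cachepool_cdata

# Deactivate everything, then forcibly drop the cache pool.
lvchange -an lvmgroup/disk_corig
lvchange -an lvmgroup/cachepool_cdata
lvchange -an lvmgroup/cachepool_cmeta
lvremove --force lvmgroup/cachepool
```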
Let's check the sequence: So to do that: and then: I am not sure I understood that correctly: If this is ok, then I do: Correct?
Ok, let's say I was lucky:
If an LVM logical volume is backed by a cached volume, and that cached volume disappears or becomes corrupt, it cannot be removed.
For example, assume a logical volume called lvmgroup/disk:
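The original commands here were lost in the page export; a minimal sketch of a comparable setup, assuming a brd ramdisk as the (volatile) cache device and illustrative sizes, might look like:

```shell
# Create a 1 GiB ramdisk to serve as the cache device.
modprobe brd rd_nr=1 rd_size=1048576   # /dev/ram0

# Add the ramdisk to the VG and attach it as a cache pool
# to the existing LV lvmgroup/disk.
pvcreate /dev/ram0
vgextend lvmgroup /dev/ram0
lvcreate --type cache-pool -L 512M -n cachepool lvmgroup /dev/ram0
lvconvert --type cache --cachepool lvmgroup/cachepool lvmgroup/disk
```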
After lvmgroup/disk has been converted to a cached LV, reboot or force the ramdisk offline.
Such as:
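The example command was lost in the export; assuming the brd ramdisk setup sketched earlier in this issue, forcing the cache device offline could be as simple as:

```shell
# Unloading the brd module destroys /dev/ram0 and its contents,
# simulating a lost cache device. (Alternatively, just reboot.)
rmmod brd
```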
Now try and remove the cache:
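The exact command and error output were lost in the export; a hedged guess at the attempt, using the usual way to drop a cache pool:

```shell
# Try to detach and discard the cache pool.
# With the cache PV (/dev/ram0) missing, LVM refuses to proceed.
lvconvert --uncache lvmgroup/disk
```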
It's possible to forcibly re-add the ramdisk using the same UUID and then try again:
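A sketch of that re-add, assuming the old PV UUID is taken from the "missing PV" error message or from /etc/lvm/backup/lvmgroup (the UUID placeholder below must be replaced with the real value):

```shell
# Recreate the ramdisk, then recreate the PV on it with the old UUID
# so the VG metadata recognizes it again.
modprobe brd rd_nr=1 rd_size=1048576
pvcreate --uuid "<old-pv-uuid>" \
         --restorefile /etc/lvm/backup/lvmgroup /dev/ram0
vgcfgrestore lvmgroup

# Now retry removing the cache.
lvconvert --uncache lvmgroup/disk
```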
At this point, the only way I found to fix this is to take the /etc/lvm/backup/lvmgroup file, modify it to remove the cache entries, rename disk_corig back to disk, add the "VISIBLE" flag back, and then run vgcfgrestore -f on the modified file.
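That manual workflow, as a sketch (hand-editing VG metadata is risky; work on a copy, and the file and LV names are the ones from this report):

```shell
# Work on a copy of the metadata backup.
cp /etc/lvm/backup/lvmgroup /tmp/lvmgroup.edited

# In /tmp/lvmgroup.edited, by hand:
#   - remove the cache-pool LV and the cache segment entries,
#   - rename disk_corig back to disk,
#   - add "VISIBLE" back to that LV's status flags.

# Restore the edited metadata into the VG.
vgcfgrestore -f /tmp/lvmgroup.edited lvmgroup
```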