Temporary repos aren't always cleaned up #1609

Closed
mwleeds opened this issue Apr 24, 2018 · 4 comments

Comments

@mwleeds
Collaborator

mwleeds commented Apr 24, 2018

Linux distribution and version

Fedora 27

Flatpak version

0.11.3

Description of the problem

If an install or update operation is interrupted or fails, the temporary repos in /var/tmp/flatpak-cache-XXXXXX/ aren't necessarily cleaned up. This is separate from #1263 because I'm talking about the repo itself, not the lock file.

Steps to reproduce

  1. Run "flatpak install ..." to start an install
  2. Once the install has gotten far enough to create a repo in /var/tmp/flatpak-cache-XXXXXX/, disconnect from the Internet to make the install fail.
  3. Notice that the repo is never cleaned up.

If the operation fails because the flatpak process is killed, I suppose there's not much we can do. But if there's an error we should be able to clean up.
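For illustration, here is a minimal sketch of the kind of cleanup that could run on the error path. This is not Flatpak's actual code: the rm_rf helper name is made up, and the hook point in the transaction code is assumed; it only shows recursively deleting a /var/tmp/flatpak-cache-XXXXXX directory with GIO.

```c
#include <gio/gio.h>

/* Hypothetical helper, not Flatpak's real cleanup code: recursively delete
 * a temporary cache directory such as /var/tmp/flatpak-cache-XXXXXX. */
static gboolean
rm_rf (GFile *dir, GCancellable *cancellable, GError **error)
{
  g_autoptr(GFileEnumerator) children = NULL;
  GFileInfo *info;

  children = g_file_enumerate_children (dir,
                                        G_FILE_ATTRIBUTE_STANDARD_NAME ","
                                        G_FILE_ATTRIBUTE_STANDARD_TYPE,
                                        G_FILE_QUERY_INFO_NOFOLLOW_SYMLINKS,
                                        cancellable, error);
  if (children == NULL)
    return FALSE;

  while ((info = g_file_enumerator_next_file (children, cancellable, NULL)) != NULL)
    {
      g_autoptr(GFile) child = g_file_get_child (dir, g_file_info_get_name (info));
      gboolean is_dir = g_file_info_get_file_type (info) == G_FILE_TYPE_DIRECTORY;

      g_object_unref (info);

      /* Recurse into subdirectories, delete everything else directly. */
      if (is_dir ? !rm_rf (child, cancellable, error)
                 : !g_file_delete (child, cancellable, error))
        return FALSE;
    }

  /* Finally remove the (now empty) directory itself. */
  return g_file_delete (dir, cancellable, error);
}
```

On an error path this would be called with the GFile for the transaction's temporary cache directory before propagating the error; where exactly that call would live inside Flatpak is not shown here.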

@alexlarsson
Member

Well, it's kind of meant to work like that. This is how a partially downloaded result is picked up again when we restart the operation after some sort of transient failure. We should eventually remove it, though, but I think that happens on the next successful operation, no?

@uajain
Contributor

uajain commented Apr 25, 2018

Right. I had the same experience as what Alex explained. This is similar to an issue I'm looking into right now, where the disk gets full: the operation just aborts and no cleanup is done. On one hand, that gives the user a chance to free up space and retry, without having to re-fetch the part that was already downloaded.

On the other hand, it has been suggested to me that we should clean up, because in the disk-full case there is no successful deployment of the app and the user is left with no disk space. Alex, do you have any ideas/suggestions on that?

@ramcq
Contributor

ramcq commented Apr 25, 2018

I think the recovery path for the full-disk case should be more aggressive (like, drop all of the caches) because it's about disaster recovery and leaving the system usable again. Maybe we could remove all of the /other/ caches on the first out-of-disk error, and if we only have one cache and we're still out of disk, then we have to remove it too and throw the whole cache away?

Note that this out-of-disk fallback shouldn't be "plan A": plan A should be to avoid getting into this state in the first place, because we're wasting user disk space, bandwidth, or both. But see ostreedev/ostree#1557 for that.
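To make the first step of that fallback concrete, here is a rough sketch. It is hypothetical, not Flatpak's real code: it reuses the rm_rf helper sketched earlier, assumes the flatpak-cache- naming under /var/tmp, and only shows dropping every cache other than our own so a retry can still resume; only if the retry still hits an out-of-space error would our own cache be dropped as well.

```c
#include <gio/gio.h>
#include <glib.h>

/* Hypothetical fallback, not Flatpak's real code: on the first out-of-space
 * error, remove every flatpak-cache-XXXXXX directory except the one the
 * current transaction is using, so a retry can still resume the download. */
static void
prune_other_caches (const char *our_cache_path)
{
  GDir *dir = g_dir_open ("/var/tmp", 0, NULL);
  const char *name;

  if (dir == NULL)
    return;

  while ((name = g_dir_read_name (dir)) != NULL)
    {
      g_autofree char *path = NULL;
      g_autoptr(GFile) file = NULL;

      if (!g_str_has_prefix (name, "flatpak-cache-"))
        continue;

      path = g_build_filename ("/var/tmp", name, NULL);
      if (g_strcmp0 (path, our_cache_path) == 0)
        continue; /* keep our own cache so the retry can reuse it */

      file = g_file_new_for_path (path);
      rm_rf (file, NULL, NULL); /* best effort; ignore errors here */
    }

  g_dir_close (dir);
}
```

A real implementation would also need to skip caches that another running transaction is actively using (for example via the lock file discussed in #1263) before deleting them.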

@mwleeds
Collaborator Author

mwleeds commented Apr 28, 2018

Looks like we already have #1119 about this.

mwleeds closed this as completed Jul 23, 2018