RWFileHandle.Flush error: failed to transfer file from cache to remote #2095

Closed
Nodens- opened this issue Feb 26, 2018 · 10 comments

Nodens- commented Feb 26, 2018

gocryptfs on top of rclone mount. I updated to current git and this started happening.

What is your rclone version (eg output from rclone -V)

rclone v1.39-DEV
os/arch: linux/amd64
go version: go1.9.4

Which OS you are using and how many bits (eg Windows 7, 64 bit)

Fedora 27

Which cloud storage system are you using? (eg Google Drive)

Google Drive

The command you were trying to run (eg rclone copy /tmp remote:tmp)

rclone: Version "v1.39-DEV" starting with parameters ["/usr/local/bin/rclone" "mount" "-vv" "--stats" "5m" "--allow-other" "--default-permissions" "--vfs-cache-mode" "writes" "--low-level-retries=200" "--bwlimit=250K" "--log-file=/var/log/rclone/rclone.log" "gd:" "/Drive/.remote"]

A log from the command with the -vv flag (eg output from rclone -vv copy /tmp remote:tmp)

issue.zip

ncw added this to the v1.40 milestone Feb 26, 2018

ncw commented Mar 5, 2018

Can you have a go with the latest beta please? Lots of VFS fixes went in over the last few days.


Nodens- commented Mar 6, 2018

Initial test on:

fc32fee (HEAD -> master, origin/vfs-update, origin/master, origin/HEAD) mount, cmount: add --attr-timeout to control attribute caching in kernel

Moving a directory which contains a 2 GB 7z archive that previously always reproduced the error went through just fine. I will leave this open while I test a bit more, though, because other directories produced the error randomly while moving big queues of directories containing smaller files with Midnight Commander.

What I noticed immediately, though, is a slight delay when cd-ing into directories on the remote with Midnight Commander, although there are no errors in the logs and network conditions are nominal. I can replicate this latency with ls in the shell. Something related to the caching of the file tree, maybe?

EDIT: For clarification, there was always a tiny amount of latency there; now it is just a lot more noticeable. This is the result of ls without arguments in a directory with 746 directories inside.
real 0m6.793s
user 0m0.002s
sys 0m0.006s

Unfortunately I did not take a measurement with the previous binary.
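
For reference, a minimal sketch of how that measurement can be taken (the directory name here is just a placeholder):

cd /Drive/.remote/some-directory
time ls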


ncw commented Mar 6, 2018

Moving a directory which contains a 2 GB 7z archive that previously always reproduced the error went through just fine. I will leave this open while I test a bit more, though, because other directories produced the error randomly while moving big queues of directories containing smaller files with Midnight Commander.

Good news!

What I noticed immediately, though, is a slight delay when cd-ing into directories on the remote with Midnight Commander, although there are no errors in the logs and network conditions are nominal. I can replicate this latency with ls in the shell. Something related to the caching of the file tree, maybe?

That is interesting, thanks for reporting it.

It is almost certainly to do with fc32fee, where I've set --attr-timeout to 0 by default. That may have been too conservative though (it is the most accurate setting but also the slowest), so I'm wondering whether the default should maybe be --attr-timeout 1s or --attr-timeout 100ms. Can you experiment with those? I think the previous setting was 60s, which is way too long for a network file system.
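
As a sketch of what to try, here is a simplified version of the mount command from the original report with the flag added (only the relevant flags are kept):

rclone mount --attr-timeout 1s --vfs-cache-mode writes gd: /Drive/.remote

and likewise with --attr-timeout 100ms in place of 1s.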


Nodens- commented Mar 7, 2018

Further testing on the original issue looks good. Can't reproduce so far.

On the latency side of things: the previously posted measurement was taken while uploading was going on (limited bandwidth, nowhere near saturating upstream).

default (--attr-timeout 0s):
real 0m1.916s
user 0m0.005s
sys 0m0.003s

--attr-timeout 1s:
real 0m0.481s
user 0m0.003s
sys 0m0.004s

--attr-timeout 100ms:
real 0m0.447s
user 0m0.001s
sys 0m0.006s

--attr-timeout 100ms (while uploading, same conditions as yesterday):
real 0m0.455s
user 0m0.003s
sys 0m0.004s

How dangerous is 100ms?
If I understand this correctly this affects attribute changes that happen via all other sources except the current instance of rclone/kernel, correct?
And since modification time is an attribute that changes on literally any change whatsoever, this includes changes made by an rclone VFS running on another system and mounting the same remote, correct?


ncw commented Mar 7, 2018

Great testing thank you.

How dangerous is 100ms?

What it means is that there is a 100ms window during which the kernel hangs onto a cached value even though the real value has changed externally.

If I understand this correctly this affects attribute changes that happen via all other sources except the current instance of rclone/kernel, correct?

Yes that is correct.

And since modification time is an attribute that changes on literally any change whatsoever, this includes changes made by an rclone VFS running on another system and mounting the same remote, correct?

No, you need to actually change the file to change the modification time. I think you are thinking of access time which rclone doesn't track.

It is no big deal if the modification time is wrong; what does cause problems is if the length is wrong. Say the kernel thinks the file is 100 bytes long, but it is actually 200 bytes long after an update: the extra 100 bytes get chopped off with no errors.

Do you think I should change the default to 100ms? That is a very small window of opportunity for badness to happen, and it does increase performance dramatically.


Nodens- commented Mar 8, 2018

No, you need to actually change the file to change the modification time. I think you are thinking of access time which rclone doesn't track.

I was indeed thinking of modification time, not access time. By "any change whatsoever" I meant any write op. The commit did not describe the exact conditions for corruption, or the kind of corruption, and I was thinking that any write operation inside a directory (such as simply writing a new file) will change the modification timestamp of said directory. My main concern is corruption while operating on the remote via another instance of rclone running elsewhere, which is something I am currently doing as I sync some directory trees between other Linux systems.

It is no big deal if the modification time is wrong; what does cause problems is if the length is wrong. Say the kernel thinks the file is 100 bytes long, but it is actually 200 bytes long after an update: the extra 100 bytes get chopped off with no errors.

Thank you for the clarification.

Do you think I should change the default to 100ms? That is a very small window of opportunity for badness to happen, and it does increase performance dramatically.

This is a tough question. In my eyes, data reliability > performance, but this only affects files which are modified by different sources, and the time window is rather small. Given that the most common use case is a single mount for data storage, and that the difference in performance is huge, it could make sense to change the default. On the other hand, any user who decides to set up something more complicated will be oblivious to this issue. I am also assuming that this is impossible to handle cleanly, as the remote APIs provide no mechanism similar to inotify.

My suggestion would be to leave the default as it is, to ensure data reliability, and add a "Performance Tweaks" section to the documentation that lists this parameter with a caveat clearly describing the limitation, so someone can make an informed decision on their own about whether they should change the default or not. Other performance-related parameters could be listed there as well, along with any possible quirks.

For my use case, I will just isolate the directory trees used for syncing into their own rclone instance with the default 0 setting and use 100ms for everything else.
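
A rough sketch of what I mean (the remote paths and mount points below are placeholders, not my real layout):

# directory trees used for syncing: keep the safe default of --attr-timeout 0s
rclone mount --vfs-cache-mode writes gd:synced /Drive/.synced
# everything else: accept the small 100ms window for the better performance
rclone mount --attr-timeout 100ms --vfs-cache-mode writes gd:other /Drive/.other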


Nodens- commented Mar 9, 2018

I just had an instance of the original error again. Unfortunately, I had turned off -vv.

2018/03/08 23:16:10 ERROR : Wintermute/3fBet0IZnlXTblSF5GtFTw/YjsNF-sVu9FgIm9J7m9IRle9SqoVCTrZHQAT2--R9L193cYf7iaXDSRlKaHv7_c8/VsdWLQdKBfg-5PaUN3tReg: corrupted on transfer: MD5 hash differ "bfe2034e13cf09bf9f5b6c5cda3abb55" vs "f152c9ed4cd89c0d478138b589b4d568"
2018/03/08 23:16:10 ERROR : Wintermute/3fBet0IZnlXTblSF5GtFTw/YjsNF-sVu9FgIm9J7m9IRle9SqoVCTrZHQAT2--R9L193cYf7iaXDSRlKaHv7_c8/VsdWLQdKBfg-5PaUN3tReg(0xc421cc2b40): failed to transfer file from cache to remote: corrupted on transfer: MD5 hash differ "bfe2034e13cf09bf9f5b6c5cda3abb55" vs "f152c9ed4cd89c0d478138b589b4d568"
2018/03/08 23:16:10 ERROR : Wintermute/3fBet0IZnlXTblSF5GtFTw/YjsNF-sVu9FgIm9J7m9IRle9SqoVCTrZHQAT2--R9L193cYf7iaXDSRlKaHv7_c8/VsdWLQdKBfg-5PaUN3tReg(0xc421cc2b40): RWFileHandle.Flush error: failed to transfer file from cache to remote: corrupted on transfer: MD5 hash differ "bfe2034e13cf09bf9f5b6c5cda3abb55" vs "f152c9ed4cd89c0d478138b589b4d568"


ncw commented Mar 13, 2018

@Nodens- wrote:

I was thinking that any write operation inside a directory (such as simply writing a new file) will change the modification timestamp of said directory

rclone doesn't (yet) track directory modification times, so you are safe there.

My main concern is corruption while operating on the remote via another instance of rclone running elsewhere, which is something I am currently doing as I sync some directory trees between other Linux systems.

If you are only uploading new files then I think everything will be fine. If you are changing existing files, then there are windows of opportunity to get inconsistent results. However, I've been trying to minimise these, as this is exactly one of the use cases I've been working on: https://beta.rclone.org is served from an rclone mount which mounts a storage bucket. I hope that with the latest changes I've reduced that window down to zero (provided you don't set --attr-timeout), but since the cloud storage systems are only eventually consistent, I expect there are still some ways you could get inconsistent data. These inconsistent results should only ever be temporary, though.

My suggestion would be to leave the default as it is, to ensure data reliability, and add a "Performance Tweaks" section to the documentation that lists this parameter with a caveat clearly describing the limitation, so someone can make an informed decision on their own about whether they should change the default or not

I think I'm going to take your suggestion. I wrote this in the docs:

Attribute caching

You can use the flag --attr-timeout to set the time the kernel caches
the attributes (size, modification time etc) for directory entries.

The default is 0s - no caching - which is recommended for filesystems
which can change outside the control of the kernel.

If you set it higher ('1s' or '1m' say) then the kernel will call back
to rclone less often making it more efficient, however there may be
strange effects when files change on the remote.

This is the same as setting the attr_timeout option in mount.fuse.
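
To illustrate that last point (sshfs is used here purely as an example of another FUSE filesystem, it is not something rclone needs), the same control is expressed there as a mount option:

sshfs user@host:/path /mnt -o attr_timeout=1

whereas with rclone mount it is the --attr-timeout flag shown above.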

I just had an instance of the original error again. Unfortunately, I had turned off -vv.

:-(

That looks like a different problem, as if the cached file was being written to while it was being uploaded (which shouldn't be possible) or possibly it was a genuine corrupted upload (unlikely).

Any chance you could replicate it with a -vv log?


Nodens- commented Mar 13, 2018

That looks like a different problem, as if the cached file was being written to while it was being uploaded (which shouldn't be possible) or possibly it was a genuine corrupted upload (unlikely).

Any chance you could replicate it with a -vv log?

I did not pay full attention to the log, as it looked the same at first glance and frankly I was not expecting something new, heh. I did eventually notice the error was different, but I was trying to replicate with -vv before providing further feedback. I have been running uploads non-stop for 4 days now with -vv, including rerunning the queue that first produced this, but nothing so far.
I will go ahead and close this, and if and when I manage to reproduce it I will open a new issue, as it is unrelated anyhow.

Nodens- closed this as completed Mar 13, 2018

ncw commented Mar 13, 2018

@Nodens- that sounds like a good plan, and thank you for all your testing - your excellent bug reports are making rclone better :-)

ncw added a commit that referenced this issue Mar 23, 2018
This works around these 3 problems:

  * rclone using too much memory #2157
  * rclone not serving files to samba
    * https://forum.rclone.org/t/rclone-1-39-vs-1-40-mount-issue/5112
  * excessive time listing directories #2095