
Fix regression: termination of long downloads after StreamTimeout seconds #141

Merged
merged 5 commits into cbeuw:master on Dec 12, 2020

Conversation

@notsure2 (Contributor) commented Dec 6, 2020

  • Even if not broadcasting in a loop, we still need to update the read deadline.
  • Don't enforce the timeout after the first data is written.

Fixes #140

@notsure2 force-pushed the fix-termination-of-long-downloads branch from 36e25dd to ee5968a on December 6, 2020 15:52
codecov bot commented Dec 6, 2020

Codecov Report

Merging #141 (9d6ab39) into master (46f4235) will decrease coverage by 0.08%.
The diff coverage is 100.00%.


@@            Coverage Diff             @@
##           master     #141      +/-   ##
==========================================
- Coverage   67.69%   67.61%   -0.09%     
==========================================
  Files          37       37              
  Lines        2402     2399       -3     
==========================================
- Hits         1626     1622       -4     
  Misses        612      612              
- Partials      164      165       +1     
| Impacted Files | Coverage Δ |
|---|---|
| internal/server/dispatcher.go | 61.53% <ø> (-0.22%) ⬇️ |
| internal/client/piper.go | 45.33% <100.00%> (+1.49%) ⬆️ |
| internal/multiplex/streamBufferedPipe.go | 90.54% <0.00%> (-2.71%) ⬇️ |
| cmd/ck-client/ck-client.go | 0.00% <0.00%> (ø) |
| internal/multiplex/session.go | 86.80% <0.00%> (+0.22%) ⬆️ |
| internal/multiplex/datagramBufferedPipe.go | 83.72% <0.00%> (+1.16%) ⬆️ |

Legend:
Δ = absolute <relative> (impact), ø = not affected, ? = missing data
Last update 46f4235...9d6ab39.

@notsure2 (Contributor, Author)

I explained my reasoning and the correct behavior in my bug report on shadowsocks-rust, which has the same bugs: shadowsocks/shadowsocks-rust#343

In Cloak's case, since it is a point-to-point tunnel, UDP should be handled like TCP: time out only while waiting for the first byte, and apply no timeout at all after that.

@notsure2 (Contributor, Author)

I have been testing these changes for many days now, and so far Cloak is behaving properly. I use it 24x7.

@cbeuw (Owner) commented Dec 11, 2020

I agree with you fully on the points raised in shadowsocks/shadowsocks-rust#343, and Cloak should exhibit the same behaviour as shadowsocks-libev currently does.

In terms of implementation, I think it's better not to change the behaviour of stream/datagramBufferedPipe, because they can be seen as standalone structures. The caller should be responsible for implementing the initial-timeout-only behaviour.

In fact, I don't think we even need to change the multiplex module at all. Would it work if you keep these changes

```go
_ = localConn.SetReadDeadline(time.Now().Add(streamTimeout))
i, err := io.ReadAtLeast(localConn, data, 1)
if err != nil {
	log.Errorf("Failed to read first packet from proxy client: %v", err)
	localConn.Close()
	return
}
var zeroTime time.Time
_ = localConn.SetReadDeadline(zeroTime)
```

Revert the changes in stream/datagramBufferedPipe.go

And in dispatcher.go on server side, simply delete SetWriteToTimeout?

```go
// if stream has nothing to send to proxy server for sta.Timeout period of time, stream will return error
newStream.(*mux.Stream).SetWriteToTimeout(sta.Timeout)
go func() {
```

Since streams in Cloak are lazily opened, meaning the server only allocates resources for a stream once it receives data for it, there is no need even for an initial timeout on the server side: we are always guaranteed to have data available on the first read, so no timeout is needed at all.

@notsure2 (Contributor, Author)

OK, with your suggested changes things still seem fine.

@notsure2 (Contributor, Author)

@cbeuw can we get a review and merge please :-) this is a very annoying bug.

@cbeuw cbeuw merged commit e77fd4c into cbeuw:master Dec 12, 2020
@notsure2 (Contributor, Author)

Don't forget Android too.

@cbeuw (Owner) commented Dec 12, 2020

Thanks! I'll do a new release once I merge the CDN PR.

@notsure2 deleted the fix-termination-of-long-downloads branch on December 13, 2020 17:59
Successfully merging this pull request may close these issues.

Regression in 2.3.2: Long download closed after StreamTimeout even if data still moving