
Within a session, can one stream that is slow to read starve the other streams? #18

Closed
longzhiri opened this issue Apr 12, 2017 · 7 comments
Labels
HOLB, smux v2 (smux version 2 protocol)

Comments

@longzhiri

All streams in a session share a single pool of read tokens. If one stream processes and reads its packets slowly, it can quickly exhaust the session's read tokens, so the other, healthy streams can no longer read any packets at all. This occurred to me while reading the code; I haven't tested it, and would like to confirm whether the problem exists.
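The starvation the issue describes can be sketched in a few lines of Go. This is a hypothetical illustration of the shared-pool design, not smux's actual code: every received frame consumes one session-wide token, and tokens are only returned when the owning stream Reads the data.

```go
package main

import "fmt"

// session models a session-wide token pool shared by all streams
// (illustrative only; field and method names are assumptions).
type session struct {
	tokens int // shared read budget for the whole session
}

// recv buffers one incoming frame, consuming one shared token.
// When the pool is empty the receive loop must stop, which stalls
// every stream in the session, not just the slow one.
func (s *session) recv() bool {
	if s.tokens == 0 {
		return false
	}
	s.tokens--
	return true
}

func main() {
	s := &session{tokens: 4}
	// Stream 1 is slow: it never calls Read, so its frames hold
	// tokens indefinitely and drain the whole pool.
	for i := 0; i < 4; i++ {
		s.recv()
	}
	// A frame for a perfectly healthy stream 2 is now rejected too:
	fmt.Println("stream 2 can receive:", s.recv()) // prints "stream 2 can receive: false"
}
```

This is the head-of-line blocking (HOLB) that the labels on this issue refer to: the shared budget couples the fate of all streams.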

@xtaci
Owner

xtaci commented Apr 12, 2017

Completely correct.

@xtaci xtaci closed this as completed Apr 18, 2017
@cs8425

cs8425 commented Apr 24, 2017

I hit exactly this problem in my own TCP-over-TCP implementation.
What about changing it to one token budget per stream,
and adding two control frames: full and empty?
Would that be better?
I'm rewriting it now;
if it works out I'll submit a PR.

@xtaci
Owner

xtaci commented Apr 24, 2017

@cs8425 I don't see how one-token-per-stream could be implemented when you have no way of knowing in advance what the next stream id will be.

@cs8425

cs8425 commented Apr 24, 2017

@xtaci
The rough idea is this:
Drop the Session-level token bucket.
Add a token field to the Stream struct.
In the Session's recvLoop(), on cmdPSH, check the Stream's tokens (call this end A).
If they are insufficient, send stream id + full to the peer (call it end B).
Block Write on that stream id at end B,
until end A has Read enough data out (tokens are plentiful again),
at which point end A sends stream id + empty to end B,
letting that stream at end B resume Write.
I'm not sure whether this approach has any problems, though.

@xtaci
Owner

xtaci commented Apr 24, 2017

But then the more streams there are, the more memory it uses.

@cs8425

cs8425 commented Apr 24, 2017

Yes.
Ideally there would also be a mechanism to cap the maximum number of concurrent streams.
But in theory,
unless enough streams are all slow
and the peer never receives the control frames that block its Write,
the memory usage should be acceptable.
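The memory bound being discussed is simple arithmetic: under per-stream tokens, the worst case is every stream filling its window before the full frame takes effect, so capping concurrent streams caps total buffer memory. The numbers below are purely illustrative, not smux defaults.

```go
package main

import "fmt"

func main() {
	// Hypothetical figures: per-stream window and a cap on streams.
	const (
		perStreamWindow = 64 * 1024 // 64 KiB budget per stream
		maxStreams      = 1024      // cap on concurrent streams
	)
	// Worst case: every stream's window is full simultaneously.
	fmt.Printf("worst-case buffered bytes: %d MiB\n",
		perStreamWindow*maxStreams/(1024*1024)) // prints "worst-case buffered bytes: 64 MiB"
}
```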

@xtaci xtaci reopened this Sep 22, 2019
@xtaci xtaci pinned this issue Sep 22, 2019
@xtaci xtaci added HOLB smux v2 smux version 2 protocol labels Sep 22, 2019
@xtaci xtaci closed this as completed Oct 3, 2019
@xtaci xtaci unpinned this issue Jan 4, 2020