
v2ray should add practical features like Fake IP and proxy groups #377

Closed
RPRX opened this issue Oct 31, 2020 · 29 comments
Labels
enhancement New feature or request Welcome PR

Comments

@RPRX
Contributor

RPRX commented Oct 31, 2020

Is adding a map really that hard?

v2ray/v2ray-core#2237

The proxy group feature (advanced load balancing, fallback) seems to already be on the roadmap.

@RPRX RPRX added enhancement New feature or request Welcome PR labels Oct 31, 2020
@RPRX
Contributor Author

RPRX commented Oct 31, 2020

This feature mainly benefits the various "transparent-proxy-like" setups, such as TUN/TAP, routers, and Android.

@lucifer9
Member

Some apps and the like require always_real_ip to be enabled,
for example BOSS直聘 on iOS.
If you do use fake IP, be sure to remind users that when a domain will not connect they should try always_real_ip.
This cost me more than three days, and I had to learn to debug Rust before I figured out what was going on.

@yuhan6665
Contributor

I can try to simplify the commits from v2ray/v2ray-core#2237 and rebase them on master, though it might take me quite a while to understand all the related code.
@RPRX let me know if you have any suggestions.

@RPRX
Contributor Author

RPRX commented Nov 1, 2020

@yuhan6665

I think this work can be started now.

@RPRX
Contributor Author

RPRX commented Nov 1, 2020

@yuhan6665

Use sync.Map
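
For illustration only, a minimal sketch of what a sync.Map-backed fake-IP pool could look like: it hands out addresses from a reserved range and remembers which domain each fake IP stands for, so the inbound side can map a destination address back to its domain. All names here (FakePool, Allocate, Lookup) and the IPv4-only range handling are hypothetical and not taken from any existing v2ray code.

package fakedns

import (
	"encoding/binary"
	"net"
	"sync"
	"sync/atomic"
)

// FakePool hands out fake IPv4 addresses from a reserved range (for example
// 198.18.0.0/15) and keeps a two-way mapping between fake IPs and domains.
type FakePool struct {
	base    uint32   // first address of the pool
	next    uint32   // allocation counter, advanced atomically
	ipByDom sync.Map // domain -> net.IP
	domByIP sync.Map // ip string -> domain
}

// NewFakePool assumes an IPv4 CIDR; pool exhaustion is not handled in this sketch.
func NewFakePool(cidr string) (*FakePool, error) {
	_, ipnet, err := net.ParseCIDR(cidr)
	if err != nil {
		return nil, err
	}
	return &FakePool{base: binary.BigEndian.Uint32(ipnet.IP.To4())}, nil
}

// Allocate returns the fake IP already assigned to domain, or assigns a new one.
func (p *FakePool) Allocate(domain string) net.IP {
	if v, ok := p.ipByDom.Load(domain); ok {
		return v.(net.IP)
	}
	n := atomic.AddUint32(&p.next, 1)
	ip := make(net.IP, 4)
	binary.BigEndian.PutUint32(ip, p.base+n)
	if actual, loaded := p.ipByDom.LoadOrStore(domain, ip); loaded {
		return actual.(net.IP) // another goroutine assigned one first
	}
	p.domByIP.Store(ip.String(), domain)
	return ip
}

// Lookup recovers the original domain from a fake destination IP.
func (p *FakePool) Lookup(ip net.IP) (string, bool) {
	v, ok := p.domByIP.Load(ip.String())
	if !ok {
		return "", false
	}
	return v.(string), true
}

A transparent-proxy style inbound (TUN/TAP, router, Android VPN) would call Lookup on each intercepted destination address and route by the recovered domain; anything not found in the pool falls back to plain IP routing.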

@ghost

ghost commented Nov 1, 2020

@RPRX

v2ray/v2ray-core#2254
May also be interesting since you mentioned proxy fallback.

@RPRX
Contributor Author

RPRX commented Nov 2, 2020

@acc557

Regarding proxy groups, the other developers seem to have a different plan.

@badO1a5A90
Contributor

v2ray/v2ray-core#2254
May also be interesting since you mentioned proxy fallback.

v2ray/v2ray-core#1924
I think this one is more useful: it dynamically switches to the best-performing line.

@lucifer9
Member

lucifer9 commented Nov 3, 2020

One pitfall of deciding the outbound by speed test is services like online banking, which check whether the IP you connect from changes within a period of time; if it does, you have to log in again. The usual approach is to set a threshold and only switch when the speed difference between two outbounds exceeds it, and then you face the question of what threshold is appropriate... You could add more conditions, say binding connections to a certain IP to a certain outbound, and then you hit the problem that a bank has many IPs... Add yet more conditions? That gets far too complex.
This is just like the earlier fake IP case, where some domains only work after being added to always_real_ip. These issues do affect real-world use, but they really are low-probability; once a user hits one it is very hard to locate the real cause first, they will spend a lot of time ruling things out from other directions, and may never realize where the real cause lies.

@badO1a5A90
Contributor

One pitfall of deciding the outbound by speed test is services like online banking, which check whether the IP you connect from changes within a period of time; if it does, you have to log in again. The usual approach is to set a threshold and only switch when the speed difference between two outbounds exceeds it, and then you face the question of what threshold is appropriate... You could add more conditions, say binding connections to a certain IP to a certain outbound, and then you hit the problem that a bank has many IPs... Add yet more conditions? That gets far too complex.
This is just like the earlier fake IP case, where some domains only work after being added to always_real_ip. These issues do affect real-world use, but they really are low-probability; once a user hits one it is very hard to locate the real cause first, they will spend a lot of time ruling things out from other directions, and may never realize where the real cause lies.

Indeed. It is not just online banking; YouTube actually has a similar problem.
It can be solved at the root with routing rules, or by adding a threshold on the speed difference before switching.
Even better is to add per-outbound weights, to avoid frequent switching between lines of similar performance. (A high-performance line gets a high weight; if the balancer ever switches to a low-weight line, the primary must really have become unstable, and once things stabilize it switches back automatically. Weights also suit setups such as giving a stable but low-quota backup line a low weight and the usual low-latency line a high weight.)

@kslr
Contributor

kslr commented Nov 3, 2020

By guaranteeing that the same domain always uses the same route.
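
As a sketch of that idea (not actual v2ray code; stickyPicker and Pick are invented names), pinning could be as simple as hashing a domain onto the list of outbound tags the first time it is seen and remembering the choice afterwards:

package router

import (
	"hash/fnv"
	"sync"
)

// stickyPicker pins each domain to one outbound tag, so sites that check for a
// changing client IP (online banking, YouTube) always leave through the same exit.
type stickyPicker struct {
	tags   []string // candidate outbound tags
	pinned sync.Map // domain -> tag
}

// Pick returns the tag pinned to domain, choosing one by a stable hash the
// first time the domain is seen.
func (s *stickyPicker) Pick(domain string) string {
	if v, ok := s.pinned.Load(domain); ok {
		return v.(string)
	}
	h := fnv.New32a()
	h.Write([]byte(domain))
	tag := s.tags[h.Sum32()%uint32(len(s.tags))]
	s.pinned.Store(domain, tag)
	return tag
}

The trade-off is that a pinned domain no longer benefits from speed-based switching, which is exactly the point for IP-sensitive sites.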

@badO1a5A90
Contributor

badO1a5A90 commented Nov 3, 2020

@badO1a5A90 Even better. Have you tried this on the current master branch of the v2fly repo?

Yes, in fact I have been using this PR myself the whole time.

I remember that after an update a long while ago, the part of this PR touching the router's config.pb.go conflicted with the current codebase and needed to catch up with the changes. I am not very familiar with that code, so I kept the old version, and it does not seem to cause any problems (probably also because the router config has never changed functionally).

I have several lines. The primary line has low base latency, high bandwidth, and a large traffic quota, but occasionally drops packets and gets unstable. The backup line has a small quota, lower bandwidth, and somewhat higher latency, but is reliably stable. So I do use this PR (and I modified it slightly: the final comparison multiplies its score by a weight, to make sure switching only happens when a line is genuinely in bad shape).
For the sites that break when the exit IP changes, I force a fixed outbound via routing rules.
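
For illustration, a hedged sketch of the "multiply the score by a weight" tweak described above, assuming the strategy from v2ray/v2ray-core#1924 already produces a per-outbound score; the scored struct and the weights map are invented here and are not part of that PR.

package router

// scored is a hypothetical per-outbound result produced by the speed test.
type scored struct {
	tag   string
	score float64 // raw speed-test score, higher is better
}

// weightedPick multiplies each raw score by a configured weight before
// comparison, so a high-weight primary line only loses to a backup when it is
// genuinely degraded. Outbounds without an explicit weight compete at 1.0.
func weightedPick(results []scored, weights map[string]float64) string {
	best, bestScore := "", -1.0
	for _, r := range results {
		w, ok := weights[r.tag]
		if !ok {
			w = 1.0
		}
		if s := r.score * w; s > bestScore {
			best, bestScore = r.tag, s
		}
	}
	return best
}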

@ghost

ghost commented Nov 3, 2020

@badO1a5A90 Nice! I will have a try. Do you know which one is the last working version?

Maybe the fallback strategy can also be implemented as a special case inside this speedtest strategy.
Like, when setting interval: 0, only trigger the outbound test when the current outbound fails. That would usually be enough to keep the line working.

leaf has this failover mode
https://github.com/eycorsican/leaf/blob/master/README.zh.md#failover

@ghost

ghost commented Nov 3, 2020

By guaranteeing that the same domain always uses the same route.

Maybe this should be the default for all kinds of strategies.

@badO1a5A90
Contributor

@badO1a5A90 Nice! I will have a try. Do you know which one is the last working version?

09b81b7
This commit's app/router/config.pb.go conflicts with the earlier one. Since I do not really understand that part of the code, I have simply kept using the old app/router/config.pb.go while running that PR; all the other code can be synced to the latest with no other problems.
With this PR's algorithm, if a proxy server goes down or is taken offline its score becomes 0, so that case is detectable as well.

@ghost

ghost commented Nov 8, 2020

The only problem with v2ray/v2ray-core#1924 is that it only tests speed periodically. Even if the current outbound fails, it will not notice until the next testing cycle.

It can be improved by integrating it with v2ray/v2ray-core#2254, which schedules a speed test right after a number of failures have been observed on the current outbound.

I'd really appreciate it if someone could make an improved version of 1a574c1, as I am not familiar with golang :)

@beanslel

beanslel commented Nov 13, 2020

I disagree that v2ray/v2ray-core#2254 is not useful on its own. Its use-case is different from that of v2ray/v2ray-core#1924: the first is about preventing downtime, the second about maximizing performance. The speedtest feature is nice, but it should be optional. Personally I think preventing downtime is more important than maximizing performance. Users should be able to choose whichever best fits their use-case.

Let's say you have a high performance server, but the bandwidth is very expensive. So you want to use a medium performance server with cheap bandwidth as the primary server and the high performance server as backup. With the speedtest function that's not possible because the high performance server will always be first.

What about the following approach:

  1. The user can set the priority order of the servers in the fallback balancer.

  2. The balancer will always choose the highest order server if it's available. If the higher order server goes down (based on maxAttempts), switch to a lower order server. Then periodically try the higher order server and as soon as it comes back online, switch back to that one.

  3. Speedtest can be enabled as an optional feature. If enabled, a periodic check is made with speedtest that automatically orders the servers from high to low priority. Then the same approach as 2. is used.

  4. The user can also set all servers at the same priority level. In this case, any available server is selected round-robin style (just switch to the next available server after maxAttempts and stay there).

This way the user can adapt the balancer to best fit the use-case. My golang skills are not so great either; hopefully someone can help work on this.
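
As a rough sketch of points 1, 2 and 4 above (not tied to the real v2ray balancer interfaces; priorityBalancer, member and alive are hypothetical, and how an outbound gets marked dead after maxAttempts or re-probed periodically is left to a separate health checker):

package router

import "sort"

// member is a hypothetical balancer entry carrying a user-set priority;
// a lower value means more preferred.
type member struct {
	tag      string
	priority int
}

// priorityBalancer always serves from the best live priority level and uses
// round-robin among outbounds that share that level.
type priorityBalancer struct {
	members []member
	alive   map[string]bool // maintained elsewhere by probing / failure counting
	rrIndex int             // round-robin cursor within the chosen level
}

// Pick returns the selected outbound tag, or false if nothing is up. Because it
// re-evaluates the best live level on every call, it switches back automatically
// as soon as a higher-priority outbound is marked alive again.
func (b *priorityBalancer) Pick() (string, bool) {
	sort.SliceStable(b.members, func(i, j int) bool {
		return b.members[i].priority < b.members[j].priority
	})
	var level []string
	found, current := false, 0
	for _, m := range b.members {
		if !b.alive[m.tag] {
			continue
		}
		if !found {
			found, current = true, m.priority
		} else if m.priority != current {
			break // only the best live priority level competes
		}
		level = append(level, m.tag)
	}
	if len(level) == 0 {
		return "", false
	}
	b.rrIndex = (b.rrIndex + 1) % len(level)
	return level[b.rrIndex], true
}

The optional speedtest of point 3 would then just rewrite the priority values before Pick runs.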

@rurirei
Contributor

rurirei commented Nov 13, 2020

the issue is shifting away from its theme..

@RPRX
Contributor Author

RPRX commented Nov 13, 2020

the issue is shifting away from its theme..

That's fine; the discussion also covers proxy groups, so I'll just change the title.

@RPRX RPRX changed the title from "v2ray should add practical features like Fake IP" to "v2ray should add practical features like Fake IP and proxy groups" Nov 13, 2020
@rurirei
Contributor

rurirei commented Nov 13, 2020

Personally I think preventing downtime is more important than maximizing performance.

I'll work on #2254, and only on #2254.

@beanslel

I was also thinking of trying to port v2ray/v2ray-core#2254 to v2fly and then going from there. The original author seems inactive. Maybe it's better to start over from scratch?

@rurirei
Contributor

rurirei commented Nov 14, 2020

Has anyone run into this?

// go generate


go: downloading google.golang.org/protobuf v1.25.0
go: downloading github.com/golang/protobuf v1.4.3
go: downloading github.com/pires/go-proxyproto v0.3.1
go: downloading golang.org/x/sys v0.0.0-20201107080550-4d91cf3a1aaf
go: downloading github.com/golang/mock v1.4.4
go: downloading golang.org/x/tools v0.0.0-20191216052735-49a3e744a425
google.golang.org/protobuf/types/descriptorpb
google.golang.org/protobuf/types/pluginpb
google.golang.org/protobuf/reflect/protodesc
google.golang.org/protobuf/compiler/protogen
google.golang.org/protobuf/cmd/protoc-gen-go/internal_gengo
google.golang.org/protobuf/cmd/protoc-gen-go
go: downloading google.golang.org/grpc v1.33.2
go: finding module for package google.golang.org/grpc/cmd/protoc-gen-go-grpc
go: downloading google.golang.org/grpc/cmd/protoc-gen-go-grpc v1.0.1
go: found google.golang.org/grpc/cmd/protoc-gen-go-grpc in google.golang.org/grpc/cmd/protoc-gen-go-grpc v1.0.1
google.golang.org/grpc/cmd/protoc-gen-go-grpc
go: downloading github.com/gogo/protobuf v1.1.1
github.com/gogo/protobuf/protoc-gen-gogo/generator/internal/remap
github.com/gogo/protobuf/proto
github.com/gogo/protobuf/protoc-gen-gogo/descriptor
github.com/gogo/protobuf/protoc-gen-gogo/plugin
github.com/gogo/protobuf/gogoproto
github.com/gogo/protobuf/vanity
github.com/gogo/protobuf/protoc-gen-gogo/generator
github.com/gogo/protobuf/plugin/testgen
github.com/gogo/protobuf/plugin/defaultcheck
github.com/gogo/protobuf/plugin/embedcheck
github.com/gogo/protobuf/plugin/enumstringer
github.com/gogo/protobuf/plugin/compare
github.com/gogo/protobuf/plugin/description
github.com/gogo/protobuf/plugin/equal
github.com/gogo/protobuf/plugin/face
github.com/gogo/protobuf/plugin/gostring
github.com/gogo/protobuf/plugin/marshalto
github.com/gogo/protobuf/plugin/oneofcheck
github.com/gogo/protobuf/plugin/populate
github.com/gogo/protobuf/plugin/size
github.com/gogo/protobuf/plugin/stringer
github.com/gogo/protobuf/plugin/union
github.com/gogo/protobuf/plugin/unmarshal
github.com/gogo/protobuf/protoc-gen-gogo/grpc
github.com/gogo/protobuf/vanity/command
github.com/gogo/protobuf/protoc-gen-gofast
Make sure that you have `protoc` in your system path or current path. To download it, please visit https://github.com/protocolbuffers/protobuf/releases
exit status 1
proto.go:8: running "go": exit status 1
make: *** [build] Error 1
Makefile:2: recipe for target 'build' failed

@Loyalsoldier
Contributor

@rurirei

Make sure that you have `protoc` in your system path or current path. To download it, please visit https://github.com/protocolbuffers/protobuf/releases
exit status 1

Download the protoc binary executable and place it in your PATH.

@rurirei
Contributor

rurirei commented Nov 14, 2020

@rurirei

Make sure that you have `protoc` in your system path or current path. To download it, please visit https://github.com/protocolbuffers/protobuf/releases
exit status 1

Download the protoc binary executable and place it in your PATH.

Thanks, that solved it.

@beanslel

beanslel commented Nov 15, 2020

I will take a look and see if I can get it ready for a PR with basic functionality, but I need to familiarize myself more with the v2ray codebase, so it might take a while.

Thanks @rurirei ! I pulled your code into my fork and will try to build on it.

@kslr
Contributor

kslr commented Nov 16, 2020

Maybe exposing an external balancer API, which allows a third party to pull outbound tags and configure the balancer, would be the ultimate solution. Users could design whatever rules or strategies they want, using vmessping, traffic stats, or maybe some AI, to pick the best outbound.

And it should be easier than dynamically changing rules (#286).

#229

@rurirei
Contributor

rurirei commented Nov 16, 2020

Configuration released on branch balancer-dev @acc557

    {
        "tag": "balancer",
        "selector": [
            "a",
            "b",
            "c"
        ],
        "settings": {
            "strategy": "fallback"  // fallback or random; defaults to random
        },
        "strategySettings": {
            "maxAttempts": 10
        }
    }

@ghost

ghost commented Nov 17, 2020

Maybe exposing an external balancer API, which allows a third party to pull outbound tags and configure the balancer, would be the ultimate solution. Users could design whatever rules or strategies they want, using vmessping, traffic stats, or maybe some AI, to pick the best outbound.
And it should be easier than dynamically changing rules (#286).

#229

I think #229 is for getting the current user's browsing behavior, and it will be great when used together with a balancer update API.

But I think some basic strategies like fallback, speedtest, and speedtest-when-fail should be built into v2ray.

@ghost

ghost commented Dec 16, 2020

Sharing my patch for fallback + speed test.
It will only perform a speed test after 10 failures.

git remote add new1 https://github.com/jonas-burn/v2ray-core.git
git fetch new1
git cherry-pick -X ours  1a574c17bdc1a04c14374cc8cc4f866b11ca94d3
git am -3 fallback.patch
---
 app/proxyman/outbound/handler.go         |  5 ++
 app/router/balancing_optimal_strategy.go | 60 +++++++++++++++++++++++-
 2 files changed, 63 insertions(+), 2 deletions(-)

diff --git a/app/proxyman/outbound/handler.go b/app/proxyman/outbound/handler.go
index f899e6b8..75146c57 100644
--- a/app/proxyman/outbound/handler.go
+++ b/app/proxyman/outbound/handler.go
@@ -5,6 +5,7 @@ import (
 
 	"v2ray.com/core"
 	"v2ray.com/core/app/proxyman"
+	"v2ray.com/core/app/router"
 	"v2ray.com/core/common"
 	"v2ray.com/core/common/mux"
 	"v2ray.com/core/common/net"
@@ -51,6 +52,7 @@ type Handler struct {
 	streamSettings  *internet.MemoryStreamConfig
 	proxy           proxy.Outbound
 	outboundManager outbound.Manager
+	balancerManager router.BalancerManager
 	mux             *mux.ClientManager
 	uplinkCounter   stats.Counter
 	downlinkCounter stats.Counter
@@ -63,6 +65,7 @@ func NewHandler(ctx context.Context, config *core.OutboundHandlerConfig) (outbou
 	h := &Handler{
 		tag:             config.Tag,
 		outboundManager: v.GetFeature(outbound.ManagerType()).(outbound.Manager),
+		balancerManager: router.NewBalancerManager(),
 		uplinkCounter:   uplinkCounter,
 		downlinkCounter: downlinkCounter,
 	}
@@ -135,12 +138,14 @@ func (h *Handler) Dispatch(ctx context.Context, link *transport.Link) {
 		if err := h.mux.Dispatch(ctx, link); err != nil {
 			newError("failed to process mux outbound traffic").Base(err).WriteToLog(session.ExportIDToError(ctx))
 			common.Interrupt(link.Writer)
+			h.balancerManager.AddFailedAttempts()
 		}
 	} else {
 		if err := h.proxy.Process(ctx, link, h); err != nil {
 			// Ensure outbound ray is properly closed.
 			newError("failed to process outbound traffic").Base(err).WriteToLog(session.ExportIDToError(ctx))
 			common.Interrupt(link.Writer)
+			h.balancerManager.AddFailedAttempts()
 		} else {
 			common.Must(common.Close(link.Writer))
 		}
diff --git a/app/router/balancing_optimal_strategy.go b/app/router/balancing_optimal_strategy.go
index 8906a8eb..a3822119 100644
--- a/app/router/balancing_optimal_strategy.go
+++ b/app/router/balancing_optimal_strategy.go
@@ -11,7 +11,9 @@ import (
 	"net/url"
 	"sort"
 	"sync"
+	"sync/atomic"
 	"time"
+
 	"v2ray.com/core/common/net"
 	"v2ray.com/core/common/session"
 	"v2ray.com/core/transport"
@@ -21,6 +23,54 @@ import (
 	"v2ray.com/core/features/outbound"
 )
 
+type BalancerManager struct {
+	maxAttempts    int64
+	failedAttempts int64
+}
+
+// temp and will be removed
+var balancerManager BalancerManager
+
+func newBalancerManager() {
+	const defaultMaxAttempts = 20
+	balancerManager = BalancerManager{
+		failedAttempts: int64(0),
+		maxAttempts:    int64(defaultMaxAttempts),
+	}
+}
+
+func NewBalancerManager() BalancerManager {
+	return balancerManager
+}
+
+// GetFailedAttempts implements outbound.FailedAttemptsRecorder
+func (m *BalancerManager) getFailedAttempts() int64 {
+	return atomic.LoadInt64(&m.failedAttempts)
+}
+func (m *BalancerManager) GetFailedAttempts() int64 {
+	return balancerManager.getFailedAttempts()
+}
+
+// ResetFailedAttempts implements outbound.FailedAttemptsRecorder
+func (m *BalancerManager) resetFailedAttempts() int64 {
+	return atomic.SwapInt64(&m.failedAttempts, int64(0))
+}
+func (m *BalancerManager) ResetFailedAttempts() int64 {
+	return balancerManager.resetFailedAttempts()
+}
+
+// AddFailedAttempts implements outbound.FailedAttemptsRecorder
+func (m *BalancerManager) addFailedAttempts() int64 {
+	return atomic.AddInt64(&m.failedAttempts, int64(1))
+}
+func (m *BalancerManager) AddFailedAttempts() int64 {
+	return balancerManager.addFailedAttempts()
+}
+
+func init() {
+	newBalancerManager()
+}
+
 // OptimalStrategy pick outbound by net speed
 type OptimalStrategy struct {
 	timeout       time.Duration
@@ -102,6 +152,12 @@ type optimalStrategyTestResult struct {
 
 // periodic execute function
 func (s *OptimalStrategy) run() error {
+
+	if balancerManager.GetFailedAttempts() < balancerManager.maxAttempts {
+		newError(fmt.Sprintf("skip a test for not enough fails %d", balancerManager.GetFailedAttempts())).AtInfo().WriteToLog()
+		return nil
+	}
+
 	tags := s.tags
 	count := s.count
 
@@ -122,8 +178,8 @@ func (s *OptimalStrategy) run() error {
 	})
 
 	s.tag = results[0].tag
-	newError(fmt.Sprintf("Balance OptimalStrategy now pick detour [%s](score: %.2f) from %s", results[0].tag, results[0].score, tags)).AtInfo().WriteToLog()
-
+	newError(fmt.Sprintf("Balance OptimalStrategy now pick detour [%s](score: %.2f) from %s", results[0].tag, results[0].score, tags)).AtWarning().WriteToLog()
+	balancerManager.ResetFailedAttempts()
 	return nil
 }
 
-- 


Feel free to use it on your own build or make a PR with this.
