
support grpc protocol proxy #793

Merged
31 commits merged into easegress-io:main on Jan 13, 2023

Conversation

@sodaRyCN (Contributor) commented Sep 12, 2022

[Image: 企业微信截图_16633259236814 (screenshot of the proxy flow)]
This PR adds a proxy so that Easegress can support the gRPC protocol.
The overall flow is shown in the figure above: traffic from a gRPC client is forwarded to the downstream gRPC server according to the configured pipeline rules. Note that gRPC streaming is a full-duplex protocol, so traffic can also be pushed from the server side to the client side; the figure illustrates the proxying flow rather than emphasising the full-duplex behaviour.
The main modification is the addition of the following modules:

  1. pkg/object/grpcserver
  • grpcserver: implements the TrafficGate interface and is managed by the Supervisor. It manages the life cycle of the runtime.
  • runtime: wraps the grpc.Server listener and listens for events to manage the server's life cycle, such as restart, shutdown, and hot update.
  • mux: implements grpc.UnknownServiceHandler to process all traffic entering the runtime and provides the same routing capability as object/httpserver/mux (see the sketch after this list).
  2. pkg/filters/grpcproxy
  • proxy: implements the filter interface.
  • pool: performs the actual traffic forwarding. Like the pool in filters/proxy, it supports both integration with a service registry and statically configured downstream instance addresses. Whether to use the connection pool is configurable, because some gRPC servers manage physical connections themselves and do not consider gRPC gateway scenarios. Nacos, for example, detects when a client goes offline and actively closes the connection on the server side; since gRPC is based on HTTP/2, a single connection from Easegress to the real server may be shared by multiple real clients, so when the server actively closes that connection, the other, still-healthy clients are affected.
  • load balancer: provides load balancers such as round-robin, random, and IPHash, and adds a forward-proxy balancer. Because grpc.Server does not support tunnelling, the target address is obtained by parsing headers.
  3. pkg/protocols/grpcprot
  • request: implements protocols.Request and wraps grpc.ServerStream.
  • response: implements protocols.Response and wraps status.Status.
  • header: wraps metadata.MD, the metadata carried in the gRPC incoming/outgoing contexts.
  • trailer: sent by the gRPC server at the end of the data when the stream is closed. It has the same structure and methods as Header, so it is implemented as an alias of Header.
  4. pkg/util/connectionpool
  • pool: defines the connection pool interface.
  • grpc/codec: implements grpc.Codec instead of encoding.Codec, because encoding.Codec would change the header: with encoding.Codec, the content-type goes from application/grpc to application/grpc+<encoding.Codec.Name()>.
  • grpc/pool: implements the pool interface. Locks are segmented by target address, and the connections for each address are managed with a producer-consumer pattern.
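
To make the mux and codec items above concrete, here is a minimal, self-contained Go sketch (illustrative only, not the code in this PR; type names, the handler body, and the port are assumptions): every call for an unregistered service reaches a single grpc.UnknownServiceHandler, and a pass-through codec implementing the legacy grpc.Codec interface keeps the content-type as plain application/grpc.

package main

import (
	"fmt"
	"log"
	"net"

	"google.golang.org/grpc"
	"google.golang.org/grpc/codes"
	"google.golang.org/grpc/metadata"
	"google.golang.org/grpc/status"
)

// rawFrame carries an undecoded gRPC message frame.
type rawFrame struct{ payload []byte }

// rawCodec implements the legacy grpc.Codec interface (Marshal/Unmarshal/String)
// and passes message bytes through untouched, so the proxy never needs the
// protobuf descriptors of the services it forwards. An encoding.Codec would make
// grpc-go advertise "application/grpc+<Name()>" as the content-type, which is
// exactly what the grpc/codec module above avoids.
type rawCodec struct{}

func (rawCodec) Marshal(v interface{}) ([]byte, error) {
	f, ok := v.(*rawFrame)
	if !ok {
		return nil, fmt.Errorf("rawCodec: expected *rawFrame, got %T", v)
	}
	return f.payload, nil
}

func (rawCodec) Unmarshal(data []byte, v interface{}) error {
	f, ok := v.(*rawFrame)
	if !ok {
		return fmt.Errorf("rawCodec: expected *rawFrame, got %T", v)
	}
	f.payload = data
	return nil
}

func (rawCodec) String() string { return "raw" }

// handleStream plays the role of the mux: every request for an unknown service
// lands here, with the full method name and incoming metadata available for routing.
func handleStream(srv interface{}, stream grpc.ServerStream) error {
	fullMethod, ok := grpc.MethodFromServerStream(stream)
	if !ok {
		return status.Errorf(codes.Internal, "failed to get method from stream")
	}
	md, _ := metadata.FromIncomingContext(stream.Context())
	log.Printf("routing %s, metadata: %v", fullMethod, md)

	// A real proxy would pick an upstream here (pipeline rules, load balancer,
	// connection pool) and copy frames in both directions until both sides
	// finish; this sketch just echoes a single frame back to the client.
	frame := &rawFrame{}
	if err := stream.RecvMsg(frame); err != nil {
		return err
	}
	return stream.SendMsg(frame)
}

func main() {
	lis, err := net.Listen("tcp", ":8080") // port is illustrative
	if err != nil {
		log.Fatal(err)
	}
	s := grpc.NewServer(
		grpc.CustomCodec(rawCodec{}), // deprecated, but keeps the content-type unchanged
		grpc.UnknownServiceHandler(handleStream),
	)
	log.Fatal(s.Serve(lis))
}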

@localvar requested review from xxx7xxxx, localvar and suchen-sci and removed the request for localvar on September 13, 2022 00:38
2.use emptypb instead of anypb
@codecov-commenter commented Sep 13, 2022

Codecov Report

Base: 75.89% // Head: 73.14% // Decreases project coverage by -2.74% ⚠️

Coverage data is based on head (006a5dc) compared to base (dd88bd8).
Patch coverage: 48.50% of modified lines in pull request are covered.

Additional details and impacted files
@@            Coverage Diff             @@
##             main     #793      +/-   ##
==========================================
- Coverage   75.89%   73.14%   -2.75%     
==========================================
  Files         115      130      +15     
  Lines       13464    14969    +1505     
==========================================
+ Hits        10218    10949     +731     
- Misses       2674     3400     +726     
- Partials      572      620      +48     
Impacted Files Coverage Δ
pkg/filters/grpcproxy/codec.go 0.00% <0.00%> (ø)
pkg/protocols/grpcprot/grpc.go 0.00% <0.00%> (ø)
pkg/protocols/grpcprot/response.go 0.00% <0.00%> (ø)
pkg/protocols/httpprot/response.go 83.06% <0.00%> (-0.89%) ⬇️
pkg/filters/grpcproxy/pool.go 9.22% <9.22%> (ø)
pkg/filters/grpcproxy/loadbalance.go 16.80% <16.80%> (ø)
pkg/protocols/grpcprot/header.go 43.63% <43.63%> (ø)
pkg/filters/grpcproxy/proxy.go 48.61% <48.61%> (ø)
pkg/object/grpcserver/mux.go 51.98% <51.98%> (ø)
pkg/protocols/grpcprot/request.go 63.41% <63.41%> (ø)
... and 12 more


2.corrected according to review comments
@sodaRyCN closed this Sep 14, 2022
@haoel reopened this Sep 15, 2022
@jiekun commented Sep 15, 2022

Would you mind linking an issue or summarizing your change in the PR description rather than leaving it empty?

@sodaRyCN (Contributor, Author) replied:

> Would you mind linking an issue or summarizing your change in the PR description rather than leaving it empty?

Yeah, I will add a description of my changes as soon as possible.

Comment on lines 280 to 288
// GetFirstInHeader returns the first value for a given key.
// k is converted to lowercase before searching in md.
func (r *Request) GetFirstInHeader(k string) string {
v := r.header.md.Get(k)
if v == nil {
return ""
}
return v[0]
}
Collaborator:

The name of this function is a little confusing. If it is hard to find a good name, I propose removing it and using r.RawHeader().Get(k) directly.

Contributor Author:

The value structure in the gRPC header is []string, so sometimes it is necessary to obtain only the first one (for reference: scg). It would be better to use r.RawHeader().GetFirst(k).
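
For reference, a minimal sketch (assumed package layout and receiver names, mirroring the snippet above, not the PR's final code) of such a GetFirst helper on the header wrapper, given that gRPC metadata values are []string:

package grpcprot

import "google.golang.org/grpc/metadata"

// Header wraps metadata.MD; declared here only so the sketch is self-contained.
type Header struct {
	md metadata.MD
}

// GetFirst returns the first value for key k, or "" if the key is absent.
// metadata.MD.Get lowercases the key before the lookup, so callers need not.
func (h *Header) GetFirst(k string) string {
	if vs := h.md.Get(k); len(vs) > 0 {
		return vs[0]
	}
	return ""
}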

return lb.Servers[hash.Sum32()%uint32(len(lb.Servers))]
}

type forwardLoadBalancer struct {
Collaborator:

why use the word forward?

Contributor Author:

It behaves like a forward proxy: the target address is parsed from the request headers rather than chosen from a configured server list.
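
As an illustration of that behaviour, a minimal sketch (the struct name matches the diff above, but the method name, header key, and package layout are assumptions) of a balancer that forwards to whatever target the request header names, instead of choosing among configured servers:

package grpcproxy

import (
	"fmt"

	"google.golang.org/grpc/metadata"
)

// forwardLoadBalancer behaves like a forward proxy: it does not pick a backend
// itself, it reads the real target address out of the request metadata.
type forwardLoadBalancer struct {
	targetHeader string // header carrying the target address, e.g. "x-forwarded-target" (illustrative)
}

// ChooseServer returns the address parsed from the incoming metadata, or an
// error when the header is missing, so the caller can fail the request.
func (lb *forwardLoadBalancer) ChooseServer(md metadata.MD) (string, error) {
	if vs := md.Get(lb.targetHeader); len(vs) > 0 && vs[0] != "" {
		return vs[0], nil
	}
	return "", fmt.Errorf("no target address found in header %q", lb.targetHeader)
}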

Comment on lines +397 to +399
// Explicitly *do not Close* c2sErrChan and s2cErrChan, otherwise the select below will not terminate.
// Channels do not have to be closed, it is just a control flow mechanism, see
// https://groups.google.com/forum/#!msg/golang-nuts/pZwdYRGxCIk/qpbHxRRPJdUJ
Collaborator:

It is better to close the channels in the goroutine created by sp.forwardE2E, just before it exits.

Contributor Author:

Closing the channel is not always a good option in this scenario. After calling ClientStream.CloseSend, the client only indicates that it will not send more data to the server on that ClientStream; the server can take a long time to handle business before returning io.EOF, or it can keep sending data in one direction, and the client can also continue processing. So C2S and S2C are not always in sync. In this scenario, the select in the for loop would always pick the closed channel and burn a lot of CPU.
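
For illustration (standalone, not code from the PR), a minimal Go sketch of the problem described above: once one direction's error channel is closed, receiving from it is always ready, so the select keeps firing on that case instead of blocking until real work arrives:

package main

import (
	"fmt"
	"time"
)

func main() {
	c2sErrChan := make(chan error) // client-to-server copy loop errors
	s2cErrChan := make(chan error) // server-to-client copy loop errors

	close(c2sErrChan) // pretend the C2S direction finished early and closed its channel

	spins := 0
	deadline := time.After(10 * time.Millisecond)
loop:
	for {
		select {
		case <-c2sErrChan: // a closed channel is always ready to receive (yields nil)
			spins++
		case err := <-s2cErrChan: // S2C may legitimately stay busy for a long time
			_ = err
			break loop
		case <-deadline:
			break loop
		}
	}
	// spins is typically huge: the loop burns CPU on the closed channel.
	fmt.Printf("select fired %d times on the closed channel in 10ms\n", spins)
}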


// RequestMatcherSpec describes RequestMatcher
type RequestMatcherSpec struct {
Policy string `yaml:"policy" jsonschema:"omitempty,enum=,enum=general,enum=ipHash,enum=headerHash,enum=random"`
Contributor:

Ditto.

@xxx7xxxx merged commit 65f7024 into easegress-io:main on Jan 13, 2023