
Server side streaming #226

Closed
chhavi-peloton opened this issue Sep 21, 2020 · 6 comments
Labels
CLI (Issues for ghz CLI), question (Further information is requested)

Comments

@chhavi-peloton

Proto file(s)

rpc SubscribeChannels(SubscribeChannelsRequest) returns (stream Message);
rpc CreateMessage(CreateMessageRequest) returns (Message);

message CreateMessageRequest {
    string parent = 1; 
    Message message = 2;
}
message Message {
    string name = 1; 
    string message_type = 2; 
    map<string, string> data = 3;
}

message Subscription {
    string channel_name = 1;
    string last_message_name = 2; 
}
message SubscribeChannelsRequest {
    repeated Subscription subscriptions = 1;
}

Command line arguments / config

ghz --stream-interval=30s --call myService.SubscribeChannels \
    -d '{"subscriptions": [{"channel_name": "channels/test"}]}' \
    myHost:443

While that is happening, I have tried hitting the CreateMessage endpoint with grpcurl:

grpcurl -d '{"message": {"name": "channels/test/messages/m2", "data": {"test-body":"test-body"}}, "parent": "channels/test"}' myHost:443 myService.CreateMessage

Describe the bug
I get an error:

Summary:
  Count:	200
  Total:	80.01 s
  Slowest:	0 ns
  Fastest:	0 ns
  Average:	20.00 s
  Requests/sec:	2.50

Response time histogram:

Latency distribution:

Status code distribution:
  [DeadlineExceeded]   200 responses

Error distribution:
  [200]   rpc error: code = DeadlineExceeded desc = context deadline exceeded

To Reproduce

  1. Have the service up
  2. Run ghz command

Expected behavior
The behavior of my service is the following:
I have two endpoints. When I hit SubscribeChannels, it's supposed to hang forever.
I then send a CreateMessage request. I want to measure the latency between sending the CreateMessage request and the SubscribeChannels stream receiving the Message. Is this possible using the ghz tool, and if so, how do I accomplish it?

Environment
ghz: 0.59.0

bojand (Owner) commented Sep 24, 2020

Hello, I am not sure I understand the issue correctly, but if the SubscribeChannels call hangs and never returns a response, we would expect the result to be a DeadlineExceeded error. Note that the default timeout is 20 s, which lines up with the results you are seeing. You may try setting the timeout to 0 to indicate an indefinite call and see if that makes a difference. If I am understanding the context correctly, note that if the call hangs and never sends a response message back on the stream, the stream interval option is not very meaningful or useful here.
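For example (a sketch that reuses the command from the original report, not verified against that service), disabling the per-call deadline would look like:

ghz --timeout=0 --stream-interval=30s --call myService.SubscribeChannels \
    -d '{"subscriptions": [{"channel_name": "channels/test"}]}' \
    myHost:443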

bojand added the CLI and question labels Sep 24, 2020
mml21 commented Sep 28, 2020

We are seeing something similar: server streaming endpoints (where you get an initial state of the world followed by data updates) always end up with error codes, either from timeouts/deadline exceeded or from client cancellation.

Summary:
  Count:        100
  Total:        40.05 s
  Slowest:      0 ns
  Fastest:      0 ns
  Average:      20.00 s
  Requests/sec: 2.50

Response time histogram:

Latency distribution:

Status code distribution:
  [DeadlineExceeded]   100 responses

Error distribution:
  [100]   rpc error: code = DeadlineExceeded desc = context deadline exceeded

I've then tried specifying a timeout of 0 to indicate an indefinite call, but how do we get the final results after some time?
I was wondering what the right way is to make server streaming endpoints produce relevant metrics.

I've tried specifying the following options:

--timeout=0 --duration=30s --duration-stop=wait --total=50 never completes, as expected.

--timeout=0 --duration=30s --duration-stop=ignore --total=50 completes with no metric output:

Summary:
  Count:        0
  Total:        30.00 s
  Slowest:      0 ns
  Fastest:      0 ns
  Average:      0 ns
  Requests/sec: 0.00

Response time histogram:

Latency distribution:

--timeout=0 --duration=30s --duration-stop=close --total=50 always completes with:

Status code distribution:
  [Canceled]   50 responses

Error distribution:
  [50]   rpc error: code = Canceled desc = grpc: the client connection is closing

Would it be possible to add a new --duration-stop cancel option (or similar) that reports metrics from the server streaming call up to the requested cancellation?

bojand (Owner) commented Oct 2, 2020

Hello, thanks for the additional comments on this issue. I think I understand the request; it seems slightly different from the original question as I understood it, since the original comment said the server "hangs". But I see what you mean: ghz doesn't really provide a control mechanism to end the stream client-side. This is somewhat similar to #184, but here the control predicate is really just a timeout after which we close the stream in the ghz client. Is this a correct summary of the issue? My time is limited right now, but hopefully I can get to this in the near future.

mml21 commented Oct 2, 2020

Exactly, this is the correct summary. Thanks @bojand for your reply.

bojand (Owner) commented Jan 2, 2021

I think this should be addressed with the options introduced in 0.80.0.
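For instance (a sketch based on the subscribe command from the original report, combined with the 0.80.0 stream options named in the next comment), a client-side cap on the stream might look like:

ghz --timeout=0 --stream-call-duration=30s --call myService.SubscribeChannels \
    -d '{"subscriptions": [{"channel_name": "channels/test"}]}' \
    myHost:443

Here --stream-call-duration closes the stream from the client after the given duration; --stream-call-count is the analogous cap on message count.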

@bojand bojand closed this as completed Jan 2, 2021
currysunxu commented Mar 10, 2022

Hi @mml21 @bojand @chhavi-peloton

I have a similar issue when calling a gRPC server streaming method with the new options from the 0.80.0 release, e.g. --stream-call-duration 10000ms, --stream-call-count=10.

Summary:
  Count:        1
  Total:        20.00 s
  Slowest:      0 ns
  Fastest:      0 ns
  Average:      20.00 s
  Requests/sec: 0.05

Response time histogram:

Latency distribution:

Status code distribution:
  [DeadlineExceeded]   1 responses

Error distribution:
  [1]   rpc error: code = DeadlineExceeded desc = context deadline exceeded

However, when I use the BloomRPC tool with the same proto file to call the gRPC method, it succeeds: the server streaming response keeps returning data until I stop the connection (channel).

Attaching my proto and CLI for your reference.

Proto file(s):

service TrService {
  rpc GetTrafficData(TrRequest) returns (stream TrResponse) {}
}

ghz --insecure \
    --proto ./trafficproxy.proto \
    --call trafficproxy.TrafficService.GetTrafficData \
    --total=1 \
    --stream-call-duration 10000ms \
    --stream-call-count=10 \
    -d '{xxxxx}' \
    my.company.dns.com:10086

Could you please give me any suggestions?
Thanks in advance
