Seeking configuration for Read timeout to XXX after 120000 ms #880

Closed
pkolodziejczyk opened this issue Jun 28, 2021 · 29 comments
Comments

@pkolodziejczyk

Hi,

Version of otoroshi : 1.4.22 as docker image (1.4.22-jdk8)

I have a timeout and I don't know how to change its default configuration:

[error] otoroshi-error-handler - Server Error Read timeout to XXX after 120000 ms from 192.168.182.240 on GET XXXX - Te -> trailers;Host -> YYYY ;Accept -> /;Cookie -> ZZZZZ

java.util.concurrent.TimeoutException: Read timeout to XXX after 120000 ms

at play.shaded.ahc.org.asynchttpclient.netty.timeout.TimeoutTimerTask.expire(TimeoutTimerTask.java:43)

at play.shaded.ahc.org.asynchttpclient.netty.timeout.ReadTimeoutTimerTask.run(ReadTimeoutTimerTask.java:54)

at play.shaded.ahc.io.netty.util.HashedWheelTimer$HashedWheelTimeout.expire(HashedWheelTimer.java:663)

at play.shaded.ahc.io.netty.util.HashedWheelTimer$HashedWheelBucket.expireTimeouts(HashedWheelTimer.java:738)

at play.shaded.ahc.io.netty.util.HashedWheelTimer$Worker.run(HashedWheelTimer.java:466)

at java.lang.Thread.run(Thread.java:748)

As I understand it, it's a Play Framework configuration that needs to be changed, since Otoroshi uses it.

I have tried the solution proposed here:

playframework/play-ws#202

and put that configuration in the Otoroshi configuration, but it failed.
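
For reference, the keys discussed in that issue are the standard Play WS timeout settings. Here is a minimal sketch of what was attempted, written as a shell snippet appending them to a custom HOCON file (the file name is hypothetical, and whether Otoroshi's shaded AHC client actually honours these keys is exactly what is in question here):

# Sketch only: append the standard Play WS timeout keys to a custom config file.
# "otoroshi-custom.conf" is a placeholder name; Otoroshi may ignore these keys.
cat >> otoroshi-custom.conf <<'EOF'
play.ws.timeout.connection = 120 seconds
play.ws.timeout.idle = 300 seconds
play.ws.timeout.request = 300 seconds
EOF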

@mathieuancelin
Member

mathieuancelin commented Jun 29, 2021

Hi @pkolodziejczyk

you just have to go to your service settings page and change the values for the timeouts. Tell me if everything works fine!

(screenshot: Capture d’écran 2021-06-29 à 08 53 58)

@pkolodziejczyk
Author

pkolodziejczyk commented Jun 29, 2021

I have modified that configuration, with a different value for each field so I could identify which one was triggered.

(screenshot: configuration_docService)

But I got the same timeout in my log:

2021-06-29 09:47:47,421 [WARN] from otoroshi-circuit-breaker in otoroshi-actor-system-akka.actor.default-dispatcher-78494 - Error calling DocumentServiceSynapse : GET /document-service/api/v1/documents/XXXX/content (1/1 attemps) : Read timeout to svc-documentservice.actineo/10.43.85.254:8080 after 120000 ms
2021-06-29 09:47:47,485 [WARN] from otoroshi-circuit-breaker in otoroshi-actor-system-akka.actor.default-dispatcher-78494 - Retry failure (1 attemps) for DocumentServiceSynapse : GET /document-service/api/v1/documents/XXXX/content => Read timeout to svc-documentservice.actineo/10.43.85.254:8080 after 120000 ms

That's why I was trying the solution with the Otoroshi config file.

Note: I have tried to curl the URL that Otoroshi calls, from the command line on the Otoroshi pod (Docker), and it responds with an HTTP 200 (but it takes more than 120 sec).
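
A command along these lines reproduces that check, printing the HTTP status code and the total time (host and path are taken from the log excerpts in this thread; the document id stays a placeholder):

# Run from inside the Otoroshi pod: report status code and total request time.
curl -o /dev/null -s -w '%{http_code} %{time_total}s\n' \
  "http://svc-documentservice.actineo:8080/document-service/api/v1/documents/XXXX/content"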

I have also tried to disable the circuit-breaker section to let the request pass as is, but I got a Bad Gateway after 60 sec.

@mathieuancelin
Member

That's weird!

Is it possible for you to try the same thing with the latest 1.5.0 version (https://github.com/MAIF/otoroshi/releases/tag/v1.5.0-alpha.18) and see if it behaves the same?

@mathieuancelin
Member

hey @pkolodziejczyk

I fixed a bug (#883) that could cause the issue (not 100% sure as I can't reproduce it).
I will create a release soon so you can test it.

Some things you can try to fix your bug:

  • enable the flag Use new http client on your service; it will then use an HTTP client with finer-grained timeout tuning
  • change your timeout settings so that: Client global timeout == (Client attempts * Client call timeout) == (Client attempts * Client call and stream timeout + 10000) == (Client attempts * Client idle timeout). It may not look precise, but as I don't know what you are proxying, it should cover everything. Client call timeout is the timeout for the server to handle the request without response streaming, Client call and stream timeout is the timeout for the server to handle the request with response streaming, and Client idle timeout is how long we can wait on the connection without any bytes passing (see the sketch after this list)
  • save your service
  • restart otoroshi, and your settings should be used by otoroshi
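
To make the arithmetic concrete, here is a sketch of the client timeout values for a service with Client attempts = 1 (the JSON key names are assumptions based on the UI labels; only the relationship between the numbers matters):

# Sketch: values satisfying the rule above with 1 attempt.
# 300000 == 1 * 300000 == 1 * 290000 + 10000 == 1 * 300000
cat <<'EOF'
{
  "clientConfig": {
    "retries": 1,
    "globalTimeout": 300000,
    "callTimeout": 300000,
    "callAndStreamTimeout": 290000,
    "idleTimeout": 300000
  }
}
EOF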

@mathieuancelin
Member

hey @pkolodziejczyk

you can try https://github.com/MAIF/otoroshi/releases/tag/v1.5.0-alpha.19 and tell me how it behaves

@pkolodziejczyk
Author

I will try your solution.

Question: why the restart of the Otoroshi instance?

I ask because I didn't restart Otoroshi after saving the new configuration for my other modifications.

For the test of the new version, if my schedule allows it, I will test the image 1.5.0-alpha.19-jdk8

@pkolodziejczyk
Author

Exception in thread "pool-2-thread-1" java.lang.UnsupportedClassVersionError: com/clevercloud/biscuit/token/builder/Utils has been compiled by a more recent version of the Java Runtime (class file version 55.0), this version of the Java Runtime only recognizes class file versions up to 52.0

at java.lang.ClassLoader.defineClass1(Native Method)

at java.lang.ClassLoader.defineClass(ClassLoader.java:756)

When launching the image 1.5.0-alpha.19-jdk8

I will try with the 1.5.0-alpha.19-jdk11

@pkolodziejczyk
Author

container_1.5.0-alpha.19-jdk11.relyens.1.1.log

Looks like the new "checking otoroshi updates" feature is making Otoroshi crash in my environment. My pods don't have access to the internet.

@mathieuancelin mathieuancelin added this to the v1.5.0 milestone Jul 2, 2021
@mathieuancelin mathieuancelin self-assigned this Jul 2, 2021
@mathieuancelin
Member

@pkolodziejczyk the restart is because of a caching bug that should be fixed by now.

You can try the fixed version by trying the following docker image: maif/otoroshi:1.5.0-dev-1625219308

If you still have issues, can you try to add the following env. variable:

OTOROSHI_LOGGERS_OTOROSHI_CLIENT_CONFIG=DEBUG

and show me the full log?
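
For instance, if the image is run directly with docker, the variable can be passed like this (ports and the rest of the command line are only illustrative):

# Hypothetical invocation: enable the otoroshi-client-config debug logger.
docker run -p 8080:8080 \
  -e OTOROSHI_LOGGERS_OTOROSHI_CLIENT_CONFIG=DEBUG \
  maif/otoroshi:1.5.0-dev-1625219308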

About the JDK8 version, I wasn't aware of that one. I will drop JDK8 support a little bit sooner than I expected (September 2021), I guess.

About the update check crashing the image, it shouldn't be the case. The log can be a little bit raw, but everything should continue to work.

@pkolodziejczyk
Author

Will test it on Monday and give you the results.

Thanks for the time.

@mathieuancelin
Member

As there are no more comments, I'm closing this one, but it can be reopened.

@pkolodziejczyk
Author

pkolodziejczyk commented Aug 4, 2021

Sorry for the late reply.

I have tested with the docker image 1.5.0-beta.1

And I still have:

[] otoroshi-client-config - [circuitbreaker] using globalTimeout: 300004 milliseconds 
[] otoroshi-client-config - [circuitbreaker] using callTimeout - 1: 300000 milliseconds 
[] otoroshi-client-config - [circuitbreaker] using callTimeout - 2: 300000 milliseconds 
[] otoroshi-client-config - [gateway] using callAndStreamTimeout: 300001 milliseconds 
[] otoroshi-client-config - [circuitbreaker] using globalTimeout: 300004 milliseconds 
[] otoroshi-client-config - [circuitbreaker] no breaker rebuild 
[] otoroshi-client-config - [gateway] using callAndStreamTimeout: 300001 milliseconds 
[] otoroshi-circuit-breaker - Error calling DocumentServiceSynapse : GET /document-service/api/v1/documents/HDS-DANES0002-AFFR301300221/content (1/1 attempts) : Read timeout to svc-documentservice.actineo/10.43.192.166:8080 after 120000 ms 
[] otoroshi-circuit-breaker - Retry failure (1 attempts) for DocumentServiceSynapse : GET /document-service/api/v1/documents/HDS-DANES0002-AFFR301300221/content => Read timeout to svc-documentservice.actineo/10.43.192.166:8080 after 120000 ms 

I still don't understand where that 120000 ms comes from.

I haven't checked whether there are new values to fill in the interface in 1.5.0,
because of #897.

Edit:

I made a new test with the custom timeout settings section filled in:

[debug] otoroshi-client-config - [circuitbreaker] using globalTimeout: 300005 milliseconds
[debug] otoroshi-client-config - [circuitbreaker] using callTimeout - 1: 300001 milliseconds
[debug] otoroshi-client-config - [circuitbreaker] using callTimeout - 2: 300001 milliseconds
[debug] otoroshi-client-config - [gateway] using callAndStreamTimeout: 300002 milliseconds
[error] otoroshi-http-handler - error while talking with downstream service - no state in response header - {"reason":"no state in response header","expected_token_issuer":"Otoroshi","expected_token_challenge_version":"V1","expected_token_ttl_seconds":30,"expected_token_state":"9gm]XQvNh8B5sRW3!yl_zQcp&jYFrYk_Fyc!-8nWPv3wp]K/eb[cvOqhaUH4B(%+%CNII7uXGnpP6C_Qb/nkXYcmBcn*P9$PMgCjk3sR/MyvV!7FONpbK;rpPMafieDO","at":1628088301786,"at_sec":1628088301,"leeway":10,"token":{"extracted_state":"--","iat":-1,"exp":-1,"nbf":-1},"request":{"uri":"/swagger-ui/index.html?configUrl=/v3/api-docs/swagger-config","method":"GET","query":"configUrl=/v3/api-docs/swagger-config","headers":{"Host":"taskservice.synapse-pre.relyens.eu","Accept":"text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,*/*;q=0.8","Cookie":"otoroshi-session=eyJhbGciOiJIUzI1NiJ9.eyJkYXRhIjp7ImJvdXNyIjoidmF6QTZROEN0MFF0M2NTM2Y3ZW1kUnBmTHdUT3JUMFQzS3dWVFU0dzZrNVVqZGV1cjhYRlp3dDJGdGxZc29JUSJ9LCJleHAiOjE2MjgzNDM2NzgsIm5iZiI6MTYyODA4NDQ3OCwiaWF0IjoxNjI4MDg0NDc4fQ.4a-MX_K7rSYJnP9xVkSAQ-bJwvuPiWKJsZ8gZD0d8Ps","Referer":"http://sl-0-code2.sham.fr:1234/","X-Real-Ip":"172.19.4.13","User-Agent":"Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:90.0) Gecko/20100101 Firefox/90.0","Cache-Control":"max-age=0","Remote-Address":"172.19.14.38:35084","Sec-Fetch-Dest":"document","Sec-Fetch-Mode":"navigate","Sec-Fetch-Site":"cross-site","Sec-Fetch-User":"?1","Timeout-Access":"<function1>","Accept-Encoding":"gzip, deflate, br","Accept-Language":"fr,fr-FR;q=0.8,en-US;q=0.5,en;q=0.3","Raw-Request-URI":"/swagger-ui/index.html?configUrl=/v3/api-docs/swagger-config","X-Forwarded-For":"172.19.34.60, 172.19.4.13","Tls-Session-Info":"Session(1628088296224|SSL_NULL_WITH_NULL_NULL)","X-Forwarded-Host":"taskservice.synapse-pre.relyens.eu:443","X-Forwarded-Port":"443","X-Forwarded-Proto":"https","X-Forwarded-Server":"nk-ly-sircmpre01","Upgrade-Insecure-Requests":"1"}},"response":{"status":200,"raw_state_header":"--","headers":{"Date":"Wed, 04 Aug 202
1 14:45:01 GMT","Vary":"Access-Control-Request-Headers","Content-Type":"text/html","Accept-Ranges":"bytes","Last-Modified":"Mon, 02 Aug 2021 09:35:35 GMT","Content-Length":"1424","synapse-State-Resp":"--"}}}
[debug] otoroshi-client-config - [circuitbreaker] using globalTimeout: 300009 milliseconds
[debug] otoroshi-client-config - [circuitbreaker] using callTimeout - 1: 300008 milliseconds
[debug] otoroshi-client-config - [circuitbreaker] using callTimeout - 2: 300008 milliseconds
[debug] otoroshi-client-config - [gateway] using callAndStreamTimeout: 300005 milliseconds
[error] otoroshi-jobs-software-updates - Unable to check new otoroshi version: Request timeout to updates.otoroshi.io:443 after 10000 ms
[debug] otoroshi-client-config - [circuitbreaker] using globalTimeout: 300009 milliseconds
[debug] otoroshi-client-config - [circuitbreaker] no breaker rebuild
[debug] otoroshi-client-config - [gateway] using callAndStreamTimeout: 300005 milliseconds
[warn] otoroshi-circuit-breaker - Error calling DocumentServiceSynapse : GET /document-service/api/v1/documents/HDS-DANES0002-AFFR301300221/content (1/1 attempts) : Read timeout to svc-documentservice.actineo/10.43.192.166:8080 after 120000 ms
[warn] otoroshi-circuit-breaker - Retry failure (1 attempts) for DocumentServiceSynapse : GET /document-service/api/v1/documents/HDS-DANES0002-AFFR301300221/content => Read timeout to svc-documentservice.actineo/10.43.192.166:8080 after 120000 ms

Still that 120000 ms value in place.

@mathieuancelin
Member

Ok, so I guess the issue is fixed; you now have an issue with the Otoroshi protocol, right @pkolodziejczyk (#898)?

@pkolodziejczyk
Author

No. The timeout at 120 000 ms is still here.

The other issue concerns another call on another service (#898).

I am just waiting for the next testable version with (#897) to check whether the configuration is right in 1.5. All values are at 300 00x ms in 1.4.22 and the log shows those values before the timeout:

Read timeout to svc-documentservice.actineo/10.43.192.166:8080 after 120000 ms

@mathieuancelin
Member

Can you remind me of your client configuration?

@mathieuancelin mathieuancelin reopened this Aug 6, 2021
@mathieuancelin
Member

Ok, the log about 120000 ms is still there, but Otoroshi sends a response back before 120000 ms, right? It should return after 30008 ms.

@mathieuancelin
Member

@pkolodziejczyk I found the issue, I had totally forgotten about this one. Just set the env var PROXY_IDLE_TIMEOUT=3600000 and everything should operate as intended. The next version will have better default settings for this value.
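
On a plain docker run, that could look like the following (the command line is only illustrative and the image tag is assumed from the versions mentioned in this thread; in Kubernetes the variable would go in the pod's env section):

# Hypothetical invocation: raise the proxy idle timeout to one hour (in ms).
docker run -p 8080:8080 \
  -e PROXY_IDLE_TIMEOUT=3600000 \
  maif/otoroshi:1.5.0-beta.1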

@pkolodziejczyk
Author

Thanks, will try it.

@pkolodziejczyk
Author

I have tried with 1.5.0-beta.1

16:09:31 [debug] otoroshi-client-config - [circuitbreaker] using globalTimeout: 300011 milliseconds 
16:09:31 [debug] otoroshi-client-config - [circuitbreaker] no breaker rebuild 
16:09:31 [debug] otoroshi-client-config - [gateway] using callAndStreamTimeout: 300009 milliseconds 
16:11:43 [debug] otoroshi-client-config - [circuitbreaker] using globalTimeout: 30000 milliseconds 
16:11:43 [debug] otoroshi-client-config - [circuitbreaker] no breaker rebuild 
16:11:43 [debug] otoroshi-client-config - [gateway] using callAndStreamTimeout: 120000 milliseconds 
16:11:43 [debug] otoroshi-http-handler - error while talking with downstream service - no state in response header - {"reason":"no state in response header","expected_token_issuer":"Otoroshi","expected_token_challenge_version":"V1","expected_token_ttl_seconds":30,"expected_token_state":".3JY5tBVc.tfZkpJGdz&&c/nwWU]3l3-xV0%2bPp;)[bCr:Y7:a*5HpzO0sMC/0;2)]OJ.-CEr$%rO0Y4qPU;Rn<jPGU0=!b3HWczH*c5Q2iglvfYWg%I-)AP%llsN51","at":1628259103757,"at_sec":1628259103,"leeway":10,"token":{"extracted_state":"--","iat":-1,"exp":-1,"nbf":-1},"request":{"uri":"/v1/claims/?page=0&per_page=20","method":"GET","query":"page=0&per_page=20","headers":{"Host":"api.synapse-pre.relyens.eu","Accept":"*/*","X-Real-Ip":"172.19.4.13","User-Agent":"PostmanRuntime/7.28.2","Authorization":"Basic d3doZXIwOW5ld3Jvd2F6Njp5cGdqOWQzYWI2M2JxbGVob3RqNnlhczZ5c2VqdHV6ajEzNjQ4YzBiY2RqbTRicWM5NTQweXprb2t5enY0ZzE1","Postman-Token":"b4be6431-1be8-4614-bd1c-b0041d78879a","Remote-Address":"172.19.14.38:58104","Timeout-Access":"<function1>","Accept-Encoding":"gzip, deflate, br","Raw-Request-URI":"/v1/claims/?page=0&per_page=20","X-Forwarded-For":"192.168.181.162, 172.19.4.13","Tls-Session-Info":"Session(1628255271986|SSL_NULL_WITH_NULL_NULL)","X-Forwarded-Host":"api.synapse-pre.relyens.eu:443","X-Forwarded-Port":"443","X-Forwarded-Proto":"https","X-Forwarded-Server":"nk-ly-sircmpre01"}},"response":{"status":400,"raw_state_header":"--","headers":{"Date":"Fri, 06 Aug 2021 14:11:43 GMT","Connection":"close","Content-Type":"text/html;charset=utf-8","Content-Length":"1181","Content-Language":"en"}}} 

The Otoroshi response after 1 min 0.42 sec:

The server was not able to produce a timely response to your request.
Please try again in a short while!

Downstream service log:

2021-08-06 16:09:31.519  INFO 1 --- [nio-8080-exec-5] c.sham.services.document.AbstractLogger  : [RestInterceptor]START Appel REST DocumentServiceController.getDocumentContent, arguments=[HDS-DANES0002-AFFR301300221]
2021-08-06 16:09:31.519  INFO 1 --- [nio-8080-exec-5] c.sham.services.document.AbstractLogger  : [StrategyInterceptor]START Strategy DocumentHDSStrategy.getContentByIdentity, arguments=[HDS-DANES0002-AFFR301300221]
2021-08-06 16:09:31.520  INFO 1 --- [nio-8080-exec-5] c.s.s.document.service.HDSService        : init SecuredDocumentService with ShareFolder /data/PRE_DDM/securedFile/
2021-08-06 16:11:37.479  INFO 1 --- [nio-8080-exec-5] c.sham.services.document.AbstractLogger  : [StrategyInterceptor]END Service DocumentHDSStrategy.getContentByIdentity <125959 ms>
2021-08-06 16:11:38.172  INFO 1 --- [nio-8080-exec-5] c.sham.services.document.AbstractLogger  : [StrategyInterceptor]START Strategy DocumentHDSStrategy.cleanUp, arguments=[FileContent(file=/data/PRE_DDM/securedFile/EclipseMAT.zip, filename=EclipseMAT.zip)]
2021-08-06 16:11:38.172  INFO 1 --- [nio-8080-exec-5] c.s.s.d.strategy.DocumentHDSStrategy     : On supprime le fichier après utilisation /data/PRE_DDM/securedFile/EclipseMAT.zip
2021-08-06 16:11:38.180  INFO 1 --- [nio-8080-exec-5] c.sham.services.document.AbstractLogger  : [StrategyInterceptor]END Service DocumentHDSStrategy.cleanUp <8 ms>
2021-08-06 16:11:38.180  INFO 1 --- [nio-8080-exec-5] c.sham.services.document.AbstractLogger  : [RestInterceptor]END Appel REST DocumentServiceController.getDocumentContent <126661 ms>

@pkolodziejczyk
Author

pkolodziejczyk commented Aug 6, 2021

More logs with 1.5.0-beta.2

(image attachment)

I got a timeout at 120 seconds:

<html>

<head>
	<title>504 Gateway Time-out</title>
</head>

<body>
	<center>
		<h1>504 Gateway Time-out</h1>
	</center>
	<hr>
	<center>nginx</center>
</body>

</html>

But I am not sure whether it's Otoroshi or our proxy in front of it. (I asked my OPS team to check.)

Note: the new interface with direct access to the APIKEY is really nice.

@mathieuancelin
Member

@pkolodziejczyk everything should be fixed with beta.3, which is out now

@pkolodziejczyk
Author

pkolodziejczyk commented Aug 6, 2021

Same behaviour with Otoroshi version 1.5.0-beta.3.

A second call after 60 seconds and a timeout at 120 seconds:

17:27:26 [debug] otoroshi-client-config - [circuitbreaker] using globalTimeout: 300011 milliseconds
17:27:26 [debug] otoroshi-client-config - [circuitbreaker] using callTimeout - 1: 300010 milliseconds
17:27:26 [debug] otoroshi-client-config - [circuitbreaker] using callTimeout - 2: 300010 milliseconds
17:28:25 [debug] otoroshi-client-config - [gateway] using callAndStreamTimeout: 300009 milliseconds
17:28:25 [debug] otoroshi-client-config - [circuitbreaker] using globalTimeout: 300011 milliseconds
17:28:25 [debug] otoroshi-client-config - [circuitbreaker] no breaker rebuild
17:29:31 [debug] otoroshi-client-config - [gateway] using callAndStreamTimeout: 300009 milliseconds
17:29:31 [debug] otoroshi-client-config - [circuitbreaker] using globalTimeout: 300011 milliseconds
17:29:31 [debug] otoroshi-client-config - [circuitbreaker] no breaker rebuild
17:30:31 [debug] otoroshi-client-config - [gateway] using callAndStreamTimeout: 300009 milliseconds
17:30:31 [debug] otoroshi-client-config - [circuitbreaker] using globalTimeout: 300011 milliseconds
17:30:31 [debug] otoroshi-client-config - [circuitbreaker] no breaker rebuild
17:30:31 [debug] otoroshi-client-config - [gateway] using callAndStreamTimeout: 300009 milliseconds
17:33:25 [warn] otoroshi-circuit-breaker - Error calling DocumentServiceSynapse : GET /document-service/api/v1/documents/HDS-DANES0002-AFFR301300221/content (1/1 attempts) : Circuit Breaker Timed out.
17:34:31 [warn] otoroshi-circuit-breaker - Error calling DocumentServiceSynapse : GET /document-service/api/v1/documents/HDS-DANES0002-AFFR301300221/content (1/1 attempts) : Circuit Breaker Timed out.

My downstream service didn't respond in time, so the warning is logical. But it should be the only one, since "Client attempts" is set to 1 and "C.breaker retry delay" is set to 300005 ms.

Second test (but I am not sure whether the first warning isn't left over from the previous test):

[debug] otoroshi-client-config - [circuitbreaker] using globalTimeout: 300011 milliseconds
[debug] otoroshi-client-config - [circuitbreaker] no breaker rebuild
[debug] otoroshi-client-config - [gateway] using callAndStreamTimeout: 300009 milliseconds
[warn] otoroshi-circuit-breaker - Error calling DocumentServiceSynapse : GET /document-service/api/v1/documents/HDS-DANES0002-AFFR301300221/content (1/1 attempts) : Circuit Breaker Timed out.
[debug] otoroshi-client-config - [circuitbreaker] using globalTimeout: 300011 milliseconds
[debug] otoroshi-client-config - [circuitbreaker] no breaker rebuild
[debug] otoroshi-client-config - [gateway] using callAndStreamTimeout: 300009 milliseconds
[warn] otoroshi-circuit-breaker - Retry failure (1 attempts) for DocumentServiceSynapse : GET /document-service/api/v1/documents/HDS-DANES0002-AFFR301300221/content => Circuit Breaker Timed out.
[warn] otoroshi-circuit-breaker - Retry failure (1 attempts) for DocumentServiceSynapse : GET /document-service/api/v1/documents/HDS-DANES0002-AFFR301300221/content => Circuit Breaker Timed out.
[warn] otoroshi-circuit-breaker - Retry failure (1 attempts) for DocumentServiceSynapse : GET /document-service/api/v1/documents/HDS-DANES0002-AFFR301300221/content => Circuit Breaker Timed out.

I am doing more testing, but there is something about the double call that I can't explain from the configuration.

@mathieuancelin
Member

@pkolodziejczyk I am having trouble reproducing the issue. Here is a service descriptor that calls a service generating long responses. In the video I call the service on a beta.3 version

timeout.mov

service-descriptor-my-service-1628265802741.json.zip
and nothing happens

@pkolodziejczyk
Author

Here is my test:

OTOROSHI_880_TEST_01

service-descriptor-TaskServiceSynapse-1628507477222.zip

The timeout is probably on my OPS team's side, but I am less sure about the double trigger.

@pkolodziejczyk
Author

OTOROSHI_880_TEST_02.mp4

A better format to look at here.

And with useAkkaHttpClient set to true.

@pkolodziejczyk
Author

I got a response from my Operations team.
The timeout was on their side, and now everything is working perfectly (no timeout, no retry).

Thanks a lot for your support and your product.

@KpTn6974

Hi Mathieu,

I'm one of the architects working with Patrick.
I'm not totally comfortable with installing a beta in our production environment to correct this behaviour.

Do you have an idea of when you will release a new version with this correction?

Depending on the date, we will decide whether to install the beta or wait for the release.

@mathieuancelin
Member

Hi @pkolodziejczyk and @KpTn6974

sorry for the delay, I was on vacation. I'm glad the issue is fixed on your side.

I understand that you don't want to deploy a beta version in production. I have no specific date in mind for the release as I am quite busy right now, but I guess it will be before the end of 2021. However, I can assure you that the beta versions are quite stable and should not introduce breaking changes before the release, only bug fixes and new documentation.

@KpTn6974

Hi @mathieuancelin,

Thanks for your reply. And no worries about the delay, vacations are a good thing too ;-)

We will discuss it internally, and I think we will deploy the beta to our testing and preproduction environments to validate it before going to production.
