Retry.retryBackOff() should throw a meaningful exception on retries exhausted #2052
Comments
Historically we've avoided exposing new exception types, but instead allow identification of reactor-specific errors via utility methods on `Exceptions`. Internally we could have a dedicated private type that extends `IllegalStateException`. This could eventually be phased out in 3.4 in favor of simply extending … In any case, the recommended way to detect retry exhaustion would be to invoke the `Exceptions.isRetryExhausted` utility. That could be one way of addressing this issue, what do you think @trantienduchn?
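As a rough sketch of what that detection could look like from user code, assuming the utility lands as `Exceptions.isRetryExhausted(Throwable)` on `reactor.core.Exceptions` (as proposed here and described later in this thread):

```java
import java.time.Duration;

import reactor.core.Exceptions;
import reactor.core.publisher.Flux;

public class DetectRetryExhausted {

    public static void main(String[] args) {
        Flux.<String>error(new IllegalStateException("remote service unavailable"))
            // give up after 3 retries with exponential backoff starting at 100ms
            .retryBackoff(3, Duration.ofMillis(100))
            // detect exhaustion via the utility method instead of matching the message
            .onErrorResume(e -> Exceptions.isRetryExhausted(e)
                    ? Flux.just("fallback")
                    : Flux.error(e))
            .blockLast();
    }
}
```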
Note there is ongoing work on retry in general in #2040, and this issue could be moved to core and added to the list that PR attempts to fix (the addons' own retry builder will probably be deprecated at some point).
Thank you! Good initiative! I'm waiting for it, so the …
It might be available earlier, like 3.3.4. I'm not sure what's the best future for …
Damn, sorry it's in core, you're right!
This big commit is a large refactor of the `retryWhen` operator in order to add several features. Fixes #1978 Fixes #1905 Fixes #2063 Fixes #2052 Fixes #2064

* Expose more state to the `retryWhen` companion (#1978)

This introduces a `retryWhen` variant based on a `Retry` functional interface. This "function" deals not with a Flux of `Throwable` but with a Flux of `RetrySignal`. This allows the retry function to check whether there was some success (onNext) since the last retry attempt, in which case the current attempt can be interpreted as if it were the first ever error. This is especially useful for cases where exponential backoff delays should be reset, for long-lived sequences that only see intermittent bursts of errors (transient errors). We take that opportunity to offer a builder for such a function that can take transient errors into account.

* The `Retry` builders

Inspired by the `Retry` builder in addons, we introduce two classes: `RetrySpec` and `RetryBackoffSpec`. We name them Spec and not Builder because they don't require calling a `build()` method. Rather, each configuration step produces A) a new instance (copy on write) that B) is by itself already a `Retry`. The `Retry` + `xxxSpec` approach allows us to offer two standard strategies that both support transient error handling, while letting users write their own strategy (either as a standalone `Retry` concrete implementation, or as a builder/spec that builds one). Both specs allow handling of `transientErrors(boolean)`, which when true relies on the extra state exposed by the `RetrySignal`. For the simple case, this means that the remaining number of retries is reset in case of an onNext. For the exponential case, this means the retry delay is reset to its minimum after an onNext (#1978). Additionally, the introduction of the specs allows us to add more features and support some features on more combinations, see below.

* `filter` exceptions (#1905)

Previously we could only filter exceptions to be retried on the simple long-based `retry` methods. With the specs we can `filter` in both the immediate and exponential backoff retry strategies.

* Add pre/post attempt hooks (#2063)

The specs let the user configure two types of pre/post hooks. Note that if the retry attempt is denied (e.g. we've reached the maximum number of attempts), these hooks are NOT executed. Synchronous hooks (`doBeforeRetry` and `doAfterRetry`) are side effects that should not block for too long and are executed right before and right after the retry trigger is sent by the companion publisher. Asynchronous hooks (`doBeforeRetryAsync` and `doAfterRetryAsync`) are composed into the companion publisher which generates the triggers, and they both delay the emission of said trigger in a non-blocking and asynchronous fashion. Having pre and post hooks allows a user to better manage the order in which these asynchronous side effects should be performed.

* Retry exhausted meaningful exception (#2052)

The `Retry` function implemented by both specs throws a `RuntimeException` with a meaningful message when the configured maximum number of attempts is reached. That exception can be pinpointed by calling the utility `Exceptions.isRetryExhausted` method. For further customization, users can replace that default with their own custom exception via `onRetryExhaustedThrow`. The BiFunction lets the user access the Spec, which has public final fields that can be used to produce a meaningful message.
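To illustrate how these pieces are meant to combine, here is a hedged sketch of a backoff spec wired into `retryWhen` (import paths are those of later releases, and the `TimeoutException` filter and fallback are placeholder assumptions; method names follow the description above):

```java
import java.time.Duration;
import java.util.concurrent.TimeoutException;

import reactor.core.publisher.Flux;
import reactor.util.retry.Retry;

public class RetryBackoffSpecExample {

    public static void main(String[] args) {
        Flux<String> calls = Flux.error(new TimeoutException("upstream timed out"));

        calls.retryWhen(Retry
                // exponential backoff spec: at most 5 attempts, starting at 100ms
                .backoff(5, Duration.ofMillis(100))
                .maxBackoff(Duration.ofSeconds(2))
                // only retry errors considered transient (#1905)
                .filter(e -> e instanceof TimeoutException)
                // reset the backoff delay once an onNext has been seen (#1978)
                .transientErrors(true)
                // synchronous hook, executed right before each retry trigger (#2063)
                .doBeforeRetry(rs -> System.out.println("retrying after: " + rs.failure()))
                // replace the default retries-exhausted exception (#2052)
                .onRetryExhaustedThrow((spec, rs) -> new IllegalStateException(
                        "gave up after " + rs.totalRetries() + " retries", rs.failure())))
             .onErrorResume(e -> Flux.just("fallback"))
             .blockLast();
    }
}
```

In the last step the `BiFunction` also receives the spec itself, whose public final fields (such as the configured maximum number of attempts) could be folded into the message.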
* Ensure retry hooks completion is taken into account (#2064)

The old `retryBackoff` would internally use a `flatMap`, which can cause issues. The Spec functions use `concatMap`.

/!\ CAVEAT: This commit deprecates all of the `retryBackoff` methods as well as the original `retryWhen` (based on a `Throwable` companion publisher) in order to introduce the new `RetrySignal`-based signature. The use of the explicit `Retry` type lifts any ambiguity when using the Spec, but using a lambda instead will raise some ambiguity at call sites of `retryWhen`. We deem that acceptable given that the migration is quite easy (turn `e -> whatever(e)` into `(Retry) rs -> whatever(rs.failure())`). Furthermore, `retryWhen` is an advanced operator, and we expect most uses to be combined with the retry builder in reactor-extra, which lifts the ambiguity itself.
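As a sketch of the migration path the caveat alludes to, a deprecated `retryBackoff` call site could move over to the new signature roughly as follows (the failing source is a placeholder; behavioural details such as the exact exhaustion exception are governed by the spec defaults described above):

```java
import java.time.Duration;

import reactor.core.publisher.Flux;
import reactor.util.retry.Retry;

public class RetryWhenMigration {

    static Flux<String> remoteCalls() {
        // placeholder source that always fails, just to have something to retry
        return Flux.error(new IllegalStateException("boom"));
    }

    public static void main(String[] args) {
        // Before: deprecated long/Duration based backoff operator
        Flux<String> before = remoteCalls().retryBackoff(3, Duration.ofMillis(50));

        // After: the same intent expressed through the spec, passed to the new retryWhen(Retry) overload
        Flux<String> after = remoteCalls().retryWhen(Retry.backoff(3, Duration.ofMillis(50)));
    }
}
```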
I see that `Retry.retryBackOff()` does throw an `IllegalStateException` ("Retries exhausted...") if all retries have failed.

Motivation

It's hard to catch the retries-exhausted error because the exception type is too common. I have to rely on the exception message, and that's not good practice.

Desired solution

A new type dedicated to that error? Like `RetriesExhaustedException`, maybe...
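For context, a sketch of the workaround the report describes, where exhaustion can only be spotted by matching the generic exception type and its message (message prefix assumed from the quote above):

```java
import java.io.IOException;
import java.time.Duration;

import reactor.core.publisher.Flux;

public class CatchRetriesExhaustedByMessage {

    public static void main(String[] args) {
        Flux.<String>error(new IOException("remote call failed"))
            .retryBackoff(3, Duration.ofMillis(100))
            .onErrorResume(e ->
                    // fragile: the only handle on "retries exhausted" is the generic
                    // IllegalStateException plus its message text
                    e instanceof IllegalStateException && e.getMessage() != null
                            && e.getMessage().startsWith("Retries exhausted")
                        ? Flux.just("fallback after retries were exhausted")
                        : Flux.error(e))
            .blockLast();
    }
}
```

A dedicated type, or the `Exceptions.isRetryExhausted` utility discussed above, would let this branch be expressed without string matching.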