From 089a96f23f50f8437b92b00525f38335780a1e34 Mon Sep 17 00:00:00 2001
From: =?UTF-8?q?Simon=20Basl=C3=A9?=
Date: Thu, 28 Nov 2019 19:39:24 +0100
Subject: [PATCH 1/3] Refactor of retryWhen to switch to a Spec/Builder model (#1979)

This big commit is a large refactor of the `retryWhen` operator in order to add several features.

Fixes #1978
Fixes #1905
Fixes #2063
Fixes #2052
Fixes #2064

* Expose more state to `retryWhen` companion (#1978)

This introduces a retryWhen variant based on a `Retry` functional interface. This "function" deals
not with a Flux of `Throwable` but with a Flux of `RetrySignal`. This allows the retry function to
check whether there was some success (onNext) since the last retry attempt, in which case the
current attempt can be interpreted as if it was the first ever error.

This is especially useful for cases where exponential backoff delays should be reset, for
long-lived sequences that only see intermittent bursts of errors (transient errors).

We take that opportunity to offer a builder for such a function that can take transient errors
into account.

* The `Retry` builders

Inspired by the `Retry` builder in addons, we introduce two classes: `RetrySpec` and
`RetryBackoffSpec`. We name them Spec and not Builder because they don't require calling a
`build()` method. Rather, each configuration step produces A) a new instance (copy on write) that
B) is by itself already a `Retry`.

The `Retry` + `xxxSpec` approach allows us to offer two standard strategies that both support
transient error handling, while letting users write their own strategy (either as a standalone
`Retry` concrete implementation, or as a builder/spec that builds one).

Both specs support `transientErrors(boolean)`, which when true relies on the extra state exposed
by the `RetrySignal`. For the simple case, this means that the remaining number of retries is
reset in case of onNext. For the exponential case, this means the retry delay is reset to the
minimum after an onNext (#1978).

Additionally, the introduction of the specs allows us to add more features and support some
features on more combinations, see below.

* `filter` exceptions (#1905)

Previously we could only filter exceptions to be retried on the simple long-based `retry`
methods. With the specs we can `filter` in both immediate and exponential backoff retry
strategies.

* Add pre/post attempt hooks (#2063)

The specs let the user configure two types of pre/post hooks. Note that if the retry attempt is
denied (e.g. we've reached the maximum number of attempts), these hooks are NOT executed.

Synchronous hooks (`doBeforeRetry` and `doAfterRetry`) are side effects that should not block for
too long and are executed right before and right after the retry trigger is sent by the companion
publisher.

Asynchronous hooks (`doBeforeRetryAsync` and `doAfterRetryAsync`) are composed into the companion
publisher which generates the triggers, and they both delay the emission of said trigger in a
non-blocking and asynchronous fashion. Having pre and post hooks allows a user to better manage
the order in which these asynchronous side effects should be performed.

* Retry exhausted meaningful exception (#2052)

The `Retry` functions implemented by both specs throw a `RuntimeException` with a meaningful
message when the configured maximum number of attempts is reached. That exception can be
pinpointed by calling the `Exceptions.isRetryExhausted` utility method.

For further customization, users can replace that default with their own custom exception via
`onRetryExhaustedThrow`.
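As an illustration only, the sketch below shows such a customization on a `RetrySpec`; `source` is
assumed to be an existing `Flux<String>`, the exception and its message are made up, and
`maxAttempts` is assumed to be one of the spec's public final fields:

    Flux<String> retried = source.retryWhen(
            Retry.max(3)
                 .onRetryExhaustedThrow((spec, signal) ->
                         // hypothetical custom exception carrying the last failure as cause
                         new IllegalStateException(
                                 "Gave up after " + signal.totalRetries() + " retries (max "
                                         + spec.maxAttempts + ")",
                                 signal.failure())));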
The BiFunction lets user access the Spec, which has public final fields that can be used to produce a meaningful message. * Ensure retry hooks completion is taken into account (#2064) The old `retryBackoff` would internally use a `flatMap`, which can cause issues. The Spec functions use `concatMap`. /!\ CAVEAT This commit deprecates all of the retryBackoff methods as well as the original `retryWhen` (based on Throwable companion publisher) in order to introduce the new `RetrySignal` based signature. The use of `Retry` explicit type lifts any ambiguity when using the Spec but using a lambda instead will raise some ambiguity at call sites of `retryWhen`. We deem that acceptable given that the migration is quite easy (turn `e -> whatever(e)` to `(Retry) rs -> whatever(rs.failure())`). Furthermore, `retryWhen` is an advanced operator, and we expect most uses to be combined with the retry builder in reactor-extra, which lifts the ambiguity itself. --- docs/asciidoc/apdx-operatorChoice.adoc | 5 +- docs/asciidoc/apdx-reactorExtra.adoc | 3 + docs/asciidoc/coreFeatures.adoc | 62 +- docs/asciidoc/faq.adoc | 36 +- docs/asciidoc/snippetRetryWhenRetry.adoc | 24 +- .../main/java/reactor/core/Exceptions.java | 48 ++ .../java/reactor/core/publisher/Flux.java | 112 +++- .../reactor/core/publisher/FluxRetryWhen.java | 143 ++--- .../java/reactor/core/publisher/Mono.java | 111 +++- .../reactor/core/publisher/MonoRetryWhen.java | 10 +- .../marbles/retryWhenSpecForFlux.svg | 246 ++++++++ .../marbles/retryWhenSpecForMono.svg | 209 +++++++ .../util/retry/ImmutableRetrySignal.java | 62 ++ .../main/java/reactor/util/retry/Retry.java | 139 +++++ .../reactor/util/retry/RetryBackoffSpec.java | 566 ++++++++++++++++++ .../java/reactor/util/retry/RetrySpec.java | 381 ++++++++++++ .../java/reactor/core/ExceptionsTest.java | 29 + .../publisher/FluxRetryPredicateTest.java | 3 +- .../core/publisher/FluxRetryWhenTest.java | 254 ++++++-- .../core/publisher/MonoRetryWhenTest.java | 48 +- .../guide/GuideDebuggingExtraTests.java | 5 +- .../test/java/reactor/guide/GuideTests.java | 108 +++- .../util/retry/RetryBackoffSpecTest.java | 390 ++++++++++++ .../reactor/util/retry/RetrySpecTest.java | 404 +++++++++++++ 24 files changed, 3163 insertions(+), 235 deletions(-) create mode 100644 reactor-core/src/main/java/reactor/core/publisher/doc-files/marbles/retryWhenSpecForFlux.svg create mode 100644 reactor-core/src/main/java/reactor/core/publisher/doc-files/marbles/retryWhenSpecForMono.svg create mode 100644 reactor-core/src/main/java/reactor/util/retry/ImmutableRetrySignal.java create mode 100644 reactor-core/src/main/java/reactor/util/retry/Retry.java create mode 100644 reactor-core/src/main/java/reactor/util/retry/RetryBackoffSpec.java create mode 100644 reactor-core/src/main/java/reactor/util/retry/RetrySpec.java create mode 100644 reactor-core/src/test/java/reactor/util/retry/RetryBackoffSpecTest.java create mode 100644 reactor-core/src/test/java/reactor/util/retry/RetrySpecTest.java diff --git a/docs/asciidoc/apdx-operatorChoice.adoc b/docs/asciidoc/apdx-operatorChoice.adoc index 0ef7dbd36d..23176066fc 100644 --- a/docs/asciidoc/apdx-operatorChoice.adoc +++ b/docs/asciidoc/apdx-operatorChoice.adoc @@ -227,9 +227,10 @@ I want to deal with: ** by falling back: *** to a value: `onErrorReturn` *** to a `Publisher` or `Mono`, possibly different ones depending on the error: `Flux#onErrorResume` and `Mono#onErrorResume` -** by retrying: `retry` +** by retrying... 
+*** ...with a simple policy (max number of attempts): `retry()`, `retry(long)` *** ...triggered by a companion control Flux: `retryWhen` -*** ... using a standard backoff strategy (exponential backoff with jitter): `retryBackoff` +*** ...using a standard backoff strategy (exponential backoff with jitter): `retryWhen(Retry.backoff(...))` * I want to deal with backpressure "errors" (request max from upstream and apply the strategy when downstream does not produce enough request)... ** by throwing a special `IllegalStateException`: `Flux#onBackpressureError` diff --git a/docs/asciidoc/apdx-reactorExtra.adoc b/docs/asciidoc/apdx-reactorExtra.adoc index 6ac8d883bc..d48cde4b2d 100644 --- a/docs/asciidoc/apdx-reactorExtra.adoc +++ b/docs/asciidoc/apdx-reactorExtra.adoc @@ -73,6 +73,9 @@ Since 3.2.0, one of the most advanced retry strategies offered by these utilitie also part of the `reactor-core` main artifact directly. Exponential backoff is available as the `Flux#retryBackoff` operator. +Since 3.3.4, the `Retry` builder is offered directly in core and has a few more possible +customizations, being based on a `RetrySignal` that encapsulates additional state than the +error. [[extra-schedulers]] == Schedulers diff --git a/docs/asciidoc/coreFeatures.adoc b/docs/asciidoc/coreFeatures.adoc index d4c2f4ad8e..814a5da84b 100644 --- a/docs/asciidoc/coreFeatures.adoc +++ b/docs/asciidoc/coreFeatures.adoc @@ -879,8 +879,8 @@ There is a more advanced version of `retry` (called `retryWhen`) that uses a "`c created by the operator but decorated by the user, in order to customize the retry condition. -The companion `Flux` is a `Flux` that gets passed to a `Function`, the sole -parameter of `retryWhen`. As the user, you define that function and make it return a new +The companion `Flux` is a `Flux` that gets passed to a `Retry` strategy/function, +supplied as the sole parameter of `retryWhen`. As the user, you define that function and make it return a new `Publisher`. Retry cycles go as follows: . Each time an error happens (giving potential for a retry), the error is emitted into the @@ -902,11 +902,13 @@ companion would effectively swallow an error. Consider the following way of emul Flux flux = Flux .error(new IllegalArgumentException()) // <1> .doOnError(System.out::println) // <2> - .retryWhen(companion -> companion.take(3)); // <3> + .retryWhen(() -> // <3> + companion -> companion.take(3)); // <4> ---- <1> This continuously produces errors, calling for retry attempts. <2> `doOnError` before the retry lets us log and see all failures. -<3> Here, we consider the first three errors as retry-able (`take(3)`) and then give up. +<3> The `Retry` function is passed as a `Supplier` +<4> Here, we consider the first three errors as retry-able (`take(3)`) and then give up. ==== In effect, the preceding example results in an empty `Flux`, but it completes successfully. Since @@ -916,9 +918,61 @@ In effect, the preceding example results in an empty `Flux`, but it completes su Getting to the same behavior involves a few additional tricks: include::snippetRetryWhenRetry.adoc[] +TIP: One can use the builders exposed in `Retry` to achieve the same in a more fluent manner, as +well as more finely tuned retry strategies: `errorFlux.retryWhen(Retry.max(3));`. + TIP: You can use similar code to implement an "`exponential backoff and retry`" pattern, as shown in the <>. 
+The core-provided `Retry` builders, `RetrySpec` and `RetryBackoffSpec`, both allow advanced customizations like: + +- setting the `filter(Predicate)` for the exceptions that can trigger a retry +- modifying such a previously set filter through `modifyErrorFilter(Function)` +- triggering a side effect like logging around the retry trigger (ie for backoff before and after the delay), provided the retry is validated (`doBeforeRetry()` and `doAfterRetry()` are additive) +- triggering an asynchronous `Mono` around the retry trigger, which allows to add asynchronous behavior on top of the base delay but thus further delay the trigger (`doBeforeRetryAsync` and `doAfterRetryAsync` are additive) +- customizing the exception in case the maximum number of attempts has been reached, through `onRetryExhaustedThrow(BiFunction)`. +By default, `Exceptions.retryExhausted(...)` is used, which can be distinguished with `Exceptions.isRetryExhausted(Throwable)` +- activating the handling of _transient errors_ (see below) + +Transient error handling in the `Retry` specs makes use of `RetrySignal#totalRetriesInARow()`: to check whether to retry or not and to compute the retry delays, the index used is an alternative one that is reset to 0 each time an `onNext` is emitted. +This has the consequence that if a re-subscribed source generates some data before failing again, previous failures don't count toward the maximum number of retry attempts. +In the case of exponential backoff strategy, this also means that the next attempt will be back to using the minimum `Duration` backoff instead of a longer one. +This can be especially useful for long-lived sources that see sporadic bursts of errors (or _transient_ errors), where each burst should be retried with its own backoff. + +==== +[source,java] +---- +AtomicInteger errorCount = new AtomicInteger(); // <1> +AtomicInteger transientHelper = new AtomicInteger(); +Flux transientFlux = Flux.generate(sink -> { + int i = transientHelper.getAndIncrement(); + if (i == 10) { // <2> + sink.next(i); + sink.complete(); + } + else if (i % 3 == 0) { // <3> + sink.next(i); + } + else { + sink.error(new IllegalStateException("Transient error at " + i)); // <4> + } +}) + .doOnError(e -> errorCount.incrementAndGet()); + +transientFlux.retryWhen(Retry.max(2).transientErrors(true)) // <5> + .blockLast(); +assertThat(errorCount).hasValue(6); // <6> +---- +<1> We will count the number of errors in the retried sequence. +<2> We `generate` a source that has bursts of errors. It will successfully complete when the counter reaches 10. +<3> If the `transientHelper` atomic is at a multiple of `3`, we emit `onNext` and thus end the current burst. +<4> In other cases we emit an `onError`. That's 2 out of 3 times, so bursts of 2 `onError` interrupted by 1 `onNext`. +<5> We use `retryWhen` on that source, configured for at most 2 retry attempts, but in `transientErrors` mode. +<6> At the end, the sequence reaches `onNext(10)` and completes, after `6` errors have been registered in `errorCount`. +==== + +Without the `transientErrors(true)`, the configured maximum attempt of `2` would be reached by the second burst and the sequence would fail after having emitted `onNext(3)`. 
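+
+As a further minimal sketch (reusing the `transientFlux` defined above), the same
+`transientErrors(true)` switch can also be combined with the backoff builder, in which case each
+new burst of errors restarts the delays from the configured minimum backoff:
+
+====
+[source,java]
+----
+transientFlux.retryWhen(Retry
+        .backoff(2, Duration.ofMillis(100))
+        .transientErrors(true))
+    .blockLast();
+----
+====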
+ === Handling Exceptions in Operators or Functions In general, all operators can themselves contain code that potentially trigger an diff --git a/docs/asciidoc/faq.adoc b/docs/asciidoc/faq.adoc index b69737b0cf..c5c931d4f8 100644 --- a/docs/asciidoc/faq.adoc +++ b/docs/asciidoc/faq.adoc @@ -176,32 +176,34 @@ an unstable state and is not likely to immediately recover from it. So blindly retrying immediately is likely to produce yet another error and add to the instability. -Since `3.2.0.RELEASE`, Reactor comes with such a retry baked in: `Flux.retryBackoff`. +Since `3.3.4.RELEASE`, Reactor comes with a builder for such a retry baked in: `Retry.backoff`. -The following example shows how to implement an exponential backoff with `retryWhen`. +The following example showcases a simple use of the builder, with hooks logging message right before +and after the retry attempt delays. It delays retries and increases the delay between each attempt (pseudocode: delay = attempt number * 100 milliseconds): ==== [source,java] ---- +AtomicInteger errorCount = new AtomicInteger(); Flux flux = -Flux.error(new IllegalArgumentException()) - .retryWhen(companion -> companion - .doOnNext(s -> System.out.println(s + " at " + LocalTime.now())) // <1> - .zipWith(Flux.range(1, 4), (error, index) -> { // <2> - if (index < 4) return index; - else throw Exceptions.propagate(error); - }) - .flatMap(index -> Mono.delay(Duration.ofMillis(index * 100))) // <3> - .doOnNext(s -> System.out.println("retried at " + LocalTime.now())) // <4> - ); +Flux.error(new IllegalStateException("boom")) + .doOnError(e -> { // <1> + errorCount.incrementAndGet(); + System.out.println(e + " at " + LocalTime.now()); + }) + .retryWhen(Retry + .backoff(3, Duration.ofMillis(100)).jitter(0d) // <2> + .doAfterRetry(rs -> System.out.println("retried at " + LocalTime.now())) // <3> + .onRetryExhaustedThrow((spec, rs) -> rs.failure()) // <4> + ); ---- -<1> We log the time of errors. -<2> We use the `retryWhen` + `zipWith` trick to propagate the error after three -retries. -<3> Through `flatMap`, we cause a delay that depends on the attempt's index. -<4> We also log the time at which the retry happens. +<1> We will log the time of errors emitted by the source and count them. +<2> We configure an exponential backoff retry with at most 3 attempts and no jitter. +<3> We also log the time at which the retry happens. +<4> By default an `Exceptions.retryExhausted` exception would be thrown, with the last `failure()` as a cause. +Here we customize that to directly emit the cause as `onError`. 
==== When subscribed to, this fails and terminates after printing out the following: diff --git a/docs/asciidoc/snippetRetryWhenRetry.adoc b/docs/asciidoc/snippetRetryWhenRetry.adoc index 4503057204..e1260b8d03 100644 --- a/docs/asciidoc/snippetRetryWhenRetry.adoc +++ b/docs/asciidoc/snippetRetryWhenRetry.adoc @@ -1,20 +1,20 @@ ==== [source,java] ---- +AtomicInteger errorCount = new AtomicInteger(); Flux flux = -Flux.error(new IllegalArgumentException()) - .retryWhen(companion -> companion - .zipWith(Flux.range(1, 4), // <1> - (error, index) -> { // <2> - if (index < 4) return index; // <3> - else throw Exceptions.propagate(error); // <4> - }) - ); + Flux.error(new IllegalArgumentException()) + .doOnError(e -> errorCount.incrementAndGet()) + .retryWhen(() -> companion -> // <1> + companion.map(rs -> { // <2> + if (rs.totalRetries() < 3) return rs.totalRetries(); // <3> + else throw Exceptions.propagate(rs.failure()); // <4> + }) + ); ---- -<1> Trick one: use `zip` and a `range` of "number of acceptable retries + 1". -<2> The `zip` function lets you count the retries while keeping track of the original -error. -<3> To allow for three retries, indexes before 4 return a value to emit. +<1> `retryWhen` expects a `Supplier` +<2> The companion emits `RetrySignal` objects, which bear number of retries so far and last failure +<3> To allow for three retries, we consider indexes < 3 and return a value to emit (here we simply return the index). <4> In order to terminate the sequence in error, we throw the original exception after these three retries. ==== diff --git a/reactor-core/src/main/java/reactor/core/Exceptions.java b/reactor-core/src/main/java/reactor/core/Exceptions.java index a274f42bbe..a95ddf3d60 100644 --- a/reactor-core/src/main/java/reactor/core/Exceptions.java +++ b/reactor-core/src/main/java/reactor/core/Exceptions.java @@ -16,6 +16,7 @@ package reactor.core; +import java.time.Duration; import java.util.ArrayList; import java.util.Arrays; import java.util.Collections; @@ -23,8 +24,11 @@ import java.util.Objects; import java.util.concurrent.RejectedExecutionException; import java.util.concurrent.atomic.AtomicReferenceFieldUpdater; +import java.util.function.Supplier; +import reactor.core.publisher.Flux; import reactor.util.annotation.Nullable; +import reactor.util.retry.Retry; /** * Global Reactor Core Exception handling and utils to operate on. @@ -275,6 +279,19 @@ public static RejectedExecutionException failWithRejected(String message) { return new ReactorRejectedExecutionException(message); } + /** + * Return a new {@link RuntimeException} that represents too many failures on retry. + * This nature can be detected via {@link #isRetryExhausted(Throwable)}. + * The cause of the last retry attempt is passed and stored as this exception's {@link Throwable#getCause() cause}. + * + * @param message the message + * @param cause the cause of the last retry attempt that failed (or null if irrelevant) + * @return a new {@link RuntimeException} representing retry exhaustion due to too many attempts + */ + public static RuntimeException retryExhausted(String message, @Nullable Throwable cause) { + return cause == null ? new RetryExhaustedException(message) : new RetryExhaustedException(message, cause); + } + /** * Check if the given exception represents an {@link #failWithOverflow() overflow}. 
* @param t the {@link Throwable} error to check @@ -324,6 +341,18 @@ public static boolean isMultiple(@Nullable Throwable t) { return t instanceof CompositeException; } + /** + * Check a {@link Throwable} to see if it indicates too many retry attempts have failed. + * Such an exception can be created via {@link #retryExhausted(long, Throwable)} or + * {@link #retryExhausted(Duration)}. + * + * @param t the {@link Throwable} to check, {@literal null} always yields {@literal false} + * @return true if the Throwable is an instance representing retry exhaustion, false otherwise + */ + public static boolean isRetryExhausted(@Nullable Throwable t) { + return t instanceof RetryExhaustedException; + } + /** * Check a {@link Throwable} to see if it is a traceback, as created by the checkpoint operator or debug utilities. * @@ -667,6 +696,25 @@ static final class OverflowException extends IllegalStateException { } } + /** + * A specialized {@link IllegalStateException} to signify a {@link Flux#retryWhen(Retry) retry} + * has failed (eg. after N attempts, or a timeout). + * + * @see #retryExhausted(long, Throwable) + * @see #retryExhausted(Duration) + * @see #isRetryExhausted(Throwable) + */ + static final class RetryExhaustedException extends IllegalStateException { + + RetryExhaustedException(String message) { + super(message); + } + + RetryExhaustedException(String message, Throwable cause) { + super(message, cause); + } + } + static class ReactorRejectedExecutionException extends RejectedExecutionException { ReactorRejectedExecutionException(String message, Throwable cause) { diff --git a/reactor-core/src/main/java/reactor/core/publisher/Flux.java b/reactor-core/src/main/java/reactor/core/publisher/Flux.java index 355e3b1fef..5e58238573 100644 --- a/reactor-core/src/main/java/reactor/core/publisher/Flux.java +++ b/reactor-core/src/main/java/reactor/core/publisher/Flux.java @@ -49,6 +49,7 @@ import org.reactivestreams.Publisher; import org.reactivestreams.Subscriber; import org.reactivestreams.Subscription; + import reactor.core.CorePublisher; import reactor.core.CoreSubscriber; import reactor.core.Disposable; @@ -74,6 +75,7 @@ import reactor.util.function.Tuple7; import reactor.util.function.Tuple8; import reactor.util.function.Tuples; +import reactor.util.retry.Retry; /** * A Reactive Streams {@link Publisher} with rx operators that emits 0 to N elements, and then completes @@ -7176,7 +7178,9 @@ public final Flux retry(long numRetries) { * @param retryMatcher the predicate to evaluate if retry should occur based on a given error signal * * @return a {@link Flux} that retries on onError if the predicates matches. + * @deprecated use {@link #retryWhen(Retry)} instead, to be removed in 3.4 */ + @Deprecated public final Flux retry(Predicate retryMatcher) { return onAssembly(new FluxRetryPredicate<>(this, retryMatcher)); } @@ -7193,7 +7197,9 @@ public final Flux retry(Predicate retryMatcher) { * * @return a {@link Flux} that retries on onError up to the specified number of retry * attempts, only if the predicate matches. 
+ * @deprecated use {@link #retryWhen(Retry)} instead, to be removed in 3.4 */ + @Deprecated public final Flux retry(long numRetries, Predicate retryMatcher) { return defer(() -> retry(countingPredicate(retryMatcher, numRetries))); } @@ -7211,8 +7217,9 @@ public final Flux retry(long numRetries, Predicate retryMa * Note that if the companion {@link Publisher} created by the {@code whenFactory} * emits {@link Context} as trigger objects, these {@link Context} will be merged with * the previous Context: - *

-	 * .retryWhen(errorCurrentAttempt -> errorCurrentAttempt.handle((lastError, sink) -> {
+	 * 
+ *
{@code
+	 * Function<Flux<Throwable>, Publisher<?>> customFunction = errorCurrentAttempt -> errorCurrentAttempt.handle((lastError, sink) -> {
 	 * 	    Context ctx = sink.currentContext();
 	 * 	    int rl = ctx.getOrDefault("retriesLeft", 0);
 	 * 	    if (rl > 0) {
@@ -7221,19 +7228,80 @@ public final Flux retry(long numRetries, Predicate retryMa
 	 *		        "lastError", lastError
 	 *		    ));
 	 * 	    } else {
-	 * 	        sink.error(new IllegalStateException("retries exhausted", lastError));
+	 * 	        sink.error(Exceptions.retryExhausted("retries exhausted", lastError));
 	 * 	    }
-	 * }))
-	 * 
+	 * });
+	 * Flux<T> retried = originalFlux.retryWhen(customFunction);
+	 * }
+ * * * @param whenFactory the {@link Function} that returns the associated {@link Publisher} * companion, given a {@link Flux} that signals each onError as a {@link Throwable}. * * @return a {@link Flux} that retries on onError when the companion {@link Publisher} produces an * onNext signal + * @deprecated use {@link #retryWhen(Retry)} instead, to be removed in 3.4 */ + @Deprecated public final Flux retryWhen(Function, ? extends Publisher> whenFactory) { - return onAssembly(new FluxRetryWhen<>(this, whenFactory)); + Objects.requireNonNull(whenFactory, "whenFactory"); + return onAssembly(new FluxRetryWhen<>(this, fluxRetryWhenState -> fluxRetryWhenState + .map(Retry.RetrySignal::failure) + .as(whenFactory))); + } + + /** + * Retries this {@link Flux} in response to signals emitted by a companion {@link Publisher}. + * The companion is generated by the provided {@link Retry} instance, see {@link Retry#max(long)}, {@link Retry#maxInARow(long)} + * and {@link Retry#backoff(long, Duration)} for readily available strategy builders. + *

+ * The operator generates a base for the companion, a {@link Flux} of {@link reactor.util.retry.Retry.RetrySignal} + * which each give metadata about each retryable failure whenever this {@link Flux} signals an error. The final companion + * should be derived from that base companion and emit data in response to incoming onNext (although it can emit less + * elements, or delay the emissions). + *

+ * Terminal signals in the companion terminate the sequence with the same signal, so emitting an {@link Subscriber#onError(Throwable)} + * will fail the resulting {@link Flux} with that same error. + *

+ * + *

+ * Note that the {@link Retry.RetrySignal} state can be transient and change between each source + * {@link org.reactivestreams.Subscriber#onError(Throwable) onError} or + * {@link org.reactivestreams.Subscriber#onNext(Object) onNext}. If processed with a delay, + * this could lead to the represented state being out of sync with the state at which the retry + * was evaluated. Map it to {@link Retry.RetrySignal#copy()} right away to mediate this. + *

+ * Note that if the companion {@link Publisher} created by the {@code whenFactory} + * emits {@link Context} as trigger objects, these {@link Context} will be merged with + * the previous Context: + *

+	 * {@code
+	 * Retry customStrategy = companion -> companion.handle((retrySignal, sink) -> {
+	 * 	    Context ctx = sink.currentContext();
+	 * 	    int rl = ctx.getOrDefault("retriesLeft", 0);
+	 * 	    if (rl > 0) {
+	 *		    sink.next(Context.of(
+	 *		        "retriesLeft", rl - 1,
+	 *		        "lastError", retrySignal.failure()
+	 *		    ));
+	 * 	    } else {
+	 * 	        sink.error(Exceptions.retryExhausted("retries exhausted", retrySignal.failure()));
+	 * 	    }
+	 * });
+	 * Flux<T> retried = originalFlux.retryWhen(customStrategy);
+	 * }
+ *
+ * + * @param retrySpec the {@link Retry} strategy that will generate the companion {@link Publisher}, + * given a {@link Flux} that signals each onError as a {@link reactor.util.retry.Retry.RetrySignal}. + * + * @return a {@link Flux} that retries on onError when a companion {@link Publisher} produces an onNext signal + * @see Retry#max(long) + * @see Retry#maxInARow(long) + * @see Retry#backoff(long, Duration) + */ + public final Flux retryWhen(Retry retrySpec) { + return onAssembly(new FluxRetryWhen<>(this, retrySpec)); } /** @@ -7265,9 +7333,11 @@ public final Flux retryWhen(Function, ? extends Publisher> * @param firstBackoff the first backoff delay to apply then grow exponentially. Also * minimum delay even taking jitter into account. * @return a {@link Flux} that retries on onError with exponentially growing randomized delays between retries. + * @deprecated use {@link #retryWhen(Retry)} with a {@link Retry#backoff(long, Duration)} base, to be removed in 3.4 */ + @Deprecated public final Flux retryBackoff(long numRetries, Duration firstBackoff) { - return retryBackoff(numRetries, firstBackoff, FluxRetryWhen.MAX_BACKOFF, 0.5d); + return retryWhen(Retry.backoff(numRetries, firstBackoff)); } /** @@ -7301,9 +7371,13 @@ public final Flux retryBackoff(long numRetries, Duration firstBackoff) { * minimum delay even taking jitter into account. * @param maxBackoff the maximum delay to apply despite exponential growth and jitter. * @return a {@link Flux} that retries on onError with exponentially growing randomized delays between retries. + * @deprecated use {@link #retryWhen(Retry)} with a {@link Retry#backoff(long, Duration)} base, to be removed in 3.4 */ + @Deprecated public final Flux retryBackoff(long numRetries, Duration firstBackoff, Duration maxBackoff) { - return retryBackoff(numRetries, firstBackoff, maxBackoff, 0.5d); + return retryWhen(Retry + .backoff(numRetries, firstBackoff) + .maxBackoff(maxBackoff)); } /** @@ -7339,9 +7413,14 @@ public final Flux retryBackoff(long numRetries, Duration firstBackoff, Durati * @param maxBackoff the maximum delay to apply despite exponential growth and jitter. * @param backoffScheduler the {@link Scheduler} on which the delays and subsequent attempts are executed. * @return a {@link Flux} that retries on onError with exponentially growing randomized delays between retries. + * @deprecated use {@link #retryWhen(Retry)} with a {@link Retry#backoff(long, Duration)} base, to be removed in 3.4 */ + @Deprecated public final Flux retryBackoff(long numRetries, Duration firstBackoff, Duration maxBackoff, Scheduler backoffScheduler) { - return retryBackoff(numRetries, firstBackoff, maxBackoff, 0.5d, backoffScheduler); + return retryWhen(Retry + .backoff(numRetries, firstBackoff) + .maxBackoff(maxBackoff) + .scheduler(backoffScheduler)); } /** @@ -7377,9 +7456,14 @@ public final Flux retryBackoff(long numRetries, Duration firstBackoff, Durati * @param maxBackoff the maximum delay to apply despite exponential growth and jitter. * @param jitterFactor the jitter percentage (as a double between 0.0 and 1.0). * @return a {@link Flux} that retries on onError with exponentially growing randomized delays between retries. 
+ * @deprecated use {@link #retryWhen(Retry)} with a {@link Retry#backoff(long, Duration)} base */ + @Deprecated public final Flux retryBackoff(long numRetries, Duration firstBackoff, Duration maxBackoff, double jitterFactor) { - return retryBackoff(numRetries, firstBackoff, maxBackoff, jitterFactor, Schedulers.parallel()); + return retryWhen(Retry + .backoff(numRetries, firstBackoff) + .maxBackoff(maxBackoff) + .jitter(jitterFactor)); } /** @@ -7418,9 +7502,15 @@ public final Flux retryBackoff(long numRetries, Duration firstBackoff, Durati * @param backoffScheduler the {@link Scheduler} on which the delays and subsequent attempts are executed. * @param jitterFactor the jitter percentage (as a double between 0.0 and 1.0). * @return a {@link Flux} that retries on onError with exponentially growing randomized delays between retries. + * @deprecated use {@link #retryWhen(Retry)} with a {@link Retry#backoff(long, Duration)} base */ + @Deprecated public final Flux retryBackoff(long numRetries, Duration firstBackoff, Duration maxBackoff, double jitterFactor, Scheduler backoffScheduler) { - return retryWhen(FluxRetryWhen.randomExponentialBackoffFunction(numRetries, firstBackoff, maxBackoff, jitterFactor, backoffScheduler)); + return retryWhen(Retry + .backoff(numRetries, firstBackoff) + .maxBackoff(maxBackoff) + .jitter(jitterFactor) + .scheduler(backoffScheduler)); } /** diff --git a/reactor-core/src/main/java/reactor/core/publisher/FluxRetryWhen.java b/reactor-core/src/main/java/reactor/core/publisher/FluxRetryWhen.java index fda4194dc8..b85410a5c3 100644 --- a/reactor-core/src/main/java/reactor/core/publisher/FluxRetryWhen.java +++ b/reactor-core/src/main/java/reactor/core/publisher/FluxRetryWhen.java @@ -15,9 +15,7 @@ */ package reactor.core.publisher; -import java.time.Duration; import java.util.Objects; -import java.util.concurrent.ThreadLocalRandom; import java.util.concurrent.atomic.AtomicIntegerFieldUpdater; import java.util.function.Function; import java.util.stream.Stream; @@ -25,12 +23,13 @@ import org.reactivestreams.Publisher; import org.reactivestreams.Subscriber; import org.reactivestreams.Subscription; + import reactor.core.CorePublisher; import reactor.core.CoreSubscriber; import reactor.core.Scannable; -import reactor.core.scheduler.Scheduler; import reactor.util.annotation.Nullable; import reactor.util.context.Context; +import reactor.util.retry.Retry; /** * Retries a source when a companion sequence signals @@ -44,21 +43,18 @@ */ final class FluxRetryWhen extends InternalFluxOperator { - static final Duration MAX_BACKOFF = Duration.ofMillis(Long.MAX_VALUE); - - final Function, ? extends Publisher> whenSourceFactory; + final Retry whenSourceFactory; - FluxRetryWhen(Flux source, - Function, ? extends Publisher> whenSourceFactory) { + FluxRetryWhen(Flux source, Retry whenSourceFactory) { super(source); this.whenSourceFactory = Objects.requireNonNull(whenSourceFactory, "whenSourceFactory"); } - static void subscribe(CoreSubscriber s, Function, ? 
- extends Publisher> whenSourceFactory, CorePublisher source) { + static void subscribe(CoreSubscriber s, + Retry whenSourceFactory, + CorePublisher source) { RetryWhenOtherSubscriber other = new RetryWhenOtherSubscriber(); - Subscriber signaller = Operators.serialize(other.completionSignal); + Subscriber signaller = Operators.serialize(other.completionSignal); signaller.onSubscribe(Operators.emptySubscription()); @@ -66,21 +62,18 @@ static void subscribe(CoreSubscriber s, Function main = new RetryWhenMainSubscriber<>(serial, signaller, source); - other.main = main; + other.main = main; serial.onSubscribe(main); Publisher p; - try { - p = Objects.requireNonNull(whenSourceFactory.apply(other), - "The whenSourceFactory returned a null Publisher"); + p = Objects.requireNonNull(whenSourceFactory.generateCompanion(other), "The whenSourceFactory returned a null Publisher"); } catch (Throwable e) { s.onError(Operators.onOperatorError(e, s.currentContext())); return; } - p.subscribe(other); if (!main.cancelled) { @@ -94,15 +87,20 @@ public CoreSubscriber subscribeOrReturn(CoreSubscriber act return null; } - static final class RetryWhenMainSubscriber extends - Operators.MultiSubscriptionSubscriber { + static final class RetryWhenMainSubscriber extends Operators.MultiSubscriptionSubscriber + implements Retry.RetrySignal { final Operators.DeferredSubscription otherArbiter; - final Subscriber signaller; + final Subscriber signaller; final CorePublisher source; + long totalFailureIndex = 0L; + long subsequentFailureIndex = 0L; + @Nullable + Throwable lastFailure = null; + Context context; volatile int wip; @@ -111,7 +109,8 @@ static final class RetryWhenMainSubscriber extends long produced; - RetryWhenMainSubscriber(CoreSubscriber actual, Subscriber signaller, + RetryWhenMainSubscriber(CoreSubscriber actual, + Subscriber signaller, CorePublisher source) { super(actual); this.signaller = signaller; @@ -120,6 +119,22 @@ static final class RetryWhenMainSubscriber extends this.context = actual.currentContext(); } + @Override + public long totalRetries() { + return this.totalFailureIndex - 1; + } + + @Override + public long totalRetriesInARow() { + return this.subsequentFailureIndex - 1; + } + + @Override + public Throwable failure() { + assert this.lastFailure != null; + return this.lastFailure; + } + @Override public Context currentContext() { return this.context; @@ -136,15 +151,15 @@ public void cancel() { otherArbiter.cancel(); super.cancel(); } - } - public void setWhen(Subscription w) { + void swap(Subscription w) { otherArbiter.set(w); } @Override public void onNext(T t) { + subsequentFailureIndex = 0; actual.onNext(t); produced++; @@ -152,6 +167,9 @@ public void onNext(T t) { @Override public void onError(Throwable t) { + totalFailureIndex++; + subsequentFailureIndex++; + lastFailure = t; long p = produced; if (p != 0L) { produced = 0; @@ -160,11 +178,12 @@ public void onError(Throwable t) { otherArbiter.request(1); - signaller.onNext(t); + signaller.onNext(this); } @Override public void onComplete() { + lastFailure = null; otherArbiter.cancel(); actual.onComplete(); @@ -202,11 +221,11 @@ void whenComplete() { } } - static final class RetryWhenOtherSubscriber extends Flux - implements InnerConsumer, OptimizableOperator { + static final class RetryWhenOtherSubscriber extends Flux + implements InnerConsumer, OptimizableOperator { RetryWhenMainSubscriber main; - final DirectProcessor completionSignal = new DirectProcessor<>(); + final DirectProcessor completionSignal = new DirectProcessor<>(); 
@Override public Context currentContext() { @@ -224,7 +243,7 @@ public Object scanUnsafe(Attr key) { @Override public void onSubscribe(Subscription s) { - main.setWhen(s); + main.swap(s); } @Override @@ -243,84 +262,24 @@ public void onComplete() { } @Override - public void subscribe(CoreSubscriber actual) { + public void subscribe(CoreSubscriber actual) { completionSignal.subscribe(actual); } @Override - public CoreSubscriber subscribeOrReturn(CoreSubscriber actual) { + public CoreSubscriber subscribeOrReturn(CoreSubscriber actual) { return actual; } @Override - public DirectProcessor source() { + public DirectProcessor source() { return completionSignal; } @Override - public OptimizableOperator nextOptimizableSource() { + public OptimizableOperator nextOptimizableSource() { return null; } } - static Function, Publisher> randomExponentialBackoffFunction( - long numRetries, Duration firstBackoff, Duration maxBackoff, - double jitterFactor, Scheduler backoffScheduler) { - if (jitterFactor < 0 || jitterFactor > 1) throw new IllegalArgumentException("jitterFactor must be between 0 and 1 (default 0.5)"); - Objects.requireNonNull(firstBackoff, "firstBackoff is required"); - Objects.requireNonNull(maxBackoff, "maxBackoff is required"); - Objects.requireNonNull(backoffScheduler, "backoffScheduler is required"); - - return t -> t.index() - .flatMap(t2 -> { - long iteration = t2.getT1(); - - if (iteration >= numRetries) { - return Mono.error(new IllegalStateException("Retries exhausted: " + iteration + "/" + numRetries, t2.getT2())); - } - - Duration nextBackoff; - try { - nextBackoff = firstBackoff.multipliedBy((long) Math.pow(2, iteration)); - if (nextBackoff.compareTo(maxBackoff) > 0) { - nextBackoff = maxBackoff; - } - } - catch (ArithmeticException overflow) { - nextBackoff = maxBackoff; - } - - //short-circuit delay == 0 case - if (nextBackoff.isZero()) { - return Mono.just(iteration); - } - - ThreadLocalRandom random = ThreadLocalRandom.current(); - - long jitterOffset; - try { - jitterOffset = nextBackoff.multipliedBy((long) (100 * jitterFactor)) - .dividedBy(100) - .toMillis(); - } - catch (ArithmeticException ae) { - jitterOffset = Math.round(Long.MAX_VALUE * jitterFactor); - } - long lowBound = Math.max(firstBackoff.minus(nextBackoff) - .toMillis(), -jitterOffset); - long highBound = Math.min(maxBackoff.minus(nextBackoff) - .toMillis(), jitterOffset); - - long jitter; - if (highBound == lowBound) { - if (highBound == 0) jitter = 0; - else jitter = random.nextLong(highBound); - } - else { - jitter = random.nextLong(lowBound, highBound); - } - Duration effectiveBackoff = nextBackoff.plusMillis(jitter); - return Mono.delay(effectiveBackoff, backoffScheduler); - }); - } } diff --git a/reactor-core/src/main/java/reactor/core/publisher/Mono.java b/reactor-core/src/main/java/reactor/core/publisher/Mono.java index 2f7d9edb8f..6d0b9ed3af 100644 --- a/reactor-core/src/main/java/reactor/core/publisher/Mono.java +++ b/reactor-core/src/main/java/reactor/core/publisher/Mono.java @@ -42,19 +42,20 @@ import org.reactivestreams.Publisher; import org.reactivestreams.Subscriber; import org.reactivestreams.Subscription; + import reactor.core.CorePublisher; import reactor.core.CoreSubscriber; import reactor.core.Disposable; import reactor.core.Exceptions; import reactor.core.Fuseable; +import reactor.core.Scannable; import reactor.core.publisher.FluxOnAssembly.AssemblyLightSnapshot; import reactor.core.publisher.FluxOnAssembly.AssemblySnapshot; -import reactor.util.Metrics; -import reactor.core.Scannable; 
import reactor.core.scheduler.Scheduler; import reactor.core.scheduler.Scheduler.Worker; import reactor.core.scheduler.Schedulers; import reactor.util.Logger; +import reactor.util.Metrics; import reactor.util.annotation.Nullable; import reactor.util.concurrent.Queues; import reactor.util.context.Context; @@ -66,6 +67,7 @@ import reactor.util.function.Tuple7; import reactor.util.function.Tuple8; import reactor.util.function.Tuples; +import reactor.util.retry.Retry; /** * A Reactive Streams {@link Publisher} with basic rx operators that completes successfully by @@ -3651,7 +3653,9 @@ public final Mono retry(long numRetries) { * @param retryMatcher the predicate to evaluate if retry should occur based on a given error signal * * @return a {@link Mono} that retries on onError if the predicates matches. + * @deprecated use {@link #retryWhen(Retry)} instead, to be removed in 3.4 */ + @Deprecated public final Mono retry(Predicate retryMatcher) { return onAssembly(new MonoRetryPredicate<>(this, retryMatcher)); } @@ -3668,8 +3672,9 @@ public final Mono retry(Predicate retryMatcher) { * * @return a {@link Mono} that retries on onError up to the specified number of retry * attempts, only if the predicate matches. - * + * @deprecated use {@link #retryWhen(Retry)} instead, to be removed in 3.4 */ + @Deprecated public final Mono retry(long numRetries, Predicate retryMatcher) { return defer(() -> retry(Flux.countingPredicate(retryMatcher, numRetries))); } @@ -3686,16 +3691,93 @@ public final Mono retry(long numRetries, Predicate retryMa *

* Note that if the companion {@link Publisher} created by the {@code whenFactory} * emits {@link Context} as trigger objects, the content of these Context will be added - * to the operator's own {@link Context}. + * to the operator's own {@link Context}: + *

+ *
+	 * {@code
+	 * Function<Flux<Throwable>, Publisher<?>> customFunction = errorCurrentAttempt -> errorCurrentAttempt.handle((lastError, sink) -> {
+	 * 	    Context ctx = sink.currentContext();
+	 * 	    int rl = ctx.getOrDefault("retriesLeft", 0);
+	 * 	    if (rl > 0) {
+	 *		    sink.next(Context.of(
+	 *		        "retriesLeft", rl - 1,
+	 *		        "lastError", lastError
+	 *		    ));
+	 * 	    } else {
+	 * 	        sink.error(Exceptions.retryExhausted("retries exhausted", lastError));
+	 * 	    }
+	 * });
+	 * Mono retried = originalMono.retryWhen(customFunction);
+	 * Mono<T> retried = originalMono.retryWhen(customFunction);
+ *
* * @param whenFactory the {@link Function} that returns the associated {@link Publisher} * companion, given a {@link Flux} that signals each onError as a {@link Throwable}. * * @return a {@link Mono} that retries on onError when the companion {@link Publisher} produces an * onNext signal + * @deprecated use {@link #retryWhen(Retry)} instead, to be removed in 3.4 */ + @Deprecated public final Mono retryWhen(Function, ? extends Publisher> whenFactory) { - return onAssembly(new MonoRetryWhen<>(this, whenFactory)); + Objects.requireNonNull(whenFactory, "whenFactory"); + return onAssembly(new MonoRetryWhen<>(this, (Flux rws) -> whenFactory.apply(rws.map( + Retry.RetrySignal::failure)))); + } + + /** + * Retries this {@link Mono} in response to signals emitted by a companion {@link Publisher}. + * The companion is generated by the provided {@link Retry} instance, see {@link Retry#max(long)}, {@link Retry#maxInARow(long)} + * and {@link Retry#backoff(long, Duration)} for readily available strategy builders. + *

+ * The operator generates a base for the companion, a {@link Flux} of {@link reactor.util.retry.Retry.RetrySignal} + * which each give metadata about each retryable failure whenever this {@link Mono} signals an error. The final companion + * should be derived from that base companion and emit data in response to incoming onNext (although it can emit less + * elements, or delay the emissions). + *

+ * Terminal signals in the companion terminate the sequence with the same signal, so emitting an {@link Subscriber#onError(Throwable)} + * will fail the resulting {@link Mono} with that same error. + *

+ * + *

+ * Note that the {@link Retry.RetrySignal} state can be transient and change between each source + * {@link org.reactivestreams.Subscriber#onError(Throwable) onError} or + * {@link org.reactivestreams.Subscriber#onNext(Object) onNext}. If processed with a delay, + * this could lead to the represented state being out of sync with the state at which the retry + * was evaluated. Map it to {@link Retry.RetrySignal#copy()} right away to mediate this. + *

+ * Note that if the companion {@link Publisher} created by the {@code whenFactory} + * emits {@link Context} as trigger objects, these {@link Context} will be merged with + * the previous Context: + *

+ *
+	 * {@code
+	 * Retry customStrategy = companion -> companion.handle((retrySignal, sink) -> {
+	 * 	    Context ctx = sink.currentContext();
+	 * 	    int rl = ctx.getOrDefault("retriesLeft", 0);
+	 * 	    if (rl > 0) {
+	 *		    sink.next(Context.of(
+	 *		        "retriesLeft", rl - 1,
+	 *		        "lastError", retrySignal.failure()
+	 *		    ));
+	 * 	    } else {
+	 * 	        sink.error(Exceptions.retryExhausted("retries exhausted", retrySignal.failure()));
+	 * 	    }
+	 * });
+	 * Mono retried = originalMono.retryWhen(customStrategy);
+	 * Mono<T> retried = originalMono.retryWhen(customStrategy);
+ *
+ * + * @param retrySpec the {@link Retry} strategy that will generate the companion {@link Publisher}, + * given a {@link Flux} that signals each onError as a {@link reactor.util.retry.Retry.RetrySignal}. + * + * @return a {@link Mono} that retries on onError when a companion {@link Publisher} produces an onNext signal + * @see Retry#max(long) + * @see Retry#maxInARow(long) + * @see Retry#backoff(long, Duration) + */ + public final Mono retryWhen(Retry retrySpec) { + return onAssembly(new MonoRetryWhen<>(this, retrySpec)); } /** @@ -3727,9 +3809,11 @@ public final Mono retryWhen(Function, ? extends Publisher> * @param firstBackoff the first backoff delay to apply then grow exponentially. Also * minimum delay even taking jitter into account. * @return a {@link Mono} that retries on onError with exponentially growing randomized delays between retries. + * @deprecated use {@link #retryWhen(Retry)} with a {@link Retry#backoff(long, Duration)} base, to be removed in 3.4 */ + @Deprecated public final Mono retryBackoff(long numRetries, Duration firstBackoff) { - return retryBackoff(numRetries, firstBackoff, Duration.ofMillis(Long.MAX_VALUE), 0.5d); + return retryWhen(Retry.backoff(numRetries, firstBackoff)); } /** @@ -3763,7 +3847,9 @@ public final Mono retryBackoff(long numRetries, Duration firstBackoff) { * minimum delay even taking jitter into account. * @param maxBackoff the maximum delay to apply despite exponential growth and jitter. * @return a {@link Mono} that retries on onError with exponentially growing randomized delays between retries. + * @deprecated use {@link #retryWhen(Retry)} with a {@link Retry#backoff(long, Duration)} base, to be removed in 3.4 */ + @Deprecated public final Mono retryBackoff(long numRetries, Duration firstBackoff, Duration maxBackoff) { return retryBackoff(numRetries, firstBackoff, maxBackoff, 0.5d); } @@ -3801,7 +3887,9 @@ public final Mono retryBackoff(long numRetries, Duration firstBackoff, Durati * @param maxBackoff the maximum delay to apply despite exponential growth and jitter. * @param backoffScheduler the {@link Scheduler} on which the delays and subsequent attempts are executed. * @return a {@link Mono} that retries on onError with exponentially growing randomized delays between retries. + * @deprecated use {@link #retryWhen(Retry)} with a {@link Retry#backoff(long, Duration)} base, to be removed in 3.4 */ + @Deprecated public final Mono retryBackoff(long numRetries, Duration firstBackoff, Duration maxBackoff, Scheduler backoffScheduler) { return retryBackoff(numRetries, firstBackoff, maxBackoff, 0.5d, backoffScheduler); } @@ -3839,7 +3927,9 @@ public final Mono retryBackoff(long numRetries, Duration firstBackoff, Durati * @param maxBackoff the maximum delay to apply despite exponential growth and jitter. * @param jitterFactor the jitter percentage (as a double between 0.0 and 1.0). * @return a {@link Mono} that retries on onError with exponentially growing randomized delays between retries. + * @deprecated use {@link #retryWhen(Retry)} with a {@link Retry#backoff(long, Duration)} base, to be removed in 3.4 */ + @Deprecated public final Mono retryBackoff(long numRetries, Duration firstBackoff, Duration maxBackoff, double jitterFactor) { return retryBackoff(numRetries, firstBackoff, maxBackoff, jitterFactor, Schedulers.parallel()); } @@ -3880,9 +3970,16 @@ public final Mono retryBackoff(long numRetries, Duration firstBackoff, Durati * @param backoffScheduler the {@link Scheduler} on which the delays and subsequent attempts are executed. 
* @param jitterFactor the jitter percentage (as a double between 0.0 and 1.0). * @return a {@link Mono} that retries on onError with exponentially growing randomized delays between retries. + * @deprecated use {@link #retryWhen(Retry)} with a {@link Retry#backoff(long, Duration)} base, to be removed in 3.4 */ + @Deprecated public final Mono retryBackoff(long numRetries, Duration firstBackoff, Duration maxBackoff, double jitterFactor, Scheduler backoffScheduler) { - return retryWhen(FluxRetryWhen.randomExponentialBackoffFunction(numRetries, firstBackoff, maxBackoff, jitterFactor, backoffScheduler)); + return retryWhen(Retry + .backoff(numRetries, firstBackoff) + .maxBackoff(maxBackoff) + .jitter(jitterFactor) + .scheduler(backoffScheduler) + .transientErrors(false)); } /** diff --git a/reactor-core/src/main/java/reactor/core/publisher/MonoRetryWhen.java b/reactor-core/src/main/java/reactor/core/publisher/MonoRetryWhen.java index a4b29441f0..56a75df62f 100644 --- a/reactor-core/src/main/java/reactor/core/publisher/MonoRetryWhen.java +++ b/reactor-core/src/main/java/reactor/core/publisher/MonoRetryWhen.java @@ -21,6 +21,7 @@ import org.reactivestreams.Publisher; import reactor.core.CoreSubscriber; +import reactor.util.retry.Retry; /** * retries a source when a companion sequence signals an item in response to the main's @@ -35,14 +36,11 @@ */ final class MonoRetryWhen extends InternalMonoOperator { - final Function, ? extends Publisher> - whenSourceFactory; + final Retry whenSourceFactory; - MonoRetryWhen(Mono source, - Function, ? extends Publisher> whenSourceFactory) { + MonoRetryWhen(Mono source, Retry whenSourceFactory) { super(source); - this.whenSourceFactory = - Objects.requireNonNull(whenSourceFactory, "whenSourceFactory"); + this.whenSourceFactory = Objects.requireNonNull(whenSourceFactory, "whenSourceFactory"); } @Override diff --git a/reactor-core/src/main/java/reactor/core/publisher/doc-files/marbles/retryWhenSpecForFlux.svg b/reactor-core/src/main/java/reactor/core/publisher/doc-files/marbles/retryWhenSpecForFlux.svg new file mode 100644 index 0000000000..0fbca3b1f9 --- /dev/null +++ b/reactor-core/src/main/java/reactor/core/publisher/doc-files/marbles/retryWhenSpecForFlux.svg @@ -0,0 +1,246 @@ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + subscribe() + + + + + + + + + subscribe() + + + + + + + subscribe() + + + + + + + retryWhen(spec) + + + subscribe() + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + ! + + + + ! + + + + + + + + + + + + + + + + 0 + + + + + + + + + + + 1 + + + + + + + + + + + 2 + + + + + + Retry.backoff(2,Duration.ofMillis(100)) + + + ) + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + ! + + + + ! + + + + 0 + + + + 1 + + + + 2 + + + spec = + + + + + + + diff --git a/reactor-core/src/main/java/reactor/core/publisher/doc-files/marbles/retryWhenSpecForMono.svg b/reactor-core/src/main/java/reactor/core/publisher/doc-files/marbles/retryWhenSpecForMono.svg new file mode 100644 index 0000000000..3ef68f7d56 --- /dev/null +++ b/reactor-core/src/main/java/reactor/core/publisher/doc-files/marbles/retryWhenSpecForMono.svg @@ -0,0 +1,209 @@ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + subscribe() + + + + + + + + subscribe() + + + + + + + subscribe() + + + + + + + retryWhen(spec) + + + subscribe() + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + ! + + + + ! 
+ + + + + + + + + + + Retry.backoff(2,Duration.ofMillis(100)) + + + ) + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + ! + + + + ! + + + + 0 + + + + 1 + + + + 2 + + + spec = + + + + + + + + + + + + + + + 0 + + + + + + + + + + + 1 + + + + + + + + + + + 2 + + + diff --git a/reactor-core/src/main/java/reactor/util/retry/ImmutableRetrySignal.java b/reactor-core/src/main/java/reactor/util/retry/ImmutableRetrySignal.java new file mode 100644 index 0000000000..19aea93b78 --- /dev/null +++ b/reactor-core/src/main/java/reactor/util/retry/ImmutableRetrySignal.java @@ -0,0 +1,62 @@ +/* + * Copyright (c) 2011-Present Pivotal Software Inc, All Rights Reserved. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ + +package reactor.util.retry; + +/** + * An immutable {@link reactor.util.retry.Retry.RetrySignal} that can be used for retained + * copies of mutable implementations. + * + * @author Simon Baslé + */ +final class ImmutableRetrySignal implements Retry.RetrySignal { + + final long failureTotalIndex; + final long failureSubsequentIndex; + final Throwable failure; + + ImmutableRetrySignal(long failureTotalIndex, long failureSubsequentIndex, + Throwable failure) { + this.failureTotalIndex = failureTotalIndex; + this.failureSubsequentIndex = failureSubsequentIndex; + this.failure = failure; + } + + @Override + public long totalRetries() { + return this.failureTotalIndex; + } + + @Override + public long totalRetriesInARow() { + return this.failureSubsequentIndex; + } + + @Override + public Throwable failure() { + return this.failure; + } + + @Override + public Retry.RetrySignal copy() { + return this; + } + + @Override + public String toString() { + return "attempt #" + (failureTotalIndex + 1) + " (" + (failureSubsequentIndex + 1) + " in a row), last failure={" + failure + '}'; + } +} diff --git a/reactor-core/src/main/java/reactor/util/retry/Retry.java b/reactor-core/src/main/java/reactor/util/retry/Retry.java new file mode 100644 index 0000000000..643df0e4ca --- /dev/null +++ b/reactor-core/src/main/java/reactor/util/retry/Retry.java @@ -0,0 +1,139 @@ +/* + * Copyright (c) 2011-Present Pivotal Software Inc, All Rights Reserved. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */ + +package reactor.util.retry; + +import java.time.Duration; +import java.util.function.Supplier; + +import org.reactivestreams.Publisher; + +import reactor.core.publisher.Flux; +import reactor.core.scheduler.Schedulers; + +import static reactor.util.retry.RetrySpec.*; + +/** + * Functional interface to configure retries depending on a companion {@link Flux} of {@link RetrySignal}, + * as well as builders for such {@link Flux#retryWhen(Retry)} retries} companions. + * + * @author Simon Baslé + */ +@FunctionalInterface +public interface Retry { + + /** + * The intent of the functional {@link Retry} class is to let users configure how to react to {@link RetrySignal} + * by providing the operator with a companion publisher. Any {@link org.reactivestreams.Subscriber#onNext(Object) onNext} + * emitted by this publisher will trigger a retry, but if that emission is delayed compared to the original signal then + * the attempt is delayed as well. This method generates the companion, out of a {@link Flux} of {@link RetrySignal}, + * which itself can serve as the simplest form of retry companion (indefinitely and immediately retry on any error). + * + * @param retrySignalCompanion the original {@link Flux} of {@link RetrySignal}, notifying of each source error that + * _might_ result in a retry attempt, with context around the error and current retry cycle. + * @return the actual companion to use, which might delay or limit retry attempts + */ + Publisher generateCompanion(Flux retrySignalCompanion); + + /** + * State for a {@link Flux#retryWhen(Retry)} Flux retry} or {@link reactor.core.publisher.Mono#retryWhen(Retry) Mono retry}. + * A flux of states is passed to the user, which gives information about the {@link #failure()} that potentially triggers + * a retry as well as two indexes: the number of errors that happened so far (and were retried) and the same number, + * but only taking into account subsequent errors (see {@link #totalRetriesInARow()}). + */ + interface RetrySignal { + + /** + * The ZERO BASED index number of this error (can also be read as how many retries have occurred + * so far), since the source was first subscribed to. + * + * @return a 0-index for the error, since original subscription + */ + long totalRetries(); + + /** + * The ZERO BASED index number of this error since the beginning of the current burst of errors. + * This is reset to zero whenever a retry is made that is followed by at least one + * {@link org.reactivestreams.Subscriber#onNext(Object) onNext}. + * + * @return a 0-index for the error in the current burst of subsequent errors + */ + long totalRetriesInARow(); + + /** + * The current {@link Throwable} that needs to be evaluated for retry. + * + * @return the current failure {@link Throwable} + */ + Throwable failure(); + + /** + * Return an immutable copy of this {@link RetrySignal} which is guaranteed to give a consistent view + * of the state at the time at which this method is invoked. + * This is especially useful if this {@link RetrySignal} is a transient view of the state of the underlying + * retry subscriber, + * + * @return an immutable copy of the current {@link RetrySignal}, always safe to use + */ + default RetrySignal copy() { + return new ImmutableRetrySignal(totalRetries(), totalRetriesInARow(), failure()); + } + } + + /** + * A {@link RetryBackoffSpec} preconfigured for exponential backoff strategy with jitter, given a maximum number of retry attempts + * and a minimum {@link Duration} for the backoff. 
+ * + * @param maxAttempts the maximum number of retry attempts to allow + * @param minBackoff the minimum {@link Duration} for the first backoff + * @return the builder for further configuration + * @see RetryBackoffSpec#maxAttempts(long) + * @see RetryBackoffSpec#minBackoff(Duration) + */ + static RetryBackoffSpec backoff(long maxAttempts, Duration minBackoff) { + return new RetryBackoffSpec(maxAttempts, t -> true, false, minBackoff, MAX_BACKOFF, 0.5d, Schedulers.parallel(), + NO_OP_CONSUMER, NO_OP_CONSUMER, NO_OP_BIFUNCTION, NO_OP_BIFUNCTION, + RetryBackoffSpec.BACKOFF_EXCEPTION_GENERATOR); + } + + /** + * A {@link RetrySpec} preconfigured for a simple strategy with maximum number of retry attempts. + * + * @param max the maximum number of retry attempts to allow + * @return the builder for further configuration + * @see RetrySpec#maxAttempts(long) + */ + static RetrySpec max(long max) { + return new RetrySpec(max, t -> true, false, NO_OP_CONSUMER, NO_OP_CONSUMER, NO_OP_BIFUNCTION, NO_OP_BIFUNCTION, + RetrySpec.RETRY_EXCEPTION_GENERATOR); + } + + /** + * A {@link RetrySpec} preconfigured for a simple strategy with maximum number of retry attempts over + * subsequent transient errors. An {@link org.reactivestreams.Subscriber#onNext(Object)} between + * errors resets the counter (see {@link RetrySpec#transientErrors(boolean)}). + * + * @param maxInARow the maximum number of retry attempts to allow in a row, reset by successful onNext + * @return the builder for further configuration + * @see RetrySpec#maxAttempts(long) + * @see RetrySpec#transientErrors(boolean) + */ + static RetrySpec maxInARow(long maxInARow) { + return new RetrySpec(maxInARow, t -> true, true, NO_OP_CONSUMER, NO_OP_CONSUMER, NO_OP_BIFUNCTION, NO_OP_BIFUNCTION, + RETRY_EXCEPTION_GENERATOR); + } + +} diff --git a/reactor-core/src/main/java/reactor/util/retry/RetryBackoffSpec.java b/reactor-core/src/main/java/reactor/util/retry/RetryBackoffSpec.java new file mode 100644 index 0000000000..fe061f8aeb --- /dev/null +++ b/reactor-core/src/main/java/reactor/util/retry/RetryBackoffSpec.java @@ -0,0 +1,566 @@ +/* + * Copyright (c) 2011-Present Pivotal Software Inc, All Rights Reserved. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ + +package reactor.util.retry; + +import java.time.Duration; +import java.util.Objects; +import java.util.concurrent.ThreadLocalRandom; +import java.util.function.BiFunction; +import java.util.function.Consumer; +import java.util.function.Function; +import java.util.function.Predicate; +import java.util.function.Supplier; + +import reactor.core.Exceptions; +import reactor.core.publisher.Flux; +import reactor.core.publisher.Mono; +import reactor.core.scheduler.Scheduler; +import reactor.core.scheduler.Schedulers; + +/** + * A builder for a retry strategy based on exponential backoffs, with fine grained options. + * Retry delays are randomized with a user-provided {@link #jitter(double)} factor between {@code 0.d} (no jitter) + * and {@code 1.0} (default is {@code 0.5}). 
+ * Even with the jitter, the effective backoff delay cannot be less than {@link #minBackoff(Duration)} + * nor more than {@link #maxBackoff(Duration)}. The delays and subsequent attempts are executed on the + * provided backoff {@link #scheduler(Scheduler)}. + *
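Editorial aside, not from the patch: a worked reading of these bounds, assuming the delay computation in generateCompanion further down in this file. The class and constant names are made up.

import java.time.Duration;

import reactor.util.retry.Retry;

class BackoffWindowExample {
	// With minBackoff = 100ms, the default jitter factor of 0.5 and a large maxBackoff,
	// the randomized delay per attempt falls roughly in these windows:
	//   attempt 0: base 100ms (100ms * 2^0), effective delay in [100ms, 150ms] (never below minBackoff)
	//   attempt 1: base 200ms (100ms * 2^1), effective delay in [100ms, 300ms]
	//   attempt 2: base 400ms (100ms * 2^2), effective delay in [200ms, 600ms]
	static final Retry BACKOFF_EXAMPLE = Retry.backoff(3, Duration.ofMillis(100)); // jitter defaults to 0.5
}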

+ * Only errors that match the {@link #filter(Predicate)} are retried (by default all),
+ * and the number of attempts can also be limited with {@link #maxAttempts(long)}.
+ * When the maximum number of retry attempts is reached, a runtime exception is propagated downstream which
+ * can be pinpointed with {@link reactor.core.Exceptions#isRetryExhausted(Throwable)}. The cause of
+ * the last attempt's failure is attached as said {@link reactor.core.Exceptions#retryExhausted(String, Throwable) retryExhausted}
+ * exception's cause. This can be customized with {@link #onRetryExhaustedThrow(BiFunction)}.
+ *
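Editorial sketch, not part of the diff, of the filter and exhaustion behaviour described above, using the backoff factory introduced in Retry.java; the class and constant names are made up.

import java.time.Duration;
import java.util.concurrent.TimeoutException;

import reactor.util.retry.Retry;

class FilteredRetryExample {
	// Only TimeoutException is retried (at most 3 times, with backoff); any other error is
	// propagated immediately, and once the 3 attempts are used up an
	// Exceptions.retryExhausted(...) error carrying the last failure is emitted instead.
	static final Retry TIMEOUTS_ONLY = Retry.backoff(3, Duration.ofMillis(250))
			.filter(error -> error instanceof TimeoutException);
}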

+ * Additionally, to help deal with bursts of transient errors in a long-lived Flux as if each burst
+ * had its own backoff, one can choose to set {@link #transientErrors(boolean)} to {@code true}.
+ * The comparison to {@link #maxAttempts(long)} will then be done with the number of subsequent attempts
+ * that failed without an {@link org.reactivestreams.Subscriber#onNext(Object) onNext} in between.
+ *
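Editorial sketch, not from the patch, combining the options above; the class and constant names are made up.

import java.time.Duration;

import reactor.core.scheduler.Schedulers;
import reactor.util.retry.Retry;
import reactor.util.retry.RetryBackoffSpec;

class TransientBackoffExample {
	// At most 4 attempts per burst of errors: an onNext between errors resets both the
	// attempt counter and the delay back to minBackoff, per transientErrors(true).
	static final RetryBackoffSpec PER_BURST_BACKOFF = Retry.backoff(4, Duration.ofMillis(100))
			.maxBackoff(Duration.ofSeconds(2))
			.jitter(0.1)
			.scheduler(Schedulers.parallel())
			.transientErrors(true);
}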

+ * The {@link RetryBackoffSpec} is copy-on-write and as such can be stored as a "template" and further configured + * by different components without a risk of modifying the original configuration. + * + * @author Simon Baslé + */ +public final class RetryBackoffSpec implements Retry, Supplier { + + static final BiFunction BACKOFF_EXCEPTION_GENERATOR = (builder, rs) -> + Exceptions.retryExhausted("Retries exhausted: " + ( + builder.isTransientErrors + ? rs.totalRetriesInARow() + "/" + builder.maxAttempts + " in a row (" + rs.totalRetries() + " total)" + : rs.totalRetries() + "/" + builder.maxAttempts + ), rs.failure()); + + /** + * The configured minimum backoff {@link Duration}. + * @see #minBackoff(Duration) + */ + public final Duration minBackoff; + + /** + * The configured maximum backoff {@link Duration}. + * @see #maxBackoff(Duration) + */ + public final Duration maxBackoff; + + /** + * The configured jitter factor, as a {@code double}. + * @see #jitter(double) + */ + public final double jitterFactor; + + /** + * The configured {@link Scheduler} on which to execute backoffs. + * @see #scheduler(Scheduler) + */ + public final Scheduler backoffScheduler; + + /** + * The configured maximum for retry attempts. + * + * @see #maxAttempts(long) + */ + public final long maxAttempts; + + /** + * The configured {@link Predicate} to filter which exceptions to retry. + * @see #filter(Predicate) + * @see #modifyErrorFilter(Function) + */ + public final Predicate errorFilter; + + /** + * The configured transient error handling flag. + * @see #transientErrors(boolean) + */ + public final boolean isTransientErrors; + + final Consumer syncPreRetry; + final Consumer syncPostRetry; + final BiFunction, Mono> asyncPreRetry; + final BiFunction, Mono> asyncPostRetry; + + final BiFunction retryExhaustedGenerator; + + /** + * Copy constructor. + */ + RetryBackoffSpec(long max, + Predicate aThrowablePredicate, + boolean isTransientErrors, + Duration minBackoff, Duration maxBackoff, double jitterFactor, + Scheduler backoffScheduler, + Consumer doPreRetry, + Consumer doPostRetry, + BiFunction, Mono> asyncPreRetry, + BiFunction, Mono> asyncPostRetry, + BiFunction retryExhaustedGenerator) { + this.maxAttempts = max; + this.errorFilter = aThrowablePredicate::test; //massaging type + this.isTransientErrors = isTransientErrors; + this.minBackoff = minBackoff; + this.maxBackoff = maxBackoff; + this.jitterFactor = jitterFactor; + this.backoffScheduler = backoffScheduler; + this.syncPreRetry = doPreRetry; + this.syncPostRetry = doPostRetry; + this.asyncPreRetry = asyncPreRetry; + this.asyncPostRetry = asyncPostRetry; + this.retryExhaustedGenerator = retryExhaustedGenerator; + } + + /** + * Set the maximum number of retry attempts allowed. 1 meaning "1 retry attempt": + * the original subscription plus an extra re-subscription in case of an error, but + * no more. + * + * @param maxAttempts the new retry attempt limit + * @return a new copy of the builder which can either be further configured or used as {@link Retry} + */ + public RetryBackoffSpec maxAttempts(long maxAttempts) { + return new RetryBackoffSpec( + maxAttempts, + this.errorFilter, + this.isTransientErrors, + this.minBackoff, + this.maxBackoff, + this.jitterFactor, + this.backoffScheduler, + this.syncPreRetry, + this.syncPostRetry, + this.asyncPreRetry, + this.asyncPostRetry, + this.retryExhaustedGenerator); + } + + /** + * Set the {@link Predicate} that will filter which errors can be retried. 
Exceptions + * that don't pass the predicate will be propagated downstream and terminate the retry + * sequence. Defaults to allowing retries for all exceptions. + * + * @param errorFilter the predicate to filter which exceptions can be retried + * @return a new copy of the builder which can either be further configured or used as {@link Retry} + */ + public RetryBackoffSpec filter(Predicate errorFilter) { + return new RetryBackoffSpec( + this.maxAttempts, + Objects.requireNonNull(errorFilter, "errorFilter"), + this.isTransientErrors, + this.minBackoff, + this.maxBackoff, + this.jitterFactor, + this.backoffScheduler, + this.syncPreRetry, + this.syncPostRetry, + this.asyncPreRetry, + this.asyncPostRetry, + this.retryExhaustedGenerator); + } + + /** + * Allows to augment a previously {@link #filter(Predicate) set} {@link Predicate} with + * a new condition to allow retries of some exception or not. This can typically be used with + * {@link Predicate#and(Predicate)} to combine existing predicate(s) with a new one. + *

+ * For example: + *


+	 * //given
+	 * RetrySpec retryTwiceIllegalArgument = Retry.max(2)
+	 *     .filter(e -> e instanceof IllegalArgumentException);
+	 *
+	 * RetrySpec retryTwiceIllegalArgWithCause = retryTwiceIllegalArgument.modifyErrorFilter(old ->
+	 *     old.and(e -> e.getCause() != null));
+	 * 
+ * + * @param predicateAdjuster a {@link Function} that returns a new {@link Predicate} given the + * currently in place {@link Predicate} (usually deriving from the old predicate). + * @return a new copy of the builder which can either be further configured or used as {@link Retry} + */ + public RetryBackoffSpec modifyErrorFilter( + Function, Predicate> predicateAdjuster) { + Objects.requireNonNull(predicateAdjuster, "predicateAdjuster"); + Predicate newPredicate = Objects.requireNonNull(predicateAdjuster.apply(this.errorFilter), + "predicateAdjuster must return a new predicate"); + return new RetryBackoffSpec( + this.maxAttempts, + newPredicate, + this.isTransientErrors, + this.minBackoff, + this.maxBackoff, + this.jitterFactor, + this.backoffScheduler, + this.syncPreRetry, + this.syncPostRetry, + this.asyncPreRetry, + this.asyncPostRetry, + this.retryExhaustedGenerator); + } + + /** + * Add synchronous behavior to be executed before the retry trigger is emitted in + * the companion publisher. This should not be blocking, as the companion publisher + * might be executing in a shared thread. + * + * @param doBeforeRetry the synchronous hook to execute before retry trigger is emitted + * @return a new copy of the builder which can either be further configured or used as {@link Retry} + * @see #doBeforeRetryAsync(Function) andDelayRetryWith for an asynchronous version + */ + public RetryBackoffSpec doBeforeRetry( + Consumer doBeforeRetry) { + return new RetryBackoffSpec( + this.maxAttempts, + this.errorFilter, + this.isTransientErrors, + this.minBackoff, + this.maxBackoff, + this.jitterFactor, + this.backoffScheduler, + this.syncPreRetry.andThen(doBeforeRetry), + this.syncPostRetry, + this.asyncPreRetry, + this.asyncPostRetry, + this.retryExhaustedGenerator); + } + + /** + * Add synchronous behavior to be executed after the retry trigger is emitted in + * the companion publisher. This should not be blocking, as the companion publisher + * might be publishing events in a shared thread. + * + * @param doAfterRetry the synchronous hook to execute after retry trigger is started + * @return a new copy of the builder which can either be further configured or used as {@link Retry} + * @see #doAfterRetryAsync(Function) andRetryThen for an asynchronous version + */ + public RetryBackoffSpec doAfterRetry(Consumer doAfterRetry) { + return new RetryBackoffSpec( + this.maxAttempts, + this.errorFilter, + this.isTransientErrors, + this.minBackoff, + this.maxBackoff, + this.jitterFactor, + this.backoffScheduler, + this.syncPreRetry, + this.syncPostRetry.andThen(doAfterRetry), + this.asyncPreRetry, + this.asyncPostRetry, + this.retryExhaustedGenerator); + } + + /** + * Add asynchronous behavior to be executed before the current retry trigger in the companion publisher, + * thus delaying the resulting retry trigger with the additional {@link Mono}. 
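Editorial sketch, not part of the diff, of the synchronous and asynchronous hooks described above; the class and constant names are made up.

import java.time.Duration;

import reactor.core.publisher.Mono;
import reactor.util.retry.Retry;

class RetryHooksExample {
	// Synchronous hooks run right around the emission of the retry trigger; asynchronous hooks
	// are composed into the companion and delay that trigger until their Mono completes.
	static final Retry WITH_HOOKS = Retry.backoff(3, Duration.ofMillis(100))
			.doBeforeRetry(signal -> System.out.println("about to retry: " + signal))
			.doBeforeRetryAsync(signal -> Mono.delay(Duration.ofMillis(50)).then())
			.doAfterRetry(signal -> System.out.println("retry trigger emitted: " + signal))
			.doAfterRetryAsync(signal -> Mono.fromRunnable(() -> System.out.println("post-retry bookkeeping done")));
}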
+ * + * @param doAsyncBeforeRetry the asynchronous hook to execute before original retry trigger is emitted + * @return a new copy of the builder which can either be further configured or used as {@link Retry} + */ + public RetryBackoffSpec doBeforeRetryAsync( + Function> doAsyncBeforeRetry) { + return new RetryBackoffSpec( + this.maxAttempts, + this.errorFilter, + this.isTransientErrors, + this.minBackoff, + this.maxBackoff, + this.jitterFactor, + this.backoffScheduler, + this.syncPreRetry, + this.syncPostRetry, + (rs, m) -> asyncPreRetry.apply(rs, m).then(doAsyncBeforeRetry.apply(rs)), + this.asyncPostRetry, + this.retryExhaustedGenerator); + } + + /** + * Add asynchronous behavior to be executed after the current retry trigger in the companion publisher, + * thus delaying the resulting retry trigger with the additional {@link Mono}. + * + * @param doAsyncAfterRetry the asynchronous hook to execute after original retry trigger is emitted + * @return a new copy of the builder which can either be further configured or used as {@link Retry} + */ + public RetryBackoffSpec doAfterRetryAsync( + Function> doAsyncAfterRetry) { + return new RetryBackoffSpec( + this.maxAttempts, + this.errorFilter, + this.isTransientErrors, + this.minBackoff, + this.maxBackoff, + this.jitterFactor, + this.backoffScheduler, + this.syncPreRetry, + this.syncPostRetry, + this.asyncPreRetry, + (rs, m) -> asyncPostRetry.apply(rs, m).then(doAsyncAfterRetry.apply(rs)), + this.retryExhaustedGenerator); + } + + /** + * Set the generator for the {@link Exception} to be propagated when the maximum amount of retries + * is exhausted. By default, throws an {@link Exceptions#retryExhausted(String, Throwable)} with the + * message reflecting the total attempt index, transient attempt index and maximum retry count. + * The cause of the last {@link RetrySignal} is also added as the exception's cause. + * + * + * @param retryExhaustedGenerator the {@link Function} that generates the {@link Throwable} for the last + * {@link RetrySignal} + * @return a new copy of the builder which can either be further configured or used as {@link Retry} + */ + public RetryBackoffSpec onRetryExhaustedThrow(BiFunction retryExhaustedGenerator) { + return new RetryBackoffSpec( + this.maxAttempts, + this.errorFilter, + this.isTransientErrors, + this.minBackoff, + this.maxBackoff, + this.jitterFactor, + this.backoffScheduler, + this.syncPreRetry, + this.syncPostRetry, + this.asyncPreRetry, + this.asyncPostRetry, + Objects.requireNonNull(retryExhaustedGenerator, "retryExhaustedGenerator")); + } + + /** + * Set the transient error mode, indicating that the strategy being built should use + * {@link RetrySignal#totalRetriesInARow()} rather than {@link RetrySignal#totalRetries()}. + * Transient errors are errors that could occur in bursts but are then recovered from by + * a retry (with one or more onNext signals) before another error occurs. + *

+ * For a backoff based retry, the backoff is also computed based on the index within + * the burst, meaning the next error after a recovery will be retried with a {@link #minBackoff(Duration)} delay. + * + * @param isTransientErrors {@code true} to activate transient mode + * @return a new copy of the builder which can either be further configured or used as {@link Retry} + */ + public RetryBackoffSpec transientErrors(boolean isTransientErrors) { + return new RetryBackoffSpec( + this.maxAttempts, + this.errorFilter, + isTransientErrors, + this.minBackoff, + this.maxBackoff, + this.jitterFactor, + this.backoffScheduler, + this.syncPreRetry, + this.syncPostRetry, + this.asyncPreRetry, + this.asyncPostRetry, + this.retryExhaustedGenerator); + } + + /** + * Set the minimum {@link Duration} for the first backoff. This method switches to an + * exponential backoff strategy if not already done so. Defaults to {@link Duration#ZERO} + * when the strategy was initially not a backoff one. + * + * @param minBackoff the minimum backoff {@link Duration} + * @return a new copy of the builder which can either be further configured or used as {@link Retry} + */ + public RetryBackoffSpec minBackoff(Duration minBackoff) { + return new RetryBackoffSpec( + this.maxAttempts, + this.errorFilter, + this.isTransientErrors, + minBackoff, + this.maxBackoff, + this.jitterFactor, + this.backoffScheduler, + this.syncPreRetry, + this.syncPostRetry, + this.asyncPreRetry, + this.asyncPostRetry, + this.retryExhaustedGenerator); + } + + /** + * Set a hard maximum {@link Duration} for exponential backoffs. This method switches + * to an exponential backoff strategy with a zero minimum backoff if not already a backoff + * strategy. Defaults to {@code Duration.ofMillis(Long.MAX_VALUE)}. + * + * @param maxBackoff the maximum backoff {@link Duration} + * @return a new copy of the builder which can either be further configured or used as {@link Retry} + */ + public RetryBackoffSpec maxBackoff(Duration maxBackoff) { + return new RetryBackoffSpec( + this.maxAttempts, + this.errorFilter, + this.isTransientErrors, + this.minBackoff, + maxBackoff, + this.jitterFactor, + this.backoffScheduler, + this.syncPreRetry, + this.syncPostRetry, + this.asyncPreRetry, + this.asyncPostRetry, + this.retryExhaustedGenerator); + } + + /** + * Set a jitter factor for exponential backoffs that adds randomness to each backoff. This can + * be helpful in reducing cascading failure due to retry-storms. This method switches to an + * exponential backoff strategy with a zero minimum backoff if not already a backoff strategy. + * Defaults to {@code 0.5} (a jitter of at most 50% of the computed delay). + * + * @param jitterFactor the new jitter factor as a {@code double} between {@code 0d} and {@code 1d} + * @return a new copy of the builder which can either be further configured or used as {@link Retry} + */ + public RetryBackoffSpec jitter(double jitterFactor) { + return new RetryBackoffSpec( + this.maxAttempts, + this.errorFilter, + this.isTransientErrors, + this.minBackoff, + this.maxBackoff, + jitterFactor, + this.backoffScheduler, + this.syncPreRetry, + this.syncPostRetry, + this.asyncPreRetry, + this.asyncPostRetry, + this.retryExhaustedGenerator); + } + + /** + * Set a {@link Scheduler} on which to execute the delays computed by the exponential backoff + * strategy. This method switches to an exponential backoff strategy with a zero minimum backoff + * if not already a backoff strategy. 
Defaults to {@link Schedulers#parallel()} in the backoff + * strategy. + * + * @param backoffScheduler the {@link Scheduler} to use + * @return a new copy of the builder which can either be further configured or used as {@link Retry} + */ + public RetryBackoffSpec scheduler(Scheduler backoffScheduler) { + return new RetryBackoffSpec( + this.maxAttempts, + this.errorFilter, + this.isTransientErrors, + this.minBackoff, + this.maxBackoff, + this.jitterFactor, + Objects.requireNonNull(backoffScheduler, "backoffScheduler"), + this.syncPreRetry, + this.syncPostRetry, + this.asyncPreRetry, + this.asyncPostRetry, + this.retryExhaustedGenerator); + } + + //========== + // strategy + //========== + + protected void validateArguments() { + if (jitterFactor < 0 || jitterFactor > 1) throw new IllegalArgumentException("jitterFactor must be between 0 and 1 (default 0.5)"); + Objects.requireNonNull(this.backoffScheduler, "backoffScheduler must not be null (default Schedulers.parallel())"); + } + + @Override + public Flux generateCompanion(Flux t) { + validateArguments(); + return t.concatMap(retryWhenState -> { + //capture the state immediately + RetrySignal copy = retryWhenState.copy(); + Throwable currentFailure = copy.failure(); + long iteration = isTransientErrors ? copy.totalRetriesInARow() : copy.totalRetries(); + + if (currentFailure == null) { + return Mono.error(new IllegalStateException("Retry.RetrySignal#failure() not expected to be null")); + } + + if (!errorFilter.test(currentFailure)) { + return Mono.error(currentFailure); + } + + if (iteration >= maxAttempts) { + return Mono.error(retryExhaustedGenerator.apply(this, copy)); + } + + Duration nextBackoff; + try { + nextBackoff = minBackoff.multipliedBy((long) Math.pow(2, iteration)); + if (nextBackoff.compareTo(maxBackoff) > 0) { + nextBackoff = maxBackoff; + } + } + catch (ArithmeticException overflow) { + nextBackoff = maxBackoff; + } + + //short-circuit delay == 0 case + if (nextBackoff.isZero()) { + return RetrySpec.applyHooks(copy, Mono.just(iteration), + syncPreRetry, syncPostRetry, asyncPreRetry, asyncPostRetry); + } + + ThreadLocalRandom random = ThreadLocalRandom.current(); + + long jitterOffset; + try { + jitterOffset = nextBackoff.multipliedBy((long) (100 * jitterFactor)) + .dividedBy(100) + .toMillis(); + } + catch (ArithmeticException ae) { + jitterOffset = Math.round(Long.MAX_VALUE * jitterFactor); + } + long lowBound = Math.max(minBackoff.minus(nextBackoff) + .toMillis(), -jitterOffset); + long highBound = Math.min(maxBackoff.minus(nextBackoff) + .toMillis(), jitterOffset); + + long jitter; + if (highBound == lowBound) { + if (highBound == 0) jitter = 0; + else jitter = random.nextLong(highBound); + } + else { + jitter = random.nextLong(lowBound, highBound); + } + Duration effectiveBackoff = nextBackoff.plusMillis(jitter); + return RetrySpec.applyHooks(copy, Mono.delay(effectiveBackoff, backoffScheduler), + syncPreRetry, syncPostRetry, asyncPreRetry, asyncPostRetry); + }); + } + + @Override + public Retry get() { + return this; + } +} diff --git a/reactor-core/src/main/java/reactor/util/retry/RetrySpec.java b/reactor-core/src/main/java/reactor/util/retry/RetrySpec.java new file mode 100644 index 0000000000..1b0f868a7c --- /dev/null +++ b/reactor-core/src/main/java/reactor/util/retry/RetrySpec.java @@ -0,0 +1,381 @@ +/* + * Copyright (c) 2011-Present Pivotal Software Inc, All Rights Reserved. 
+ * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ + +package reactor.util.retry; + +import java.time.Duration; +import java.util.Objects; +import java.util.function.BiFunction; +import java.util.function.Consumer; +import java.util.function.Function; +import java.util.function.Predicate; +import java.util.function.Supplier; + +import reactor.core.Exceptions; +import reactor.core.publisher.Flux; +import reactor.core.publisher.Mono; + +/** + * A builder for a simple count-based retry strategy with fine grained options: errors that match + * the {@link #filter(Predicate)} are retried (by default all), up to {@link #maxAttempts(long)} times. + *

+ * When the maximum number of retry attempts is reached, a runtime exception is propagated downstream which
+ * can be pinpointed with {@link reactor.core.Exceptions#isRetryExhausted(Throwable)}. The cause of
+ * the last attempt's failure is attached as said {@link reactor.core.Exceptions#retryExhausted(String, Throwable) retryExhausted}
+ * exception's cause. This can be customized with {@link #onRetryExhaustedThrow(BiFunction)}.
+ *
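Editorial sketch, not from the patch, showing how the default exhausted exception can be replaced, using the spec's public maxAttempts field and the last RetrySignal; the class and constant names are made up.

import reactor.util.retry.Retry;
import reactor.util.retry.RetrySpec;

class CustomExhaustionExample {
	// Replaces the default Exceptions.retryExhausted(...) error with a domain-specific one.
	static final RetrySpec CUSTOM_EXHAUSTION = Retry.max(3)
			.onRetryExhaustedThrow((spec, signal) ->
					new IllegalStateException("gave up after " + signal.totalRetries()
							+ " retries (limit " + spec.maxAttempts + ")", signal.failure()));
}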

+ * Additionally, to help deal with bursts of transient errors in a long-lived Flux as if each burst
+ * had its own attempt counter, one can choose to set {@link #transientErrors(boolean)} to {@code true}.
+ * The comparison to {@link #maxAttempts(long)} will then be done with the number of subsequent attempts
+ * that failed without an {@link org.reactivestreams.Subscriber#onNext(Object) onNext} in between.
+ *
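Editorial sketch, not part of the diff, of the transient mode for the simple strategy; both forms below should be equivalent given the maxInARow factory in Retry.java. The class and constant names are made up.

import reactor.util.retry.Retry;

class TransientRetryExample {
	// Allow at most 2 consecutive failures: the counter restarts whenever the source emits
	// an onNext after a retry, so a long-lived flow survives short bursts of errors.
	static final Retry PER_BURST = Retry.maxInARow(2);

	// Explicit equivalent using the general factory plus transientErrors(boolean).
	static final Retry PER_BURST_EXPLICIT = Retry.max(2).transientErrors(true);
}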

+ * The {@link RetrySpec} is copy-on-write and as such can be stored as a "template" and further configured + * by different components without a risk of modifying the original configuration. + * + * @author Simon Baslé + */ +public final class RetrySpec implements Retry, Supplier { + + static final Duration MAX_BACKOFF = Duration.ofMillis(Long.MAX_VALUE); + static final Consumer NO_OP_CONSUMER = rs -> {}; + static final BiFunction, Mono> NO_OP_BIFUNCTION = (rs, m) -> m; + + + static final BiFunction + RETRY_EXCEPTION_GENERATOR = (builder, rs) -> + Exceptions.retryExhausted("Retries exhausted: " + ( + builder.isTransientErrors + ? rs.totalRetriesInARow() + "/" + builder.maxAttempts + " in a row (" + rs.totalRetries() + " total)" + : rs.totalRetries() + "/" + builder.maxAttempts + ), rs.failure()); + + /** + * The configured maximum for retry attempts. + * + * @see #maxAttempts(long) + */ + public final long maxAttempts; + + /** + * The configured {@link Predicate} to filter which exceptions to retry. + * @see #filter(Predicate) + * @see #modifyErrorFilter(Function) + */ + public final Predicate errorFilter; + + /** + * The configured transient error handling flag. + * @see #transientErrors(boolean) + */ + public final boolean isTransientErrors; + + final Consumer doPreRetry; + final Consumer doPostRetry; + final BiFunction, Mono> asyncPreRetry; + final BiFunction, Mono> asyncPostRetry; + + final BiFunction retryExhaustedGenerator; + + /** + * Copy constructor. + */ + RetrySpec(long max, + Predicate aThrowablePredicate, + boolean isTransientErrors, + Consumer doPreRetry, + Consumer doPostRetry, + BiFunction, Mono> asyncPreRetry, + BiFunction, Mono> asyncPostRetry, + BiFunction retryExhaustedGenerator) { + this.maxAttempts = max; + this.errorFilter = aThrowablePredicate::test; //massaging type + this.isTransientErrors = isTransientErrors; + this.doPreRetry = doPreRetry; + this.doPostRetry = doPostRetry; + this.asyncPreRetry = asyncPreRetry; + this.asyncPostRetry = asyncPostRetry; + this.retryExhaustedGenerator = retryExhaustedGenerator; + } + + /** + * Set the maximum number of retry attempts allowed. 1 meaning "1 retry attempt": + * the original subscription plus an extra re-subscription in case of an error, but + * no more. + * + * @param maxAttempts the new retry attempt limit + * @return the builder for further configuration + */ + public RetrySpec maxAttempts(long maxAttempts) { + return new RetrySpec( + maxAttempts, + this.errorFilter, + this.isTransientErrors, + this.doPreRetry, + this.doPostRetry, + this.asyncPreRetry, + this.asyncPostRetry, + this.retryExhaustedGenerator); + } + + /** + * Set the {@link Predicate} that will filter which errors can be retried. Exceptions + * that don't pass the predicate will be propagated downstream and terminate the retry + * sequence. Defaults to allowing retries for all exceptions. + * + * @param errorFilter the predicate to filter which exceptions can be retried + * @return a new copy of the builder which can either be further configured or used as {@link Retry} + */ + public RetrySpec filter(Predicate errorFilter) { + return new RetrySpec( + this.maxAttempts, + Objects.requireNonNull(errorFilter, "errorFilter"), + this.isTransientErrors, + this.doPreRetry, + this.doPostRetry, + this.asyncPreRetry, + this.asyncPostRetry, + this.retryExhaustedGenerator); + } + + /** + * Allows to augment a previously {@link #filter(Predicate) set} {@link Predicate} with + * a new condition to allow retries of some exception or not. 
This can typically be used with + * {@link Predicate#and(Predicate)} to combine existing predicate(s) with a new one. + *

+ * For example: + *


+	 * //given
+	 * RetrySpec retryTwiceIllegalArgument = Retry.max(2)
+	 *     .filter(e -> e instanceof IllegalArgumentException);
+	 *
+	 * RetrySpec retryTwiceIllegalArgWithCause = retryTwiceIllegalArgument.modifyErrorFilter(old ->
+	 *     old.and(e -> e.getCause() != null));
+	 * 
+ * + * @param predicateAdjuster a {@link Function} that returns a new {@link Predicate} given the + * currently in place {@link Predicate} (usually deriving from the old predicate). + * @return a new copy of the builder which can either be further configured or used as {@link Retry} + */ + public RetrySpec modifyErrorFilter( + Function, Predicate> predicateAdjuster) { + Objects.requireNonNull(predicateAdjuster, "predicateAdjuster"); + Predicate newPredicate = Objects.requireNonNull(predicateAdjuster.apply(this.errorFilter), + "predicateAdjuster must return a new predicate"); + return new RetrySpec( + this.maxAttempts, + newPredicate, + this.isTransientErrors, + this.doPreRetry, + this.doPostRetry, + this.asyncPreRetry, + this.asyncPostRetry, + this.retryExhaustedGenerator); + } + + /** + * Add synchronous behavior to be executed before the retry trigger is emitted in + * the companion publisher. This should not be blocking, as the companion publisher + * might be executing in a shared thread. + * + * @param doBeforeRetry the synchronous hook to execute before retry trigger is emitted + * @return a new copy of the builder which can either be further configured or used as {@link Retry} + * @see #doBeforeRetryAsync(Function) andDelayRetryWith for an asynchronous version + */ + public RetrySpec doBeforeRetry( + Consumer doBeforeRetry) { + return new RetrySpec( + this.maxAttempts, + this.errorFilter, + this.isTransientErrors, + this.doPreRetry.andThen(doBeforeRetry), + this.doPostRetry, + this.asyncPreRetry, + this.asyncPostRetry, + this.retryExhaustedGenerator); + } + + /** + * Add synchronous behavior to be executed after the retry trigger is emitted in + * the companion publisher. This should not be blocking, as the companion publisher + * might be publishing events in a shared thread. + * + * @param doAfterRetry the synchronous hook to execute after retry trigger is started + * @return a new copy of the builder which can either be further configured or used as {@link Retry} + * @see #doAfterRetryAsync(Function) andRetryThen for an asynchronous version + */ + public RetrySpec doAfterRetry(Consumer doAfterRetry) { + return new RetrySpec( + this.maxAttempts, + this.errorFilter, + this.isTransientErrors, + this.doPreRetry, + this.doPostRetry.andThen(doAfterRetry), + this.asyncPreRetry, + this.asyncPostRetry, + this.retryExhaustedGenerator); + } + + /** + * Add asynchronous behavior to be executed before the current retry trigger in the companion publisher, + * thus delaying the resulting retry trigger with the additional {@link Mono}. + * + * @param doAsyncBeforeRetry the asynchronous hook to execute before original retry trigger is emitted + * @return a new copy of the builder which can either be further configured or used as {@link Retry} + */ + public RetrySpec doBeforeRetryAsync( + Function> doAsyncBeforeRetry) { + return new RetrySpec( + this.maxAttempts, + this.errorFilter, + this.isTransientErrors, + this.doPreRetry, + this.doPostRetry, + (rs, m) -> asyncPreRetry.apply(rs, m).then(doAsyncBeforeRetry.apply(rs)), + this.asyncPostRetry, + this.retryExhaustedGenerator); + } + + /** + * Add asynchronous behavior to be executed after the current retry trigger in the companion publisher, + * thus delaying the resulting retry trigger with the additional {@link Mono}. 
+ * + * @param doAsyncAfterRetry the asynchronous hook to execute after original retry trigger is emitted + * @return a new copy of the builder which can either be further configured or used as {@link Retry} + */ + public RetrySpec doAfterRetryAsync( + Function> doAsyncAfterRetry) { + return new RetrySpec( + this.maxAttempts, + this.errorFilter, + this.isTransientErrors, + this.doPreRetry, + this.doPostRetry, + this.asyncPreRetry, + (rs, m) -> asyncPostRetry.apply(rs, m).then(doAsyncAfterRetry.apply(rs)), + this.retryExhaustedGenerator); + } + + /** + * Set the generator for the {@link Exception} to be propagated when the maximum amount of retries + * is exhausted. By default, throws an {@link Exceptions#retryExhausted(String, Throwable)} with the + * message reflecting the total attempt index, transient attempt index and maximum retry count. + * The cause of the last {@link RetrySignal} is also added as the exception's cause. + * + * @param retryExhaustedGenerator the {@link Function} that generates the {@link Throwable} for the last + * {@link RetrySignal} + * @return a new copy of the builder which can either be further configured or used as {@link Retry} + */ + public RetrySpec onRetryExhaustedThrow(BiFunction retryExhaustedGenerator) { + return new RetrySpec( + this.maxAttempts, + this.errorFilter, + this.isTransientErrors, + this.doPreRetry, + this.doPostRetry, + this.asyncPreRetry, + this.asyncPostRetry, + Objects.requireNonNull(retryExhaustedGenerator, "retryExhaustedGenerator")); + } + + /** + * Set the transient error mode, indicating that the strategy being built should use + * {@link RetrySignal#totalRetriesInARow()} rather than {@link RetrySignal#totalRetries()}. + * Transient errors are errors that could occur in bursts but are then recovered from by + * a retry (with one or more onNext signals) before another error occurs. + *

+ * In the case of a simple count-based retry, this means that the {@link #maxAttempts(long)} + * is applied to each burst individually. + * + * @param isTransientErrors {@code true} to activate transient mode + * @return a new copy of the builder which can either be further configured or used as {@link Retry} + */ + public RetrySpec transientErrors(boolean isTransientErrors) { + return new RetrySpec( + this.maxAttempts, + this.errorFilter, + isTransientErrors, + this.doPreRetry, + this.doPostRetry, + this.asyncPreRetry, + this.asyncPostRetry, + this.retryExhaustedGenerator); + } + + //========== + // strategy + //========== + + @Override + public Flux generateCompanion(Flux flux) { + return flux.concatMap(retryWhenState -> { + //capture the state immediately + RetrySignal copy = retryWhenState.copy(); + Throwable currentFailure = copy.failure(); + long iteration = isTransientErrors ? copy.totalRetriesInARow() : copy.totalRetries(); + + if (currentFailure == null) { + return Mono.error(new IllegalStateException("RetryWhenState#failure() not expected to be null")); + } + else if (!errorFilter.test(currentFailure)) { + return Mono.error(currentFailure); + } + else if (iteration >= maxAttempts) { + return Mono.error(retryExhaustedGenerator.apply(this, copy)); + } + else { + return applyHooks(copy, Mono.just(iteration), doPreRetry, doPostRetry, asyncPreRetry, asyncPostRetry); + } + }); + } + + //=================== + // utility functions + //=================== + + static Mono applyHooks(RetrySignal copyOfSignal, + Mono originalCompanion, + final Consumer doPreRetry, + final Consumer doPostRetry, + final BiFunction, Mono> asyncPreRetry, + final BiFunction, Mono> asyncPostRetry) { + if (doPreRetry != NO_OP_CONSUMER) { + try { + doPreRetry.accept(copyOfSignal); + } + catch (Throwable e) { + return Mono.error(e); + } + } + + Mono postRetrySyncMono; + if (doPostRetry != NO_OP_CONSUMER) { + postRetrySyncMono = Mono.fromRunnable(() -> doPostRetry.accept(copyOfSignal)); + } + else { + postRetrySyncMono = Mono.empty(); + } + + Mono preRetryMono = asyncPreRetry == NO_OP_BIFUNCTION ? Mono.empty() : asyncPreRetry.apply(copyOfSignal, Mono.empty()); + Mono postRetryMono = asyncPostRetry != NO_OP_BIFUNCTION ? 
asyncPostRetry.apply(copyOfSignal, postRetrySyncMono) : postRetrySyncMono; + + return preRetryMono.then(originalCompanion).flatMap(postRetryMono::thenReturn); + } + + @Override + public Retry get() { + return this; + } +} diff --git a/reactor-core/src/test/java/reactor/core/ExceptionsTest.java b/reactor-core/src/test/java/reactor/core/ExceptionsTest.java index 57dfe12a72..a43021bb3a 100644 --- a/reactor-core/src/test/java/reactor/core/ExceptionsTest.java +++ b/reactor-core/src/test/java/reactor/core/ExceptionsTest.java @@ -16,6 +16,7 @@ package reactor.core; import java.io.IOException; +import java.time.Duration; import java.util.List; import java.util.Arrays; import java.util.Collections; @@ -504,4 +505,32 @@ public void unwrapMultipleExcludingTraceback() { .hasSize(2) .hasOnlyElementsOfType(IllegalStateException.class); } + + @Test + public void isRetryExhausted() { + Throwable match1 = Exceptions.retryExhausted("only a message", null); + Throwable match2 = Exceptions.retryExhausted("message and cause", new RuntimeException("cause: boom")); + Throwable noMatch = new IllegalStateException("Retry exhausted: 10/10"); + + assertThat(Exceptions.isRetryExhausted(null)).as("null").isFalse(); + assertThat(Exceptions.isRetryExhausted(match1)).as("match1").isTrue(); + assertThat(Exceptions.isRetryExhausted(match2)).as("match2").isTrue(); + assertThat(Exceptions.isRetryExhausted(noMatch)).as("noMatch").isFalse(); + } + + @Test + public void retryExhaustedMessageWithNoCause() { + Throwable retryExhausted = Exceptions.retryExhausted("message with no cause", null); + + assertThat(retryExhausted).hasMessage("message with no cause") + .hasNoCause(); + } + + @Test + public void retryExhaustedMessageWithCause() { + Throwable retryExhausted = Exceptions.retryExhausted("message with cause", new RuntimeException("boom")); + + assertThat(retryExhausted).hasMessage("message with cause") + .hasCause(new RuntimeException("boom")); + } } \ No newline at end of file diff --git a/reactor-core/src/test/java/reactor/core/publisher/FluxRetryPredicateTest.java b/reactor-core/src/test/java/reactor/core/publisher/FluxRetryPredicateTest.java index b8203fc7cb..64c9a49237 100644 --- a/reactor-core/src/test/java/reactor/core/publisher/FluxRetryPredicateTest.java +++ b/reactor-core/src/test/java/reactor/core/publisher/FluxRetryPredicateTest.java @@ -18,6 +18,7 @@ import java.util.concurrent.atomic.AtomicBoolean; import java.util.concurrent.atomic.AtomicInteger; +import java.util.function.Predicate; import org.junit.Test; import reactor.test.StepVerifier; @@ -36,7 +37,7 @@ public void sourceNull() { @Test(expected = NullPointerException.class) public void predicateNull() { Flux.never() - .retry(null); + .retry((Predicate) null); } @Test diff --git a/reactor-core/src/test/java/reactor/core/publisher/FluxRetryWhenTest.java b/reactor-core/src/test/java/reactor/core/publisher/FluxRetryWhenTest.java index 1c7ee8b523..e7a652fe9a 100644 --- a/reactor-core/src/test/java/reactor/core/publisher/FluxRetryWhenTest.java +++ b/reactor-core/src/test/java/reactor/core/publisher/FluxRetryWhenTest.java @@ -23,6 +23,7 @@ import java.util.List; import java.util.concurrent.atomic.AtomicBoolean; import java.util.concurrent.atomic.AtomicInteger; +import java.util.concurrent.atomic.AtomicLong; import java.util.function.Function; import java.util.stream.Collectors; @@ -30,9 +31,10 @@ import org.assertj.core.api.LongAssert; import org.assertj.core.data.Percentage; import org.junit.Test; -import org.reactivestreams.Publisher; import 
org.reactivestreams.Subscription; + import reactor.core.CoreSubscriber; +import reactor.core.Exceptions; import reactor.core.Scannable; import reactor.core.scheduler.Scheduler; import reactor.core.scheduler.Schedulers; @@ -41,6 +43,8 @@ import reactor.test.subscriber.AssertSubscriber; import reactor.util.context.Context; import reactor.util.function.Tuple2; +import reactor.util.retry.Retry; +import reactor.util.retry.RetryBackoffSpec; import static org.assertj.core.api.Assertions.assertThat; @@ -57,10 +61,17 @@ public void sourceNull() { new FluxRetryWhen<>(null, v -> v); } + @SuppressWarnings("deprecation") + @Test(expected = NullPointerException.class) + public void whenThrowableFactoryNull() { + Flux.never() + .retryWhen((Function) null); + } + @Test(expected = NullPointerException.class) - public void whenFactoryNull() { + public void whenRetrySignalFactoryNull() { Flux.never() - .retryWhen(null); + .retryWhen((Retry) null); } @Test @@ -69,7 +80,7 @@ public void cancelsOther() { Flux when = Flux.range(1, 10) .doOnCancel(() -> cancelled.set(true)); - StepVerifier.create(justError.retryWhen(other -> when)) + StepVerifier.create(justError.retryWhen((Retry) other -> when)) .thenCancel() .verify(); @@ -82,7 +93,7 @@ public void cancelTwiceCancelsOtherOnce() { Flux when = Flux.range(1, 10) .doOnCancel(cancelled::incrementAndGet); - justError.retryWhen(other -> when) + justError.retryWhen((Retry) other -> when) .subscribe(new BaseSubscriber() { @Override protected void hookOnSubscribe(Subscription subscription) { @@ -103,7 +114,7 @@ public void directOtherErrorPreventsSubscribe() { .doOnSubscribe(sub -> sourceSubscribed.set(true)) .doOnCancel(() -> sourceCancelled.set(true)); - Flux retry = source.retryWhen(other -> Mono.error(new IllegalStateException("boom"))); + Flux retry = source.retryWhen((Retry) other -> Mono.error(new IllegalStateException("boom"))); StepVerifier.create(retry) .expectSubscription() @@ -123,7 +134,7 @@ public void lateOtherErrorCancelsSource() { .doOnCancel(() -> sourceCancelled.set(true)); - Flux retry = source.retryWhen(other -> other.flatMap(l -> + Flux retry = source.retryWhen((Retry) other -> other.flatMap(l -> count.getAndIncrement() == 0 ? 
Mono.just(l) : Mono.error(new IllegalStateException("boom")))); StepVerifier.create(retry) @@ -144,7 +155,7 @@ public void directOtherEmptyPreventsSubscribeAndCompletes() { .doOnSubscribe(sub -> sourceSubscribed.set(true)) .doOnCancel(() -> sourceCancelled.set(true)); - Flux retry = source.retryWhen(other -> Flux.empty()); + Flux retry = source.retryWhen((Retry) other -> Flux.empty()); StepVerifier.create(retry) .expectSubscription() @@ -162,7 +173,7 @@ public void lateOtherEmptyCancelsSourceAndCompletes() { .doOnSubscribe(sub -> sourceSubscribed.set(true)) .doOnCancel(() -> sourceCancelled.set(true)); - Flux retry = source.retryWhen(other -> other.take(1)); + Flux retry = source.retryWhen((Retry) other -> other.take(1)); StepVerifier.create(retry) .expectSubscription() @@ -178,7 +189,7 @@ public void lateOtherEmptyCancelsSourceAndCompletes() { public void coldRepeater() { AssertSubscriber ts = AssertSubscriber.create(); - justError.retryWhen(v -> Flux.range(1, 10)) + justError.retryWhen((Retry) other -> Flux.range(1, 10)) .subscribe(ts); ts.assertValues(1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1) @@ -190,7 +201,7 @@ public void coldRepeater() { public void coldRepeaterBackpressured() { AssertSubscriber ts = AssertSubscriber.create(0); - rangeError.retryWhen(v -> Flux.range(1, 5)) + rangeError.retryWhen((Retry) other -> Flux.range(1, 5)) .subscribe(ts); ts.assertNoValues() @@ -226,7 +237,7 @@ public void coldRepeaterBackpressured() { public void coldEmpty() { AssertSubscriber ts = AssertSubscriber.create(0); - rangeError.retryWhen(v -> Flux.empty()) + rangeError.retryWhen((Retry) other -> Flux.empty()) .subscribe(ts); ts.assertNoValues() @@ -238,7 +249,7 @@ public void coldEmpty() { public void coldError() { AssertSubscriber ts = AssertSubscriber.create(0); - rangeError.retryWhen(v -> Flux.error(new RuntimeException("forced failure"))) + rangeError.retryWhen((Retry) other -> Flux.error(new RuntimeException("forced failure"))) .subscribe(ts); ts.assertNoValues() @@ -251,7 +262,7 @@ public void coldError() { public void whenFactoryThrows() { AssertSubscriber ts = AssertSubscriber.create(); - rangeError.retryWhen(v -> { + rangeError.retryWhen((Retry) other -> { throw new RuntimeException("forced failure"); }) .subscribe(ts); @@ -260,27 +271,25 @@ public void whenFactoryThrows() { .assertError(RuntimeException.class) .assertErrorMessage("forced failure") .assertNotComplete(); - } @Test public void whenFactoryReturnsNull() { AssertSubscriber ts = AssertSubscriber.create(); - rangeError.retryWhen(v -> null) + rangeError.retryWhen((Retry) other -> null) .subscribe(ts); ts.assertNoValues() .assertError(NullPointerException.class) .assertNotComplete(); - } @Test public void retryErrorsInResponse() { AssertSubscriber ts = AssertSubscriber.create(); - rangeError.retryWhen(v -> v.map(a -> { + rangeError.retryWhen((Retry) v -> v.map(a -> { throw new RuntimeException("forced failure"); })) .subscribe(ts); @@ -296,7 +305,7 @@ public void retryErrorsInResponse() { public void retryAlways() { AssertSubscriber ts = AssertSubscriber.create(0); - rangeError.retryWhen(v -> v) + rangeError.retryWhen((Retry) v -> v) .subscribe(ts); ts.request(8); @@ -306,22 +315,26 @@ public void retryAlways() { .assertNotComplete(); } - Flux exponentialRetryScenario() { + Flux linearRetryScenario() { AtomicInteger i = new AtomicInteger(); return Flux.create(s -> { if (i.incrementAndGet() == 4) { s.next("hey"); + s.complete(); } else { s.error(new RuntimeException("test " + i)); } - }).retryWhen(repeat -> repeat.zipWith(Flux.range(1, 3), 
(t1, t2) -> t2) - .flatMap(time -> Mono.delay(Duration.ofSeconds(time)))); + }).retryWhen(Retry + .max(3) + .doBeforeRetry(rs -> System.out.println(rs.copy())) + .doBeforeRetryAsync(rs -> Mono.delay(Duration.ofSeconds(rs.totalRetries() + 1)).then()) + ); } @Test - public void exponentialRetry() { - StepVerifier.withVirtualTime(this::exponentialRetryScenario) + public void linearRetry() { + StepVerifier.withVirtualTime(this::linearRetryScenario) .thenAwait(Duration.ofSeconds(6)) .expectNext("hey") .expectComplete() @@ -362,7 +375,7 @@ public void scanOtherSubscriber() { @Test public void inners() { CoreSubscriber actual = new LambdaSubscriber<>(null, e -> {}, null, null); - CoreSubscriber signaller = new LambdaSubscriber<>(null, e -> {}, null, null); + CoreSubscriber signaller = new LambdaSubscriber<>(null, e -> {}, null, null); Flux when = Flux.empty(); FluxRetryWhen.RetryWhenMainSubscriber main = new FluxRetryWhen .RetryWhenMainSubscriber<>(actual, signaller, when); @@ -386,21 +399,21 @@ public void retryWhenContextTrigger_MergesOriginalContext() { contextPerRetry.add(sig.getContext()); } }) - .retryWhen(errorFlux -> errorFlux.handle((e, sink) -> { + .retryWhen((Retry) retrySignalFlux -> retrySignalFlux.handle((rs, sink) -> { Context ctx = sink.currentContext(); int rl = ctx.getOrDefault("retriesLeft", 0); if (rl > 0) { sink.next(Context.of("retriesLeft", rl - 1)); } else { - sink.error(new IllegalStateException("retries exhausted", e)); + sink.error(Exceptions.retryExhausted("retries exhausted", rs.failure())); } })) .subscriberContext(Context.of("retriesLeft", RETRY_COUNT)) .subscriberContext(Context.of("thirdPartyContext", "present")); StepVerifier.create(retryWithContext) - .expectErrorSatisfies(e -> assertThat(e).isInstanceOf(IllegalStateException.class) + .expectErrorSatisfies(e -> assertThat(e).matches(Exceptions::isRetryExhausted, "isRetryExhausted") .hasMessage("retries exhausted") .hasCause(new IllegalStateException("boom"))) .verify(Duration.ofSeconds(1)); @@ -416,7 +429,11 @@ public void fluxRetryRandomBackoff() { StepVerifier.withVirtualTime(() -> Flux.concat(Flux.range(0, 2), Flux.error(exception)) - .retryBackoff(4, Duration.ofMillis(100), Duration.ofMillis(2000), 0.1) + .retryWhen(Retry + .backoff(4, Duration.ofMillis(100)) + .maxBackoff(Duration.ofMillis(2000)) + .jitter(0.1) + ) .elapsed() .doOnNext(elapsed -> { if (elapsed.getT2() == 0) elapsedList.add(elapsed.getT1());} ) .map(Tuple2::getT2) @@ -449,7 +466,10 @@ public void fluxRetryRandomBackoffDefaultJitter() { StepVerifier.withVirtualTime(() -> Flux.concat(Flux.range(0, 2), Flux.error(exception)) - .retryBackoff(4, Duration.ofMillis(100), Duration.ofMillis(2000)) + .retryWhen(Retry + .backoff(4, Duration.ofMillis(100)) + .maxBackoff(Duration.ofMillis(2000)) + ) .elapsed() .doOnNext(elapsed -> { if (elapsed.getT2() == 0) elapsedList.add(elapsed.getT1());} ) .map(Tuple2::getT2) @@ -482,8 +502,7 @@ public void fluxRetryRandomBackoffDefaultMaxDuration() { StepVerifier.withVirtualTime(() -> Flux.concat(Flux.range(0, 2), Flux.error(exception)) - .log() - .retryBackoff(4, Duration.ofMillis(100)) + .retryWhen(Retry.backoff(4, Duration.ofMillis(100))) .elapsed() .doOnNext(elapsed -> { if (elapsed.getT2() == 0) elapsedList.add(elapsed.getT1());} ) .map(Tuple2::getT2) @@ -516,7 +535,11 @@ public void fluxRetryRandomBackoff_maxBackoffShaves() { StepVerifier.withVirtualTime(() -> Flux.concat(Flux.range(0, 2), Flux.error(exception)) - .retryBackoff(4, Duration.ofMillis(100), Duration.ofMillis(220), 0.9) + .retryWhen(Retry + 
.backoff(4, Duration.ofMillis(100)) + .maxBackoff(Duration.ofMillis(220)) + .jitter(0.9) + ) .elapsed() .doOnNext(elapsed -> { if (elapsed.getT2() == 0) elapsedList.add(elapsed.getT1());} ) .map(Tuple2::getT2) @@ -560,7 +583,11 @@ public void fluxRetryRandomBackoff_minBackoffFloor() { StepVerifier.withVirtualTime(() -> Flux.concat(Flux.range(0, 2), Flux.error(exception)) - .retryBackoff(1, Duration.ofMillis(100), Duration.ofMillis(2000), 0.9) + .retryWhen(Retry + .backoff(1, Duration.ofMillis(100)) + .maxBackoff(Duration.ofMillis(2000)) + .jitter(0.9) + ) .elapsed() .doOnNext(elapsed -> { if (elapsed.getT2() == 0) elapsedList.add(elapsed.getT1());} ) .map(Tuple2::getT2) @@ -591,7 +618,11 @@ public void fluxRetryRandomBackoff_noRandomness() { StepVerifier.withVirtualTime(() -> Flux.concat(Flux.range(0, 2), Flux.error(exception)) - .retryBackoff(4, Duration.ofMillis(100), Duration.ofMillis(2000), 0) + .retryWhen(Retry + .backoff(4, Duration.ofMillis(100)) + .maxBackoff(Duration.ofMillis(2000)) + .jitter(0) + ) .elapsed() .doOnNext(elapsed -> { if (elapsed.getT2() == 0) elapsedList.add(elapsed.getT1());} ) .map(Tuple2::getT2) @@ -613,15 +644,14 @@ public void fluxRetryRandomBackoffNoArithmeticException() { final Duration INIT = Duration.ofSeconds(10); StepVerifier.withVirtualTime(() -> { - Function, Publisher> backoffFunction = FluxRetryWhen.randomExponentialBackoffFunction( - 80, //with pure exponential, this amount of retries would overflow Duration's capacity - INIT, - EXPLICIT_MAX, - 0d, - Schedulers.parallel()); + RetryBackoffSpec retryBuilder = Retry + //with pure exponential, 80 retries would overflow Duration's capacity + .backoff(80, INIT) + .maxBackoff(EXPLICIT_MAX) + .jitter(0d); return Flux.error(new IllegalStateException("boom")) - .retryWhen(backoffFunction); + .retryWhen(retryBuilder); }) .expectSubscription() .thenAwait(Duration.ofNanos(Long.MAX_VALUE)) @@ -633,16 +663,19 @@ public void fluxRetryRandomBackoffNoArithmeticException() { } @Test - public void fluxRetryBackoffWithGivenScheduler() { + public void fluxRetryBackoffWithSpecificScheduler() { VirtualTimeScheduler backoffScheduler = VirtualTimeScheduler.create(); Exception exception = new IOException("boom retry"); StepVerifier.create( Flux.concat(Flux.range(0, 2), Flux.error(exception)) - .log() - .retryBackoff(4, Duration.ofHours(1), Duration.ofHours(1), 0, backoffScheduler) - .doOnNext(i -> System.out.println(i + " on " + Thread.currentThread().getName())) + .retryWhen(Retry + .backoff(4, Duration.ofHours(1)) + .maxBackoff(Duration.ofHours(1)) + .jitter(0) + .scheduler(backoffScheduler) + ) ) .expectNext(0, 1) //normal output .expectNoEvent(Duration.ofMillis(100)) @@ -656,7 +689,7 @@ public void fluxRetryBackoffWithGivenScheduler() { @Test public void fluxRetryBackoffRetriesOnGivenScheduler() { - //the fluxRetryBackoffWithGivenScheduler above is not suitable to verify the retry scheduler, as VTS is akin to immediate() + //the fluxRetryBackoffWithSpecificScheduler above is not suitable to verify the retry scheduler, as VTS is akin to immediate() //and doesn't really change the Thread Scheduler backoffScheduler = Schedulers.newSingle("backoffScheduler"); String main = Thread.currentThread().getName(); @@ -665,8 +698,12 @@ public void fluxRetryBackoffRetriesOnGivenScheduler() { try { StepVerifier.create(Flux.concat(Flux.range(0, 2), Flux.error(exception)) .doOnError(e -> threadNames.add(Thread.currentThread().getName().replaceAll("-\\d+", ""))) - .retryBackoff(2, Duration.ofMillis(10), Duration.ofMillis(100), 0.5d, 
backoffScheduler) - + .retryWhen(Retry + .backoff(2, Duration.ofMillis(10)) + .maxBackoff(Duration.ofMillis(100)) + .jitter(0.5d) + .scheduler(backoffScheduler) + ) ) .expectNext(0, 1, 0, 1, 0, 1) .expectErrorSatisfies(e -> assertThat(e).isInstanceOf(IllegalStateException.class) @@ -682,4 +719,123 @@ public void fluxRetryBackoffRetriesOnGivenScheduler() { backoffScheduler.dispose(); } } + + @Test + public void backoffFunctionNotTransient() { + Flux source = transientErrorSource(); + + Retry retryFunction = + Retry.backoff(2, Duration.ZERO) + .maxBackoff(Duration.ofMillis(100)) + .jitter(0d) + .transientErrors(false); + + new FluxRetryWhen<>(source, retryFunction) + .as(StepVerifier::create) + .expectNext(3, 4) + .expectErrorMessage("Retries exhausted: 2/2") + .verify(Duration.ofSeconds(2)); + } + + @Test + public void backoffFunctionTransient() { + Flux source = transientErrorSource(); + + Retry retryFunction = + Retry.backoff(2, Duration.ZERO) + .maxBackoff(Duration.ofMillis(100)) + .jitter(0d) + .transientErrors(true); + + new FluxRetryWhen<>(source, retryFunction) + .as(StepVerifier::create) + .expectNext(3, 4, 7, 8, 11, 12) + .expectComplete() + .verify(Duration.ofSeconds(2)); + } + + @Test + public void simpleFunctionTransient() { + Flux source = transientErrorSource(); + + Retry retryFunction = + Retry.max(2) + .transientErrors(true); + + new FluxRetryWhen<>(source, retryFunction) + .as(StepVerifier::create) + .expectNext(3, 4, 7, 8, 11, 12) + .expectComplete() + .verify(Duration.ofSeconds(2)); + } + + @Test + public void gh1978() { + final int elementPerCycle = 3; + final int stopAfterCycles = 10; + Flux source = + Flux.generate(() -> new AtomicLong(0), (counter, s) -> { + long currentCount = counter.incrementAndGet(); + if (currentCount % elementPerCycle == 0) { + s.error(new RuntimeException("Error!")); + } + else { + s.next(currentCount); + } + return counter; + }); + + List pauses = new ArrayList<>(); + + StepVerifier.withVirtualTime(() -> + source.retryWhen(Retry + .backoff(Long.MAX_VALUE, Duration.ofSeconds(1)) + .maxBackoff(Duration.ofMinutes(1)) + .jitter(0d) + .transientErrors(true) + ) + .take(stopAfterCycles * elementPerCycle) + .elapsed() + .map(Tuple2::getT1) + .doOnNext(pause -> { if (pause > 500) pauses.add(pause / 1000); }) + ) + .thenAwait(Duration.ofHours(1)) + .expectNextCount(stopAfterCycles * elementPerCycle) + .expectComplete() + .verify(Duration.ofSeconds(1)); + + assertThat(pauses).allMatch(p -> p == 1, "pause is constantly 1s"); + } + + public static Flux transientErrorSource() { + AtomicInteger count = new AtomicInteger(); + return Flux.generate(sink -> { + int step = count.incrementAndGet(); + switch (step) { + case 1: + case 2: + case 5: + case 6: + case 9: + case 10: + sink.error(new IllegalStateException("failing on step " + step)); + break; + case 3: //should reset + case 4: //should NOT reset + case 7: //should reset + case 8: //should NOT reset + case 11: //should reset + sink.next(step); + break; + case 12: + sink.next(step); //should NOT reset + sink.complete(); + break; + default: + sink.complete(); + break; + } + }); + } + } diff --git a/reactor-core/src/test/java/reactor/core/publisher/MonoRetryWhenTest.java b/reactor-core/src/test/java/reactor/core/publisher/MonoRetryWhenTest.java index fb0beae6e4..6e555df9ee 100644 --- a/reactor-core/src/test/java/reactor/core/publisher/MonoRetryWhenTest.java +++ b/reactor-core/src/test/java/reactor/core/publisher/MonoRetryWhenTest.java @@ -21,14 +21,17 @@ import java.util.List; import 
java.util.concurrent.TimeUnit; import java.util.concurrent.atomic.AtomicInteger; +import java.util.function.Function; import org.assertj.core.data.Percentage; import org.junit.Test; +import org.reactivestreams.Publisher; import reactor.core.scheduler.Scheduler; import reactor.core.scheduler.Schedulers; import reactor.test.StepVerifier; import reactor.test.scheduler.VirtualTimeScheduler; +import reactor.util.retry.Retry; import static org.assertj.core.api.Assertions.assertThat; @@ -44,8 +47,9 @@ Mono exponentialRetryScenario() { else { s.error(new RuntimeException("test " + i)); } - }).retryWhen(repeat -> repeat.zipWith(Flux.range(1, 3), (t1, t2) -> t2) - .flatMap(time -> Mono.delay(Duration.ofSeconds(time)))); + }).retryWhen((Function, Publisher>) companion -> companion + .zipWith(Flux.range(1, 3), (t1, t2) -> t2) + .flatMap(time -> Mono.delay(Duration.ofSeconds(time)))); } @Test @@ -69,7 +73,10 @@ public void monoRetryRandomBackoff() { errorCount.incrementAndGet(); elapsedList.add(Schedulers.parallel().now(TimeUnit.MILLISECONDS)); }) - .retryBackoff(4, Duration.ofMillis(100), Duration.ofMillis(2000), 0.1) + .retryWhen(Retry.backoff(4, Duration.ofMillis(100)) + .maxBackoff(Duration.ofMillis(2000)) + .jitter(0.1) + ) ) .thenAwait(Duration.ofMinutes(1)) //ensure whatever the jittered delay that we have time to fit 4 retries .expectErrorSatisfies(e -> assertThat(e).isInstanceOf(IllegalStateException.class) @@ -103,7 +110,9 @@ public void monoRetryRandomBackoffDefaultJitter() { errorCount.incrementAndGet(); elapsedList.add(Schedulers.parallel().now(TimeUnit.MILLISECONDS)); }) - .retryBackoff(4, Duration.ofMillis(100), Duration.ofMillis(2000)) + .retryWhen(Retry.backoff(4, Duration.ofMillis(100)) + .maxBackoff(Duration.ofMillis(2000)) + ) ) .thenAwait(Duration.ofMinutes(1)) //ensure whatever the jittered delay that we have time to fit 4 retries .expectErrorSatisfies(e -> assertThat(e).isInstanceOf(IllegalStateException.class) @@ -138,7 +147,7 @@ public void monoRetryRandomBackoffDefaultMaxDuration() { errorCount.incrementAndGet(); elapsedList.add(Schedulers.parallel().now(TimeUnit.MILLISECONDS)); }) - .retryBackoff(4, Duration.ofMillis(100)) + .retryWhen(Retry.backoff(4, Duration.ofMillis(100))) ) .thenAwait(Duration.ofMinutes(1)) //ensure whatever the jittered delay that we have time to fit 4 retries .expectErrorSatisfies(e -> assertThat(e).isInstanceOf(IllegalStateException.class) @@ -172,7 +181,10 @@ public void monoRetryRandomBackoff_maxBackoffShaves() { errorCount.incrementAndGet(); elapsedList.add(Schedulers.parallel().now(TimeUnit.MILLISECONDS)); }) - .retryBackoff(4, Duration.ofMillis(100), Duration.ofMillis(220), 0.9) + .retryWhen(Retry.backoff(4, Duration.ofMillis(100)) + .maxBackoff(Duration.ofMillis(220)) + .jitter(0.9) + ) ) .thenAwait(Duration.ofMinutes(1)) //ensure whatever the jittered delay that we have time to fit 4 retries .expectErrorSatisfies(e -> assertThat(e).isInstanceOf(IllegalStateException.class) @@ -212,7 +224,10 @@ public void monoRetryRandomBackoff_minBackoffFloor() { errorCount.incrementAndGet(); elapsedList.add(Schedulers.parallel().now(TimeUnit.MILLISECONDS)); }) - .retryBackoff(1, Duration.ofMillis(100), Duration.ofMillis(2000), 0.9) + .retryWhen(Retry.backoff(1, Duration.ofMillis(100)) + .maxBackoff(Duration.ofMillis(2000)) + .jitter(0.9) + ) ) .thenAwait(Duration.ofMinutes(1)) //ensure whatever the jittered delay that we have time to fit 4 retries .expectErrorSatisfies(e -> assertThat(e).isInstanceOf(IllegalStateException.class) @@ -241,7 +256,10 @@ public 
void monoRetryRandomBackoff_noRandom() { errorCount.incrementAndGet(); elapsedList.add(Schedulers.parallel().now(TimeUnit.MILLISECONDS)); }) - .retryBackoff(4, Duration.ofMillis(100), Duration.ofMillis(2000), 0d) + .retryWhen(Retry.backoff(4, Duration.ofMillis(100)) + .maxBackoff(Duration.ofMillis(2000)) + .jitter(0d) + ) ) .thenAwait(Duration.ofMinutes(1)) //ensure whatever the jittered delay that we have time to fit 4 retries .expectErrorSatisfies(e -> assertThat(e).isInstanceOf(IllegalStateException.class) @@ -257,7 +275,6 @@ public void monoRetryRandomBackoff_noRandom() { assertThat(elapsedList.get(4) - elapsedList.get(3)).isEqualTo(800); } - @Test public void monoRetryBackoffWithGivenScheduler() { VirtualTimeScheduler backoffScheduler = VirtualTimeScheduler.create(); @@ -268,7 +285,11 @@ public void monoRetryBackoffWithGivenScheduler() { StepVerifier.create( Mono.error(exception) .doOnError(t -> errorCount.incrementAndGet()) - .retryBackoff(4, Duration.ofMillis(10), Duration.ofMillis(100), 0, backoffScheduler) + .retryWhen(Retry.backoff(4, Duration.ofMillis(10)) + .maxBackoff(Duration.ofMillis(100)) + .jitter(0) + .scheduler(backoffScheduler) + ) ) .expectSubscription() .expectNoEvent(Duration.ofMillis(400)) @@ -293,8 +314,11 @@ public void monoRetryBackoffRetriesOnGivenScheduler() { try { StepVerifier.create(Mono.error(exception) .doOnError(e -> threadNames.add(Thread.currentThread().getName().replaceFirst("-\\d+", ""))) - .retryBackoff(2, Duration.ofMillis(10), Duration.ofMillis(100), 0.5d, backoffScheduler) - + .retryWhen(Retry.backoff(2, Duration.ofMillis(10)) + .maxBackoff(Duration.ofMillis(100)) + .jitter(0.5d) + .scheduler(backoffScheduler) + ) ) .expectErrorSatisfies(e -> assertThat(e).isInstanceOf(IllegalStateException.class) .hasMessage("Retries exhausted: 2/2") diff --git a/reactor-core/src/test/java/reactor/guide/GuideDebuggingExtraTests.java b/reactor-core/src/test/java/reactor/guide/GuideDebuggingExtraTests.java index a787c0227d..bb0fa15cc4 100644 --- a/reactor-core/src/test/java/reactor/guide/GuideDebuggingExtraTests.java +++ b/reactor-core/src/test/java/reactor/guide/GuideDebuggingExtraTests.java @@ -20,6 +20,7 @@ import java.io.StringWriter; import org.junit.Test; + import reactor.core.publisher.Flux; import reactor.core.publisher.Hooks; @@ -50,9 +51,9 @@ public void debuggingActivatedWithDeepTraceback() { + "\t|_ Flux.map ⇢ at reactor.guide.FakeRepository.findAllUserByName(FakeRepository.java:27)\n" + "\t|_ Flux.map ⇢ at reactor.guide.FakeRepository.findAllUserByName(FakeRepository.java:28)\n" + "\t|_ Flux.filter ⇢ at reactor.guide.FakeUtils1.lambda$static$1(FakeUtils1.java:29)\n" - + "\t|_ Flux.transform ⇢ at reactor.guide.GuideDebuggingExtraTests.debuggingActivatedWithDeepTraceback(GuideDebuggingExtraTests.java:40)\n" + + "\t|_ Flux.transform ⇢ at reactor.guide.GuideDebuggingExtraTests.debuggingActivatedWithDeepTraceback(GuideDebuggingExtraTests.java:41)\n" + "\t|_ Flux.elapsed ⇢ at reactor.guide.FakeUtils2.lambda$static$0(FakeUtils2.java:30)\n" - + "\t|_ Flux.transform ⇢ at reactor.guide.GuideDebuggingExtraTests.debuggingActivatedWithDeepTraceback(GuideDebuggingExtraTests.java:41)\n"); + + "\t|_ Flux.transform ⇢ at reactor.guide.GuideDebuggingExtraTests.debuggingActivatedWithDeepTraceback(GuideDebuggingExtraTests.java:42)\n"); } finally { Hooks.resetOnOperatorDebug(); diff --git a/reactor-core/src/test/java/reactor/guide/GuideTests.java b/reactor-core/src/test/java/reactor/guide/GuideTests.java index 229c27c9a6..36010a4f25 100644 --- 
a/reactor-core/src/test/java/reactor/guide/GuideTests.java +++ b/reactor-core/src/test/java/reactor/guide/GuideTests.java @@ -36,13 +36,16 @@ import java.util.stream.Collectors; import java.util.stream.Stream; +import org.assertj.core.api.Assertions; import org.junit.After; import org.junit.Before; import org.junit.Rule; import org.junit.Test; import org.junit.rules.TestName; +import org.mockito.internal.matchers.Null; import org.reactivestreams.Publisher; import org.reactivestreams.Subscription; + import reactor.core.Disposable; import reactor.core.Exceptions; import reactor.core.publisher.BaseSubscriber; @@ -59,6 +62,7 @@ import reactor.util.context.Context; import reactor.util.function.Tuple2; import reactor.util.function.Tuples; +import reactor.util.retry.Retry; import static org.assertj.core.api.Assertions.assertThat; @@ -655,7 +659,7 @@ public void errorHandlingRetryWhenApproximateRetry() { Flux flux = Flux.error(new IllegalArgumentException()) // <1> .doOnError(System.out::println) // <2> - .retryWhen(companion -> companion.take(3)); // <3> + .retryWhen((Function, Publisher>) companion -> companion.take(3)); // <3> StepVerifier.create(flux) .verifyComplete(); @@ -666,38 +670,102 @@ public void errorHandlingRetryWhenApproximateRetry() { @Test public void errorHandlingRetryWhenEquatesRetry() { + AtomicInteger errorCount = new AtomicInteger(); Flux flux = - Flux.error(new IllegalArgumentException()) - .retryWhen(companion -> companion - .zipWith(Flux.range(1, 4), (error, index) -> { // <1> - if (index < 4) return index; // <2> - else throw Exceptions.propagate(error); // <3> - }) - ); + Flux.error(new IllegalArgumentException()) + .doOnError(e -> errorCount.incrementAndGet()) + .retryWhen((Retry) companion -> // <1> + companion.map(rs -> { // <2> + if (rs.totalRetries() < 3) return rs.totalRetries(); // <3> + else throw Exceptions.propagate(rs.failure()); // <4> + }) + ); StepVerifier.create(flux) .verifyError(IllegalArgumentException.class); - StepVerifier.create(Flux.error(new IllegalArgumentException()).retry(3)) + AtomicInteger retryNErrorCount = new AtomicInteger(); + StepVerifier.create(Flux.error(new IllegalArgumentException()).doOnError(e -> retryNErrorCount.incrementAndGet()).retry(3)) .verifyError(); + + assertThat(errorCount).hasValue(retryNErrorCount.get()); + } + + @Test + public void errorHandlingRetryBuilders() { + Throwable exception = new IllegalStateException("boom"); + Flux errorFlux = Flux.error(exception); + + errorFlux.retryWhen(Retry.max(3)) + .as(StepVerifier::create) + .verifyErrorSatisfies(e -> assertThat(e) + .hasMessage("Retries exhausted: 3/3") + .hasCause(exception)); + + errorFlux.retryWhen(Retry.max(3).filter(error -> error instanceof NullPointerException)) + .as(StepVerifier::create) + .verifyErrorMessage("boom"); } @Test public void errorHandlingRetryWhenExponential() { + AtomicInteger errorCount = new AtomicInteger(); Flux flux = - Flux.error(new IllegalArgumentException()) - .retryWhen(companion -> companion - .doOnNext(s -> System.out.println(s + " at " + LocalTime.now())) // <1> - .zipWith(Flux.range(1, 4), (error, index) -> { // <2> - if (index < 4) return index; - else throw Exceptions.propagate(error); + Flux.error(new IllegalStateException("boom")) + .doOnError(e -> { // <1> + errorCount.incrementAndGet(); + System.out.println(e + " at " + LocalTime.now()); }) - .flatMap(index -> Mono.delay(Duration.ofMillis(index * 100))) // <3> - .doOnNext(s -> System.out.println("retried at " + LocalTime.now())) // <4> - ); + .retryWhen(Retry + .backoff(3, 
Duration.ofMillis(100)).jitter(0d) // <2> + .doAfterRetry(rs -> System.out.println("retried at " + LocalTime.now())) // <3> + .onRetryExhaustedThrow((spec, rs) -> rs.failure()) // <4> + ); StepVerifier.create(flux) - .verifyError(IllegalArgumentException.class); + .verifyErrorSatisfies(e -> Assertions + .assertThat(e) + .isInstanceOf(IllegalStateException.class) + .hasMessage("boom")); + + assertThat(errorCount).hasValue(4); + } + + @Test + public void errorHandlingRetryWhenTransient() { + AtomicInteger errorCount = new AtomicInteger(); // <1> + AtomicInteger transientHelper = new AtomicInteger(); + Flux transientFlux = Flux.generate(sink -> { + int i = transientHelper.getAndIncrement(); + if (i == 10) { // <2> + sink.next(i); + sink.complete(); + } + else if (i % 3 == 0) { // <3> + sink.next(i); + } + else { + sink.error(new IllegalStateException("Transient error at " + i)); // <4> + } + }) + .doOnError(e -> errorCount.incrementAndGet()); + +transientFlux.retryWhen(Retry.max(2).transientErrors(true)) // <5> + .blockLast(); +assertThat(errorCount).hasValue(6); // <6> + + transientHelper.set(0); + transientFlux.retryWhen(Retry.max(2).transientErrors(true)) + .as(StepVerifier::create) + .expectNext(0, 3, 6, 9, 10) + .verifyComplete(); + + transientHelper.set(0); + transientFlux.retryWhen(Retry.max(2)) + .as(StepVerifier::create) + .expectNext(0, 3) + .verifyErrorMessage("Retries exhausted: 2/2"); + } public String convert(int i) throws IOException { @@ -980,7 +1048,7 @@ private void printAndAssert(Throwable t, boolean checkForAssemblySuppressed) { assertThat(withSuppressed.getSuppressed()).hasSize(1); assertThat(withSuppressed.getSuppressed()[0]) .hasMessageStartingWith("\nAssembly trace from producer [reactor.core.publisher.MonoSingle] :") - .hasMessageContaining("Flux.single ⇢ at reactor.guide.GuideTests.scatterAndGather(GuideTests.java:944)\n"); + .hasMessageContaining("Flux.single ⇢ at reactor.guide.GuideTests.scatterAndGather(GuideTests.java:1012)\n"); }); } } diff --git a/reactor-core/src/test/java/reactor/util/retry/RetryBackoffSpecTest.java b/reactor-core/src/test/java/reactor/util/retry/RetryBackoffSpecTest.java new file mode 100644 index 0000000000..5e36b25a32 --- /dev/null +++ b/reactor-core/src/test/java/reactor/util/retry/RetryBackoffSpecTest.java @@ -0,0 +1,390 @@ +/* + * Copyright (c) 2011-Present Pivotal Software Inc, All Rights Reserved. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */ + +package reactor.util.retry; + +import java.time.Duration; +import java.util.List; +import java.util.concurrent.CopyOnWriteArrayList; +import java.util.concurrent.atomic.AtomicInteger; +import java.util.function.Supplier; + +import org.junit.Test; + +import reactor.core.Exceptions; +import reactor.core.publisher.Flux; +import reactor.core.publisher.FluxRetryWhenTest; +import reactor.core.publisher.Mono; +import reactor.core.scheduler.Schedulers; +import reactor.test.StepVerifier; +import reactor.test.StepVerifierOptions; + +import static org.assertj.core.api.Assertions.assertThat; +import static org.assertj.core.api.Assertions.assertThatNullPointerException; + +public class RetryBackoffSpecTest { + + @Test + public void suppressingSchedulerFails() { + assertThatNullPointerException().isThrownBy(() -> Retry.backoff(1, Duration.ZERO).scheduler(null)) + .withMessage("backoffScheduler"); + } + + @Test + public void builderMethodsProduceNewInstances() { + RetryBackoffSpec init = Retry.backoff(1, Duration.ZERO); + assertThat(init) + .isNotSameAs(init.minBackoff(Duration.ofSeconds(1))) + .isNotSameAs(init.maxBackoff(Duration.ZERO)) + .isNotSameAs(init.jitter(0.5d)) + .isNotSameAs(init.scheduler(Schedulers.parallel())) + .isNotSameAs(init.maxAttempts(10)) + .isNotSameAs(init.filter(t -> true)) + .isNotSameAs(init.modifyErrorFilter(predicate -> predicate.and(t -> true))) + .isNotSameAs(init.transientErrors(true)) + .isNotSameAs(init.doBeforeRetry(rs -> {})) + .isNotSameAs(init.doAfterRetry(rs -> {})) + .isNotSameAs(init.doBeforeRetryAsync(rs -> Mono.empty())) + .isNotSameAs(init.doAfterRetryAsync(rs -> Mono.empty())) + .isNotSameAs(init.onRetryExhaustedThrow((b, rs) -> new IllegalStateException("boon"))); + } + + @Test + public void builderCanBeUsedAsTemplate() { + //a base builder can be reused across several Flux with different tuning for each flux + RetryBackoffSpec template = Retry.backoff(1, Duration.ZERO).transientErrors(false); + + Supplier> transientError = () -> { + AtomicInteger errorOnEven = new AtomicInteger(); + return Flux.generate(sink -> { + int i = errorOnEven.getAndIncrement(); + if (i == 5) { + sink.complete(); + } + if (i % 2 == 0) { + sink.error(new IllegalStateException("boom " + i)); + } + else { + sink.next(i); + } + }); + }; + + Flux modifiedTemplate1 = transientError.get().retryWhen(template.maxAttempts(2)); + Flux modifiedTemplate2 = transientError.get().retryWhen(template.transientErrors(true)); + + StepVerifier.create(modifiedTemplate1, StepVerifierOptions.create().scenarioName("modified template 1")) + .expectNext(1, 3) + .verifyErrorSatisfies(t -> assertThat(t) + .isInstanceOf(IllegalStateException.class) + .hasMessage("Retries exhausted: 2/2") + .hasCause(new IllegalStateException("boom 4"))); + + StepVerifier.create(modifiedTemplate2, StepVerifierOptions.create().scenarioName("modified template 2")) + .expectNext(1, 3) + .verifyComplete(); + } + + @Test + public void throwablePredicateReplacesThePredicate() { + RetryBackoffSpec retryBuilder = Retry.backoff(1, Duration.ZERO) + .filter(t -> t instanceof RuntimeException) + .filter(t -> t instanceof IllegalStateException); + + assertThat(retryBuilder.errorFilter) + .accepts(new IllegalStateException()) + .rejects(new IllegalArgumentException()) + .rejects(new RuntimeException()); + } + + @Test + public void throwablePredicateModifierAugmentsThePredicate() { + RetryBackoffSpec retryBuilder = Retry.backoff(1, Duration.ZERO) + .filter(t -> t instanceof RuntimeException) + .modifyErrorFilter(p -> p.and(t -> 
t.getMessage().length() == 3)); + + assertThat(retryBuilder.errorFilter) + .accepts(new IllegalStateException("foo")) + .accepts(new IllegalArgumentException("bar")) + .accepts(new RuntimeException("baz")) + .rejects(new RuntimeException("too big")); + } + + @Test + public void throwablePredicateModifierWorksIfNoPreviousPredicate() { + RetryBackoffSpec retryBuilder = Retry.backoff(1, Duration.ZERO) + .modifyErrorFilter(p -> p.and(t -> t.getMessage().length() == 3)); + + assertThat(retryBuilder.errorFilter) + .accepts(new IllegalStateException("foo")) + .accepts(new IllegalArgumentException("bar")) + .accepts(new RuntimeException("baz")) + .rejects(new RuntimeException("too big")); + } + + @Test + public void throwablePredicateModifierRejectsNullGenerator() { + assertThatNullPointerException().isThrownBy(() -> Retry.backoff(1, Duration.ZERO).modifyErrorFilter(p -> null)) + .withMessage("predicateAdjuster must return a new predicate"); + } + + @Test + public void throwablePredicateModifierRejectsNullFunction() { + assertThatNullPointerException().isThrownBy(() -> Retry.backoff(1, Duration.ZERO).modifyErrorFilter(null)) + .withMessage("predicateAdjuster"); + } + + @Test + public void doBeforeRetryIsCumulative() { + AtomicInteger atomic = new AtomicInteger(); + RetryBackoffSpec retryBuilder = Retry + .backoff(1, Duration.ZERO) + .doBeforeRetry(rs -> atomic.incrementAndGet()) + .doBeforeRetry(rs -> atomic.addAndGet(100)); + + retryBuilder.syncPreRetry.accept(null); + + assertThat(atomic).hasValue(101); + } + + @Test + public void doAfterRetryIsCumulative() { + AtomicInteger atomic = new AtomicInteger(); + RetryBackoffSpec retryBuilder = Retry + .backoff(1, Duration.ZERO) + .doAfterRetry(rs -> atomic.incrementAndGet()) + .doAfterRetry(rs -> atomic.addAndGet(100)); + + retryBuilder.syncPostRetry.accept(null); + + assertThat(atomic).hasValue(101); + } + + @Test + public void delayRetryWithIsCumulative() { + AtomicInteger atomic = new AtomicInteger(); + RetryBackoffSpec retryBuilder = Retry + .backoff(1, Duration.ZERO) + .doBeforeRetryAsync(rs -> Mono.fromRunnable(atomic::incrementAndGet)) + .doBeforeRetryAsync(rs -> Mono.fromRunnable(() -> atomic.addAndGet(100))); + + retryBuilder.asyncPreRetry.apply(null, Mono.empty()).block(); + + assertThat(atomic).hasValue(101); + } + + @Test + public void retryThenIsCumulative() { + AtomicInteger atomic = new AtomicInteger(); + RetryBackoffSpec retryBuilder = Retry + .backoff(1, Duration.ZERO) + .doAfterRetryAsync(rs -> Mono.fromRunnable(atomic::incrementAndGet)) + .doAfterRetryAsync(rs -> Mono.fromRunnable(() -> atomic.addAndGet(100))); + + retryBuilder.asyncPostRetry.apply(null, Mono.empty()).block(); + + assertThat(atomic).hasValue(101); + } + + @Test + public void retryExceptionDefaultsToRetryExhausted() { + RetryBackoffSpec retryBuilder = Retry.backoff(50, Duration.ZERO).transientErrors(true); + + final ImmutableRetrySignal trigger = new ImmutableRetrySignal(100, 50, new IllegalStateException("boom")); + + StepVerifier.create(retryBuilder.generateCompanion(Flux.just(trigger))) + .expectErrorSatisfies(e -> assertThat(e).matches(Exceptions::isRetryExhausted, "isRetryExhausted") + .hasMessage("Retries exhausted: 50/50 in a row (100 total)") + .hasCause(new IllegalStateException("boom"))) + .verify(); + } + + @Test + public void retryExceptionCanBeCustomized() { + RetryBackoffSpec retryBuilder = Retry.backoff(1, Duration.ofMillis(123)) + .onRetryExhaustedThrow((builder, rs) -> new IllegalArgumentException(builder.minBackoff.toString())); + + final 
ImmutableRetrySignal trigger = new ImmutableRetrySignal(100, 21, new IllegalStateException("boom")); + + StepVerifier.create(retryBuilder.generateCompanion(Flux.just(trigger))) + .expectErrorSatisfies(e -> assertThat(e).matches(t -> !Exceptions.isRetryExhausted(t), "is not retryExhausted") + .hasMessage("PT0.123S") + .hasNoCause()) + .verify(); + } + + @Test + public void defaultRetryExhaustedMessageWithNoTransientErrors() { + assertThat(RetryBackoffSpec.BACKOFF_EXCEPTION_GENERATOR.apply(Retry.backoff(123, Duration.ZERO), + new ImmutableRetrySignal(123, 123, null))) + .hasMessage("Retries exhausted: 123/123"); + } + + @Test + public void defaultRetryExhaustedMessageWithTransientErrors() { + assertThat(RetryBackoffSpec.BACKOFF_EXCEPTION_GENERATOR.apply(Retry.backoff(12, Duration.ZERO).transientErrors(true), + new ImmutableRetrySignal(123, 12, null))) + .hasMessage("Retries exhausted: 12/12 in a row (123 total)"); + } + + @Test + public void companionWaitsForAllHooksBeforeTrigger() { + //this tests the companion directly, vs cumulatedRetryHooks which test full integration in the retryWhen operator + IllegalArgumentException ignored = new IllegalArgumentException("ignored"); + Retry.RetrySignal sig1 = new ImmutableRetrySignal(1, 1, ignored); + Retry.RetrySignal sig2 = new ImmutableRetrySignal(2, 1, ignored); + Retry.RetrySignal sig3 = new ImmutableRetrySignal(3, 1, ignored); + + RetryBackoffSpec retryBuilder = Retry.backoff(10, Duration.ZERO).doAfterRetryAsync(rs -> Mono.delay(Duration.ofMillis(100 * (3 - rs.totalRetries()))).then()); + + StepVerifier.create(retryBuilder.generateCompanion(Flux.just(sig1, sig2, sig3).hide())) + .expectNext(1L, 2L, 3L) + .verifyComplete(); + } + + @Test + public void cumulatedRetryHooks() { + List order = new CopyOnWriteArrayList<>(); + AtomicInteger beforeHookTracker = new AtomicInteger(); + AtomicInteger afterHookTracker = new AtomicInteger(); + + RetryBackoffSpec retryBuilder = Retry + .backoff(1, Duration.ZERO) + .doBeforeRetry(s -> order.add("SyncBefore A: " + s)) + .doBeforeRetry(s -> order.add("SyncBefore B, tracking " + beforeHookTracker.incrementAndGet())) + .doAfterRetry(s -> order.add("SyncAfter A: " + s)) + .doAfterRetry(s -> order.add("SyncAfter B, tracking " + afterHookTracker.incrementAndGet())) + .doBeforeRetryAsync(s -> Mono.delay(Duration.ofMillis(200)).doOnNext(n -> order.add("AsyncBefore C")).then()) + .doBeforeRetryAsync(s -> Mono.fromRunnable(() -> { + order.add("AsyncBefore D"); + beforeHookTracker.addAndGet(100); + })) + .doAfterRetryAsync(s -> Mono.delay(Duration.ofMillis(150)).doOnNext(delayed -> order.add("AsyncAfter C " + s)).then()) + .doAfterRetryAsync(s -> Mono.fromRunnable(() -> { + order.add("AsyncAfter D"); + afterHookTracker.addAndGet(100); + })); + + Mono.error(new IllegalStateException("boom")) + .retryWhen(retryBuilder) + .as(StepVerifier::create) + .verifyError(); + + assertThat(beforeHookTracker).hasValue(101); + assertThat(afterHookTracker).hasValue(101); + + assertThat(order).containsExactly( + "SyncBefore A: attempt #1 (1 in a row), last failure={java.lang.IllegalStateException: boom}", + "SyncBefore B, tracking 1", + "AsyncBefore C", + "AsyncBefore D", + "SyncAfter A: attempt #1 (1 in a row), last failure={java.lang.IllegalStateException: boom}", + "SyncAfter B, tracking 1", + "AsyncAfter C attempt #1 (1 in a row), last failure={java.lang.IllegalStateException: boom}", + "AsyncAfter D" + ); + } + + @Test + public void cumulatedRetryHooksWithTransient() { + List order = new CopyOnWriteArrayList<>(); + AtomicInteger 
beforeHookTracker = new AtomicInteger(); + AtomicInteger afterHookTracker = new AtomicInteger(); + + RetryBackoffSpec retryBuilder = Retry + .backoff(2, Duration.ZERO) + .maxBackoff(Duration.ZERO) + .transientErrors(true) + .doBeforeRetry(s -> order.add("SyncBefore A: " + s)) + .doBeforeRetry(s -> order.add("SyncBefore B, tracking " + beforeHookTracker.incrementAndGet())) + .doAfterRetry(s -> order.add("SyncAfter A: " + s)) + .doAfterRetry(s -> order.add("SyncAfter B, tracking " + afterHookTracker.incrementAndGet())) + .doBeforeRetryAsync(s -> Mono.delay(Duration.ofMillis(200)).doOnNext(n -> order.add("AsyncBefore C")).then()) + .doBeforeRetryAsync(s -> Mono.fromRunnable(() -> { + order.add("AsyncBefore D"); + beforeHookTracker.addAndGet(100); + })) + .doAfterRetryAsync(s -> Mono.delay(Duration.ofMillis(150)).doOnNext(delayed -> order.add("AsyncAfter C " + s)).then()) + .doAfterRetryAsync(s -> Mono.fromRunnable(() -> { + order.add("AsyncAfter D"); + afterHookTracker.addAndGet(100); + })); + + FluxRetryWhenTest.transientErrorSource() + .retryWhen(retryBuilder) + .blockLast(); + + assertThat(beforeHookTracker).as("before hooks cumulated").hasValue(606); + assertThat(afterHookTracker).as("after hooks cumulated").hasValue(606); + + assertThat(order).containsExactly( + "SyncBefore A: attempt #1 (1 in a row), last failure={java.lang.IllegalStateException: failing on step 1}", + "SyncBefore B, tracking 1", + "AsyncBefore C", + "AsyncBefore D", + "SyncAfter A: attempt #1 (1 in a row), last failure={java.lang.IllegalStateException: failing on step 1}", + "SyncAfter B, tracking 1", + "AsyncAfter C attempt #1 (1 in a row), last failure={java.lang.IllegalStateException: failing on step 1}", + "AsyncAfter D", + + "SyncBefore A: attempt #2 (2 in a row), last failure={java.lang.IllegalStateException: failing on step 2}", + "SyncBefore B, tracking 102", + "AsyncBefore C", + "AsyncBefore D", + "SyncAfter A: attempt #2 (2 in a row), last failure={java.lang.IllegalStateException: failing on step 2}", + "SyncAfter B, tracking 102", + "AsyncAfter C attempt #2 (2 in a row), last failure={java.lang.IllegalStateException: failing on step 2}", + "AsyncAfter D", + + "SyncBefore A: attempt #3 (1 in a row), last failure={java.lang.IllegalStateException: failing on step 5}", + "SyncBefore B, tracking 203", + "AsyncBefore C", + "AsyncBefore D", + "SyncAfter A: attempt #3 (1 in a row), last failure={java.lang.IllegalStateException: failing on step 5}", + "SyncAfter B, tracking 203", + "AsyncAfter C attempt #3 (1 in a row), last failure={java.lang.IllegalStateException: failing on step 5}", + "AsyncAfter D", + + "SyncBefore A: attempt #4 (2 in a row), last failure={java.lang.IllegalStateException: failing on step 6}", + "SyncBefore B, tracking 304", + "AsyncBefore C", + "AsyncBefore D", + "SyncAfter A: attempt #4 (2 in a row), last failure={java.lang.IllegalStateException: failing on step 6}", + "SyncAfter B, tracking 304", + "AsyncAfter C attempt #4 (2 in a row), last failure={java.lang.IllegalStateException: failing on step 6}", + "AsyncAfter D", + + "SyncBefore A: attempt #5 (1 in a row), last failure={java.lang.IllegalStateException: failing on step 9}", + "SyncBefore B, tracking 405", + "AsyncBefore C", + "AsyncBefore D", + "SyncAfter A: attempt #5 (1 in a row), last failure={java.lang.IllegalStateException: failing on step 9}", + "SyncAfter B, tracking 405", + "AsyncAfter C attempt #5 (1 in a row), last failure={java.lang.IllegalStateException: failing on step 9}", + "AsyncAfter D", + + "SyncBefore A: attempt #6 (2 
in a row), last failure={java.lang.IllegalStateException: failing on step 10}", + "SyncBefore B, tracking 506", + "AsyncBefore C", + "AsyncBefore D", + "SyncAfter A: attempt #6 (2 in a row), last failure={java.lang.IllegalStateException: failing on step 10}", + "SyncAfter B, tracking 506", + "AsyncAfter C attempt #6 (2 in a row), last failure={java.lang.IllegalStateException: failing on step 10}", + "AsyncAfter D" + ); + } + +} \ No newline at end of file diff --git a/reactor-core/src/test/java/reactor/util/retry/RetrySpecTest.java b/reactor-core/src/test/java/reactor/util/retry/RetrySpecTest.java new file mode 100644 index 0000000000..5eaa318a6a --- /dev/null +++ b/reactor-core/src/test/java/reactor/util/retry/RetrySpecTest.java @@ -0,0 +1,404 @@ +/* + * Copyright (c) 2011-Present Pivotal Software Inc, All Rights Reserved. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * https://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ + +package reactor.util.retry; + +import java.time.Duration; +import java.util.List; +import java.util.concurrent.CopyOnWriteArrayList; +import java.util.concurrent.atomic.AtomicInteger; +import java.util.function.Function; +import java.util.function.Supplier; + +import org.junit.Test; +import org.reactivestreams.Publisher; + +import reactor.core.Exceptions; +import reactor.core.publisher.Flux; +import reactor.core.publisher.FluxRetryWhenTest; +import reactor.core.publisher.Mono; +import reactor.test.StepVerifier; +import reactor.test.StepVerifierOptions; +import reactor.util.retry.Retry.RetrySignal; + +import static org.assertj.core.api.Assertions.assertThat; +import static org.assertj.core.api.Assertions.assertThatNullPointerException; + +public class RetrySpecTest { + + @Test + public void builderMethodsProduceNewInstances() { + RetrySpec init = Retry.max(1); + assertThat(init) + .isNotSameAs(init.maxAttempts(10)) + .isNotSameAs(init.filter(t -> true)) + .isNotSameAs(init.modifyErrorFilter(predicate -> predicate.and(t -> true))) + .isNotSameAs(init.transientErrors(true)) + .isNotSameAs(init.doBeforeRetry(rs -> {})) + .isNotSameAs(init.doAfterRetry(rs -> {})) + .isNotSameAs(init.doBeforeRetryAsync(rs -> Mono.empty())) + .isNotSameAs(init.doAfterRetryAsync(rs -> Mono.empty())) + .isNotSameAs(init.onRetryExhaustedThrow((b, rs) -> new IllegalStateException("boom"))); + } + + @Test + public void builderCanBeUsedAsTemplate() { + //a base builder can be reused across several Flux with different tuning for each flux + RetrySpec template = Retry.max(1).transientErrors(false); + + Supplier> transientError = () -> { + AtomicInteger errorOnEven = new AtomicInteger(); + return Flux.generate(sink -> { + int i = errorOnEven.getAndIncrement(); + if (i == 5) { + sink.complete(); + } + if (i % 2 == 0) { + sink.error(new IllegalStateException("boom " + i)); + } + else { + sink.next(i); + } + }); + }; + + Flux modifiedTemplate1 = transientError.get().retryWhen(template.maxAttempts(2)); + Flux modifiedTemplate2 = transientError.get().retryWhen(template.transientErrors(true)); + + StepVerifier.create(modifiedTemplate1, 
StepVerifierOptions.create().scenarioName("modified template 1")) + .expectNext(1, 3) + .verifyErrorSatisfies(t -> assertThat(t) + .isInstanceOf(IllegalStateException.class) + .hasMessage("Retries exhausted: 2/2") + .hasCause(new IllegalStateException("boom 4"))); + + StepVerifier.create(modifiedTemplate2, StepVerifierOptions.create().scenarioName("modified template 2")) + .expectNext(1, 3) + .verifyComplete(); + } + + @Test + public void throwablePredicateReplacesThePredicate() { + RetrySpec retrySpec = Retry.max(1) + .filter(t -> t instanceof RuntimeException) + .filter(t -> t instanceof IllegalStateException); + + assertThat(retrySpec.errorFilter) + .accepts(new IllegalStateException()) + .rejects(new IllegalArgumentException()) + .rejects(new RuntimeException()); + } + + @Test + public void throwablePredicateModifierAugmentsThePredicate() { + RetrySpec retrySpec = Retry.max(1) + .filter(t -> t instanceof RuntimeException) + .modifyErrorFilter(p -> p.and(t -> t.getMessage().length() == 3)); + + assertThat(retrySpec.errorFilter) + .accepts(new IllegalStateException("foo")) + .accepts(new IllegalArgumentException("bar")) + .accepts(new RuntimeException("baz")) + .rejects(new RuntimeException("too big")); + } + + @Test + public void throwablePredicateModifierWorksIfNoPreviousPredicate() { + RetrySpec retrySpec = Retry.max(1) + .modifyErrorFilter(p -> p.and(t -> t.getMessage().length() == 3)); + + assertThat(retrySpec.errorFilter) + .accepts(new IllegalStateException("foo")) + .accepts(new IllegalArgumentException("bar")) + .accepts(new RuntimeException("baz")) + .rejects(new RuntimeException("too big")); + } + + @Test + public void throwablePredicateModifierRejectsNullGenerator() { + assertThatNullPointerException().isThrownBy(() -> Retry.max(1).modifyErrorFilter(p -> null)) + .withMessage("predicateAdjuster must return a new predicate"); + } + + @Test + public void throwablePredicateModifierRejectsNullFunction() { + assertThatNullPointerException().isThrownBy(() -> Retry.max(1).modifyErrorFilter(null)) + .withMessage("predicateAdjuster"); + } + + @Test + public void doBeforeRetryIsCumulative() { + AtomicInteger atomic = new AtomicInteger(); + RetrySpec retrySpec = Retry + .max(1) + .doBeforeRetry(rs -> atomic.incrementAndGet()) + .doBeforeRetry(rs -> atomic.addAndGet(100)); + + retrySpec.doPreRetry.accept(null); + + assertThat(atomic).hasValue(101); + } + + @Test + public void doAfterRetryIsCumulative() { + AtomicInteger atomic = new AtomicInteger(); + RetrySpec retrySpec = Retry + .max(1) + .doAfterRetry(rs -> atomic.incrementAndGet()) + .doAfterRetry(rs -> atomic.addAndGet(100)); + + retrySpec.doPostRetry.accept(null); + + assertThat(atomic).hasValue(101); + } + + @Test + public void delayRetryWithIsCumulative() { + AtomicInteger atomic = new AtomicInteger(); + RetrySpec retrySpec = Retry + .max(1) + .doBeforeRetryAsync(rs -> Mono.fromRunnable(atomic::incrementAndGet)) + .doBeforeRetryAsync(rs -> Mono.fromRunnable(() -> atomic.addAndGet(100))); + + retrySpec.asyncPreRetry.apply(null, Mono.empty()).block(); + + assertThat(atomic).hasValue(101); + } + + @Test + public void retryThenIsCumulative() { + AtomicInteger atomic = new AtomicInteger(); + RetrySpec retrySpec = Retry + .max(1) + .doAfterRetryAsync(rs -> Mono.fromRunnable(atomic::incrementAndGet)) + .doAfterRetryAsync(rs -> Mono.fromRunnable(() -> atomic.addAndGet(100))); + + retrySpec.asyncPostRetry.apply(null, Mono.empty()).block(); + + assertThat(atomic).hasValue(101); + } + + + @Test + public void 
retryExceptionDefaultsToRetryExhausted() { + RetrySpec retrySpec = Retry.max(50).transientErrors(true); + + final ImmutableRetrySignal trigger = new ImmutableRetrySignal(100, 50, new IllegalStateException("boom")); + + StepVerifier.create(retrySpec.generateCompanion(Flux.just(trigger))) + .expectErrorSatisfies(e -> assertThat(e).matches(Exceptions::isRetryExhausted, "isRetryExhausted") + .hasMessage("Retries exhausted: 50/50 in a row (100 total)") + .hasCause(new IllegalStateException("boom"))) + .verify(); + } + + @Test + public void retryExceptionCanBeCustomized() { + RetrySpec retrySpec = Retry + .max(50) + .onRetryExhaustedThrow((builder, rs) -> new IllegalArgumentException("max" + builder.maxAttempts)); + + final ImmutableRetrySignal trigger = new ImmutableRetrySignal(100, 21, new IllegalStateException("boom")); + + StepVerifier.create(retrySpec.generateCompanion(Flux.just(trigger))) + .expectErrorSatisfies(e -> assertThat(e).matches(t -> !Exceptions.isRetryExhausted(t), "is not retryExhausted") + .hasMessage("max50") + .hasNoCause()) + .verify(); + } + + @Test + public void defaultRetryExhaustedMessageWithNoTransientErrors() { + assertThat(RetrySpec.RETRY_EXCEPTION_GENERATOR.apply(Retry.max(123), new ImmutableRetrySignal(123, 123, null))) + .hasMessage("Retries exhausted: 123/123"); + } + + @Test + public void defaultRetryExhaustedMessageWithTransientErrors() { + assertThat(RetrySpec.RETRY_EXCEPTION_GENERATOR.apply(Retry.max(12).transientErrors(true), + new ImmutableRetrySignal(123, 12, null))) + .hasMessage("Retries exhausted: 12/12 in a row (123 total)"); + } + + @Test + public void companionWaitsForAllHooksBeforeTrigger() { + //this tests the companion directly, vs cumulatedRetryHooks which test full integration in the retryWhen operator + IllegalArgumentException ignored = new IllegalArgumentException("ignored"); + RetrySignal sig1 = new ImmutableRetrySignal(1, 1, ignored); + RetrySignal sig2 = new ImmutableRetrySignal(2, 1, ignored); + RetrySignal sig3 = new ImmutableRetrySignal(3, 1, ignored); + + RetrySpec retrySpec = Retry.max(10).doAfterRetryAsync(rs -> Mono.delay(Duration.ofMillis(100 * (3 - rs.totalRetries()))).then()); + + StepVerifier.create(retrySpec.generateCompanion(Flux.just(sig1, sig2, sig3).hide())) + .expectNext(1L, 2L, 3L) + .verifyComplete(); + } + + @Test + public void cumulatedRetryHooks() { + List order = new CopyOnWriteArrayList<>(); + AtomicInteger beforeHookTracker = new AtomicInteger(); + AtomicInteger afterHookTracker = new AtomicInteger(); + + RetrySpec retryBuilder = Retry + .max(1) + .doBeforeRetry(s -> order.add("SyncBefore A: " + s)) + .doBeforeRetry(s -> order.add("SyncBefore B, tracking " + beforeHookTracker.incrementAndGet())) + .doAfterRetry(s -> order.add("SyncAfter A: " + s)) + .doAfterRetry(s -> order.add("SyncAfter B, tracking " + afterHookTracker.incrementAndGet())) + .doBeforeRetryAsync(s -> Mono.delay(Duration.ofMillis(200)).doOnNext(n -> order.add("AsyncBefore C")).then()) + .doBeforeRetryAsync(s -> Mono.fromRunnable(() -> { + order.add("AsyncBefore D"); + beforeHookTracker.addAndGet(100); + })) + .doAfterRetryAsync(s -> Mono.delay(Duration.ofMillis(150)).doOnNext(delayed -> order.add("AsyncAfter C " + s)).then()) + .doAfterRetryAsync(s -> Mono.fromRunnable(() -> { + order.add("AsyncAfter D"); + afterHookTracker.addAndGet(100); + })); + + Mono.error(new IllegalStateException("boom")) + .retryWhen(retryBuilder) + .as(StepVerifier::create) + .verifyError(); + + assertThat(beforeHookTracker).hasValue(101); + 
assertThat(afterHookTracker).hasValue(101); + + assertThat(order).containsExactly( + "SyncBefore A: attempt #1 (1 in a row), last failure={java.lang.IllegalStateException: boom}", + "SyncBefore B, tracking 1", + "AsyncBefore C", + "AsyncBefore D", + "SyncAfter A: attempt #1 (1 in a row), last failure={java.lang.IllegalStateException: boom}", + "SyncAfter B, tracking 1", + "AsyncAfter C attempt #1 (1 in a row), last failure={java.lang.IllegalStateException: boom}", + "AsyncAfter D" + ); + } + + @Test + public void cumulatedRetryHooksWithTransient() { + List order = new CopyOnWriteArrayList<>(); + AtomicInteger beforeHookTracker = new AtomicInteger(); + AtomicInteger afterHookTracker = new AtomicInteger(); + + RetrySpec retryBuilder = Retry + .maxInARow(2) + .doBeforeRetry(s -> order.add("SyncBefore A: " + s)) + .doBeforeRetry(s -> order.add("SyncBefore B, tracking " + beforeHookTracker.incrementAndGet())) + .doAfterRetry(s -> order.add("SyncAfter A: " + s)) + .doAfterRetry(s -> order.add("SyncAfter B, tracking " + afterHookTracker.incrementAndGet())) + .doBeforeRetryAsync(s -> Mono.delay(Duration.ofMillis(200)).doOnNext(n -> order.add("AsyncBefore C")).then()) + .doBeforeRetryAsync(s -> Mono.fromRunnable(() -> { + order.add("AsyncBefore D"); + beforeHookTracker.addAndGet(100); + })) + .doAfterRetryAsync(s -> Mono.delay(Duration.ofMillis(150)).doOnNext(delayed -> order.add("AsyncAfter C " + s)).then()) + .doAfterRetryAsync(s -> Mono.fromRunnable(() -> { + order.add("AsyncAfter D"); + afterHookTracker.addAndGet(100); + })); + + FluxRetryWhenTest.transientErrorSource() + .retryWhen(retryBuilder) + .blockLast(); + + assertThat(beforeHookTracker).as("before hooks cumulated").hasValue(606); + assertThat(afterHookTracker).as("after hooks cumulated").hasValue(606); + + assertThat(order).containsExactly( + "SyncBefore A: attempt #1 (1 in a row), last failure={java.lang.IllegalStateException: failing on step 1}", + "SyncBefore B, tracking 1", + "AsyncBefore C", + "AsyncBefore D", + "SyncAfter A: attempt #1 (1 in a row), last failure={java.lang.IllegalStateException: failing on step 1}", + "SyncAfter B, tracking 1", + "AsyncAfter C attempt #1 (1 in a row), last failure={java.lang.IllegalStateException: failing on step 1}", + "AsyncAfter D", + + "SyncBefore A: attempt #2 (2 in a row), last failure={java.lang.IllegalStateException: failing on step 2}", + "SyncBefore B, tracking 102", + "AsyncBefore C", + "AsyncBefore D", + "SyncAfter A: attempt #2 (2 in a row), last failure={java.lang.IllegalStateException: failing on step 2}", + "SyncAfter B, tracking 102", + "AsyncAfter C attempt #2 (2 in a row), last failure={java.lang.IllegalStateException: failing on step 2}", + "AsyncAfter D", + + "SyncBefore A: attempt #3 (1 in a row), last failure={java.lang.IllegalStateException: failing on step 5}", + "SyncBefore B, tracking 203", + "AsyncBefore C", + "AsyncBefore D", + "SyncAfter A: attempt #3 (1 in a row), last failure={java.lang.IllegalStateException: failing on step 5}", + "SyncAfter B, tracking 203", + "AsyncAfter C attempt #3 (1 in a row), last failure={java.lang.IllegalStateException: failing on step 5}", + "AsyncAfter D", + + "SyncBefore A: attempt #4 (2 in a row), last failure={java.lang.IllegalStateException: failing on step 6}", + "SyncBefore B, tracking 304", + "AsyncBefore C", + "AsyncBefore D", + "SyncAfter A: attempt #4 (2 in a row), last failure={java.lang.IllegalStateException: failing on step 6}", + "SyncAfter B, tracking 304", + "AsyncAfter C attempt #4 (2 in a row), last 
failure={java.lang.IllegalStateException: failing on step 6}", + "AsyncAfter D", + + "SyncBefore A: attempt #5 (1 in a row), last failure={java.lang.IllegalStateException: failing on step 9}", + "SyncBefore B, tracking 405", + "AsyncBefore C", + "AsyncBefore D", + "SyncAfter A: attempt #5 (1 in a row), last failure={java.lang.IllegalStateException: failing on step 9}", + "SyncAfter B, tracking 405", + "AsyncAfter C attempt #5 (1 in a row), last failure={java.lang.IllegalStateException: failing on step 9}", + "AsyncAfter D", + + "SyncBefore A: attempt #6 (2 in a row), last failure={java.lang.IllegalStateException: failing on step 10}", + "SyncBefore B, tracking 506", + "AsyncBefore C", + "AsyncBefore D", + "SyncAfter A: attempt #6 (2 in a row), last failure={java.lang.IllegalStateException: failing on step 10}", + "SyncAfter B, tracking 506", + "AsyncAfter C attempt #6 (2 in a row), last failure={java.lang.IllegalStateException: failing on step 10}", + "AsyncAfter D" + ); + } + + @SuppressWarnings("deprecation") + @Test + public void smokeTestLambdaAmbiguity() { + //the following should just compile + + Function, Publisher> functionBased = companion -> companion.take(3); + + Flux.range(1, 10) + .retryWhen(functionBased) + .blockLast(); + + Flux.range(1, 10) + .retryWhen(Retry.max(1).get()) + .blockLast(); + + Mono.just(1) + .retryWhen(functionBased) + .block(); + + Mono.just(1) + .retryWhen(Retry.max(1)) + .block(); + } +} \ No newline at end of file From bf158d8b76c8c1a9d62999da76aa066b57cfa29d Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Simon=20Basl=C3=A9?= Date: Wed, 18 Mar 2020 18:32:47 +0100 Subject: [PATCH 2/3] Use VMware in copyright of new files, polish imports and a few javadocs --- reactor-core/src/main/java/reactor/core/Exceptions.java | 4 +--- .../src/main/java/reactor/core/publisher/FluxRetryWhen.java | 5 ++--- .../src/main/java/reactor/core/publisher/MonoRetryWhen.java | 6 ++---- .../main/java/reactor/util/retry/ImmutableRetrySignal.java | 2 +- reactor-core/src/main/java/reactor/util/retry/Retry.java | 3 +-- .../src/main/java/reactor/util/retry/RetryBackoffSpec.java | 2 +- .../src/main/java/reactor/util/retry/RetrySpec.java | 2 +- reactor-core/src/test/java/reactor/core/ExceptionsTest.java | 4 +--- reactor-core/src/test/java/reactor/guide/GuideTests.java | 3 +-- .../test/java/reactor/util/retry/RetryBackoffSpecTest.java | 2 +- .../src/test/java/reactor/util/retry/RetrySpecTest.java | 2 +- 11 files changed, 13 insertions(+), 22 deletions(-) diff --git a/reactor-core/src/main/java/reactor/core/Exceptions.java b/reactor-core/src/main/java/reactor/core/Exceptions.java index a95ddf3d60..da5b1a7b55 100644 --- a/reactor-core/src/main/java/reactor/core/Exceptions.java +++ b/reactor-core/src/main/java/reactor/core/Exceptions.java @@ -24,7 +24,6 @@ import java.util.Objects; import java.util.concurrent.RejectedExecutionException; import java.util.concurrent.atomic.AtomicReferenceFieldUpdater; -import java.util.function.Supplier; import reactor.core.publisher.Flux; import reactor.util.annotation.Nullable; @@ -343,8 +342,7 @@ public static boolean isMultiple(@Nullable Throwable t) { /** * Check a {@link Throwable} to see if it indicates too many retry attempts have failed. - * Such an exception can be created via {@link #retryExhausted(long, Throwable)} or - * {@link #retryExhausted(Duration)}. + * Such an exception can be created via {@link #retryExhausted(String, Throwable)}. 
* * @param t the {@link Throwable} to check, {@literal null} always yields {@literal false} * @return true if the Throwable is an instance representing retry exhaustion, false otherwise diff --git a/reactor-core/src/main/java/reactor/core/publisher/FluxRetryWhen.java b/reactor-core/src/main/java/reactor/core/publisher/FluxRetryWhen.java index b85410a5c3..4a38284f52 100644 --- a/reactor-core/src/main/java/reactor/core/publisher/FluxRetryWhen.java +++ b/reactor-core/src/main/java/reactor/core/publisher/FluxRetryWhen.java @@ -1,11 +1,11 @@ /* - * Copyright (c) 2011-2018 Pivotal Software Inc, All Rights Reserved. + * Copyright (c) 2011-Present VMware Inc. or its affiliates, All Rights Reserved. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * - * https://www.apache.org/licenses/LICENSE-2.0 + * https://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, @@ -17,7 +17,6 @@ import java.util.Objects; import java.util.concurrent.atomic.AtomicIntegerFieldUpdater; -import java.util.function.Function; import java.util.stream.Stream; import org.reactivestreams.Publisher; diff --git a/reactor-core/src/main/java/reactor/core/publisher/MonoRetryWhen.java b/reactor-core/src/main/java/reactor/core/publisher/MonoRetryWhen.java index 56a75df62f..bee09daeae 100644 --- a/reactor-core/src/main/java/reactor/core/publisher/MonoRetryWhen.java +++ b/reactor-core/src/main/java/reactor/core/publisher/MonoRetryWhen.java @@ -1,11 +1,11 @@ /* - * Copyright (c) 2011-2017 Pivotal Software Inc, All Rights Reserved. + * Copyright (c) 2011-Present VMware Inc. or its affiliates, All Rights Reserved. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * - * https://www.apache.org/licenses/LICENSE-2.0 + * https://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, @@ -17,9 +17,7 @@ package reactor.core.publisher; import java.util.Objects; -import java.util.function.Function; -import org.reactivestreams.Publisher; import reactor.core.CoreSubscriber; import reactor.util.retry.Retry; diff --git a/reactor-core/src/main/java/reactor/util/retry/ImmutableRetrySignal.java b/reactor-core/src/main/java/reactor/util/retry/ImmutableRetrySignal.java index 19aea93b78..eec5f08e6e 100644 --- a/reactor-core/src/main/java/reactor/util/retry/ImmutableRetrySignal.java +++ b/reactor-core/src/main/java/reactor/util/retry/ImmutableRetrySignal.java @@ -1,5 +1,5 @@ /* - * Copyright (c) 2011-Present Pivotal Software Inc, All Rights Reserved. + * Copyright (c) 2011-Present VMware Inc. or its affiliates, All Rights Reserved. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. diff --git a/reactor-core/src/main/java/reactor/util/retry/Retry.java b/reactor-core/src/main/java/reactor/util/retry/Retry.java index 643df0e4ca..0cdd3f2ead 100644 --- a/reactor-core/src/main/java/reactor/util/retry/Retry.java +++ b/reactor-core/src/main/java/reactor/util/retry/Retry.java @@ -1,5 +1,5 @@ /* - * Copyright (c) 2011-Present Pivotal Software Inc, All Rights Reserved. 
+ * Copyright (c) 2011-Present VMware Inc. or its affiliates, All Rights Reserved. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. @@ -17,7 +17,6 @@ package reactor.util.retry; import java.time.Duration; -import java.util.function.Supplier; import org.reactivestreams.Publisher; diff --git a/reactor-core/src/main/java/reactor/util/retry/RetryBackoffSpec.java b/reactor-core/src/main/java/reactor/util/retry/RetryBackoffSpec.java index fe061f8aeb..9517732eb3 100644 --- a/reactor-core/src/main/java/reactor/util/retry/RetryBackoffSpec.java +++ b/reactor-core/src/main/java/reactor/util/retry/RetryBackoffSpec.java @@ -1,5 +1,5 @@ /* - * Copyright (c) 2011-Present Pivotal Software Inc, All Rights Reserved. + * Copyright (c) 2011-Present VMware Inc. or its affiliates, All Rights Reserved. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. diff --git a/reactor-core/src/main/java/reactor/util/retry/RetrySpec.java b/reactor-core/src/main/java/reactor/util/retry/RetrySpec.java index 1b0f868a7c..ade4efa99a 100644 --- a/reactor-core/src/main/java/reactor/util/retry/RetrySpec.java +++ b/reactor-core/src/main/java/reactor/util/retry/RetrySpec.java @@ -1,5 +1,5 @@ /* - * Copyright (c) 2011-Present Pivotal Software Inc, All Rights Reserved. + * Copyright (c) 2011-Present VMware Inc. or its affiliates, All Rights Reserved. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. diff --git a/reactor-core/src/test/java/reactor/core/ExceptionsTest.java b/reactor-core/src/test/java/reactor/core/ExceptionsTest.java index a43021bb3a..0901dfc0f7 100644 --- a/reactor-core/src/test/java/reactor/core/ExceptionsTest.java +++ b/reactor-core/src/test/java/reactor/core/ExceptionsTest.java @@ -16,13 +16,11 @@ package reactor.core; import java.io.IOException; -import java.time.Duration; -import java.util.List; import java.util.Arrays; import java.util.Collections; +import java.util.List; import java.util.concurrent.RejectedExecutionException; import java.util.concurrent.atomic.AtomicReferenceFieldUpdater; -import java.util.function.Predicate; import org.junit.Before; import org.junit.Test; diff --git a/reactor-core/src/test/java/reactor/guide/GuideTests.java b/reactor-core/src/test/java/reactor/guide/GuideTests.java index 36010a4f25..f805ef73b2 100644 --- a/reactor-core/src/test/java/reactor/guide/GuideTests.java +++ b/reactor-core/src/test/java/reactor/guide/GuideTests.java @@ -42,7 +42,6 @@ import org.junit.Rule; import org.junit.Test; import org.junit.rules.TestName; -import org.mockito.internal.matchers.Null; import org.reactivestreams.Publisher; import org.reactivestreams.Subscription; @@ -1048,7 +1047,7 @@ private void printAndAssert(Throwable t, boolean checkForAssemblySuppressed) { assertThat(withSuppressed.getSuppressed()).hasSize(1); assertThat(withSuppressed.getSuppressed()[0]) .hasMessageStartingWith("\nAssembly trace from producer [reactor.core.publisher.MonoSingle] :") - .hasMessageContaining("Flux.single ⇢ at reactor.guide.GuideTests.scatterAndGather(GuideTests.java:1012)\n"); + .hasMessageContaining("Flux.single ⇢ at reactor.guide.GuideTests.scatterAndGather(GuideTests.java:1011)\n"); }); } } diff --git a/reactor-core/src/test/java/reactor/util/retry/RetryBackoffSpecTest.java b/reactor-core/src/test/java/reactor/util/retry/RetryBackoffSpecTest.java index 
5e36b25a32..f36ddfe16c 100644 --- a/reactor-core/src/test/java/reactor/util/retry/RetryBackoffSpecTest.java +++ b/reactor-core/src/test/java/reactor/util/retry/RetryBackoffSpecTest.java @@ -1,5 +1,5 @@ /* - * Copyright (c) 2011-Present Pivotal Software Inc, All Rights Reserved. + * Copyright (c) 2011-Present VMware Inc. or its affiliates, All Rights Reserved. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. diff --git a/reactor-core/src/test/java/reactor/util/retry/RetrySpecTest.java b/reactor-core/src/test/java/reactor/util/retry/RetrySpecTest.java index 5eaa318a6a..0da140112a 100644 --- a/reactor-core/src/test/java/reactor/util/retry/RetrySpecTest.java +++ b/reactor-core/src/test/java/reactor/util/retry/RetrySpecTest.java @@ -1,5 +1,5 @@ /* - * Copyright (c) 2011-Present Pivotal Software Inc, All Rights Reserved. + * Copyright (c) 2011-Present VMware Inc. or its affiliates, All Rights Reserved. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. From 754b5e9521448bdf6cbdfe373832a00ea01032f5 Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Simon=20Basl=C3=A9?= Date: Wed, 18 Mar 2020 19:15:52 +0100 Subject: [PATCH 3/3] fix another typo in javadoc --- reactor-core/src/main/java/reactor/core/Exceptions.java | 3 +-- 1 file changed, 1 insertion(+), 2 deletions(-) diff --git a/reactor-core/src/main/java/reactor/core/Exceptions.java b/reactor-core/src/main/java/reactor/core/Exceptions.java index da5b1a7b55..787be56234 100644 --- a/reactor-core/src/main/java/reactor/core/Exceptions.java +++ b/reactor-core/src/main/java/reactor/core/Exceptions.java @@ -698,8 +698,7 @@ static final class OverflowException extends IllegalStateException { * A specialized {@link IllegalStateException} to signify a {@link Flux#retryWhen(Retry) retry} * has failed (eg. after N attempts, or a timeout). * - * @see #retryExhausted(long, Throwable) - * @see #retryExhausted(Duration) + * @see #retryExhausted(String, Throwable) * @see #isRetryExhausted(Throwable) */ static final class RetryExhaustedException extends IllegalStateException {
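
A minimal usage sketch of the retry-exhaustion handling described in the javadoc above, assuming a hypothetical callRemoteService() source and a hard-coded "fallback" value (neither appears in the patch): it combines the Retry.backoff spec with Exceptions.isRetryExhausted so that the fallback is used only once the configured attempts are exhausted, while any other error keeps propagating.

    import java.time.Duration;

    import reactor.core.Exceptions;
    import reactor.core.publisher.Flux;
    import reactor.util.retry.Retry;

    public class RetryExhaustedUsageSketch {

        // Hypothetical remote call that always fails, to force retry exhaustion.
        static Flux<String> callRemoteService() {
            return Flux.error(new IllegalStateException("remote boom"));
        }

        public static void main(String[] args) {
            Flux<String> resilient = callRemoteService()
                    // retry up to 3 times with exponential backoff starting at 100ms
                    // (default jitter applies)
                    .retryWhen(Retry.backoff(3, Duration.ofMillis(100)))
                    // the exhaustion error produced by the spec is recognized by
                    // Exceptions.isRetryExhausted, so only that case falls back
                    .onErrorResume(Exceptions::isRetryExhausted, e -> Flux.just("fallback"));

            // prints "fallback" once the three backoff attempts have failed
            System.out.println(resilient.blockLast());
        }
    }

Using the predicate variant of onErrorResume keeps unrelated errors propagating as-is, mirroring the tests above that assert on the "Retries exhausted" message and its cause.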