The most notable enhancement of this version is the inclusion of the ParallelFlowable API, which allows parallel execution of a few select operators such as map, filter, concatMap, flatMap, collect, reduce and so on. Note that this is a parallel mode for Flowable (a sub-domain-specific language) rather than a new reactive base type.
Consequently, several typical operators such as take, skip and many others are not available, and there is no ParallelObservable: backpressure is essential to avoid flooding the internal queues of the parallel operators, since by expectation we go parallel precisely because processing the data on one thread is too slow.
The easiest way of entering the parallel world is by using Flowable.parallel:
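For instance, a parallel source can be created as follows (a minimal sketch assuming RxJava 2.x is on the classpath; the name psource and the range 1..1,000,000 are illustrative):

```java
import io.reactivex.Flowable;
import io.reactivex.parallel.ParallelFlowable;

public class ParallelEntry {
    public static void main(String[] args) {
        // parallel() splits the upstream values round-robin onto N "rails",
        // where N defaults to the number of available processors
        ParallelFlowable<Integer> psource = Flowable.range(1, 1_000_000).parallel();

        // the default parallelism level matches Runtime.getRuntime().availableProcessors()
        System.out.println(psource.parallelism());
    }
}
```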
By default, the parallelism level is set to the number of available CPUs (Runtime.getRuntime().availableProcessors()) and the prefetch amount from the sequential source is set to Flowable.bufferSize() (128). Both can be specified via overloads of parallel().
ParallelFlowable follows the same principles of parametric asynchrony as Flowable does; therefore, parallel() by itself doesn't introduce asynchronous consumption of the sequential source but only prepares the parallel flow. The asynchrony is defined via the runOn(Scheduler) operator.
The parallelism level (ParallelFlowable.parallelism()) doesn't have to match the parallelism level of the Scheduler; the runOn operator will use as many Scheduler.Worker instances as defined by the parallelized source. This allows ParallelFlowable to work for CPU-intensive tasks via Schedulers.computation(), blocking/IO-bound tasks through Schedulers.io(), and unit testing via TestScheduler. You can specify the prefetch amount on runOn as well.
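As a sketch of combining an explicit parallelism level with runOn (the level of 2 and prefetch of 16 here are arbitrary illustration values, not recommendations):

```java
import io.reactivex.Flowable;
import io.reactivex.parallel.ParallelFlowable;
import io.reactivex.schedulers.Schedulers;

public class RunOnExample {
    public static void main(String[] args) {
        // two rails, prefetching 16 items at a time from the sequential source
        ParallelFlowable<Integer> psource = Flowable.range(1, 100)
                .parallel(2, 16)
                // each rail gets its own Scheduler.Worker from the computation pool;
                // nothing actually runs until the flow is consumed downstream
                .runOn(Schedulers.computation());

        System.out.println(psource.parallelism());  // 2
    }
}
```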
Once the necessary parallel operations have been applied, you can return to the sequential Flowable via the ParallelFlowable.sequential() operator.
```java
Flowable<Integer> result = psource
    .filter(v -> v % 3 == 0)
    .map(v -> v * v)
    .sequential();
```
Note that sequential doesn't guarantee any ordering between values flowing through the parallel operators.
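Putting the pieces together, an end-to-end sketch (the aggregation via reduce makes the result independent of encounter order, so the lack of ordering after sequential() doesn't matter here):

```java
import io.reactivex.Flowable;
import io.reactivex.schedulers.Schedulers;

public class ParallelPipeline {
    public static void main(String[] args) {
        int sum = Flowable.range(1, 1000)
                .parallel()
                .runOn(Schedulers.computation())
                .filter(v -> v % 3 == 0)
                .map(v -> v * v)
                .sequential()              // merge the rails; arrival order is unspecified
                .reduce(0, Integer::sum)   // order-insensitive aggregation
                .blockingGet();

        System.out.println(sum);           // 111277611 on every run
    }
}
```

Because reduce with a seed is commutative-friendly here, every run prints the same total even though the per-item ordering differs between runs.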
For further details, please visit the wiki page about Parallel flows. (New content will be added there as time permits.)
API enhancements
Pull 4955: add sample() overload that can emit the very last buffered item.
Pull 4966: add strict() operator for strong Reactive-Streams conformance
Pull 4967: add subjects for Single, Maybe and Completable
Version 2.0.5 - January 27, 2017 (Maven)
Additional API enhancements:

compose() generics
Completable.hide()
Flowable.parallel() and parallel operators

Bugfixes
LambdaObserver calling dispose when terminating
takeUntil() other triggering twice
withLatestFrom null checks, lifecycle
Observable.concatMapEager bad logic for immediate scalars
observeOn, flatMap & zip
make Observable.combineLatest consistent with Flowable, fix early termination cancelling the other sources and document empty source case
A.flatMapB to eagerly check for cancellations before subscribing
ExecutorScheduler.scheduleDirect to report isDisposed on task completion

Other

@CheckReturnValue to create() methods of Subjects + Processors
sample() overloads, Maybe and Maybe.switchIfEmpty()
compile in the library's POM