public abstract class Schedulers extends Object

Schedulers provides various Scheduler flavors usable by publishOn or subscribeOn:

- parallel(): Optimized for fast Runnable non-blocking executions
- single(): Optimized for low-latency Runnable one-off executions
- boundedElastic(): Optimized for longer executions, an alternative for blocking tasks where the number of active tasks (and threads) is capped
- immediate(): to immediately run submitted Runnable instead of scheduling them (somewhat of a no-op or "null object" Scheduler)
- fromExecutorService(ExecutorService): to create new instances around Executors

Factories prefixed with new (e.g. newBoundedElastic(int, int, String)) return a new instance of their flavor of Scheduler, while other factories like boundedElastic() return a shared instance, which is the one used by operators requiring that flavor as their default Scheduler. All instances are returned in an initialized state.

Since 3.6.0, boundedElastic() can run tasks on VirtualThreads if the application runs on a Java 21+ runtime and the DEFAULT_BOUNDED_ELASTIC_ON_VIRTUAL_THREADS system property is set to true.
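As a minimal sketch of how these flavors are typically combined (the flux values and the blockingCall() helper are illustrative, not part of the API): blocking work is pushed onto boundedElastic() via subscribeOn, and downstream processing hops onto parallel() via publishOn.

```java
import reactor.core.publisher.Flux;
import reactor.core.scheduler.Schedulers;

public class SchedulersFlavorsExample {
    public static void main(String[] args) throws InterruptedException {
        Flux.range(1, 10)
            // subscription (and the blocking work below) happens on boundedElastic threads
            .subscribeOn(Schedulers.boundedElastic())
            .map(SchedulersFlavorsExample::blockingCall)
            // downstream processing is hopped onto the shared parallel Scheduler
            .publishOn(Schedulers.parallel())
            .subscribe(v -> System.out.println(Thread.currentThread().getName() + " -> " + v));

        Thread.sleep(1000); // give the asynchronous pipeline time to complete in this demo
    }

    // stand-in for a blocking I/O call; illustrative only
    private static int blockingCall(int i) {
        try {
            Thread.sleep(10);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        return i * 2;
    }
}
```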
Modifier and Type | Class and Description |
---|---|
static interface | Schedulers.Factory: Public factory hook to override Schedulers behavior globally |
static class | Schedulers.Snapshot: It is also Disposable in case you don't want to restore the live Schedulers |
Modifier and Type | Field and Description |
---|---|
static boolean | DEFAULT_BOUNDED_ELASTIC_ON_VIRTUAL_THREADS: Default execution of enqueued tasks on Thread#ofVirtual for the global boundedElastic() Scheduler, initialized by the system property reactor.schedulers.defaultBoundedElasticOnVirtualThreads and falls back to false. |
static int | DEFAULT_BOUNDED_ELASTIC_QUEUESIZE: Default maximum number of enqueued tasks PER THREAD for the global boundedElastic() Scheduler, initialized by the system property reactor.schedulers.defaultBoundedElasticQueueSize and falls back to a bound of 100 000 tasks per backing thread. |
static int | DEFAULT_BOUNDED_ELASTIC_SIZE: Default maximum size for the global boundedElastic() Scheduler, initialized by the system property reactor.schedulers.defaultBoundedElasticSize and falls back to 10 x number of processors available to the runtime on init. |
static int | DEFAULT_POOL_SIZE: Default pool size, initialized by the system property reactor.schedulers.defaultPoolSize and falls back to the number of processors available to the runtime on init. |
Constructor and Description |
---|
Schedulers() |
Modifier and Type | Method and Description |
---|---|
static boolean | addExecutorServiceDecorator(String key, BiFunction<Scheduler,ScheduledExecutorService,ScheduledExecutorService> decorator): Set up an additional ScheduledExecutorService decorator for a given key only if that key is not already present. |
static Scheduler | boundedElastic(): The common boundedElastic instance, a Scheduler that dynamically creates a bounded number of workers. |
static ScheduledExecutorService | decorateExecutorService(Scheduler owner, ScheduledExecutorService original): This method is aimed at Scheduler implementors, enabling custom implementations that are backed by a ScheduledExecutorService to also have said executors decorated (i.e. for instrumentation purposes). |
static void | disableMetrics(): Deprecated. Prefer using Micrometer#timedScheduler from the reactor-core-micrometer module. To be removed at the earliest in 3.6.0. |
static void | enableMetrics(): Deprecated. Prefer using Micrometer#timedScheduler from the reactor-core-micrometer module. To be removed at the earliest in 3.6.0. |
static Scheduler | fromExecutor(Executor executor): Create a Scheduler which uses a backing Executor to schedule Runnables for async operators. |
static Scheduler | fromExecutor(Executor executor, boolean trampoline): Create a Scheduler which uses a backing Executor to schedule Runnables for async operators. |
static Scheduler | fromExecutorService(ExecutorService executorService): Create a Scheduler which uses a backing ExecutorService to schedule Runnables for async operators. |
static Scheduler | fromExecutorService(ExecutorService executorService, String executorName): Create a Scheduler which uses a backing ExecutorService to schedule Runnables for async operators. |
static Scheduler | immediate(): Executes tasks immediately instead of scheduling them. |
static boolean | isInNonBlockingThread(): Check if calling a Reactor blocking API in the current Thread is forbidden or not. |
static boolean | isNonBlockingThread(Thread t): Check if calling a Reactor blocking API in the given Thread is forbidden or not. |
static Scheduler | newBoundedElastic(int threadCap, int queuedTaskCap, String name): Scheduler that dynamically creates a bounded number of ExecutorService-based Workers, reusing them once the Workers have been shut down. |
static Scheduler | newBoundedElastic(int threadCap, int queuedTaskCap, String name, int ttlSeconds): Scheduler that dynamically creates a bounded number of ExecutorService-based Workers, reusing them once the Workers have been shut down. |
static Scheduler | newBoundedElastic(int threadCap, int queuedTaskCap, String name, int ttlSeconds, boolean daemon): Scheduler that dynamically creates a bounded number of ExecutorService-based Workers, reusing them once the Workers have been shut down. |
static Scheduler | newBoundedElastic(int threadCap, int queuedTaskCap, ThreadFactory threadFactory, int ttlSeconds): Scheduler that dynamically creates a bounded number of ExecutorService-based Workers, reusing them once the Workers have been shut down. |
static Scheduler | newParallel(int parallelism, ThreadFactory threadFactory): Scheduler that hosts a fixed pool of single-threaded ExecutorService-based workers and is suited for parallel work. |
static Scheduler | newParallel(String name): Scheduler that hosts a fixed pool of single-threaded ExecutorService-based workers and is suited for parallel work. |
static Scheduler | newParallel(String name, int parallelism): Scheduler that hosts a fixed pool of single-threaded ExecutorService-based workers and is suited for parallel work. |
static Scheduler | newParallel(String name, int parallelism, boolean daemon): Scheduler that hosts a fixed pool of single-threaded ExecutorService-based workers and is suited for parallel work. |
static Scheduler | newSingle(String name): Scheduler that hosts a single-threaded ExecutorService-based worker. |
static Scheduler | newSingle(String name, boolean daemon): Scheduler that hosts a single-threaded ExecutorService-based worker. |
static Scheduler | newSingle(ThreadFactory threadFactory): Scheduler that hosts a single-threaded ExecutorService-based worker. |
static void | onHandleError(BiConsumer<Thread,? super Throwable> subHook): Define a hook anonymous part that is executed alongside keyed parts when a Scheduler has handled an error. |
static void | onHandleError(String key, BiConsumer<Thread,? super Throwable> subHook): Define a keyed hook part that is executed alongside other parts when a Scheduler has handled an error. |
static Runnable | onSchedule(Runnable runnable): Applies the hooks registered with onScheduleHook(String, Function). |
static void | onScheduleHook(String key, Function<Runnable,Runnable> decorator): Add or replace a named scheduling decorator. |
static Scheduler | parallel(): The common parallel instance, a Scheduler that hosts a fixed pool of single-threaded ExecutorService-based workers and is suited for parallel work. |
static void | registerNonBlockingThreadPredicate(Predicate<Thread> predicate) |
static BiFunction<Scheduler,ScheduledExecutorService,ScheduledExecutorService> | removeExecutorServiceDecorator(String key): Remove an existing ScheduledExecutorService decorator if it has been set up via addExecutorServiceDecorator(String, BiFunction). |
static void | resetFactory(): Re-apply the default factory to Schedulers. |
static void | resetFrom(Schedulers.Snapshot snapshot): Replace the current Factory and shared Schedulers with the ones saved in a previously captured snapshot. |
static void | resetNonBlockingThreadPredicate(): Unregisters all the Predicates registered so far via registerNonBlockingThreadPredicate(Predicate). |
static void | resetOnHandleError(): Reset the onHandleError(BiConsumer) hook to the default no-op behavior, erasing all sub-hooks that might have been individually added via onHandleError(String, BiConsumer) or the whole hook set via onHandleError(BiConsumer). |
static void | resetOnHandleError(String key): Reset a specific onHandleError hook part keyed to the provided String, removing that sub-hook if it has previously been defined via onHandleError(String, BiConsumer). |
static void | resetOnScheduleHook(String key): Reset a specific onScheduleHook sub-hook if it has been set up via onScheduleHook(String, Function). |
static void | resetOnScheduleHooks(): Remove all onScheduleHook sub-hooks. |
static void | setExecutorServiceDecorator(String key, BiFunction<Scheduler,ScheduledExecutorService,ScheduledExecutorService> decorator): Set up an additional ScheduledExecutorService decorator for a given key, even if that key is already present. |
static void | setFactory(Schedulers.Factory factoryInstance): Replace Schedulers factories (newParallel, newSingle and newBoundedElastic). |
static Schedulers.Snapshot | setFactoryWithSnapshot(Schedulers.Factory newFactory): Replace Schedulers factories while capturing the previous ones in a Schedulers.Snapshot that can later be restored via resetFrom(Snapshot). |
static void | shutdownNow(): Clear any cached Scheduler and call dispose on them. |
static Scheduler | single(): The common single instance, a Scheduler that hosts a single-threaded ExecutorService-based worker. |
static Scheduler | single(Scheduler original): Wraps a single Scheduler.Worker from some other Scheduler and provides Scheduler.Worker services on top of it. |
public static final int DEFAULT_POOL_SIZE
Default pool size, initialized by the system property reactor.schedulers.defaultPoolSize and falls back to the number of processors available to the runtime on init.
See Also: Runtime.availableProcessors()

public static final int DEFAULT_BOUNDED_ELASTIC_SIZE
Default maximum size for the global boundedElastic() Scheduler, initialized by the system property reactor.schedulers.defaultBoundedElasticSize and falls back to 10 x number of processors available to the runtime on init.
See Also: Runtime.availableProcessors(), boundedElastic()

public static final int DEFAULT_BOUNDED_ELASTIC_QUEUESIZE
Default maximum number of enqueued tasks PER THREAD for the global boundedElastic() Scheduler, initialized by the system property reactor.schedulers.defaultBoundedElasticQueueSize and falls back to a bound of 100 000 tasks per backing thread.
See Also: boundedElastic()

public static final boolean DEFAULT_BOUNDED_ELASTIC_ON_VIRTUAL_THREADS
Default execution of enqueued tasks on Thread#ofVirtual for the global boundedElastic() Scheduler, initialized by the system property reactor.schedulers.defaultBoundedElasticOnVirtualThreads and falls back to false.
See Also: boundedElastic()
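One way to opt in, as a minimal sketch: set the property before reactor-core initializes its defaults (in practice this is usually passed as a -D JVM argument rather than set programmatically).

```java
public class VirtualThreadBoundedElastic {
    public static void main(String[] args) {
        // Must be set before the Schedulers defaults are initialized, i.e. the
        // equivalent of -Dreactor.schedulers.defaultBoundedElasticOnVirtualThreads=true
        System.setProperty("reactor.schedulers.defaultBoundedElasticOnVirtualThreads", "true");

        // On a Java 21+ runtime, the shared boundedElastic() now runs tasks on virtual threads
        reactor.core.scheduler.Schedulers.boundedElastic()
            .schedule(() -> System.out.println("running on " + Thread.currentThread()));
    }
}
```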
public static Scheduler fromExecutor(Executor executor)
Create a Scheduler which uses a backing Executor to schedule Runnables for async operators.
Tasks scheduled with workers of this Scheduler are not guaranteed to run in FIFO order and strictly non-concurrently.
If FIFO order is desired, use the trampoline parameter of fromExecutor(Executor, boolean).
public static Scheduler fromExecutor(Executor executor, boolean trampoline)
Create a Scheduler which uses a backing Executor to schedule Runnables for async operators.
Trampolining here means tasks submitted in a burst are queued by the Worker itself, which acts as a sole task from the perspective of the ExecutorService, so no reordering (but also no threading).
public static Scheduler fromExecutorService(ExecutorService executorService)
Create a Scheduler which uses a backing ExecutorService to schedule Runnables for async operators.
Prefer using fromExecutorService(ExecutorService, String), especially if you plan on using metrics, as this gives the executor a meaningful identifier.
Parameters:
executorService - an ExecutorService
Returns:
a new Scheduler
public static Scheduler fromExecutorService(ExecutorService executorService, String executorName)
Create a Scheduler which uses a backing ExecutorService to schedule Runnables for async operators.
Parameters:
executorService - an ExecutorService
Returns:
a new Scheduler
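A minimal sketch of wrapping a JDK executor (the pool size and the "legacy-pool" name are arbitrary choices for the example):

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import reactor.core.publisher.Flux;
import reactor.core.scheduler.Scheduler;
import reactor.core.scheduler.Schedulers;

public class FromExecutorServiceExample {
    public static void main(String[] args) {
        ExecutorService pool = Executors.newFixedThreadPool(4);
        // give the wrapper a meaningful identifier, useful if metrics are enabled
        Scheduler scheduler = Schedulers.fromExecutorService(pool, "legacy-pool");

        Flux.just("a", "b", "c")
            .publishOn(scheduler)
            .doOnNext(v -> System.out.println(Thread.currentThread().getName() + " " + v))
            .blockLast(); // wait for completion in this small demo

        // clean up once the pipeline is done (illustrative; a real application
        // would manage these lifecycles more carefully)
        scheduler.dispose();
        pool.shutdown();
    }
}
```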
public static Scheduler boundedElastic()
The common boundedElastic instance, a Scheduler that dynamically creates a bounded number of workers.
Depending on the available environment and the specified configuration, there are two types of implementations for this shared scheduler:

- An ExecutorService-based implementation running on platform Thread instances. Every Worker is ExecutorService-based, reusing Threads once the Workers have been shut down. The underlying daemon threads can be evicted if idle for more than 60 seconds.
- A thread-per-task-based implementation, used when DEFAULT_BOUNDED_ELASTIC_ON_VIRTUAL_THREADS is set to true. Every Worker is based on a custom implementation of the execution mechanism which ensures every submitted task runs on a new VirtualThread instance. This implementation has a shared instance of ScheduledExecutorService used to schedule delayed and periodic tasks such that, when triggered, they are offloaded to a dedicated new VirtualThread instance.

Both implementations share the same configuration:

- The maximum number of concurrent threads is bounded by a cap (by default ten times the number of available CPU cores, see DEFAULT_BOUNDED_ELASTIC_SIZE). Note: consider increasing DEFAULT_BOUNDED_ELASTIC_SIZE with the thread-per-task implementation to run more concurrent VirtualThread instances underneath.
- The maximum number of task submissions that can be enqueued and deferred on each of these backing threads is bounded (by default 100 000 tasks per backing thread, see DEFAULT_BOUNDED_ELASTIC_QUEUESIZE). Past that point, a RejectedExecutionException is thrown.

Threads backing a new Scheduler.Worker are picked from a pool or are created when needed. In the ExecutorService-based implementation, the pool is comprised either of idle or busy threads. When all threads are busy, a best effort attempt is made at picking the thread backing the least number of workers. In the case of the thread-per-task implementation, it always creates new threads up to the specified limit.

Note that if a scheduling mechanism is backing a low amount of workers, but these workers submit a lot of pending tasks, a second worker could end up being backed by the same mechanism and see tasks rejected. The picking of the backing mechanism is also done once and for all at worker creation, so tasks could be delayed due to two workers sharing the same backing mechanism and submitting long-running tasks, despite another backing mechanism becoming idle in the meantime.

Only one instance of this common scheduler will be created on the first call and is cached. The same instance is returned on subsequent calls until it is disposed.

One cannot directly dispose the common instances, as they are cached and shared between callers. They can however be all shut down together, or replaced by a change in Factory.

Returns:
a Scheduler that dynamically creates workers with an upper bound to the number of backing threads and, after that, on the number of enqueued tasks
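A sketch of the typical use, offloading a blocking call so it never runs on non-blocking threads (blockingLookup() is a hypothetical stand-in for blocking I/O):

```java
import java.time.Duration;
import reactor.core.publisher.Mono;
import reactor.core.scheduler.Schedulers;

public class BoundedElasticExample {
    public static void main(String[] args) {
        String result = Mono.fromCallable(BoundedElasticExample::blockingLookup)
            // run the blocking callable on the shared boundedElastic Scheduler
            .subscribeOn(Schedulers.boundedElastic())
            .block(Duration.ofSeconds(5)); // blocking here is fine: main is not a NonBlocking thread

        System.out.println(result);
    }

    // hypothetical blocking call
    private static String blockingLookup() throws InterruptedException {
        Thread.sleep(100);
        return "value";
    }
}
```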
public static Scheduler parallel()
The common parallel instance, a Scheduler that hosts a fixed pool of single-threaded ExecutorService-based workers and is suited for parallel work.
Only one instance of this common scheduler will be created on the first call and is cached. The same instance is returned on subsequent calls until it is disposed.
One cannot directly dispose the common instances, as they are cached and shared between callers. They can however be all shut down together, or replaced by a change in Factory.
Returns:
a Scheduler that hosts a fixed pool of single-threaded ExecutorService-based workers and is suited for parallel work
public static Scheduler immediate()
Executes tasks immediately instead of scheduling them.
As a consequence tasks run on the thread that submitted them (e.g. the thread on which an operator is currently processing its onNext/onComplete/onError signals). This Scheduler is typically used as a "null object" for APIs that require a Scheduler but where one doesn't want to change threads.
Returns:
a Scheduler that executes tasks immediately instead of scheduling them
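A minimal sketch; the point is that no thread hop happens, so everything stays on the calling thread:

```java
import reactor.core.publisher.Flux;
import reactor.core.scheduler.Schedulers;

public class ImmediateExample {
    public static void main(String[] args) {
        Flux.range(1, 3)
            // publishOn requires a Scheduler; immediate() keeps everything on the calling thread
            .publishOn(Schedulers.immediate())
            .subscribe(v -> System.out.println(Thread.currentThread().getName() + " " + v));
        // prints "main 1", "main 2", "main 3" since no thread switch occurs
    }
}
```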
public static Scheduler newBoundedElastic(int threadCap, int queuedTaskCap, String name)
Scheduler that dynamically creates a bounded number of ExecutorService-based Workers, reusing them once the Workers have been shut down. The underlying (user) threads can be evicted if idle for more than 60 seconds.
The maximum number of created threads is bounded by the provided threadCap.
The maximum number of task submissions that can be enqueued and deferred on each of these backing threads is bounded by the provided queuedTaskCap. Past that point, a RejectedExecutionException is thrown.
By order of preference, threads backing a new Scheduler.Worker are picked from the idle pool, created anew or reused from the busy pool. In the latter case, a best effort attempt at picking the thread backing the least amount of workers is made.
Note that if a thread is backing a low amount of workers, but these workers submit a lot of pending tasks, a second worker could end up being backed by the same thread and see tasks rejected. The picking of the backing thread is also done once and for all at worker creation, so tasks could be delayed due to two workers sharing the same backing thread and submitting long-running tasks, despite another backing thread becoming idle in the meantime.
Threads backing this scheduler are user threads, so they will prevent the JVM from exiting until their worker has been disposed AND they've been evicted by TTL, or the whole scheduler has been disposed.
Please note, this implementation is not designed to run tasks on VirtualThreads. Please see Schedulers.Factory.newThreadPerTaskBoundedElastic(int, int, ThreadFactory) if you need a VirtualThread-compatible scheduler implementation.
Parameters:
threadCap - maximum number of underlying threads to create
queuedTaskCap - maximum number of tasks to enqueue when no more threads can be created. Can be Integer.MAX_VALUE for unbounded enqueueing.
name - Thread prefix
Returns:
a new Scheduler that dynamically creates workers with an upper bound to the number of backing threads and after that on the number of enqueued tasks, that reuses threads and evicts idle ones
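A minimal sketch of a dedicated instance (the caps and the "io" name are arbitrary example values):

```java
import reactor.core.publisher.Mono;
import reactor.core.scheduler.Scheduler;
import reactor.core.scheduler.Schedulers;

public class NewBoundedElasticExample {
    public static void main(String[] args) {
        // at most 20 threads, at most 1000 queued tasks per thread, threads prefixed "io"
        Scheduler io = Schedulers.newBoundedElastic(20, 1000, "io");
        try {
            String body = Mono.fromCallable(() -> {
                    Thread.sleep(50);          // stand-in for blocking I/O
                    return "response";
                })
                .subscribeOn(io)               // run the callable on the dedicated scheduler
                .block();
            System.out.println(body);
        } finally {
            io.dispose();                      // unlike boundedElastic(), a dedicated instance must be disposed
        }
    }
}
```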
public static Scheduler newBoundedElastic(int threadCap, int queuedTaskCap, String name, int ttlSeconds)
Scheduler that dynamically creates a bounded number of ExecutorService-based Workers, reusing them once the Workers have been shut down. The underlying (user) threads can be evicted if idle for more than the provided ttlSeconds.
The maximum number of created threads is bounded by the provided threadCap.
The maximum number of task submissions that can be enqueued and deferred on each of these backing threads is bounded by the provided queuedTaskCap. Past that point, a RejectedExecutionException is thrown.
By order of preference, threads backing a new Scheduler.Worker are picked from the idle pool, created anew or reused from the busy pool. In the latter case, a best effort attempt at picking the thread backing the least amount of workers is made.
Note that if a thread is backing a low amount of workers, but these workers submit a lot of pending tasks, a second worker could end up being backed by the same thread and see tasks rejected. The picking of the backing thread is also done once and for all at worker creation, so tasks could be delayed due to two workers sharing the same backing thread and submitting long-running tasks, despite another backing thread becoming idle in the meantime.
Threads backing this scheduler are user threads, so they will prevent the JVM from exiting until their worker has been disposed AND they've been evicted by TTL, or the whole scheduler has been disposed.
Please note, this implementation is not designed to run tasks on VirtualThreads. Please see Schedulers.Factory.newThreadPerTaskBoundedElastic(int, int, ThreadFactory) if you need a VirtualThread-compatible scheduler implementation.
Parameters:
threadCap - maximum number of underlying threads to create
queuedTaskCap - maximum number of tasks to enqueue when no more threads can be created. Can be Integer.MAX_VALUE for unbounded enqueueing.
name - Thread prefix
ttlSeconds - Time-to-live for an idle Scheduler.Worker
Returns:
a new Scheduler that dynamically creates workers with an upper bound to the number of backing threads and after that on the number of enqueued tasks, that reuses threads and evicts idle ones
public static Scheduler newBoundedElastic(int threadCap, int queuedTaskCap, String name, int ttlSeconds, boolean daemon)
Scheduler that dynamically creates a bounded number of ExecutorService-based Workers, reusing them once the Workers have been shut down. The underlying (user or daemon) threads can be evicted if idle for more than the provided ttlSeconds.
The maximum number of created threads is bounded by the provided threadCap.
The maximum number of task submissions that can be enqueued and deferred on each of these backing threads is bounded by the provided queuedTaskCap. Past that point, a RejectedExecutionException is thrown.
By order of preference, threads backing a new Scheduler.Worker are picked from the idle pool, created anew or reused from the busy pool. In the latter case, a best effort attempt at picking the thread backing the least amount of workers is made.
Note that if a thread is backing a low amount of workers, but these workers submit a lot of pending tasks, a second worker could end up being backed by the same thread and see tasks rejected. The picking of the backing thread is also done once and for all at worker creation, so tasks could be delayed due to two workers sharing the same backing thread and submitting long-running tasks, despite another backing thread becoming idle in the meantime.
Depending on the daemon parameter, threads backing this scheduler can be user threads or daemon threads. Note that user threads will prevent the JVM from exiting until their worker has been disposed AND they've been evicted by TTL, or the whole scheduler has been disposed.
Please note, this implementation is not designed to run tasks on VirtualThreads. Please see Schedulers.Factory.newThreadPerTaskBoundedElastic(int, int, ThreadFactory) if you need a VirtualThread-compatible scheduler implementation.
Parameters:
threadCap - maximum number of underlying threads to create
queuedTaskCap - maximum number of tasks to enqueue when no more threads can be created. Can be Integer.MAX_VALUE for unbounded enqueueing.
name - Thread prefix
ttlSeconds - Time-to-live for an idle Scheduler.Worker
daemon - are backing threads daemon threads
Returns:
a new Scheduler that dynamically creates workers with an upper bound to the number of backing threads and after that on the number of enqueued tasks, that reuses threads and evicts idle ones
public static Scheduler newBoundedElastic(int threadCap, int queuedTaskCap, ThreadFactory threadFactory, int ttlSeconds)
Scheduler that dynamically creates a bounded number of ExecutorService-based Workers, reusing them once the Workers have been shut down. The underlying (user) threads can be evicted if idle for more than the provided ttlSeconds.
The maximum number of created threads is bounded by the provided threadCap.
The maximum number of task submissions that can be enqueued and deferred on each of these backing threads is bounded by the provided queuedTaskCap. Past that point, a RejectedExecutionException is thrown.
By order of preference, threads backing a new Scheduler.Worker are picked from the idle pool, created anew or reused from the busy pool. In the latter case, a best effort attempt at picking the thread backing the least amount of workers is made.
Note that if a thread is backing a low amount of workers, but these workers submit a lot of pending tasks, a second worker could end up being backed by the same thread and see tasks rejected. The picking of the backing thread is also done once and for all at worker creation, so tasks could be delayed due to two workers sharing the same backing thread and submitting long-running tasks, despite another backing thread becoming idle in the meantime.
Threads backing this scheduler are created by the provided ThreadFactory, which can decide whether to create user threads or daemon threads. Note that user threads will prevent the JVM from exiting until their worker has been disposed AND they've been evicted by TTL, or the whole scheduler has been disposed.
Please note, this implementation is not designed to run tasks on VirtualThreads. Please see Schedulers.Factory.newThreadPerTaskBoundedElastic(int, int, ThreadFactory) if you need a VirtualThread-compatible scheduler implementation.
Parameters:
threadCap - maximum number of underlying threads to create
queuedTaskCap - maximum number of tasks to enqueue when no more threads can be created. Can be Integer.MAX_VALUE for unbounded enqueueing.
threadFactory - a ThreadFactory to use for each thread's initialization
ttlSeconds - Time-to-live for an idle Scheduler.Worker
Returns:
a new Scheduler that dynamically creates workers with an upper bound to the number of backing threads and after that on the number of enqueued tasks, that reuses threads and evicts idle ones
public static Scheduler newParallel(String name)
Scheduler that hosts a fixed pool of single-threaded ExecutorService-based workers and is suited for parallel work. This type of Scheduler detects and rejects usage of blocking Reactor APIs.
Parameters:
name - Thread prefix
Returns:
a new Scheduler that hosts a fixed pool of single-threaded ExecutorService-based workers and is suited for parallel work
public static Scheduler newParallel(String name, int parallelism)
Scheduler that hosts a fixed pool of single-threaded ExecutorService-based workers and is suited for parallel work. This type of Scheduler detects and rejects usage of blocking Reactor APIs.
Parameters:
name - Thread prefix
parallelism - Number of pooled workers.
Returns:
a new Scheduler that hosts a fixed pool of single-threaded ExecutorService-based workers and is suited for parallel work
public static Scheduler newParallel(String name, int parallelism, boolean daemon)
Scheduler that hosts a fixed pool of single-threaded ExecutorService-based workers and is suited for parallel work. This type of Scheduler detects and rejects usage of blocking Reactor APIs.
Parameters:
name - Thread prefix
parallelism - Number of pooled workers.
daemon - false if the Scheduler requires an explicit Scheduler.dispose() to exit the VM.
Returns:
a new Scheduler that hosts a fixed pool of single-threaded ExecutorService-based workers and is suited for parallel work
public static Scheduler newParallel(int parallelism, ThreadFactory threadFactory)
Scheduler that hosts a fixed pool of single-threaded ExecutorService-based workers and is suited for parallel work.
Parameters:
parallelism - Number of pooled workers.
threadFactory - a ThreadFactory to use for the fixed, initialized number of Threads
Returns:
a new Scheduler that hosts a fixed pool of single-threaded ExecutorService-based workers and is suited for parallel work
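A minimal sketch of a dedicated parallel instance with a custom ThreadFactory (the pool size, thread naming and daemon choice are example values, not defaults):

```java
import java.util.concurrent.ThreadFactory;
import java.util.concurrent.atomic.AtomicInteger;
import reactor.core.publisher.Flux;
import reactor.core.scheduler.Scheduler;
import reactor.core.scheduler.Schedulers;

public class NewParallelExample {
    public static void main(String[] args) throws InterruptedException {
        AtomicInteger counter = new AtomicInteger();
        // custom ThreadFactory controlling naming and daemon status
        ThreadFactory factory = runnable -> {
            Thread t = new Thread(runnable, "compute-" + counter.incrementAndGet());
            t.setDaemon(true);
            return t;
        };
        Scheduler compute = Schedulers.newParallel(4, factory);

        Flux.range(1, 8)
            .parallel(4)
            .runOn(compute)                    // fan work out over the 4 workers
            .map(i -> i * i)
            .sequential()
            .doOnTerminate(compute::dispose)   // dedicated instances must be disposed
            .subscribe(v -> System.out.println(Thread.currentThread().getName() + " " + v));

        Thread.sleep(500);                     // let the daemon threads finish in this demo
    }
}
```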
public static Scheduler newSingle(String name)
Scheduler that hosts a single-threaded ExecutorService-based worker. This type of Scheduler detects and rejects usage of blocking Reactor APIs.
Parameters:
name - Component and thread name prefix
Returns:
a new Scheduler that hosts a single-threaded ExecutorService-based worker
public static Scheduler newSingle(String name, boolean daemon)
Scheduler that hosts a single-threaded ExecutorService-based worker. This type of Scheduler detects and rejects usage of blocking Reactor APIs.
Parameters:
name - Component and thread name prefix
daemon - false if the Scheduler requires an explicit Scheduler.dispose() to exit the VM.
Returns:
a new Scheduler that hosts a single-threaded ExecutorService-based worker
public static Scheduler newSingle(ThreadFactory threadFactory)
Scheduler that hosts a single-threaded ExecutorService-based worker.
Parameters:
threadFactory - a ThreadFactory to use for the unique thread of the Scheduler
Returns:
a new Scheduler that hosts a single-threaded ExecutorService-based worker
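A minimal sketch using a dedicated single-threaded instance to serialize timer-like work (the "timer" name is an arbitrary example):

```java
import java.time.Duration;
import reactor.core.publisher.Flux;
import reactor.core.scheduler.Scheduler;
import reactor.core.scheduler.Schedulers;

public class NewSingleExample {
    public static void main(String[] args) {
        Scheduler timer = Schedulers.newSingle("timer");
        try {
            Flux.interval(Duration.ofMillis(100), timer)  // ticks are emitted on the single worker thread
                .take(3)
                .doOnNext(tick -> System.out.println(Thread.currentThread().getName() + " tick " + tick))
                .blockLast();                              // wait on main, which is allowed to block
        } finally {
            timer.dispose();                               // dedicated instances must be disposed
        }
    }
}
```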
public static void onHandleError(BiConsumer<Thread,? super Throwable> subHook)
Define a hook anonymous part that is executed alongside keyed parts when a Scheduler has handled an error. Note that it is executed after the error has been passed to the thread uncaughtErrorHandler, which is not the case when a fatal error occurs (see Exceptions.throwIfJvmFatal(Throwable)).
This variant uses an internal private key, which allows the method to be additive with onHandleError(String, BiConsumer). Prefer adding and removing handler parts for keys that you own via onHandleError(String, BiConsumer) nonetheless.
Parameters:
subHook - the new BiConsumer to set as the hook's anonymous part.
See Also:
onHandleError(String, BiConsumer)
public static void onHandleError(String key, BiConsumer<Thread,? super Throwable> subHook)
Define a keyed hook part that is executed alongside other parts when a Scheduler has handled an error. Note that it is executed after the error has been passed to the thread uncaughtErrorHandler, which is not the case when a fatal error occurs (see Exceptions.throwIfJvmFatal(Throwable)).
Calling this method twice with the same key replaces the old hook part of the same key. Calling this method twice with two different keys is otherwise additive. Note that onHandleError(BiConsumer) also defines an anonymous part which effectively uses a private internal key, making it also additive with this method.
Parameters:
key - the String key identifying the hook part to set/replace.
subHook - the new hook part to set for the given key.
public static boolean isInNonBlockingThread()
Check if calling a Reactor blocking API in the current Thread is forbidden or not. This method returns true and will forbid the Reactor blocking API if any of the following conditions are met:
- the current Thread implements NonBlocking; or
- any of the Predicates registered via registerNonBlockingThreadPredicate(Predicate) returns true.
Returns:
true if blocking is forbidden in this thread, false otherwise
public static boolean isNonBlockingThread(Thread t)
Check if calling a Reactor blocking API in the given Thread is forbidden or not. This method returns true and will forbid the Reactor blocking API if any of the following conditions are met:
- the given Thread implements NonBlocking; or
- any of the Predicates registered via registerNonBlockingThreadPredicate(Predicate) returns true.
Returns:
true if blocking is forbidden in that thread, false otherwise
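A minimal sketch of guarding a blocking call with this check (fetchValue is a hypothetical helper, not part of the API):

```java
import reactor.core.publisher.Mono;
import reactor.core.scheduler.Schedulers;

public class NonBlockingCheckExample {
    // Hypothetical helper: only block when the current thread allows it.
    static String fetchValue(Mono<String> source) {
        if (Schedulers.isInNonBlockingThread()) {
            throw new IllegalStateException("blocking fetch called from a non-blocking thread");
        }
        return source.block(); // safe: we verified this thread may block
    }

    public static void main(String[] args) {
        System.out.println(fetchValue(Mono.just("ok"))); // the main thread is allowed to block
    }
}
```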
public static void registerNonBlockingThreadPredicate(Predicate<Thread> predicate)

public static void resetNonBlockingThreadPredicate()
Unregisters all the Predicates registered so far via registerNonBlockingThreadPredicate(Predicate).
@Deprecated
public static void enableMetrics()
Deprecated. Prefer using Micrometer#timedScheduler from the reactor-core-micrometer module. To be removed at the earliest in 3.6.0.
If Micrometer is available, set up a decorator that will instrument any ExecutorService that backs a Scheduler. No-op if Micrometer isn't available.
The MeterRegistry used by reactor can be configured via Metrics.MicrometerConfiguration.useRegistry(MeterRegistry) prior to using this method, the default being Metrics.globalRegistry.
@Deprecated
public static void disableMetrics()
Deprecated. Prefer using Micrometer#timedScheduler from the reactor-core-micrometer module. To be removed at the earliest in 3.6.0.
If enableMetrics() has been previously called, removes the decorator. No-op if enableMetrics() hasn't been called.

public static void resetFactory()
Re-apply the default factory to Schedulers.
public static Schedulers.Snapshot setFactoryWithSnapshot(Schedulers.Factory newFactory)
Replace Schedulers factories (newParallel, newSingle and newBoundedElastic). Unlike setFactory(Factory), doesn't shut down previous Schedulers but captures them in a Schedulers.Snapshot that can be later restored via resetFrom(Snapshot).
This method should be called safely and with caution, typically on app startup.
Parameters:
newFactory - an arbitrary Schedulers.Factory instance
Returns:
a Schedulers.Snapshot representing a restorable snapshot of Schedulers
public static void resetFrom(@Nullable Schedulers.Snapshot snapshot)
Replace the current Factory and shared Schedulers with the ones saved in a previously captured snapshot.
Passing null re-applies the default factory.
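A minimal sketch of swapping factories temporarily (e.g. around a test) and restoring the previous state afterwards; the anonymous factory here simply relies on the interface's default methods, so it behaves like the defaults:

```java
import reactor.core.scheduler.Schedulers;

public class FactorySnapshotExample {
    public static void main(String[] args) {
        // Install a replacement factory while capturing the previous factory and shared instances.
        Schedulers.Snapshot previous =
            Schedulers.setFactoryWithSnapshot(new Schedulers.Factory() { });

        try {
            // ... code under test runs against the replacement Schedulers ...
            Schedulers.parallel().schedule(() -> System.out.println("replacement factory in effect"));
        } finally {
            // Put back the previously captured factory and shared instances.
            Schedulers.resetFrom(previous);
        }
    }
}
```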
public static void resetOnHandleError()
Reset the onHandleError(BiConsumer) hook to the default no-op behavior, erasing all sub-hooks that might have been individually added via onHandleError(String, BiConsumer) or the whole hook set via onHandleError(BiConsumer).
See Also:
resetOnHandleError(String)
public static void resetOnHandleError(String key)
Reset a specific onHandleError hook part keyed to the provided String, removing that sub-hook if it has previously been defined via onHandleError(String, BiConsumer).

public static void setFactory(Schedulers.Factory factoryInstance)
Replace Schedulers factories (newParallel, newSingle and newBoundedElastic). Also shut down Schedulers from the cached factories (like single()) in order to also use these replacements, re-creating the shared schedulers from the new factory upon next use.
This method should be called safely and with caution, typically on app startup.
Parameters:
factoryInstance - an arbitrary Schedulers.Factory instance.
public static boolean addExecutorServiceDecorator(String key, BiFunction<Scheduler,ScheduledExecutorService,ScheduledExecutorService> decorator)
Set up an additional ScheduledExecutorService decorator for a given key only if that key is not already present.
The decorator is a BiFunction taking the Scheduler and the backing ScheduledExecutorService as second argument. It returns the decorated ScheduledExecutorService.
Parameters:
key - the key under which to set up the decorator
decorator - the executor service decorator to add, if key not already present.
See Also:
setExecutorServiceDecorator(String, BiFunction), removeExecutorServiceDecorator(String), onScheduleHook(String, Function)
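A minimal sketch of a decorator that merely records every backing executor Reactor creates (the "my-registry" key and the map-based registry are illustrative; an instrumentation decorator would typically return a wrapping ScheduledExecutorService instead):

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ScheduledExecutorService;
import reactor.core.scheduler.Schedulers;

public class ExecutorDecoratorExample {
    // illustrative registry keeping track of backing executors
    static final Map<String, ScheduledExecutorService> EXECUTORS = new ConcurrentHashMap<>();

    public static void main(String[] args) {
        boolean added = Schedulers.addExecutorServiceDecorator("my-registry",
            (scheduler, executorService) -> {
                // record the backing executor; returning it unchanged is a valid "decoration"
                EXECUTORS.put(scheduler.toString(), executorService);
                return executorService;
            });
        System.out.println("decorator registered: " + added);

        Schedulers.newSingle("decorated").schedule(() -> { });
        System.out.println("tracked executors: " + EXECUTORS.keySet());

        // remove the decorator when no longer needed (e.g. on shutdown)
        Schedulers.removeExecutorServiceDecorator("my-registry");
    }
}
```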
public static void setExecutorServiceDecorator(String key, BiFunction<Scheduler,ScheduledExecutorService,ScheduledExecutorService> decorator)
Set up an additional ScheduledExecutorService decorator for a given key, even if that key is already present.
The decorator is a BiFunction taking the Scheduler and the backing ScheduledExecutorService as second argument. It returns the decorated ScheduledExecutorService.
Parameters:
key - the key under which to set up the decorator
decorator - the executor service decorator to set, even if the key is already present.
See Also:
addExecutorServiceDecorator(String, BiFunction), removeExecutorServiceDecorator(String), onScheduleHook(String, Function)
public static BiFunction<Scheduler,ScheduledExecutorService,ScheduledExecutorService> removeExecutorServiceDecorator(String key)
Remove an existing ScheduledExecutorService decorator if it has been set up via addExecutorServiceDecorator(String, BiFunction).
In case the decorator implements Disposable, it is also disposed.
Parameters:
key - the key for the executor service decorator to remove
See Also:
addExecutorServiceDecorator(String, BiFunction), setExecutorServiceDecorator(String, BiFunction)
public static ScheduledExecutorService decorateExecutorService(Scheduler owner, ScheduledExecutorService original)
This method is aimed at Scheduler implementors, enabling custom implementations that are backed by a ScheduledExecutorService to also have said executors decorated (i.e. for instrumentation purposes).
It applies the decorators added via addExecutorServiceDecorator(String, BiFunction), so it shouldn't be added as a decorator. Note also that decorators are not guaranteed to be idempotent, so this method should be called only once per executor.
Parameters:
owner - a Scheduler that owns the ScheduledExecutorService
original - the ScheduledExecutorService that the Scheduler wants to use originally
Returns:
the decorated ScheduledExecutorService, or the original if no decorator is set up
See Also:
addExecutorServiceDecorator(String, BiFunction), removeExecutorServiceDecorator(String)
public static void onScheduleHook(String key, Function<Runnable,Runnable> decorator)
Add or replace a named scheduling decorator. With subsequent calls to this method, the onScheduleHook hook can be a composite of several sub-hooks, each with a different key.
The sub-hook is a Function taking the scheduled Runnable. It returns the decorated Runnable.
Parameters:
key - the key under which to set up the onScheduleHook sub-hook
decorator - the Runnable decorator to add (or replace, if key is already present)
See Also:
resetOnScheduleHook(String), resetOnScheduleHooks()
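A minimal sketch of a sub-hook that propagates a ThreadLocal value across scheduling boundaries (the "context-propagation" key, the ThreadLocal and the propagation policy are illustrative):

```java
import reactor.core.publisher.Mono;
import reactor.core.scheduler.Schedulers;

public class OnScheduleHookExample {
    // illustrative per-thread context, e.g. a correlation id
    static final ThreadLocal<String> CONTEXT = new ThreadLocal<>();

    public static void main(String[] args) {
        // decorate every scheduled Runnable so it carries the submitter's context value
        Schedulers.onScheduleHook("context-propagation", task -> {
            String captured = CONTEXT.get();      // captured on the submitting thread
            return () -> {
                String previous = CONTEXT.get();
                CONTEXT.set(captured);            // restored on the executing thread
                try {
                    task.run();
                } finally {
                    CONTEXT.set(previous);
                }
            };
        });

        CONTEXT.set("request-42");
        Mono.fromRunnable(() ->
                System.out.println("context on worker thread: " + CONTEXT.get()))
            .subscribeOn(Schedulers.boundedElastic())
            .block();

        Schedulers.resetOnScheduleHook("context-propagation");
    }
}
```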
public static void resetOnScheduleHook(String key)
Reset a specific onScheduleHook sub-hook if it has been set up via onScheduleHook(String, Function).
Parameters:
key - the key for the onScheduleHook sub-hook to remove
See Also:
onScheduleHook(String, Function), resetOnScheduleHooks()
public static void resetOnScheduleHooks()
Remove all onScheduleHook sub-hooks.

public static Runnable onSchedule(Runnable runnable)
Applies the hooks registered with onScheduleHook(String, Function).

public static void shutdownNow()
Clear any cached Scheduler and call dispose on them.
public static Scheduler single()
The common single instance, a Scheduler that hosts a single-threaded ExecutorService-based worker.
Only one instance of this common scheduler will be created on the first call and is cached. The same instance is returned on subsequent calls until it is disposed.
One cannot directly dispose the common instances, as they are cached and shared between callers. They can however be all shut down together, or replaced by a change in Factory.
Returns:
a Scheduler that hosts a single-threaded ExecutorService-based worker
public static Scheduler single(Scheduler original)
Wraps a single Scheduler.Worker from some other Scheduler and provides Scheduler.Worker services on top of it. Unlike with other factory methods in this class, the delegate is assumed to be initialized and won't be implicitly initialized by this method.
Use Scheduler.dispose() to release the wrapped worker.
Parameters:
original - a Scheduler to call upon to get the single Scheduler.Worker
Returns:
a wrapping Scheduler consistently returning a same worker from a source Scheduler