
WorkerExecutor should accept custom function to create ExecutorService #1539

Closed
testn opened this issue Jul 28, 2016 · 19 comments

@testn
Contributor

testn commented Jul 28, 2016

WorkerExecutor should accept a custom function to create its ExecutorService. This would make the thread pool policy customizable. For example, we might want a policy that rejects a request when the queue is full, rather than continuing to enqueue it.
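As a sketch of the requested policy, a plain j.u.c.ThreadPoolExecutor can be given a bounded queue and a rejection handler so a full queue fails fast instead of growing without limit. The class and method names below are illustrative, not part of Vert.x:

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.RejectedExecutionException;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class RejectingPoolDemo {
    // Build a pool that rejects work once its bounded queue is full,
    // instead of queueing without limit (the behaviour requested above).
    public static ThreadPoolExecutor boundedPool(int threads, int queueCapacity) {
        return new ThreadPoolExecutor(
                threads, threads,
                0L, TimeUnit.MILLISECONDS,
                new ArrayBlockingQueue<>(queueCapacity),
                new ThreadPoolExecutor.AbortPolicy()); // throw instead of enqueue
    }

    // Demonstrates the rejection: one busy thread, a queue of one,
    // so the third task is refused immediately.
    public static boolean demoRejection() {
        ThreadPoolExecutor pool = boundedPool(1, 1);
        Object lock = new Object();
        synchronized (lock) {
            pool.execute(() -> { synchronized (lock) { } }); // blocks the only worker
            pool.execute(() -> { });                         // fills the queue
            try {
                pool.execute(() -> { });                     // queue full -> rejected
                return false;
            } catch (RejectedExecutionException expected) {
                return true;
            } finally {
                pool.shutdownNow();
            }
        }
    }
}
```

AbortPolicy is in fact the ThreadPoolExecutor default; the point is that the request here is for Vert.x to let callers choose these settings.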

@vietj
Member

vietj commented Jul 30, 2016

This can be quite dangerous, as one could create executors with the wrong settings.

When the pool is busy, it is normal for tasks to queue there, so this sounds more like a limit on the queue itself than on the pool size.

Besides this particular use case, do you see other use cases?

@testn
Contributor Author

testn commented Jul 31, 2016

In a system with a heavy workload, you would rather fail immediately when queuing starts to build up than keep queueing more and more; that leads to higher latency and eventually to an "out of memory" problem. That is probably the main use case. However, I think it is quite reasonable to give more control to advanced users: for example, they could set the thread name or add logic for thread pool tracking.

@vietj
Member

vietj commented Jul 31, 2016

How about providing this as an SPI, like the other Vert.x pluggable components such as BufferFactory, FutureFactory, etc.?
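A rough illustration of what such a pluggable shape could look like, using the standard java.util.ServiceLoader mechanism that Vert.x SPIs rely on. The ExecutorServiceProvider interface here is hypothetical, invented for this sketch, not an actual Vert.x SPI:

```java
import java.util.ServiceLoader;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.ThreadFactory;

public class SpiSketch {
    // Hypothetical SPI shape, analogous to BufferFactory/FutureFactory:
    // implementations would be registered under META-INF/services.
    public interface ExecutorServiceProvider {
        ExecutorService createExecutor(ThreadFactory threadFactory, int poolSize);
    }

    // Service-load an implementation, falling back to the default fixed pool
    // when no provider is registered on the classpath.
    public static ExecutorService load(ThreadFactory tf, int size) {
        for (ExecutorServiceProvider p : ServiceLoader.load(ExecutorServiceProvider.class)) {
            return p.createExecutor(tf, size); // first registered provider wins
        }
        return Executors.newFixedThreadPool(size, tf);
    }
}
```

With no provider registered, `load` simply returns the stock fixed pool, so existing behaviour is unchanged.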

@luengnat

luengnat commented Aug 9, 2016

That sounds perfect!

@lukyer

lukyer commented Nov 22, 2016

Not sure if there is a workaround, but I would like to use a custom Executor so I can override some methods (I need to clean ThreadLocal variables before returning a thread to the thread pool).

Or is there another way to reset ThreadLocal values between different worker executions?
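One workaround along these lines, independent of Vert.x, is to override ThreadPoolExecutor.afterExecute so ThreadLocal state is wiped before the thread goes back to the pool. The REQUEST_ID variable below is a made-up stand-in for whatever ThreadLocal the application uses:

```java
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class CleaningPoolDemo {
    // Hypothetical per-task state the application stores in a ThreadLocal.
    static final ThreadLocal<String> REQUEST_ID = new ThreadLocal<>();

    // A pool that wipes the ThreadLocal after every task, so state from one
    // worker execution cannot leak into the next task on the same thread.
    static class CleaningPool extends ThreadPoolExecutor {
        CleaningPool(int threads) {
            super(threads, threads, 0L, TimeUnit.MILLISECONDS, new LinkedBlockingQueue<>());
        }
        @Override
        protected void afterExecute(Runnable r, Throwable t) {
            REQUEST_ID.remove(); // reset before the thread returns to the pool
            super.afterExecute(r, t);
        }
    }

    // Two tasks on a single-thread pool: the second must not see the
    // ThreadLocal value the first one set.
    public static String leakCheck() {
        ThreadPoolExecutor pool = new CleaningPool(1);
        try {
            pool.submit(() -> REQUEST_ID.set("req-1")).get();
            String seen = pool.submit(() -> REQUEST_ID.get()).get();
            return seen == null ? "clean" : seen;
        } catch (Exception e) {
            return "error: " + e;
        } finally {
            pool.shutdown();
        }
    }
}
```

This only works if you control the executor, which is exactly why the issue asks for it to be pluggable.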

@vietj
Member

vietj commented Nov 22, 2016

There is an upcoming effort to add some kind of hooks for distributed tracing; I believe they would allow you to do such a thing.

@hutchig
Contributor

hutchig commented Nov 7, 2019

I am currently looking at this issue (in terms of an SPI) for the purposes of embedding Vert.x in other environments. For me that would be the Open Liberty runtime, and I would implement the SPI on top of the Liberty ManagedExecutor etc. I am about to look at how Quarkus does this, and then go on to learn what Vert.x would need from such an SPI.

@vietj
Member

vietj commented Nov 7, 2019

@hutchig you might as well look at #3182

@hutchig
Contributor

hutchig commented May 20, 2020

Hello Julien @vietj, I am coming back to look at this topic again. Reviewing #1539 #1650 #1937 #3182 #3184, I think this issue is the current open placeholder...

An idea occurred to me for a different mechanism that might be used when Vert.x is embedded in another application server, to help with contexts when one might not wish to replace the ExecutorService or ThreadFactory.

I was interested, when looking at the SPI around metrics, in PoolMetrics in particular, which has callbacks for queued, rejected, begin, and end of task.

Metrics providers like dropwizard seem to be the place to start from for me.

(The use case context I am looking at is the @Blocking annotation in SmallRye Reactive Messaging and Jee/Microprofile contexts.)

I was looking at hacking something that abstracts over the same sort of callbacks {queued, rejected, begin, end of task} but is a bit more abstract than being tightly coupled to just metrics. My goal is to allow a container to 'do what it needs' regarding contexts without replacing the Vert.x thread factory or executor. Would it be sensible to carry on along these lines and see where I get to, until I have a fork branch?
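A minimal sketch of such a metrics-agnostic callback layer, decorating a plain java.util.concurrent.Executor. The TaskListener interface is invented here for illustration (modelled on the queued/rejected/begin/end events of the PoolMetrics SPI, but not that SPI):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.Executor;

public class CallbackPoolSketch {
    // Hypothetical container callbacks for a task's lifecycle.
    public interface TaskListener {
        void queued();
        void begin();
        void end(boolean succeeded);
    }

    // Decorate an executor so the container observes each task's lifecycle
    // without replacing the underlying thread factory or executor.
    public static Executor observed(Executor delegate, TaskListener listener) {
        return task -> {
            listener.queued();
            delegate.execute(() -> {
                listener.begin();
                boolean ok = false;
                try {
                    task.run();
                    ok = true;
                } finally {
                    listener.end(ok);
                }
            });
        };
    }

    // Record the callback order using a same-thread executor for determinism.
    public static List<String> demo() {
        List<String> events = new ArrayList<>();
        TaskListener l = new TaskListener() {
            public void queued() { events.add("queued"); }
            public void begin() { events.add("begin"); }
            public void end(boolean ok) { events.add("end:" + ok); }
        };
        observed(Runnable::run, l).execute(() -> { });
        return events;
    }
}
```

A container could hang context capture/apply logic off `begin` and `end` without Vert.x knowing anything about it.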

It would be along the lines of a service-loaded SPI, but I see in #3182 (comment) you mention the user API being via a builder style... I reckon that could go on top of, or later replace, a service-loaded SPI.

It would seem sensible/possible to apply these callbacks to just a subset of executors - starting with some subset of named worker pools. What do you think would be the best way of subsetting/differentiating where the callbacks would be called? The aim is to impact the current Vert.x design/performance as little as possible.

As the callback mechanism is different from an ExecutorService factory, can I split it off into another issue? (It may be that I have to try to progress one or both paths :-) )

I am also having a good look at these hooks as possibly something to use.

@vietj
Member

vietj commented May 20, 2020

@hutchig actually Vert.x has the io.vertx.core.Context object it uses for the operations you describe, and in Vert.x 4 we could provide a hook for embedders to supply a custom Context implementation that would allow you to do that.

A context is responsible for handling concurrency and executing tasks, among other things. Can you verify whether such a pluggable context would be a good fit?

@hutchig
Contributor

hutchig commented May 22, 2020

I am having a thorough look/think about your suggestion... one of the things that was attractive to me about using a mechanism similar to the metrics callbacks was that the callbacks occurred (for example, into dropwizard) without the application having to make any code changes at all.

For us, the analogous situation is where we package up the SmallRye Reactive Messaging implementation (as the Vert.x client), as well as its Vert.x dependency, in our OSGi bundle and can 'get our hooks in', as it were, via the way we package them (because you have provided mechanisms that allow this, of course). Having SmallRye Reactive Messaging not need any knowledge of being in one particular environment or another seemed 'good'. The SmallRye codebase would use the Vert.x APIs generically (optimised for Quarkus or whatever), but the Liberty code would not have to use Vert.x APIs (apart from implementing an SPI and making it service-loadable) in 'formation' with SmallRye - we would get the callbacks as and when needed.

I will dig into the existing SmallRye and Vert.x code more. Did you have any further thoughts on how a custom Context (for example, one representing a Liberty/JBoss/JEE 'black box') could come from the environment rather than being specified by the direct user of the Vert.x APIs (in my use case, SmallRye)? I can see how it fits the current Context.executeBlocking very well, but I am still exploring how it could be effected via repackaging (other than having SmallRye (service-)load custom Context implementors that can be mapped to from a worker-pool name).

Did you have a preferred issue/forum to progress custom Context interface and hook?

@vietj
Member

vietj commented May 22, 2020

@hutchig it is actually not clear to me yet what exactly you want to achieve; can you provide a few concrete use cases of what you want to achieve and how Vert.x relates to them?

@hutchig
Contributor

hutchig commented May 26, 2020

Hi Julien @vietj, apologies for not replying yesterday due to a holiday here in the UK. I do have a very concrete use case that it would be helpful to write up here:

In OpenLiberty we use the SmallRye Reactive Messaging implementation to provide the implementation for our MicroProfile Reactive Messaging. Version 2.0 and above of this implementation has support for @Blocking, described here: https://smallrye.io/smallrye-reactive-messaging/smallrye-reactive-messaging/2/advanced/blocking.html

We see the value of this annotation and would like the option to incorporate it into the OpenLiberty implementation. @Blocking may also become part of the MicroProfile Reactive Messaging specification that forms our users' API at some point in the future.

From the Vert.x code's perspective, this annotation causes the user's code to be run on a worker pool thread rather than having all operations on the reactive stream wait for the operation to finish. Indeed, this is how the SmallRye implementation implements the annotation, in WorkerPoolRegistry::executeWork at https://github.com/smallrye/smallrye-reactive-messaging/blob/aae925e2be1121af50637f6deba48a91d7361b0c/smallrye-reactive-messaging-provider/src/main/java/io/smallrye/reactive/messaging/connectors/WorkerPoolRegistry.java#L56

Our (OpenLiberty's) concrete requirement is that this @Blocking-annotated user method, which will now run on an embedded Vert.x worker pool thread rather than a thread from the OpenLiberty ManagedExecutorService pool ( https://docs.oracle.com/javaee/7/api/javax/enterprise/concurrent/ManagedExecutorService.html ), maintains all the JEE and other guarantees that users expect and that OpenLiberty wants to maintain for its server threads.

I will list these concrete requirements (on OpenLiberty's threads) below (this will take me a while to put in and check). For each of these OpenLiberty requirements I need to find the best way to map them into the Vert.x semantics (as is, or via the best set of Liberty/SmallRye/Vert.x changes that I can align).

@hutchig
Contributor

hutchig commented May 27, 2020

One of the requirements that we would like to achieve is to be able to control the size of any worker pool, and to grow and shrink it over time. In OpenLiberty we use a ThreadPoolController to control the size of our ThreadPoolExecutor.

So in the OpenLiberty ThreadPoolController we call j.u.c.ThreadPoolExecutor::{getPoolSize, get/setCorePoolSize, get/setMaximumPoolSize} to control the pool, as well as {getQueue().size(), getActiveCount, getCompletedTaskCount, getLargestPoolSize} to gather stats.

VertxImpl::getWorkerPool returns an ExecutorService, but in effect this comes from j.u.c.Executors::newFixedThreadPool, which returns a ThreadPoolExecutor.
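For reference, the grow/shrink operations described above are already supported by a plain j.u.c.ThreadPoolExecutor, provided core and maximum sizes are adjusted in an order that keeps core <= max at every step. A small illustrative sketch of what a controller loop might do:

```java
import java.util.concurrent.Executors;
import java.util.concurrent.ThreadPoolExecutor;

public class ResizeDemo {
    // Grow and then shrink a running pool, as a controller might do based on
    // observed queue depth and completed-task statistics.
    public static int[] resize() {
        ThreadPoolExecutor pool =
                (ThreadPoolExecutor) Executors.newFixedThreadPool(4);
        int before = pool.getCorePoolSize();

        // Grow: raise the maximum first so core <= max always holds.
        pool.setMaximumPoolSize(8);
        pool.setCorePoolSize(8);
        int grown = pool.getCorePoolSize();

        // Shrink: lower core first, then max; idle threads above core retire.
        pool.setCorePoolSize(2);
        pool.setMaximumPoolSize(2);
        int shrunk = pool.getMaximumPoolSize();

        pool.shutdown();
        return new int[]{before, grown, shrunk};
    }
}
```

The open question in the thread is not whether ThreadPoolExecutor can do this, but how a container gets a handle on (or supplies) the pool Vert.x uses.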

So Julien (@vietj) my concrete question is: if we ultimately wanted the ability to grow and shrink the size of a Vert.x worker thread pool, would I be able to progress this most expediently by:

  1. Providing design/code towards this feature request (i.e. enabling Vert.x to obtain
     an external executor/pool via an SPI) - this is best from my POV as it would enable
     us to fulfil a number of our requirements (and future ones that maybe don't exist yet).

  2. Providing design/code that adds to the existing Vert.x WorkerPool etc.
     the means for us to do the things we want, such as growing and shrinking the
     worker pool and applying contexts (if not done via Context).

The reason I ask for this opinion now is that when you said "this can be quite dangerous as one could create executors with wrong settings", I wondered whether you might be concerned about having external executors, and I note that 174 (which has something we would want if we were not to provide an external ExecutorService) was closed.

@vietj If we had a hard requirement to have growing/shrinking worker pools, where is best for me to prototype? #1539 (here) or 174, etc. (based on our ThreadPoolController's use of a subset of the ThreadPoolExecutor API)? What would be your concerns about a foreign ExecutorService? Of course we might need something to maintain the 'Multi-Reactor' pattern's understanding/ordering/exclusivity etc. I am very grateful for your input and want to fit with the direction you'd recommend exploring.

@hutchig
Contributor

hutchig commented May 27, 2020

A second concrete requirement/use case we have is to set the correct JEE context on the worker thread. This could be achieved with a custom executor alone (along with wrapped Runnables), but I have also taken careful note of what you said here and here [also #2536] and will spend some time on Vert.x Contexts (and the possible hook for custom Contexts in Vert.x 4) and how they could be used by the SmallRye API/configuration to get a Context from OpenLiberty.
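The usual shape of the wrapped-Runnable approach is capture-apply-restore: snapshot the context on the submitting thread, install it on the worker thread, and restore the previous value afterwards. A minimal sketch, with a plain ThreadLocal standing in for the real JEE context:

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.atomic.AtomicReference;

public class ContextPropagationDemo {
    // Hypothetical stand-in for the JEE context a container would propagate.
    static final ThreadLocal<String> CONTEXT = new ThreadLocal<>();

    // Capture on the caller's thread, apply on the worker thread,
    // restore afterwards so the pooled thread is left clean.
    public static Runnable contextual(Runnable task) {
        String captured = CONTEXT.get(); // snapshot on the submitting thread
        return () -> {
            String previous = CONTEXT.get();
            CONTEXT.set(captured);       // apply on the worker thread
            try {
                task.run();
            } finally {
                CONTEXT.set(previous);   // restore before returning to the pool
            }
        };
    }

    // The worker thread should observe the submitter's context.
    public static String demo() {
        ExecutorService pool = Executors.newSingleThreadExecutor();
        CONTEXT.set("tenant-A"); // context present on the caller's thread
        AtomicReference<String> seen = new AtomicReference<>();
        try {
            pool.submit(contextual(() -> seen.set(CONTEXT.get()))).get();
        } catch (Exception e) {
            return "error: " + e;
        } finally {
            pool.shutdown();
        }
        return seen.get();
    }
}
```

This works with any executor, which is why it is attractive when the executor itself cannot be replaced.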

@hutchig
Contributor

hutchig commented May 27, 2020

Of course, a custom ExecutorService would have its own threads - would there be an expectation to register with and implement BlockedThreadChecker.Task? In fact, I think I need to create a list of what the current VertxThread/(Netty) FastThreadLocalThread provides that is used for worker pools.

@hutchig
Contributor

hutchig commented May 27, 2020

#3070 also captures a topic (exception reporting) that is relevant for embedding Vert.x in an application container. As Julien says here, this can always be handled by wrapping the runnable that is submitted.
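A minimal sketch of that wrapping approach, with a hypothetical `reporter` callback standing in for whatever the container would do with the exception (logging, FFDC, etc.):

```java
import java.util.concurrent.atomic.AtomicReference;
import java.util.function.Consumer;

public class WrapDemo {
    // Wrap a task so any uncaught exception is reported to the container
    // before propagating, instead of vanishing on a pool thread.
    public static Runnable reporting(Runnable task, Consumer<Throwable> reporter) {
        return () -> {
            try {
                task.run();
            } catch (Throwable t) {
                reporter.accept(t); // hand the failure to the container
                throw t;            // then let it propagate as before
            }
        };
    }

    // A failing task: the reporter sees the exception's message.
    public static String demo() {
        AtomicReference<String> reported = new AtomicReference<>("none");
        Runnable wrapped = reporting(
                () -> { throw new IllegalStateException("boom"); },
                t -> reported.set(t.getMessage()));
        try {
            wrapped.run();
        } catch (IllegalStateException expected) {
            // swallowed here only so the demo can return the report
        }
        return reported.get();
    }
}
```

Because the wrapper rethrows, any existing uncaught-exception handling on the pool still fires; the container just gets to observe the failure first.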

@vietj
Member

vietj commented Feb 25, 2021

There is now an SPI in master for providing a custom scheduled executor factory for Vert.x.

@vietj vietj closed this as completed Feb 25, 2021
@vietj
Member

vietj commented Feb 25, 2021

This is also available in 3.9.

@vietj vietj added this to the 4.0.3 milestone Feb 25, 2021
Development

No branches or pull requests

5 participants