WorkerExecutor should accept custom function to create ExecutorService #1539
Comments
This can be quite dangerous, as one could create executors with wrong settings. When the queue is full, it is rather normal to queue tasks there, so it would rather be a limit on the queue itself than on the pool size. Besides this particular use case, do you see other use cases?
In a system with heavy workload, you would rather fail immediately when queuing is happening in the system than keep queueing more and more. That would lead to higher latency and eventually to an "out of memory" problem. That is probably the main use case. However, I think it is quite reasonable to give more control to more advanced users (a good example is that they could control the thread name or add more logic for thread pool tracking).
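The fail-fast behaviour described above can be sketched with a plain `java.util.concurrent` pool (not the Vert.x API): a bounded queue plus the default `AbortPolicy` makes submission fail immediately once all workers are busy and the queue is full. Names here are illustrative.

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class FailFastPool {
    // Bounded queue + AbortPolicy: once 'threads' workers are busy and the
    // queue holds 'queueCapacity' tasks, execute() throws
    // RejectedExecutionException instead of queueing further.
    public static ThreadPoolExecutor create(int threads, int queueCapacity) {
        return new ThreadPoolExecutor(
                threads, threads,
                0L, TimeUnit.MILLISECONDS,
                new ArrayBlockingQueue<>(queueCapacity),
                new ThreadPoolExecutor.AbortPolicy());
    }
}
```

A caller can catch `RejectedExecutionException` and fail the request immediately, which is the back-pressure behaviour described above.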
How about providing this as an SPI, like the other Vert.x pluggables such as BufferFactory, FutureFactory, etc.?
That sounds perfect!
Not sure if there is a workaround, but I would like to use a custom Executor in order to override some methods (I need to clean ThreadLocal variables before returning a thread to the thread pool). Or is there another way to reset ThreadLocal variable values between different worker executions?
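One workaround along these lines, sketched with a plain `ThreadPoolExecutor` rather than a Vert.x worker pool, is to override `afterExecute`, which runs on the worker thread after each task and before that thread takes its next task. `REQUEST_CONTEXT` is a hypothetical ThreadLocal standing in for whatever per-task state needs wiping.

```java
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class CleaningPool extends ThreadPoolExecutor {
    // Hypothetical per-task state we want wiped before the thread is reused.
    public static final ThreadLocal<String> REQUEST_CONTEXT = new ThreadLocal<>();

    public CleaningPool(int size) {
        super(size, size, 0L, TimeUnit.MILLISECONDS, new LinkedBlockingQueue<>());
    }

    @Override
    protected void afterExecute(Runnable r, Throwable t) {
        super.afterExecute(r, t);
        // Runs on the worker thread itself, so this clears the state the
        // just-finished task may have left behind.
        REQUEST_CONTEXT.remove();
    }
}
```

With a pool of size 1, a task that sets the ThreadLocal is guaranteed to be followed on the same thread by a task that observes it cleared.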
There is an upcoming effort about having some kind of hooks for that, for distributed tracing. I believe they would allow doing such a thing.
I am currently looking at this issue (in terms of an SPI) for the purposes of embedding Vert.x.
Hello Julien @vietj, I am coming back to look at this topic again. Reviewing #1539 #1650 #1937 #3182 #3184, I think this issue is the current open placeholder.

An idea occurred to me of a different mechanism that might be used when Vert.x is embedded. When looking at the SPI around metrics I was particularly interested in PoolMetrics, which has callbacks for queued, rejected, begin, and end of task. Metrics providers like Dropwizard seem to be the place to start from for me. (The use case context I am looking at is the @Blocking annotation in SmallRye Reactive Messaging and JEE/MicroProfile contexts.)

I was looking at hacking something which abstracts over the same sort of callbacks. It would be along the lines of an SPI, service loaded, but I see in #3182 (comment) you mention the user API. It would seem sensible/possible to apply these callbacks just to a subset of executors, starting with some subset of named worker pools; what do you think would be the best way of doing that? As the callback mechanism is different from an ExecutorService factory, can I split it off to another issue? (It may be that I have to try to progress one or both paths :-) ) I am also having a good look at these hooks as possibly something to use.
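For reference, the callback shape being described (queued/submitted, rejected, begin, end of task) can be illustrated with a small stand-in interface. This is not the actual Vert.x `io.vertx.core.spi.metrics.PoolMetrics` interface; the names and generics here are a simplified sketch of the same idea.

```java
import java.util.concurrent.atomic.AtomicInteger;

// Stand-in illustrating the pool-callback shape discussed above; the real
// Vert.x SPI lives in io.vertx.core.spi.metrics and differs in detail.
interface TaskPoolListener<T> {
    T submitted();                        // task enqueued; returns a per-task token
    void rejected(T token);               // task refused by the pool
    T begin(T token);                     // task dequeued and started
    void end(T token, boolean succeeded); // task finished
}

class CountingListener implements TaskPoolListener<Long> {
    final AtomicInteger queued = new AtomicInteger();
    final AtomicInteger running = new AtomicInteger();
    final AtomicInteger completed = new AtomicInteger();

    public Long submitted() { queued.incrementAndGet(); return System.nanoTime(); }
    public void rejected(Long token) { queued.decrementAndGet(); }
    public Long begin(Long token) { queued.decrementAndGet(); running.incrementAndGet(); return token; }
    public void end(Long token, boolean succeeded) { running.decrementAndGet(); completed.incrementAndGet(); }
}
```

An embedding container could implement such a listener to drive its own pool monitoring without the application changing any code, which is the attraction described above.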
@hutchig actually Vert.x has the io.vertx.core.Context object it uses for the operations you describe, and in Vert.x 4 we could provide a hook for embedders to supply a custom Context implementation that would allow doing that. A context is responsible for handling concurrency and executing tasks, among other things. Can you verify whether such a pluggable context would be a good fit?
I am having a thorough look/think about your suggestion. One of the things that was attractive to me about using a mechanism similar to the metrics callbacks was that the callbacks occurred, for example into Dropwizard, without the application having to make any code changes at all.

For us the analogous situation is where we package up the SmallRye Reactive Messaging implementation (as the Vert.x client) as well as its Vert.x dependency in our OSGi bundle, and can 'get our hooks in', as it were, via the way we package them (because you have provided mechanisms that allow this, of course). Having SmallRye Reactive Messaging not need any knowledge of being in one particular environment or another seemed 'good'. The SmallRye codebase would use the Vert.x APIs generically (optimised for Quarkus or whatever), but the Liberty code would not have to use Vert.x APIs (apart from implementing an SPI and making it service loadable) in 'formation' with SmallRye; we would get the callbacks as and when needed. I will dig into the existing SmallRye and Vert.x code more.

Did you have any further thoughts on how a custom Context (for example one representing a Liberty/JBoss/JEE 'black box') could come from the environment rather than being specified by the direct user of the Vert.x APIs (in my use case SmallRye)? I can see how it fits with the current Context.executeBlocking very well, but I am still exploring how it works for being able to effect this via repackaging (other than having SmallRye service-load custom Context interface implementors that can be mapped to from a worker-pool name). Did you have a preferred issue/forum to progress the custom Context interface and hook?
@hutchig it is actually not clear to me yet what you want to achieve exactly. Can you provide a few concrete use cases of what you want to achieve and how Vert.x relates to them?
Hi Julien @vietj, apologies for not replying yesterday due to a holiday here in the UK. I do have a very concrete use case that it would be helpful to write up here.

In OpenLiberty we use the SmallRye Reactive Messaging implementation to provide the implementation for our MicroProfile Reactive Messaging. Version 2.0 and above of this implementation has support for @Blocking, described here: https://smallrye.io/smallrye-reactive-messaging/smallrye-reactive-messaging/2/advanced/blocking.html We see the value of this annotation and would like the option to incorporate it into the OpenLiberty implementation. @Blocking may also become part of the MicroProfile Reactive Messaging specification that forms our users' API at some point in the future. From the Vert.x code's perspective this annotation leads to the user's code being run on a worker pool thread rather than all operations on the reactive stream waiting for the operation to complete.

Our (OpenLiberty's) concrete requirement is that this @Blocking annotated user method, which will now run on an embedded Vert.x worker pool thread rather than a thread from the OpenLiberty ManagedExecutor thread pool (a https://docs.oracle.com/javaee/7/api/javax/enterprise/concurrent/ManagedExecutorService.html), will maintain all the JEE and other requirements that users expect and that OpenLiberty wants to maintain for its server threads. I will list these concrete requirements (on OpenLiberty's threads) below (this will take me a while to put in and check). For each of these OpenLiberty requirements I need to find the best way to map them into the Vert.x semantics (as is, or via the best set of Liberty/SmallRye/Vert.x changes that I can align).
One of the requirements that we would like to achieve is to be able to control the size of any worker pool and to grow and shrink it over time. In OpenLiberty's ThreadPoolController we call j.u.c.ThreadPoolExecutor::{get/setPoolSize, get/setCorePoolSize, get/setMaximumPoolSize} to control the pool, as well as { threadPool.getQueue().size(), getActiveCount, getCompletedTaskCount, getLargestPoolSize } to get stats. VertxImpl::getWorkerPool returns an ExecutorService, but in effect its configuration is fixed. So Julien (@vietj), my concrete question is: if we ultimately wanted to achieve this kind of dynamic pool control, what would be the best route?
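The grow/shrink calls mentioned above have an ordering subtlety: `ThreadPoolExecutor` requires `corePoolSize <= maximumPoolSize` at all times, so a resize helper has to adjust the two in the right order. A minimal sketch (the class name and the idea of a single `setSize` entry point are illustrative, not OpenLiberty's actual controller):

```java
import java.util.concurrent.ThreadPoolExecutor;

public class PoolResizer {
    // ThreadPoolExecutor rejects a core size above the maximum, so when
    // growing we raise the ceiling first, and when shrinking we lower the
    // floor first. Stats such as pool.getQueue().size(), getActiveCount(),
    // getCompletedTaskCount() and getLargestPoolSize() remain available for
    // a controller to decide the target size.
    public static void setSize(ThreadPoolExecutor pool, int target) {
        if (target >= pool.getMaximumPoolSize()) {
            pool.setMaximumPoolSize(target); // grow: ceiling first
            pool.setCorePoolSize(target);
        } else {
            pool.setCorePoolSize(target);    // shrink: floor first
            pool.setMaximumPoolSize(target);
        }
    }
}
```

Shrinking takes effect lazily: excess idle workers are retired as they finish tasks rather than being interrupted.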
The reason I ask for this opinion now is that, when you said "this can be quite dangerous as one could create executors with wrong settings", I wondered whether you might be concerned about a foreign ExecutorService. @vietj, if we had a hard requirement to have growing/shrinking worker pools, where is best for me to prototype? #1539 (here) or 174...etc (based on our ThreadPoolController's use of a subset of the ThreadPoolExecutor API)? What would be your concerns about a foreign ExecutorService? Of course we might need something to maintain the 'Multi-Reactor' pattern understanding/ordering/exclusivity etc. I am very grateful for your input and want to fit with the direction you'd recommend exploring.
A second concrete requirement/use case we have is to set up the correct JEE context on the worker thread. This could be achieved with a custom executor alone (along with wrapped Runnables), but I have also taken careful note of what you said here and here [also #2536] and will spend some time on Vert.x Contexts (and the possible hook for custom Contexts in Vert.x 4) and how this could be used by the SmallRye API/configuration to get a Context from OpenLiberty.
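A minimal sketch of the wrapped-Runnable approach mentioned above, with a hypothetical ThreadLocal standing in for the JEE context that a real container (e.g. via a ManagedExecutorService contextual proxy) would capture and restore:

```java
// Sketch of correcting "context" on a foreign worker thread by wrapping each
// Runnable. CONTEXT is a hypothetical stand-in for the JEE context a real
// container would propagate; only the capture/install/restore pattern is
// the point here.
public class ContextPropagatingExecutor {
    public static final ThreadLocal<String> CONTEXT = new ThreadLocal<>();

    public static Runnable wrap(Runnable task) {
        String captured = CONTEXT.get();          // capture on the submitting thread
        return () -> {
            String previous = CONTEXT.get();
            CONTEXT.set(captured);                // install on the worker thread
            try {
                task.run();
            } finally {
                CONTEXT.set(previous);            // restore before thread reuse
            }
        };
    }
}
```

The same wrapping could be applied at the point where tasks are handed to a Vert.x worker pool, without the pool itself knowing about the container's context model.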
Of course a custom ExecutorService would have its own threads; would there be an expectation to register with and implement BlockedThreadChecker.Task? In fact I think I need to create a list of what the current VertxThread/(Netty) FastThreadLocalThread provides that is used for worker pools.
There is now an SPI in master for providing a custom scheduled executor factory for Vert.x.
This is also available back in 3.9 |
WorkerExecutor should accept a custom function to create the ExecutorService. This would help customize the policy on the thread pool. For example, we might want a policy such that if the queue is full, the pool rejects the request rather than continuing to enqueue it.