
RFC 0002: MT Execution Contexts #2

Draft
wants to merge 15 commits into main
Conversation

@ysbaddaden (Collaborator)

No description provided.

@ysbaddaden self-assigned this Feb 5, 2024
@beta-ziliani (Member) commented Feb 5, 2024

@straight-shoota (Member)

The distinction between execution context and scheduler could use a bit of refinement. There is definitely some overlap in functionality, just by comparing the APIs. I guess execution contexts might take over some features of the current scheduler?

@RX14 left a comment

Absolutely lovely, well-written proposal. I completely agree with the design intent here, and only have a few (mostly overlapping) comments about event loops and the default context. The vast majority of this design is exactly what I would like to see in Crystal.

> - a scheduler to run the fibers (or many schedulers for a MT context);
> - an event loop (IO & timers):
>
> => this might be complex: I don’t think we can share a libevent across event bases? we already need to have a “thread local” libevent object for IO objects as well as for PCRE2 (though this is an optimization).

Are multiple event bases required? It doesn't seem obvious to me that they are.

@ysbaddaden (Collaborator, Author)

Yeah, I'm wondering about that. We currently have one libevent base per thread, and that overcomplicates IO::Evented with lots of objects to allocate on the heap for each IO and each thread.

I'm probably being naive (though Go seems to do that), but maybe a global event loop wouldn't behave so badly, even with the potential contention on adding an event to the libevent base, especially on machines with too many cores to count (e.g. ARM Neoverse).


On the other hand, not having to synchronize everything and their mother makes other things a lot less complex. It is quite liberating to not give a shit about what other threads are doing.

FWIW, having one dedicated ring per thread is also how the makers of io_uring recommend multi thread usage.

@ysbaddaden (Collaborator, Author)

Hmm, implementing our own wrapper on top of epoll/kqueue becomes more and more compelling.

Since we reschedule the fiber, we could put events on the stack (this is possible with libevent but not recommended: the struct size may change) and not have to keep them somewhere to try and avoid reallocating events all the time.

Then we could have one or many event loops and not care about thread locals (especially in IO); we could keep Fiber#resume_event and merely take care that it can only be in one event loop queue at a time.
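A minimal sketch of that idea, assuming a hand-rolled poller; `Poller`, `IOEvent` and the registration call are hypothetical names, not stdlib API (only `Fiber.current` and the internal `Crystal::Scheduler.reschedule` exist today):

struct IOEvent
  # The event lives on the suspended fiber's stack: no heap allocation and no
  # reuse pool, since the memory stays valid until the fiber is resumed.
  getter fiber : Fiber
  getter fd : Int32

  def initialize(@fiber : Fiber, @fd : Int32)
  end
end

def wait_readable(poller : Poller, fd : Int32) : Nil
  event = IOEvent.new(Fiber.current, fd) # stack-allocated (struct local)
  poller.add(pointerof(event))           # hypothetical registration call
  Crystal::Scheduler.reschedule          # suspend; the poller resumes event.fiber
end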


Yes, storing it on the stack is a very useful technique for io_uring as well (to keep it alive during submission). I've made a lot of use of that and it helped a lot. The completion event needs to be handled somehow though: I stored it in the fiber itself until it was awakened and could process it, but there may be better techniques.

One good thing about the current implementation though is that it handles the thundering herd problem decently, as far as I've been able to see. That is, if multiple fibers are waiting for something, only one of them will wake when something happens. With many listeners that may become something to keep track of.

@ysbaddaden (Collaborator, Author)

We should move "refactor the event loop" into a proper issue on https://github.com/crystal-lang/crystal/issues

It doesn't need an RFC as it's mostly an internal implementation detail.

@ysbaddaden (Collaborator, Author) commented Feb 27, 2024

It might be interesting to consider one event loop (EL) per execution context (EC) 🤔

With a single EL per EC, a starving scheduler will run the EL and enqueue every resumable fiber for the EC.

With one EL per scheduler, a starving scheduler will run its own EL and push only the fraction of fibers it happened to have, possibly flooding the EC queues, delaying other schedulers from running their own EL, and delaying resumable fibers from being resumed.

That may be a reason why Go has a single EL: it could be unfair otherwise.
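A rough sketch of the single-EL-per-EC variant (all method names here are assumed, not actual runtime API): a starving scheduler drains the context's one event loop and hands the ready fibers back to the whole context, not to itself.

def run_loop
  loop do
    if fiber = dequeue? || steal?
      resume fiber
    elsif @context.event_loop.try_lock?
      # Starving: run the EC's single event loop and enqueue every resumable
      # fiber for the whole context, so any scheduler can pick them up.
      ready = @context.event_loop.run
      @context.enqueue_all(ready)
      @context.event_loop.unlock
    else
      park # another scheduler is already running the event loop
    end
  end
end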


> ## Default context configuration
>
> This proposal doesn’t solve the inherent problem of how applications can configure the default context at runtime (e.g. the number of MT schedulers), since we create the context before the application’s main can start.

This proposal supports creating multiple execution contexts; let the application configure its own EC and start fibers in it if required. That allows all the complexity the app needs when configuring the context it actually runs in, because it's initialized by the application. The root context does not have to be the one the application makes heavy use of.

@straight-shoota changed the title from "RFC 0002 - MT Execution Contexts" to "RFC 0002: MT Execution Contexts" on Feb 8, 2024
@ysbaddaden (Collaborator, Author) commented Feb 13, 2024

@RX14 Tell me if I'm wrong, but the plan would be:

Crystal 1

  • introduce EC with ST and MT;
  • deprecate same_thread argument;
  • same_thread: false is NOOP;
  • ST accepts same_thread: true (always true anyway);
  • MT raises on same_thread: true (new API, no breaking change; sketched below);
  • default EC is ST (no breaking change);
  • consider a -Dmt flag to force default EC to be MT (?);

Crystal 2

  • remove deprecated same_thread (breaking change);
  • default EC becomes MT (breaking change).
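A sketch of how the deprecated argument could be handled during the transition. Signatures are illustrative only, and each overload delegates to an assumed `spawn` without the same_thread parameter:

class ExecutionContext::SingleThreaded < ExecutionContext
  @[Deprecated("same_thread will be removed in Crystal 2")]
  def spawn(*, name : String? = nil, same_thread : Bool, &block : ->) : Fiber
    # Always single-threaded: `true` is trivially satisfied, `false` is a NOOP.
    spawn(name: name, &block)
  end
end

class ExecutionContext::MultiThreaded < ExecutionContext
  def spawn(*, name : String? = nil, same_thread : Bool, &block : ->) : Fiber
    # New API: no existing code to break, so we can refuse right away.
    raise ArgumentError.new("MT contexts can't guarantee same_thread") if same_thread
    spawn(name: name, &block)
  end
end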

Isolated context

I mostly see that context for UI loops, where we want to prevent blocking behaviors; but there's nothing wrong with doing blocking calls in other use cases, and using the event loop normally is fine. Still, spawning a fiber without an explicit context should either raise, or a default spawn context should be configured (as you suggest):

abstract class ExecutionContext
  class Isolated < ExecutionContext
    def initialize(name : String, @spawn_context : ExecutionContext? = nil, &)
      # The isolated context owns a single dedicated thread that runs the block.
      @thread = Thread.new(name) { yield }
    end

    def spawn(**args, &) : Fiber
      # Spawning from an isolated context delegates to the configured spawn
      # context; without one, there is no sane place to put the fiber.
      if ctx = @spawn_context
        ctx.spawn(**args) { yield }
      else
        raise RuntimeError.new("Can't spawn in isolated context (need a spawn context)")
      end
    end
  end
end

mt = ExecutionContext::MultiThreaded.new
ui = ExecutionContext::Isolated.new("GTK", spawn_context: mt) { Gtk.main }

Instead of raising, the spawn context could be the default EC.

@ysbaddaden (Collaborator, Author)

@RX14 I applied your suggestions to the RFC.

There's no such method
@RX14 commented Feb 14, 2024

@ysbaddaden I think -Dmt is probably not necessary, and the exact implementation plan for crystal 2 is best left deferred until there's operational experience, but I agree on everything else.

I envision the root execution context being MT or ST to be a moot point, because every well-architected app has a single App.run line at the top level, and converting that top-level code to spawn into an MT context and wait for that fiber should be a one-liner if we have the right helper methods in place.
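For instance, something like this sketch, where `App.run` stands in for the application's entry point and the EC constructor is the proposal's assumed API (a real helper could wrap the boilerplate):

mt = ExecutionContext::MultiThreaded.new
done = Channel(Nil).new
mt.spawn do
  begin
    App.run
  ensure
    done.send(nil) # signal completion even if App.run raises
  end
end
done.receive # block the root context until the app fiber finishes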

If we all agree, maybe we can start on the other 90% of the RFC: bikeshedding naming. I like ExecutionContext::Parallel, because I don't like the idea of implementation details (threads) leaking into the name.

@ysbaddaden (Collaborator, Author)

@RX14 The mt flag may not be necessary in Crystal v1, as the default context could be MT:1 and resized on demand (still no breaking change). I'll still push for MT:N to be the default in Crystal v2. Execution contexts are a means to further control parallelism in very specific cases, not the end solution. I believe developers shouldn't have to care about them until they have to.

I wouldn't bikeshed the naming just yet. As I'm experimenting with the types, I feel that the difference is getting thinner and thinner. In fact, Kotlin only has a single scheduler implementation, and a couple of constructors to start execution contexts with one thread (ST) or many threads (MT).

I'm also struggling with the inheritance: EC::MT < EC makes sense, but so does EC::MT::Scheduler < EC as we want EC.current to point to the current MT scheduler running on the thread, not the shared MT context (it's easier to reach the context from the scheduler).
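A sketch of that double inheritance (all names assumed, not the final API): the thread-local current points at the scheduler, and the shared context is reached through it.

abstract class ExecutionContext
  @[ThreadLocal]
  @@current : ExecutionContext?

  def self.current : ExecutionContext
    @@current.not_nil!
  end

  def self.current=(ctx : ExecutionContext)
    @@current = ctx
  end

  class MultiThreaded < ExecutionContext
    class Scheduler < ExecutionContext
      # Each thread runs one scheduler; the shared context is reached from it.
      getter context : MultiThreaded

      def initialize(@context : MultiThreaded)
        ExecutionContext.current = self
      end
    end
  end
end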

@crysbot commented Feb 21, 2024

This pull request has been mentioned on Crystal Forum. There might be relevant details there:

https://forum.crystal-lang.org/t/crystal-multithreading-support/6622/8

@ysbaddaden (Collaborator, Author) commented Feb 23, 2024

I forgot again, but MT:1 would break spawn(same_thread: true) in Crystal 1. It's a NOOP without the preview_mt flag, but the parameter was still exposed in the public API 😭

@straight-shoota (Member) commented Feb 23, 2024

I think we can accept breakage with same_thread: true. It only works with preview_mt, which is explicitly a preview feature. There should be no expectation of compatibility outside of preview_mt.

@yxhuvud left a comment

I like this, I like this a lot.


> Such a group of fibers will never run in parallel. This can vastly simplify the synchronization logic since you don’t have to deal with parallelism anymore, only concurrency, which is much easier & faster to deal with. For example no need for costly atomic operations, you can simply access a value directly. Parallelism issues and their impact on the application performance are limited to the global communication.
>
> ## Issues

I think data and especially execution locality could show up on the negative side as well, as the round robin takes away a lot of programmatic control over data locality. It is .. possible.. to manually schedule fibers to dedicated threads, but that really is not how it is currently meant to be used.
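To make the quoted claim concrete, a sketch against the proposed API (the constructor name is an assumption): fibers of one single-threaded context never run in parallel, so a plain variable is safe where an MT context would need an Atomic.

st = ExecutionContext::SingleThreaded.new(name: "workers")
counter = 0

100.times do
  # Concurrency without parallelism: no Atomic(Int32) needed here.
  st.spawn { counter += 1 }
end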

> - a scheduler to run the fibers (or many schedulers for a MT context);
> - an event loop (IO & timers):
>
> => this might be complex: I don’t think we can share a libevent across event bases? we already need to have a “thread local” libevent object for IO objects as well as for PCRE2 (though this is an optimization).
@yxhuvud commented Feb 23, 2024

I'm in the 'let the event loop decide if it wants to be instantiated at the thread, execution context, or global level' camp. How that would look API-wise I'm less sure about, especially if a dynamic number of threads in a context is to be supported.

> This might be complex: I don’t think we can share a libevent across event bases

From what I have gathered from the libevent docs, it is possible, but it would necessitate a lot more synchronization when IO happens (*), so it is probably slower.

But yes, it is complex. Windows, and its weird file handles, says hi. Each open file handle is specific to the instance of whatever it uses, so there needs to be only one global event instance there.

  • (*) we already enable some structures for thread safety but then create separate bases for each thread anyhow, IIRC (it has been quite a while since I looked at it). I think we can remove that enabling without danger: it should really only be needed when actually reusing a libevent base between threads. We don't use the specialized MT-safe libevent functions that make use of it.

> - configuration (e.g. number of threads, …);
> - methods to spawn, enqueue, yield and reschedule fibers within its premises;
> - a scheduler to run the fibers (or many schedulers for a MT context);
> - an event loop (IO & timers):
@yxhuvud commented Feb 23, 2024

It probably needs to be mentioned that this needs to continue to work with channels and mutexes. That is somewhat straightforward today with how fibers are bound to a thread once executed, so the simple version is to just schedule using the normal interfaces. But the interfaces for scheduling are not necessarily the same between execution contexts as they are within one!

For example, a basic work-stealing scheduler can default to just enqueueing a fiber in the executing scheduler/thread context and let distribution between threads happen in other ways. But that doesn't work if the thing to wake up doesn't live in the same context: then it needs to be communicated somehow. And then the question is what to communicate it to.

Also somewhere it should probably be explicitly defined what happens if channel interaction happens in an isolated context (as defined above).
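A sketch of that cross-context wake-up (method names assumed): the waker must hand the fiber back to the context that owns it, which therefore needs at least one thread-safe enqueue path, even if intra-context scheduling uses cheaper ones.

def wake(fiber : Fiber)
  owner = fiber.execution_context
  if owner == ExecutionContext.current
    owner.enqueue(fiber)          # local: may use the scheduler's fast path
  else
    owner.enqueue_external(fiber) # cross-context: synchronized handoff
  end
end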

@ysbaddaden (Collaborator, Author)

This is explicit on the Guide side:

> Applications can create any number of execution contexts in parallel. These contexts are isolated but they shall still be capable to communicate together with the usual synchronization primitives (e.g. Channel, Mutex) that must be thread-safe.
>
> Once spawned, a fiber shouldn’t move to another execution context. For example, on re-enqueue the fiber must be resumed into its execution context: context B enqueues a waiting sender from context A. That being said, we could allow sending a fiber to another context.

It's not detailed in the Technical side, though.

Comment on lines 355 to 357
def initialize(@name : String, @minimum : Int32, @maximum : Int32)
# todo: start @minimum threads
end

Allowing a dynamic number of threads requires more synchronization and complexity than having a static number. While it sounds nice to be able to adjust, it probably warrants its own separate class. Making certain all threads are in a waiting state before starting to actually queue stuff allows a bunch of simplifications, with fewer mutexes, and risks a lot fewer possible race conditions too.

@ysbaddaden (Collaborator, Author)

Enqueues don't need much synchronization. Go pushes to a bounded local queue (per scheduler) with overflow to a global queue; threads can be started at any time: when they reach the run loop they will grab a batch of fibers from the global queue or steal from another scheduler. Stopping isn't more complex either: schedulers aren't tied to a specific thread, so the thread detaches the scheduler and returns itself to the thread pool.

The complexity is more about when to start / stop a thread.
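A rough sketch of that enqueue path (names hypothetical, after Go's runqput):

def enqueue(fiber : Fiber)
  unless @local.push?(fiber)          # bounded per-scheduler queue, lock-free
    # Overflow: move half of the local queue (plus the new fiber) to the
    # global queue so other schedulers can grab a batch or steal from us.
    batch = @local.grab_half
    batch << fiber
    @context.global_queue.push(batch) # the only synchronized step
  end
end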

@ysbaddaden (Collaborator, Author)

And that complexity pushes for it to become a "future evolution".

@Blacksmoke16 (Member)

As someone who isn't familiar with this stuff at all, my random question is:

Do we need to do anything in relation to how Intel has P and E cores now? Like, as a way to signal to the OS's thread scheduler that a fiber should have a preference on where it runs? Or is that something the OS itself handles somehow?

@ysbaddaden (Collaborator, Author)

@Blacksmoke16 From what I read, specifying a thread priority can hint the OS to schedule the thread on a big (performance) or little (efficiency) core. We can also set a thread's affinity to a given core, but then we must detect the core type beforehand.
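For illustration only, a Linux-specific affinity call (this is not a Crystal API; a UInt64 stands in for cpu_set_t, which only works for machines with up to 64 CPUs):

lib LibC
  fun pthread_setaffinity_np(thread : PthreadT, cpusetsize : SizeT, cpuset : UInt64*) : Int
end

mask = 1_u64 << 0 # one bit per CPU: pin the current thread to core 0
LibC.pthread_setaffinity_np(LibC.pthread_self, LibC::SizeT.new(sizeof(UInt64)), pointerof(mask))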

Adds notes about wrapping an existing EC, and thread affinities
(to pin a thread to a core) in addition to setting priorities (still no API).

Simplifies the EC API to remove `yield` and `sleep` that may not
be needed (the `Fiber.yield` and `sleep` methods can create the
resume events), but adds `spawn(same_thread)` to handle the
transition.
Co-authored-by: Sijawusz Pur Rahnama <[email protected]>
@ysbaddaden (Collaborator, Author) commented Apr 19, 2024

Thinking again about names:

  • ST: Mono? Single? Concurrent (to oppose Parallel)?
  • MT: Parallel is still the best for MT 👍
  • I prefer Isolated over Exclusive after all (the fiber gets isolated), but no strong opinion;

I'm also thinking about simple constructors:

io_workers = ExecutionContext.concurrent
cpu_workers = ExecutionContext.parallel(size: 8)
ui = ExecutionContext.isolate { UI.main_loop }

Yet, I'm still struggling to find a nice name for the single-threaded context 😞

@straight-shoota (Member)

SingleThreaded maybe?

I'm not sure we need such convenience constructors. This isn't essential anyway and we can figure it out later.

@ysbaddaden (Collaborator, Author)

@straight-shoota yes, the convenience functions aren't needed, but while ExecutionContext::Parallel and ExecutionContext::Isolated feel nice, ExecutionContext::SingleThreaded doesn't have the same catchy feeling.

@crysbot commented May 3, 2024

This pull request has been mentioned on Crystal Forum. There might be relevant details there:

https://forum.crystal-lang.org/t/how-to-build-for-release/6808/5

straight-shoota pushed a commit to crystal-lang/crystal that referenced this pull request Aug 7, 2024
Add `GC.stop_world` and `GC.start_world` methods to be able to stop and restart the world at will from within Crystal.

- gc/boehm: delegates to `GC_stop_world_external` and `GC_start_world_external`;
- gc/none: implements its own mechanism (tested on UNIX & Windows).

My use case is a [perf-tools](https://github.com/crystal-lang/perf-tools) feature for [RFC 2](crystal-lang/rfcs#2) that must stop the world to print out runtime information of each ExecutionContext with their schedulers and fibers. See crystal-lang/perf-tools#18
@crysbot commented Sep 23, 2024

This pull request has been mentioned on Crystal Forum. There might be relevant details there:

https://forum.crystal-lang.org/t/timeline-for-multithreading-support/3604/21

@crysbot commented Sep 23, 2024

This pull request has been mentioned on Crystal Forum. There might be relevant details there:

https://forum.crystal-lang.org/t/upcoming-release-1-14-0/7199/1

@crysbot commented Sep 25, 2024

This pull request has been mentioned on Crystal Forum. There might be relevant details there:

https://forum.crystal-lang.org/t/new-event-loop-unix-call-for-reviews-tests/7207/6

@crysbot commented Oct 18, 2024

This pull request has been mentioned on Crystal Forum. There might be relevant details there:

https://forum.crystal-lang.org/t/charting-the-route-to-multi-threading-support/7320/1

@Qard commented Oct 26, 2024

Just wanted to share some thoughts/ideas I'm using in my own Fiber-based multi-threaded language.

There are two distinct access patterns that can be isolated into different types of Fiber construction:

  • In the typical case it is desirable that a Fiber has access to variables present in the lexical scope in which it is constructed. As such, distributing it to another thread creates synchronization challenges which make doing so undesirable.
  • In some cases you have a singular "task" you want distributed, for which it would be reasonable that the processor takes ownership, in the sense that a language like Rust does.

How I chose to handle this is with two distinct Fiber types: a local Fiber, which can access any in-scope data but can only execute on the thread in which it was created, and an ownership-claiming Fiber, which specifically avoids running on the thread in which it was created, taking ownership of a passed-in value and bringing it over to whatever thread it gets dispatched to.

On top of that I have a work-stealing mechanism which can pull a Fiber back to its origin thread when it's not busy, but the Fiber would otherwise be guaranteed to run on a different thread.

This design focuses very strongly on maintaining locality to achieve the best performance and to avoid synchronization complexity as much as possible while making distribution over threads certain when loop utilization would benefit from it.

In a typical app server you're going to see HTTP requests come in which are largely isolated from each other and can be immediately distributed to other threads, but then the processing within those tends to need to share more with each other, and what you need within the scope of parts of that request is more likely just concurrency and not actually parallelism. If you need parallelism you can reach for that tool, but if a more restrictive tool is provided to ensure actual isolation of the work for that Fiber then it can be much more effectively distributed.

Another thing you see often with HTTP requests is that most object references within a request are to objects allocated in that request, but there are a small number of exceptions which typically fit neatly into a shared-interface category, such as a database connection (or pool) or an app server exposing utilities to each route. For these types I opted to build actors into my language and have any interaction with these objects go through a shim copied into the ownership-taking fibers, which dispatch these interactions across threads using Fiber yields as the "lock" mechanism. A task simply gets passed over to the thread owning the object to do that interaction, and the result of that interaction resolves the yield on the other end.

In my language I went for an explicit form of this actor type specialization, but you could also go for explicit construction of a proxy type when passing to an ownership-taking Fiber.

In any case, what I wanted to convey is that in my experience there are two distinct ways in which people want to distribute work with Fibers and isolating those two behaviours can actually have a lot of benefits. 🙂
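Crystal-flavored pseudocode of the two construction forms described above (this mirrors Qard's design, not any Crystal API; every name is illustrative):

counter = 0

local = LocalFiber.new { counter += 1 }   # may close over lexical scope, but
                                          # only runs on the creating thread

owned = OwningFiber.new(request) do |req| # takes ownership of `req` and runs
  handle(req)                             # on another thread; the origin may
end                                       # steal it back when idle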

straight-shoota pushed a commit to crystal-lang/crystal that referenced this pull request Dec 6, 2024
Upgrades the IOCP event loop for Windows to be on par with the polling event loops (epoll, kqueue) on UNIX. After a few low-hanging fruits (enqueueing multiple fibers on each call, for example), the last commit completely rewrites the `#run` method:

- store events in pairing heaps;
- high resolution timers (`CreateWaitableTimer`);
- block forever/never (no need for timeout);
- cancelling timeouts (no more dead fibers);
- thread safety (parallel timer de/enqueues) for [RFC #2];
- interrupt the run using a completion key instead of a UserAPC for [RFC #2] (untested).

[RFC #2]: crystal-lang/rfcs#2
@crysbot commented Dec 17, 2024

This pull request has been mentioned on Crystal Forum. There might be relevant details there:

https://forum.crystal-lang.org/t/upcoming-release-1-15-0/7537/1

straight-shoota pushed a commit to crystal-lang/crystal that referenced this pull request Dec 25, 2024
In an MT environment such as proposed in crystal-lang/rfcs#2, the main thread's fiber may be resumed by any thread, and it may return, which would terminate the program... but it might return from _another thread_ than the process' main thread, which may be unexpected by the OS.

This patch instead explicitly exits from `main` and `wmain`.

For backward compatibility reasons (win32 `wmain` and wasi `__main_argc_argv` both call `main` and are documented to do so), the default `main` still returns, but it is being replaced for UNIX targets by one that exits.

Maybe the OS's actual entrypoint could merely call `Crystal.main` instead of `main` and explicitly exit (there wouldn't be a global `main` except for UNIX), but this is out of scope for this PR.