yorickpeterse opened this issue on Jan 14, 2024 · 1 comment
Labels
- compiler: Changes related to the compiler
- feature: New things to add to Inko, such as a new standard library module
- runtime: Changes related to the Rust-based runtime library
- std: Changes related to the standard library
The current approach to distributing work is to use a channel, then use e.g. an enum as the message, and have a bunch of processes with a run method that essentially does the following:
fn pub mut run {
  loop {
    match jobs.receive {
      ...
    }
  }
}
While channels certainly have their purpose, it feels a bit redundant to define messages using enum values when processes already have first-class messages.
An alternative is to maintain a list of processes and send each scheduled piece of data directly to one of them. This removes the need for an enum, but unlike a channel, work is no longer balanced automatically: the sender has to pick a recipient.
It would be nice if we could somehow make this a first-class feature in Inko. The idea is that different instances of a process type can form a "cluster" of sorts. Messages sent to the cluster are then distributed across processes. This would basically act the same way as using channels, just minus the need for an intermediate enum value. Since messages are heap values (due to their variable size), we could in fact reuse channels for this internally.
A hypothetical syntax would be the following:
class async Worker {
  fn async foo(a: Int) { ... }
  fn async bar(b: Int) { ... }
}

let proc1 = Worker {}
let proc2 = Worker {}

# Unsure about the syntax, but this would create the cluster
let cluster = async [proc1, proc2]

cluster.foo(42)
cluster.bar(50)
I'm not sure how we'd distribute work though. For example, in the above case all processes are sleeping, so we'd have to send a message to one to wake it up. This then puts the onus of distributing work on the sender. A proper work-stealing mechanism (as is used by the scheduler itself) would require more substantial changes to how processes are implemented.
Long story short, I want to explore how we can basically turn the current combination of processes and channels into something that's first-class.
Related work
No response