Replies: 5 comments 4 replies
-
@ossia/core
-
@avilleret what do you think? Would that break a lot of things wrt the Max, Pd externals, etc.?
-
note to self: json_writer_detail.cpp: write_value
-
It's starting to be done (it was quite a long job...). Another related thing is that I would like to get rid of the remote value-request mechanism: frankly, there's pretty much never a good reason to ask for a value from a remote network device and just busy-wait while it arrives. In particular, with UDP-based protocols there's no sure way to tell that an incoming message really comes from where we believe it comes from, which makes this very inconvenient to implement: it can basically only be a fairly wasteful, best-effort thing where we keep in memory an arbitrarily long list of all the requests that have been made, and go through it whenever we receive a message to check whether that request has been answered.
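To make the "best effort" scheme above concrete, here is a rough sketch of what such request tracking looks like; every name here is hypothetical, not the actual ossia API. The point is the linear scan: since UDP gives no way to correlate a reply with a request, every incoming message has to be checked against the whole list of outstanding requests.

```cpp
#include <deque>
#include <functional>
#include <string>

// Hypothetical sketch, not ossia code: remember every outgoing value
// request, and check each incoming message against all of them.
struct pending_request {
  std::string address;                  // address we asked about
  std::function<void(float)> on_reply;  // callback to run when answered
};

struct request_tracker {
  // Grows without bound if peers never reply (hence "kinda wasteful").
  std::deque<pending_request> pending;

  void request(std::string address, std::function<void(float)> cb) {
    pending.push_back({std::move(address), std::move(cb)});
  }

  // Called for *every* incoming message: linear scan over all
  // outstanding requests to see whether this message answers one.
  bool on_message(const std::string& address, float value) {
    for (auto it = pending.begin(); it != pending.end(); ++it) {
      if (it->address == address) {
        it->on_reply(value);
        pending.erase(it);
        return true;
      }
    }
    return false; // unsolicited, or a reply to a forgotten request
  }
};
```

Note that nothing here proves the message is an actual reply rather than an unrelated update to the same address, which is exactly the ambiguity described above.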
-
A very good benefit of all that work is that the protocols are being refactored to be muuuuch more configurable than they were, as it's now much easier to share code between them. For instance, it's now possible to create OSC clients & servers that conform to OSC 1.0, OSC 1.1, or OSC 1.1 + the various extensions. I also added some optimizations to make sure that no memory is allocated and OSC messages are written pretty much directly for the very common cases of ossia addresses with known types (e.g. 1, 2, 3, 4 floats, ints, bools, etc.), so there should be some performance gains in these areas. Additionally, some conformance issues were fixed & OSC blob support was added (it wasn't supported so far :x)
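To illustrate the kind of direct, allocation-free writing mentioned above, here is a sketch of serializing a single-float OSC message into a caller-provided fixed buffer, following the OSC 1.0 wire format (null-terminated address padded to 4 bytes, a ",f" type tag string padded to 4 bytes, then the float as big-endian). This is not the actual ossia code; the function name and helper are made up for the example.

```cpp
#include <cstdint>
#include <cstring>

// Illustrative sketch (not ossia's implementation): write an OSC message
// carrying one float into buf, with no heap allocation. Returns the number
// of bytes written, or 0 if the buffer is too small.
static std::size_t write_osc_float(char* buf, std::size_t cap,
                                   const char* address, float value) {
  // Pad a string length (including its terminating '\0') up to 4 bytes.
  auto pad4 = [](std::size_t n) { return (n + 4) & ~std::size_t{3}; };
  const std::size_t alen = pad4(std::strlen(address));
  const std::size_t total = alen + 4 /* ",f\0\0" */ + 4 /* float */;
  if (total > cap)
    return 0;
  std::memset(buf, 0, total);                      // zero-fill all padding
  std::memcpy(buf, address, std::strlen(address)); // address pattern
  std::memcpy(buf + alen, ",f", 2);                // type tag string
  std::uint32_t bits;
  std::memcpy(&bits, &value, 4);                   // IEEE 754 bit pattern
  char* p = buf + alen + 4;
  p[0] = char(bits >> 24);                         // big-endian float
  p[1] = char(bits >> 16);
  p[2] = char(bits >> 8);
  p[3] = char(bits);
  return total;
}
```

For a known-type address, all sizes are computable up front, so the message can be written straight into a send buffer in one pass.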
-
Hi all,
the way it's been done so far is that protocol classes may or may not use threads internally to receive messages.
This was done this way mostly because it was the simplest possibility with OSCPack at the time.
I think that this was overall creating more problems than it solved, as now a lot of care must be taken by everyone all the time, when updating trees, when using callbacks, etc.
e.g. it makes it pretty much impossible to use ossia callbacks directly in Python, Unity, UE, etc... as these aren't thread-safe.
A first mitigation was introduced with message queues but it only solves the "sending osc messages" problem, not the "adding / removing nodes" one.
Thus, I'd like to propose moving to an async event loop model instead, where the user chooses in which thread network messages are processed - this way, single-threaded environments shouldn't encounter any particular issues anymore (and for users without tremendous performance needs, it will also likely be faster, as it removes the need for synchronisation if you just process the incoming messages regularly in your main thread).
I've started doing this in a separate OSC implementation here:
#674
The only difference is that instead of parameters being updated automagically, one has to pass a context to the protocols when creating them, and then poll that context at a regular interval with asio::io_context::poll(), which will dispatch the received messages in the thread where poll() (or run()) is called.
Also, this will allow users to have a single thread shared by all protocols, instead of the current situation where each protocol spins up one or more threads, which can decrease overall performance when a large number of protocols are running.
The end goal for this is to fix the long-standing ossia score issue where it's not possible to update a tree during execution of the score - this would allow us to run a single "update the device tree" step before an execution tick, whereas right now updates can happen randomly during execution, which can (and will) break things.