Generic queued emitter for threading #150
-
Should this be called
-
Stepping back, should we call an invokable with it or should we emit a signal? As a signal, we would need a connection in C++/QML to connect the "ready" signal from the background thread to the "sync" method on the Rust side. As an invokable, we don't need the connection on the C++/QML side; instead the background thread calls the "sync" method directly. Or could we have both a signal and an invokable route?
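Roughly, the two routes look like this in plain Qt terms; DemoObject and its members here are illustrative placeholders, not the generated bridge code:

```cpp
#include <QMetaObject>
#include <QObject>

// Illustrative QObject with a "ready" signal and a "sync" invokable; the
// bodies stand in for the generated C++ side of the bridge.
class DemoObject : public QObject
{
    Q_OBJECT
public:
    Q_INVOKABLE void sync()
    {
        // pull the new state across from the Rust side
    }

signals:
    void ready();
};

// Signal route: the C++/QML side has to set up the connection once, then the
// background thread only needs to emit ready().
void setupSignalRoute(DemoObject* obj)
{
    QObject::connect(obj, &DemoObject::ready, obj, &DemoObject::sync,
                     Qt::QueuedConnection);
}

// Invokable route: no connection needed, the background thread queues the
// call to sync() directly onto the object's thread.
void invokableRoute(DemoObject* obj)
{
    QMetaObject::invokeMethod(obj, "sync", Qt::QueuedConnection);
}
```

Either way the actual call runs on the object's thread because the connection or invocation is queued; the difference is only where the wiring lives.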
-
As discussed, trying to make the call from the Rust background thread totally safe is tricky. The following C++ code shows how we could lock objects enough to stop the object being deleted while one of its methods is still running:

```cpp
#include <atomic>
#include <chrono>
#include <functional>
#include <iostream>
#include <memory>
#include <mutex>
#include <thread>
#include <vector>

using namespace std;

class MyObject;

class ThreadObj
{
public:
  void loadClosures(MyObject* obj)
  {
    // Take the list of closures
    std::vector<std::function<void()>> vec = {};
    {
      std::lock_guard<std::mutex> guard(m_mutex);
      std::swap(vec, m_closures);
    }

    // Add each closure to the Qt event loop (here we fake it)
    // but this is "safe" as we hold a lock on the mutex in MyObject
    for (auto it = vec.cbegin(); it != vec.cend(); it++) {
      // QMetaObject::invokeMethod(obj, (*it), Qt::QueuedConnection);
      (*it)();
    }
  }

  void queue(std::function<void()> closure)
  {
    // Queue the closure onto the event loop
    std::lock_guard<std::mutex> guard(m_mutex);
    m_closures.push_back(closure);
  }

private:
  std::vector<std::function<void()>> m_closures;
  std::mutex m_mutex;
};

class MyObject
{
public:
  MyObject()
    : m_running(true)
    , m_threadEventLoop(&MyObject::eventLoop, this)
  {}

  ~MyObject()
  {
    // Stop the fake event loop
    m_running = false;
    m_threadEventLoop.join();

    // Block destruction until any in-flight method has released the lock
    m_mutex.lock();
  }

  int add(int a, int b) { return a + b; }

  void slowMethod()
  {
    std::lock_guard<std::mutex> guard(m_mutex);
    for (int i = 0; i < 10; i++) {
      this_thread::sleep_for(10ms);

      // Create a queued closure that will run on the event loop
      auto thread = threadObj();
      thread->queue([]() { std::cout << "Queued" << std::endl; });

      m_count = add(m_count, 1);
      std::cout << "Count: " << m_count << std::endl;
    }
  }

  void event()
  {
    // Load any closures from child thread objects
    std::lock_guard<std::mutex> guard(m_mutex);
    for (auto it = m_threadObjs.begin(); it != m_threadObjs.end(); it++) {
      (*it)->loadClosures(this);
    }
  }

  void eventLoop()
  {
    while (m_running) {
      event();
      this_thread::sleep_for(10ms);
    }
  }

  std::shared_ptr<ThreadObj> threadObj()
  {
    auto obj = std::make_shared<ThreadObj>();
    m_threadObjs.push_back(obj);
    return obj;
  }

private:
  int m_count = 0;
  std::mutex m_mutex;
  std::atomic_bool m_running;
  std::thread m_threadEventLoop;
  std::vector<std::shared_ptr<ThreadObj>> m_threadObjs;
};

void
deconstruct(MyObject* obj)
{
  this_thread::sleep_for(50ms);

  // Attempt to delete the object while in a method
  std::cout << "About to delete" << std::endl;
  // FIXME: we can't stop free though
  // free(obj);
  delete obj;
  std::cout << "Deleted obj" << std::endl;
}

int
main(void)
{
  // MyObject* obj = (MyObject*)malloc(sizeof(MyObject*));
  MyObject* obj = new MyObject();

  // Delete the object from one thread while another is still inside slowMethod
  std::thread threadSlow(&MyObject::slowMethod, obj);
  std::thread threadDeconstruct(deconstruct, obj);

  threadDeconstruct.join();
  threadSlow.join();

  return 0;
}
```

Then this outputs
-
We now have the ability to add a function pointer to the event loop and then work in the C++ context inside that. So let's close this issue for now.
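For reference, the cxx-qt plumbing differs, but in plain Qt terms the pattern looks roughly like this; the functor overload of QMetaObject::invokeMethod (Qt 5.10+) queues a callable onto the target object's event loop, and the names here are illustrative:

```cpp
#include <QCoreApplication>
#include <QMetaObject>
#include <QObject>
#include <thread>

// Queue a callable onto the target's event loop from a background thread;
// the callable then runs later, inside the C++ context of the Qt thread.
void backgroundWork(QObject* target)
{
    // ... slow work happens here, off the Qt thread ...

    // Functor overload of invokeMethod (Qt 5.10+): the lambda is delivered as
    // a queued event to the thread that owns "target".
    QMetaObject::invokeMethod(
      target,
      []() {
          // safe place to touch Qt state or call invokables
          QCoreApplication::quit();
      },
      Qt::QueuedConnection);
}

int main(int argc, char* argv[])
{
    QCoreApplication app(argc, argv);
    QObject target;

    std::thread worker(backgroundWork, &target);
    worker.join();

    // The queued lambda runs once the event loop starts
    return app.exec();
}
```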
-
Aims
In the current API we have an "UpdateRequester"; this specifically calls handle_update_request on the Rust side via a queued metamethod call (which means that the UpdateRequester is thread safe and can be moved across to other threads). Instead this could be made generic so that we can remove our special trait.
Solution
Instead have a QueuedEmitter which takes an invokable name (and potentially parameters?); the developer can then call arbitrary invokables from background threads and is not forced to use our API.
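A hypothetical sketch of what such a QueuedEmitter could look like on the C++ side; the class shape and method names are assumptions, not the actual API:

```cpp
#include <QMetaObject>
#include <QObject>

class QueuedEmitter
{
public:
    explicit QueuedEmitter(QObject* target)
        : m_target(target)
    {}

    // Queue a call to the named invokable; the invocation itself happens later
    // on the thread that owns the target, which is what makes this safe to
    // trigger from a Rust background thread.
    bool queue(const char* invokableName) const
    {
        return QMetaObject::invokeMethod(m_target, invokableName,
                                         Qt::QueuedConnection);
    }

private:
    // The caller must keep the target alive for as long as the emitter is
    // used, which is exactly the lifetime problem discussed in this thread.
    QObject* m_target;
};
```

A background thread could then copy the emitter and call queue("sync") without touching the object directly; parameters could potentially be forwarded with Q_ARG in the same invokeMethod call, which is where the "(and potentially parameters?)" question comes in.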
Problems
How do we know the given name is a valid invokable when the developer calls queued_emitter(method_name)? Do we check when creating the object on the C++ side and return a Result? Do we require you to use a function pointer on the Rust side and somehow check that the function is an invokable?
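For the C++-side check, QMetaObject introspection could back such a Result; a hedged sketch, with a made-up helper name:

```cpp
#include <QByteArray>
#include <QMetaObject>
#include <QObject>

// Returns true if the target exposes a method matching the given signature,
// e.g. "sync()". A queued_emitter(method_name) constructor could run a check
// like this up front and surface a failure as a Result on the Rust side.
// Note: indexOfMethod() matches signals and slots too, so a stricter check
// might also inspect QMetaMethod::methodType().
bool hasInvokable(const QObject* target, const char* signature)
{
    const QByteArray normalized = QMetaObject::normalizedSignature(signature);
    return target->metaObject()->indexOfMethod(normalized.constData()) >= 0;
}
```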
Closure route
With the closure route the closure would have to take the fat pointer (or CppObj + RustObj) as a parameter rather than moving self into it, as otherwise there will be problems with locking and holding onto references.
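A small sketch of that idea in C++ terms, mirroring the ThreadObj pattern above: closures are queued without ever capturing the object and only receive it as a parameter when they are drained under the lock. The names here are illustrative.

```cpp
#include <functional>
#include <mutex>
#include <utility>
#include <vector>

// Stand-in for the wrapped object the closure would eventually receive
// (e.g. the CppObj / RustObj pair); the real type comes from the bridge.
struct CppObj
{
};

class ClosureQueue
{
public:
    using Closure = std::function<void(CppObj&)>;

    // Called from the background thread: the closure is stored without ever
    // seeing the object, so nothing needs to stay locked while it waits.
    void queue(Closure closure)
    {
        std::lock_guard<std::mutex> guard(m_mutex);
        m_closures.push_back(std::move(closure));
    }

    // Called on the Qt thread while the object's own lock is already held:
    // only now is the object handed to each closure as a parameter.
    void drain(CppObj& obj)
    {
        std::vector<Closure> closures;
        {
            std::lock_guard<std::mutex> guard(m_mutex);
            std::swap(closures, m_closures);
        }
        for (const auto& closure : closures) {
            closure(obj);
        }
    }

private:
    std::vector<Closure> m_closures;
    std::mutex m_mutex;
};
```

Because the closure only borrows the object for the duration of the call, there is no captured reference that could outlive the lock or race with deletion.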