- A thread object can be created with std::thread taking a functor/lambda
struct a_task {
    void operator()() const;
};
std::thread th1{ a_task() };                 //uniform initialization syntax
std::thread th2( ( a_task() ) );             //using a functor
std::thread th3( []() { do_some_task(); } ); //lambda
- note: std::thread th1( a_task() ); creates a function declaration [most vexing parse]: a function th1 returning std::thread, taking a parameter which is a pointer to a function returning a_task
- Once the thread is created one can decide to join() or detach() it
- If the thread is neither joined nor detached before the std::thread object is destroyed, the std::thread dtor calls std::terminate(), thus the program terminates
- Make sure that all the objects the thread uses stay valid for the lifetime of the thread, otherwise UB
void foo() {
    auto val = 10;
    std::thread t1{ [&val]() {
        for (auto j = 0; j < 100000; ++j)
            do_something(val);
    }};
    t1.detach();
} //t1 may still be running and using val, which is destroyed when foo() returns
- join() can be used to wait till the execution of the thread finishes
- join() can be called only once per thread
- The act of calling join() also cleans up any storage associated with the thread, so the std::thread object is no longer associated with the now-finished thread; it isn't associated with any thread
- joinable() can be used to see whether a thread can be joined or not (see the sketch below)
- A detached thread runs in the background; such threads are often called daemon threads
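A minimal sketch of the join()/joinable() usage above (do_work is a hypothetical worker function, not from the notes):
#include <thread>

void do_work() {} //hypothetical worker function

void run_and_wait() {
    std::thread t{ do_work };
    //... do other things on the current thread ...
    if (t.joinable()) //true while t still owns a (running or finished) thread
        t.join();     //wait for completion; t is no longer joinable afterwards
}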
- Arguments for the thread function are passed as additional arguments to the std::thread constructor
- All arguments are copied into the thread's internal storage
void foo(const std::string& str);
void hoo() {
    auto chr = "some string...";
    std::thread t1(foo, chr); //bad: only the char pointer is copied
} //hoo() can exit before chr is converted to std::string on the new thread, leaving a dangling pointer
//fix: create the std::string from chr before passing it to the new thread
std::thread t1(foo, std::string{ chr });
- If the values need to be passed by reference then use std::ref (sketch below)
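A minimal sketch of passing by reference with std::ref (the update function and counter variable are made up for illustration):
#include <thread>
#include <functional>

void update(int& counter) { ++counter; } //hypothetical function taking a reference

void run() {
    int counter = 0;
    std::thread t{ update, std::ref(counter) }; //without std::ref this does not compile
    t.join();
    //counter is now 1
}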
- A thread can be used to call class member functions by passing a pointer to the member function and the object
class cls {
public:
    void foo();
};
cls obj{};
std::thread t1{ &cls::foo, &obj };
- std::thread is not a copyable class, like std::unique_ptr
- Ownership can be transferred between std::thread objects using std::move (sketch below)
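A minimal sketch of transferring ownership (reusing the hypothetical do_work worker from the earlier sketch):
#include <thread>

void do_work() {} //hypothetical worker

int main() {
    std::thread t1{ do_work };
    std::thread t2 = std::move(t1); //t2 now owns the thread, t1 no longer refers to any thread
    t2.join();
}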
- std::thread::hardware_concurrency() gives the number of threads that can truly run in parallel
- This is only a hint, it can return 0 too (sketch below)
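A small sketch of using that hint; the fallback of 2 when it returns 0 is an arbitrary choice for this sketch:
#include <thread>

unsigned pick_thread_count() {
    const auto hw = std::thread::hardware_concurrency();
    return hw != 0 ? hw : 2; //fall back to 2 when the hint is unavailable
}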
- std::thread::id is the thread identifier, obtained by calling get_id() on a thread object, or std::this_thread::get_id() for the current thread
- They can be copied and compared
- Two thread::id values comparing equal means either
  - both refer to the same thread, or
  - both are not holding any thread
- The only guarantee given by the standard is that thread IDs that compare as equal should produce the same output, and those that are not equal should give different output.
std::thread::id master_thread;
void some_core_part() {
    if (std::this_thread::get_id() == master_thread)
        do_master_work();
    do_common_work();
}
- If the shared data is read-only, then there is no problem
- invariants: statements that are always true about a data structure
- race condition: when the outcome depends on the relative ordering of execution of operations on two or more threads
- data race: a race condition due to the concurrent modification of a single object
- a data race leads to UB
- Race conditions can be avoided by
  - wrapping the data with a protection mechanism
  - designing the data structure and its invariants in a lock-free manner
  - software transactional memory: the required data writes and reads are stored in a transactional log and then committed in one step; if the commit cannot proceed because the data structure has already been modified, the transaction is restarted
- Protection with a mutex: make all the code that accesses the data mutually exclusive
  - lock the mutex before accessing the data, unlock it once done with the data
  - while one thread holds the lock on the mutex, all other threads wait till the mutex gets unlocked
- It's not recommended to call the individual functions lock() and unlock() directly; use the RAII class std::lock_guard
std::mutex some_mutex;
std::vector<int> vec; //shared data protected by some_mutex
void add_to_collection(int val) {
    std::lock_guard<std::mutex> guard{ some_mutex };
    vec.push_back(val);
}
bool is_in_collection(int val) {
    std::lock_guard<std::mutex> guard{ some_mutex };
    return std::find(std::begin(vec), std::end(vec), val) != std::end(vec);
}
- Caveat: any code that has a pointer or reference to the shared data can still access/modify the protected data without locking the mutex, so don't pass such pointers/references outside the scope of the lock (sketch below)
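A minimal sketch of that caveat (the type names some_data and data_wrapper are made up for illustration): if protected data is handed to user code by reference, the caller can stash a pointer and later bypass the mutex.
#include <mutex>

struct some_data { int value = 0; };

class data_wrapper {
    some_data data;
    std::mutex m;
public:
    template <typename Func>
    void process(Func f) {
        std::lock_guard<std::mutex> lk{ m };
        f(data); //user-supplied code receives a reference to the protected data
    }
};

some_data* leaked = nullptr;

void bad_caller(data_wrapper& dw) {
    dw.process([](some_data& d) { leaked = &d; }); //stash a pointer while the mutex is held
    leaked->value = 42;                            //later modification without holding the mutex: data race
}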
- deadlock: threads cannot proceed as each is waiting for the other to release its mutex
- Always lock two mutexes in the same order to avoid deadlock [this may not always be possible]
- std::lock can lock two or more mutexes at once without the risk of deadlock
- std::adopt_lock indicates to the lock_guard that the mutexes are already locked and it should just adopt ownership of the existing lock on the mutex in the ctor
{
    std::lock(mu1, mu2); //calling thread locks both mutexes
    std::lock_guard<std::mutex> l1{ mu1, std::adopt_lock };
    std::lock_guard<std::mutex> l2{ mu2, std::adopt_lock };
    do_something(shared_data);
}
- std::lock provides an all-or-nothing approach: if locking any mutex throws an exception, all the mutexes locked so far are unlocked [either get all locks or nothing]
- std::lock helps avoid deadlock when acquiring 2 or more locks together; it cannot help when they are acquired separately
- avoid nested locks
- avoid calling user-supplied code while holding a lock
- acquire locks in a fixed order if you can't acquire them as a single operation using std::lock
- use a lock hierarchy: a mutex may only be locked if its hierarchy level is lower than the levels of the mutexes already held (see the sketch after this list)
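A minimal sketch of a lock-hierarchy mutex, assuming a hand-written hierarchical_mutex class (not part of the standard library); locking a mutex whose level is not lower than the current thread's level throws, turning a potential deadlock into an error:
#include <mutex>
#include <limits>
#include <stdexcept>

class hierarchical_mutex {
    std::mutex internal_mutex;
    const unsigned long hierarchy_value;        //level of this mutex
    unsigned long previous_hierarchy_value = 0; //level to restore on unlock
    static thread_local unsigned long this_thread_hierarchy_value;

    void check_for_hierarchy_violation() const {
        if (this_thread_hierarchy_value <= hierarchy_value)
            throw std::logic_error{ "mutex hierarchy violated" };
    }
public:
    explicit hierarchical_mutex(unsigned long value) : hierarchy_value{ value } {}
    void lock() {
        check_for_hierarchy_violation();
        internal_mutex.lock();
        previous_hierarchy_value = this_thread_hierarchy_value;
        this_thread_hierarchy_value = hierarchy_value;
    }
    void unlock() {
        this_thread_hierarchy_value = previous_hierarchy_value;
        internal_mutex.unlock();
    }
};
thread_local unsigned long hierarchical_mutex::this_thread_hierarchy_value =
    std::numeric_limits<unsigned long>::max();

hierarchical_mutex high_level{ 10000 };
hierarchical_mutex low_level{ 5000 };
//locking high_level then low_level is fine; the reverse order throws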
- std::unique_lock is more flexible than std::lock_guard as it doesn't always own the mutex
  - with std::adopt_lock the lock object manages the lock on the mutex
    - It assumes that the calling thread already owns the lock
    - The wrapper adopts ownership of the mutex and releases it when it goes out of scope
  - with std::defer_lock the mutex remains unlocked on construction
    - It assumes that the calling thread is going to lock it later
    - The wrapper releases the lock (if it holds one) when it goes out of scope
    - The lock can be acquired later by calling lock() on the std::unique_lock obj
  - slower than std::lock_guard, needs more space too (it has to track whether it owns the lock)
std::lock(m1, m2); //calling thread locks both mutexes
std::lock_guard<std::mutex> lock1(m1, std::adopt_lock);
std::lock_guard<std::mutex> lock2(m2, std::adopt_lock);
// access shared data protected by m1 and m2

std::unique_lock<std::mutex> lock1(m1, std::defer_lock);
std::unique_lock<std::mutex> lock2(m2, std::defer_lock);
std::lock(lock1, lock2);
// access shared data protected by m1 and m2
- std::lock_guard with the std::adopt_lock strategy assumes the mutex is already acquired
- std::unique_lock with the std::defer_lock strategy assumes the mutex is not acquired on construction, but is going to be locked explicitly later
- The compiler will catch the error if you forget to write one of the unique_lock statements
- If you forget one of the lock_guard statements, the compiler will not show any error, but the mutex is never released and there will be a deadlock
- Lock with the appropriate granularity
  - fine-grained [only a small amount of data is protected by the lock]
  - coarse-grained [a large amount of data is protected by the lock]
The idea is not to block the other threads with unnecessary time-consuming work inside the lock, which may reduce the improvements gained by multithreading
Here std::unique_lock can be really handy
void get_process_data() {
    std::unique_lock<std::mutex> lk{ mu };
    auto data = get_next_data(); //getting the next chunk needs to be thread safe
    lk.unlock();
    //each thread works on a different chunk of data,
    //so process() can run on all threads simultaneously
    auto result = process(data);
    lk.lock();
    write_result(result); //writing the result needs to be synchronized, so lock() again
}
- In general a lock should be held for as short a period of time as possible
- Lazy initialization is common in single-threaded code
std::shared_ptr<some_resource> res_ptr;
void foo() {
    if (!res_ptr) res_ptr.reset(new some_resource{});
    res_ptr->do_something();
}
- The double-checked locking pattern is bad
std::mutex res_mutex;
void ub_code() {
    if (!res_ptr) {                                          //read outside the lock
        std::lock_guard<std::mutex> lk{ res_mutex };
        if (!res_ptr) res_ptr.reset( new some_resource{} );  //write
    }
    res_ptr->do_something();                                 //read
}
- Here the write (inside the lock) is not synchronized with the read (the unlocked first check)
- There is no guarantee that at do_something() the pointer refers to a fully initialized object
- The fix is std::once_flag and std::call_once
std::once_flag of;
void foo() {
    std::call_once(of, [](){
        res_ptr.reset( new some_resource{} );
    });
    res_ptr->do_something();
}
- Initialization of static local variables is thread-safe in C++11 and later, so a local static can also be used for lazy initialization (sketch below)
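A minimal sketch of that, reusing the some_resource type from above in a hypothetical accessor:
some_resource& get_resource() {
    static some_resource instance; //initialized exactly once, even with concurrent callers
    return instance;
}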
- UB -> if a thread tries to lock a std::mutex which it has already locked
- With std::recursive_mutex the same thread can take multiple locks on a single instance (sketch below)
- Another thread can acquire it only after all locks are released by the owning thread
- If thread 1 calls lock() 3 times, another thread can acquire the mutex only after thread 1 calls unlock() 3 times
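A minimal sketch of that recursive-locking behaviour, assuming std::recursive_mutex (the notes describe the behaviour without naming the type):
#include <mutex>

std::recursive_mutex rm;

void low_level() {
    std::lock_guard<std::recursive_mutex> lk{ rm }; //same thread locks again: OK
    //... touch shared state ...
}

void high_level() {
    std::lock_guard<std::recursive_mutex> lk{ rm }; //first lock
    low_level();                                    //second lock from the same thread
} //both guards unlock; only now can another thread acquire rm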