Conditional variable instead of sleep_for #23

Open

wants to merge 1 commit into master
Conversation

yvoinov commented Mar 24, 2018

I'm trying to avoid some overhead when the pool is idle or under load.

Please advise me: is this approach correct? Testing shows lower overall pool latency, but I'm in doubt - this should not become a serialization point for scaling.
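For context, the pattern the PR moves toward - blocking an idle worker on a condition variable instead of polling with sleep_for - looks roughly like the sketch below. This is a minimal, self-contained illustration with made-up names (SleepFreeWorker, m_cv, m_queue), not the library's actual Worker code:

#include <condition_variable>
#include <functional>
#include <iostream>
#include <mutex>
#include <queue>
#include <thread>

class SleepFreeWorker {
public:
    SleepFreeWorker() : m_running(true), m_thread([this] { run(); }) {}

    ~SleepFreeWorker()
    {
        {
            std::lock_guard<std::mutex> lock(m_mutex);
            m_running = false;
        }
        m_cv.notify_one();
        m_thread.join();
    }

    void post(std::function<void()> task)
    {
        {
            std::lock_guard<std::mutex> lock(m_mutex);
            m_queue.push(std::move(task));
        }
        // Wake the worker; without this it could keep sleeping while work is pending.
        m_cv.notify_one();
    }

private:
    void run()
    {
        std::unique_lock<std::mutex> lock(m_mutex);
        while (m_running) {
            // wait() releases the mutex while blocked; the predicate guards
            // against spurious wakeups and missed notifications.
            m_cv.wait(lock, [this] { return !m_queue.empty() || !m_running; });
            while (!m_queue.empty()) {
                auto task = std::move(m_queue.front());
                m_queue.pop();
                lock.unlock();   // run the task without holding the lock
                task();
                lock.lock();
            }
        }
    }

    bool m_running;
    std::queue<std::function<void()>> m_queue;
    std::mutex m_mutex;
    std::condition_variable m_cv;
    std::thread m_thread;
};

int main()
{
    SleepFreeWorker worker;
    worker.post([] { std::cout << "task ran without polling\n"; });
}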
codecov bot commented Mar 24, 2018

Codecov Report

Merging #23 into master will not change coverage.
The diff coverage is n/a.


@@          Coverage Diff           @@
##           master     #23   +/-   ##
======================================
  Coverage    94.2%   94.2%           
======================================
  Files           5       5           
  Lines         138     138           
======================================
  Hits          130     130           
  Misses          8       8

Powered by Codecov. Last update af95dd8...4be560f.

Review thread on this diff hunk:

template <typename Handler>
inline bool Worker<Task, Queue>::post(Handler&& handler)
{
+ m_conditional_lock.notify_one();
I think you should do the notification after you have added the job to the queue, otherwise the worker might try to pull the job too soon.
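For illustration, the suggested ordering would look roughly like this (a sketch only, reusing the identifiers from the diff; the surrounding Worker code is assumed):

template <typename Handler>
inline bool Worker<Task, Queue>::post(Handler&& handler)
{
    // Enqueue first, then wake the worker, so a woken worker always finds the job.
    bool ret = m_queue.push(std::forward<Handler>(handler));
    m_conditional_lock.notify_one();
    return ret;
}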

yvoinov (Author) replied:
Hmmm, in theory, maybe. In practice... hardly. Also, how do you imagine such an implementation? I see only one way - it requires an additional local variable, so more assembly and slower execution... No, not such a good idea.

KayEss commented Jun 13, 2020
If you're going to worry about the cost of that, it might be more interesting to look at the call to notify_one you have for every posted job. Compared to that cost, saving the bool result is essentially free.
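A rough way to put a number on that cost is a standalone microbenchmark along these lines (illustrative only, not part of the PR); note that it only times the cheap no-waiter path, and waking a blocked waiter is considerably more expensive because it usually involves a syscall:

#include <chrono>
#include <condition_variable>
#include <iostream>

int main()
{
    std::condition_variable cv;
    const int iterations = 1000000;

    const auto start = std::chrono::steady_clock::now();
    for (int i = 0; i < iterations; ++i)
        cv.notify_one();    // no waiter: measures the uncontended path only
    const auto stop = std::chrono::steady_clock::now();

    const auto ns = std::chrono::duration_cast<std::chrono::nanoseconds>(stop - start).count();
    std::cout << "notify_one with no waiter: ~" << ns / iterations << " ns/call\n";
}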

yvoinov (Author) replied:

Take a look at my own fork of this library. It was done a long time ago. :) This PR is years old. :)

Anyway, you can make your own fork, run a benchmark, and compare. :)

yvoinov (Author) commented Jun 13, 2020

I've just run some performance tests with the variant you offered:

template <typename Handler>
inline bool Worker<Task, Queue>::tryPost(Handler&& handler)
{
    {
        // Mark the worker as having pending work before enqueueing.
        std::lock_guard<std::mutex> lock(m_conditional_mutex);
        m_ready.store(true, std::memory_order_relaxed);
    }
    bool ret = m_queue.push(std::forward<Handler>(handler));
    // Wake the worker only after the job has been queued.
    m_conditional_lock.notify_one();
    return ret;
}

There is no visible performance difference from the original one, so I see no reason to make such a change. Don't forget: the compiler can reorder code, and it will not necessarily execute as written.
