
pserver runs in parallel #9154

Merged
5 commits merged into PaddlePaddle:develop from pserver_parallel on Mar 20, 2018

Conversation

typhoonzero (Contributor) commented on Mar 16, 2018

Resolves #9103
Related: #8638

This PR improves send_op performance by 50% on the vgg16 benchmark (with the pserver using 6 cores; more cores yield better results).
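
The gist of the change, as a minimal standalone sketch: instead of executing the pserver's optimize blocks one after another, each block is dispatched to its own async task and the op waits on the resulting futures. RunBlock and RunOptimizeBlocksInParallel below are hypothetical stand-ins, not the actual listen_and_serv_op code.

#include <future>
#include <iostream>
#include <vector>

// Hypothetical stand-in for running one ProgramDesc block with the
// executor; the real op invokes the framework executor on that block.
void RunBlock(int blkid) {
  std::cout << "running block " << blkid << "\n";
}

// Sketch of the dispatch pattern: block 0 holds listen_and_serv
// itself, so blocks 1 .. num_blocks-1 are the per-parameter optimize
// blocks, and each one is launched as its own async task.
void RunOptimizeBlocksInParallel(int num_blocks) {
  std::vector<std::future<void>> fs;
  for (int blkid = 1; blkid < num_blocks; ++blkid) {
    fs.push_back(std::async(std::launch::async, RunBlock, blkid));
  }
  for (auto &f : fs) f.wait();
}

int main() { RunOptimizeBlocksInParallel(4); }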

typhoonzero changed the title from "pserver runs in parallel" to "[WIP] pserver runs in parallel" on Mar 16, 2018
typhoonzero changed the title from "[WIP] pserver runs in parallel" to "pserver runs in parallel" on Mar 20, 2018
// and this will still work.
std::vector<std::future<void>> fs;
// block0 contains only listen_and_serv op, start run from block1.
for (int blkid = 1; blkid < num_blocks - 1; ++blkid) {
Contributor
I noticed that L94 promises num_blocks >= 2, but it looks like if num_blocks == 2, this loop would not do anything?

Contributor Author

It's a problem, will fix.
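
To make the edge case concrete, here is a tiny self-contained check (hypothetical names; not necessarily the fix the PR adopts): with the original upper bound of num_blocks - 1, num_blocks == 2 dispatches nothing, so the single optimize block in block 1 would never be launched by this loop, whereas a bound of num_blocks covers it.

#include <cassert>

// Counts how many block ids the dispatch loop would touch.
// fixed_bound == false reproduces the loop under review
// (blkid < num_blocks - 1); true uses blkid < num_blocks.
int CountDispatchedBlocks(int num_blocks, bool fixed_bound) {
  int dispatched = 0;
  int upper = fixed_bound ? num_blocks : num_blocks - 1;
  for (int blkid = 1; blkid < upper; ++blkid) ++dispatched;
  return dispatched;
}

int main() {
  assert(CountDispatchedBlocks(2, false) == 0);  // bug: block 1 never runs
  assert(CountDispatchedBlocks(2, true) == 1);   // block 1 is dispatched
  return 0;
}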

@Yancey1989 (Contributor) left a comment

LGTM.

typhoonzero merged commit 5008020 into PaddlePaddle:develop on Mar 20, 2018
typhoonzero deleted the pserver_parallel branch on March 20, 2018 at 09:20