Broadcast async #2546
Conversation
Overall looks good. I guess apart from the comments here, what's missing is an optimization in the symbol namespace to avoid an additional turnaround, right?
hpx::unregister_with_basename(name, this_site).get();

// propagate result
return f.get();
Can't we return something like this here:
return hpx::when_all(hpx::unregister_with_basename(name, this_site), std::move(f)).then(
[](auto futures)
{
return hpx::util::get<1>(futures);
});
Why? We don't care about the result of the unregister_with_basename
and nothing depends on it.
If we don't care, why do we explicitly wait for it to complete using .get()?
hpx/lcos/broadcast_async.hpp (outdated)

// find_from_basename is always a local operation (see assert above)
typedef typename std::decay<T>::type result_type;
return hpx::dataflow(
I think we should use the sync policy here.
hpx/lcos/broadcast_async.hpp
Outdated
hpx::future<bool> was_registered = | ||
hpx::register_with_basename(name, p.get_id(), this_site); | ||
|
||
return hpx::dataflow( |
I think we should use the sync policy here.
Why?
Ah, for the dataflow(), ok.
    std::forward<T>(t));
}

std::vector<hpx::future<void> > futures;
Shouldn't we use the already existing broadcast functionality here?
Unfortunately that does not quite fit the bill. It would amount to sending the std::map<std::uint32_t, std::vector<std::size_t> > to everybody, which is much more than needed... I was contemplating creating a special broadcast which splits off the map, but wanted to leave that to future optimizations.
I thought about that but unfortunately this is not possible. The purpose of the symbolic name to id mapping is to allow encoding the target locality without knowing it upfront. In order to do what you suggest we would have to explicitly encode the target locality into the name, which would defeat the purpose. One possibility would be to go the MPI route and introduce communicators (or similar) to be used for the collective operations.
I think having something along the lines of communicators is our best shot here. I would recommend designing those first and layering the desired collectives on top of them afterwards.
I was thinking about this exact thing as well - just couldn't come up with an appropriate name ;)
How far along is this? Is this likely to be done for the next release (max 3-4 months)?
Let's abandon this for now. We should get back to this after we have come up with a global strategy for handling 'communicators'.
This adds a new implementation/API of broadcast.