Fix lifo queue backend #3172
Conversation
Use deque so that schedule_last can actually schedule threads last in the queue.
Both spellings are acceptable, but "queuing" is more common and is the one used for the command line arguments.
```diff
@@ -38,6 +38,9 @@ template class HPX_EXPORT hpx::threads::detail::scheduled_thread_pool<
 template class HPX_EXPORT hpx::threads::detail::scheduled_thread_pool<
     hpx::threads::policies::local_priority_queue_scheduler<hpx::compat::mutex,
         hpx::threads::policies::lockfree_abp_fifo>>;
+template class HPX_EXPORT hpx::threads::detail::scheduled_thread_pool<
+    hpx::threads::policies::local_priority_queue_scheduler<hpx::compat::mutex,
+        hpx::threads::policies::lockfree_abp_lifo>>;
 #endif
```
If you add this specialization then you probably should add a command line option for it as well.
Yes, you're right, will add that.
OTOH, instead of adding more schedulers to HPX core, we should consider creating a better mechanism (using the thread-pools) allowing for all schedulers (except the default one) to live outside of HPX core. What do you think?
Not thought about it too much, but yes, that would probably be a good thing. How far outside HPX core are you thinking? This probably goes hand in hand with having a nicer way to create thread pools in general.
Is the underlying assumption for this PR that …
No, not at all. It's simply that the scheduling loop relies on …
LGTM, thanks!
Note that #2705 also mentions scheduler cleanup. Some schedulers should be removed and we should consider replacing the default scheduler.
To follow up on the performance: from what I can tell there is no practical difference between using a deque, stack, or queue (tested with lots of small tasks). I'm also taking the liberty of quoting @biddisco from IRC, saying there is no difference for his particular application.
Just a side remark: I don't really trust the ABP schedulers (perhaps without reason), so we may not want to use one of those as the default scheduler.
This addresses the issue in #3166.
The issue in this case was that when `shutdown_all` waits for all threads to finish it calls `suspend(pending)`, which eventually calls `schedule_last` for the `shutdown_all` thread. For the lifo queue backend `schedule_last` is the same as `schedule`, so `shutdown_all` will be scheduled as the next thread again, leaving no chance for the remaining threads to run. I suspect we haven't seen this with more than 1 thread because then other OS threads are able to steal the threads, and/or most tests and examples wait for all futures before calling `hpx::finalize`.

Proposed Changes

- Use a deque so that `schedule_last` can actually schedule a thread last in the queue.

Note: from my preliminary tests deque is neither slower nor faster than stack or queue, but I need to check with more examples to see whether this holds more generally.