Explore raw park/unpark for fiber handoff #2262

Open · headius opened this issue Dec 2, 2014 · 3 comments

headius (Member) commented Dec 2, 2014

Recently I opened a discussion on the concurrency-interest list, at first to investigate a possible bug in LockSupport.park, and later as a more general discussion about mechanisms for explicitly (and efficiently) handing off control from one thread to another: http://cs.oswego.edu/pipermail/concurrency-interest/2014-December/013209.html

The tl;dr is that most folks agree a direct LockSupport.park/unpark implementation would be the most efficient approach, though it requires more implementation work than simply leveraging an existing structure such as ArrayBlockingQueue. I have done some experiments with raw parking, but I was never happy with the result. We should re-examine.
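To illustrate the idea, here is a minimal, hypothetical sketch of a one-slot handoff built directly on LockSupport.park/unpark. This is not JRuby's implementation, just the bare mechanism under discussion: the consumer parks until the producer publishes a value and unparks it. The class and field names are invented for this example.

```java
import java.util.concurrent.locks.LockSupport;

// Minimal one-slot handoff between two threads using raw park/unpark.
// Hypothetical sketch only; a real implementation needs interrupt handling,
// reuse across many handoffs, and care around thread identity.
public class ParkHandoff {
    private volatile Thread waiter;   // consumer thread, if currently waiting
    private volatile Object value;    // item being handed off
    private volatile boolean ready;   // true once value is published

    // Producer side: publish a value and wake the consumer.
    public void transfer(Object v) {
        value = v;
        ready = true;                 // volatile write publishes `value`
        LockSupport.unpark(waiter);   // no-op if the consumer hasn't parked yet
    }

    // Consumer side: park until the producer publishes.
    public Object take() {
        waiter = Thread.currentThread();
        while (!ready) {              // loop guards against spurious wakeups
            LockSupport.park(this);
        }
        ready = false;
        return value;
    }

    public static void main(String[] args) throws InterruptedException {
        ParkHandoff h = new ParkHandoff();
        Thread consumer = new Thread(() -> System.out.println(h.take()));
        consumer.start();
        h.transfer("hello");          // prints "hello"
        consumer.join();
    }
}
```

Because unpark may arrive before park, the permit semantics of LockSupport make the race benign: a park that follows an unpark returns immediately, and the ready flag is rechecked in a loop.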

jhump commented Dec 2, 2014

I have played around with generators in Java and built something very similar to this. After reading the thread on the concurrency-interest list, I rewrote it to use a custom synchronizer that uses LockSupport directly to park/unpark threads.

Here is the code: https://code.google.com/p/bluegosling/source/browse/src/com/apriori/util/Generator.java

The Google Code project it lives in is mostly a dumping ground for experiments and toying with ideas (lots of collections, concurrency, reflection, etc.). It's all available under the Apache 2.0 license, but I'm happy for applicable parts of it to be used in JRuby and licensed under GPL/LGPL/EPL.

A warning: I've only done a shallow amount of testing, most of it exploratory (there is a sample harness that verifies the simple use case plus proper interruption/clean-up of generator threads on GC). I haven't benchmarked it yet, but anecdotally it seems much faster than my previous approach, which used two SynchronousQueues.
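For context, the slower two-SynchronousQueue approach mentioned above can be sketched roughly as follows. This is a hypothetical reconstruction, not the linked Generator code: one queue hands values from the generator thread to the consumer, the other hands the "resume" signal back, so every step pays for two full rendezvous.

```java
import java.util.concurrent.SynchronousQueue;

// Hypothetical sketch of a generator built on two SynchronousQueues.
// Each next() involves two blocking rendezvous, which is the main
// performance cost this approach suffers compared to raw park/unpark.
public class QueueGenerator {
    private final SynchronousQueue<Integer> out = new SynchronousQueue<>();
    private final SynchronousQueue<Boolean> resume = new SynchronousQueue<>();

    // Runs a toy generator body that yields 0..n-1 on a daemon thread.
    public void start(int n) {
        Thread t = new Thread(() -> {
            try {
                for (int i = 0; i < n; i++) {
                    out.put(i);       // rendezvous #1: hand value to consumer
                    resume.take();    // rendezvous #2: wait to be resumed
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });
        t.setDaemon(true);
        t.start();
    }

    // Consumer side: take the next value, then let the generator advance.
    public int next() throws InterruptedException {
        int v = out.take();
        resume.put(Boolean.TRUE);
        return v;
    }
}
```

(End-of-sequence and error propagation are omitted here; the point is only the double-rendezvous cost per yielded value.)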

I know this code is a bit raw and not yet sufficiently tested (or reviewed, since concurrency and the JMM can be non-trivial to get right). If you want help whipping it into shape, I might be able to lend some more time. Let me know.

jhump commented Dec 2, 2014

Note that the javadoc I wrote may be outdated: it describes performance woes from when the code used SynchronousQueue. At one point I was also creating a new thread for each generator, which was likewise painful.

The code in its current incarnation uses a cached thread pool, which effectively creates a new thread for each concurrent generator but re-uses a thread after the generator running on it completes.
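The cached-pool arrangement described here could look something like the sketch below (the class and method names are invented for illustration). With Executors.newCachedThreadPool, idle workers are re-used and time out after 60 seconds of inactivity, rather than a fresh thread being created and destroyed per generator.

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

// Hypothetical sketch: run each generator body on a cached thread pool so
// worker threads are re-used across generators instead of created per use.
public class GeneratorPool {
    private final ExecutorService pool = Executors.newCachedThreadPool(r -> {
        Thread t = new Thread(r, "generator-worker");
        t.setDaemon(true); // suspended generators shouldn't keep the JVM alive
        return t;
    });

    public Future<?> run(Runnable generatorBody) {
        return pool.submit(generatorBody);
    }
}
```

A cached pool still creates one thread per *concurrent* generator, so it bounds churn, not peak thread count.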

A big downside to needing additional threads for this arises when the main "consumer" thread never exhausts the generator (e.g. it stops calling next, leaving the generator in a suspended state). In that case, the generator's thread can be leaked.

The code I wrote works around that using a finalizer and weak references: the main Sequence object (likely analogous to a Fiber) can be GC'ed, at which point the corresponding suspended thread is interrupted and throws a SequenceAbandonedException to terminate itself. But this relies on GC, which is unpredictable, so extra suspended threads could survive long enough that heavy use of fibers might trivially cause an OOME (either by hitting the OS limit on thread count or by running out of address space for new thread stacks).
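A modern analogue of this finalizer/weak-reference cleanup could use java.lang.ref.Cleaner (Java 9+). The sketch below is hypothetical and not the linked Generator code: when the handle object becomes unreachable, the registered action interrupts the suspended generator thread, and a deterministic close() path is offered as well so cleanup need not wait for GC.

```java
import java.lang.ref.Cleaner;

// Hypothetical sketch: interrupt a generator's suspended thread when its
// handle becomes unreachable (via Cleaner) or is closed explicitly.
public class GeneratorHandle implements AutoCloseable {
    private static final Cleaner CLEANER = Cleaner.create();

    private final Cleaner.Cleanable cleanable;

    public GeneratorHandle(Thread generatorThread) {
        // The cleanup action must not capture `this`, or the handle would
        // never become unreachable; a method reference on the thread is safe.
        cleanable = CLEANER.register(this, generatorThread::interrupt);
    }

    @Override
    public void close() {
        cleanable.clean(); // deterministic path; GC-triggered cleanup is the fallback
    }
}
```

The same caveat applies as with finalizers: the GC-triggered path is still unpredictable, so explicit close() (or bounding the number of live generators) remains the only reliable guard against thread exhaustion.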

headius (Member, Author) commented Dec 2, 2014

@jhump Thank you for this! I've worked through similar GC issues in our implementation of fibers, so I can sympathize.

This code should be reasonably easy to adapt for our fiber implementation, once I figure out how best to implement our other types of thread interruption atop it.
