it"calls the passed block only once for each item, even with multiple threads"domutex=Mutex.newyielded=[]# record all the numbers yielded to the block, to make sure each is yielded only oncelist=Hamster.iterate(0)do |n|
sleep(rand / 500)# give another thread a chance to get inmutex.synchronize{yielded << n}sleep(rand / 500)n + 1endleft,right=list.partition(&:odd?)10.times.collectdo |i|
Thread.newdo# half of the threads will consume the "left" lazy list, while half consume# the "right" lazy list# make sure that only one thread will run the above "iterate" block at a# time, regardlessifi % 2 == 0left.take(100).sum.should == 10000elseright.take(100).sum.should == 9900endendend.each(&:join)end
```ruby
class Partitioned < Realizable
  def initialize(partitioner, buffer, mutex)
    super()
    @partitioner, @buffer, @mutex = partitioner, buffer, mutex
  end

  def realize
    @mutex.synchronize do
      return if @head != Undefined # another thread got ahead of us
      while true
        if !@buffer.empty?
          @head = @buffer.shift
          @tail = Partitioned.new(@partitioner, @buffer, @mutex)
          # don't hold onto references
          # tail will keep references alive until end of list is reached
          @partitioner, @buffer, @mutex = nil, nil, nil
          return
        elsif @partitioner.done?
          @head, @size, @tail = nil, 0, self
          @partitioner, @buffer, @mutex = nil, nil, nil # allow them to be GC'd
          return
        else
          @partitioner.next_item
        end
      end
    end
  end
end
```
```ruby
# Original branch of Partitioned#realize: @mutex is cleared along with the other references.
if !@buffer.empty?
  @head = @buffer.shift
  @tail = Partitioned.new(@partitioner, @buffer, @mutex)
  # don't hold onto references
  # tail will keep references alive until end of list is reached
  @partitioner, @buffer, @mutex = nil, nil, nil
  return
```
```ruby
# Modified branch: @mutex is no longer cleared; only @partitioner and @buffer are.
if !@buffer.empty?
  @head = @buffer.shift
  @tail = Partitioned.new(@partitioner, @buffer, @mutex)
  # don't hold onto references
  # tail will keep references alive until end of list is reached
  @partitioner, @buffer = nil, nil
  return
```
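A hedged illustration of why that one-line difference can matter: the sketch below is not Hamster's code (RacyNode and its fields are made up), but it has the same shape as Partitioned#realize, with a node that nils out the instance variable holding its own lock. A thread that reads @mutex just after another thread has cleared it ends up calling synchronize on nil. Whether this is exactly the failure seen on JRuby is an assumption; the sketch only shows the kind of interleaving that the @mutex = nil reset makes possible.

```ruby
# Illustration only -- NOT Hamster's code. A lazily-realized node that clears
# the ivar holding its own lock once it has been realized.
class RacyNode
  UNDEFINED = Object.new

  def initialize
    @head  = UNDEFINED
    @mutex = Mutex.new
  end

  def realize
    # @mutex is read here; if another thread has already realized the node and
    # set @mutex to nil, this raises NoMethodError
    # ("undefined method `synchronize' for nil").
    @mutex.synchronize do
      return @head unless @head.equal?(UNDEFINED) # another thread got ahead of us
      @head  = :value
      @mutex = nil # drop the reference, as the original realize does
      @head
    end
  end
end

races = 0
10_000.times do
  node    = RacyNode.new
  threads = 4.times.map { Thread.new { node.realize } }
  threads.each do |t|
    begin
      t.join # an exception raised inside a thread is re-raised here
    rescue NoMethodError
      races += 1
    end
  end
end
# How many races (if any) are observed depends on the Ruby implementation and scheduling.
puts "races observed: #{races}"
```

Keeping @mutex alive, as in the modified snippet above, removes that window entirely.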
I believe this is a threading bug in the hamster gem, which was uncovered when running the specs on JRuby, simply because JRuby allows greater concurrency.
+1 ... especially since, from a quick look, it's obvious that the very same code does set @mutex = nil, so it's very likely a bug in the hamster code itself (not accounting for all concurrent cases). Also, it's not really necessary to reset @mutex (or any of the others; I would keep them immutable): it does not help the GC much, if at all.
@kares, setting @mutex to nil definitely doesn't help the GC if you drop all references to the lazy list itself. It definitely does help the GC if you retain references to the list. If you have many thousands or millions of such lists "live" in memory at the same time, it helps the GC a great deal. It just needs to be done in a thread-safe way -- making sure that no thread will attempt to use it after it has been dropped.
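If the GC benefit of dropping the mutex is worth keeping, one possible shape (a sketch under assumptions, not Hamster's code; the class and field names are made up) is to capture the mutex into a local before synchronizing and clear @mutex only as the last step inside the lock, so a late-arriving thread either sees a live mutex or an already-realized node. Whether the plain write to @head is safely visible to a thread that skips the lock is an assumption that would need checking against JRuby's memory model.

```ruby
class GcFriendlyNode
  UNDEFINED = Object.new

  def initialize
    @head  = UNDEFINED
    @mutex = Mutex.new
  end

  def realize
    mutex = @mutex             # capture once; never call #synchronize through the ivar twice
    return @head if mutex.nil? # @mutex is cleared only after @head has been written
    mutex.synchronize do
      return @head unless @head.equal?(UNDEFINED) # another thread got here first
      @head  = :value
      @mutex = nil             # cleared last, inside the lock
      @head
    end
  end
end
```

The simpler alternative, which is what the modified snippet near the top does, is to keep @mutex for the lifetime of the node and clear only @partitioner and @buffer.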
I'm trying to add JRuby support to the Hamster gem, and one spec doesn't pass on JRuby.
Steps to reproduce:
Spec code: the first code block at the top of this page.
The error happens on one of two lines in that spec.
Code where the error happens: the Partitioned class shown above.
The error happens on one line of Partitioned#realize.
A couple of words about what happens here: Partitioned lazily realizes one half of a partitioned lazy list. realize pulls items from the shared partitioner and buffer under @mutex and, once the node is realized, clears @partitioner, @buffer, and @mutex so they can be garbage-collected.
The error doesn't happen on MRI or Rubinius.
It is possible to count the errors by adding a rescue to the end of Partitioned#realize (sketched below).
On my laptop the error count is 4 on average.
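The rescue snippet itself is not preserved on this page, so the block below is only a guess at the shape of such a counter (FakeNode, ERRORS, and the simulated failure are all made up): a method-level rescue that records the exception and re-raises it so the spec still sees the failure.

```ruby
class FakeNode
  ERRORS = [] # one entry per failure, inspected after the run

  def realize
    raise "boom" if rand < 0.5 # stand-in for the real realization logic
    :ok
  rescue => e
    ERRORS << e # count it...
    raise       # ...but still let the caller/spec see the failure
  end
end

10.times { FakeNode.new.realize rescue nil }
puts "errors counted: #{FakeNode::ERRORS.size}"
```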
If we remove @mutex = nil from Partitioned#realize (the second of the two small snippets above), the error stops happening.
Possibly it's a bug in the garbage collector.
I don't understand what exactly triggers this error. I tried several times to build an independent example that reproduces the bug, without success. If anyone has an idea how to do that, I will gladly implement it.
Reproduced on both JRuby versions, 1.7.19 and 9000.
Thank you.