When initializing an application with a sufficiently large number of JRuby ScriptingContainers (e.g., 32) and a sufficiently large maximum Java heap (e.g., 40GB), we found that we were unable to get through the setup of all of the containers without encountering a "java.io.FileNotFoundException: (Too many open files)" exception when org.jruby.util.JRubyClassLoader.findClass() calls into org.jruby.util.CompoundJarURLStreamHandler$CompoundJarURLConnection.getInputStream() to try to open a jar while loading a class.
We were able to work around the problem by bumping up ulimit -n for our process to a large enough value. With this workaround in place, all of the ScriptingContainer initialization completes with no exceptions being thrown. After all of the initialization has completed, file descriptor usage eventually (after a Garbage Collection cycle) drops back down and stabilizes. However, we wanted to look deeper into the issue to see if there would be a way to keep file descriptor usage down during initialization so that we wouldn’t have to resort to bumping the ulimit.
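For anyone wanting to watch descriptor usage from inside the process, a HotSpot/OpenJDK JVM on a Unix-like platform exposes both the current open-descriptor count and the effective ulimit -n ceiling through com.sun.management.UnixOperatingSystemMXBean. A minimal sketch of that kind of check (my own illustration, not code from our application):

```java
import java.lang.management.ManagementFactory;
import java.lang.management.OperatingSystemMXBean;

import com.sun.management.UnixOperatingSystemMXBean;

public class FdUsage {
    // Prints how many file descriptors the process currently has open and the
    // maximum allowed (the ulimit -n ceiling), as seen by the JVM.
    public static void main(String[] args) {
        OperatingSystemMXBean os = ManagementFactory.getOperatingSystemMXBean();
        if (os instanceof UnixOperatingSystemMXBean) {
            UnixOperatingSystemMXBean unixOs = (UnixOperatingSystemMXBean) os;
            System.out.println("open fds: " + unixOs.getOpenFileDescriptorCount()
                    + " / max: " + unixOs.getMaxFileDescriptorCount());
        } else {
            System.out.println("open fd count not available on this platform/JVM");
        }
    }
}
```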
Ultimately, it appeared that the bulk of the file descriptors we saw allocated were coming from CompoundJarURLStreamHandler$CompoundJarURLConnection.getInputStream(). The baseInputStream object allocated on line 152, in the case where the openEntryWithCache call on line 156 succeeds, would fall out of scope when the function ended without being explicitly closed. The file descriptor associated with the stream would eventually be cleaned up during Garbage Collection. With a large enough heap size, however, Garbage Collection could be deferred until after all file descriptors available to the process had been exhausted.
I put up a PR for closing each stream explicitly before it falls out of scope. The PR is #4063. I'm not totally sure if this is the right fix but it appears to be working well in our testing so far.
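Purely as an illustration of the pattern (simplified names, not the actual org.jruby.util source and not necessarily how the PR implements it), the leaky shape and an explicit-close variant look roughly like this:

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.FileInputStream;
import java.io.IOException;
import java.io.InputStream;

// Illustrative stand-ins only; names and signatures are simplified.
public class StreamCloseSketch {

    // Hypothetical helper standing in for openEntryWithCache: it fully copies the
    // requested bytes into memory, so the caller's stream is not needed afterwards.
    static InputStream openEntryWithCache(InputStream base) throws IOException {
        ByteArrayOutputStream copy = new ByteArrayOutputStream();
        byte[] buf = new byte[8192];
        for (int n; (n = base.read(buf)) != -1; ) {
            copy.write(buf, 0, n);
        }
        return new ByteArrayInputStream(copy.toByteArray());
    }

    // Leaky shape: baseInputStream falls out of scope without being closed, so its
    // file descriptor is only released when the object is eventually collected.
    static InputStream openLeaky(String path) throws IOException {
        InputStream baseInputStream = new FileInputStream(path);
        return openEntryWithCache(baseInputStream);
    }

    // Explicit-close shape: the intermediate stream is closed before it goes out
    // of scope, so the descriptor is released immediately rather than at the next GC.
    static InputStream openWithExplicitClose(String path) throws IOException {
        InputStream baseInputStream = new FileInputStream(path);
        try {
            return openEntryWithCache(baseInputStream);
        } finally {
            baseInputStream.close();
        }
    }
}
```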
Environment
Provide at least:
JRuby version (jruby -v) and command line (flags, JRUBY_OPTS, etc)
Released version:
jruby 1.7.25 (1.9.3p551) 2016-04-13 867cb81 on Java HotSpot(TM) 64-Bit Server VM 1.8.0_45-b14 +jit [darwin-x86_64]
Version built from source, off the current HEAD of the jruby-1_7 branch, f83c032:
jruby 1.7.26-SNAPSHOT (1.9.3p551) 2016-08-10 f83c032 on Java HotSpot(TM) 64-Bit Server VM 1.8.0_45-b14 +jit [darwin-x86_64]
Operating system and platform (e.g., uname -a)
OSX El Capitan (10.11.6), CentOS 7, and others.
Steps to Reproduce
Using the reproducer at https://github.com/camlow325/jruby9k-benchmarks/tree/ae33418a140cb20cc32bc377b906c94a57f189f4, cd to “jar-file-desc-leak” and type:
This runs a Java application which creates ScriptingContainers, requires a few classes via a Ruby script in each container (openssl, json/ext, and psych), and measures open file descriptors after each step. Output from the script run is directed to file_desc_leak_output.txt.
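The reproducer itself is at the link above; the sketch below is only a rough approximation of the same idea, not its actual code. The use of LocalContextScope.SINGLETHREAD (one Ruby runtime per container) and of the UnixOperatingSystemMXBean counter shown earlier are my assumptions:

```java
import java.lang.management.ManagementFactory;

import com.sun.management.UnixOperatingSystemMXBean;
import org.jruby.embed.LocalContextScope;
import org.jruby.embed.ScriptingContainer;

public class ContainerFdProbe {
    public static void main(String[] args) {
        // Assumes a HotSpot JVM on a Unix-like platform, where the platform
        // MXBean implements UnixOperatingSystemMXBean.
        UnixOperatingSystemMXBean os =
                (UnixOperatingSystemMXBean) ManagementFactory.getOperatingSystemMXBean();

        for (int i = 0; i < 10; i++) {
            // Each iteration spins up a fresh container with its own runtime and
            // requires the same libraries described above.
            ScriptingContainer container = new ScriptingContainer(LocalContextScope.SINGLETHREAD);
            container.runScriptlet("require 'openssl'; require 'json/ext'; require 'psych'");

            System.out.println("Iteration #" + i + "; number of open fd: "
                    + os.getOpenFileDescriptorCount());
        }
    }
}
```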
Expected Behavior
File descriptor usage remains relatively flat after each ScriptingContainer is initialized and the initial set of require Ruby statements is executed.
This is the behavior I see with the fix in the linked PR. After each iteration, exactly the same number of file descriptors is open: 113.
Iteration #0; number of open fd: 113
Iteration #1; number of open fd: 113
Iteration #2; number of open fd: 113
...
Iteration #7; number of open fd: 113
Iteration #8; number of open fd: 113
Iteration #9; number of open fd: 113
Actual Behavior
File descriptor usage jumps to a peak of about 2200 across iterations. Open file descriptors may remain high even at the end of the run, at 1200 or so, although the count drops back to around the 113 seen in the expected case once Garbage Collection is triggered. Note that these counts are not deterministic given the relative unpredictability of when Garbage Collection may run.
Iteration #0; number of open fd: 988
Iteration #1; number of open fd: 1863
Iteration #2; number of open fd: 267
...
Iteration #7; number of open fd: 2216
Iteration #8; number of open fd: 426
Iteration #9; number of open fd: 1301
...
This is the behavior I see on both JRuby 1.7.25 and 1.7.26-SNAPSHOT (built from the current HEAD of the jruby-1_7 branch, f83c032).
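One way to check that these elevated counts are GC-deferred rather than permanently leaked descriptors is to request a collection and re-read the count. A rough sketch (again my own, not part of the reproducer), with the caveat that System.gc() is only a hint:

```java
import java.lang.management.ManagementFactory;

import com.sun.management.UnixOperatingSystemMXBean;

public class GcFdCheck {
    public static void main(String[] args) throws InterruptedException {
        // Assumes a HotSpot JVM on a Unix-like platform.
        UnixOperatingSystemMXBean os =
                (UnixOperatingSystemMXBean) ManagementFactory.getOperatingSystemMXBean();

        long before = os.getOpenFileDescriptorCount();
        // System.gc() is only a hint and finalization timing is not guaranteed,
        // so the result is indicative rather than deterministic.
        System.gc();
        System.runFinalization();
        Thread.sleep(1000);
        long after = os.getOpenFileDescriptorCount();

        System.out.println("open fds before GC: " + before + ", after GC: " + after);
    }
}
```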
@kares Do you know if there's a target release date for a JRuby 1.7.26 at this point? Is there anything we could do to help with getting a new release out? We're interested in using the fix for this issue and a few others that have landed recently in the jruby-1_7 branch as soon as may be feasible. Thanks again!