Memory Leak in HTTP/HTTPS? #5480
Comments
Please see: #5465
So the output you're seeing from running 0.11.2 translates as follows. There is another problem your tests brought up, though: I dropped the logging and ran the test every millisecond for
/cc @indutny @bnoordhuis
@trevnorris Ok, thanks for taking a look. I'll have to digest this a bit, though; it definitely throws a bit of a curveball my way. In my case I'm working on a thin monitoring agent that periodically reports to an API. Ideally I'd do this in the manner that has the least impact on the profile of the code being monitored. I guess that whereas in the 0.8 world I may have been triggering more GC activity, in the 0.10+ world I'm consuming more memory. In this less-frequent mark-sweep world, what is a good rule of thumb for observing memory issues over time? Seeing that the minimum heapUsed bounces off a fairly stable point every few GCs?
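One way to make that heuristic concrete (a sketch of my own, not code from this thread; `trackHeapFloor` and the window size are made up): sample `process.memoryUsage().heapUsed` on an interval, then watch the per-window minimum rather than the raw readings.

```javascript
'use strict';
// Illustrative helper (not from the thread): given a series of heapUsed
// readings, compute the minimum per fixed-size window. A floor that climbs
// window after window is the pattern to worry about; a floor that bounces
// off a stable point is ordinary mark-sweep behavior.
function trackHeapFloor(samples, windowSize) {
  var mins = [];
  for (var i = 0; i < samples.length; i += windowSize) {
    mins.push(Math.min.apply(Math, samples.slice(i, i + windowSize)));
  }
  return mins;
}
```

For example, `trackHeapFloor([5, 3, 4, 6, 2, 7, 9, 8, 10], 3)` gives `[3, 2, 8]`; a floor that wobbles is fine, a floor that only ever rises is suspect.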
@brycebaril the memory leaks you need to worry about are when a
@trevnorris Strange, I can't reproduce that.

@brycebaril I second what @trevnorris said and will add that part of the problem with your test case is that the C heap tends to get badly fragmented over time. (Plus, most malloc implementations are not very eager to return memory to the operating system even if they could; system calls are expensive.) When I run your server.js script on an x86_64 machine with --max_old_space_size=32, it maxes out around 170 MB, of which about 60 MB is "lost" (at least temporarily) due to fragmentation. I don't think there's much we can do about that because the bulk of the malloc/free calls originate inside openssl.

I'm going to tentatively close this as NOTABUG. If you have reason to believe otherwise, let me know.
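To put rough numbers on that (my own illustration, not from the issue): with `--max_old_space_size` capping the V8 heap, the gap between RSS and `heapTotal` approximates the C-heap share, fragmentation included.

```javascript
'use strict';
// Illustration only: RSS covers the V8 heap plus everything malloc'd in C
// land (openssl buffers, fragmentation, code, stacks). Subtracting heapTotal
// from rss approximates the non-V8 share described above.
function nonHeapBytes(mem) {
  return mem.rss - mem.heapTotal;
}
```

With the figures above (~170 MB RSS against a 32 MB V8 heap), `nonHeapBytes({ rss: 170e6, heapTotal: 32e6 })` is about 138e6 bytes living outside V8's control, which is where fragmentation can hide.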
@bnoordhuis Here's some more information:
Don't know exactly why, but when I remove
I think that's a small bug in libuv's darwin.c. It checks errno when task_info() returns something other than KERN_SUCCESS, but I'm reasonably sure that function (like other Mach functions) doesn't touch errno at all. IOW, you're probably seeing the wrong error.
What happens when you apply joyent/libuv@a1cb52a? If you manage to hit the assert I added, can you inspect the value of err in gdb? |
for posterity, this was on linux:
I was working with 0.10.5 making a number of simple HTTPS requests and I noticed that the memory went very high, very fast. This hadn't been a problem for me with 0.8.23, so after sorting through all my code and dependencies I wrote a simple set of test files to demonstrate.
I put them in this repo: https://github.com/brycebaril/node_leak_test
Essentially it has an HTTP/HTTPS server and a client for each; the clients POST to the server every 10 seconds and then log their `process.memoryUsage()`. There is also one that doesn't do any HTTP and just logs memory every 10 seconds for a baseline comparison.

I ran them for around 8 hours on these node versions:
Yeah, I was in the middle of collecting data when 0.11.2 and 0.10.6 were released 😄
You can see the raw CSV output and some graphs of the results in the repo linked above.
The gist is that 0.8.23 is pretty stable once it initially settles down, but 0.10.5 and above grow to significantly higher values and possibly never plateau. HTTPS is also much worse than HTTP.
After 8 hours, for the HTTPS clients, the RSS was at:
I didn't plot the server.js memory output, but it also got worse between 0.8.23 and the later versions. The CSV files with the data are in the repo.
I don't know if it is a leak, a GC issue, or something else, but it is definitely worse than it used to be. If there are flags I can try, or if I'm doing something wrong in the code that the newer versions of node exposed, please let me know.