Reserve IR interpreter temp vars from thread-local pool #1794

Closed
headius opened this issue Jul 6, 2014 · 3 comments
Comments

@headius
Member

headius commented Jul 6, 2014

This is an experimental patch to avoid allocating Object[] space for temp vars in the IR interpreter on every entry into a method.

The logic here is simple: ThreadContext contains a large pre-allocated Object[] array, and instead of allocating a new array, Interpreter requests a chunk of that. The proper starting offset is propagated along with the array reference.
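Roughly, the scheme looks like the following sketch. The names here (TempVarPool, reserve, release) are hypothetical and not the actual ThreadContext/Interpreter code in the gist; they just illustrate the shape of the idea.

```java
// A minimal sketch of the pooling idea, with made-up names; the real patch
// puts the array on JRuby's ThreadContext and threads the offset through the
// interpreter, but the mechanism is the same.
public class TempVarPool {
    private static final int POOL_SIZE = 4096;            // assumed capacity

    // One large pre-allocated array per thread; interpreted calls borrow
    // windows of it instead of allocating a fresh Object[] on every entry.
    public static final ThreadLocal<TempVarPool> POOL =
            ThreadLocal.withInitial(TempVarPool::new);

    private final Object[] pool = new Object[POOL_SIZE];
    private int top = 0;                                   // next free slot

    // Reserve `count` temp-var slots; returns the base offset into the array.
    public int reserve(int count) {
        if (top + count > pool.length) {
            throw new IllegalStateException("temp var pool exhausted");
        }
        int base = top;
        top += count;
        return base;
    }

    // Pop the most recently reserved chunk on method exit (stack discipline).
    public void release(int base) {
        top = base;
    }

    public Object[] array() {
        return pool;
    }

    // Hypothetical example of how an interpreter entry would use the pool:
    // temp var i for this frame lives at array()[base + i].
    static Object interpretWithTemps(int numTemps) {
        TempVarPool p = POOL.get();
        int base = p.reserve(numTemps);
        try {
            Object[] temps = p.array();
            temps[base] = "some value";   // write temp var 0
            return temps[base];           // ... run instructions against base + index ...
        } finally {
            p.release(base);              // unwind even if interpretation throws
        }
    }
}
```

The try/finally stack discipline is what lets nested interpreted calls on the same thread share one array without stepping on each other.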

This does not immediately seem to help interpreter performance; however, I believe this will become important as we remove other major sources of allocation, like framing and scopes.

Patch is here: https://gist.github.com/headius/fad70bf08a1ebc96dc68

I'm playing with some other modifications to get IR interpreter performance a bit higher.

cc @subbuss @enebo

@subbuss
Contributor

subbuss commented Jul 7, 2014

Yes, I remember playing with this a while back and noticing no perf changes. I think we've had independent experiments dealing with the tmp array, arg unboxing, etc., but nothing that combined all of them in one place -- mostly because the interp starts getting un-DRY-ed.

@enebo
Member

enebo commented Jul 7, 2014

I think the main issue in all of these experiments is that it takes many of them together to eliminate enough allocation to see the effect of any individual one. In that sense, so long as we can do these really cleanly, we should just start chipping away. Unboxing all arity paths through calls is another obvious micro-opt. I have also toyed with using statistics to apply partial evaluators to mini-methods that use hardly any instructions.
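For the arity point, the general shape -- sketched here with hypothetical names, not JRuby's actual call-site classes -- is to provide fixed-arity overloads so the common low-arity calls never allocate an argument array:

```java
// Hypothetical call-site shape: fixed-arity overloads for the common cases,
// with the Object[] "boxed" path reserved for higher or variable arities.
public interface ArityCallSite {
    Object call(Object self);                            // 0 args, no array allocated
    Object call(Object self, Object arg0);               // 1 arg
    Object call(Object self, Object arg0, Object arg1);  // 2 args
    Object call(Object self, Object... args);            // fallback: allocates an array
}
```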

With that said, I have been resisting these micro-opts. My current thesis on JRuby performance is that hot code JITs and non-hot code should start interp'ing quickly. I may be idealistic, but I think this interp work will have less payoff than concentrating on JIT compiler optimizations. I would also like cleaner interp code (well, cleaner than right now).

@enebo enebo added this to the Invalid or Duplicate milestone Jan 15, 2015
@enebo
Member

enebo commented Jan 15, 2015

We know we can do this and may at some point, but I am closing out things to reduce what we need to pay attention to for the release.

@enebo enebo closed this as completed Jan 15, 2015