Truffle symbol refactor #3029
Conversation
This reverts commit bbdd0ab.
MRI has a global `Symbol#to_proc` cache (for 67 symbols). Are you sure? I am thinking caching the generated Proc per Symbol might make sense too, if it's used in multiple locations, so only one Proc would need to be created for a given Symbol (but that would need some care in multi-threaded code).
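For illustration, here is a minimal sketch of the kind of per-Symbol Proc cache being suggested. This is not TruffleRuby's or MRI's actual implementation; the cache and lock names are invented for the example, with a `Mutex` standing in for the multi-threading care mentioned above:

```ruby
# Illustrative monkey-patch: cache one Proc per Symbol so repeated uses of
# &:sym reuse the same object. Not the real implementation.
class Symbol
  TO_PROC_CACHE = {}          # hypothetical global cache, Symbol => Proc
  TO_PROC_LOCK  = Mutex.new   # guards the cache against concurrent writes

  def to_proc
    TO_PROC_LOCK.synchronize do
      TO_PROC_CACHE[self] ||= lambda { |receiver, *args| receiver.__send__(self, *args) }
    end
  end
end

# Both calls now return the identical Proc object:
p :upcase.to_proc.equal?(:upcase.to_proc)  # => true
```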
@eregon you are always useful for explaining the historical context of all of this! I deliberately used something that could be removed to make the difference as extreme as possible - like a PE test. If you use a user-written empty method instead of … I didn't put the proc inside the symbol object, as I'm not sure …
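A sketch of the kind of microbenchmark being described: the called method is user-written and empty, so once the cached Proc is inlined, a partial evaluator can remove the loop body entirely. The names here are invented; the exact method the comment used was elided:

```ruby
# Hypothetical PE-style benchmark: everything inside the loop is removable.
class Widget
  def nothing; end  # user-written empty method
end

widgets = Array.new(1_000) { Widget.new }

start = Time.now
10_000.times { widgets.each(&:nothing) }
puts "#{Time.now - start}s"
```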
@chrisseaton Nice IGV graph indeed!
Looks good, does it have any impact on the benchmarks?
Big refactoring of our symbol implementation, removing the `RubySymbol` class, removing some potential problems with shared mutable `ByteList`, moving some stuff into Ruby, optimising some methods, adding symbol GC and completing specs. This leaves us with the whole of `Symbol` implemented in just 400 lines of Java, including license blocks.

I added caching for `Symbol#to_proc`. This is an extremely common pattern, so it's surprising nobody else has noticed this is hot and done the same optimisation as us. I hope it's correct - it passes all specs and makes sense to me, but please have a think about it as well.

There's clearly something badly wrong with Rubinius there - but even though we're using their code for a lot of `Symbol`, it doesn't seem to matter for us.

Also note that our warmup is so much better these days. Warmed up within a second or so, and first iteration time is already better than anyone else's warmed-up time.
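For context, the pattern the `Symbol#to_proc` cache targets is the ubiquitous `&:sym` block shorthand; without a cache, every such call conceptually builds a fresh Proc from the Symbol:

```ruby
words = %w[foo bar baz]

# &:upcase invokes Symbol#to_proc under the hood...
words.map(&:upcase)          # => ["FOO", "BAR", "BAZ"]

# ...and is equivalent to writing the block out by hand:
words.map { |w| w.upcase }   # => ["FOO", "BAR", "BAZ"]
```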
@nirvdrum @eregon @pitr-ch please review. @thomaswue for information.