Fixing compiler warnings #585
Conversation
- final Timer.Context context = timer.time();
- try {
+ try (final Timer.Context context = timer.time()) {
This will automatically call context.stop() when the try block exits?
Yes, look at context.close().
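In Dropwizard Metrics, Timer.Context implements Closeable, and its close() simply delegates to stop(), which is why try-with-resources stops the timer automatically. A minimal self-contained sketch of the same mechanics (SimpleTimer here is a hand-rolled stand-in for illustration, not the real com.codahale.metrics.Timer):

```java
// Stand-in showing why try-with-resources stops the timer: the real
// Timer.Context implements Closeable and close() just calls stop().
// SimpleTimer is a sketch, not the metrics-core API.
class SimpleTimer {
    private long count = 0;

    final class Context implements AutoCloseable {
        private final long start = System.nanoTime();

        // Equivalent of Timer.Context.stop(): records one timed event
        // and returns the elapsed time in nanoseconds.
        long stop() {
            count++;
            return System.nanoTime() - start;
        }

        // close() delegates to stop(), mirroring metrics-core.
        @Override public void close() { stop(); }
    }

    Context time() { return new Context(); }
    long getCount() { return count; }
}
```

So the rewritten try-with-resources form records the elapsed time when the block exits, including on exception paths, which is exactly what the manual try/finally was doing before.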
Actually, there's a subtle point here. We have all this entangled metrics code, and we test none of it. If we advertise the metrics this code produces as Fedora functionality, they are just as much subject to our own testing as plain old CRUD.
It seems like there are two subtle points here:
- Do we like the pattern of entangled metrics code?
- In any case, we need to implement a testing pattern for it.
Yes, both questions are valid. Here are my responses:
- No, because I know of no clear mandate. (Maybe we do have a mandate and I just haven't heard, which is far from unlikely.) @eddies was keen on metrics. I now know of no other rationale.
- If we keep it, then the pattern will involve inheritable test frames that check both counting and timing results.
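One possible shape for such an inheritable test frame, sketched in plain Java with no test framework. Every name here (FakeTimer, AbstractMetricsTest, runOperation) is hypothetical, not from the Fedora codebase; the point is only the template-method pattern of checking both the counting and the timing side effects:

```java
import java.util.concurrent.atomic.AtomicLong;

// Hypothetical stand-in for a registry-backed timer: counts invocations
// and accumulates elapsed time.
class FakeTimer {
    private final AtomicLong count = new AtomicLong();
    private final AtomicLong elapsedNanos = new AtomicLong();

    final class Context implements AutoCloseable {
        private final long start = System.nanoTime();
        @Override public void close() {
            count.incrementAndGet();
            elapsedNanos.addAndGet(System.nanoTime() - start);
        }
    }

    Context time() { return new Context(); }
    long getCount() { return count.get(); }
    long getElapsedNanos() { return elapsedNanos.get(); }
}

// Inheritable frame: subclasses supply the metrics-wrapped operation,
// and the frame verifies both counting and timing results.
abstract class AbstractMetricsTest {
    protected final FakeTimer timer = new FakeTimer();

    // The operation whose metrics side effects are being verified.
    protected abstract void runOperation() throws Exception;

    void assertMetricsRecorded() throws Exception {
        final long before = timer.getCount();
        runOperation();
        if (timer.getCount() != before + 1) {
            throw new AssertionError("operation was not counted");
        }
        if (timer.getElapsedNanos() <= 0) {
            throw new AssertionError("no elapsed time was recorded");
        }
    }
}
```

A concrete test would then subclass the frame, implement runOperation() with the real CRUD call, and invoke assertMetricsRecorded().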
I think we keep it. @cbeer and I discussed the possibility of dropping metrics several months ago with the conclusion that it would stay.
However, if we do not see uptake in the 4.0 production release, we can re-evaluate that decision.
As a note, we have a Packer.io build of Graphite that makes this capability more accessible:
https://github.com/ksclarke/packer-graphite
Yes, I followed that conversation. I would like to see it eventuate in a clear set of use cases or even specifications for the kinds of metrics that a Fedora repository is expected to produce, or at least that Fedora Commons is expected to produce.
Resolved with: 762f049
https://www.pivotaltracker.com/story/show/81290082