
[ci fw-only Go/revel] Repair revel tests, make future failures less punishing #2625

Merged
merged 2 commits into TechEmpower:master on Mar 28, 2017

Conversation

michaelhixson
Contributor

A couple of function/property names changed in the revel framework, causing our test code to stop compiling. I updated our code to use the new names.

Also, because setup.sh performed compilation in the background, the failures went unnoticed by our toolset (specifically by framework_test.py), causing a long period of inactivity in the test suite. Removing the & fixes that problem, which will be useful if the revel framework ever changes in a way that breaks our test code again.

There are more details in the commit messages.

Recent changes to the revel framework broke our revel tests.  (Our code
seems to depend on whatever is in the master branch of revel's GitHub
repository at the moment we run the tests, so problems like this are
inevitable.)

Specifically, the changes that broke us are:

 - controller.RenderJson was renamed to controller.RenderJSON
 - controller.RenderArgs was renamed to controller.ViewArgs

revel/revel@6ac1603
revel/revel@0e454f2
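
In our controller code, the fix amounts to renames along these lines (an illustrative sketch; the actual call sites and arguments in our test code may differ):

```diff
-	return c.RenderJson(message)
+	return c.RenderJSON(message)

-	c.RenderArgs["world"] = world
+	c.ViewArgs["world"] = world
```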

The "revel run" command compiles the test code and launches the
application server.  If the test code has compilation errors, then the
command fails and does not launch the server.

If the command is run in the foreground, then a failure will be noticed
by our toolset and the test will terminate in a timely fashion.  If the
command is run in the background, however, then a failure will not be
noticed and our toolset will enter a loop, waiting for the application
server port to be bound.  In the failure case, the port is never bound
and the toolset ends up waiting for 105 minutes before finally giving
up.
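
The difference can be sketched with a tiny shell example (the `run_build` function is a stand-in for a failing "revel run"; nothing here is the toolset's actual code):

```shell
#!/bin/bash
# Stand-in for "revel run" exiting with a compilation error.
run_build() { return 1; }

# Background: the script moves on immediately; the job's nonzero
# exit status is never observed unless we explicitly wait for it.
run_build &
echo "background: script continued without noticing the failure"

# Foreground: the nonzero exit status is visible right away.
if ! run_build; then
  echo "foreground: failure detected immediately"
fi
```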

In the success case, whether the application runs in the foreground or
background makes no difference.  It's ok for the setup script to never
terminate as long as it successfully launches the application server.
The toolset will eventually notice that it binds to the expected port
and will then run the appropriate benchmarks.

The 105 minute timeout seems extremely large, but the reason it's that
large is that there is at least one framework (wt) that can take on the
order of one hour to compile.
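
A minimal sketch of that wait loop (the port and deadline values are illustrative; the real loop and its 105-minute limit live in the toolset, not here):

```shell
#!/bin/bash
# Poll until something binds the expected port, giving up after a
# deadline. With no server ever starting, this is the 105-minute
# hang in miniature.
PORT=49777        # illustrative; the real benchmark ports vary per test
DEADLINE=2        # seconds; the real toolset waits up to 105 minutes
start=$(date +%s)
timed_out=""
while ! (exec 3<>"/dev/tcp/127.0.0.1/$PORT") 2>/dev/null; do
  if [ $(( $(date +%s) - start )) -ge "$DEADLINE" ]; then
    echo "gave up waiting for port $PORT"
    timed_out=1
    break
  fi
  sleep 1
done
if [ -z "$timed_out" ]; then
  echo "port $PORT is bound; running benchmarks"
fi
```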
@mention-bot

Thanks @michaelhixson for contributing to The Framework Benchmarks! @robfig, @methane and @msmith-techempower, code you've worked on has been modified. If you have the chance, please review. If you wish to unsubscribe from these notices, please open a Pull Request with the commit message [ci skip] and your name added to the userBlacklist array in the .mention-bot file.

@methane
Contributor

methane commented Mar 28, 2017

@michaelhixson Could you add the [ci fw-only Go/revel] prefix to the pull request title?
And could you use glide to pin the dependency versions? (see #2450).
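
For reference, pinning could look roughly like this glide.yaml fragment (the package name and pinned ref are an illustrative sketch, not the actual file from #2450):

```yaml
# Hypothetical glide.yaml sketch: pin revel to a specific commit
# instead of tracking whatever is on master.
package: revel-benchmark
import:
- package: github.com/revel/revel
  version: 0e454f2   # illustrative ref; any fixed commit or tag works
```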

@michaelhixson michaelhixson changed the title Repair revel tests, make future failures less punishing [ci fw-only Go/revel] Repair revel tests, make future failures less punishing Mar 28, 2017
@michaelhixson
Contributor Author

@methane Thanks for looking at this.

could you use glide to pin the dependency versions?

That seems like a good idea, but I'm going to leave that for someone else to implement.

@michaelhixson michaelhixson merged commit f8a4226 into TechEmpower:master Mar 28, 2017