kubernetes: 1.7.9 -> 1.9.1 #33954
Conversation
Nice! You can run the tests via:
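(The exact command was elided from this comment. At the time, a NixOS test could usually be built directly from its test expression; the path below is an assumption based on the `kubernetes.rbac` test name used later in this thread, not the command srhb actually posted:)

```
nix-build nixos/tests/kubernetes/rbac.nix
```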
@srhb Excellent! Thanks, will try that!
Ran

And also

Will be running the same tests on

P.S. I assume that a test passes when the exit code is
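The assumption above is truncated; the usual convention is that a test run signals success with exit code 0. A minimal shell illustration (with a stand-in command, not the actual test invocation):

```shell
# Stand-in for the real test invocation (e.g. a nix-build of the test).
run_test() { true; }

# By convention, exit code 0 means the test passed.
if run_test; then
  echo "test passed"
else
  echo "test failed with exit code $?"
fi
# prints: test passed
```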
One observation: with kubernetes 1.9.1 the tests run much, much longer. Perhaps some behavior changed in the new kubernetes version? Hopefully nothing is actually wrong, but in that case the tests will have to be changed. Will try to figure out how exactly those tests work.
On the other hand, how do we test that the kubernetes modules work properly after this upgrade?
I can test this next week on one of our kubernetes clusters that runs NixOS and check whether everything works as it should. We want to update all clusters to kubernetes 1.9 anyway, so this is on the agenda.
@offlinehacker thanks
@GrahamcOfBorg test kubernetes.rbac |
Failure for system: x86_64-linux
```
error: while evaluating ‘hydraJob’ at /var/lib/gc-of-borg/.nix-test-rs/repo/38dca4e3aa6bca43ea96d2fcc04e8229/builder/grahamc-zoidberg/lib/customisation.nix:167:14, called from /var/lib/gc-of-borg/.nix-test-rs/repo/38dca4e3aa6bca43ea96d2fcc04e8229/builder/grahamc-zoidberg/nixos/release.nix:286:22:
while evaluating the attribute ‘name’ at /var/lib/gc-of-borg/.nix-test-rs/repo/38dca4e3aa6bca43ea96d2fcc04e8229/builder/grahamc-zoidberg/lib/customisation.nix:172:24:
attribute ‘name’ missing, at /var/lib/gc-of-borg/.nix-test-rs/repo/38dca4e3aa6bca43ea96d2fcc04e8229/builder/grahamc-zoidberg/lib/customisation.nix:172:10
```
Is there any way to reproduce this on my box somehow?
@grahamc? Maybe I kicked it off wrong?
No, evaluating it via release.nix is also weird here. Building the test directly seems the only way, but ofborg doesn't do that, obviously.
Why is it weird to evaluate it through release.nix? |
I cherry-picked the two commits from this PR on top of current master (98b35db, Wed Jan 17, eclipse-plugins-ansi-econsole: init at 1.3.5) and

Same error with

The return code was 100 in both cases.
```
$ nix-info -m
```
@grahamc I haven't looked into it, but the tests are sufficiently different from the others in release.nix that the naive approach doesn't work:
Hydra knows how to do it, clearly, but I'm not sure whether ofborg does the exact same thing.
@offlinehacker, did you have a chance to see if it works for you in the field?
@kuznero I've been able to set this up as a single-node cluster on my laptop and have run it without problems since Monday. Compared to the previous version, I only had to add
A little update: I actually hit an issue with this just now, and it matches exactly this one: kubernetes/kubernetes#32796. Which is weird, as this was supposed to be resolved a few releases before 1.9.1...
@jdanekrh thanks for the update.
@jdanekrh I believe that something has changed relating to bootstrapping of the kubelets' authorization. Adding the following line to the start of every test script works:
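(The actual line was elided above. As a purely hypothetical illustration of such a bootstrap step, one could imagine pre-creating a ClusterRoleBinding that grants the kubelet group node permissions before the test proper starts; the binding name below is illustrative and not taken from the PR:)

```
# Hypothetical bootstrap: bind the built-in system:node role
# to the system:nodes group before the kubelets register.
kubectl create clusterrolebinding node-bootstrap \
  --clusterrole=system:node \
  --group=system:nodes
```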
I'm wondering whether this is just due to the CN we're using, or if we need to do something else to bootstrap the clients.
How about something like this? The issue with the current tests is that there is no longer a default ClusterRoleBinding that confers registration access for kubelets with users in the

We could just add them both to the tests, but I think they're sane defaults and match up well with what the k8s community is doing.
nixos/k8s: Enable Node authorizer and NodeRestriction by default
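In NixOS module terms, a sketch of what such defaults could look like (option names are assumed from the `services.kubernetes` module of that era, not quoted from the commit; per upstream docs the Node authorizer should be listed before RBAC):

```nix
services.kubernetes.apiserver = {
  # Assumed option names, illustrative values only
  authorizationMode = [ "Node" "RBAC" ];
  admissionControl  = [ "NodeRestriction" ];
};
```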
Will this ever get merged?
Should be ok
This won't be particularly reliable. Kubernetes 1.9.x works with Docker 17.03.x. We're currently shipping 17.12.x, and Docker has API changes in minor version bumps, so all sorts of flakiness may ensue. It really should be set to force Docker 17.03. (Or better, 1.12. To quote from the documentation:
@Baughn It appears that snippet is from kubeadm, not from Kubernetes itself. I haven't found anything in the actual Kubernetes docs that is worded as strongly; the release notes simply call those "verified" versions, and 17.12 appears to work just fine. We may still want to do something to signal this to the user, and at least provide (one of) the verified version(s) as an option, but forcing this seems a bit strong.
@Baughn This issue is relevant, too: kubernetes/kubernetes#53221. Until k8s switches to matching some/more Docker API versions, the problem remains that we can either choose an EOL Docker or a K8s-unvalidated version. Fun!
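For completeness, if one did want to pin the Docker version as suggested above, NixOS exposes the Docker package through an option; a sketch (whether nixpkgs carries a package attribute matching the validated version is an assumption):

```nix
virtualisation.docker = {
  enable = true;
  # Hypothetical: substitute whichever validated Docker version nixpkgs provides
  package = pkgs.docker;
};
```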
I've tried this today, cherry-picking those on release-17.09. Unfortunately, this has a problem: if CA/key/cert files are not specified explicitly, they'll be generated under

Thanks to @srhb for help on #nixos, I managed to solve it. I've scratched a few notes here: https://gist.github.com/drdaeman/fee048df456ced9f604fb554b78f549f (a sample config and a script to generate dirty certs that would work for a totally insecure local-dev single-node "cluster").

Unfortunately, I'm really brain-dead after the struggle with K8s, so I can't write a proper issue, and my weekend's going to be very busy, so I'm not sure I'll have time for this in the next few days. But I thought I'd at least leave this comment here, in case someone else runs into a similar problem.
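For readers who don't want to dig through the gist: generating a throwaway CA plus a client certificate for a totally insecure local-dev setup can be done with plain openssl along these lines (illustrative only, not the gist's actual script; Kubernetes maps the certificate CN to the user name and O to the group):

```shell
# Throwaway CA -- insecure, local development only
openssl genrsa -out ca.key 2048
openssl req -x509 -new -key ca.key -subj "/CN=local-dev-ca" -days 30 -out ca.crt

# Client key + CSR; the CN/O values are illustrative
openssl genrsa -out client.key 2048
openssl req -new -key client.key -subj "/CN=admin/O=system:masters" -out client.csr

# Sign the client cert with the throwaway CA
openssl x509 -req -in client.csr -CA ca.crt -CAkey ca.key \
  -CAcreateserial -days 30 -out client.crt

openssl verify -CAfile ca.crt client.crt
# prints: client.crt: OK
```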
Motivation for this change
Upgrade kubernetes to the latest v1.9.1 (as well as kubecfg to v0.6.0 and kubernetes-dashboard to v1.8.2). Related to #30639.
Things done
- Tested using sandboxing (`nix.useSandbox` on NixOS, or option `build-use-sandbox` in `nix.conf` on non-NixOS)
- Tested compilation of all pkgs that depend on this change using `nix-shell -p nox --run "nox-review wip"`
- Tested execution of all binary files (usually in `./result/bin/`)