kubernetes: 1.4.6 -> 1.5.1 #21125
Conversation
@moretea, thanks for your PR! By analyzing the history of the files in this pull request, we identified @ebzzry, @offlinehacker and @rushmorem to be potential reviewers.
After updating this PR to 1.5.1, the k8s tests still fail.
One problem is that the generated docker image is not valid. I'm getting the following error when the docker daemon starts:
UPDATE: This was probably caused by an invalid state in the qcow2 image.
So this is a Docker-related issue, not Kubernetes?
On Wed, Dec 14, 2016, 1:07 PM, Maarten Hoogendoorn wrote:
One problem is that the generated docker image is not valid.
I'm getting the following error when the docker daemon starts:
[ 9.708936] dockerd[1230]: time="2016-12-14T12:01:32.822686391Z" level=info msg="libcontainerd: new containerd process, pid: 1238"
[ 9.805169] dockerd[1230]: time="2016-12-14T12:01:32.919309844Z" level=warning msg="devmapper: Usage of loopback devices is strongly discouraged for production use. Please use `--storage-opt dm.thinpooldev` or use `man docker` to refer to dm.thinpooldev section."
[ 9.839039] dockerd[1230]: time="2016-12-14T12:01:32.953487093Z" level=warning msg="devmapper: Base device already exists and has filesystem xfs on it. User specified filesystem will be ignored."
[ 9.850956] dockerd[1230]: time="2016-12-14T12:01:32.965431349Z" level=info msg="[graphdriver] using prior storage driver \"devicemapper\""
[ 9.917129] dockerd[1230]: time="2016-12-14T12:01:33.030317808Z" level=fatal msg="Error starting daemon: layer does not exist"
After deleting /var/lib/docker, I can successfully start docker.service again. /cc @offlinehacker @domenkozar
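The workaround described above can be sketched as a small script. Note that wiping /var/lib/docker destroys all local images, containers and volumes; the `DRY_RUN` guard and the `run` helper are illustrative additions, not part of the original comment.

```shell
#!/bin/sh
# Sketch of the workaround above: reset Docker's on-disk (devicemapper)
# state so the daemon can start cleanly again after the
# "layer does not exist" fatal error.
# WARNING: this deletes all local images, containers and volumes.
# DRY_RUN=1 (the default here) only prints the commands.
DRY_RUN=${DRY_RUN:-1}

run() {
  if [ "$DRY_RUN" = "1" ]; then
    echo "would run: $*"
  else
    "$@"
  fi
}

run systemctl stop docker.service
run rm -rf /var/lib/docker
run systemctl start docker.service
```

Run it once with the default dry run to review the commands, then with `DRY_RUN=0` to actually reset the state.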
Apparently the VM state was tainted. After removing the …
@offlinehacker I noticed that the cluster test is broken on master as well. I'll try to fix that.
@moretea finishing the kubernetes tests and fixing any bugs would be very helpful.
This PR has merge conflicts, needs rebasing.
(force-pushed from 265b624 to 0850b66)
@offlinehacker After rebasing, it's getting OOM errors on my 16GB laptop, see https://gist.github.com/moretea/961bb2bac9bee39416c43a4204aebedc
@moretea you can disable docs generation (comment it out) for now if that seems to be an issue
(force-pushed from 0850b66 to 2b285cb)
I found the problem and fixed it in a39ac20; tested. @offlinehacker if you're OK with these changes, I'll squash this into one commit.
@offlinehacker btw, as a maintainer you can also squash commits on merge.
Disabled "mungedocs", which broke the build. This appears to be a piece of development tooling for verifying that the documentation is correct; we don't really care about that when we package a specific k8s version for NixOS.
`dig` could not be found in the test cases. Adding it as a global package fixes this.
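Since the failure was just `dig` missing from the PATH inside the test VMs, a small preflight check makes that failure mode explicit instead of letting the DNS test cases fail obscurely. The `require_tool` helper is hypothetical, not something from this PR:

```shell
#!/bin/sh
# Preflight check: report whether each tool the k8s DNS test cases
# need is actually on PATH. `require_tool` is a hypothetical helper,
# not part of the PR itself.
require_tool() {
  if command -v "$1" >/dev/null 2>&1; then
    echo "$1: found"
  else
    echo "$1: MISSING (on NixOS, add the package providing it to the system packages)"
  fi
}

require_tool sh    # always present; sanity check
require_tool dig   # was missing before this fix
```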
(force-pushed from 92a768c to 9f892de)
@offlinehacker done.
Is there a PR for this? I could not find it. I guess that a general mechanism for CNI should be the solution there. I'm willing to contribute to that.
Motivation for this change
Updating k8s to 1.5.1, which supports the Container Runtime Interface.
Things done
- Tested using sandboxing (nix.useSandbox on NixOS, or option build-use-sandbox in nix.conf on non-NixOS)
- Tested compilation of all pkgs that depend on this change using nix-shell -p nox --run "nox-review wip"
- Tested execution of all binary files (usually in ./result/bin/)
The tests fail at the moment, because the kube-apiserver has some ACL problem: