testing/kubernetes: kubeadm init fails
I am trying to install kubeadm and run a simple tiny Kubernetes "cluster". I am using the Dockerfile you can see at https://hub.docker.com/r/dahanna/kubernetes/dockerfile . `kubeadm init` does not work, whether I run it in the Dockerfile or in the already-built container. The full output is below.
(I am making a template, https://gitlab.com/dHannasch/container-wrapper-for-python-package/-/blob/kubernetes/.gitlab-ci.yml#L52 , so that anyone can quickly and easily wrap any Python package into a Docker container using kaniko. Most of kaniko needs Kubernetes to work (https://github.com/GoogleContainerTools/kaniko/issues/1346), so I need to spin up a tiny Kubernetes cluster on the fly on a given CI runner.)
@runlevel5 With #11687 (closed) fixed, do you now have a working example? I might just be doing something stupid.
```
$ kubeadm init
W0707 02:28:44.411382 9 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
[init] Using Kubernetes version: v1.18.5
[preflight] Running pre-flight checks
[preflight] The system verification failed. Printing the output from the verification:
KERNEL_VERSION: 4.19.78-coreos
CONFIG_NAMESPACES: enabled
CONFIG_NET_NS: enabled
CONFIG_PID_NS: enabled
CONFIG_IPC_NS: enabled
CONFIG_UTS_NS: enabled
CONFIG_CGROUPS: enabled
CONFIG_CGROUP_CPUACCT: enabled
CONFIG_CGROUP_DEVICE: enabled
CONFIG_CGROUP_FREEZER: enabled
CONFIG_CGROUP_SCHED: enabled
CONFIG_CPUSETS: enabled
CONFIG_MEMCG: enabled
CONFIG_INET: enabled
CONFIG_EXT4_FS: enabled (as module)
CONFIG_PROC_FS: enabled
CONFIG_NETFILTER_XT_TARGET_REDIRECT: enabled (as module)
CONFIG_NETFILTER_XT_MATCH_COMMENT: enabled (as module)
CONFIG_OVERLAY_FS: enabled (as module)
CONFIG_AUFS_FS: not set - Required for aufs.
CONFIG_BLK_DEV_DM: enabled (as module)
OS: Linux
CGROUPS_CPU: enabled
CGROUPS_CPUACCT: enabled
CGROUPS_CPUSET: enabled
CGROUPS_DEVICES: enabled
CGROUPS_FREEZER: enabled
CGROUPS_MEMORY: enabled
[WARNING Service-Kubelet]: kubelet service does not exist
error execution phase preflight: [preflight] Some fatal errors occurred:
	[ERROR NumCPU]: the number of available CPUs 1 is less than the required 2
	[ERROR CRI]: container runtime is not running: output: Client:
 Debug Mode: false
Server:
ERROR: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
errors pretty printing info
, error: exit status 1
	[ERROR IsDockerSystemdCheck]: cannot execute 'docker info -f {{.CgroupDriver}}': exit status 2
	[ERROR FileContent--proc-sys-net-bridge-bridge-nf-call-iptables]: /proc/sys/net/bridge/bridge-nf-call-iptables does not exist
	[ERROR Swap]: running with swap on is not supported. Please disable swap
	[ERROR SystemVerification]: error verifying Docker info: "Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
To see the stack trace of this error execute with --v=5 or higher
```
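For what it's worth, here is a sketch of how each fatal check above could be worked around, assuming a privileged Alpine container where the Docker daemon can be started by hand. I have not verified that this gets all the way through `kubeadm init`; it just maps each `[ERROR ...]` line to the usual remedy:

```shell
# [ERROR CRI] / [ERROR SystemVerification]: no Docker daemon is running,
# so start one in the background (requires a privileged container).
dockerd &

# [ERROR Swap]: kubeadm refuses to run with swap enabled.
swapoff -a

# [ERROR FileContent--proc-sys-net-bridge-bridge-nf-call-iptables]:
# br_netfilter has to be loaded on the host kernel for this file to exist.
modprobe br_netfilter
echo 1 > /proc/sys/net/bridge/bridge-nf-call-iptables

# [ERROR NumCPU]: a single-CPU runner can only proceed by telling kubeadm
# to treat that check as non-fatal, as the preflight message suggests.
kubeadm init --ignore-preflight-errors=NumCPU
```

The swap and `modprobe` steps act on the host kernel, so they may not be possible at all on a shared CI runner where the container is not privileged.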
I noticed in 6701340a that `WARNING: kubeadm cannot validate component configs for API groups` is normal and expected. I've messed around some with docker inside the container, as you can see, but it doesn't seem to help. I'm not sure whether the standard Alpine `docker` package is what Kubernetes wants, but I stuck with it since it successfully silenced the initial `[WARNING Service-Docker]: docker service is not enabled, please run 'rc-update add docker default'`.
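For reference, these are the OpenRC commands that warning refers to, assuming the Alpine `docker` package is installed. Note that `rc-update` only registers the service for a runlevel; inside a container there is usually no OpenRC init running, so the daemon still has to be started explicitly:

```shell
# Register the docker service for the default runlevel
# (this is what silences [WARNING Service-Docker]).
rc-update add docker default

# Actually start the daemon; without an init system this may fail,
# in which case running `dockerd &` directly is the fallback.
rc-service docker start
```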