Error report (original): [root@k8s-master ~]# vim nginx-deploy.yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  replicas: 4
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: 10.0.0.10:5000/nginx:1.15
        ports:
        - containerPort: 80
        volumeMounts:
        - name: nfs_pvc001
          mountPath: /etc/nginx
      volumes:
      - name: nfs_pvc001
        persistentVolumeClaim:
          claimName: pvc0001
[root...
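The error text itself is truncated, but two problems in this manifest are worth flagging: extensions/v1beta1 Deployments were removed in Kubernetes 1.16 (apps/v1 additionally requires an explicit spec.selector), and volume names must be DNS-1123 labels, so nfs_pvc001 (with an underscore) is rejected. A corrected sketch, assuming those are the failures:

```yaml
apiVersion: apps/v1              # extensions/v1beta1 Deployments were removed in k8s 1.16
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  replicas: 4
  selector:                      # required by apps/v1
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: 10.0.0.10:5000/nginx:1.15
        ports:
        - containerPort: 80
        volumeMounts:
        - name: nfs-pvc001       # volume names must be DNS-1123 labels; "nfs_pvc001" is invalid
          mountPath: /etc/nginx
      volumes:
      - name: nfs-pvc001
        persistentVolumeClaim:
          claimName: pvc0001
```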
https://blog.51cto.com/13641616/2442005
1. Symptom:
2. Check the logs:
3. Load the IPVS kernel modules:
cat > /etc/sysconfig/modules/ipvs.modules <<EOF
#!/bin/bash
modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack_ipv4
EOF
chmod 755 /etc/sysconfig/modules/ipvs.modules && bash /etc/sysconfig/modules/ipvs.modules && lsmod | grep -e ip_vs -e nf_conntrack_ipv4
4. Modify kube-pro...
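Step 4 is cut off, but it presumably continues with switching kube-proxy to IPVS mode. On a kubeadm cluster that is typically done by editing the kube-proxy ConfigMap; the exact mechanism here is an assumption:

```yaml
# kubectl edit configmap kube-proxy -n kube-system   (the config.conf key)
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
mode: "ipvs"        # default is "" (falls back to iptables)
```

Afterwards the kube-proxy pods must be restarted to pick up the change, e.g. kubectl -n kube-system delete pod -l k8s-app=kube-proxy.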
A namespace stuck in Terminating: deleting it, even force-deleting, kept erroring. Cause: the ingress controller's image pull kept failing and retrying, so I deleted the ingress-controller, but the delete hung at the namespace stage and I had to Ctrl+C:
[root@master1 ingress]# kubectl delete -f mandatory.yaml
namespace "ingress-nginx" deleted
configmap "nginx-configuration" deleted
configmap "tcp-services" deleted
configmap "udp-services" deleted
servic...
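When Ctrl+C leaves the namespace in Terminating, a common workaround (not shown in the original post; the namespace name and the use of jq are assumptions) is to clear the namespace's finalizers through the /finalize subresource:

```shell
# Dump the stuck namespace, empty its finalizer list with jq,
# and PUT the result to the /finalize subresource.
kubectl get namespace ingress-nginx -o json \
  | jq '.spec.finalizers = []' > /tmp/ns.json
kubectl replace --raw "/api/v1/namespaces/ingress-nginx/finalize" -f /tmp/ns.json
```

This bypasses whatever controller was supposed to clean up, so only use it after the resources inside the namespace are genuinely gone.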
The following error appears when checking node status with kubectl:
[root@localhost ~]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
192.168.7.102 NotReady <none> 30d v1.12.3
192.168.7.103 NotReady <none> 30d v1.12.3
Fixes:
1. If the cluster runs a load-balancing service, check keepalived's status and whether the floating IP is present; if it is missing, restart keepalived and check again.
2. Restart the kubelet service on the NotReady node. [root@localhost ~]#...
Error message. Cause: the mount failed.
1. Check whether the directory named in the error exists on the server; create it if not.
2. Check whether every Node in the cluster has the NFS client installed; if not: yum -y install nfs-utils rpcbind
3. Check the NFS server configuration. Only one export entry was found; add the missing one by hand and restart the service. exportfs
1. Edit the config file: vi /etc/exports
2. Restart the services:
systemctl restart rpcbind
systemctl restart nfs
3. Run exportfs again to verify. Accessing the Pod again, everything is back to normal. End. Original...
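Step 1 above edits /etc/exports; a hypothetical entry (the export path and client subnet are assumptions) looks like:

```
# /etc/exports — one line per export: <path>  <clients>(<options>)
/data/nfs    192.168.0.0/24(rw,sync,no_root_squash)
```

As an alternative to a full restart, exportfs -r re-reads /etc/exports and refreshes the export table in place.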
1. Error 1
Reference docs: https://blog.csdn.net/qianghaohao/article/details/82624920 https://www.cnblogs.com/wangzy-tongq/p/13130877.html
[root@k8s-node01 gudong]# kubeadm join 192.168.31.232:6443 --token abcdef.0123456789abcdef --discovery-token-ca-cert-hash sha256:b5bfe6c4e3cef6455e33c5de503035c49c59da41c3cdbc504b8e7f92d3d329ea
[preflight] Running pre-flight checks
	[WARNING SystemVerification]: th...
[23-Oct-2019 17:27:39] ERROR: failed to ptrace(ATTACH) child 24: Operation not permitted (1)
Add the following to the yaml file:
securityContext:
  capabilities:
    add:
    - SYS_PTRACE
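For context, that fragment sits under the container entry in the pod spec; a fuller sketch (the container name and image are hypothetical — the ptrace(ATTACH) error above is typical of php-fpm's worker monitoring):

```yaml
spec:
  containers:
  - name: php-fpm              # hypothetical name
    image: php:7.3-fpm         # hypothetical image
    securityContext:
      capabilities:
        add:
        - SYS_PTRACE           # permits ptrace(ATTACH) inside the container
```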
Deploying ingress-nginx failed with an error; kubectl describe pod nginx-ingress-controller-6ffc8fdf96-xtg6n -n ingress-nginx shows:
Normal   Scheduled   <unknown>   default-scheduler       Successfully assigned ingress-nginx/nginx-ingress-controller-6ffc8fdf96-xtg6n to 192.168.1.12
Normal   Pulled      21s         kubelet, 192.168.1.12   Container image "quay.io/kubernetes-ingress-controller/nginx-ingress...
Failure case:
Failure found:
kubectl get pod -n kube-system -owide | grep -v "Running"
NAME        READY   STATUS             RESTARTS   AGE   IP              NODE
pod-jljz6   0/1     ImagePullBackOff   0          4d    10.222.96.191   paasn5
Check the pod's details:
kubectl describe pod pod-jljz6 -n kube-system
....
Events:
  Type      Reason   Age        From   Message
  ----      ------   ----       ----   -------
  Warning   Failed   6m(x30...
yum -y install kubernetes
The error:
Error: docker-ce-cli conflicts with 2:docker-1.13.1-109.gitcccb291.el7.centos.x86_64
Error: docker-ce conflicts with 2:docker-1.13.1-109.gitcccb291.el7.centos.x86_64
 You could try using --skip-broken to work around the problem
 You could try running: rpm -Va --nofiles --nodigest
The cause: docker was already installed, and the versions conflict. Fix as follows:
List the installed docker packages:
[root@901-21 ~]...
Problem description
1. After a k8s cluster has been running for a long time, some nodes can no longer create new pods and report the error below; only a server reboot restores them. Checking pod status shows:
applying cgroup … caused: mkdir …no space left on device
Or describe pod shows "cannot allocate memory". At this point the k8s cluster likely has a memory leak: the more pods are created, the more, and the faster, memory leaks.
2. To check whether the leak exists: cat /sys/fs/cgroup/mem...
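The check above is truncated, but these symptoms (cgroup "no space left on device", "cannot allocate memory", fixed only by reboot) match the well-known kernel-memory (kmem) accounting leak on CentOS 7's 3.10 kernels. Assuming that is the issue here, a commonly cited workaround is disabling kmem accounting at boot:

```
# /etc/default/grub (fragment) — append to the existing kernel command line
GRUB_CMDLINE_LINUX="... cgroup.memory=nokmem"
```

After editing, regenerate the grub config (grub2-mkconfig -o /boot/grub2/grub.cfg) and reboot the node for the parameter to take effect.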
Deploying flannel in a k8s cluster fails
The flannel log shows the following error:
Couldn't fetch network config: client: response is invalid json. The endpoint is probably not valid etcd cluster endpoint. timed out
Then check the etcd and flannel versions:
# ./etcd --version
etcd Version: 3.4.15
Git SHA: aa7126864
Go Version: go1.12.17
Go OS/Arch: linux/amd...
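Given etcd 3.4.15, a likely cause (an assumption, since the log ends here) is that flannel's etcd backend speaks the etcd v2 API, which etcd 3.4 disables by default. A sketch of the usual fix; the endpoint and subnet are assumptions:

```shell
# etcd >= 3.4 turns the v2 API off by default; flannel's etcd backend needs it.
# Add this flag to the etcd startup command / systemd unit:
#   --enable-v2=true
# Then write flannel's network config through the v2 API:
ETCDCTL_API=2 etcdctl --endpoints="http://127.0.0.1:2379" \
  set /coreos.com/network/config '{ "Network": "10.244.0.0/16", "Backend": { "Type": "vxlan" } }'
```

After restarting etcd with the flag and re-writing the key, flannel should be able to fetch the network config.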