yangfk
2023-11-03

Installing k8s with kubeadm (self-signed certificates)


OS: Ubuntu 20.04.5 LTS (Focal Fossa)

When kubeadm deploys a k8s cluster, the default CA root certificate is valid for 10 years, while the apiserver and etcd certificates are valid for only one year (they can be rotated). Signing the certificates ourselves lets us extend their lifetime; here I use 20 years.
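
To double-check how long the signed certificates are actually valid, you can inspect them once they are in place (a quick sanity check; paths follow the default /etc/kubernetes/pki layout used later in kubeadm-config.yaml):

# validity window of the CA and apiserver certificates
openssl x509 -in /etc/kubernetes/pki/ca.crt -noout -dates
openssl x509 -in /etc/kubernetes/pki/apiserver.crt -noout -dates

# once the cluster is up, kubeadm can also summarize expirations
kubeadm certs check-expiration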

This builds on the base environment configured in the previous post ✨✨

# Installing k8s with kubeadm (self-signed certificates)

Sign your own certificates first!

| Host IP | Role |
| --- | --- |
| 192.168.255.20 | control-plane, etcd01 |
| 192.168.255.30 | control-plane, etcd02 |
| 192.168.255.31 | node01, etcd03 |
  • Configure /etc/hosts so the hostnames resolve (required on every host)
cat >>/etc/hosts<<EOF
#Note: control-plane-endpoint.yfklife.cn is the cluster controlPlaneEndpoint; it usually points to a high-availability VIP (or HA proxy VIP), but it can also be the control-plane node being deployed
192.168.255.20 control-plane-endpoint.yfklife.cn

192.168.255.20 k8s-manage01
192.168.255.30 k8s-manage02
192.168.255.31 k8s-node01
EOF


# Install the etcd cluster

etcd v3.3.27 download | etcd v3.4.26 download

Run the installation on all three hosts: 192.168.255.20, 192.168.255.30, 192.168.255.31

#-- Download, unpack, and configure
#wget https://github.com/etcd-io/etcd/releases/download/v3.4.26/etcd-v3.4.26-linux-amd64.tar.gz
wget https://github.com/etcd-io/etcd/releases/download/v3.3.27/etcd-v3.3.27-linux-amd64.tar.gz
tar xf etcd-v3.3.27-linux-amd64.tar.gz -C /opt/
mv /opt/etcd-v3.3.27-linux-amd64 /opt/etcd-v3.3.27
ln -s /opt/etcd-v3.3.27 /opt/etcd
mkdir -p /opt/etcd/logs/ /opt/etcd/etcd-server



#-- Configure the startup script

hostnameIP=192.168.255.20 # IP of the current host; change it on each machine you run this on

HOST01=192.168.255.20
HOST02=192.168.255.30
HOST03=192.168.255.31
cat >/opt/etcd/etcd-server-startup.sh<<EOF
#!/bin/sh
./etcd --name etcd-server-${hostnameIP} \
       --data-dir /opt/etcd/etcd-server \
       --listen-peer-urls https://${hostnameIP}:2380 \
       --listen-client-urls https://${hostnameIP}:2379,http://127.0.0.1:2379 \
       --quota-backend-bytes 8000000000 \
       --initial-advertise-peer-urls https://${hostnameIP}:2380 \
       --advertise-client-urls https://${hostnameIP}:2379,http://127.0.0.1:2379 \
       --initial-cluster  etcd-server-${HOST01}=https://${HOST01}:2380,etcd-server-${HOST02}=https://${HOST02}:2380,etcd-server-${HOST03}=https://${HOST03}:2380 \
       --ca-file /etc/kubernetes/etcd/ca.crt \
       --cert-file /etc/kubernetes/etcd/peer.crt \
       --key-file /etc/kubernetes/etcd/peer.key \
       --client-cert-auth  \
       --trusted-ca-file /etc/kubernetes/etcd/ca.crt \
       --peer-ca-file /etc/kubernetes/etcd/ca.crt \
       --peer-cert-file /etc/kubernetes/etcd/peer.crt \
       --peer-key-file /etc/kubernetes/etcd/peer.key \
       --peer-client-cert-auth \
       --peer-trusted-ca-file /etc/kubernetes/etcd/ca.crt \
       --log-output stdout
EOF


chmod 700 /opt/etcd/etcd-server
chmod +x /opt/etcd/etcd-server-startup.sh

# Install supervisor

Manage the etcd service with supervisorctl.

apt install -y supervisor

cat >/etc/supervisor/conf.d/etcd-server.conf<<EOF
[program:etcd-server-${hostnameIP}]
command=/opt/etcd/etcd-server-startup.sh                        ; the program (relative uses PATH, can take args)       
numprocs=1                                                      ; number of processes copies to start (def 1)
directory=/opt/etcd                                             ; directory to cwd to before exec (def no cwd)
autostart=true                                                  ; start at supervisord start (default: true)
autorestart=true                                                ; retstart at unexpected quit (default: true)
startsecs=30                                                    ; number of secs prog must stay running (def. 1)
startretries=3                                                  ; max # of serial start failures (default 3)
exitcodes=0,2                                                   ; 'expected' exit codes for process (default 0,2)
stopsignal=QUIT                                                 ; signal used to kill process (default TERM)
stopwaitsecs=10                                                 ; max num secs to wait b4 SIGKILL (default 10)
user=root                                                       ; setuid to this UNIX account to run the program
redirect_stderr=true                                            ; redirect proc stderr to stdout (default false)
stdout_logfile=/opt/etcd/logs/etcd.stdout.log           ; stdout log path, NONE for none; default AUTO
stdout_logfile_maxbytes=64MB                                    ; max # logfile bytes b4 rotation (default 50MB)
stdout_logfile_backups=4                                        ; # of stdout logfile backups (default 10)
stdout_capture_maxbytes=1MB                                     ; number of bytes in 'capturemode' (default 0)
stdout_events_enabled=false                                     ; emit events on stdout writes (default false)
killasgroup=true
stopasgroup=true
EOF
  • I start etcd as root here, so the steps below can be skipped. If you start it as an unprivileged user, create that user and adjust the supervisor parameter user=root accordingly.
#useradd etcd -M -s /sbin/nologin
#chown -R etcd.etcd /opt/etcd /etc/kubernetes/etcd


supervisorctl  update # run after every configuration change
supervisorctl  status

# Common etcd operations

k8s etcd backup

  • List members

/opt/etcd/etcdctl --endpoints=http://127.0.0.1:2379 member list

  • Remove a member

etcdctl member remove 8211f1d0f64f3269

  • Add a member

etcdctl member add member4 --peer-urls=http://10.0.0.4:2380

Start the newly added member on the machine with IP 10.0.0.4:

export ETCD_NAME="member4"
export ETCD_INITIAL_CLUSTER="member2=http://10.0.0.2:2380,member3=http://10.0.0.3:2380,member4=http://10.0.0.4:2380"
export ETCD_INITIAL_CLUSTER_STATE=existing
etcd [flags]
  • Check etcd cluster health

/opt/etcd/etcdctl cluster-health
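
The same check through the v3 API, using the plaintext localhost listener configured in the startup script (a small convenience, not required):

ETCDCTL_API=3 /opt/etcd/etcdctl --endpoints=http://127.0.0.1:2379 endpoint health
ETCDCTL_API=3 /opt/etcd/etcdctl --endpoints=http://127.0.0.1:2379 endpoint status --write-out=table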

  • Create backups

cat >/opt/etcd/etcd-backup.sh<<'EOF'
Date=$(date +%F-%H_%M)
mkdir -p /opt/etcd/backups
ETCDCTL_API=3 /opt/etcd/etcdctl --endpoints=http://127.0.0.1:2379 snapshot save /opt/etcd/backups/${Date}-snapshot.db
gzip /opt/etcd/backups/${Date}-snapshot.db

# keep 5 days of backups
find /opt/etcd/backups/ -type f -mtime +5 -delete

# verify a snapshot
#ETCDCTL_API=3 etcdctl --write-out=table snapshot status /opt/etcd/backups/${Date}-snapshot.db
EOF

chmod +x /opt/etcd/etcd-backup.sh
# append the job to root's crontab (on Ubuntu the spool file is /var/spool/cron/crontabs/root)
(crontab -l 2>/dev/null; echo '02 */6 * * * /bin/bash /opt/etcd/etcd-backup.sh >/dev/null 2>&1') | crontab -

  • Restore procedure

When restoring the cluster, use the --data-dir option to specify the directory the snapshot should be restored into.

Stop the etcd service first and move the old data directory aside: mv /opt/etcd/etcd-server/member /opt/etcd/etcd-server/member-bak

ETCDCTL_API=3 etcdctl --data-dir /opt/etcd/etcd-server/ snapshot restore /opt/etcd/backups/<timestamp>-snapshot.db
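
For this three-member cluster, a common pattern is to restore the same snapshot on every member, giving each its own name and peer URL, and only then start etcd again. A rough sketch for the first member (adjust the name/IP on the other two; snapshot restore typically refuses to write into an existing directory, so move the old data dir aside first):

# run on 192.168.255.20; repeat on the other members with their own name/IP
mv /opt/etcd/etcd-server /opt/etcd/etcd-server-bak
ETCDCTL_API=3 /opt/etcd/etcdctl snapshot restore /opt/etcd/backups/<timestamp>-snapshot.db \
    --name etcd-server-192.168.255.20 \
    --data-dir /opt/etcd/etcd-server \
    --initial-cluster etcd-server-192.168.255.20=https://192.168.255.20:2380,etcd-server-192.168.255.30=https://192.168.255.30:2380,etcd-server-192.168.255.31=https://192.168.255.31:2380 \
    --initial-advertise-peer-urls https://192.168.255.20:2380

# once all members are restored, start etcd again under supervisor
supervisorctl restart etcd-server-192.168.255.20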

# kubeadm init preparation

kubeadm config print init-defaults --component-configs KubeProxyConfiguration --component-configs KubeletConfiguration > kubeadm-config.yaml

  • Generate kubeadm-config.yaml

kubeadm certificate management

serverTLSBootstrap: true makes the kubelet request its serving certificate from the cluster CA through the certificates API; the resulting CSRs must be approved (see the steps after cluster init below).

cat >/etc/kubernetes/kubeadm-config.yaml<<EOF
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
imageRepository: registry.aliyuncs.com/google_containers
kubernetesVersion: v1.26.2
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
networking: 
  dnsDomain: cluster.local
  serviceSubnet: 10.96.0.0/12
  podSubnet: 10.244.0.0/16
controlPlaneEndpoint: "control-plane-endpoint.yfklife.cn:6443" # resolved via the /etc/hosts entry added earlier
apiServer: # https://godoc.org/k8s.io/kubernetes/cmd/kubeadm/app/apis/kubeadm/v1beta3#APIServer
  timeoutForControlPlane: 4m0s
  extraArgs:
    authorization-mode: "Node,RBAC"
    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,PersistentVolumeClaimResize,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,Priority"
    runtime-config: api/all=true
    storage-backend: etcd3
  certSANs:
  - 127.0.0.1 # localhost is handy for debugging; reserve a few spare IPs
  - localhost
  - 192.168.255.20
  - 192.168.255.30
  - 192.168.255.31
  - 192.168.255.32
  - 192.168.255.33
  extraVolumes:
  - hostPath: /etc/localtime
    mountPath: /etc/localtime
    name: localtime
    readOnly: true
controllerManager: 
  extraArgs:
    bind-address: "0.0.0.0"
  extraVolumes:
  - hostPath: /etc/localtime
    mountPath: /etc/localtime
    name: localtime
    readOnly: true
scheduler: 
  extraArgs:
    bind-address: "0.0.0.0"
  extraVolumes:
  - hostPath: /etc/localtime
    mountPath: /etc/localtime
    name: localtime
    readOnly: true
dns: {}
etcd: 
  external:
    endpoints:
    - https://192.168.255.20:2379
    - https://192.168.255.30:2379
    - https://192.168.255.31:2379
    caFile: /etc/kubernetes/pki/etcd/ca.crt
    certFile: /etc/kubernetes/pki/apiserver-etcd-client.crt
    keyFile: /etc/kubernetes/pki/apiserver-etcd-client.key
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration 
mode: ipvs # or iptables
ipvs:
  excludeCIDRs: null
  minSyncPeriod: 0s
  scheduler: "rr" # 调度算法
  strictARP: false
  syncPeriod: 15s
iptables:
  masqueradeAll: true
  masqueradeBit: 14
  minSyncPeriod: 0s
  syncPeriod: 30s
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration 
cgroupDriver: systemd
failSwapOn: true  # kubelet refuses to start if swap is enabled
containerLogMaxSize: 300Mi
containerLogMaxFiles: 5
serializeImagePulls: false
maxPods: 150
evictionHard:
    memory.available:  "1024Mi"
    nodefs.available: "10%"
serverTLSBootstrap: true # kubelet requests its serving certificate from the cluster CA
---
apiVersion: kubeadm.k8s.io/v1beta3
bootstrapTokens:
- groups:
  - system:bootstrappers:kubeadm:default-node-token
  token: abcdef.0123456789abcdef
  ttl: 24h0m0s
  usages:
  - signing
  - authentication
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 192.168.255.20
  bindPort: 6443
nodeRegistration:
  criSocket: unix:///var/run/containerd/containerd.sock
  imagePullPolicy: IfNotPresent
  name: k8s-manage01
  taints: null
EOF
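
Optionally pre-pull the control-plane images referenced by this config (it uses the imageRepository mirror set above), so the init step does not stall on downloads:

kubeadm config images pull --config /etc/kubernetes/kubeadm-config.yaml
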
  • Initialize the cluster

kubeadm init --config /etc/kubernetes/kubeadm-config.yaml --v=5 --upload-certs
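
After init finishes, point kubectl at the new cluster and approve the kubelet serving-certificate CSRs created because serverTLSBootstrap: true is set (they stay Pending until approved). A short sketch:

mkdir -p $HOME/.kube
cp -i /etc/kubernetes/admin.conf $HOME/.kube/config

# kubelet serving certificates are requested via the certificates API and need approval
kubectl get csr
kubectl get csr | grep Pending | awk '{print $1}' | xargs -r kubectl certificate approve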

# Install the canal network plugin

curl https://raw.githubusercontent.com/projectcalico/calico/v3.24.5/manifests/canal.yaml -O
kubectl apply -f canal.yaml
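
Then wait for the canal pods to come up and for the nodes to go Ready (generic checks, nothing specific to this setup):

kubectl get pods -n kube-system | grep canal
kubectl get nodes -o wide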

# Configure CoreDNS autoscaling

Check whether metrics-server is already configured:

kubectl top node

Set up metrics-server (a minimal sketch follows below).
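
If it is missing, one option is to apply the upstream metrics-server manifest; with serverTLSBootstrap: true and the kubelet CSRs approved, the serving certificates are signed by the cluster CA, so the --kubelet-insecure-tls workaround should not be needed:

kubectl apply -f https://github.com/kubernetes-sigs/metrics-server/releases/latest/download/components.yaml
kubectl -n kube-system get pods | grep metrics-server
kubectl top node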

# Test CoreDNS

For the stress test, it is best to run a couple of client pods generating queries.

cat >/opt/coredns-hpa.yaml<<EOF
apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: coredns
  namespace: kube-system
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: coredns
  minReplicas: 2 # set this to 1 when load testing
  maxReplicas: 10
EOF

kubectl apply -f /opt/coredns-hpa.yaml


# create a load-generating pod
kubectl run -it --rm --restart=Never --image=infoblox/dnstools:latest dnsload

# run this loop inside the dnstools pod shell
while true;do dig @10.96.0.10 kubernetes.default.svc.cluster.local >/dev/null ;done


# check status

kubectl get pod -n kube-system  |grep coredns
kubectl get horizontalpodautoscalers.autoscaling -n kube-system 


# Add nodes

Join nodes to the cluster with kubeadm (a sketch follows below).
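
A sketch of the join flow, using the controlPlaneEndpoint from the hosts entry above; the token, hash and certificate key are placeholders printed by kubeadm itself:

# on k8s-manage01: print a worker join command
kubeadm token create --print-join-command

# on k8s-node01: run the printed command, e.g.
# kubeadm join control-plane-endpoint.yfklife.cn:6443 --token <token> --discovery-token-ca-cert-hash sha256:<hash>

# on k8s-manage02: join as an additional control plane
# kubeadm join control-plane-endpoint.yfklife.cn:6443 --token <token> --discovery-token-ca-cert-hash sha256:<hash> \
#   --control-plane --certificate-key <key printed by kubeadm init --upload-certs>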

# Personal storage download links...

Click to get the download resources listed below.

#docker images; extract the tar before importing
calico-cni-v3.24.5.tar.gz
calico-node-v3.24.5.tar.gz
coredns-v1.8.6.tar.gz
coreos-flannel-v0.15.1.tar.gz


#etcd
etcd-v3.3.27-linux-amd64.tar.gz
etcd-v3.4.24-linux-amd64.tar.gz