Harbor private image registry

Configuration requirements

Hardware

Resource  Minimum  Recommended
CPU       2 CPU    4 CPU
Mem       4 GB     8 GB
Disk      40 GB    160 GB

Software

Docker v17.06.0-ce+ (see the Docker Engine docs)

docker-compose v1.18.0+ (see the Docker Compose docs)

OpenSSL

Network ports

HTTPS 443/4443

HTTP 80

Installation

Releases

https://github.com/goharbor/harbor/releases

Documentation

https://goharbor.io/docs/2.5.3/install-config/download-installer/

harbor

Certificate: ca.key

https://goharbor.io/docs/2.5.3/install-config/configure-https/

Generate the CA private key:

openssl genrsa -out ca.key 4096
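The Harbor HTTPS guide linked above continues from this key. A minimal sketch of the next steps; the subject fields and the k8s.org hostname are assumptions here, substitute your registry's actual domain:

```shell
# Generate a self-signed CA certificate from ca.key, then a server key
# and CSR for the registry host (k8s.org is a placeholder hostname).
openssl genrsa -out ca.key 4096
openssl req -x509 -new -nodes -sha512 -days 3650 \
  -subj "/C=CN/ST=GD/L=SZ/O=example/CN=k8s.org" \
  -key ca.key -out ca.crt
openssl genrsa -out k8s.org.key 4096
openssl req -sha512 -new \
  -subj "/C=CN/ST=GD/L=SZ/O=example/CN=k8s.org" \
  -key k8s.org.key -out k8s.org.csr
```

The CSR is then signed with the CA key (with a v3 extension file for subjectAltName) as described in the Harbor docs.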

kubeadm (containerized) installation

docker

daemon.json

[vagrant@k8s ~]$ sudo vi /etc/docker/daemon.json
[vagrant@k8s ~]$ docker info | grep Driver
Storage Driver: overlay2
Logging Driver: json-file
Cgroup Driver: cgroupfs
[vagrant@k8s ~]$ sudo systemctl restart docker
[vagrant@k8s ~]$ docker info | grep Driver
Storage Driver: overlay2
Logging Driver: json-file
Cgroup Driver: systemd
[vagrant@k8s ~]$ sudo cat /etc/docker/daemon.json
{
"exec-opts": ["native.cgroupdriver=systemd"],
"insecure-registries":["https://k8s.org"]

}

"exec-opts": ["native.cgroupdriver=systemd"]

Error response from daemon: OCI runtime create failed: systemd cgroup flag passed, but systemd support for managing cgroups is not available

/etc/sysconfig/modules/ipvs.modules

cat > /etc/sysconfig/modules/ipvs.modules <<EOF
#!/bin/bash
modprobe -- br_netfilter
modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack_ipv4
EOF
chmod 755 /etc/sysconfig/modules/ipvs.modules
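One caveat with the module list above: on kernels 4.19 and later, nf_conntrack_ipv4 was merged into nf_conntrack, so `modprobe nf_conntrack_ipv4` fails there. A small sketch to pick the right name before writing the script:

```shell
# Pick the conntrack module name for this kernel
# (>= 4.19 renamed nf_conntrack_ipv4 to nf_conntrack).
kver=$(uname -r | cut -d. -f1-2)
major=${kver%%.*}
minor=${kver##*.}
if [ "$major" -gt 4 ] || { [ "$major" -eq 4 ] && [ "$minor" -ge 19 ]; }; then
  conntrack_mod=nf_conntrack
else
  conntrack_mod=nf_conntrack_ipv4
fi
echo "$conntrack_mod"
```

Use `$conntrack_mod` in place of the hard-coded nf_conntrack_ipv4 line when generating ipvs.modules.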
cat /etc/sysctl.conf
net.ipv4.vs.conntrack=1
net.ipv4.vs.conn_reuse_mode=0
net.ipv4.vs.expire_nodest_conn=1

Prepare the image list

./kubeadm config images list
I0809 21:56:57.334915 4785 version.go:254] remote version is much newer: v1.24.3; falling back to: stable-1.21
k8s.gcr.io/kube-apiserver:v1.21.14
k8s.gcr.io/kube-controller-manager:v1.21.14
k8s.gcr.io/kube-scheduler:v1.21.14
k8s.gcr.io/kube-proxy:v1.21.14
k8s.gcr.io/pause:3.4.1
k8s.gcr.io/etcd:3.4.13-0
k8s.gcr.io/coredns/coredns:v1.8.0
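Since the nodes pull from the private registry (k8s.org) instead of k8s.gcr.io, the listed images have to be mirrored. A hedged sketch; the k8s.org/k8s project path is an assumption taken from the flannel image names used further down:

```shell
# Rewrite each k8s.gcr.io image reference to the private registry, then
# pull/tag/push (docker commands left commented; run them on a box with
# access to both registries).
for img in \
  k8s.gcr.io/kube-apiserver:v1.21.14 \
  k8s.gcr.io/pause:3.4.1 \
  k8s.gcr.io/coredns/coredns:v1.8.0; do
  target="k8s.org/k8s/${img#k8s.gcr.io/}"
  echo "$img -> $target"
  # docker pull "$img" && docker tag "$img" "$target" && docker push "$target"
done
```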

flanneld overlay network

github https://github.com/flannel-io/flannel/tree/v0.16.1/Documentation

flannel

flannel details
flannel.yaml
---
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
  name: psp.flannel.unprivileged
  annotations:
    seccomp.security.alpha.kubernetes.io/allowedProfileNames: docker/default
    seccomp.security.alpha.kubernetes.io/defaultProfileName: docker/default
    apparmor.security.beta.kubernetes.io/allowedProfileNames: runtime/default
    apparmor.security.beta.kubernetes.io/defaultProfileName: runtime/default
spec:
  privileged: false
  volumes:
  - configMap
  - secret
  - emptyDir
  - hostPath
  allowedHostPaths:
  - pathPrefix: "/etc/cni/net.d"
  - pathPrefix: "/etc/kube-flannel"
  - pathPrefix: "/run/flannel"
  readOnlyRootFilesystem: false
  # Users and groups
  runAsUser:
    rule: RunAsAny
  supplementalGroups:
    rule: RunAsAny
  fsGroup:
    rule: RunAsAny
  # Privilege Escalation
  allowPrivilegeEscalation: false
  defaultAllowPrivilegeEscalation: false
  # Capabilities
  allowedCapabilities: ['NET_ADMIN', 'NET_RAW']
  defaultAddCapabilities: []
  requiredDropCapabilities: []
  # Host namespaces
  hostPID: false
  hostIPC: false
  hostNetwork: true
  hostPorts:
  - min: 0
    max: 65535
  # SELinux
  seLinux:
    # SELinux is unused in CaaSP
    rule: 'RunAsAny'
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: flannel
rules:
- apiGroups: ['extensions']
  resources: ['podsecuritypolicies']
  verbs: ['use']
  resourceNames: ['psp.flannel.unprivileged']
- apiGroups:
  - ""
  resources:
  - pods
  verbs:
  - get
- apiGroups:
  - ""
  resources:
  - nodes
  verbs:
  - list
  - watch
- apiGroups:
  - ""
  resources:
  - nodes/status
  verbs:
  - patch
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: flannel
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: flannel
subjects:
- kind: ServiceAccount
  name: flannel
  namespace: kube-system
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: flannel
  namespace: kube-system
---
kind: ConfigMap
apiVersion: v1
metadata:
  name: kube-flannel-cfg
  namespace: kube-system
  labels:
    tier: node
    app: flannel
data:
  cni-conf.json: |
    {
      "name": "cbr0",
      "cniVersion": "0.3.1",
      "plugins": [
        {
          "type": "flannel",
          "delegate": {
            "hairpinMode": true,
            "isDefaultGateway": true
          }
        },
        {
          "type": "portmap",
          "capabilities": {
            "portMappings": true
          }
        }
      ]
    }
  net-conf.json: |
    {
      "Network": "10.244.0.0/16",
      "Backend": {
        "Type": "vxlan"
      }
    }
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: kube-flannel-ds
  namespace: kube-system
  labels:
    tier: node
    app: flannel
spec:
  selector:
    matchLabels:
      app: flannel
  template:
    metadata:
      labels:
        tier: node
        app: flannel
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: kubernetes.io/os
                operator: In
                values:
                - linux
      hostNetwork: true
      priorityClassName: system-node-critical
      tolerations:
      - operator: Exists
        effect: NoSchedule
      serviceAccountName: flannel
      initContainers:
      - name: install-cni-plugin
        image: k8s.org/k8s/flannel-cni-plugin:v1.0.0-amd64
        command:
        - cp
        args:
        - -f
        - /flannel
        - /opt/cni/bin/flannel
        volumeMounts:
        - name: cni-plugin
          mountPath: /opt/cni/bin
      - name: install-cni
        image: k8s.org/k8s/flannel:v0.15.1-amd64
        command:
        - cp
        args:
        - -f
        - /etc/kube-flannel/cni-conf.json
        - /etc/cni/net.d/10-flannel.conflist
        volumeMounts:
        - name: cni
          mountPath: /etc/cni/net.d
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      containers:
      - name: kube-flannel
        image: k8s.org/k8s/flannel:v0.15.1-amd64
        command:
        - /opt/bin/flanneld
        args:
        - --ip-masq
        - --kube-subnet-mgr
        - --iface=eth1
        resources:
          requests:
            cpu: "100m"
            memory: "50Mi"
          limits:
            cpu: "100m"
            memory: "50Mi"
        securityContext:
          privileged: false
          capabilities:
            add: ["NET_ADMIN", "NET_RAW"]
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        volumeMounts:
        - name: run
          mountPath: /run/flannel
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      volumes:
      - name: run
        hostPath:
          path: /run/flannel
      - name: cni-plugin
        hostPath:
          path: /opt/cni/bin
      - name: cni
        hostPath:
          path: /etc/cni/net.d
      - name: flannel-cfg
        configMap:
          name: kube-flannel-cfg  

Errors

CrashLoopBackOff

coredns and flannel

xx

Subnet (CIDR) mismatch

Error registering network: failed to acquire lease: node "k8s01" pod cidr not assigned


kubeadm init command

The --pod-network-cidr subnet must match the Network subnet in the flannel config.

flannel.yaml

[root@k8s01 ~]# cat /opt/flannel.yaml | grep -i Network
- "networking.k8s.io"
"Network": "10.244.0.0/16",
hostNetwork: true

kube-controller-manager command

# cat /etc/kubernetes/manifests/kube-controller-manager.yaml  | grep -A 16 command
- command:
- kube-controller-manager
- --authentication-kubeconfig=/etc/kubernetes/controller-manager.conf
- --authorization-kubeconfig=/etc/kubernetes/controller-manager.conf
- --bind-address=127.0.0.1
- --client-ca-file=/etc/kubernetes/pki/ca.crt
- --cluster-name=kubernetes
- --cluster-signing-cert-file=/etc/kubernetes/pki/ca.crt
- --cluster-signing-key-file=/etc/kubernetes/pki/ca.key
- --controllers=*,bootstrapsigner,tokencleaner
- --kubeconfig=/etc/kubernetes/controller-manager.conf
- --leader-elect=true
- --requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.crt
- --root-ca-file=/etc/kubernetes/pki/ca.crt
- --service-account-private-key-file=/etc/kubernetes/pki/sa.key
- --use-service-account-credentials=true
image: k8s.org/k8s/kube-controller-manager:v1.26.1

--allocate-node-cidrs=true: allocate and set the Pod CIDRs on nodes (via the cloud provider, when one is in use)

--cluster-cidr=10.244.0.0/16: the CIDR range for Pods in the cluster
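The three places that must agree can be checked side by side. A sketch with this cluster's values hard-coded; substitute your own:

```shell
flannel_network="10.244.0.0/16"   # "Network" in flannel.yaml net-conf.json
cluster_cidr="10.244.0.0/16"      # --cluster-cidr on kube-controller-manager
pod_network_cidr="10.244.0.0/16"  # --pod-network-cidr passed to kubeadm init
if [ "$flannel_network" = "$cluster_cidr" ] && [ "$cluster_cidr" = "$pod_network_cidr" ]; then
  echo "CIDRs consistent"
else
  echo "CIDR mismatch" >&2
  exit 1
fi
```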

coredns


Kubernetes deployment structure

go env

Building the binaries

You have a working Go environment.

GOPATH=`go env | grep GOPATH | cut -d '"' -f 2 `
mkdir -p $GOPATH/src/k8s.io
cd $GOPATH/src/k8s.io
git clone https://github.com/kubernetes/kubernetes
cd kubernetes
git checkout v1.21.12
make
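A bare `make` builds everything. When only one component is needed, the kubernetes Makefile accepts WHAT; these are standard upstream build options, not specific to this setup:

```shell
# Print the usual single-component build invocations (run them inside the
# kubernetes checkout; shown via cat so they are easy to copy).
cat <<'EOF'
make WHAT=cmd/kubeadm                                    # one binary
KUBE_BUILD_PLATFORMS=linux/amd64 make WHAT=cmd/kubelet   # one platform
EOF
```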

$GOPATH/go.mod exists but should not  # with module support enabled, the checkout cannot live under GOPATH, so unset GOPATH

go mod vendor

Prerequisites

  • A compatible Linux host. The Kubernetes project provides generic instructions for Debian- and Red Hat-based distributions, as well as for distributions without a package manager
  • 2 GB or more of RAM per machine (any less leaves little room for your applications)
  • 2 CPUs or more
  • Full network connectivity between all machines in the cluster (a public or a private network is fine)
  • Unique hostname, MAC address, and product_uuid for every node. See here for details.
  • Certain ports open on your machines. See here for details.
  • Swap disabled. You MUST disable swap for the kubelet to work properly

More at https://kubernetes.io/zh-cn/docs/setup/production-environment/tools/kubeadm/install-kubeadm/
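The swap requirement from the list above is usually handled like this (a sketch; the sed pattern assumes a conventional fstab layout):

```shell
# swapoff -a                                      # disable swap immediately
# sed -ri 's|^([^#].*\sswap\s)|#\1|' /etc/fstab   # keep it off across reboots
# The fstab transform, demonstrated on a sample line:
printf '/dev/sda2 none swap sw 0 0\n' | sed -r 's|^([^#].*\sswap\s)|#\1|'
```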

Two HA cluster topologies

Docs: https://kubernetes.io/zh-cn/docs/setup/production-environment/tools/kubeadm/ha-topology/

Stacked etcd

This topology couples the control plane and etcd members on the same nodes. It is simpler to set up, but carries the risk of coupled failure: losing one node loses both a control-plane instance and an etcd member.

iptables firewall

Packet flow

  • When a packet arrives at a network interface, it first enters the PREROUTING chain, where the kernel decides from the destination IP whether it must be forwarded.
  • If the packet is destined for this host, it reaches the INPUT chain, where any local process can receive it. Packets sent by local programs traverse the OUTPUT chain and then the POSTROUTING chain on their way out.
  • If the packet is to be forwarded, and the kernel allows forwarding, it moves through the FORWARD chain and then out via the POSTROUTING chain.

# temporary (until reboot)
echo 1 > /proc/sys/net/ipv4/ip_forward
# permanent
cs@debian:~/oss/hexo$ cat /etc/sysctl.conf | grep net.ipv4.ip_
net.ipv4.ip_forward=1

Common grep filters

Context lines: -A, -B, -C

grep -A n: print the match and the n lines after it

grep -B n: print the match and the n lines before it

grep -C n: print the match and n lines on each side

cs@debian:~/oss/hexo$ cat /opt/nginx/logs/k8s-access.log | grep -C 5 "2022:15:43:27"
127.0.0.1 - k8s-apiserver - [31/Jul/2022:15:43:22 +0800] 502 0
127.0.0.1 - k8s-apiserver - [31/Jul/2022:15:43:23 +0800] 502 0
127.0.0.1 - k8s-apiserver - [31/Jul/2022:15:43:24 +0800] 502 0
127.0.0.1 - k8s-apiserver - [31/Jul/2022:15:43:25 +0800] 502 0
127.0.0.1 - k8s-apiserver - [31/Jul/2022:15:43:26 +0800] 502 0
127.0.0.1 - 192.168.56.103:6443, 192.168.56.101:6443, 192.168.56.102:6443 - [31/Jul/2022:15:43:27 +0800] 502 0, 0, 0
127.0.0.1 - k8s-apiserver - [31/Jul/2022:15:43:28 +0800] 502 0
127.0.0.1 - k8s-apiserver - [31/Jul/2022:15:43:29 +0800] 502 0
127.0.0.1 - k8s-apiserver - [31/Jul/2022:15:43:30 +0800] 502 0
127.0.0.1 - k8s-apiserver - [31/Jul/2022:15:43:30 +0800] 502 0
127.0.0.1 - k8s-apiserver - [31/Jul/2022:15:43:31 +0800] 502 0

AND

Match multiple patterns by chaining greps

cs@debian:~/oss/hexo$ cat /opt/nginx/logs/k8s-access.log | grep "2022:15:43:2" | grep 502
127.0.0.1 - k8s-apiserver - [31/Jul/2022:15:43:20 +0800] 502 0
127.0.0.1 - k8s-apiserver - [31/Jul/2022:15:43:21 +0800] 502 0
127.0.0.1 - k8s-apiserver - [31/Jul/2022:15:43:21 +0800] 502 0
127.0.0.1 - k8s-apiserver - [31/Jul/2022:15:43:22 +0800] 502 0
127.0.0.1 - k8s-apiserver - [31/Jul/2022:15:43:23 +0800] 502 0
127.0.0.1 - k8s-apiserver - [31/Jul/2022:15:43:24 +0800] 502 0
127.0.0.1 - k8s-apiserver - [31/Jul/2022:15:43:25 +0800] 502 0
127.0.0.1 - k8s-apiserver - [31/Jul/2022:15:43:26 +0800] 502 0
127.0.0.1 - 192.168.56.103:6443, 192.168.56.101:6443, 192.168.56.102:6443 - [31/Jul/2022:15:43:27 +0800] 502 0, 0, 0
127.0.0.1 - k8s-apiserver - [31/Jul/2022:15:43:28 +0800] 502 0
127.0.0.1 - k8s-apiserver - [31/Jul/2022:15:43:29 +0800] 502 0

OR: alternation with |
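For completeness, an alternation example on the same kind of status codes, with sample lines inlined so it runs standalone:

```shell
# grep -E enables extended regex; 502|503 matches either status code.
printf '502\n200\n503\n' | grep -E '502|503'
```

`grep -e '502' -e '503'` is equivalent without -E.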

The tree utility

Directory depth: -L and -d

-d: list directories only

-L level: descend at most level levels

cs@debian:/$ tree -Ld  1
.
├── bin
├── boot
├── dev
├── etc
├── home
├── lib
├── lib64
├── lost+found
├── media
├── mnt
├── opt
├── proc
├── root
├── run
├── sbin
├── snap
├── srv
├── sys
├── tmp
├── usr
└── var

Path prefix: -f

-f: print the full path prefix for each entry (relative or absolute, depending on the argument given)

cs@debian:/opt/apache$ tree -Ldf  1 ./
.
├── ./apache-maven-3.8.6
├── ./kafka-2.1.1
├── ./maven-3.6.0
├── ./tomcat-8.5.38
└── ./zookeeper-3.4.13

5 directories
cs@debian:/opt/apache$ tree -Ldf 1 /opt/apache/
/opt/apache
├── /opt/apache/apache-maven-3.8.6
├── /opt/apache/kafka-2.1.1
├── /opt/apache/maven-3.6.0
├── /opt/apache/tomcat-8.5.38
└── /opt/apache/zookeeper-3.4.13

5 directories

Maven

Installation

Environment variables

cs@debian:~/oss/hexo$ wget https://dlcdn.apache.org/maven/maven-3/3.8.6/binaries/apache-maven-3.8.6-bin.tar.gz -O apache-maven-3.8.6-bin.tar.gz
cs@debian:~/oss/hexo$ tar -zxvf apache-maven-3.8.6-bin.tar.gz -C /opt/apache
cs@debian:~/oss/hexo$ cat >> ~/.bashrc <<EOF
#maven
if [ -d "/opt/apache/apache-maven-3.8.6" ] ; then
export MAVEN_HOME=/opt/apache/apache-maven-3.8.6
export PATH=\${MAVEN_HOME}/bin:\$PATH
fi
EOF

Version
cs@debian:~/oss/hexo$ mvn -version

Apache Maven 3.8.6 (84538c9988a25aec085021c365c560670ad80f63)
Maven home: /opt/apache/apache-maven-3.8.6
Java version: 11.0.12, vendor: Oracle Corporation, runtime: /opt/jdk/jdk-11.0.12
Default locale: zh_CN, platform encoding: UTF-8
OS name: "linux", version: "4.9.0-8-amd64", arch: "amd64", family: "unix"

Basic commands

Compile

mvn compile

Compiles the Java sources under src/main/java into .class files (output under target/).

Test

mvn test

Compiles and runs the tests under src/test/java.

Clean

mvn clean

Deletes the target directory, i.e. removes the compiled .class files and other build output.

Package

mvn package

Produces the archive under target/: a jar for a Java project, a war for a web project.

Install

mvn install

mvn install -Dmaven.test.skip

Installs the archive (jar or war) into the local repository; the -Dmaven.test.skip form skips the tests.

Deploy | release

mvn deploy

Uploads the archive to the remote (private) repository.

Multi-module builds

Scenario: hundreds of microservices, but only some of them need packaging.

<profiles>
  <profile>
    <modules>
      <module>sorl-util</module>
      <module>page-interface</module>
      <module>page-service</module>
      <module>search-interface</module>
      <module>search-service</module>
      <module>search-web</module>
      ....
    </modules>
  </profile>
</profiles>
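Profiles are one way; for ad-hoc partial builds, Maven's reactor flags do the same from the command line. The module names below are the ones from the list above; to use -P instead, the profile would also need an <id> declared:

```shell
# Print the reactor-based alternatives (-pl = projects list,
# -am = also-make dependencies, -amd = also-make dependents).
cat <<'EOF'
mvn package -pl page-service,search-service -am   # listed modules plus what they depend on
mvn package -pl search-interface -amd             # plus the modules that depend on it
EOF
```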

Kubernetes cluster

Common commands

Resource abbreviations

certificatesigningrequests (abbr. csr)
componentstatuses (abbr. cs)
configmaps (abbr. cm)
customresourcedefinition (abbr. crd)
daemonsets (abbr. ds)
deployments (abbr. deploy)
endpoints (abbr. ep)
events (abbr. ev)
horizontalpodautoscalers (abbr. hpa)
ingresses (abbr. ing)
limitranges (abbr. limits)
namespaces (abbr. ns)
networkpolicies (abbr. netpol)
nodes (abbr. no)
persistentvolumeclaims (abbr. pvc)
persistentvolumes (abbr. pv)
poddisruptionbudgets (abbr. pdb)
pods (abbr. po)
podsecuritypolicies (abbr. psp)
replicasets (abbr. rs)
replicationcontrollers (abbr. rc)
resourcequotas (abbr. quota)
serviceaccounts (abbr. sa)
services (abbr. svc)
statefulsets (abbr. sts)
storageclasses (abbr. sc)

Auto-completion

sudo apt install bash-completion

source /usr/share/bash-completion/bash_completion
source <(kubectl completion bash)

echo "source <(kubectl completion bash)" >> ~/.bashrc

/usr/bin/zsh /usr/bin/bash

cs (master nodes)

componentstatuses

cs@debian:~$ kubectl get cs
NAME STATUS MESSAGE ERROR
scheduler Healthy ok
controller-manager Healthy ok
etcd-2 Healthy {"health": "true"}
etcd-0 Healthy {"health": "true"}
etcd-1 Healthy {"health": "true"}

Nodes

cs@debian:~$ kubectl  get node
NAME STATUS ROLES AGE VERSION
master02 Ready <none> 101d v1.18.8
master03 Ready <none> 101d v1.18.8
node04 Ready <none> 101d v1.18.8
node05 Ready <none> 101d v1.18.8
node06 Ready <none> 101d v1.18.8

kubectl get node -o wide

traefik

Installing Resource Definitions and RBAC

# Install Traefik Resource Definitions:
kubectl apply -f https://raw.githubusercontent.com/traefik/traefik/v2.10/docs/content/reference/dynamic-configuration/kubernetes-crd-definition-v1.yml

# Install RBAC for Traefik:
kubectl apply -f https://raw.githubusercontent.com/traefik/traefik/v2.10/docs/content/reference/dynamic-configuration/kubernetes-crd-rbac.yml

The apiextensions.k8s.io/v1beta1 CustomResourceDefinition is deprecated in Kubernetes v1.16+ and will be removed in v1.22+.

For Kubernetes v1.16+, please use the Traefik apiextensions.k8s.io/v1 CRDs instead.

Traefik & CRD & Let's Encrypt

traefik.sh

traefik:v2.2.10
bash traefik.sh
#!/bin/bash
DIR="$(cd "$(dirname "$0")" && pwd)"

base_file=$DIR/test
crd=1-crd.yaml
rbac=2-rbac.yaml
role=3-role.yaml
static=4-static_config.yaml
dynamic=5-dynamic_toml.toml
deploy=6-deploy.yaml
svc=7-service.yaml
ingress=8-ingress.yaml

y_crd(){ cat >$1 <<EOF
---
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: ingressroutes.traefik.containo.us
spec:
  group: traefik.containo.us
  version: v1alpha1
  names:
    kind: IngressRoute
    plural: ingressroutes
    singular: ingressroute
  scope: Namespaced
---
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: middlewares.traefik.containo.us
spec:
  group: traefik.containo.us
  version: v1alpha1
  names:
    kind: Middleware
    plural: middlewares
    singular: middleware
  scope: Namespaced
---
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: ingressroutetcps.traefik.containo.us
spec:
  group: traefik.containo.us
  version: v1alpha1
  names:
    kind: IngressRouteTCP
    plural: ingressroutetcps
    singular: ingressroutetcp
  scope: Namespaced
---
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: ingressrouteudps.traefik.containo.us
spec:
  group: traefik.containo.us
  version: v1alpha1
  names:
    kind: IngressRouteUDP
    plural: ingressrouteudps
    singular: ingressrouteudp
  scope: Namespaced
---
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: tlsoptions.traefik.containo.us
spec:
  group: traefik.containo.us
  version: v1alpha1
  names:
    kind: TLSOption
    plural: tlsoptions
    singular: tlsoption
  scope: Namespaced
---
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: tlsstores.traefik.containo.us
spec:
  group: traefik.containo.us
  version: v1alpha1
  names:
    kind: TLSStore
    plural: tlsstores
    singular: tlsstore
  scope: Namespaced
---
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: traefikservices.traefik.containo.us
spec:
  group: traefik.containo.us
  version: v1alpha1
  names:
    kind: TraefikService
    plural: traefikservices
    singular: traefikservice
  scope: Namespaced
EOF
}
y_rbac(){ cat>$1 < rules: - apiGroups: - "" resources: - services - endpoints - secrets verbs: - get - list - watch - apiGroups: - "" resources: - persistentvolumes verbs: - get - list - watch - create # persistentvolumes - delete - apiGroups: - "" resources: - persistentvolumeclaims verbs: - get - list - watch - update # persistentvolumeclaims - apiGroups: - extensions resources: - ingresses verbs: - get - list - watch - apiGroups: - extensions resources: - ingresses/status verbs: - update - apiGroups: - traefik.containo.us resources: - middlewares - ingressroutes - traefikservices - ingressroutetcps - ingressrouteudps - tlsoptions - tlsstores verbs: - get - list - watch EOF }
y_role(){ cat >$1 < }
# static configuration (the dynamic file is referenced from it) ======================?
y_static_config(){ cat >$1 <
genkey(){
  openssl req \
    -newkey rsa:2048 -nodes -keyout tls.key \
    -x509 -days 3650 -out tls.crt \
    -subj "/C=CN/ST=GD/L=SZ/O=cs/OU=shea/CN=k8s.org"
  #kubectl create secret generic traefik-cert --from-file=tls.crt --from-file=tls.key -n kube-system
}
y_dynamic_toml(){ cat >$1 < EOF }
y_deploy(){ cat >$1 < y_service(){ cat >$1 < EOF }
y_ingress(){ cat >$1 <<"EOF"
---
apiVersion: traefik.containo.us/v1alpha1
kind: IngressRoute
metadata:
  name: traefik-dashboard-route
  namespace: kube-system
spec:
  entryPoints:
  - web
  routes:
  - match: Host(`master02`)  # pod node 192.168.56.109
    kind: Rule
    services:
    - name: traefik
      port: 8080
EOF
}
[ -d "$base_file" ] || { echo "directory missing, creating it" && mkdir $base_file; }
[ -n "$(which openssl)" ] || { echo "openssl is required but was not found, exiting" && exit 1; }
cd $base_file
# genkey
# [ -f "tls.key" ] || { echo "key was not generated, exiting" && exit 1; }
#kubectl create secret generic traefik-cert --from-file=tls.crt --from-file=tls.key -n kube-system
#kubectl create configmap traefik-conf --from-file=$dynamic -n kube-system
arr=($crd $rbac $role $static $dynamic $deploy $svc $ingress)
for i in ${arr[@]}; do
  echo "generating: $i"
  y_${i:2:0-5} $i
  [ -f "$i" ] || { echo "$i was not generated, exiting" && exit 1; }
  #kubectl apply -f $i
done
traefik-v2.10.4
bash traefik.sh
#!/bin/bash
DIR="$(cd "$(dirname "$0")" && pwd)"

version="k8s.org/k8s/traefik:v2.10.4"
base_file=$DIR/test
crd=1-crd.yaml
rbac=2-rbac.yaml
static=3-static_config.yaml
dynamic=4-dynamic_toml.toml
deploy=5-deploy.yaml
svc=6-service.yaml
ingress=7-ingress.yaml
y_crd(){
  [ -f "$DIR/crd.yml" ] && { echo "cp crd" && cp $DIR/crd.yml $DIR/test/$1 && return 0; }
  url=https://raw.githubusercontent.com/traefik/traefik/v2.10/docs/content/reference/dynamic-configuration/kubernetes-crd-definition-v1.yml
  echo "please run: wget -O crd.yml $url"
}
y_rbac(){
  [ -f "$DIR/rabc.yml" ] && { echo "cp rabc" && cp $DIR/rabc.yml $DIR/test/$1 && return 0; }
  url=https://raw.githubusercontent.com/traefik/traefik/v2.10/docs/content/reference/dynamic-configuration/kubernetes-crd-rbac.yml
  echo "please run: wget -O rabc.yml $url"
}

# static configuration (the dynamic file is referenced from it) ======================?
y_static_config(){ cat >$1 <
genkey(){
  openssl req \
    -newkey rsa:2048 -nodes -keyout tls.key \
    -x509 -days 3650 -out tls.crt \
    -subj "/C=CN/ST=GD/L=SZ/O=cs/OU=shea/CN=ui.k8s.cn"  #ui.k8s.cn matches the rule host
  #kubectl create secret generic traefik-cert --from-file=tls.crt --from-file=tls.key -n kube-system
}
y_dynamic_toml(){ cat >$1 < EOF }
y_deploy(){ cat >$1 <<EOF
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: traefik-ingress-controller
  labels:
    app: traefik
spec:
  selector:
    matchLabels:
      app: traefik
  template:
    metadata:
      name: traefik
      labels:
        app: traefik
    spec:
      serviceAccountName: traefik-ingress-controller
      terminationGracePeriodSeconds: 1
      containers:
      - image: $version
        name: traefik
        ports:
        - name: web
          containerPort: 80
          hostPort: 80  ## bind the container port to port 80 on the host
        - name: websecure
          containerPort: 443
          hostPort: 443  ## bind the container port to port 443 on the host
        - name: redis
          containerPort: 6379
          hostPort: 6379
        - name: admin
          containerPort: 8080  ## Traefik Dashboard port
        resources:
          limits:
            cpu: 200m
            memory: 256Mi
          requests:
            cpu: 100m
            memory: 256Mi
        securityContext:
          capabilities:
            drop:
            - ALL
            add:
            - NET_BIND_SERVICE
        args:
        - --configfile=/config/traefik.yaml
        volumeMounts:
        - mountPath: "/config"
          name: "config"
        - mountPath: "/ssl"
          name: "ssl"
      volumes:
      - name: config
        configMap:
          name: traefik-config-yaml
      - name: ssl
        secret:
          secretName: traefik-cert
EOF
}
y_service(){ cat >$1 < EOF }
#kubectl apply -f https://raw.githubusercontent.com/traefik/traefik/v2.10/docs/content/user-guides/crd-acme/04-ingressroutes.yml
y_ingress(){ cat >$1 <<"EOF"
apiVersion: traefik.io/v1alpha1  # v3 deprecates v1alpha1; use v1
kind: IngressRoute
metadata:
  name: dashboard
spec:
  entryPoints:
  - websecure
  routes:
  - match: Host(`ui.k8s.cn`)
    kind: Rule
    services:
    - name: api@internal
      kind: TraefikService
  tls:
    secretName: traefik-cert
EOF
}
[ -d "$base_file" ] || { echo "directory missing, creating it" && mkdir $base_file; }
[ -n "$(which openssl)" ] || { echo "openssl is required but was not found, exiting" && exit 1; }
cd $base_file
# genkey
# [ -f "tls.key" ] || { echo "key was not generated, exiting" && exit 1; }
#kubectl create secret generic traefik-cert --from-file=tls.crt --from-file=tls.key -n kube-system
#kubectl create configmap traefik-conf --from-file=$dynamic -n kube-system
arr=($crd $rbac $static $dynamic $deploy $svc $ingress)
for i in ${arr[@]}; do
  echo "generating: $i"
  y_${i:2:0-5} $i
  [ -f "$i" ] || { echo "$i was not generated, exiting" && exit 1; }
  # kubectl apply -f $i
done


$ bash traefik.sh
$ tree ./test
./test
├── 1-crd.yaml
├── 2-rbac.yaml
├── 3-role.yaml
├── 4-static_config.yaml
├── 5-dynamic_toml.toml
├── 6-deploy.yaml
├── 7-service.yaml
└── 8-ingress.yaml

https://www.lvbibir.cn/posts/tech/kubernetes-traefik-2-router/

helm

❯ helm install -f ./traefik/values.yaml  -name traefik   --namespace kube-system  ./traefik
NAME: traefik
LAST DEPLOYED: Wed Sep 6 20:00:43 2023
NAMESPACE: kube-system
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
Traefik Proxy v2.10.4 has been deployed successfully on kube-system namespace !
❯ helm upgrade -name traefik --namespace kube-system ./traefik
Release "traefik" has been upgraded. Happy Helming!
NAME: traefik
LAST DEPLOYED: Wed Sep 6 20:08:33 2023
NAMESPACE: kube-system
STATUS: deployed
REVISION: 2
TEST SUITE: None
NOTES:
Traefik Proxy v2.10.4 has been deployed successfully on kube-system namespace !
❯ helm uninstall -name traefik --namespace kube-system
release "traefik" uninstalled

nginx

#https://docs.nginx.com/nginx-ingress-controller
❯ helm repo add nginx-stable https://helm.nginx.com/stable
"nginx-stable" has been added to your repositories
❯ helm pull nginx-stable/nginx-ingress --untar

#https://github.com/kubernetes/ingress-nginx
❯ helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
"ingress-nginx" has been added to your repositories
❯ helm pull ingress-nginx/ingress-nginx --untar

❯ kubectl get validatingwebhookconfigurations.admissionregistration.k8s.io
NAME WEBHOOKS AGE
ingress-nginx-admission 1 42s

❯ kubectl delete -A validatingwebhookconfigurations.admissionregistration.k8s.io ingress-nginx-admission
validatingwebhookconfiguration.admissionregistration.k8s.io "ingress-nginx-admission" deleted

UPGRADE FAILED: cannot patch "grafana" with kind Ingress: Internal error occurred: failed calling webhook "validate.nginx.ingress.kubernetes.io": failed to call webhook: Post "https://ingress-nginx-controller-admission.default.svc:443/networking/v1/ingresses?timeout=10s": x509: certificate signed by unknown authority

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: todo
  namespace: default
  annotations:
    kubernetes.io/ingress.class: "nginx"
    nginx.ingress.kubernetes.io/app-root: /app/
    nginx.ingress.kubernetes.io/rewrite-target: /$2
    nginx.ingress.kubernetes.io/configuration-snippet: |
      rewrite ^(/app)$ $1/ redirect;
      rewrite ^/stylesheets/(.*)$ /app/stylesheets/$1 redirect;
      rewrite ^/images/(.*)$ /app/images/$1 redirect;
spec:
  rules:
  - host: todo.qikqiak.com
    http:
      paths:
      - backend:
          serviceName: todo
          servicePort: 3000
        path: /app(/|$)(.*)
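The rewrite-target /$2 refers to the second capture group of the path regex /app(/|$)(.*). Roughly the same substitution, demonstrated with sed:

```shell
# /app/stylesheets/main.css matches /app(/|$)(.*) with
# group1 = "/" and group2 = "stylesheets/main.css".
echo "/app/stylesheets/main.css" | sed -E 's#^/app(/|$)(.*)#/\2#'
```

So a request for /app/stylesheets/main.css is forwarded upstream as /stylesheets/main.css.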