k8s storage

k8s storage provisioners

Implementation library

https://github.com/kubernetes-sigs/sig-storage-lib-external-provisioner

nfs

Every node in the Kubernetes cluster needs the NFS client installed.

https://github.com/kubernetes-sigs/nfs-subdir-external-provisioner
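The client package name depends on the distribution; assuming Debian- and RHEL-family nodes, something like:

```shell
# Debian/Ubuntu nodes
apt-get install -y nfs-common
# RHEL/CentOS nodes
yum install -y nfs-utils
```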

helm install -f ./nfs-subdir-external-provisioner/values.yaml ./nfs-subdir-external-provisioner \
  -n nfs-provisioner \
  --set storageClass.name=nfs-client \
  --set storageClass.defaultClass=false \
  --set nfs.server=192.168.1.11 \
  --set nfs.path=/opt/dynamic-storage \
  --generate-name
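After the install, the provisioner pod and the class it registered can be checked (assuming kubectl access to the cluster):

```shell
kubectl get storageclass nfs-client
kubectl -n nfs-provisioner get pods
```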

PVCs are then provisioned automatically; charts can be pointed at the class with:

--set global.storageClass=nfs-client
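Individual workloads can also request the class directly in a PVC. A minimal sketch; the claim name and size are assumed:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-claim            # name assumed
spec:
  storageClassName: nfs-client
  accessModes:
    - ReadWriteMany           # NFS supports shared read-write
  resources:
    requests:
      storage: 1Gi
```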

How to remount the original volume after a pod is deleted

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: nfs-client
provisioner: k8s-sigs.io/nfs-subdir-external-provisioner # must match the deployment's PROVISIONER_NAME env var
parameters:
  pathPattern: "${.PVC.namespace}/${.PVC.annotations.nfs.io/storage-path}" # reads the nfs.io/storage-path annotation; treated as an empty string if unset
  onDelete: delete

Add the annotation that pathPattern reads:

persistence:
  enabled: true
  annotations:
    nfs.io/storage-path: "prometheus/name"
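Because pathPattern resolves to a deterministic directory on the NFS export (here `<namespace>/prometheus/name`), a recreated claim with the same annotation remounts the same data. A minimal sketch of such a PVC; the name and size are assumed:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: prometheus-data                     # name assumed
  annotations:
    nfs.io/storage-path: "prometheus/name"  # pathPattern resolves to <namespace>/prometheus/name
spec:
  storageClassName: nfs-client
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
```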

pvc

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
{{- if .Values.server.persistentVolume.annotations }}
  annotations:
{{ toYaml .Values.server.persistentVolume.annotations | indent 4 }}
{{- end }}

....

volumeClaimTemplates

volumeClaimTemplates:
  - metadata:
      name: storage
      {{- with .Values.persistence.annotations }}
      annotations:
        {{- toYaml . | nindent 10 }}
      {{- end }}
    spec:
      accessModes:

.....
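Given the persistence values shown earlier, the template renders to roughly the following fragment (access mode assumed for illustration):

```yaml
volumeClaimTemplates:
  - metadata:
      name: storage
      annotations:
        nfs.io/storage-path: "prometheus/name"
    spec:
      accessModes:
        - ReadWriteOnce
```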

csi

https://kubernetes-csi.github.io/docs/external-provisioner.html

CSI Driver = Node (DaemonSet) + Controller (StatefulSet)

  • Orange parts: Identity, Node, and Controller must be implemented by the driver author; they are called the Custom Components.
  • Blue parts: node-driver-registrar, external-attacher, and external-provisioner are developed and maintained by the Kubernetes team; they are called the External Components and run as sidecars alongside the Custom Components.
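The sidecar arrangement can be sketched as a controller-plugin pod spec; the driver name and images are illustrative, not a real driver:

```yaml
# Controller plugin: External Components run as sidecars beside the custom driver,
# all talking to the driver over a shared UNIX socket.
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: my-csi-controller
spec:
  serviceName: my-csi-controller
  replicas: 1
  selector:
    matchLabels: { app: my-csi-controller }
  template:
    metadata:
      labels: { app: my-csi-controller }
    spec:
      containers:
        - name: csi-provisioner          # External Component (sidecar)
          image: registry.k8s.io/sig-storage/csi-provisioner:v3.5.0
          args: ["--csi-address=/csi/csi.sock"]
          volumeMounts:
            - { name: socket-dir, mountPath: /csi }
        - name: csi-attacher             # External Component (sidecar)
          image: registry.k8s.io/sig-storage/csi-attacher:v4.3.0
          args: ["--csi-address=/csi/csi.sock"]
          volumeMounts:
            - { name: socket-dir, mountPath: /csi }
        - name: my-csi-driver            # Custom Components: Identity + Controller services
          image: example.com/my-csi-driver:latest   # illustrative image
          volumeMounts:
            - { name: socket-dir, mountPath: /csi }
      volumes:
        - name: socket-dir
          emptyDir: {}
```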
Interaction flow
  • External Provisioner + Controller Server: create and delete volumes
  • External Attacher + Controller Server: attach and detach volumes
  • Volume Manager + Volume Plugin + Node Server: mount and umount volumes
  • AD Controller + Volume Plugin: create and delete VolumeAttachment objects
  • External Resizer + Controller Server: expand volumes
  • External Snapshotter + Controller Server: snapshot volumes
  • Driver Registrar + Volume Manager + Node Server: register the CSI plugin and create the CSINode object
  • controller: can run multiple replicas for redundancy; responsible for provisioning (dynamic create/delete) and attach (mount/umount) work

  • node-driver-registrar: deployed on every node; registers the driver with the kubelet on each node

CSI Node and CSI Identity are usually deployed in the same container; together with CSI Controller they complete the volume mount operation.
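The per-node half can be sketched as a DaemonSet pairing node-driver-registrar with the driver container (Identity + Node services). The hostPath layout follows the usual kubelet plugin convention; driver name and images are illustrative:

```yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: my-csi-node
spec:
  selector:
    matchLabels: { app: my-csi-node }
  template:
    metadata:
      labels: { app: my-csi-node }
    spec:
      containers:
        - name: node-driver-registrar    # registers the driver's socket with the kubelet
          image: registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.8.0
          args:
            - --csi-address=/csi/csi.sock
            - --kubelet-registration-path=/var/lib/kubelet/plugins/my-csi-driver/csi.sock
          volumeMounts:
            - { name: plugin-dir, mountPath: /csi }
            - { name: registration-dir, mountPath: /registration }
        - name: my-csi-driver            # Custom Components: Identity + Node services
          image: example.com/my-csi-driver:latest   # illustrative image
          volumeMounts:
            - { name: plugin-dir, mountPath: /csi }
      volumes:
        - name: plugin-dir
          hostPath: { path: /var/lib/kubelet/plugins/my-csi-driver, type: DirectoryOrCreate }
        - name: registration-dir
          hostPath: { path: /var/lib/kubelet/plugins_registry, type: Directory }
```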

https://blog.csdn.net/2301_76975791/article/details/131098408

s3-csi

https://github.com/ctrox/csi-s3

kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: csi-s3-existing-bucket
provisioner: ch.ctrox.csi.s3-driver
reclaimPolicy: Retain
parameters:
  mounter: rclone
  bucket: some-existing-bucket-name
  # 'usePrefix' must be true in order to enable the prefix feature and to avoid the removal of the prefix or bucket
  usePrefix: "true"
  # 'prefix' can be empty (it will mount on the root of the bucket), an existing prefix or a new one
  prefix: custom-prefix
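A claim against this class would look roughly like the following; the claim name and size are assumed:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: csi-s3-pvc            # name assumed
spec:
  storageClassName: csi-s3-existing-bucket
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi
```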

mounter

Mounts an S3 bucket as a filesystem.

rclone-minio

https://rclone.org/s3/#liara-cloud

~/.config/rclone/rclone.conf

[minio]
type = s3
provider = Minio
env_auth = false
access_key_id = USWUXHGYZQYFYFFIT3RE
secret_access_key = MOJRH0mkL1IPauahWITSVvyDrQbEEIwljvmxdq03
region = us-east-1
endpoint = http://192.168.1.106:9000
location_constraint =
server_side_encryption =
[minionew]
type = s3
provider = Minio
env_auth = false
access_key_id = USWUXHGYZQYFYFFIT3RE
secret_access_key = MOJRH0mkL1IPauahWITSVvyDrQbEEIwljvmxdq03
region = us-east-1
endpoint = http://xxxxx:9000
location_constraint =
server_side_encryption =
rclone migration

Prerequisite: the time zone and clock of the two machines must be kept in sync.

rclone sync minio:bucket minionew:bucket

The bucket name must be the same on both remotes.
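rclone can rehearse and then verify the transfer with standard flags (the `minio` and `minionew` remotes must exist in rclone.conf):

```shell
# preview the transfer without writing anything
rclone sync minio:bucket minionew:bucket --dry-run
# run the real sync with progress output
rclone sync minio:bucket minionew:bucket --progress
# verify that source and destination contents match
rclone check minio:bucket minionew:bucket
```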
