ceph


Once the Ceph cluster is up, create an RBD image (when using a StorageClass, the RBD image is created automatically):

rbd create --size 1024 ceph-image

List RBD images:

rbd list
ceph-image

Show the RBD image details:

rbd info ceph-image

Disable the image features the kernel RBD client does not support (otherwise rbd map can fail on the k8s nodes):

rbd feature disable ceph-image exclusive-lock object-map fast-diff deep-flatten

Generate the secret key (base64-encode the admin key for use in the Kubernetes Secret):

ceph auth get-key client.admin | base64
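As a concrete sketch of what this produces: the base64 string used in secret.yaml is just the encoded form of the plain admin key, and the encoding round-trips. The literal key below is the example value from this walkthrough (obtain your own with `ceph auth get-key client.admin`), not one to reuse:

```shell
# Example admin key (the value the base64 string in secret.yaml decodes to).
KEY='AQB+FgJdjW4OHBAAHaOzwRH1r7++MpdSUPKMpQ=='

# Encode it; this is the value that goes into the `key:` field of secret.yaml.
ENCODED=$(printf '%s' "$KEY" | base64)
echo "$ENCODED"

# Decoding the encoded value returns the original key unchanged.
printf '%s' "$ENCODED" | base64 -d
```

Note the `printf '%s'` (or `echo -n`): piping the key with a trailing newline would produce a different base64 value that Ceph will reject.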


k8s


Install the Ceph client tools on each k8s node:

yum install -y ceph-common

Copy ceph.client.admin.keyring and ceph.conf from /etc/ceph on a Ceph node into /etc/ceph on every k8s node (each node needs both files):

scp ceph.conf ceph.client.admin.keyring root@192.168.1.133:/etc/ceph/
scp ceph.conf ceph.client.admin.keyring root@192.168.1.135:/etc/ceph/
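With more than a couple of nodes, a loop is less error-prone. A minimal sketch, assuming the two node IPs from the example above; RUN=echo keeps it a dry run that only prints the commands, so set RUN= to actually copy:

```shell
# Dry-run by default: prints each scp command instead of executing it.
RUN=echo
for node in 192.168.1.133 192.168.1.135; do
  $RUN scp /etc/ceph/ceph.conf /etc/ceph/ceph.client.admin.keyring root@"$node":/etc/ceph/
done
```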

Create the following resource files on k8s and apply them:

secret.yaml

apiVersion: v1
kind: Secret
metadata:
  name: ceph-secret
type: kubernetes.io/rbd
data:
  key: QVFCK0ZnSmRqVzRPSEJBQUhhT3p3UkgxcjcrK01wZFNVUEtNcFE9PQ==

pv.yaml

apiVersion: v1
kind: PersistentVolume
metadata:
  name: ceph-pv
spec:
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteOnce
  rbd:       
    monitors:
      - 192.168.1.121:6789
      - 192.168.1.122:6789
      - 192.168.1.123:6789
    pool: rbd
    image: ceph-image
    user: admin
    secretRef:
      name: ceph-secret
    fsType: ext4
    readOnly: false
  persistentVolumeReclaimPolicy: Retain  # Recycle is only supported by NFS and HostPath, not RBD

pvc.yaml

kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: ceph-claim
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi

pod.yaml

apiVersion: v1
kind: Pod
metadata:
  name: ceph-pod1
spec:
  containers:
  - name: ceph-busybox
    image: busybox
    command: ["sleep", "60000"]
    volumeMounts:
    - name: ceph-vol1
      mountPath: /usr/share/busybox
      readOnly: false
  volumes:
  - name: ceph-vol1
    persistentVolumeClaim:
      claimName: ceph-claim
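After applying the files, a few commands confirm that the PV and PVC bound and that the RBD image is mounted inside the pod. A sketch of that check (RUN=echo makes it a dry run that only prints the commands; set RUN= on a node with cluster access to execute them):

```shell
# Dry-run by default: prints each kubectl command instead of executing it.
RUN=echo
$RUN kubectl apply -f secret.yaml -f pv.yaml -f pvc.yaml -f pod.yaml
$RUN kubectl get pv ceph-pv                              # expect STATUS Bound
$RUN kubectl get pvc ceph-claim                          # expect STATUS Bound
$RUN kubectl exec ceph-pod1 -- df -h /usr/share/busybox  # RBD device mounted here
```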