ceph


Use a StorageClass to provision PVs dynamically.

Install ceph-common on every node that will use Ceph:

yum install -y ceph-common
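
A quick check that the client tools are in place:

ceph --version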

On the Ceph cluster, create a pool and a client with the corresponding permissions:

ceph osd pool create kube 1024
ceph auth get-or-create client.kube mon 'allow r' osd 'allow class-read object_prefix rbd_children, allow rwx pool=kube' -o ceph.client.kube.keyring
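
As an optional sanity check, print the new client entry and confirm its capabilities match what was requested:

ceph auth get client.kube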

Copy the three files  /etc/ceph/ceph.client.admin.keyring , ceph.client.kube.keyring  and  ceph.conf  from the Ceph cluster to the /etc/ceph directory on the Kubernetes nodes via scp.

scp ceph.conf root@192.168.1.133:/etc/ceph/
scp ceph.client.admin.keyring root@192.168.1.135:/etc/ceph/
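
With many nodes, a small loop saves typing; this is just a sketch using the two node IPs from above, substitute your own list:

for node in 192.168.1.133 192.168.1.135; do
  scp /etc/ceph/ceph.conf /etc/ceph/ceph.client.admin.keyring /etc/ceph/ceph.client.kube.keyring root@$node:/etc/ceph/
done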


Note: if PVs cannot be provisioned dynamically, the problem is usually on the Ceph side. First check whether the RBD image was actually created:

~]# rbd list --pool kube
kubernetes-dynamic-pvc-c8dfd9c2-90d7-11e9-93d5-000c295f36e2
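
To see which features are enabled on the image (useful before the next step), inspect it with rbd info, using the image name from the listing above:

rbd info kube/kubernetes-dynamic-pvc-c8dfd9c2-90d7-11e9-93d5-000c295f36e2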

If the image exists but the volume still cannot be mounted (typically because the kernel RBD client does not support some image features), disable the features that are not needed:

rbd feature disable kubernetes-dynamic-xxx exclusive-lock object-map fast-diff deep-flatten
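
To avoid disabling features image by image, a common alternative is to set the default feature bits in the ceph.conf used by the client that creates the images, so new images enable only layering (feature bit 1), which older kernel clients support:

[global]
rbd_default_features = 1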



k8s


On the Ceph cluster, generate the base64-encoded key that will go into the Kubernetes Secret resource:

ceph auth get-key client.admin | base64

Create the admin Secret in Kubernetes:

secret.yaml

apiVersion: v1
kind: Secret
metadata:
  name: ceph-secret
  namespace: kube-system
data:
  key: QVFCK0ZnSmRqVzRPSEJBQUhhT3p3UkgxcjcrK01wZFNVUEtNcFE9PQ==
type: kubernetes.io/rbd
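
Apply it and confirm that the Secret exists:

kubectl apply -f secret.yaml
kubectl get secret ceph-secret -n kube-system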

Create the user Secret:

ceph auth get-key client.kube | base64

ceph-user-secret.yaml

apiVersion: v1
kind: Secret
metadata:
  name: ceph-user-secret
  namespace: default
data:
  key: QVFENVJ3ZGRld0VIRWhBQTVaSjVNU1g0UmlJcnRpQTk5aEIvakE9PQ==
type: kubernetes.io/rbd
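
Apply this one as well; it lives in the default namespace, which is where the PVC below will be created:

kubectl apply -f ceph-user-secret.yaml
kubectl get secret ceph-user-secret -n default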


Create the StorageClass resource:

ceph-storageclass.yaml

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: dynamic
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"
provisioner: kubernetes.io/rbd
parameters:
  monitors: 192.168.1.121:6789,192.168.1.122:6789,192.168.1.123:6789
  adminId: admin
  adminSecretName: ceph-secret
  adminSecretNamespace: kube-system
  pool: kube
  userId: kube
  userSecretName: ceph-user-secret
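
Apply the StorageClass and check that it is marked as the default:

kubectl apply -f ceph-storageclass.yaml
kubectl get storageclass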


Create the PVC resource:

ceph-claim.yaml

kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: ceph-claim
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
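
Since the StorageClass above is annotated as the default, the claim needs no explicit storageClassName. Apply it and watch it bind:

kubectl apply -f ceph-claim.yaml
kubectl get pvc ceph-claim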

Test that the storage is usable:

pod.yaml

apiVersion: v1
kind: Pod
metadata:
  name: ceph-test
  namespace: default
  labels:
    app: init
spec:
  containers:
  - name: nginx
    image: nginx:latest
    imagePullPolicy: IfNotPresent
    ports:
    - name: http
      containerPort: 80
    volumeMounts:
    - name: html
      mountPath: /usr/share/nginx/html
  initContainers:
  - name: busybox
    image: busybox:latest
    imagePullPolicy: IfNotPresent
    command: ["/bin/sh","-c","echo 'html'> /usr/share/nginx/html/index.html"]
    volumeMounts:
    - name: html
      mountPath: /usr/share/nginx/html
  volumes:
  - name: html
    persistentVolumeClaim:
      claimName: ceph-claim

The init container first writes a file into the storage, which is then mounted into the main container's web root. Accessing the Pod IP verifies that the storage works.
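
A minimal check (the Pod IP below is a placeholder, take it from the kubectl output):

kubectl apply -f pod.yaml
kubectl get pod ceph-test -o wide
curl http://<pod-ip>/

The response should be the string 'html' written by the init container.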