Switching an Integrated OpenShift Container Registry to GlusterFS
Overview
This topic reviews how to attach a GlusterFS volume to an integrated OpenShift Container Registry. This can be done with any of Containerized GlusterFS, External GlusterFS, or standalone GlusterFS. It is assumed that the registry has already been started and a volume has been created.
Prerequisites
- An existing registry deployed without configuring storage.

- An existing GlusterFS volume.

- glusterfs-fuse installed on all schedulable nodes.

- A user with the cluster-admin role binding. For this guide, that user is admin.

All oc commands in this topic are executed on the master host as the admin user.
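As a quick pre-flight check (a sketch only; it assumes RHEL- or CentOS-based nodes and that you are already logged in as admin), the FUSE client and the current user can be verified with:

# On each schedulable node:
$ yum list installed glusterfs-fuse

# On the master host:
$ oc whoami
admin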
Manually Provision the GlusterFS PersistentVolumeClaim
- To enable static provisioning, first create a GlusterFS volume. See the GlusterFS Administration Guide for information on how to do this using the gluster command-line interface, or the heketi project site for information on how to do this using heketi-cli. For this example, the volume will be named myVol1.
- Define the following Service and Endpoints in gluster-endpoints.yaml:

---
apiVersion: v1
kind: Service
metadata:
  name: glusterfs-cluster (1)
spec:
  ports:
  - port: 1
---
apiVersion: v1
kind: Endpoints
metadata:
  name: glusterfs-cluster (1)
subsets:
- addresses:
  - ip: 192.168.122.221 (2)
  ports:
  - port: 1 (3)
- addresses:
  - ip: 192.168.122.222 (2)
  ports:
  - port: 1 (3)
- addresses:
  - ip: 192.168.122.223 (2)
  ports:
  - port: 1 (3)

(1) These names must match.
(2) The ip values must be the actual IP addresses of a GlusterFS server, not hostnames.
(3) The port number is ignored.
- From the OKD master host, create the Service and Endpoints:

$ oc create -f gluster-endpoints.yaml
service "glusterfs-cluster" created
endpoints "glusterfs-cluster" created
- Verify that the Service and Endpoints were created:

$ oc get services
NAME                CLUSTER_IP      EXTERNAL_IP   PORT(S)   SELECTOR   AGE
glusterfs-cluster   172.30.205.34   <none>        1/TCP     <none>     44s

$ oc get endpoints
NAME                ENDPOINTS                                                AGE
docker-registry     10.1.0.3:5000                                            4h
glusterfs-cluster   192.168.122.221:1,192.168.122.222:1,192.168.122.223:1    11s
kubernetes          172.16.35.3:8443                                         4d
Endpoints are unique per project. Each project accessing the GlusterFS volume needs its own Endpoints.
- To access the volume, the container must run with either a user ID (UID) or group ID (GID) that has access to the file system on the volume. This information can be discovered in the following manner:

$ mkdir -p /mnt/glusterfs/myVol1

$ mount -t glusterfs 192.168.122.221:/myVol1 /mnt/glusterfs/myVol1

$ ls -lnZ /mnt/glusterfs/
drwxrwx---. 592 590 system_u:object_r:fusefs_t:s0    myVol1 (1) (2)

(1) The UID is 592.
(2) The GID is 590.
- Define the following PersistentVolume (PV) in gluster-pv.yaml:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: gluster-default-volume (1)
  annotations:
    pv.beta.kubernetes.io/gid: "590" (2)
spec:
  capacity:
    storage: 2Gi (3)
  accessModes: (4)
  - ReadWriteMany
  glusterfs:
    endpoints: glusterfs-cluster (5)
    path: myVol1 (6)
    readOnly: false
  persistentVolumeReclaimPolicy: Retain

(1) The name of the volume.
(2) The GID on the root of the GlusterFS volume.
(3) The amount of storage allocated to this volume.
(4) accessModes are used as labels to match a PV and a PVC. They currently do not define any form of access control.
(5) The Endpoints resource previously created.
(6) The GlusterFS volume that will be accessed.
- From the OKD master host, create the PV:

$ oc create -f gluster-pv.yaml
- Verify that the PV was created:

$ oc get pv
NAME                     LABELS    CAPACITY     ACCESSMODES   STATUS      CLAIM     REASON    AGE
gluster-default-volume   <none>    2147483648   RWX           Available                       2s
- Create a PersistentVolumeClaim (PVC) that will bind to the new PV in gluster-claim.yaml:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: gluster-claim (1)
spec:
  accessModes:
  - ReadWriteMany (2)
  resources:
    requests:
      storage: 1Gi (3)

(1) The claim name is referenced by the pod under its volumes section.
(2) Must match the accessModes of the PV.
(3) This claim will look for PVs offering 1Gi or greater capacity.
- From the OKD master host, create the PVC:

$ oc create -f gluster-claim.yaml
- Verify that the PV and PVC are bound:

$ oc get pv
NAME                     LABELS    CAPACITY   ACCESSMODES   STATUS    CLAIM           REASON    AGE
gluster-default-volume   <none>    2Gi        RWX           Bound     gluster-claim             37s

$ oc get pvc
NAME            LABELS    STATUS    VOLUME                   CAPACITY   ACCESSMODES   AGE
gluster-claim   <none>    Bound     gluster-default-volume   2Gi        RWX           24s
PVCs are unique per project. Each project accessing the GlusterFS volume needs its own PVC. PVs are not bound to a single project, so PVCs across multiple projects may refer to the same PV.
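Per the notes above, the Endpoints (and its Service) must exist in every project that mounts the GlusterFS volume. A minimal sketch, assuming a hypothetical second project named other-project and reusing the definition file created earlier:

$ oc create -f gluster-endpoints.yaml -n other-project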
Attach the PersistentVolumeClaim to the Registry
Before moving forward, ensure that the docker-registry service is running.
$ oc get svc
NAME              CLUSTER_IP       EXTERNAL_IP   PORT(S)    SELECTOR                  AGE
docker-registry   172.30.167.194   <none>        5000/TCP   docker-registry=default   18m

If either the docker-registry service or its associated pod is not running, refer back to the registry setup instructions for troubleshooting before continuing.
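The registry pod itself can also be checked directly. A minimal sketch, using the docker-registry=default selector shown in the service output above; the pod should be in the Running state before continuing:

$ oc get pods -l docker-registry=default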
Then, attach the PVC:
$ oc set volume deploymentconfigs/docker-registry --add --name=registry-storage -t pvc \
    --claim-name=gluster-claim --overwrite
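Adding the volume triggers a new registry deployment. As a sketch (these verification commands are not part of the original procedure), the rollout and the resulting volume configuration can be checked with:

$ oc rollout status deploymentconfigs/docker-registry
$ oc set volume deploymentconfigs/docker-registry

The second command, run without --add or --remove, lists the volumes defined on the deployment configuration; registry-storage should now reference the gluster-claim PVC.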
Setting up the Registry provides more information on using an OpenShift Container Registry.
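As a final sanity check (a sketch only, reusing the mount created while discovering the UID and GID earlier), registry data should appear on the GlusterFS volume after an image is pushed:

$ ls /mnt/glusterfs/myVol1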