Create a Postgres Database using a PVC
Cloud applications often require a place to persist relational data. Containers inside pods have ephemeral filesystems that are lost when a pod restarts or terminates. Persistent volumes solve this problem by allowing data to persist beyond a pod's lifetime. This tutorial explains how to create a Postgres database on a persistent volume, and then demonstrates that the data survives the restart of the Postgres pod.
Note that the PVC data won't move if CloudFlow moves your project to a new location. See here for strategies to handle this, or upgrade to our Pro Plan, which lets you specify a static location configuration.
You'll create the following resources:

- a `location-optimizer` ConfigMap to specify a single location
- a persistent volume claim for the database, `postgres-pvc`
- a Postgres deployment, `postgres-deployment`, using the official Postgres image on Docker Hub
- a Service, `postgres-service`, that points to the Postgres pod
Run in a Single Location
You'll likely want your database to run in a single location, so specify a `maximumLocations` of 1 in your `location-optimizer` ConfigMap resource:
```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: location-optimizer
data:
  strategy: |
    {
      "strategy": "SolverServiceV1",
      "params": {
        "policy": "dynamic",
        "minimumLocations": 1,
        "maximumLocations": 1
      }
    }
```
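Apply it like any other resource; the filename `location-optimizer.yaml` is just an example:

```shell
$ kubectl apply -f location-optimizer.yaml
configmap/location-optimizer created
```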
Create the PVC
Next, create the PersistentVolumeClaim to hold the database.
```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: postgres-pvc
spec:
  resources:
    requests:
      storage: 50Mi
  volumeMode: Filesystem
  accessModes:
    - ReadWriteMany
```
Apply and check the PersistentVolumeClaim resource.

```shell
$ kubectl apply -f postgres-pvc-claim.yaml
persistentvolumeclaim/postgres-pvc created
$ kubectl get pvc
NAME           STATUS    VOLUME   CAPACITY   ACCESS MODES   STORAGECLASS   AGE
postgres-pvc   Pending                                                     14m
$
```

Don't worry if the claim shows `Pending`: with some storage classes the volume isn't provisioned until the first pod consumes the claim.
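If the claim stays `Pending` longer than you expect, `kubectl describe` shows its provisioning events (output varies by cluster, so none is reproduced here):

```shell
$ kubectl describe pvc postgres-pvc
```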
Create the Deployment
Notes about the deployment:

- Write replicas, which you are creating below, must number no more than 1. If you need horizontal scale, you can create a separate deployment of read replicas.
- The PVC is mounted at `/var/lib/postgresql/data`, and the database files are placed in the `k8s` folder underneath it. The `k8s` subfolder and `runAsUser` are specified so that the Postgres container works properly with the security settings of the volume backing the PVC.
- In production you should use a Secret resource instead of supplying a password directly; see the sketch after this list.
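A minimal sketch of the Secret approach, assuming you create a Secret named `postgres-credentials` with a `password` key (both names are illustrative, not part of this tutorial's manifests):

```yaml
# Created beforehand with, for example:
#   kubectl create secret generic postgres-credentials \
#     --from-literal=password='choose-a-strong-password'
env:
  - name: POSTGRES_PASSWORD
    valueFrom:
      secretKeyRef:
        name: postgres-credentials   # illustrative Secret name
        key: password                # illustrative key within the Secret
```

The deployment below keeps the literal value for simplicity.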
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: postgres-deployment
spec:
  replicas: 1 # must be no more than 1
  selector:
    matchLabels:
      app: postgres-deployment
  template:
    metadata:
      labels:
        app: postgres-deployment
    spec:
      securityContext:
        runAsUser: 999
      containers:
        - name: postgres
          image: postgres
          imagePullPolicy: Always
          ports:
            - containerPort: 5432
          env:
            - name: POSTGRES_PASSWORD
              value: UseASecretInstead
            - name: PGDATA
              value: /var/lib/postgresql/data/k8s
          volumeMounts:
            - name: postgres-data
              mountPath: /var/lib/postgresql/data
          resources:
            requests:
              memory: ".5Gi"
              cpu: "500m"
            limits:
              memory: ".5Gi"
              cpu: "500m"
      volumes:
        - name: postgres-data
          persistentVolumeClaim:
            claimName: postgres-pvc
```
Apply and check the Deployment resource.

```shell
$ kubectl apply -f postgres-deployment.yaml
deployment.apps/postgres-deployment created
$ kubectl get pods -o wide
NAME                                   READY   STATUS    RESTARTS   AGE    IP              NODE        NOMINATED NODE   READINESS GATES
postgres-deployment-74fffd4c8b-98sf9   1/1     Running   0          6m7s   10.244.73.223   sfo-gwuie   <none>           <none>
postgres-deployment-74fffd4c8b-d6d5c   1/1     Running   0          6m7s   10.246.184.74   nyc-adljs   <none>           <none>
$
```
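If you'd rather not poll `kubectl get pods`, the standard `kubectl rollout status` command blocks until the deployment finishes rolling out (shown here as an optional extra step):

```shell
$ kubectl rollout status deployment/postgres-deployment
deployment "postgres-deployment" successfully rolled out
```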
Create the Service
```yaml
apiVersion: v1
kind: Service
metadata:
  name: postgres-service
spec:
  ports:
    - port: 5432
  selector:
    app: postgres-deployment
```
Apply and check the Service resource.

```shell
$ kubectl apply -f postgres-service.yaml
service/postgres-service created
$ kubectl get service
NAME               TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)    AGE
ingress-upstream   ClusterIP   10.43.209.247   <none>        80/TCP     25h
kubernetes         ClusterIP   10.43.0.1       <none>        443/TCP    25h
postgres-service   ClusterIP   10.43.227.23    <none>        5432/TCP   6s
$
```
The Service gives you a hostname, `postgres-service`, that points to your deployment; you'll use it with the `-h` argument of `psql` in the next section. The hostname also insulates you from the fact that a restarted pod may come back with a different IP address.
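The short name works from pods in the same namespace. From elsewhere in the cluster you can use the fully qualified form, which follows the standard Kubernetes DNS pattern (the `default` namespace here is an assumption; substitute your own):

```shell
$ psql -h postgres-service.default.svc.cluster.local -U postgres
```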
Test It
We'll exec into the SFO pod, run `psql`, create a table, and run a query.
```shell
$ kubectl exec -it postgres-deployment-74fffd4c8b-98sf9 -- sh
postgres-deployment-74fffd4c8b-98sf9$ psql -h postgres-service
Password for user postgres: XXXXXXXXXXXXX
psql (15.1 (Debian 15.1-1.pgdg110+1))
Type "help" for help.

postgres=# \dt
Did not find any relations.
postgres=# CREATE TABLE PETS(
ID SERIAL PRIMARY KEY NOT NULL,
NAME TEXT NOT NULL UNIQUE,
ANIMAL TEXT NOT NULL
);
CREATE TABLE
postgres=# INSERT INTO PETS (NAME,ANIMAL) VALUES('JED','CAT');
INSERT 0 1
postgres=# INSERT INTO PETS (NAME,ANIMAL) VALUES('BUCKLEY','DOG');
INSERT 0 1
postgres=# SELECT * FROM PETS;
 id |  name   | animal
----+---------+--------
  1 | JED     | CAT
  2 | BUCKLEY | DOG
(2 rows)

postgres=# quit
postgres-deployment-74fffd4c8b-98sf9$ exit
```
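As an optional check, you can also confirm that the database files landed in the `k8s` subfolder that `PGDATA` points at; you should see Postgres's data files, such as `PG_VERSION` and `base`:

```shell
$ kubectl exec -it postgres-deployment-74fffd4c8b-98sf9 -- ls /var/lib/postgresql/data/k8s
```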
Now we'll delete the deployment but not the PVC, recreate the deployment, and then validate that the table is still present in the SFO location. This demonstrates that the PVC retained the data.
```shell
$ kubectl delete deploy postgres-deployment
deployment.apps "postgres-deployment" deleted
$ kubectl get pods -o wide
No resources found in default namespace.
$ kubectl apply -f postgres-deployment.yaml
deployment.apps/postgres-deployment created
$ kubectl get pods -o wide
NAME                                   READY   STATUS    RESTARTS   AGE   IP               NODE        NOMINATED NODE   READINESS GATES
postgres-deployment-74fffd4c8b-2fxsp   1/1     Running   0          18s   10.246.104.111   nyc-adljs   <none>           <none>
postgres-deployment-74fffd4c8b-8chhc   1/1     Running   0          18s   10.244.73.226    sfo-gwuie   <none>           <none>
$ kubectl exec -it postgres-deployment-74fffd4c8b-8chhc -- sh
postgres-deployment-74fffd4c8b-8chhc$ psql -h postgres-service
Password for user postgres:
psql (15.1 (Debian 15.1-1.pgdg110+1))
Type "help" for help.

postgres=# SELECT * FROM PETS;
 id |  name   | animal
----+---------+--------
  1 | JED     | CAT
  2 | BUCKLEY | DOG
(2 rows)

postgres=# quit
postgres-deployment-74fffd4c8b-8chhc$ exit
$
```
And there you have it! You can now access your Postgres database from within your CloudFlow project at host `postgres-service`, port 5432.
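For example, an application pod in the project could connect with a standard libpq-style URI (the database name `postgres` is the image's default; replace the password placeholder with whatever you configured):

```shell
$ psql 'postgres://postgres:<password>@postgres-service:5432/postgres' -c 'SELECT * FROM PETS;'
```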