Database¶
RAIL can run Postgres clusters that provide database services for your application. These are provided by the CloudNativePG operator, which is managed by RAIL.
Other databases currently have no direct support on the RAIL platform.
Create a new Postgres cluster¶
Postgres clusters are by default created with a database called app, owned by a
user that is also called app. To connect to it you also need to set a password
for that user. The password is set up with a secret like this one:
apiVersion: v1
kind: Secret
metadata:
  name: db-app-user
type: kubernetes.io/basic-auth
stringData:
  username: app
  password: pass1
and then you can set up the cluster with a reference to this secret with:
apiVersion: postgresql.cnpg.io/v1
kind: Cluster
metadata:
  name: db
spec:
  instances: 2
  monitoring:
    enablePodMonitor: true
  bootstrap:
    initdb:
      database: app
      owner: app
      secret:
        name: db-app-user
  storage:
    size: 500Mi
This sets up a service with a name derived from the cluster name, suffixed by
-rw (in this case db-rw), that you can connect to. In another Pod in the
same namespace where the psql command is available you can connect to the
database like this:
$ psql --host=db-rw --username=app
Password for user app:
psql (16.9 (Ubuntu 16.9-0ubuntu0.24.04.1), server 17.0 (Debian 17.0-1.pgdg110+1))
WARNING: psql major version 16, server major version 17.
Some psql features might not work.
SSL connection (protocol: TLSv1.3, cipher: TLS_AES_256_GCM_SHA384, compression: off)
Type "help" for help.
app=>
If you don’t provide the secret reference in the bootstrap element as shown
above, then a random password is assigned and saved in a new secret called
db-app. This secret also contains connection strings in various forms.
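The generated secret can be handy for wiring the connection into your application. As a sketch, assuming CloudNativePG's usual app-secret layout (which includes a uri key holding a full connection string), you could read it like this:

```shell
# Sketch: assumes the generated secret is named db-app and follows
# CloudNativePG's standard key layout; verify the actual keys with
# `kubectl -n adm-it-xxx-ns describe secret db-app` first.
kubectl -n adm-it-xxx-ns get secret db-app \
  -o jsonpath='{.data.uri}' | base64 -d
```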
Admin access¶
The simplest way to obtain access to the database as a RAIL user is to just
start psql directly in one of the pods that are part of the cluster.
In this case you can connect without further credentials.
From the RAIL login host this should work:
$ kubectl -n adm-it-xxx-ns get pods
NAME   READY   STATUS    RESTARTS   AGE
db-1   2/2     Running   0          45m
db-2   2/2     Running   0          45m
$ kubectl -n adm-it-xxx-ns exec -it db-1 -- psql -d app
Defaulted container "postgres" out of: postgres, bootstrap-controller (init), plugin-barman-cloud (init)
psql (17.5 (Debian 17.5-1.pgdg110+1))
Type "help" for help.
app=#
A quick way to grab a copy of the app database as SQL:
$ kubectl -n adm-it-xxx-ns exec db-1 -- pg_dump app >app.sql
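The dump can be replayed the same way in reverse; as a sketch, to load app.sql back into the app database (note the -i flag so stdin is forwarded to psql):

```shell
kubectl -n adm-it-xxx-ns exec -i db-1 -- psql app <app.sql
```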
Backup¶
You can set up backups with plugin-barman-cloud, which uses Barman, a popular solution for managing PostgreSQL backups, under the hood. Additional configuration options are described in the plugin usage docs.
ObjectStore creation¶
First you need an ObjectStore, the resource through which the Barman plugin is
configured. In this example the credentials live in a Secret called nrec-s3,
and we reference object storage in NREC’s BGO region.
apiVersion: v1
kind: Secret
metadata:
  name: nrec-s3
type: Opaque
stringData:
  ACCESS_KEY_ID: verylongstring
  ACCESS_SECRET_KEY: verylongsecret
apiVersion: barmancloud.cnpg.io/v1
kind: ObjectStore
metadata:
  name: nrec-store
spec:
  # This will ensure that barman retains at least one backup older than 30 days
  retentionPolicy: "30d"
  configuration:
    destinationPath: s3://postgres-backups/
    endpointURL: https://object.api.bgo.nrec.no
    s3Credentials:
      accessKeyId:
        name: nrec-s3
        key: ACCESS_KEY_ID
      secretAccessKey:
        name: nrec-s3
        key: ACCESS_SECRET_KEY
    wal:
      compression: zstd
    data:
      compression: bzip2
  instanceSidecarConfiguration:
    env:
      - name: AWS_REQUEST_CHECKSUM_CALCULATION
        value: when_required
      - name: AWS_RESPONSE_CHECKSUM_VALIDATION
        value: when_required
The instanceSidecarConfiguration element above is required due to plugin-barman-cloud#393
Enable the Barman Cloud Plugin¶
Then, in your Cluster spec, you can enable the barman plugin and set it as the designated WAL archiver:
apiVersion: postgresql.cnpg.io/v1
kind: Cluster
metadata:
  name: cluster-example
spec:
  # Other cluster settings...
  plugins:
    - name: barman-cloud.cloudnative-pg.io
      isWALArchiver: true
      parameters:
        barmanObjectName: nrec-store
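If you have the cnpg kubectl plugin available on the login host (an assumption; installing it is not covered here), its status command gives a quick overview of the cluster, including whether continuous archiving is working:

```shell
kubectl -n adm-it-xxx-ns cnpg status cluster-example
```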
Schedule base backups¶
The plugin will now start shipping write-ahead logs to S3, but you also need a base backup to apply these logs to. This can be achieved with the cnpg Backup resource or, more likely, with the ScheduledBackup resource. Base backups can also be taken as a VolumeSnapshot rather than through the Barman plugin, but we’ll use Barman in this example.
How often you take base backups affects both your S3 usage and how long a full recovery takes; the same goes for the compression options.
apiVersion: postgresql.cnpg.io/v1
kind: ScheduledBackup
metadata:
  name: cluster-example-backup
spec:
  immediate: true # Also trigger a backup when creating this resource
  cluster:
    name: cluster-example
  schedule: '0 0 18 */6 * *' # At 18:00 every 6th day of the month
  backupOwnerReference: self
  method: plugin
  pluginConfiguration:
    name: barman-cloud.cloudnative-pg.io
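Outside the schedule you can also trigger a one-off base backup with the plain Backup resource, driving the same plugin; a minimal sketch (the resource name is arbitrary):

```yaml
apiVersion: postgresql.cnpg.io/v1
kind: Backup
metadata:
  name: cluster-example-backup-now
spec:
  cluster:
    name: cluster-example
  method: plugin
  pluginConfiguration:
    name: barman-cloud.cloudnative-pg.io
```

You can follow its progress with kubectl get backup.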
How to restore a cluster¶
Once you have a base backup and working WAL shipping, it’s possible to restore the cluster. If you want to do this it’s probably a good idea to read the recovery documentation.
The manifest below creates a new Cluster, initializes it from the latest base backup, and then applies each write-ahead log to it:
apiVersion: postgresql.cnpg.io/v1
kind: Cluster
metadata:
  name: db-restored
spec:
  instances: 2
  imagePullPolicy: IfNotPresent
  bootstrap:
    recovery:
      source: source
      #recoveryTarget:
      #  # Time-based target for the recovery
      #  targetTime: "2023-08-11 11:14:21.00000+02"
  externalClusters:
    - name: source
      plugin:
        name: barman-cloud.cloudnative-pg.io
        parameters:
          barmanObjectName: nrec-store
          serverName: cluster-example
  storage:
    size: 1Gi
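As with the original cluster, the restored one gets a -rw service derived from its name, so you can smoke-test it from a Pod in the same namespace once it is up. Note that user passwords come from the restored data, so the app user's password is whatever it was in the source cluster:

```shell
psql --host=db-restored-rw --username=app app
```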