Installation and upgrade v1
OpenShift
For instructions on how to install EDB Postgres Distributed for Kubernetes on Red Hat OpenShift Container Platform, please refer to the "OpenShift" section.
Installation on Kubernetes
Obtaining an EDB subscription token
Important
You must obtain an EDB subscription token to install EDB Postgres Distributed for Kubernetes. Without a token, you will not be able to access the EDB private software repositories.
Installing EDB Postgres Distributed for Kubernetes requires an EDB Repos 2.0 token to gain access to the EDB private software repositories.
You can obtain the token by visiting your EDB Account Profile. You will have to sign in if you are not already logged in.
Your account profile page displays the token next to the Repos 2.0 Token label. By default, the token is obscured; click the "Show" button (an eye icon) to reveal it.
Your token entitles you to access one of two repositories: standard or enterprise.
- standard: Includes the operators and the EDB Postgres Extended operand images.
- enterprise: Includes the operators and the EDB Postgres Advanced and EDB Postgres Extended operand images.
Set the repository name that matches your subscription as the environment variable EDB_SUBSCRIPTION_PLAN, then set the Repos 2.0 token as the environment variable EDB_SUBSCRIPTION_TOKEN.
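For example, for a standard subscription (the placeholder must be replaced with your own token):

```sh
# Use "enterprise" instead if that is the repository your subscription grants
export EDB_SUBSCRIPTION_PLAN=standard
export EDB_SUBSCRIPTION_TOKEN='<your-repos-2.0-token>'
```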
Warning
The token is sensitive information. Please ensure that you don't expose it to unauthorized users.
You can now proceed with the installation.
Using the Helm Chart
The operator can be installed using the provided Helm chart.
Info
For more details on the Helm chart, see the Helm chart repo documentation.
These installation instructions cover using helm to deploy a set of default images with the latest available version. If you want to specify a different operand or proxy image, see Customize Operand Distribution or Version before continuing with the installation.
Prerequisites
- Install Helm. Follow these instructions to install it on your system.
- Install kubectl. See Install Tools for instructions.
- Create at least one Kubernetes cluster.
Add the repository
Use helm to add the repository containing all images:
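A minimal sketch of the command; the repository name edb and the URL below are assumptions to verify against your environment:

```sh
# Add the EDB Helm repository and refresh the local chart index
helm repo add edb \
  https://enterprisedb.github.io/edb-postgres-for-kubernetes-charts/
helm repo update
```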
Deploy the images
The helm command below will deploy the operator to the namespace pgd-operator-pgd. The container images for both the operator and the selected operand are downloaded from the registry specified by global.repository. The credentials used to pull the images are specified by image.imageCredentials.username and image.imageCredentials.password.
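A sketch of such a command, assuming the chart is named edb/edb-postgres-distributed-for-kubernetes, the release is called pgd-operator, and the pull credentials follow a k8s_&lt;plan&gt; username convention; adapt these to your environment:

```sh
helm upgrade --install pgd-operator \
  --namespace pgd-operator-pgd \
  --create-namespace \
  --set image.imageCredentials.username=k8s_${EDB_SUBSCRIPTION_PLAN} \
  --set image.imageCredentials.password=${EDB_SUBSCRIPTION_TOKEN} \
  edb/edb-postgres-distributed-for-kubernetes
```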
Note
The Helm command above also installs cert-manager. If you already have cert-manager installed, add --set cert-manager.enabled=false.
For more information, see Deploying the PG4K-PGD operators and cert-manager individually.
Create a certificate issuer
Be sure to create a cert issuer before you start deploying PGD clusters. The Helm chart prompts you to do this, but in case you miss it, you can, for example, run:
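For example, the following applies a minimal self-signed cert-manager ClusterIssuer; the issuer name selfsigned-issuer is an illustrative choice:

```sh
kubectl apply -f - <<EOF
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: selfsigned-issuer
spec:
  selfSigned: {}
EOF
```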
With the operators and a self-signed cert issuer deployed, you can start creating PGD clusters. See the Quick start for an example.
Directly using the operator manifest
Install the cert-manager
EDB Postgres Distributed for Kubernetes requires cert-manager 1.10 or higher. Follow the installation guide, or use the script below to install cert-manager if you don't already have it:
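For example, the official cert-manager static manifest can be applied directly; the version shown below is illustrative, and any release at or above 1.10 will do:

```sh
kubectl apply -f \
  https://github.com/cert-manager/cert-manager/releases/download/v1.14.5/cert-manager.yaml
```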
Install the EDB pull secret
Before installing EDB Postgres Distributed for Kubernetes, you need to create a pull secret for EDB software in the pgd-operator-system namespace.
The pull secret must be saved in the namespace where the operator will reside. Create the pgd-operator-system namespace using this command:
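```sh
# Create the namespace that will host the operator and its pull secret
kubectl create namespace pgd-operator-system
```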
To create the pull secret itself, run the following command:
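The sketch below assumes the EDB container registry at docker.enterprisedb.com, a username of the form k8s_&lt;plan&gt;, and the secret name edb-pull-secret; verify these details against your EDB account:

```sh
kubectl create secret -n pgd-operator-system docker-registry edb-pull-secret \
  --docker-server=docker.enterprisedb.com \
  --docker-username=k8s_${EDB_SUBSCRIPTION_PLAN} \
  --docker-password=${EDB_SUBSCRIPTION_TOKEN}
```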
Install the EDB Postgres Distributed for Kubernetes operator
Now that the pull secret has been added to the namespace, the operator can be installed like any other resource in Kubernetes, through a YAML manifest applied via kubectl.
There are two different manifests available depending on your subscription plan:
- Standard: The latest standard operator manifest.
- Enterprise: The latest enterprise operator manifest.
You can install the manifest for the latest version of the operator by running:
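```sh
# <operator-manifest-url> is a placeholder: use the standard or enterprise
# manifest link above, depending on your subscription plan
kubectl apply --server-side -f <operator-manifest-url>
```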
You can verify that with:
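```sh
# Deployment name and namespace shown match a manifest-based installation
kubectl get deployment -n pgd-operator-system pgd-operator-controller-manager
```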
Note
As EDB Postgres Distributed for Kubernetes internally manages each PGD node using the Cluster resource defined by EDB Postgres for Kubernetes, the EDB Postgres for Kubernetes operator must also be installed as a dependency. The manifest mentioned above contains a well-tested version of the EDB Postgres for Kubernetes operator, which is installed into the same namespace as the EDB Postgres Distributed for Kubernetes operator.
Details about the deployment
In Kubernetes, the operator is by default installed in the pgd-operator-system namespace as a Kubernetes Deployment. The name of this deployment depends on the installation method.
When installed through the manifest, by default, it is called pgd-operator-controller-manager.
When installed via Helm, by default, the deployment name is derived from the Helm release name, with the suffix -edb-postgres-distributed-for-kubernetes appended (e.g., <name>-edb-postgres-distributed-for-kubernetes).
Note
With Helm you can customize the name of the deployment via the fullnameOverride field in the "values.yaml" file.
You can get more information using the describe command in kubectl:
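```sh
kubectl describe deployment -n pgd-operator-system pgd-operator-controller-manager
```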
As with any Deployment, it sits on top of a ReplicaSet and supports rolling upgrades. The default configuration of the EDB Postgres Distributed for Kubernetes operator comes with a Deployment of a single replica, which is suitable for most installations. In case the node where the pod is running is not reachable anymore, the pod will be rescheduled on another node.
If you require high availability at the operator level, you can specify multiple replicas in the Deployment configuration, given that the operator supports leader election. You can also take advantage of taints and tolerations to make sure that the operator does not run on the same nodes where the actual PostgreSQL clusters are running (this might even include the control plane for self-managed Kubernetes installations).
Operator configuration
You can change the default behavior of the operator by overriding some default options. For more information, please refer to the "Operator configuration" section.
Upgrade
Important
Please carefully read the release notes before performing an upgrade, as some versions might require extra steps.
The EDB Postgres Distributed for Kubernetes (PGD4K) operator relies on the EDB Postgres for Kubernetes (PG4K) operator to manage clusters. To upgrade the EDB PGD4K operator, the PG4K operator must also be upgraded to a supported version. It is recommended to keep the EDB PG4K operator at the Long Term Support (LTS) version, as this is the tested version compatible with the PGD4K operator.
To upgrade the EDB Postgres Distributed (PGD) for Kubernetes operator, follow these steps:
- Upgrade the EDB Postgres for Kubernetes operator as a dependency
- Upgrade the PGD4K controller and the related Kubernetes resources
Note that when you upgrade the EDB Postgres for Kubernetes operator, the instance manager on each PGD node will also be upgraded. For more details, please refer to the EDB Postgres for Kubernetes upgrades documentation.
Unless stated otherwise in the release notes, these steps are normally performed, for plain Kubernetes installations, by applying the manifest of the newer version.
Compatibility among versions
EDB Postgres Distributed for Kubernetes (PGD4K) follows semantic versioning. Every release of the operator within the same API version is compatible with the previous one. The current API version is v1, corresponding to versions 1.x.y of the operator.
Each minor version of the PGD4K operator tracks a specific PG4K LTS release.
For example:
- The PGD4K operator 1.0.x is formally tested against PG4K LTS 1.22.
- The PGD4K operator 1.1.x is formally tested against PG4K LTS 1.25.
The PGD4K operator has the same support scope as the PG4K LTS release it tracks.
In addition to new features, new versions of the operator contain bug fixes and stability enhancements. Because of this, we strongly encourage users to upgrade to the latest version of the operator, as each version is released in order to maintain the most secure and stable Postgres environment.
The release notes page contains a detailed list of the changes introduced in every released version of EDB Postgres Distributed for Kubernetes, and it must be read before upgrading to a newer version of the software.
Most versions are directly upgradable; in that case, applying the newer manifest is sufficient for plain Kubernetes installations.
When versions are not directly upgradable, the old version (applied for both PGD4K and PG4K) needs to be removed before installing the new one. This won't affect user data but only the operator itself.
Upgrading to 1.0.1 on Red Hat OpenShift
On the OpenShift platform, starting from version 1.0.1, the EDB Postgres Distributed for Kubernetes operator is required to reference the LTS releases of the PG4K operator. For example, PGD4K v1.0.1 specifically references the PG4K LTS version 1.22.x (any patch release of the 1.22 LTS branch is valid).
To upgrade PGD4K operator, ensure that the PG4K operator is upgraded to a supported version first. Only then can the PGD4K operator be upgraded.
Important
As PGD4K v1.0.0 does not require referencing an LTS release of the PG4K operator, it may have been installed with any PG4K version. If the installed PG4K version is less than 1.22, you can upgrade to PG4K version 1.22 by changing the subscription channel. For more information, see upgrading the operator. However, if the installed PG4K version is greater than 1.22, you will need to reinstall the operator from the stable-1.22 channel to upgrade PGD4K to v1.0.1.
Server-side apply of manifests
To ensure compatibility with Kubernetes 1.29 and later versions, EDB Postgres Distributed for Kubernetes now requires the use of server-side apply when deploying the operator manifest.
While this installation method poses no challenges for new deployments, updating existing operator manifests using the --server-side option may result in errors resembling the following:
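An illustrative conflict error; the exact field paths and manager names will vary with your cluster:

```text
error: Apply failed with 1 conflict: conflict with "kubectl-client-side-apply" using apiextensions.k8s.io/v1: .spec.conversion.strategy
```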
If such errors arise, they can be resolved by explicitly specifying the --force-conflicts option to enforce conflict resolution:
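```sh
# <operator-manifest-url> is a placeholder for the manifest you are applying
kubectl apply --server-side --force-conflicts -f <operator-manifest-url>
```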
Henceforth, kube-apiserver will be automatically acknowledged as a recognized manager for the CRDs, eliminating the need for any further manual intervention on this matter.