Installation prerequisites for Portworx Backup


Prerequisites

The minimum supported size for the Portworx Backup cluster is three worker nodes. Each node must meet the following hardware, software, and network requirements:

Hardware Requirements

• CPU: 4 CPU cores minimum, 8 cores recommended
• RAM: 4 GB minimum, 8 GB recommended
• Backend drive: 307 GB in total

Software Requirements

• Kubernetes: 1.26.x and below, for both on-premises and other cloud providers
• Stork: 23.7.0 and above
• Portworx: 3.0.0
• At least 50 GB of free space on the /root file system of the nodes where Portworx is going to be installed

Network Requirements

• Network connectivity bandwidth: 10 Gbps recommended (1 Gbps minimum)

    NOTE: The above configuration supports up to 2000 backups.

    For more information, refer to Portworx Installation Prerequisites.
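
    As a quick sanity check for two of the requirements above (a minimal sketch; run each command in the appropriate place and adjust it to your environment):

      # From a machine with cluster access: confirm the cluster has at least three worker nodes
      kubectl get nodes

      # On each node where Portworx will be installed: confirm at least 50 GB free on the /root file system
      df -h /root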

    • If you are using an external OIDC provider, you must use certificates signed by a trusted certificate authority.

    • Make sure Helm is installed on the client machine. For installation instructions, refer to Helm.

    • If you want to install Portworx Backup on OpenShift using the restricted SCC, then you must add the service accounts used by Portworx Backup to the restricted SCC. Execute the following oc adm policy add-scc-to-user commands, replacing <YOUR_NAMESPACE> with your namespace:

      oc adm policy add-scc-to-user restricted system:serviceaccount:<YOUR_NAMESPACE>:default
      oc adm policy add-scc-to-user restricted system:serviceaccount:<YOUR_NAMESPACE>:pxcentral-apiserver
      oc adm policy add-scc-to-user restricted system:serviceaccount:<YOUR_NAMESPACE>:px-keycloak-account
      oc adm policy add-scc-to-user restricted system:serviceaccount:<YOUR_NAMESPACE>:px-backup-account

    NOTE: Portworx Backup 2.3.0 and above use MongoDB 5.x versions internally, which require Intel/AMD chipsets that support Advanced Vector Extensions (AVX). If you are deploying Portworx Backup 2.3.0 or above, ensure that your Intel/AMD chipsets support AVX.
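
    One way to confirm AVX support on a Linux worker node (a simple check, not Portworx-specific tooling):

      # Prints "avx" if the CPU flags include AVX; no output means AVX is unsupported
      lscpu | grep -ow avx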

    Prerequisites to install Portworx Backup on Tanzu

    Tanzu Kubernetes Grid (TKG) administrators can create Deployments, StatefulSets, and DaemonSets (privileged pods) in the kube-system and default namespaces, but cannot create them in other namespaces. For example, a Portworx Backup deployment in the central namespace fails because Tanzu Kubernetes clusters include the default PodSecurityPolicy.

    Before you deploy Portworx Backup (for example, in the central namespace), you need to create a role binding for privileged and restricted workload deployment using the following commands:

    kubectl create ns central
    kubectl create rolebinding rolebinding-default-privileged-sa-ns_default --namespace=central --clusterrole=psp:vmware-system-privileged --group=system:serviceaccounts
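
    As a quick check before installing (assuming the central namespace from the example above), you can confirm that the namespace and role binding exist:

      kubectl get ns central
      kubectl get rolebinding -n central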

    Admin namespace in Portworx Backup

    Portworx Backup allows you to configure a dedicated custom admin namespace to store all your backup-related Kubernetes resources, such as secrets, backup and restore CRs, schedule CRs, and so on, for multi-namespace backups and restores.

    Earlier, custom resources related to Portworx Backup were stored in the kube-system namespace along with other key Kubernetes resources. A custom admin namespace segregates critical backup custom resources from kube-system, so that kube-system contains only Kubernetes resources.

    Prerequisites:

    • Ensure that your Stork version is 23.7.0 or newer

    For more information on installing Stork, refer to Stork in Portworx Backup.
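
    If you are not sure which Stork version is running, one way to check (assuming Stork runs as a deployment named stork in the kube-system namespace, as in the examples below) is to inspect the container image tag:

      kubectl get deployment stork -n kube-system -o jsonpath='{.spec.template.spec.containers[0].image}'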

    Operator method

    If you have installed Stork using the Portworx Operator method, perform the following steps to configure a namespace of your choice as the admin namespace:

    1. Run the following command to edit the Stork configuration within the storagecluster:

      kubectl edit storagecluster <clusterspec-name> -n <stork-deployed-namespace>
    2. In the editor, update the arguments in the Stork spec to specify the cluster admin namespace using the admin-namespace parameter:

      stork:
        args:
          admin-namespace: <custom-admin-namespace>
          webhook-controller: "true"

      In the above example, the namespace you specify in place of <custom-admin-namespace> is configured as the admin namespace. You can verify the applied arguments after completing these steps, as shown below.

    3. Save the changes and wait until all the Stork pods are recreated and in the Running state:

      kubectl get pods -l name=stork -n kube-system

      Output:

      NAME                     READY   STATUS    RESTARTS   AGE
      stork-6fdd7d567b-4w86q   1/1     Running   0          63s
      stork-6fdd7d567b-lj47r   1/1     Running   0          60s
      stork-6fdd7d567b-m6pk7   1/1     Running   0          59s
      CAUTION: After you configure a custom admin namespace, make sure that you do not delete it and that it always exists. Backups fail if the admin namespace does not exist.
      NOTE: Admin namespace configuration is a one-time activity; once a custom admin namespace is configured, do not reconfigure another namespace as the admin namespace.
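
    To double-check that the StorageCluster now carries the argument (a minimal sketch using the same placeholders as above):

      kubectl get storagecluster <clusterspec-name> -n <stork-deployed-namespace> -o jsonpath='{.spec.stork.args}'

    The output should include the admin-namespace value you configured.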

    Deployment method

    If you have installed Stork using the deployment method, perform the following steps to configure the admin namespace:

    1. Edit the Stork deployment from Kubernetes CLI:

      kubectl edit deployment stork -n <stork-deployed-namespace>
    2. In the deployment spec, add the --admin-namespace=<custom-admin-namespace> parameter as a new entry in the command list under the containers section:

      containers:
        - command:
          - /stork
          - --admin-namespace=<custom-admin-namespace>
          - --verbose
          - --leader-elect=true
          - --health-monitor-interval=120
          - --webhook-controller=true
    3. Save the changes and wait for all the Stork pods to be recreated and in the Running state (you can verify that the parameter was applied as shown after these steps):

      kubectl get pods -l name=stork -n kube-system

      Output:

      NAME                     READY   STATUS    RESTARTS   AGE
      stork-6fdd7d567b-4w86q   1/1     Running   0          63s
      stork-6fdd7d567b-lj47r   1/1     Running   0          60s
      stork-6fdd7d567b-m6pk7   1/1     Running   0          59s
      NOTE: Admin namespace configuration remains unchanged across Stork upgrades, regardless of the Stork deployment method.
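
    To confirm that the parameter was applied to the Stork deployment (same placeholders as in the steps above):

      kubectl get deployment stork -n <stork-deployed-namespace> -o jsonpath='{.spec.template.spec.containers[0].command}'

    The printed command list should include --admin-namespace=<custom-admin-namespace>.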
