Environment Disaster Recovery
Ensure that the Cloud Scale deployment has been cleaned up in the cluster.
Perform the following to verify the cleanup process:
Ensure that the namespaces associated with the Cloud Scale deployment are deleted by using the following command:
kubectl get ns
Confirm that the storageclass, pv, clusterroles, clusterrolebindings, and CRDs associated with the Cloud Scale deployment are deleted by using the following command:
kubectl get sc,pv,pvc
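The command above covers the storage objects; cluster-scoped roles and CRDs can be checked with a small filter. The following is a hedged sketch: with a live cluster you would pipe the output of kubectl get clusterroles,clusterrolebindings,crd -o name into it, and the name patterns are assumptions about typical Cloud Scale resource naming.

```shell
# Hypothetical name patterns; adjust to your deployment's naming convention.
check_cleanup() {
  if grep -qi 'netbackup\|msdp\|cloudscale'; then
    echo "leftover Cloud Scale resources found"
  else
    echo "cleanup OK"
  fi
}

# Simulated listing from a cleaned-up cluster (stand-in for kubectl output):
printf 'clusterrole.rbac.authorization.k8s.io/admin\n' | check_cleanup
```

Any non-empty match indicates resources that still need to be deleted before proceeding with the restore.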
(For EKS) If the deployment is in a different AZ, update the subnet name in the cloudscale-values.yaml file. For example, if the earlier subnet was subnet-az1 and the new subnet is subnet-az2, the cloudscale-values.yaml file contains a loadBalancerAnnotations section as follows:

loadBalancerAnnotations:
  service.beta.kubernetes.io/aws-load-balancer-subnets: subnet-az1

Update it to the new subnet name as follows:

loadBalancerAnnotations:
  service.beta.kubernetes.io/aws-load-balancer-subnets: subnet-az2

Update all IPs used for the Primary, MSDP, Media, and Snapshot Manager servers in their respective sections.
Note:
Change of FQDN is not supported.
The following example shows how to change the IP for the Primary server:

Old entry in the cloudscale-values.yaml file:

ipList:
  - ipAddr: 12.123.12.123
    fqdn: primary.netbackup.com

Update the old entry as follows:

ipList:
  - ipAddr: 34.245.34.234
    fqdn: primary.netbackup.com

Similarly, perform the above procedure for the MSDP, Media, and Snapshot Manager servers.
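The IP swap described above can be scripted. The following is a hedged sketch using the example IPs from this guide; a scratch sample file stands in for the real cloudscale-values.yaml.

```shell
# Hedged sketch: swap the Primary server's old ipAddr for the new one with sed.
# The file content below mirrors the example entry; use your real values file.
printf 'ipList:\n  - ipAddr: 12.123.12.123\n    fqdn: primary.netbackup.com\n' > cloudscale-values-sample.yaml

OLD_IP="12.123.12.123"
NEW_IP="34.245.34.234"
sed -i "s/ipAddr: ${OLD_IP}/ipAddr: ${NEW_IP}/" cloudscale-values-sample.yaml

grep 'ipAddr' cloudscale-values-sample.yaml
```

Keep a backup copy of the original file before editing, since the same file is reused across later restore steps.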
Ensure that the IPs in the ipList entries of the Primary, Media, MSDP, and Snapshot Manager server sections of the cloudscale-values.yaml file that was saved during backup are free and resolvable. If the deployment is in a different AZ, the FQDN must remain the same but the IP can change; ensure that the same FQDNs can map to the different IPs.

(For EKS) Update spec > primaryServer > storage > catalog > storageClassName in the cloudscale-values.yaml file with the new EFS ID that was created for the primary server.

Ensure that the nodeSelector entries in the cloudscale-values.yaml and operators-values.yaml files that were noted down during backup are present in the cluster with the required configurations.

Perform the steps in the following section for deploying DBaaS:
Verify that the values in secret_backup.yaml, storageclass_backup.yaml, CPServerLog_storageclass_backup.yaml, and msdpopstorageclass_backup.yaml are correct. If a non-default StorageClass or updated passwords were used during deployment, ensure that these updated values are also included in the cloudscale-values.yaml file.

(For DBaaS) Confirm that the values in the secretproviderclass_backup.yaml file are correctly reflected under global > dbsecretProviderClass in the backed-up cloudscale-values.yaml file, and update the admin secret ARN in the cloudscale-values.yaml file with the new ARN provided in the AWS console.

Install cert-manager (the cert-manager version used during restore must match the version used during backup):
helm repo add jetstack https://charts.jetstack.io --force-update
helm upgrade -i -n cert-manager cert-manager jetstack/cert-manager --version 1.18.2 --set webhook.timeoutSeconds=30 --set installCRDs=true --wait --create-namespace
Create the namespace that is specified in the cloudscale-values.yaml file:

kubectl create ns netbackup
Install trust-manager (the trust-manager version used during restore must match the version used when the backup setup was deployed):
kubectl create namespace trust-manager
helm upgrade -i -n trust-manager trust-manager jetstack/trust-manager --set app.trust.namespace=netbackup --version v0.19.0 --wait
(For EKS) Update the EFS ID in the backed-up nb-file-premium StorageClass to the new EFS ID. Then, install the operator using the operator-values.yaml file that was backed up during the disaster recovery backup procedure:

helm upgrade --install operators operators-<version>.tgz -f operator-values.yaml --create-namespace --namespace netbackup-operator-system
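The EFS ID swap in the backed-up StorageClass can be sketched as follows. The file name, parameter key, and both EFS IDs are hypothetical placeholders; take the real values from your backed-up manifest and the new EFS file system.

```shell
# Hedged sketch: replace the old EFS ID with the new one in a backed-up
# StorageClass manifest (placeholder content stands in for the real file).
OLD_EFS="fs-0123456789abcdef0"   # hypothetical old EFS ID
NEW_EFS="fs-0fedcba9876543210"   # hypothetical new EFS ID
printf 'kind: StorageClass\nparameters:\n  fileSystemId: %s\n' "$OLD_EFS" > nb-file-premium-sc.yaml

sed -i "s/${OLD_EFS}/${NEW_EFS}/" nb-file-premium-sc.yaml
grep fileSystemId nb-file-premium-sc.yaml
```

After the edit, apply the StorageClass with kubectl apply before running the helm command above.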
(Required only for DBaaS deployment) Snapshot Manager restore steps:
For AKS
Navigate to the snapshot resource created during backup under the recovered cluster's infrastructure resource group (for example, MC_<clusterRG>_<cluster name>_<cluster_region>).

Note down the resource ID of this disk. It can be obtained from the Azure portal or the az CLI.

Format of the resource ID:

/subscriptions/<subscription id>/resourceGroups/<MC_<clusterRG>_<cluster name>_<cluster_region>>/providers/Microsoft.Compute/disks/<disk name>

Create a static PV using the resource ID of the backed-up disk. Copy the YAML below, update the PV name, disk size, namespace, and storage class name in the pgsql-pv.yaml file, and apply it:

pgsql-pv.yaml:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: <pv name>
spec:
  capacity:
    storage: <size of the disk>
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: <storage class name>
  claimRef:
    name: psql-pvc
    namespace: <environment namespace>
  csi:
    driver: disk.csi.azure.com
    readOnly: false
    volumeHandle: <Resource ID of the Disk>

Example of the pgsql-pv.yaml file:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: psql-pv
spec:
  capacity:
    storage: 30Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: gp2-immediate
  claimRef:
    name: psql-pvc
    namespace: nbux
  csi:
    driver: disk.csi.azure.com
    readOnly: false
    volumeHandle: /subscriptions/a332d749-22d8-48f6-9027-ff04b314e840/resourceGroups/MC_vibha-vasantraohadule-846288_auto_aks-vibha-vasantraohadule-846288_eastus2/providers/Microsoft.Compute/disks/psql-disk

Create psql-pv using the following command:
kubectl apply -f <path_to_psql_pv.yaml> -n <environment-namespace>
Ensure that the newly created PV is in Available state before restoring the Snapshot Manager server as follows:
kubectl get pv | grep psql-pvc
>> psql-pv 30Gi RWO managed-premium-disk Available nbu/psql-pvc 50s
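The disk resource ID used as the PV's volumeHandle follows the format shown above and can be assembled from its parts. In the sketch below, the subscription ID is the sample value from this guide, while the resource group and disk name are hypothetical placeholders; take yours from the Azure portal or the az CLI.

```shell
# Hedged sketch: build the Azure disk resource ID used as volumeHandle.
SUB_ID="a332d749-22d8-48f6-9027-ff04b314e840"   # sample subscription ID from this guide
NODE_RG="MC_myRG_myCluster_eastus2"             # hypothetical MC_<clusterRG>_<cluster name>_<region>
DISK_NAME="psql-disk"                           # name of the backed-up disk

RESOURCE_ID="/subscriptions/${SUB_ID}/resourceGroups/${NODE_RG}/providers/Microsoft.Compute/disks/${DISK_NAME}"
echo "${RESOURCE_ID}"
```

Paste the resulting string into the volumeHandle field of pgsql-pv.yaml before applying it.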
For EKS
In the AWS Console, navigate to the snapshot taken in backup step 2 and create a volume from it (expand the Actions drop-down) in the same availability zone (AZ) as the volume attached to psql-pvc (mentioned in step 1 of the backup steps).

Note down the volumeID.

If the deployment is in a different availability zone (AZ), change the availability zone (AZ) for the volume and update the volumeID accordingly.
Create a static PV using the backed-up volumeID. Copy the YAML below, update the PV name, disk size, namespace, and storage class name in the pgsql-pv.yaml file, and apply it:

pgsql-pv.yaml:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: <pv name>
spec:
  accessModes:
    - ReadWriteOnce
  awsElasticBlockStore:
    fsType: <fs type>
    volumeID: <backed up volumeID>  # prefix with aws://<az-code>/, for example aws://us-east-2b/
  capacity:
    storage: 30Gi
  claimRef:
    apiVersion: v1
    kind: PersistentVolumeClaim
    name: psql-pvc
    namespace: <netbackup namespace>
  persistentVolumeReclaimPolicy: Retain
  storageClassName: <storage class name>
  volumeMode: Filesystem

Sample pgsql-pv.yaml file:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: psql-pv
spec:
  accessModes:
    - ReadWriteOnce
  awsElasticBlockStore:
    fsType: ext4
    volumeID: aws://us-east-2b/vol-0d86d2ca38f231ede
  capacity:
    storage: 30Gi
  claimRef:
    apiVersion: v1
    kind: PersistentVolumeClaim
    name: psql-pvc
    namespace: nbu
  persistentVolumeReclaimPolicy: Retain
  storageClassName: gp2-immediate
  volumeMode: Filesystem

Create psql-pv using the following command:
kubectl apply -f <path_to_psql_pv.yaml> -n <netbackup-namespace>
Ensure that the newly created PV is in Available state before restoring the Snapshot Manager server as follows:
kubectl get pv | grep psql-pvc
>>> psql-pv 30Gi RWO gp2-immediate Available nbu/psql-pvc 50s
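The aws://<az-code>/ prefix that the PV's volumeID requires can be built as a quick sketch, using the sample AZ and volume ID from this guide:

```shell
# Hedged sketch: the PV's volumeID must carry the aws://<az-code>/ prefix.
AZ="us-east-2b"                   # availability zone of the restored volume
VOL_ID="vol-0d86d2ca38f231ede"    # volume ID noted from the AWS console

PV_VOLUME_ID="aws://${AZ}/${VOL_ID}"
echo "${PV_VOLUME_ID}"
```

Paste the resulting string into the volumeID field of pgsql-pv.yaml before applying it.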
Update the cloudscale-values.yaml file as follows:

Add paused: true for the MSDP and Media servers in the cloudscale-values.yaml file, that is, set environment > mediaServers > paused: true and environment > msdpScaleouts > paused: true.

Note:
(For DBaaS) Ensure that createServiceAccount is set to false in the cloudscale-values.yaml file. Do not install cpServer.
Create a copy of the cloudscale-values.yaml file and name it cloudscale-values-copy.yaml. Store this copy, as it is required during the Snapshot Manager installation.

Remove the entire cpServer section from the cloudscale-values.yaml file, and add disabled: true under the cpServer section.
Install Cloud Scale using the updated cloudscale-values.yaml file that was created in the above step:

helm upgrade --install cloudscale cloudscale-<version>.tgz -f cloudscale-values.yaml --create-namespace --namespace netbackup

Once the primary server is up and running, perform the following:
Exec into the primary pod by executing the kubectl exec -it -n <namespace> <primary-pod-name> -- /bin/bash command.

Increase the debug log level on the primary server (set VERBOSE = 6 in the bp.conf file).

Create a DRPackages directory at the persisted location using the mkdir /mnt/nbdata/usr/openv/drpackage command.

Deactivate the NetBackup health probes using the /opt/veritas/vxapp-manage/nb-health deactivate command.
(For containerized Postgres) Execute the following command in the NetBackup namespace to scale down the PostgreSQL StatefulSet replicas to 0:
kubectl scale sts nb-postgresql -n netbackup --replicas=0
(For DBaaS) Scale down or power off the DBaaS Server/Instance.
Exec into the primary pod using the kubectl exec -it -n <namespace> <primary-pod-name> -- /bin/bash command and stop the NetBackup services using the following command:
/usr/openv/netbackup/bin/bp.kill_all
Check if all processes are terminated correctly using /usr/openv/netbackup/bin/bpps command.
If some processes remain, use the following command to forcefully terminate them:
kill -9 <process id>
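The leftover-process cleanup above can be scripted. The following is a hedged sketch that assumes the PID sits in the second column of bpps-style output; adjust the field position if your bpps output differs, and feed it the lines for processes that did not exit.

```shell
# Hedged sketch: kill -9 every PID read from the second column of stdin.
kill_leftovers() {
  while read -r _ pid _; do
    kill -9 "$pid" 2>/dev/null || true
  done
}

# Simulated leftover process standing in for a stuck NetBackup daemon:
sleep 300 &
LEFTOVER=$!
printf 'nbuser %s nbwmc\n' "$LEFTOVER" | kill_leftovers
wait "$LEFTOVER" 2>/dev/null
kill -0 "$LEFTOVER" 2>/dev/null || echo "process terminated"
```

Use this only after bp.kill_all has been given a chance to stop the services cleanly.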
Perform the following steps for NBATD pod recovery:
Create the DRPackages directory at the persisted location /mnt/nblogs/ in the nbatd pod by executing the following commands:

kubectl exec -it -n <namespace> <nbatd-pod-name> -- /bin/bash
mkdir /mnt/nblogs/DRPackages
Copy the DR files that were saved when performing the DR backup to the nbatd pod at /mnt/nblogs/DRPackages using the following command:

kubectl cp <Path_of_DRPackages_on_host_machine> <nbatd-pod-namespace>/<nbatd-pod-name>:/mnt/nblogs/DRPackages
Execute the following steps in the nbatd pod:
Execute the kubectl exec -it -n <namespace> <nbatd-pod-name> -- /bin/bash command.
Deactivate nbatd health probes using the /opt/veritas/vxapp-manage/nbatd_health.sh disable command.
Stop the nbatd service using /opt/veritas/vxapp-manage/nbatd_stop.sh 0 command.
Execute the /opt/veritas/vxapp-manage/nbatd_identity_restore.sh -infile /mnt/nblogs/DRPackages/<DR package name> command.
Copy the previously copied disaster recovery files back to the primary pod at the /mnt/nbdata/usr/openv/drpackage location using the following command:

kubectl cp <Path_of_DRPackages_on_host_machine> <primary-pod-namespace>/<primary-pod-name>:/mnt/nbdata/usr/openv/drpackage
Execute the following steps after exec'ing into the primary server pod:

Change the ownership of the files in /mnt/nbdb/usr/openv/drpackage using the chown nbsvcusr:nbsvcusr <file-name> command.

Execute the /usr/openv/netbackup/bin/admincmd/nbhostidentity -import -infile /mnt/nbdb/usr/openv/drpackage/.drpkg command.

To clear the NetBackup host cache, run the bpclntcmd -clear_host_cache command.
Restart the pods as follows:
Navigate to the VRTSk8s-netbackup-<version>/scripts folder.

Run the cloudscale_restart.sh script with the required action as follows:

./cloudscale_restart.sh <action> <namespace>
Provide the namespace and one of the following actions:

stop: Stops all the services under the primary server (waits until all the services are stopped).

start: Starts all the services and waits until the services are up and running under the primary server.

restart: Stops the services and waits until all the services are down, then starts all the services and waits until they are up and running.
Note:
Ignore it if the policy job pod does not come up in the running state. The policy job pod starts once the primary services start.
Refresh the certificate revocation list using the /usr/openv/netbackup/bin/nbcertcmd -getcrl command.
Run the primary server reconciler. To do this, set the primary spec's paused field to true using the following command and save it:

helm upgrade cloudscale cloudscale-<version>.tgz -n netbackup --reuse-values --set environment.primary.paused=true

Then, to let the reconciler run, set the primary's paused field in the spec back to false. The SHA fingerprint is updated in the primary CR's status.
Allow auto reissue of certificates from the primary server for the MSDP, Media, and Snapshot Manager servers from the Web UI.

In the Web UI, navigate to Security > Host Mappings. For the MSDP storage server, click the three dots on the right and select Allow auto reissue certificate. Repeat this for the media server and Snapshot Manager server entries.
Apply the msdp-cred-secret that was backed up during the disaster recovery backup. Then set the paused field to false for MSDP using the following command:
helm upgrade cloudscale cloudscale-<version>.tgz -n netbackup --reuse-values --set environment.msdpScaleouts.paused=false
Once MSDP is up and running, add the cloud provider credentials from where the S3 bucket has been configured, as described in the "Add a credential in NetBackup" section of the NetBackup™ Web UI Administrator's Guide.

If the LSU cloud alias does not exist, use the following command to add it:
/usr/openv/netbackup/bin/admincmd/csconfig cldinstance -as -in <instance-name> -sts <storage-server-name> -lsu_name <lsu-name>
When MSDP Scaleout is up and running, reuse the cloud LSU on the NetBackup primary server:
/usr/openv/netbackup/bin/admincmd/nbdevconfig -setconfig -storage_server <STORAGESERVERNAME> -stype PureDisk -configlist <configuration file>
Credentials, bucket name, and sub bucket name must be the same as the recovered Cloud LSU configuration in the previous MSDP Scaleout deployment.
Configuration file template:
V7.5 "operation" "reuse-lsu-cloud" string
V7.5 "lsuName" "LSUNAME" string
V7.5 "cmsCredName" "XXX" string
V7.5 "lsuCloudAlias" "<STORAGESERVERNAME_LSUNAME>" string
V7.5 "lsuCloudBucketName" "XXX" string
V7.5 "lsuCloudBucketSubName" "XXX" string
V7.5 "lsuKmsServerName" "XXX" string
Note:
For Veritas Alta Recovery Vault Azure storage, cmsCredName is a credential name and can be any string. Add the recovery vault credential in the CMS using the NetBackup Web UI and provide that credential name for cmsCredName. For more information, see the About Veritas Alta Recovery Vault Azure topic in the NetBackup Deduplication Guide.
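The configuration file template above can be generated from shell variables, which avoids hand-editing the quoted fields. The following is a hedged sketch; the storage server, LSU, credential, bucket, and KMS names are all hypothetical placeholders to be replaced with the values from the previous MSDP Scaleout deployment.

```shell
# Hedged sketch: generate the reuse-lsu-cloud configuration file.
STORAGE_SERVER="msdp-ss1"   # hypothetical storage server name
LSU_NAME="lsu1"             # hypothetical LSU name

cat > reuse-lsu.cfg <<EOF
V7.5 "operation" "reuse-lsu-cloud" string
V7.5 "lsuName" "${LSU_NAME}" string
V7.5 "cmsCredName" "my-cred" string
V7.5 "lsuCloudAlias" "${STORAGE_SERVER}_${LSU_NAME}" string
V7.5 "lsuCloudBucketName" "my-bucket" string
V7.5 "lsuCloudBucketSubName" "my-subbucket" string
V7.5 "lsuKmsServerName" "kms.example.com" string
EOF

grep lsuCloudAlias reuse-lsu.cfg
```

Pass the generated file to nbdevconfig -setconfig via the -configlist option shown above.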
Set paused to false for the media server using the following CLI and wait for the media pods to come up and run:

helm upgrade cloudscale cloudscale-<version>.tgz -n netbackup --reuse-values \
  --set environment.mediaServers[0].name=media1 \
  --set environment.mediaServers[0].replicas=1 \
  --set environment.mediaServers[0].nodeSelector.labelKey=agentpool \
  --set environment.mediaServers[0].nodeSelector.labelValue=mediapool \
  --set environment.mediaServers[0].storage.data.capacity=50Gi \
  --set environment.mediaServers[0].storage.data.storageClassName=nb-disk-standardssd \
  --set environment.mediaServers[0].storage.log.capacity=5Gi \
  --set environment.mediaServers[0].storage.log.storageClassName=nb-disk-standardssd \
  --set environment.mediaServers[0].tag=11.1-0016-DR1 \
  --set environment.mediaServers[0].paused=false
Perform full Catalog Recovery:
Pause the environment reconciler using the helm upgrade cloudscale cloudscale-<version>.tgz -n netbackup --reuse-values --set environment.paused=true command.
Use one of the following options to perform catalog recovery:
Trigger a Catalog Recovery from the Web UI.
Or
Exec into primary pod and run the bprecover -wizard command.
If Multi Factor Authentication (MFA) was enabled, perform the following additional steps:
Disable MFA: In the Web UI, navigate to the multifactor authentication settings and disable it.
Exec into the primary pod and execute the following command to reset MFA for the user provided in the primary secret (in this case it is nbuser):
nbseccmd -resetMFA -userinfo unixpwd:nbuxqa-summiteers-10-244-67-129.vxindia.veritas.com:nbuser
Unpause the environment reconciler: helm upgrade cloudscale cloudscale-<version>.tgz -n netbackup --reuse-values --set environment.paused=false
Confirm that the access keys are updated to the latest ones: In the Web UI, ensure that the modified access keys are created by nb-operator.
Once recovery is completed, restart the NetBackup services by running the following script:
cloudscale_restart.sh
Activate NetBackup health probes using the /opt/veritas/vxapp-manage/nb-health activate command.
Install Snapshot Manager server:
Add the cpServer section in the cloudscale-values.yaml file and ensure that the disabled field is set to false for the cpServer section.

Install Snapshot Manager in the environment using the following command:
helm upgrade --install cloudscale cloudscale-<version>.tgz -f cloudscale-values.yaml --namespace netbackup
Wait for the Snapshot Manager pods to come up and reach the running state.
Validate if the environment is up and running.