We have already mentioned how different failure scenarios warrant different recovery requirements. Fire, Flood and Blood scenarios are mostly disaster situations where we might need our workloads up and running in a completely different location as fast as possible, or at least with near-zero recovery time objectives (RTOs).
Keeping with the theme so far, we are going to concentrate on how this can be achieved with Kasten K10, using the minikube cluster that we deployed and configured a few sessions ago.
Kasten K10 also has built-in functionality to ensure that, if something were to happen to the Kubernetes cluster it is running on, the catalogue data is replicated and available in a new one; see [K10 Disaster Recovery](https://docs.kasten.io/latest/operating/dr.html).
The first thing we need to do is add an object storage bucket as a target location for our backups to land. Not only does this act as an offsite location, but we can also use it as the source data to recover from in a disaster.
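If you want to create that bucket from the command line rather than the AWS console, something like the following works (the bucket name here is made up for this demo and must be globally unique; pick your own region):

```sh
# Create an S3 bucket to act as our offsite backup target
aws s3 mb s3://90daysofdevops-k10-demo --region eu-west-2
```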
You can see from the image below that we have a choice of where this location profile lives. We are going to select Amazon S3 and add our sensitive access credentials, region and bucket name.
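As an aside, if you would rather drive this from the CLI than the UI, those credentials can be staged as a Kubernetes secret for K10 to reference; a minimal sketch, with placeholder keys and a made-up secret name:

```sh
# Store the AWS access keys in a secret in the kasten-io namespace
kubectl create secret generic k10-s3-secret \
  --namespace kasten-io \
  --type secrets.kasten.io/aws \
  --from-literal=aws_access_key_id=<your-access-key-id> \
  --from-literal=aws_secret_access_key=<your-secret-access-key>
```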
If we scroll down in the New Profile creation window, you will see that we can also enable immutable backups, which leverage the S3 Object Lock API. For this demo, we won't be using that.
In the previous session, we created only an ad-hoc snapshot of our Pac-Man application, so we need to create a backup policy that will send our application backups to our newly created object storage location.
Next, we want to enable backups via snapshot exports, meaning that we want to send our data out to our location profile. If you have multiple location profiles, you can select which one you would like to send your backups to.
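For reference, the policy the UI builds for us can also be expressed as a K10 Policy custom resource. This is only a sketch: the names (`pacman-backup`, the `pacman` namespace, the `s3-profile` location profile) are assumptions for this demo, and the exact fields should be verified against the K10 API documentation for your version:

```sh
kubectl apply -f - <<EOF
apiVersion: config.kio.kasten.io/v1alpha1
kind: Policy
metadata:
  name: pacman-backup
  namespace: kasten-io
spec:
  frequency: '@daily'
  selector:
    matchLabels:
      k10.kasten.io/appNamespace: pacman   # namespace our Pac-Man app runs in
  actions:
    - action: backup
    - action: export
      exportParameters:
        frequency: '@daily'
        profile:
          name: s3-profile                 # the location profile we created above
          namespace: kasten-io
        exportData:
          enabled: true                    # export the snapshot data, not just metadata
EOF
```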
We are not going to use any of the Advanced settings, but based on our [walkthrough of Kanister yesterday](https://github.com/MichaelCade/90DaysOfDevOps/blob/main/Days/day88.md), we can leverage Kanister within Kasten K10 as well to take application-consistent copies of our data.
At the bottom of the created policy, you will see "Show import details". We need this string to import into our standby cluster, so copy it somewhere safe for now.
We then need to deploy a second Kubernetes cluster. This could be any supported Kubernetes distribution, including OpenShift; for education purposes, we will use the very free minikube again with a different profile name.
Using `minikube start --addons volumesnapshots,csi-hostpath-driver --apiserver-port=6443 --container-runtime=containerd -p standby --kubernetes-version=1.21.2` we can create our new cluster.
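Once the cluster is up, we need Kasten K10 installed on it as well, just as we did on the primary a few sessions ago; a minimal Helm install looks like this:

```sh
# Add the Kasten Helm repository (skip if it is already configured)
helm repo add kasten https://charts.kasten.io/
helm repo update

# Install K10 into its own namespace on the standby cluster
helm install k10 kasten/k10 --namespace=kasten-io --create-namespace
```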
This will take a while, but in the meantime we can use `kubectl get pods -n kasten-io -w` to watch the progress of our pods reaching the Running status.
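When the pods are all running, we can expose the K10 dashboard on the standby cluster the same way we did before:

```sh
# Forward the K10 gateway service to localhost
kubectl --namespace kasten-io port-forward service/gateway 8080:8000
# then browse to http://127.0.0.1:8080/k10/#/
```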
It is worth noting that because we are using minikube on both sides, our application will just run when we run our import policy: the StorageClass is the same on this standby cluster as on the primary. However, something we will cover in the final session is mobility and transformation.
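A quick sanity check that the two clusters really do share the same StorageClass, assuming the kubectl contexts are named after the minikube profiles:

```sh
# Compare StorageClasses on the primary and standby clusters
kubectl get storageclass --context minikube
kubectl get storageclass --context standby
```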
At this point, we are able to create an import policy on the standby cluster, connect it to the object storage backups, and determine what we want to import and how.
First, we add the location profile that we walked through earlier on the other cluster. Dark mode is on here to show the difference between our production system and our DR standby location.
Create the import policy as per the image below. There is an option to restore after import, which some people might want; on completion, the application will be restored into our standby cluster. We can also change the configuration of the application as it is restored, which is what I have documented in [Day 90](day90.md).
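Again purely as a sketch of what the UI is creating for us, an import policy can also be declared as a K10 Policy resource. The `receiveString` field carries the import details string we copied earlier; the profile name is assumed from this demo, and the fields should be checked against the K10 API documentation:

```sh
kubectl apply -f - <<EOF
apiVersion: config.kio.kasten.io/v1alpha1
kind: Policy
metadata:
  name: pacman-import
  namespace: kasten-io
spec:
  frequency: '@hourly'
  actions:
    - action: import
      importParameters:
        profile:
          name: s3-profile    # the location profile added on the standby cluster
          namespace: kasten-io
        receiveString: "<paste the import details string copied earlier>"
EOF
```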
If we now head back to the dashboard and into the Applications card, we can select the drop-down where you see "Removed" below; our application will be listed there. Select Restore.
Here we can see the restore points we have available to us; this was the backup job that we ran on the primary cluster against our Pac-Man application.