Traditional snapshots run as a job in Data Factory, so if loadAll or loadNew jobs are taking a long time, snapshot execution can be delayed. As a contingency plan, there is an alternative way to take backups using spm.
SPM backups: create one by running

spm -e export-server ...

This command creates a directory (export-server by default) containing the backup. To restore it, load the manifest.spm file with spm:

spm export-server/manifest.spm
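Before relying on an spm export for a restore, it is worth confirming the manifest is actually there. The helper below is an illustrative sketch (check_spm_export is not an spm feature); it assumes the default export-server directory name mentioned above, so adjust the path if your export used a different one:

```shell
#!/bin/sh
# Sketch: confirm an spm export looks restorable before you need it.
# check_spm_export is an illustrative helper, not part of spm.
check_spm_export() {
    dir="$1"
    if [ -f "$dir/manifest.spm" ]; then
        echo "OK: $dir/manifest.spm present"
    else
        echo "FAIL: no manifest.spm in $dir"
        return 1
    fi
}

# Check the default export directory; prompt a re-export if it is missing.
check_spm_export export-server || echo "re-run the spm export"
```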
The spm backup is a lighter version of the snapshot backup. It contains only the latest revision, whereas traditional snapshots keep 5 revisions (configurable) so you can revert to a previous one. It also does not keep passwords, neither for users nor for external connections to SQL databases.
To work around the fact that passwords are reset, you can combine the JavaScript backup script with the spm script:
First, run the usual backup script but without the --data part. This backs up everything except the datasets, including user passwords and external connections, without impacting the load jobs (no snapshot job will run):
kubectl exec -i $(kubectl get pod -l io.kompose.service=sdf-cs-signals-job-scheduler -o jsonpath="{.items[0].metadata.name}") -- node src/main/scripts/backup.js > backup_nodata.zip
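Since this zip is the only copy of the passwords, a quick sanity check after the command finishes can save a failed restore later. This is a sketch using standard tools only; check_backup and the magic-byte test are illustrative (a fuller check would be unzip -t):

```shell
#!/bin/sh
# Sketch: sanity-check the backup archive before relying on it.
# check_backup is an illustrative helper, not part of the backup tooling.
check_backup() {
    f="$1"
    [ -s "$f" ] || { echo "FAIL: $f missing or empty"; return 1; }
    # Zip archives start with the ASCII bytes "PK".
    [ "$(head -c 2 "$f")" = "PK" ] || { echo "FAIL: $f is not a zip"; return 1; }
    echo "OK: $f"
}

# Check the archive produced by the backup command above.
check_backup backup_nodata.zip || echo "re-run the backup before restoring"
```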
This can be restored the same way as the snapshot backups, dropping --data from the restore command as well:
kubectl exec -i $(kubectl get pod -l io.kompose.service=sdf-cs-signals-job-scheduler -o jsonpath="{.items[0].metadata.name}") -- node src/main/scripts/restore.js < backup_nodata.zip
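The combination described above can be sketched as a single script: step 1 captures user passwords and external connections via backup.js (no --data), step 2 captures the latest revision via spm. The run_combined_backup function and its DRY_RUN guard are illustrative assumptions, not part of either tool:

```shell
#!/bin/sh
# Sketch: run both contingency backups in one go. backup.js (without --data)
# keeps user passwords and external connections; the spm export keeps the
# latest revision. run_combined_backup and DRY_RUN are illustrative.
run_combined_backup() {
    selector="io.kompose.service=sdf-cs-signals-job-scheduler"
    if [ "${DRY_RUN:-0}" = "1" ]; then
        # Print the plan without needing kubectl or spm on this machine.
        echo "kubectl exec backup.js > backup_nodata.zip"
        echo "spm -e export-server"
        return 0
    fi
    pod="$(kubectl get pod -l "$selector" -o jsonpath='{.items[0].metadata.name}')"
    # Step 1: everything except datasets; no snapshot job is triggered.
    kubectl exec -i "$pod" -- node src/main/scripts/backup.js > backup_nodata.zip
    # Step 2: latest revision via spm (add any extra options your
    # environment requires after export-server).
    spm -e export-server
}

# Preview the steps; unset DRY_RUN to actually run them.
DRY_RUN=1 run_combined_backup
```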
This is only an alternative way to keep backups available when the snapshots are not running at all.