# Existing A2HA to Automate HA
> **Warning**
>
> - A2HA users can migrate to Automate HA only with Chef Automate version 20201230192246 or later.
This page explains how to migrate existing A2HA data to a newly deployed Chef Automate HA cluster. The migration involves the following steps.
## Prerequisites
- Ability to mount, on the Automate HA nodes, the file system that was mounted to the A2HA cluster for backup purposes (see the mount sketch after this list).
- A2HA is configured to take backups on a mounted network drive (example location: `/mnt/automate_backup`).
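For illustration, a minimal sketch of mounting the shared backup volume on an Automate HA node, assuming a hypothetical NFS export; replace the server name and paths with your actual backup storage:

```bash
# Hypothetical example: mount the same network drive A2HA used for backups.
# nfs-server.example.com and the export path are placeholders.
sudo mkdir -p /mnt/automate_backup
sudo mount -t nfs nfs-server.example.com:/automate_backup /mnt/automate_backup

# The existing A2HA backups should now be visible from this node
ls /mnt/automate_backup
```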
## Migration
1. Run the following commands from any Chef Automate instance in the A2HA cluster:

   ```bash
   sudo chef-automate backup create
   sudo chef-automate bootstrap bundle create bootstrap.abb
   ```

   - The first command takes the backup on the mounted file system. You can find the mount path in the file `/hab/a2_deploy_workspace/a2ha.rb` on the bastion node.
   - The second command creates the bootstrap bundle, which you need to copy to all the frontend nodes of the Automate HA cluster.
   - Once the backup completes successfully, save the backup ID. For example: `20210622065515`.
   - If you want to use a previously created backup, run the following command on the Automate node to get the backup ID:

     ```bash
     chef-automate backup list
     ```

     ```text
     Backup          State      Age
     20180508201548  completed  8 minutes old
     20180508201643  completed  8 minutes old
     20180508201952  completed  4 minutes old
     ```
2. Detach the file system from the old A2HA cluster.
3. Configure the backup on the Automate HA cluster. If you have not configured it yet, refer to the Pre Backup Configuration for File System Backup documentation.
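   For illustration, a minimal sketch of the filesystem backup setting, assuming the shared mount path used elsewhere on this page; follow the linked doc for the full HA procedure:

   ```bash
   # Sketch: point Automate backups at the shared mount, then patch the config.
   cat > backup_config.toml << EOF
   [global.v1.backups]
     location = "filesystem"

     [global.v1.backups.filesystem]
       path = "/mnt/automate_backups"
   EOF
   sudo chef-automate config patch backup_config.toml
   ```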
4. From Step 3, you will have the backup mount path.
5. Stop all the services on the frontend nodes in the Automate HA cluster. Run the following command on all the Automate and Chef Infra Server nodes:

   ```bash
   sudo chef-automate stop
   ```
6. To run the restore command, you need the airgap bundle. Get the Automate HA airgap bundle from the location `/var/tmp/` on the Automate instance. Example: `frontend-4.x.y.aib`.

   - If the airgap bundle is not present at `/var/tmp`, copy the bundle from the bastion node to the Automate node (a copy sketch follows).
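   For illustration, a minimal copy sketch using `scp`; the SSH key, user, and node IP are placeholders for your environment:

   ```bash
   # Hypothetical example: copy the airgap bundle from the bastion to an Automate node.
   # Replace the key path, bundle version, user, and IP with your actual values.
   scp -i ~/.ssh/your-key.pem /var/tmp/frontend-4.x.y.aib ec2-user@10.0.1.10:/var/tmp/
   ```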
7. Run the following command on the Chef Automate node of the Automate HA cluster to get the applied config:

   ```bash
   sudo chef-automate config show > current_config.toml
   ```
8. Add the OpenSearch credentials to the applied config (a quick credential check follows this list).

   - If using Chef Managed OpenSearch, add the following config to `current_config.toml` (without any changes):

     ```toml
     [global.v1.external.opensearch.auth.basic_auth]
       username = "admin"
       password = "admin"
     ```

   - If using AWS Managed services, add the following config to `current_config.toml` (replace the values with your actual credentials):

     ```toml
     [global.v1.external.opensearch.auth]
       scheme = "aws_os"

     [global.v1.external.opensearch.auth.aws_os]
       username = "<USERNAME FROM THE AWS CONSOLE>"
       password = "<PASSWORD FROM THE AWS CONSOLE>"
       access_key = "<YOUR AWS ACCESS KEY>"
       secret_key = "<YOUR AWS SECRET KEY>"
     ```
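   Before running the restore, you can sanity-check that the credentials work; a minimal sketch, assuming OpenSearch is reachable on port 9200 of an OpenSearch node (host, port, and credentials are placeholders):

   ```bash
   # Hypothetical check: a JSON cluster-health response confirms the credentials.
   curl -k -u admin:admin "https://10.0.2.10:9200/_cluster/health?pretty"
   ```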
9. To restore the A2HA backup on Chef Automate HA, run the following command from any Chef Automate instance of the Chef Automate HA cluster:

   ```bash
   sudo chef-automate backup restore /mnt/automate_backups/backups/20210622065515/ --patch-config current_config.toml --airgap-bundle /var/tmp/frontend-4.x.y.aib --skip-preflight
   ```

   After the restore command executes successfully, you will see the following message:

   ```text
   Success: Restored backup 20210622065515
   ```
10. Copy the `bootstrap.abb` bundle to all the frontend nodes of the Chef Automate HA cluster. Unpack the bundle using the following command on all the frontend nodes:

    ```bash
    sudo chef-automate bootstrap bundle unpack bootstrap.abb
    ```
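    For illustration, a minimal sketch that copies and unpacks the bundle on each frontend node over SSH; the IP list, user, and key are placeholders for your environment:

    ```bash
    # Hypothetical example: distribute and unpack bootstrap.abb on every frontend node.
    FRONTEND_IPS="10.0.1.10 10.0.1.11 10.0.2.20"   # your Automate and Chef Infra Server node IPs
    for fe in $FRONTEND_IPS; do
      scp -i ~/.ssh/your-key.pem bootstrap.abb ec2-user@"$fe":/tmp/
      ssh -i ~/.ssh/your-key.pem ec2-user@"$fe" "cd /tmp && sudo chef-automate bootstrap bundle unpack bootstrap.abb"
    done
    ```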
11. Start the services on all the frontend nodes with the following command:

    ```bash
    sudo chef-automate start
    ```
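    Once started, you can confirm that every service is healthy by running the standard status command on each frontend node:

    ```bash
    # All services should report "running" once the node is healthy
    sudo chef-automate status
    ```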
> **Warning**
>
> After the restore command executes successfully, running `chef-automate config show` shows that both the Elasticsearch and OpenSearch configs are part of the Automate config. After the restore, Automate HA talks to OpenSearch, so you should remove the Elasticsearch config from all the frontend nodes. To do that, redirect the applied config to a file and set the config again. For example:
>
> ```bash
> chef-automate config show > applied_config.toml
> ```
>
> Remove the following fields from `applied_config.toml`:
>
> ```toml
> [global.v1.external]
>   [global.v1.external.elasticsearch]
>     enable = true
>     nodes = [""]
>     [global.v1.external.elasticsearch.auth]
>       scheme = ""
>       [global.v1.external.elasticsearch.auth.basic_auth]
>         username = ""
>         password = ""
>     [global.v1.external.elasticsearch.ssl]
>       root_cert = ""
>       server_name = ""
> ```
>
> Apply the modified config by running the following command:
>
> ```bash
> chef-automate config set applied_config.toml
> ```
>
> Execute these steps on all the frontend nodes.
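As a quick sanity check after applying the config on a node, you can confirm that no Elasticsearch settings remain; a minimal sketch:

```bash
# Should print nothing if the elasticsearch config was removed successfully
sudo chef-automate config show | grep -i elasticsearch
```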
## Troubleshooting
### Restore failure from Elasticsearch to OpenSearch

```text
Error: Failed to restore a snapshot
```
Get the base path location from the A2HA cluster using the curl request below.

Request:

```bash
curl -XGET http://localhost:10144/_snapshot/_all?pretty -k
```

Response: look for the `location` value.

```json
"settings" : {
  "location" : "/mnt/automate_backups/automate-elasticsearch-data/chef-automate-es6-compliance-service"
}
```
The `location` value should match the snapshot repo location on the OpenSearch cluster. If the `location` value is different, use the script below to create the snapshot repos:
```bash
# Snapshot repos created by A2HA backups
indices=(
  chef-automate-es5-automate-cs-oc-erchef
  chef-automate-es5-compliance-service
  chef-automate-es5-event-feed-service
  chef-automate-es5-ingest-service
  chef-automate-es6-automate-cs-oc-erchef
  chef-automate-es6-compliance-service
  chef-automate-es6-event-feed-service
  chef-automate-es6-ingest-service
)

# Register a file-system snapshot repo for each index at the expected location
for index in "${indices[@]}"; do
  curl -XPUT -k -H 'Content-Type: application/json' "http://localhost:10144/_snapshot/$index" --data-binary @- << EOF
{
  "type": "fs",
  "settings": {
    "location": "/mnt/automate_backups/automate-elasticsearch-data/$index"
  }
}
EOF
done
```
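After the script completes, you can re-run the earlier listing to confirm that each repo now points at the expected backup location:

```bash
# Each repo's "location" should now match the A2HA backup path
curl -XGET http://localhost:10144/_snapshot/_all?pretty -k
```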