On-Prem Migration

Updated 2 weeks ago by Michael Katz

This topic describes how to migrate data between Harness On-Prem setups.

Scenarios Covered

The source and destination setups can be any of these combinations: 

  • Disconnected On-Prem to Connected On-Prem.
  • Disconnected On-Prem to Disconnected On-Prem.
  • Connected On-Prem to Connected On-Prem.

Not covered here:

  • On-Prem to Harness SaaS. Please contact Harness Support for this scenario.
  • Migration of TimescaleDB, which supports Custom Dashboards. Please contact Harness Support for this scenario.
    For details on backing up and restoring TimescaleDB, see Connected On-Prem Backup and Restore Strategy.

Prerequisites

  • Both the source and target setups must have connectivity to the machine performing the migration.

  • For Disconnected to Connected On-Prem migrations only: The Harness SE/CS team should note the Learning Engine's service secret, and override that value in the Learning Engine Service Infrastructure. The value is available in the database via db.serviceSecrets.find()
     
  • In Harness Manager, disable all SSO/LDAP from the source installation. This ensures that you will not get locked out during migration if Harness Delegates are inaccessible, or if your DNS Name changes.
    You can restore SSO/LDAP on the new setup after migrating.
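The Learning Engine secret mentioned above can be read without opening an interactive mongo session. The sketch below is a hedged example: the container name mongoContainer matches the commands used later in this topic, and the exact connection flags depend on your authentication setup.

```shell
# Sketch: print the Learning Engine service secret from the harness database.
# Assumes the mongo shell inside "mongoContainer" can connect without extra
# auth flags; add --username/--password if your setup requires them.
show_le_secret() {
  docker exec mongoContainer mongo harness --quiet --eval \
    'db.serviceSecrets.find({serviceType: "LEARNING_ENGINE"}).pretty()'
}

# On the source box, run and record the "serviceSecret" value:
# show_le_secret
```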

Step 1: Get Data Dump from Source

Follow the instructions for your source scenario:

Disconnected On-Prem

To get a data dump from a Disconnected On-Prem setup:

  1. Stop the manager and verification microservices:
    docker kill manager
    docker kill verification-service
  2. Get the data dump from mongoContainer:
    docker exec -it mongoContainer bash
    mongodump --host <Source_IP>:<PORT> --username admin --password adminpass --authenticationDatabase admin --out /data/db/backup/dump

The data dump will now be available in: HARNESS_RUNTIME_DIR/mongo/data/db/backup/dump
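The two steps above can be collected into one sketch script. The credentials and placeholders mirror the commands in this section; substitute your own values before running.

```shell
# Sketch: stop the services and dump the source database in one pass.
# <Source_IP> and <PORT> are placeholders; the admin credentials below are
# the defaults shown in this guide, not necessarily yours.
dump_disconnected() {
  docker kill manager
  docker kill verification-service
  docker exec mongoContainer mongodump \
    --host "<Source_IP>:<PORT>" \
    --username admin --password adminpass \
    --authenticationDatabase admin \
    --out /data/db/backup/dump
}

# Run on the source box: dump_disconnected
```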

3-Box Connected On-Prem

To get a data dump from a 3-Box Connected On-Prem setup:

  1. Get the mongoURI from the manager.
    Run the following command on any single box:
    docker inspect harnessManager
  2. You should now see the MONGO_URI in the environment variables. Copy and store that value.
  3. Stop the manager and verification services.
    Run the following commands on all three boxes:

    docker kill manager
    docker kill verification-service
  4. Get the data dump from mongoContainer. Exec into a mongoContainer on any single box:
    docker exec -it mongoContainer bash
  5. Use the MONGO_URI to populate and run the mongodump command:
    mongodump --uri="<MONGO_URI>" --out /data/db/backup/dump

    The data dump will now be present in the MongoDB data directory.
    Contact Harness Support to obtain this directory's exact location as configured on your system.
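Rather than scanning the full docker inspect output by eye, the MONGO_URI can be filtered out directly. This is a sketch: the grep pattern assumes the URI appears as a MONGO_URI=... entry in the container's environment, as described in step 2.

```shell
# extract_mongo_uri: pull the value out of a MONGO_URI=... environment line.
extract_mongo_uri() {
  grep -o 'MONGO_URI=[^"]*' | head -n 1 | cut -d= -f2-
}

# On a box with Docker available (container name follows this guide):
# MONGO_URI="$(docker inspect harnessManager | extract_mongo_uri)"
```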

Kubernetes Connected On-Prem

To get a data dump from a Kubernetes Connected On-Prem setup:

  1. Get the MONGO_URI from the manager:
    kubectl get pods -n harness
    This returns the list of manager pods. Then query one of them:
    kubectl exec -it <harnessManagerPod> -n harness -- env | grep MONGO_URI
  2. Generate the mongo data dump.
    Exec into the mongo pod:
    kubectl exec -it harness-mongodb-replicaset-0 -n harness -- bash
  3. Use the MONGO_URI to get the dump:
    mongodump --uri="<MONGO_URI>" --out /data/db/backup/dump
  4. Tar the dump:
    tar -cvzf /data/db/backup/dump.tar /data/db/backup/dump/
  5. Exit to the local shell.
  6. Copy the dump file to the local directory:
    kubectl cp harness/harness-mongodb-replicaset-0:/data/db/backup/dump.tar .
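Steps 1 through 6 can be sketched as a single script, run from a workstation with kubectl access. Pod and namespace names follow this guide; the manager pod name is passed in as an argument.

```shell
# Sketch: dump the MongoDB data from a Kubernetes Connected On-Prem cluster.
k8s_dump() {
  manager_pod="$1"  # a manager pod name from: kubectl get pods -n harness
  mongo_uri="$(kubectl exec "$manager_pod" -n harness -- env \
    | grep '^MONGO_URI=' | cut -d= -f2-)"
  kubectl exec harness-mongodb-replicaset-0 -n harness -- \
    mongodump --uri="$mongo_uri" --out /data/db/backup/dump
  kubectl exec harness-mongodb-replicaset-0 -n harness -- \
    tar -cvzf /data/db/backup/dump.tar /data/db/backup/dump/
  kubectl cp harness/harness-mongodb-replicaset-0:/data/db/backup/dump.tar .
}

# Usage: k8s_dump <harnessManagerPod>
```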

Step 2: Restore Data to Target Setup

Follow the instructions for your destination scenario:

Disconnected On-Prem

To restore data to a Disconnected On-Prem setup:

  1. Start the system using install_harness.sh.
  2. Get the MONGO_URI from the manager
    Run the following command on any single box:
    docker inspect harnessManager
    You should see the MONGO_URI in the environment variables. Copy and store that value.
  3. Stop the manager and verification microservices:
    docker kill manager
    docker kill learning-engine
    docker kill verification-service
  4. Copy the data dump into the $RUNTIME_DIR/mongo/data/db/backup directory on the target setup. (If it is a .tar file, untar it.)
  5. Exec into the mongoContainer:
    docker exec -it mongoContainer bash 
  6. Connect and drop the existing database:
    mongo <MONGO_URI>
    use harness
    db.dropDatabase();
  7. Restore the database from the dump:
    mongorestore --uri="<MONGO_URI>" --dir=/data/db/backup/dump
  8. Restart the system using install_harness.sh.
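Steps 3 through 7 above can be sketched as one function, given the MONGO_URI captured in step 2. The dropDatabase call is the non-interactive equivalent of the mongo shell commands in step 6.

```shell
# Sketch: drop the existing harness database and restore it from the dump.
restore_disconnected() {
  mongo_uri="$1"  # the MONGO_URI captured from docker inspect
  docker kill manager learning-engine verification-service
  docker exec mongoContainer mongo "$mongo_uri" --eval \
    'db.getSiblingDB("harness").dropDatabase()'
  docker exec mongoContainer mongorestore \
    --uri="$mongo_uri" --dir=/data/db/backup/dump
}

# Run on the target box, then restart with install_harness.sh:
# restore_disconnected "<MONGO_URI>"
```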

3-Box Connected On-Prem

To restore data to a 3-Box Connected On-Prem setup:

  1. Get the MONGO_URI from the manager.

    Run the following command on any single box:
    docker inspect harnessManager
    You should see the MONGO_URI in the environment variables. Copy and store that value.
  2. Stop the manager and verification services.

    Run the following commands on all three boxes:
    docker kill manager
    docker kill verification-service
  3. Copy the data dump into the $RUNTIME_DIR/mongo/data/db/backup directory on the target setup.
    If you are not sure where this data directory resides, check with Harness Support. If the data dump is a .tar file, untar it.
  4. Exec into the mongoContainer:
    docker exec -it mongoContainer bash
  5. Connect and drop the existing database:
    mongo <MONGO_URI>
    use harness
    db.dropDatabase();
  6. Restore data to the database.

    Exec into a mongoContainer on any single box:
    docker exec -it mongoContainer bash
  7. Use the MONGO_URI to populate and run the mongorestore command:
    mongorestore --uri="<MONGO_URI>" --dir=/data/db/backup/dump
  8. Copy the value of LearningEngineSecret from the harness serviceSecrets database.
    Exec into the mongoContainer:
    docker exec -it mongoContainer bash
    mongo <MONGO_URI>
    use harness
    db.getCollection('serviceSecrets').find({})
    Your response will look something like this:
    {
        "_id" : "3TbUq-DKQUq6lWUzCBcDlQ",
        "serviceSecret" : "8b9q7lr168145d8nfc05ba2f187k91g9",
        "serviceType" : "LEARNING_ENGINE",
        "createdAt" : NumberLong(1516948224685),
        "lastUpdatedAt" : NumberLong(1516948224685)
    }
  9. Send this information to Harness Support. (Harness might need to update this value in our system for regular upgrades.)
  10. Start Harness using the local start-stop scripts on the Ambassador.

Kubernetes Connected On-Prem

To restore data to a Kubernetes Connected On-Prem setup:

  1. Get the MONGO_URI from the manager:
    kubectl get pods -n harness
    This returns the list of manager pods. Then query one of them:
    kubectl exec -it <harnessManagerPod> -n harness -- env | grep MONGO_URI
  2. Scale the harness-manager and the harness-verification service deployments to 0.
  3. Copy the dump file to the MongoDB pod:
    kubectl cp dump.tar harness/harness-mongodb-replicaset-0:/data/db/backup/dump.tar
  4. Untar the dump (the archive stores paths relative to the root, so extract with -C /):
    tar -xvf /data/db/backup/dump.tar -C /
  5. Use the MONGO_URI to populate and run the mongorestore command:
    mongorestore --uri="<MONGO_URI>" --dir=/data/db/backup/dump
  6. Copy the value of LearningEngineSecret from the Harness serviceSecrets database.
    Exec into the mongoContainer:
    docker exec -it mongoContainer bash
    mongo <MONGO_URI>
    use harness
    db.getCollection('serviceSecrets').find({})
    Your response will look something like this:
    {
        "_id" : "3TbUq-DKQUq6lWUzCBcDlQ",
        "serviceSecret" : "8b9q7lr168145d8nfc05ba2f187k91g9",
        "serviceType" : "LEARNING_ENGINE",
        "createdAt" : NumberLong(1516948224685),
        "lastUpdatedAt" : NumberLong(1516948224685)
    }
  7. Send this information to Harness Support. (Harness might need to update this value in our system for regular upgrades.)
  8. Scale the harness-manager and the harness-verification services up to their regular value (3).
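Steps 2 and 8 above do not name the commands; one way to scale the deployments is sketched below. The deployment names harness-manager and harness-verification are assumptions based on the service names in this guide. Confirm them with kubectl get deployments -n harness first.

```shell
# Sketch: scale the manager and verification deployments up or down.
# Deployment names are assumed; verify them in your cluster before use.
scale_harness() {
  replicas="$1"
  kubectl scale deployment harness-manager -n harness --replicas="$replicas"
  kubectl scale deployment harness-verification -n harness --replicas="$replicas"
}

# Before the restore: scale_harness 0
# After the restore:  scale_harness 3
```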

Step 3: Forward DNS Name or Renew Delegates

If you can forward the DNS Name from the original setup to the destination setup, you can keep relying on the existing Delegates. Otherwise, you must discard the Delegates running on the destination setup, and download Delegates again.

Step 4: Apply License

Refresh your Harness license according to your migration scenario:

  • If you are migrating from a POV (proof of value) to production, Harness must apply the license after migration.
  • If you are migrating to Disconnected On-Prem, applying the license will be part of the overall installation.
  • If you are migrating to Connected On-Prem, you can normally reuse the existing license after migrating.

Next Steps

  • If you have enabled Custom Dashboards, contact Harness Support to migrate its TimescaleDB dependency.
  • Review Harness' On-Prem release notes and documentation.

