
Migrating from the Community Edition (VM base) to the Cloud Native edition

Overview

This operational guide walks through migrating from the Community Edition (CE), which is installed on a VM, to the Cloud Native edition, which runs primarily as a Kubernetes cluster.

Requirements

  • Access to the CE VM
  • Gluu CE version >= 4.3
  • A Kubernetes cluster, and access to kubectl. You may take a look at the following section to get a better sense of the sizing requirements for the Kubernetes cluster.
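
A quick way to confirm kubectl access to the target cluster and inspect its nodes before starting (standard kubectl commands, nothing Gluu-specific):

    # verify the cluster is reachable and list its nodes
    kubectl cluster-info
    kubectl get nodes -o wide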

Migration Steps

  1. Log in to the server where CE is installed:

    ssh $USER@$CE_SERVER
    
  2. Back up the data in persistence and save it elsewhere (a sketch for the LDAP backend is shown below).
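
    For example, with the LDAP (OpenDJ) backend, a full export can be taken with the same ldapsearch connection options used later in this guide. This is a minimal sketch: it assumes CE_HOME and LDAP_PASSWD are set as in the following steps, and the output filename is arbitrary. Other persistence backends have their own export tooling.

    # dump the whole o=gluu tree to an LDIF file (LDAP/OpenDJ backend only)
    $CE_HOME/opt/opendj/bin/ldapsearch \
        --useSSL \
        --trustAll \
        -D "cn=directory manager" \
        -p 1636 \
        -w $LDAP_PASSWD \
        -b "o=gluu" \
        -s sub '(objectClass=*)' > full-backup.ldif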

  3. Set an environment variable that points to the root directory of the CE installation.

    If using chrooted installation:

    export CE_HOME=/opt/gluu-server
    

    Otherwise (non-chrooted installation), leave it empty:

    export CE_HOME=
    
  4. Prepare the manifest files:

    1. Create a new directory:

      mkdir -p $HOME/ce-migration
      cd $HOME/ce-migration
      

      Take note of the full path of this directory (for example, /root/ce-migration).

    2. If setup.properties.last exists, copy it as setup.properties; otherwise, decrypt setup.properties.last.enc to generate it:

      cp $CE_HOME/install/community-edition-setup/setup.properties.last setup.properties
      

      If setup.properties.last does not exist:

      openssl enc -d -aes-256-cbc -in $CE_HOME/install/community-edition-setup/setup.properties.last.enc -out setup.properties
      
    3. Get all certificates, keys, and keystores:

      cp $CE_HOME/etc/certs/*.crt .
      cp $CE_HOME/etc/certs/*.key .
      cp $CE_HOME/etc/certs/*.pem .
      cp $CE_HOME/etc/certs/*.jks .
      cp $CE_HOME/etc/certs/*.pkcs12 .
      cp $CE_HOME/opt/shibboleth-idp/credentials/*.jks .
      cp $CE_HOME/opt/shibboleth-idp/credentials/*.kver .
      cp $CE_HOME/opt/shibboleth-idp/conf/datasource.properties .
      
    4. Get the salt file:

      cp $CE_HOME/etc/gluu/conf/salt .
      
  5. Get the configuration/secret entries from the persistence backend used by your current CE installation. Follow the instructions below that match your backend (LDAP, Couchbase, Spanner, MySQL, or PostgreSQL).

    If using LDAP (OpenDJ), run the following ldapsearch queries:

    $CE_HOME/opt/opendj/bin/ldapsearch \
        --useSSL \
        --trustAll \
        -D "cn=directory manager" \
        -p 1636 \
        -w $LDAP_PASSWD \
        -b "o=gluu" \
        -s sub '(objectClass=gluuConfiguration)' > gluu-configuration.ldif
    
    $CE_HOME/opt/opendj/bin/ldapsearch \
        --useSSL \
        --trustAll \
        -D "cn=directory manager" \
        -p 1636 \
        -w $LDAP_PASSWD \
        -b "o=gluu" \
        -s sub '(objectClass=oxAuthConfiguration)' > oxauth-configuration.ldif
    
    $CE_HOME/opt/opendj/bin/ldapsearch \
        --useSSL \
        --trustAll \
        -D "cn=directory manager" \
        -p 1636 \
        -w $LDAP_PASSWD \
        -b "o=gluu" \
        -s sub '(objectClass=oxAuthClient)' > oxauth-client.ldif
    

    Here's an example of an expected .ldif file:

    dn: ou=configuration,o=gluu
    gluuHostname: 1b4211097aa4
    gluuOrgProfileMgt: false
    gluuPassportEnabled: false
    gluuRadiusEnabled: false
    gluuSamlEnabled: false
    gluuScimEnabled: false
    gluuVdsCacheRefreshEnabled: true
    

    If using Couchbase, run the following N1QL queries in the Couchbase UI:

    # save the result as gluu-configuration.json manually
    SELECT META().id, gluu.*
    FROM gluu
    WHERE objectClass = 'gluuConfiguration'
    
    # save the result as oxauth-configuration.json manually
    SELECT META().id, gluu.*
    FROM gluu
    WHERE objectClass = 'oxAuthConfiguration'
    
    # save the result as oxauth-client.json manually
    SELECT META().id, gluu.*
    FROM gluu
    WHERE objectClass = 'oxAuthClient'
    

    Here's an example of the expected .json file:

    [
        {
            "dn": "ou=configuration,o=gluu",
            "gluuPassportEnabled": false,
            "gluuRadiusEnabled": false,
            "gluuSamlEnabled": false,
            "gluuScimEnabled": false,
            "gluuVdsCacheRefreshEnabled": false,
            "id": "configuration",
            "objectClass": "gluuConfiguration"
        }
    ]
    

    If using Spanner, follow the official docs at https://cloud.google.com/spanner/docs/export to export the data (currently the only supported format is Avro).

    Here's an example of exported Avro filenames:

    gluuConfiguration.avro-00000-of-00001
    oxAuthConfiguration.avro-00000-of-00001
    oxAuthClient.avro-00000-of-00001
    

    The expected filenames used by the config-init container are:

    gluu-configuration.avro
    oxauth-configuration.avro
    oxauth-client.avro
    
    hence you may need to copy the exported files to the expected names manually, as shown below.
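
    For example, assuming single-part exports with the filenames shown above, the renamed copies could be created like this:

    # copy exported Avro part files to the filenames expected by config-init
    cp gluuConfiguration.avro-00000-of-00001 gluu-configuration.avro
    cp oxAuthConfiguration.avro-00000-of-00001 oxauth-configuration.avro
    cp oxAuthClient.avro-00000-of-00001 oxauth-client.avro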

    If using MySQL, install mysqlsh and run the following commands:

    echo 'select * from gluuConfiguration' | mysqlsh --json=pretty --sql --show-warnings=false --uri=$DBUSER@$DBHOST:$DBPORT/$DBNAME -p > gluu-configuration.json
    echo 'select * from oxAuthConfiguration' | mysqlsh --json=pretty --sql --show-warnings=false --uri=$DBUSER@$DBHOST:$DBPORT/$DBNAME -p > oxauth-configuration.json
    echo 'select * from oxAuthClient' | mysqlsh --json=pretty --sql --show-warnings=false --uri=$DBUSER@$DBHOST:$DBPORT/$DBNAME -p > oxauth-client.json
    

    Here's an example of the expected .json file:

    {
        "hasData": true,
        "rows": [
            {
                "doc_id": "configuration",
                "objectClass": "gluuConfiguration",
                "dn": "ou=configuration,o=gluu",
                "description": null,
                "oxSmtpConfiguration": {
                    "v": []
                },
                "gluuVDSenabled": null,
                "ou": "configuration",
                "gluuStatus": null,
                "displayName": null
            }
        ]
    }
    
    If using PostgreSQL, run the following psql commands:

    psql -h $DBHOST -p $DBPORT -U $DBUSER -d $DBNAME -W -t -A -o gluu-configuration.json -c 'select json_agg(t) from (select * from "gluuConfiguration") t;'
    psql -h $DBHOST -p $DBPORT -U $DBUSER -d $DBNAME -W -t -A -o oxauth-configuration.json -c 'select json_agg(t) from (select * from "oxAuthConfiguration") t;'
    psql -h $DBHOST -p $DBPORT -U $DBUSER -d $DBNAME -W -t -A -o oxauth-client.json -c 'select json_agg(t) from (select * from "oxAuthClient") t;'
    

    Here's an example of the expected .json file:

    [
        {
            "doc_id": "configuration",
            "objectClass": "gluuConfiguration",
            "dn": "ou=configuration,o=gluu",
            "oxTrustStoreConf": "{\"useJreCertificates\":true}",
            "gluuAdditionalMemory": null,
            "gluuSmtpRequiresAuthentication": null,
            "gluuPassportEnabled": 0,
            "gluuShibFailedAuth": null,
            "gluuAppliancePollingInterval": null,
            "gluuAdditionalBandwidth": null,
            "gluuRadiusEnabled": 0,
            "description": null
        }
    ]
    
  6. Log out from the server where CE is installed.

  7. Download the manifest files:

    scp -r $USER@$CE_SERVER:/root/ce-migration .
    
  8. Download pygluu-kubernetes.pyz (it can also be built manually from source).
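
    For example, the prebuilt .pyz is typically published on the GluuFederation/cloud-native-edition GitHub releases page; the tag below is a placeholder, so check the releases page for the version matching your target Gluu release:

    # <version> is a placeholder; pick the release matching your Gluu version
    wget https://github.com/GluuFederation/cloud-native-edition/releases/download/v<version>/pygluu-kubernetes.pyz
    chmod +x pygluu-kubernetes.pyz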

  9. Run:

    ./pygluu-kubernetes.pyz install
    

    You will be prompted to migrate from CE.

    Note

    The services will not run until the persistence backup data is imported.

  10. Import the backup data into the persistence backend manually (a sketch for the LDAP backend is shown below).
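
    How the import is done depends on the persistence backend. For the LDAP (OpenDJ) backend, a minimal sketch is to copy the backup taken in step 2 into the OpenDJ pod and run the import from inside it; the pod and namespace names below are placeholders:

    # locate the OpenDJ pod (names are deployment-specific)
    kubectl get pods -n <gluu-namespace>

    # copy the LDIF backup into the pod
    kubectl cp full-backup.ldif <gluu-namespace>/<opendj-pod>:/tmp/full-backup.ldif

    # open a shell in the pod and run the appropriate import tool from /opt/opendj/bin
    kubectl exec -ti <opendj-pod> -n <gluu-namespace> -- sh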

  11. Restart main services:

    kubectl rollout restart deployment <gluu-release-name>-auth-server -n <gluu-namespace>
    kubectl rollout restart statefulset <gluu-release-name>-oxtrust -n <gluu-namespace>
    # example: kubectl rollout restart deployment gluu-auth-server -n gluu
    
  12. If additional services that were not originally on the source CE VM (e.g. SCIM, Fido2) have been deployed, the persistence job must be enabled to fill in the missing entries (existing entries will not be modified). Note that some configuration may need to be adjusted manually via the oxTrust UI.

    1. Open helm/gluu/values.yaml using your favourite editor, and set global.persistence.enabled to true and global.upgrade.enabled to true.

    2. Run helm upgrade:

      helm upgrade <release-name> . -f ./helm/gluu/values.yaml -n <namespace>
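
      Equivalently, the same two values can be passed on the command line with Helm's --set flag instead of editing values.yaml (a hedged alternative; otherwise the command is the same as above):

      # --set overrides the corresponding keys from values.yaml
      helm upgrade <release-name> . \
          -f ./helm/gluu/values.yaml \
          --set global.persistence.enabled=true \
          --set global.upgrade.enabled=true \
          -n <namespace>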