Najdorf

Installing on Kubernetes / OpenShift

Preparing the Kubernetes namespace

The deploy directory referenced below can be found in the ManageIQ/manageiq-pods repository.

  1. Check for the Custom Resource Definition (CRD) and create it if it doesn’t already exist.

     $ oc get crds | grep manageiqs.manageiq.org
     $ oc create -f deploy/crds/manageiq.org_manageiqs_crd.yaml
    
  2. Set up RBAC.

     $ oc create -f deploy/role.yaml
     $ oc create -f deploy/role_binding.yaml
     $ oc create -f deploy/service_account.yaml
    
  3. Deploy the operator in your namespace.

     $ oc create -f deploy/operator.yaml
    
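Once the operator is deployed, you can confirm it is running by listing the pods in the namespace; the operator pod should reach the Running state before you continue:

    $ oc get pods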

New Installation

Create the Custom Resource

Make any changes that apply to your environment (e.g., applicationDomain), then create the Custom Resource (CR). The operator will take action to deploy the application based on the information in the CR. It may take several minutes for the database to be initialized and the workers to enter a ready state.

    $ oc create -f deploy/crds/manageiq.org_v1alpha1_manageiq_cr.yaml
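
For reference, the CR in that file minimally contains something like the following; the name and applicationDomain values below are placeholders for your own:

    apiVersion: manageiq.org/v1alpha1
    kind: ManageIQ
    metadata:
      name: miq
    spec:
      applicationDomain: your_application.apps.example.com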

Migrating from Appliances

Notes

  • Multi-server / multi-zone: The current podified architecture limits us to running a single server and zone per Kubernetes namespace. Therefore, when migrating from a multi-appliance and/or multi-zone appliance architecture, you will need to choose a single server whose identity the podified deployment will assume. This server should have the UI and web service roles enabled before the database backup is taken to ensure that those workers will start when deployed in the podified environment. All desired roles and settings will need to be configured on this server.

  • Multi-region: Multi-region deployments are slightly more complicated in a podified environment because PostgreSQL isn’t as easily exposed outside the project / cluster. If all of the region databases are running outside of the cluster, or all of the remote region databases are running outside of the cluster and the global database is in the cluster, everything is configured in the same way as on appliances. If the global region database is migrated from an appliance to a pod, the replication subscriptions will need to be recreated. If any of the remote region databases are running in the cluster, the postgresql service for those databases will need to be exposed using a node port. To publish the postgresql service on a node port, patch the service using $ kubectl patch service/postgresql --patch '{"spec": {"type": "NodePort"}}'. The service will then show the node port (31871 in this example) as well as the internal service port (5432), as in the output below. This node port can be used along with the IP address of any node in the cluster to access the postgresql service, as shown in the example after the output.

      $ oc get service/postgresql
      NAME         TYPE       CLUSTER-IP   EXTERNAL-IP   PORT(S)          AGE
      postgresql   NodePort   192.0.2.1    <none>        5432:31871/TCP   2m
    
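For example, assuming a cluster node with address 203.0.113.10 (a placeholder) and the node port shown above, a remote region connection could be tested with psql:

    $ psql -h 203.0.113.10 -p 31871 -U <database username> vmdb_production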

Collect data from the appliance

  1. Take a backup of the database. It will need to be copied off the appliance later; see the example after this list.

     $ pg_dump -Fc -d vmdb_production > /root/pg_dump
    
  2. Export the encryption key and Base64 encode it for the Kubernetes Secret.

     $ vmdb && rails r "puts Base64.encode64(ManageIQ::Password.key.to_s)"
    
  3. Get the region number

     $ vmdb && rails r "puts MiqRegion.my_region.region"
    
  4. Get the GUID of the server that you want to run as.

     $ vmdb && cat GUID
    
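The database backup from step 1 also needs to be copied off the appliance so it can be restored into the cluster later; one way is scp (the appliance hostname is a placeholder):

    $ scp root@appliance.example.com:/root/pg_dump .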

Restore the backup into the Kubernetes environment

  1. Create a YAML file defining the Custom Resource (CR). Minimally you’ll need the following:

     apiVersion: manageiq.org/v1alpha1
     kind: ManageIQ
     metadata:
       name: <friendly name for your CR instance>
     spec:
       applicationDomain: <your application domain name>
       databaseRegion: <region number from the appliance above>
       serverGuid: <GUID value from appliance above>
    
  2. Create the CR in your namespace. Once created, the operator will create several additional resources and start deploying the app.

     $ oc create -f <file name from above>
    
  3. Edit the app secret, replacing the “encryption-key” value with the key exported from the appliance above.

     $ oc edit secret app-secrets
    
  4. Temporarily hijack the orchestrator by patching the deployment to override the container command:

     $ kubectl patch deployment orchestrator -p '{"spec":{"template":{"spec":{"containers":[{"name":"orchestrator","command":["sleep", "1d"]}]}}}}'
    
  5. Shell into the orchestrator container:

     $ oc rsh deploy/orchestrator
    
     $ cd /var/www/miq/vmdb
     $ source ./container_env
     $ DISABLE_DATABASE_ENVIRONMENT_CHECK=1 rake db:drop db:create
    
  6. Use oc rsh to shell into the database pod and restore the database backup (one way to copy the backup into the pod is shown after this list):

     $ cd /var/lib/pgsql
     # --- download your backup here ---
     $ pg_restore -v -d vmdb_production <your_backup_file>
     $ rm -f <your_backup_file>
     $ exit
    
  7. Back in the orchestrator pod from step 5, run the migrations:

     $ rake db:migrate
     $ exit
    
  8. Delete the orchestrator deployment to remove the command override that we added above:

     $ kubectl delete deployment orchestrator
    
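Note for step 6: one way to get the backup file into the database pod is oc cp from the machine holding the dump; the pod name below is a placeholder for your actual postgresql pod:

    $ oc cp ./pg_dump postgresql-pod-name:/var/lib/pgsql/pg_dump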

Done! The orchestrator will start deploying the rest of the pods required to run the application.

External PostgreSQL

Running with an external PostgreSQL server is an option; if you want the default internal PostgreSQL, you can skip this step. Additionally, if you want to secure the connection, you need to include the optional parameters sslmode=verify-full and rootcertificate when you create the secret. To do this, manually create the secret and substitute your values before you create the CR.

    $ oc create secret generic postgresql-secrets \
      --from-literal=dbname=vmdb_production \
      --from-literal=hostname=YOUR_HOSTNAME \
      --from-literal=port=5432 \
      --from-literal=password=YOUR_PASSWORD_HERE \
      --from-literal=username=YOUR_USERNAME_HERE \
      --from-literal=sslmode=verify-full \ # optional
      --from-file=rootcertificate=path/to/optional/certificate.pem # optional

Note: If desired, the secret name can also be customized by setting databaseSecret in the CR.
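
If you do use a custom name, a minimal sketch of setting databaseSecret in the spec of your ManageIQ CR (the secret name below is a placeholder):

    spec:
      databaseSecret: my-postgresql-secrets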

Using TLS to encrypt connections between pods inside the cluster:

Create a secret containing all of the certificates

The certificates should all be signed by a CA, and that CA certificate should be uploaded as root_crt so that it can be used to verify connection validity. If the secret is named internal-certificates-secret, no changes are needed in the CR; if you choose a different name, set it in the internalCertificatesSecret field of the ManageIQ CR.

    oc create secret generic internal-certificates-secret \
      --from-file=root_crt=./certs/root.crt \
      --from-file=httpd_crt=./certs/httpd.crt \
      --from-file=httpd_key=./certs/httpd.key \
      --from-file=kafka_crt=./certs/kafka.crt \
      --from-file=kafka_key=./certs/kafka.key \
      --from-file=memcached_crt=./certs/memcached.crt \
      --from-file=memcached_key=./certs/memcached.key \
      --from-file=postgresql_crt=./certs/postgresql.crt \
      --from-file=postgresql_key=./certs/postgresql.key \
      --from-file=api_crt=./certs/api.crt \
      --from-file=api_key=./certs/api.key \
      --from-file=ui_crt=./certs/ui.crt \
      --from-file=ui_key=./certs/ui.key
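
If you chose a different secret name, a minimal sketch of pointing the CR at it via internalCertificatesSecret (the name below is a placeholder):

    spec:
      internalCertificatesSecret: my-internal-certificates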

Generating certificates:

We need a CA and certificates for each service. The certificates need to be valid for the internal Kubernetes service name (e.g., httpd, postgresql) and, for the services backing the route (ui & api), the certificate also needs to include a SAN with the application domain name. For example, the certificate for the UI needs to be valid for the hostname ui and also your_application.apps.example.com. If you want a script that will generate these certificates for you, see: https://github.com/ManageIQ/manageiq-pods/blob/master/tools/cert_generator.rb
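
If you prefer to generate the certificates by hand, a rough openssl sketch for the CA and the ui certificate follows; the subject names, filenames, and application domain are placeholders, the other services follow the same pattern with their own service names, and the last command uses bash process substitution:

    # create a CA key and self-signed CA certificate (uploaded later as root_crt)
    $ openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
        -subj "/CN=internal-ca" -keyout certs/root.key -out certs/root.crt

    # create a key and CSR for the ui service
    $ openssl req -newkey rsa:2048 -nodes \
        -subj "/CN=ui" -keyout certs/ui.key -out certs/ui.csr

    # sign the CSR with the CA, adding the SANs the ui certificate needs
    $ openssl x509 -req -in certs/ui.csr -CA certs/root.crt -CAkey certs/root.key \
        -CAcreateserial -days 365 \
        -extfile <(printf "subjectAltName=DNS:ui,DNS:your_application.apps.example.com") \
        -out certs/ui.crt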