Installation on OpenShift 3.11
An informal HowTo for setting up all the third-party requirements in an OpenShift 3.11 environment
Disclaimer
This HowTo does not provide a production-grade setup for the Financial Services Workbench. It aims to outline an example setup of the required third-party components in a simple way and to shorten the time needed to get started.
Recommendations for a PoC
For a typical PoC situation we recommend the following setup:
- Helm 2.16.1
- Strimzi Version 0.14
- PostgreSQL Version 9.6 with helm chart version 8.2.1
- Keycloak Version 8.0.1 with helm chart version 6.2
- GitLab Version 12.7.5 with helm chart version 3.0.3
- MongoDB Version 3.6
- Usage of properly signed certificates for the OpenShift cluster. Configuring self-signed certificates will increase the effort and complexity of the installation. There should also be a wildcard certificate in place as the default router certificate.
We recommend installing all third-party services in one OpenShift project. It will be referred to as foundation in the remainder of this HowTo.
Create project foundation
As OpenShift cluster admin, create a new project called foundation.
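A minimal sketch of the project creation (the description text is an assumption, adjust as needed):

```shell
# Create the foundation project; requires cluster-admin or self-provisioner rights
oc new-project foundation --description="Third party components for the FSW installation"
```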
Install Helm Tiller
Official documentation
https://v2.helm.sh/docs/using_helm/
Download and install helm
Download the Helm 2 binary in version 2.16.1 (e.g. from https://github.com/helm/helm/releases/tag/v2.16.1), unpack it and add it to the PATH.
Create a service account for the tiller and grant it namespace admin permissions. This account will be used to run the tiller.
oc project foundation
oc create sa tiller
oc adm policy add-role-to-user admin -z tiller
Install the tiller in the foundation namespace
helm init --tiller-namespace foundation --service-account tiller
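After the init you can verify that the tiller came up and that the helm client can reach it (tiller-deploy is the default deployment name created by helm 2):

```shell
# Wait for the tiller deployment to roll out, then check client/server versions
oc rollout status deployment tiller-deploy -n foundation
helm version --tiller-namespace foundation
```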
Install Strimzi
Official documentation
https://strimzi.io
Installation
The Strimzi operator is installed via a template that can be retrieved from GitHub.
oc apply -f https://github.com/strimzi/strimzi-kafka-operator/releases/download/0.14.0/strimzi-cluster-operator-0.14.0.yaml -n foundation
The Strimzi operator service account needs cluster-wide rights for creating CRDs. Please check that the Kafka operator pod is able to start and that the service account (e.g. strimzi-cluster-operator) has sufficient rights.
In case the service account does not have cluster-admin privileges, they can be assigned, for example, via
oc adm policy add-cluster-role-to-user cluster-admin strimzi-cluster-operator
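A quick check whether the operator pod started and its service account exists could look like this (the name label is assumed from the upstream deployment manifest and may differ):

```shell
# The cluster operator pod should reach status Running
oc get pods -n foundation -l name=strimzi-cluster-operator
oc get sa strimzi-cluster-operator -n foundation
```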
Example configuration
Create the Kafka cluster by adding a Kafka custom resource in the project.
apiVersion: kafka.strimzi.io/v1beta1
kind: Kafka
metadata:
name: kafka
namespace: foundation
spec:
kafka:
version: 2.3.0
replicas: 3
listeners:
plain: {}
tls:
authentication:
type: scram-sha-512
config:
offsets.topic.replication.factor: 3
transaction.state.log.replication.factor: 3
transaction.state.log.min.isr: 2
log.message.format.version: '2.3'
storage:
type: ephemeral
zookeeper:
replicas: 3
storage:
type: ephemeral
entityOperator:
topicOperator: {}
userOperator: {}
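After applying the resource, the operator creates the zookeeper, broker and entity operator pods. Assuming the operator labels them with strimzi.io/cluster (as upstream Strimzi does), progress can be watched via:

```shell
# All pods belonging to the cluster should eventually be Running and Ready
oc get pods -n foundation -l strimzi.io/cluster=kafka -w
```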
Create a Kafka user by adding a KafkaUser custom resource in the project.
apiVersion: kafka.strimzi.io/v1beta1
kind: KafkaUser
metadata:
labels:
strimzi.io/cluster: kafka
name: kafka-user
namespace: foundation
spec:
authentication:
type: scram-sha-512
authorization:
acls:
- host: '*'
operation: Read
resource:
name: '*'
patternType: literal
type: topic
type: simple
The password for the user can be found in a secret that is named like the user.
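For the KafkaUser above the secret is called kafka-user; a sketch for reading the password (the key name password follows the Strimzi User Operator convention):

```shell
# The User Operator stores the generated SCRAM-SHA-512 password under the key "password"
oc get secret kafka-user -n foundation -o jsonpath="{.data.password}" | base64 -d; echo
```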
Install PostgreSQL
PostgreSQL is required for installing Keycloak and GitLab.
Official documentation
https://www.postgresql.org and https://github.com/helm/charts/tree/master/stable/postgresql
Installation via helm chart
Grant the anyuid SecurityContextConstraint (SCC) so that the default service account can run containers as any user.
oc adm policy add-scc-to-user anyuid -z default -n foundation
Add the chart repository to the local registry and update the local chart repositories.
helm repo add stable https://kubernetes-charts.storage.googleapis.com
helm repo update
Then start the helm chart installation.
helm upgrade --install foundation-database \
--namespace foundation \
--tiller-namespace foundation \
--set image.tag=9.6 \
--set "persistence.storageClass=nfs"
stable/postgresql
The PostgreSQL installation should now start. Once it is finished, a database is up and running at foundation-database-postgresql.foundation.svc on port 5432. You can find the password in the secret foundation-database-postgresql.
oc get secret --namespace foundation foundation-database-postgresql -o jsonpath="{.data.postgresql-password}" | base64 -d; echo
Install Keycloak
Official documentation
https://www.keycloak.org
Installation via helm chart
Add the chart repository to the local registry and update the local chart repositories.
helm repo add codecentric https://codecentric.github.io/helm-charts
helm repo update
Adjust the values.yaml for the installation. Run helm fetch codecentric/keycloak and unpack the chart afterwards in order to see the latest version of the values.yaml. For an example configuration please see below.
Create a database in PostgreSQL. For this, start a pod with a PostgreSQL client and open an interactive terminal in that container.
export POSTGRES_PASSWORD=$(kubectl get secret --namespace foundation foundation-database-postgresql -o jsonpath="{.data.postgresql-password}" | base64 --decode)
oc run foundation-database-postgresql-client --rm --tty -i \
--restart='Never' --namespace foundation --image docker.io/bitnami/postgresql:9.6 \
--env="PGPASSWORD=$POSTGRES_PASSWORD" --command -- psql \
--host foundation-database-postgresql -U postgres -d postgres -p 5432
Then you can create the database with:
CREATE DATABASE keycloak;
\q
Then start the helm chart installation.
helm upgrade --install keycloak \
--namespace foundation \
codecentric/keycloak \
--tiller-namespace foundation \
-f keycloak-values.yaml
The Keycloak installation should now start. Once it is finished, you can access it at the route.host given in the values.yaml file. To log in, use the provided credentials or read the secret keycloak-http.
oc get secret --namespace foundation keycloak-http -o jsonpath="{.data.password}" | base64 -d; echo
Example configuration (keycloak-values.yaml)
clusterDomain: apps.openshift-cluster.mydomain.cloud
## Optionally override the fully qualified name
# fullnameOverride: keycloak
## Optionally override the name
# nameOverride: keycloak
keycloak:
serviceAccount:
# Specifies whether a service account should be created
create: false
# The name of the service account to use.
# If not set and create is true, a name is generated using the fullname template
name:
securityContext:
fsGroup:
containerSecurityContext:
runAsUser:
runAsNonRoot: true
## Username for the initial Keycloak admin user
username: keycloak
## Password for the initial Keycloak admin user. Applicable only if existingSecret is not set.
## If not set, a random 10 characters password will be used
password: ""
# Specifies an existing secret to be used for the admin password
existingSecret: ""
# The key in the existing secret that stores the password
existingSecretKey: password
## OpenShift route configuration.
route:
enabled: true
path: /
annotations: {}
# kubernetes.io/tls-acme: "true"
# haproxy.router.openshift.io/disable_cookies: "true"
# haproxy.router.openshift.io/balance: roundrobin
labels: {}
# key: value
# Host name for the route
host: keycloak.apps.openshift-cluster.mydomain.cloud
## Persistence configuration
persistence:
# If true, the Postgres chart is deployed
deployPostgres: false
# The database vendor. Can be either "postgres", "mysql", "mariadb", or "h2"
dbVendor: postgres
## The following values only apply if "deployPostgres" is set to "false"
# Specifies an existing secret to be used for the database password
existingSecret: ""
# The key in the existing secret that stores the password
existingSecretKey: password
dbName: keycloak
dbHost: foundation-database-postgresql.foundation.svc
dbPort: 5432
dbUser: postgres
# Only used if no existing secret is specified. In this case a new secret is created
dbPassword: "postgres"
test:
enabled: true
securityContext:
fsGroup:
containerSecurityContext:
runAsUser:
runAsNonRoot: true
Install MongoDB
Official documentation
https://www.mongodb.com
Installation via OpenShift Developer Catalog
Install MongoDB via the Developer Catalog into the namespace foundation. Choose the template MongoDB provided by Red Hat. Please ensure that the version of the MongoDB image is set to 3.6.
If the template is not available in the OpenShift Developer Catalog it can be downloaded at https://github.com/openshift/origin/blob/master/examples/db-templates/mongodb-persistent-template.json. Please follow the official documentation of OpenShift on how to apply this template in your cluster.
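Assuming the template file has been downloaded, uploading and instantiating it could be sketched like this (MONGODB_VERSION is a parameter of the upstream origin template; verify it against the version you downloaded):

```shell
# Upload the template into the project and instantiate it with image tag 3.6
oc create -f mongodb-persistent-template.json -n foundation
oc new-app --template=mongodb-persistent -p MONGODB_VERSION=3.6 -n foundation
```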
Retrieving the values
You can retrieve the access values from the secret mongodb.
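A sketch for reading the connection values (the key names are assumed from the upstream Red Hat template; verify them with oc describe secret mongodb):

```shell
# Username, password and database name as created by the template
oc get secret mongodb -n foundation -o jsonpath="{.data.database-user}" | base64 -d; echo
oc get secret mongodb -n foundation -o jsonpath="{.data.database-password}" | base64 -d; echo
oc get secret mongodb -n foundation -o jsonpath="{.data.database-name}" | base64 -d; echo
```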
Install GitLab
Official documentation
https://docs.gitlab.com/charts/
Installation via helm chart
Add the chart repository to the local registry and update the local chart repositories.
helm repo add gitlab https://charts.gitlab.io/
helm repo update
Adjust the values.yaml for the installation. Run helm fetch gitlab/gitlab and unpack the chart afterwards in order to see the latest version of the values.yaml and use it as a base for your adjustments. For an example configuration please see below.
Create a database in PostgreSQL. For this, start a pod with a PostgreSQL client and open an interactive terminal in that container.
export POSTGRES_PASSWORD=$(kubectl get secret --namespace foundation foundation-database-postgresql -o jsonpath="{.data.postgresql-password}" | base64 --decode)
oc run foundation-database-postgresql-client --rm --tty -i \
--restart='Never' --namespace foundation --image docker.io/bitnami/postgresql:9.6 \
--env="PGPASSWORD=$POSTGRES_PASSWORD" --command -- psql \
--host foundation-database-postgresql -U postgres -d postgres -p 5432
Then you can create the database with:
CREATE DATABASE gitlab;
\q
Create a secret containing the password for the PostgreSQL database:
oc create secret generic "gitlab-postgres-password" \
--type="Opaque" \
--from-literal="password"="postgres"
Create a secret containing the TLS settings. The files refer to the certificate chain and the private key that are used for SSL communication with the cluster. Most commonly these are the same certificates as used for the default router.
oc create secret tls \
-n foundation gitlab-tls-secret \
--cert=fullchain.cer \
--key=tls.key
Create a service account for the gitlab-runner and grant it the admin role in the namespace
oc create sa gitlab-runner
oc adm policy add-role-to-user admin -z gitlab-runner -n foundation
Grant the anyuid SecurityContextConstraint (SCC) so that these service accounts can run containers as any user.
oc adm policy add-scc-to-user anyuid -z default -n foundation
oc adm policy add-scc-to-user anyuid -z gitlab-runner -n foundation
oc adm policy add-scc-to-user anyuid -z gitlab-shared-secrets -n foundation
Then start the helm chart installation.
helm upgrade --install gitlab \
--namespace foundation \
gitlab/gitlab \
--tiller-namespace foundation \
-f gitlab-values.yaml
The GitLab installation should now start. Once it is finished, you can access it at https://gitlab.<domain>, where <domain> is the global.hosts.domain configured in the values.yaml file. To log in, use the initial password of the root account, which is placed in a secret ending with -initial-root-password.
oc get secret --namespace foundation gitlab-gitlab-initial-root-password -o jsonpath="{.data.password}" | base64 -d; echo
Example configuration (gitlab-values.yaml)
global:
## doc/installation/deployment.md#deploy-the-community-edition
edition: ce
## doc/charts/globals.md#configure-host-settings
hosts:
domain: apps.openshift-cluster.mydomain.cloud
https: true
## doc/charts/globals.md#configure-ingress-settings
ingress:
configureCertmanager: false
annotations: {}
enabled: true
tls:
enabled: true
secretName: gitlab-tls-secret
## doc/charts/globals.md#configure-postgresql-settings
psql:
password:
secret: gitlab-postgres-password
key: password
host: foundation-database-postgresql.foundation.svc
port: 5432
username: postgres
database: gitlab
# preparedStatements: false
appConfig:
## doc/charts/globals.md#omniauth
omniauth:
enabled: false
autoSignInWithProvider:
syncProfileFromProvider: []
syncProfileAttributes: ["email"]
allowSingleSignOn: ["saml"]
blockAutoCreatedUsers: true
autoLinkLdapUser: false
autoLinkSamlUser: false
externalProviders: []
allowBypassTwoFactor: []
providers: []
# - secret: gitlab-config-omniauth-oauth2
# key: provider
## End of global.appConfig
## End of global
## Installation and configuration of jetstack/cert-manager
## See requirements.yaml for current version
certmanager:
# Install cert-manager chart. Set to false if you already have cert-manager
# installed or if you are not using cert-manager.
install: false
## doc/charts/nginx/index.md
## doc/architecture/decisions.md#nginx-ingress
## Installation and configuration of charts/nginx
nginx-ingress:
enabled: false
## Installation and configuration of stable/prometheus
## See requirements.yaml for current version
prometheus:
install: false
## Installation and configuration of stable/postgresql
## See requirements.yaml for current version
postgresql:
install: false
## Installation and configuration charts/registry
## doc/architecture/decisions.md#registry
## doc/charts/registry/
registry:
enabled: false
## Installation and configuration of gitlab/gitlab-runner
## See requirements.yaml for current version
gitlab-runner:
install: true
rbac:
create: false
serviceAccountName: gitlab-runner
runners:
cache:
cacheType: s3
s3BucketName: runner-cache
cacheShared: true
s3BucketLocation: us-east-1
s3CachePath: gitlab-runner
s3CacheInsecure: false
locked: false
serviceAccountName: gitlab-runner
imagePullSecrets:
- runner-dockercfg-secret
## Minio configuration if the service needs root permissions for the persistent volume
minio:
persistence:
storageClass: nfs
# securityContext:
# runAsUser: 0
# runAsRoot: true
# fsGroup: 0
Summary
After following the steps outlined in this guide you have set up the third-party components that are necessary for continuing the installation of the FSW. This is probably the easiest way to get started, but it is not production-ready and the settings used are not to be considered secure.