Installing Third-Party Components
Informal description of how to set up the third-party components in an OpenShift 4.5 cluster
Disclaimer
This chapter contains hints and guidelines for installing the required third-party components in an OpenShift 4.5 cluster. It does not describe a production-grade setup for IBM Financial Services Workbench; it aims to outline an example setup of the required third-party components in a simple way and to shorten the start-up time. Given how quickly the involved technologies and products evolve, this information can be outdated even one day after publishing. None of the values given as examples in this guide are considered safe.
Recommendations for a PoC
For a typical PoC situation we recommend the following setup:
Component | Tested Version | Note |
---|---|---|
Helm 2 | 2.16.6 | Required by IBM Financial Services Workbench 2.7.0 |
Strimzi - Apache Kafka on Kubernetes | 0.16.2* | Provided by 'Strimzi' Operator v0.14.0 |
Keycloak | 9.0.2* | Provided by 'Red Hat Keycloak' Operator v9.0.2 |
Gitlab | 12.7.5 | Provided by 'Gitlab' Helm Chart v3.0.3 |
MongoDB | 3.6 | Provided by 'Red Hat MongoDB' template |
Create OpenShift projects
It is recommended, and assumed in the following, that all third-party components except GitLab are installed in a separate OpenShift project called foundation.
If you decide to also install GitLab, it should go into another project called foundation-gitlab. This is due to the special security context constraint requirements that GitLab needs in order to work properly. Overall, it is preferable to use an existing Git repository service.
As a cluster administrator, run the following command to create the project foundation:
$ oc new-project foundation
Repeat to create the project foundation-gitlab for the installation of GitLab:
$ oc new-project foundation-gitlab
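As an optional quick check that both projects exist, you can list them:
$ oc get projects foundation foundation-gitlab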
Install Helm
Official documentation
Install the Helm CLI
You can either download and install the pre-built binary release of Helm v2.16.6 or use the Helm binary which comes with the product (to be found in ./ssob-install/deployments/helm).
To use the installer script provided by Helm that will automatically fetch the desired version of Helm and install it locally, run the following command:
$ export DESIRED_VERSION=v2.16.6
$ curl -fsSL https://raw.githubusercontent.com/helm/helm/master/scripts/get | sudo bash
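Alternatively, a minimal sketch of installing the pre-built binary by hand, assuming a Linux amd64 workstation, that /usr/local/bin is on your PATH, and the public Helm release archive for v2.16.6:
# Download and unpack the Helm v2.16.6 release archive
$ curl -fsSL -o helm-v2.16.6-linux-amd64.tar.gz https://get.helm.sh/helm-v2.16.6-linux-amd64.tar.gz
$ tar -zxvf helm-v2.16.6-linux-amd64.tar.gz
# Copy the client binary into the PATH and verify the client version
$ sudo cp linux-amd64/helm /usr/local/bin/helm
$ helm version --client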
Create Service Account for Helm Tiller
Set up the service account role bindings required by Tiller by creating a service account and granting it admin rights for the namespace:
$ oc project foundation-gitlab
$ oc create sa tiller
$ oc adm policy add-role-to-user admin -z tiller
Install Helm Tiller with TLS
You can either provide your own certificates for configuring TLS/SSL between Helm and Tiller or reuse the Helm certificates of an existing IBM Cloud Pak for Data (CPD 3.5) installation.
To reuse the Helm certificates from the existing CPD Tiller installation, identify the following variables:
Variable | Replacement |
---|---|
{tiller_namespace}
|
Namespace of the existing CPD installation (default: zen) |
{tiller_secret}
|
Name of the Tiller secret that was created by the CPD installation (Commonly: tiller-secret or helm-secret) |
{cert_folder}
|
Folder for storing the certificate files (e.g. $HOME/.helm) |
The folder {cert_folder} is needed again for the helm install command in section Start the Helm chart installation. Create the certificate and key files for Helm Tiller by setting the variables in the following command block and executing the commands:
export TILLER_NAMESPACE={tiller_namespace}
export TILLER_SECRET={tiller_secret}
export SECRET_FOLDER={cert_folder}
mkdir $SECRET_FOLDER
cd $SECRET_FOLDER
# Export secret certificates as files
oc get secret $TILLER_SECRET -n $TILLER_NAMESPACE -o yaml|grep -A3 '^data:'|tail -3 | awk -F: '{system("echo "$2" |base64 --decode > "$1)}'
# Rename to standard names for HELM certificates
mv ca.cert.pem ca.pem
mv helm.cert.pem cert.pem
mv helm.key.pem key.pem
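# Alternative to the two steps above, assuming the secret holds exactly the keys
# ca.cert.pem, helm.cert.pem and helm.key.pem (as implied by the renames above):
# oc get secret $TILLER_SECRET -n $TILLER_NAMESPACE -o jsonpath='{.data.ca\.cert\.pem}' | base64 --decode > ca.pem
# oc get secret $TILLER_SECRET -n $TILLER_NAMESPACE -o jsonpath='{.data.helm\.cert\.pem}' | base64 --decode > cert.pem
# oc get secret $TILLER_SECRET -n $TILLER_NAMESPACE -o jsonpath='{.data.helm\.key\.pem}' | base64 --decode > key.pem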
# Set certificate file permissions
chmod 700 $PWD
chmod 644 ./ca.pem
chmod 644 ./cert.pem
chmod 600 ./key.pem
export HELM_TLS_CA_CERT=$PWD/ca.pem
export HELM_TLS_CERT=$PWD/cert.pem
export HELM_TLS_KEY=$PWD/key.pem
# Verify TLS communication
helm version --tls --tiller-namespace $TILLER_NAMESPACE
You should see helm version command output like this:
Client: &version.Version{SemVer:"v2.16.6", GitCommit:"0e7f3b6637f7af8fcfddb3d2941fcc7cbebb0085", GitTreeState:"clean"}
Server: &version.Version{SemVer:"v2.16.6", GitCommit:"0e7f3b6637f7af8fcfddb3d2941fcc7cbebb0085", GitTreeState:"clean"}
Install Helm Tiller with TLS
In order to install Helm Tiller and configure TLS/SSL between Helm and Tiller, run the helm init command with the --tiller-tls-* parameters and the names of the certificates, as shown in the following example:
$ helm init \
--tiller-namespace foundation-gitlab \
--tiller-tls \
--tiller-tls-cert $HELM_TLS_CERT \
--tiller-tls-key $HELM_TLS_KEY \
--tiller-tls-verify \
--tls-ca-cert $HELM_TLS_CA_CERT \
--service-account tiller
After a few minutes, validate that Tiller is deployed in the namespace:
$ oc -n foundation-gitlab rollout status deploy/tiller-deploy
deployment "tiller-deploy" successfully rolled out
Verify the Tiller installation
Run the following command to validate that Helm can communicate with the Tiller service:
$ helm version --tls --tiller-namespace foundation-gitlab
Client: &version.Version{SemVer:"v2.16.6", GitCommit:"0e7f3b6637f7af8fcfddb3d2941fcc7cbebb0085", GitTreeState:"clean"}
Server: &version.Version{SemVer:"v2.16.6", GitCommit:"0e7f3b6637f7af8fcfddb3d2941fcc7cbebb0085", GitTreeState:"clean"}
Install Kafka
If you already have Kafka running, skip this step and proceed to Install Keycloak. See also Pre-installation tasks for the configuration values of your Kafka installation that you need to gather for the installation of IBM Financial Services Workbench.
Official documentation
Install the Strimzi Operator from the OperatorHub
As a cluster administrator, install the Strimzi operator from the OperatorHub to the namespace foundation as follows:
- Navigate in the web console to the Operators → OperatorHub page.
- Filter by keyword: Strimzi
- Select the operator: Strimzi (Community) provided by Strimzi.
- Read the information about the operator and click Install.
- On the Create Operator Subscription page:
  - Select option A specific namespace on the cluster with namespace foundation
  - Select an update channel (if more than one is available).
  - Select Automatic approval strategy
  - Click Subscribe
- After the subscription’s upgrade status is up to date, navigate in the web console to the Operators → Installed Operators page.
- Select the Strimzi Apache Kafka Operator and verify that the content for the Overview tab of the Operators → Operator Details page is displayed.
Create the Kafka instance
Create the Kafka CRD instance in the namespace foundation as follows:
- Navigate in the web console to the Operators → Installed Operators page.
- Select the Strimzi Apache Kafka Operator
- Navigate to the Kafka tab of the Operators → Operator Details page.
- Click Create Kafka
- In the Strimzi Apache Kafka Operator → Create Kafka page:
  - Enter the resource definition (see Example configuration 'Kafka')
  - Click on Create
- Verify that in the Kafka tab the newly created kafka CRD instance is displayed.
Example configuration 'Kafka'
apiVersion: kafka.strimzi.io/v1beta1
kind: Kafka
metadata:
name: kafka
namespace: foundation
spec:
kafka:
replicas: 3
listeners:
external:
type: route
plain: {}
tls:
authentication:
type: scram-sha-512
config:
offsets.topic.replication.factor: 3
transaction.state.log.replication.factor: 3
transaction.state.log.min.isr: 2
storage:
type: ephemeral
zookeeper:
replicas: 3
storage:
type: ephemeral
entityOperator:
topicOperator: {}
userOperator: {}
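If you prefer the CLI over the web console, the same resource can be applied with oc. This is only a sketch; it assumes the example above has been saved to a file named kafka.yaml and that the Strimzi operator is already installed in the namespace:
# Create the Kafka CRD instance from the example configuration
$ oc -n foundation apply -f kafka.yaml
# Wait until the operator reports the Kafka cluster as ready (this can take a few minutes)
$ oc -n foundation wait kafka/kafka --for=condition=Ready --timeout=300s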
Create the Kafka User instance
Create a KafkaUser CRD instance in the namespace foundation as follows:
- Navigate in the web console to the Operators → Installed Operators page.
- Select the Strimzi Apache Kafka Operator
- Navigate to the Kafka User tab of the Operators → Operator Details page.
- Click Create Kafka User
- In the Strimzi Apache Kafka Operator → Create KafkaUser page:
  - Enter the resource definition (see Example configuration 'KafkaUser')
  - Click on Create
- Verify that in the Kafka User tab the newly created kafka-user CRD instance is displayed.
Example configuration 'KafkaUser'
apiVersion: kafka.strimzi.io/v1beta1
kind: KafkaUser
metadata:
labels:
strimzi.io/cluster: kafka
name: kafka-user
namespace: foundation
spec:
authentication:
type: scram-sha-512
authorization:
acls:
- host: '*'
operation: Read
resource:
name: '*'
patternType: literal
type: topic
type: simple
Retrieve the credentials
You can retrieve the credentials for connecting to the Kafka broker by looking for a Kubernetes secret named after the user you provided (e.g. kafka-user):
$ oc -n foundation get secret kafka-user -o jsonpath='{.data.password}' | base64 -d; echo
Retrieve the certificates
Get the certificates that you need during the installation of IBM Financial Services Workbench:
$ oc -n foundation get secret kafka-cluster-ca -o jsonpath='{.data.ca\.key}' | base64 -d > kafka.ca.key
$ oc -n foundation get secret kafka-cluster-ca-cert -o jsonpath='{.data.ca\.crt}' | base64 -d > kafka.ca.crt
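You will also need the address of the external Kafka listener later. With the route listener from the example configuration, Strimzi exposes an external bootstrap route named after the cluster (here kafka-kafka-bootstrap); the following sketch assumes that naming convention:
# Determine the host of the external bootstrap route (TLS on port 443 for route listeners)
$ oc -n foundation get route kafka-kafka-bootstrap -o jsonpath='{.status.ingress[0].host}'; echo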
Install Keycloak
If you already have a Keycloak instance running, consider using that; skip this step and proceed to Install MongoDB. See also Pre-installation tasks for the configuration values of your Keycloak installation that you need to gather for the installation of IBM Financial Services Workbench.
Official documentation
Install Keycloak Operator from the OperatorHub
As a cluster administrator, install the Keycloak operator from the OperatorHub to the namespace foundation:
- Navigate in the web console to the Operators → OperatorHub page.
- Filter by keyword: Keycloak
- Select the operator: Keycloak (Community) provided by Red Hat.
- Read the information about the operator and click Install.
- On the Create Operator Subscription page:
  - Select option A specific namespace on the cluster with namespace foundation
  - Select an Update Channel (if more than one is available).
  - Select Automatic approval strategy
  - Click Subscribe
- After the Subscription’s upgrade status is Up to date, navigate in the web console to the Operators → Installed Operators page.
- Select the Keycloak Operator and verify that the content for the Overview tab of the Operators → Operator Details page is displayed.
Create the Keycloak instance
Create the Keycloak CRD instance in the namespace foundation:
- Navigate in the web console to the Operators → Installed Operators page.
- Select the Keycloak Operator
- Navigate to the Keycloak tab of the Operators → Operator Details page.
- Click Create Keycloak
- In the Keycloak Operator → Create Keycloak page:
  - Enter the resource definition (see Example configuration 'Keycloak')
  - Click on Create
- Verify that in the Keycloak tab the newly created foundation-keycloak CRD instance is displayed.
Example configuration 'Keycloak'
apiVersion: keycloak.org/v1alpha1
kind: Keycloak
metadata:
name: foundation-keycloak
labels:
app: sso
namespace: foundation
spec:
instances: 1
extensions:
- >-
https://github.com/aerogear/keycloak-metrics-spi/releases/download/1.0.4/keycloak-metrics-spi-1.0.4.jar
externalAccess:
enabled: false
Note: externalAccess is disabled in the example above. A public route is instead created manually in the next section, with a host under the cluster's wildcard domain *.apps.{CLUSTER_DOMAIN} being routed to the Keycloak service.
Create public route for Keycloak
Create a public route keycloak-external as follows, with {CLUSTER_DOMAIN} set to your cluster domain:
kind: Route
apiVersion: route.openshift.io/v1
metadata:
name: keycloak-external
namespace: foundation
labels:
app: keycloak
spec:
host: keycloak-foundation.{CLUSTER_DOMAIN}
to:
kind: Service
name: keycloak
weight: 100
port:
targetPort: keycloak
tls:
termination: reencrypt
insecureEdgeTerminationPolicy: None
wildcardPolicy: None
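Assuming the route definition above has been saved to a file named keycloak-route.yaml, it can be created and checked with oc:
# Create the route and display the resulting host
$ oc -n foundation apply -f keycloak-route.yaml
$ oc -n foundation get route keycloak-external -o jsonpath='{.spec.host}'; echo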
Retrieve the credentials
You can retrieve the credentials for connecting to Keycloak by looking for a Kubernetes secret named credential-foundation-keycloak:
$ oc -n foundation get secret credential-foundation-keycloak -o jsonpath='{.data.ADMIN_USERNAME}' | base64 -d; echo
$ oc -n foundation get secret credential-foundation-keycloak -o jsonpath='{.data.ADMIN_PASSWORD}' | base64 -d; echo
Retrieve the certificates
The certificates are needed later during installation (truststore.trustmap.identity), so please download and save them temporarily.
$ KEYCLOAK_HOST=`oc -n foundation get route keycloak-external -ojsonpath={.spec.host}`
$ echo | openssl s_client -showcerts -connect $KEYCLOAK_HOST:443 2>&1 | sed -ne '/-BEGIN CERTIFICATE-/,/-END CERTIFICATE-/p' > keycloak-fullchain.pem
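To double-check the downloaded chain, you can print the subject and validity period of its first certificate with plain openssl (no assumptions beyond the file created above):
# Inspect the retrieved certificate chain
$ openssl x509 -in keycloak-fullchain.pem -noout -subject -issuer -dates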
Verify the Keycloak installation
Verify that Keycloak is reachable via the route you just created in section Create public route for Keycloak; its host is also the value used for the route.host parameter in the values.yaml file:
$ oc -n foundation get route keycloak-external
Install MongoDB
If you already have a MongoDB instance running, consider using that; skip this step and proceed to Install GitLab. See also Pre-installation tasks for the configuration values of your MongoDB installation that you need to gather for the installation of IBM Financial Services Workbench.
Official documentation
Install MongoDB from the OpenShift Developer Catalog
Install the MongoDB database service in the namespace foundation:
- Switch in the web console to the Developer perspective.
- Click +Add inside the left navigation of the web console.
- Select the From Catalog tile to navigate to the Developer Catalog.
- Filter by keyword: MongoDB
- Select the database service: MongoDB (Community) provided by Red Hat.
- Read the information about the service and click Instantiate Template.
- Modify the following template parameters on the Instantiate Template page:
  - Namespace: foundation
  - Memory Limit: 512Mi
  - Database Service Name: mongodb
  - MongoDB Database Name: mongodb
  - Volume Capacity: 1Gi
  - Version of MongoDB Image: 3.6
  - Click Create
- Wait a few minutes until the Deployment rollout is complete. The current status is displayed in section Conditions on the Template Instances → Template page.
Retrieve the credentials
You can retrieve the credentials and database name for connecting to the MongoDB service by looking for a Kubernetes secret named mongodb:
export MONGODB_DATABASE=$(oc -n foundation get secret mongodb -o jsonpath='{.data.database-name}' | base64 -d)
export MONGODB_USER=$(oc -n foundation get secret mongodb -o jsonpath='{.data.database-user}' | base64 -d)
export MONGODB_PASSWORD=$(oc -n foundation get secret mongodb -o jsonpath='{.data.database-password}' | base64 -d)
export MONGODB_ADMIN_PASSWORD=$(oc -n foundation get secret mongodb -o jsonpath='{.data.database-admin-password}' | base64 -d)
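For later reference you can also assemble a standard MongoDB connection URI from these values. This is only a sketch; the host mongodb.foundation.svc.cluster.local:27017 is an assumption derived from the in-cluster service name chosen in the template parameters above:
# Build a MongoDB connection URI from the exported values (hostname is an assumption, see above)
export MONGODB_URI="mongodb://${MONGODB_USER}:${MONGODB_PASSWORD}@mongodb.foundation.svc.cluster.local:27017/${MONGODB_DATABASE}"
echo $MONGODB_URI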
Verify the MongoDB installation
After exporting the credentials and database name, first open a remote shell session to the running MongoDB pod:
$ oc -n foundation get pod | grep mongodb
$ oc -n foundation rsh <pod>
From the bash shell, verify that the MongoDB user can log in with its credentials:
sh-4.2$ mongo -u $MONGODB_USER -p $MONGODB_PASSWORD $MONGODB_DATABASE --eval "db.version()"
MongoDB shell version v3.6.3
connecting to: mongodb://127.0.0.1:27017/mongodb
MongoDB server version: 3.6.3
3.6.3
From the bash shell, verify that the MongoDB admin user can log in with its credentials:
sh-4.2$ mongo -u admin -p $MONGODB_ADMIN_PASSWORD admin --eval "db.version()"
MongoDB shell version v3.6.3
connecting to: mongodb://127.0.0.1:27017/mongodb
MongoDB server version: 3.6.3
3.6.3
Install GitLab
Official documentation
Add the Helm charts repository
Add the GitLab charts repository to the local repository list and update the local chart repositories:
$ helm repo add gitlab https://charts.gitlab.io/
$ helm repo update
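To confirm that the tested chart version (3.0.3, see the table above) is available in the repository, you can list all chart versions using Helm 2's -l (--versions) flag:
# List available versions of the gitlab/gitlab chart
$ helm search gitlab/gitlab -l | head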
Adjust the gitlab-values.yaml
Create a file named gitlab-values.yaml based on the example configuration shown below, replace {CLUSTER_DOMAIN} with the base domain of your cluster and change {GITLAB_URI} to the URL at which you want to access your GitLab instance.
Example configuration 'gitlab-values.yaml'
# Example values for gitlab/gitlab chart
## NOTICE
# Due to the scope and complexity of this chart, all possible values are
# not documented in this file. Extensive documentation for these values
# and more can be found at https://gitlab.com/gitlab-org/charts/gitlab/
## The global properties are used to configure multiple charts at once.
## Extended documentation at doc/charts/globals.md
global:
## GitLab operator is Alpha. Not for production use.
operator:
enabled: false
rollout:
# Enables automatic pause for deployment rollout. This must be set to `true` to fix
# Helm's issue with 3-way merge. See:
# https://gitlab.com/gitlab-org/charts/gitlab/issues/1262
# https://github.com/helm/helm/issues/3805
autoPause: true
## doc/installation/deployment.md#deploy-the-community-edition
edition: ce
## doc/charts/globals.md#gitlab-version
# gitlabVersion: master
## doc/charts/globals.md#application-resource
application:
create: false
links: []
allowClusterRoles: true
## doc/charts/globals.md#configure-host-settings
hosts:
domain: {CLUSTER_DOMAIN}
https: false
gitlab:
name: {GITLAB_URI}
## doc/charts/globals.md#configure-ingress-settings
ingress:
configureCertmanager: false
annotations: {}
enabled: true
tls:
enabled: false
secretName: gitlab-tls-secret
gitlab:
initialRootPassword: {}
psql:
password: {}
redis:
password:
enabled: true
gitaly:
enabled: true
authToken: {}
# secret:
# key:
serviceName: gitlab-unicorn
internal:
names: ['default']
external: []
tls:
enabled: false
## doc/charts/globals.md#configure-minio-settings
minio:
enabled: true
credentials: {}
# secret:
## doc/charts/globals.md#configure-grafana-integration
grafana:
enabled: false
## doc/charts/globals.md#configure-appconfig-settings
## Rails based portions of this chart share many settings
appConfig:
## doc/charts/globals.md#general-application-settings
enableUsagePing: true
enableSeatLink: true
enableImpersonation:
defaultCanCreateGroup: true
usernameChangingEnabled: true
issueClosingPattern:
defaultTheme:
defaultProjectsFeatures:
issues: true
mergeRequests: true
wiki: true
snippets: true
builds: true
webhookTimeout:
maxRequestDurationSeconds:
## doc/charts/globals.md#cron-jobs-related-settings
cron_jobs: {}
## doc/charts/globals.md#gravatarlibravatar-settings
gravatar:
plainUrl:
sslUrl:
## doc/charts/globals.md#hooking-analytics-services-to-the-gitlab-instance
extra:
googleAnalyticsId:
piwikUrl:
piwikSiteId:
## doc/charts/globals.md#omniauth
omniauth:
enabled: true
autoSignInWithProvider:
syncProfileFromProvider: []
syncProfileAttributes: ['email']
allowSingleSignOn: ['saml']
blockAutoCreatedUsers: true
autoLinkLdapUser: false
autoLinkSamlUser: false
externalProviders: []
allowBypassTwoFactor: []
providers:
- secret: gitlab-oauth2
key: provider
## End of global.appConfig
## doc/charts/geo.md
geo:
enabled: false
## doc/charts/globals.md#configure-gitlab-shell-settings
shell:
authToken: {}
# secret:
# key:
hostKeys: {}
# secret:
## Rails application secrets
## Secret created according to doc/installation/secrets.md#gitlab-rails-secret
## If allowing shared-secrets generation, this is OPTIONAL.
railsSecrets: {}
# secret:
## Rails generic setting, applicable to all Rails-based containers
rails:
bootsnap: # Enable / disable Shopify/Bootsnap cache
enabled: true
## doc/charts/globals.md#configure-registry-settings
registry:
bucket: registry
certificate: {}
# secret:
httpSecret: {}
# secret:
# key:
## GitLab Runner
## Secret created according to doc/installation/secrets.md#gitlab-runner-secret
## If allowing shared-secrets generation, this is OPTIONAL.
runner:
registrationToken: {}
# secret:
## Timezone for containers.
time_zone: UTC
## Global Service Annotations
service:
annotations: {}
## Global Deployment Annotations
deployment:
annotations: {}
antiAffinity: soft
## doc/installation/secrets.md#gitlab-workhorse-secret
workhorse: {}
# secret:
# key:
## doc/charts/globals.md#configure-unicorn
webservice:
workerTimeout: 60
## doc/charts/globals.md#custom-certificate-authorities
# configuration of certificates container & custom CA injection
certificates:
image:
repository: registry.gitlab.com/gitlab-org/build/cng/alpine-certificates
tag: 20171114-r3
customCAs: []
# - secret: custom-CA
# - secret: more-custom-CAs
## kubectl image used by hooks to carry out specific jobs
kubectl:
image:
repository: registry.gitlab.com/gitlab-org/build/cng/kubectl
tag: 1.13.12
pullSecrets: []
securityContext:
# in most base images, this is `nobody:nogroup`
runAsUser: 65534
fsGroup: 65534
busybox:
image:
repository: busybox
tag: latest
## End of global
upgradeCheck:
enabled: true
image: {}
# repository:
# tag:
securityContext:
# in alpine/debian/busybox based images, this is `nobody:nogroup`
runAsUser: 65534
fsGroup: 65534
tolerations: []
resources:
requests:
cpu: 50m
## Installation & configuration of jetstack/cert-manager
## See requirements.yaml for current version
certmanager:
createCustomResource: true
nameOverride: cert-manager
# Install cert-manager chart. Set to false if you already have cert-manager
# installed or if you are not using cert-manager.
install: false
# Other cert-manager configurations from upstream
# See https://github.com/jetstack/cert-manager/blob/master/deploy/charts/cert-manager/README.md#configuration
rbac:
create: true
webhook:
enabled: false
## doc/charts/nginx/index.md
## doc/architecture/decisions.md#nginx-ingress
## Installation & configuration of charts/nginx
nginx-ingress:
enabled: false
## Installation & configuration of stable/prometheus
## See requirements.yaml for current version
prometheus:
install: false
## Installation & configuration of stable/postgresql
## See requirements.yaml for current version
postgresql:
postgresqlUsername: gitlab
# This just needs to be set. It will use a second entry in existingSecret for postgresql-postgres-password
postgresqlPostgresPassword: bogus
install: true
postgresqlDatabase: gitlabhq_production
usePasswordFile: true
#existingSecret: 'bogus'
initdbScriptsConfigMap: 'bogus'
metrics:
enabled: true
## Optionally define additional custom metrics
## ref: https://github.com/wrouesnel/postgres_exporter#adding-new-metrics-via-a-config-file
## Installation & configuration charts/registry
## doc/architecture/decisions.md#registry
## doc/charts/registry/
# registry:
# enabled: false
## Automatic shared secret generation
## doc/installation/secrets.md
## doc/charts/shared-secrets
shared-secrets:
enabled: true
rbac:
create: true
## Installation & configuration of gitlab/gitlab-runner
## See requirements.yaml for current version
gitlab-runner:
install: false
rbac:
create: false
runners:
locked: false
cache:
cacheType: s3
s3BucketName: runner-cache
cacheShared: true
s3BucketLocation: us-east-1
s3CachePath: gitlab-runner
s3CacheInsecure: false
## Settings for individual sub-charts under GitLab
gitlab:
## doc/charts/gitlab/task-runner
task-runner:
replicas: 1
global.hosts.https and global.ingress.tls are intentionally disabled to prevent issues caused by self-signed certificates.
Create Keycloak realm 'fsw'
To use Keycloak as the identity provider for GitLab authorization, the following steps are necessary:
Step 1: Create a Keycloak Realm 'fsw'
Create a realm fsw (called {KEYCLOAK_REALM} below) and change to this realm:
- Go to the Add realm page
- Enter name: fsw
- Click Create
This realm will later also be used by Solution Designer to store the users and their roles.
Step 2: Create a Keycloak Client 'gitlab-client'
- Navigate to the Configure → Clients page
- Click the Create button in the upper right corner of the table.
- Set the following parameters:
  - Client ID: gitlab-client
  - Client Protocol: openid-connect
  - Root URL: {GITLAB_URI}
- Create the client by clicking on Save
- In the client configuration page Clients → gitlab-client set the following parameters:
  - Access Type: confidential (needs to be set first)
  - Standard Flow Enabled: ON
  - Implicit Flow Enabled: ON
  - Direct Access Grants Enabled: ON
  - Service Accounts Enabled: ON
  - Authorization Enabled: ON
  - Valid Redirect URIs: {GITLAB_URI}/*
  - Web Origins: {GITLAB_URI}
  - Save the configuration by clicking on Save
- Navigate to the tab Mappers in the Clients → gitlab-client page
- Click the Builtin button in the upper right corner of the table.
- Select the following built-in token mappers of type User Property:
  - email
  - username
  - given name
  - full name
  - family name
- Add the selected mappers by clicking on Add selected
- Navigate to the tab Credentials in the Clients → gitlab-client page
- Save the secret (called {KEYCLOAK_CLIENT_SECRET} below) that is used to create the gitlab-oauth2 secret.
Create the gitlab-oauth2 secret
The following steps show how to enable the OAuth authorization using Keycloak as identity provider:
- Define the configuration of the OAuth authorization and save it in an environment variable:
  - {KEYCLOAK_CLIENT_SECRET} refers to the Keycloak client secret of step 2.
  - {KEYCLOAK_URI} refers to the Keycloak URL of project foundation in the cluster.
  - {KEYCLOAK_REALM} refers to the realm fsw of step 1.
export OMNI_AUTH_CFG="name: 'oauth2_generic'
app_id: 'gitlab-client'
app_secret: '{KEYCLOAK_CLIENT_SECRET}'
args:
  client_options:
    site: 'https://{KEYCLOAK_URI}'
    user_info_url: '/auth/realms/{KEYCLOAK_REALM}/protocol/openid-connect/userinfo'
    authorize_url: '/auth/realms/{KEYCLOAK_REALM}/protocol/openid-connect/auth'
    token_url: '/auth/realms/{KEYCLOAK_REALM}/protocol/openid-connect/token'
  user_response_structure:
    id_path: 'sub'
    attributes:
      name: 'name'
      email: 'email'
      nickname: 'username'
      first_name: 'given_name'
      last_name: 'family_name'"
- Create a secret containing the omni-auth config. The secret must be named gitlab-oauth2 and should contain the key named provider with the previously created content.
$ oc -n foundation-gitlab create secret generic \
    "gitlab-oauth2" --type="Opaque" \
    --from-literal="provider"="${OMNI_AUTH_CFG}"
Note: The secret gitlab-oauth2 is configured in the Example configuration 'gitlab-values.yaml' (global.appConfig.omniauth.providers.secret)
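To double-check what was stored, you can decode the provider key of the new secret; it should print the YAML defined above:
$ oc -n foundation-gitlab get secret gitlab-oauth2 -o jsonpath='{.data.provider}' | base64 -d; echo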
Create the gitlab-tls-secret secret
The secret router-certs-{name} (where {name} is the name of the Ingress Controller) in the openshift-ingress namespace contains the default certificate generated by the operator. See User-provided certificates for default ingress.
# Export default router certificates from openshift-ingress namespace
$ oc get secret router-certs-default -n openshift-ingress -o yaml|grep -A2 '^data:'|tail -2 | awk -F: '{system("echo "$2" |base64 --decode > "$1)}'
# Create gitlab-tls-secret in namespace foundation-gitlab
$ oc -n foundation-gitlab create secret tls \
gitlab-tls-secret \
--cert=./tls.crt \
--key=./tls.key
# Remove temporarily created certificate files
$ rm ./tls.crt ./tls.key
Create Security Context Constraints
Grant the predefined SecurityContextConstraint (SCC) anyuid to the following service accounts so that their pods can run as any user:
$ oc -n foundation-gitlab adm policy add-scc-to-user anyuid -z default
$ oc -n foundation-gitlab adm policy add-scc-to-user anyuid -z gitlab-shared-secrets
Note: Granting the SCC anyuid to the service account default has negative effects on other Pods that require storage space, typically databases, resulting in insufficient permissions within the containers. Therefore, the separate project foundation-gitlab is used for the GitLab installation.
Start the Helm chart installation
The TLS certificates created in section Install Helm Tiller with TLS are needed again for the helm install command. Run the helm install command and pass in the configuration file:
# Replace the placeholder {cert_folder} with the certificate folder used in section Install Helm Tiller with TLS
$ export SECRET_FOLDER={cert_folder}
$ export HELM_TLS_CA_CERT=$SECRET_FOLDER/ca.pem
$ export HELM_TLS_CERT=$SECRET_FOLDER/cert.pem
$ export HELM_TLS_KEY=$SECRET_FOLDER/key.pem
$ helm upgrade --tls --install gitlab gitlab/gitlab \
--timeout 600 \
-f gitlab-values.yaml \
--namespace foundation-gitlab \
--tiller-namespace foundation-gitlab
To simulate the installation first, you can append the parameters --dry-run --debug to the command above.
The GitLab installation should now start and create a new release. The command returns immediately and does not wait until the app’s cluster objects are ready.
You can find the new GitLab release by running the ls command:
$ helm ls --tls --tiller-namespace foundation-gitlab
Validate the GitLab installation
Check the status of the release by running the status command:
$ helm status gitlab --tls --tiller-namespace foundation-gitlab
Wait until all cluster objects of this release are ready. You can then access GitLab at the URL specified by {GITLAB_URI} in the sample configuration file gitlab-values.yaml, or find the URL by inspecting the route:
$ oc -n foundation-gitlab get route
To log in, use the initial password of the root account. It is stored in a secret whose name ends with -initial-root-password:
$ oc -n foundation-gitlab get secret gitlab-gitlab-initial-root-password -o jsonpath="{.data.password}" | base64 -d; echo
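As a simple smoke test (assuming {GITLAB_URI} resolves from your workstation and that plain HTTP is used, as configured above with https: false), you can request the sign-in page and expect an HTTP 200 once all pods are ready:
# Request the GitLab sign-in page and print only the HTTP status code
$ curl -sS -o /dev/null -w "%{http_code}\n" http://{GITLAB_URI}/users/sign_in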
Summary
After following the steps outlined in this guideline, you have installed and configured the third-party components that are necessary for continuing with the installation of IBM Financial Services Workbench.