Failing to install GoodData in a Self-Hosted Kubernetes cluster

  • 21 March 2022
  • 8 replies



I work at a French company, and an R&D developer did a demo of GoodData using your Docker version.
The demo went well, and now the CEO and others would like to test your product with some of our own data.
Our SysAdmin and I have no knowledge of Docker or Kubernetes.
Before paying for training, management would like to decide whether or not to choose GoodData.

We had to install Kubernetes, and we seem to have succeeded by following guides. Here is the installation setup.


Kubernetes Installation

In our DNS server (named), we added 4 A records:

k8s-master                A
k8s-worker1               A
k8s-worker2               A
k8s-worker3               A

We have created 4 CentOS 7 Virtual Machines (fully updated to 03/21/2022):

1 socket, 2 cores, 4 GiB RAM for each node.
Firewall disabled, Kdump disabled, SELinux disabled, SWAP disabled.

The following actions were taken on each node, except where specified.

Docker Install:

# yum install -y yum-utils device-mapper-persistent-data lvm2 vim wget
# yum-config-manager --add-repo
# yum install -y docker
# systemctl enable docker && systemctl start docker

To check that Docker is working: docker run hello-world

Kubernetes Install:

# vim /etc/yum.repos.d/kubernetes.repo

# yum install -y kubelet-1.22.2 kubeadm-1.22.2 kubectl-1.22.2
  • Installed version of docker: 2:1.13.1-209.git7d71120.el7.centos
  • Installed version of K8S: 1.22.2-0
  • Installed version of kernel: 3.10.0-1160.59.1.el7

Host names setup:

# vim /etc/hosts
master    k8s-master
worker1   k8s-worker1
worker2   k8s-worker2
worker3   k8s-worker3

# vim /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1

# sysctl --system
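These two bridge settings are easy to sanity-check before running kubeadm. A minimal Python sketch, just for illustration (the `missing_settings` helper is hypothetical, not part of any tool), that parses a sysctl conf fragment:

```python
# Sketch only: check that a sysctl conf fragment contains the bridge
# settings kubeadm expects (keys taken from the k8s.conf above).
REQUIRED = {
    "net.bridge.bridge-nf-call-ip6tables": "1",
    "net.bridge.bridge-nf-call-iptables": "1",
}

def parse_sysctl(text: str) -> dict:
    """Parse 'key = value' lines, ignoring blanks and comments."""
    settings = {}
    for line in text.splitlines():
        line = line.strip()
        if line and not line.startswith("#"):
            key, _, value = line.partition("=")
            settings[key.strip()] = value.strip()
    return settings

def missing_settings(text: str) -> list:
    """Return required keys that are absent or not set to the expected value."""
    parsed = parse_sysctl(text)
    return [k for k, v in REQUIRED.items() if parsed.get(k) != v]
```

An empty list from `missing_settings` means both required settings are present.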

To create the cluster, we have done the following on the master node only:

# systemctl enable kubelet && systemctl start kubelet
# kubeadm init --pod-network-cidr=
# export KUBECONFIG=/etc/kubernetes/admin.conf
# kubectl apply -f

We copied the kubeadm join command printed by the master.

On each node:

# systemctl enable kubelet && systemctl start kubelet
# kubeadm join --token abcdef.ghijklmnopqrestu --discovery-token-ca-cert-hash sha256:ad7ad7ad7ad7ad7ad7ad7ad7ad7ad7ad7ad7ad7ad7ad7ad7ad7ad7ad7ad7ad7a

Coming back to the master to check if the workers were added to the cluster:

# kubectl get nodes
NAME          STATUS     ROLES                  AGE   VERSION
k8s-master    Ready      control-plane,master   78s   v1.22.2
k8s-worker1   Ready      <none>                 14s   v1.22.2
k8s-worker2   Ready      <none>                 15s   v1.22.2
k8s-worker3   Ready      <none>                 12s   v1.22.2

At this point, it seems that our Kubernetes cluster is working.
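If you later want to script this readiness check, the `kubectl get nodes` output can be parsed directly. A small sketch (the helper name is made up, and the column layout is assumed to match the output above):

```python
def not_ready_nodes(kubectl_output: str) -> list:
    """Return names of nodes whose STATUS column is not 'Ready'.

    Assumes the default `kubectl get nodes` layout:
    NAME  STATUS  ROLES  AGE  VERSION
    """
    bad = []
    for line in kubectl_output.strip().splitlines()[1:]:  # skip header row
        fields = line.split()
        if len(fields) >= 2 and fields[1] != "Ready":
            bad.append(fields[0])
    return bad
```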


GoodData Installation

New DNS entries:

gd                A
gdex              A

HELM 3 install:

# wget
# tar xvf helm-v3.7.1-linux-amd64.tar.gz
# mv linux-amd64/helm /usr/bin/helm

Helm Charts

# helm repo add apache
# helm repo add gooddata
# helm repo add ingress-nginx

K8S namespaces

# vim namespace_pulsar.yaml
apiVersion: v1
kind: Namespace
metadata:
  name: pulsar
  labels:
    name: pulsar

# vim namespace_gooddata-cn.yaml
apiVersion: v1
kind: Namespace
metadata:
  name: gooddata-cn
  labels:
    name: gooddata-cn

# vim namespace_nginx_ingress.yaml
apiVersion: v1
kind: Namespace
metadata:
  name: ingress-nginx
  labels:
    name: ingress-nginx

# vim storage_class.yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: local-storage
provisioner: kubernetes.io/no-provisioner
volumeBindingMode: WaitForFirstConsumer

# kubectl apply -f namespace_pulsar.yaml
# kubectl apply -f namespace_gooddata-cn.yaml
# kubectl apply -f namespace_nginx_ingress.yaml
# kubectl apply -f storage_class.yaml

TLS Certificates install

# kubectl create secret tls k8s-secret-tls --cert=/etc/pki/tls/certs/tls_cert.crt --key=/etc/pki/tls/private/tls_cert.key --namespace=gooddata-cn

Install nginx

# helm -n ingress-nginx install ingress-nginx ingress-nginx/ingress-nginx --set controller.replicaCount=2

# kubectl --namespace ingress-nginx get services -o wide -w ingress-nginx-controller
NAME                       TYPE           CLUSTER-IP   EXTERNAL-IP   PORT(S)                      AGE   ...
ingress-nginx-controller   LoadBalancer                <pending>     80:31541/TCP,443:30215/TCP   36m   ...

It seems that my cluster can’t get an external IP.


Organization creation
(in a Python console)

import crypt
print(crypt.crypt("MyPassword", crypt.mksalt(crypt.METHOD_SHA256)))
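Note that the crypt module is Unix-only and deprecated (removed in Python 3.13). To sanity-check the generated value, here is a portable sketch; the helper and regex are my own, based on the documented `$5$<salt>$<hash>` SHA-256 crypt format:

```python
import re

# SHA-256 crypt strings look like $5$<salt>$<43-char hash>, where salt and
# hash use the ./0-9A-Za-z alphabet (see crypt(5)). This checks the shape
# only; it does not verify a password against the hash.
SHA256_CRYPT = re.compile(
    r"^\$5\$(rounds=\d+\$)?[./0-9A-Za-z]{1,16}\$[./0-9A-Za-z]{43}$"
)

def looks_like_sha256_crypt(token: str) -> bool:
    return bool(SHA256_CRYPT.match(token))
```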

(On the usual terminal)

# vim organization.yaml
kind: Organization
metadata:
  name: ives_org
spec:
  id: ives
  name: "IVeS"
  adminGroup: adminGroup
  adminUser: admin
  adminUserToken: "$5$FKty.qsdlkfjnqsdlmkfnqsdmlfwl:skndfqslkdnfqs.ILS1"
  tls:
    secretName: k8s-secret-tls

# kubectl -n gooddata-cn create -f organization.yaml

error: unable to recognize "organization.yaml": no matches for kind "Organization" in version ""

Authentication setup

# echo -n 'admin:bootstrap:MyPassword' | base64


# curl -H "Authorization: Bearer YWRtaW46Ym9vdHN0kjsdFwOkB6ZXJ0eTEwMw==" -H "Content-type: application/json" -d '{"email": "my_email", "password": "MyPassword", "displayName": "John DOE"}' --request POST
curl: (7) Failed connect to; Connection refused
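For what it’s worth, the Bearer value is just the base64 of `admin:bootstrap:<password>`, exactly what the `echo | base64` above produces. A quick Python equivalent (the function name is my own):

```python
import base64

def bootstrap_bearer(password: str, user: str = "admin") -> str:
    """Base64-encode '<user>:bootstrap:<password>' for the
    Authorization: Bearer header, mirroring:
    echo -n 'admin:bootstrap:MyPassword' | base64
    """
    return base64.b64encode(f"{user}:bootstrap:{password}".encode()).decode()

print("Authorization: Bearer " + bootstrap_bearer("MyPassword"))
```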

Install Apache Pulsar Chart

# vim customized-values-pulsar.yaml
components:
  functions: false
  proxy: false
  pulsar_manager: false
  toolset: false
monitoring:
  alert_manager: false
  grafana: false
  node_exporter: false
  prometheus: false
images:
  zookeeper:
    repository: apachepulsar/pulsar
  bookie:
    repository: apachepulsar/pulsar
  autorecovery:
    repository: apachepulsar/pulsar
  broker:
    repository: apachepulsar/pulsar
zookeeper:
  volumes:
    data:
      name: data
      size: 2Gi
      storageClassName: local-storage
  configData:
    PULSAR_MEM: >
      -Xms128m -Xmx256m -XX:MaxDirectMemorySize=128m
bookkeeper:
  image:
    repository: apachepulsar/pulsar
  replicaCount: 2
  resources:
    requests:
      cpu: 0.2
      memory: 128Mi
  volumes:
    journal:
      name: journal
      size: 5Gi
      storageClassName: local-storage
    ledgers:
      name: ledgers
      size: 5Gi
      storageClassName: local-storage
broker:
  image:
    repository: apachepulsar/pulsar
  configData:
    PULSAR_MEM: >
      -Xms128m -Xmx256m -XX:MaxDirectMemorySize=128m
    subscriptionExpirationTimeMinutes: "5"
    webSocketServiceEnabled: "true"
  replicaCount: 2
  resources:
    requests:
      cpu: 0.2
      memory: 256Mi

# helm install --namespace pulsar -f customized-values-pulsar.yaml --set initialize=true pulsar apache/pulsar
W0321 14:23:14.079691   12335 warnings.go:70] policy/v1beta1 PodDisruptionBudget is deprecated in v1.21+, unavailable in v1.25+; use policy/v1 PodDisruptionBudget
NAME: pulsar
LAST DEPLOYED: Mon Mar 21 14:23:13 2022
STATUS: deployed

Install GD-CN

# vim customized-values-gooddata-cn.yaml
ingress:
  annotations:
    kubernetes.io/ingress.class: nginx

deployRedisHA: true
deployPostgresHA: true

dex:
  ingress:
    authHost: ''
    tls:
      authSecretName: k8s-secret-tls
    annotations:
      kubernetes.io/ingress.class: nginx

license:
  key: "key/our_key"

# helm install --namespace gooddata-cn --wait -f customized-values-gooddata-cn.yaml gooddata-cn gooddata/gooddata-cn
W0321 14:25:13.723673   12695 warnings.go:70] batch/v1beta1 CronJob is deprecated in v1.21+, unavailable in v1.25+; use batch/v1 CronJob

Error: INSTALLATION FAILED: failed pre-install: timed out waiting for the condition


Can someone help me point out our mistakes?


Best answer by Robert Moucha 25 March 2022, 14:00


Hello, thank you for the detailed steps explaining the installation process. It will make troubleshooting much easier.

issue 1) Ingress controller
The default setup for ingress-nginx assumes you have an external load balancer that can be managed by k8s. This works fine
in public cloud environments, but on bare metal or in private clouds it is not always possible. Refer to the ingress-nginx documentation for details on how to cope with this situation and how to properly
configure the ingress-nginx controller in your environment.

issue 2) unable to recognize "organization.yaml"
Organization is a custom resource whose definition (a so-called CRD) is installed as part of the gooddata-cn helm chart release.
So you must install the gooddata-cn chart first, then create your organization.

The same applies to your attempt to create a user via `/api/auth/users` - this API is provided by gooddata-cn.


issue 3) timed out waiting for the condition

This issue may have many causes; the most common is that the default helm timeout for waiting on all pods to come up is 5 minutes, and this value simply may not be enough for more complex helm charts. The remedy is to add --timeout 10m (or more) to the helm install command.

Another cause of this error is that your k8s cluster doesn’t have sufficient resources for scheduling and starting all pods. I recommend investigating the failed deployment status using kubectl to see what is going on. Use the kubectl get, kubectl describe and kubectl logs commands to find and explore pods that are in some weird state (Pending, Error, CrashLoopBackOff and so on). For pods in Pending state, use kubectl describe to see why the pod was not scheduled - there may not be enough CPU or memory available, or the pod may be waiting for a PersistentVolume to be provisioned.
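Following that advice, filtering `kubectl get pods` output down to the pods worth describing can be scripted. A small sketch (the helper name and the set of "healthy" states are my own choices, not a kubectl feature):

```python
# Pods in these states are usually fine; everything else deserves a
# `kubectl describe` / `kubectl logs`.
HEALTHY = {"Running", "Completed", "Succeeded"}

def problem_pods(kubectl_output: str) -> list:
    """Return (name, status) pairs for pods not in a healthy state.

    Assumes the default `kubectl get pods` layout:
    NAME  READY  STATUS  RESTARTS  AGE
    """
    problems = []
    for line in kubectl_output.strip().splitlines()[1:]:  # skip header row
        fields = line.split()
        if len(fields) >= 3 and fields[2] not in HEALTHY:
            problems.append((fields[0], fields[2]))
    return problems
```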

In summary, high-level steps should be done in this order:

  1. install k8s
  2. install infrastructure charts (ingress-nginx, pulsar, optionally cert-manager and external-dns)
  3. verify that all helm charts were deployed correctly and all pods are running
  4. ensure the ingress controller works (by deploying some helloworld-like app with ingress and checking the url is accessible)
  5. set up DNS records both for Dex and your organization (refer to external-dns docs that could bypass/simplify this step)
  6. install gooddata-cn
  7. create Organization using custom resource
  8. create first user using bootstrap token used in Organization resource
  9. login using UI on your organization's hostname


Kind regards,

Robert Moucha

Hello :)

My first issue is now solved by installing MetalLB.

I understood that my organization and the authentication have to be created once GDCN is installed; I’ll look at these points later.

I still have the issue when installing GDCN; I’m trying to figure out why.

Still looking on my side, but just for information:

[root@k8s-master ~]# helm install --namespace gooddata-cn --wait -f customized-values-gooddata-cn.yaml gooddata-cn gooddata/gooddata-cn --timeout 30m --debug
install.go:178: [debug] Original chart version: ""
install.go:199: [debug] CHART PATH: /root/.cache/helm/repository/gooddata-cn-1.7.0.tgz

client.go:128: [debug] creating 1 resource(s)
install.go:151: [debug] CRD is already present. Skipping.
client.go:128: [debug] creating 1 resource(s)
install.go:151: [debug] CRD is already present. Skipping.
W0329 10:19:37.631097 18516 warnings.go:70] batch/v1beta1 CronJob is deprecated in v1.21+, unavailable in v1.25+; use batch/v1 CronJob
client.go:299: [debug] Starting delete for "gooddata-cn-create-namespace" Job
client.go:128: [debug] creating 1 resource(s)
client.go:528: [debug] Watching for changes to Job gooddata-cn-create-namespace with timeout of 30m0s
client.go:556: [debug] Add/Modify event for gooddata-cn-create-namespace: ADDED
client.go:595: [debug] gooddata-cn-create-namespace: Jobs active: 0, jobs failed: 0, jobs succeeded: 0
client.go:556: [debug] Add/Modify event for gooddata-cn-create-namespace: MODIFIED
client.go:595: [debug] gooddata-cn-create-namespace: Jobs active: 1, jobs failed: 0, jobs succeeded: 0
Error: INSTALLATION FAILED: failed pre-install: timed out waiting for the condition
helm.go:88: [debug] failed pre-install: timed out waiting for the condition


To test my cluster:

Verify that deployments work:

[root@k8s-master ~]# kubectl run nginx --image=nginx
pod/nginx created

[root@k8s-master ~]# kubectl get pods -l run=nginx
NAME    READY   STATUS              RESTARTS   AGE
nginx   0/1     ContainerCreating   0          6s

[root@k8s-master ~]# kubectl get pods -l run=nginx
NAME    READY   STATUS    RESTARTS   AGE
nginx   1/1     Running   0          15s

Verify that remote access works via port forwarding.

[root@k8s-master ~]# kubectl port-forward nginx 8081:80
Forwarding from -> 80
Forwarding from [::1]:8081 -> 80
[... From a new terminal, `curl --head` ==> HTTP 200 ...]
Handling connection for 8081

Verify that you can access container logs with kubectl logs.

[root@k8s-master ~]# kubectl logs nginx
/ /docker-entrypoint.d/ is not empty, will attempt to perform configuration
/ Looking for shell scripts in /docker-entrypoint.d/
/ Launching /docker-entrypoint.d/
info: Getting the checksum of /etc/nginx/conf.d/default.conf
info: Enabled listen on IPv6 in /etc/nginx/conf.d/default.conf
/ Launching /docker-entrypoint.d/
/ Launching /docker-entrypoint.d/
/ Configuration complete; ready for start up
2022/03/29 09:16:05 [notice] 1#1: using the "epoll" event method
2022/03/29 09:16:05 [notice] 1#1: nginx/1.21.6
2022/03/29 09:16:05 [notice] 1#1: built by gcc 10.2.1 20210110 (Debian 10.2.1-6)
2022/03/29 09:16:05 [notice] 1#1: OS: Linux 3.10.0-1160.59.1.el7.x86_64
2022/03/29 09:16:05 [notice] 1#1: getrlimit(RLIMIT_NOFILE): 1048576:1048576
2022/03/29 09:16:05 [notice] 1#1: start worker processes
2022/03/29 09:16:05 [notice] 1#1: start worker process 30
2022/03/29 09:16:05 [notice] 1#1: start worker process 31
- - [29/Mar/2022:09:24:18 +0000] "HEAD / HTTP/1.1" 200 0 "-" "curl/7.29.0" "-"

Verify that you can execute commands inside a container with kubectl exec.

[root@k8s-master ~]# kubectl exec -ti nginx -- nginx -v
nginx version: nginx/1.21.6

Verify that services work.

[root@k8s-master ~]# kubectl expose deployment nginx --type LoadBalancer --port 80
service/nginx exposed

[root@k8s-master ~]# kubectl get service
NAME         TYPE           CLUSTER-IP   EXTERNAL-IP   PORT(S)        AGE
kubernetes   ClusterIP                   <none>        443/TCP        3d17h
nginx        LoadBalancer                              80:30684/TCP   5s

[root@k8s-master ~]# curl
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
html { color-scheme: light dark; }
body { width: 35em; margin: 0 auto;
font-family: Tahoma, Verdana, Arial, sans-serif; }
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and working. Further configuration is required.</p>

<p>For online documentation and support please refer to <a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at <a href="http://nginx.com/">nginx.com</a>.</p>

<p><em>Thank you for using nginx.</em></p>
</body>
</html>

From my computer, reaching the service shows me the nginx “welcome page”. Yay!



I think, and hope I’m right, that my Kubernetes cluster is now working well.




Hello, and that is good news!

It is good that you were able to get it working.

Moreover, thank you for posting all this here. Future users who might be facing similar issues could find it very helpful during their troubleshooting.

-- Jan


Hi, that’s great progress. But your way of validating nginx may not be sufficient. I recommend deploying some simple service (such as podinfo) and creating a real ingress pointing to this deployment:

# deploy test app (or use another way of deployment, see the GitHub page)

kubectl apply -k

# create ingress (update the rule to some real hostname in your DNS)

kubectl create ingress podinfo --rule="*=podinfo:http" --class=nginx

# check the ingress works through LB


# (it should return a short JSON document)


Hello Robert,

Thank you for your help. Following your advice shows that my cluster is working well:

[root@k8s-master ~]# curl
{
  "hostname": "podinfo-694f589bf6-w87k7",
  "version": "6.1.1",
  "revision": "",
  "color": "#34577c",
  "logo": "",
  "message": "greetings from podinfo v6.1.1",
  "goos": "linux",
  "goarch": "amd64",
  "runtime": "go1.17.8",
  "num_goroutine": "10",
  "num_cpu": "2"
}

[root@k8s-master ~]# curl
{
  "hostname": "podinfo-694f589bf6-8lhkt",
  "version": "6.1.1",
  "revision": "",
  "color": "#34577c",
  "logo": "",
  "message": "greetings from podinfo v6.1.1",
  "goos": "linux",
  "goarch": "amd64",
  "runtime": "go1.17.8",
  "num_goroutine": "11",
  "num_cpu": "2"
}


Hello all,

My question has been marked as RESOLVED with Robert Moucha’s answer, but it is not solved at all.

I’m still waiting for someone to help me.

Could you please reopen this topic?

Thank you!