Solved

Failing to install GoodData in a self-hosted Kubernetes cluster

  • 21 March 2022
  • 8 replies
  • 442 views

Hello,

I'm working at a French company, and an R&D developer gave a demo of GoodData using your Docker version.
The demo went well, and now the CEO and others would like to test your product, but with some of our own data.
Our SysAdmin and I have no knowledge of Docker or Kubernetes.
Before paying for training, they would like to decide whether or not to go with GoodData.

We had to install Kubernetes, and it seems we succeeded by following guides. Here is the installation setup.

 

Kubernetes Installation

In our DNS server (named), we added 4 entries:

k8s-master                A       172.21.0.10
k8s-worker1               A       172.21.0.11
k8s-worker2               A       172.21.0.12
k8s-worker3               A       172.21.0.13

We have created 4 CentOS 7 Virtual Machines (fully updated to 03/21/2022):

1 socket, 2 cores, 4 GiB RAM for each node.
Firewall disabled, Kdump disabled, SELinux disabled, SWAP disabled.

The following actions were taken on each node, except where specified otherwise.

Docker Install:

# yum install -y yum-utils device-mapper-persistent-data lvm2 vim wget
# yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
# yum install -y docker
# systemctl enable docker && systemctl start docker

To check if docker is working: docker run hello-world

Kubernetes Install:

# vim /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg

# yum install -y kubelet-1.22.2 kubeadm-1.22.2 kubectl-1.22.2
  • Installed version of docker: 2:1.13.1-209.git7d71120.el7.centos
  • Installed version of K8S: 1.22.2-0
  • Installed version of kernel: 3.10.0-1160.59.1.el7

Host names configuration:

# vim /etc/hosts
172.21.0.10 master k8s-master
172.21.0.11 worker1 k8s-worker1
172.21.0.12 worker2 k8s-worker2
172.21.0.13 worker3 k8s-worker3

# vim /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1

# sysctl --system
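
(Note: these bridge sysctls only take effect when the br_netfilter kernel module is loaded; if it is not loaded by default on CentOS 7, something along these lines is also needed - the file name is just an example:)

# modprobe br_netfilter
# vim /etc/modules-load.d/k8s.conf
br_netfilter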

To create the cluster, we have done the following on the master node only:

# systemctl enable kubelet && systemctl start kubelet
# kubeadm init --pod-network-cidr=10.244.0.0/16
# export KUBECONFIG=/etc/kubernetes/admin.conf
# kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
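
Before joining the workers, it may be worth checking that the flannel pods start and that the master reports Ready (the namespace flannel lands in depends on the manifest version, so a cluster-wide listing is used here):

# kubectl get pods -A | grep -i flannel
# kubectl get nodes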

We copied the kubeadm join command printed by the master.

On each node:

# systemctl enable kubelet && systemctl start kubelet
# kubeadm join 172.21.0.10:6443 --token abcdef.ghijklmnopqrestu --discovery-token-ca-cert-hash sha256:ad7ad7ad7ad7ad7ad7ad7ad7ad7ad7ad7ad7ad7ad7ad7ad7ad7ad7ad7ad7ad7a

Coming back to the master to check if the workers were added to the cluster:

# kubectl get nodes
NAME          STATUS     ROLES                  AGE   VERSION
k8s-master    Ready      control-plane,master   78s   v1.22.2
k8s-worker1   Ready      <none>                 14s   v1.22.2
k8s-worker2   Ready      <none>                 15s   v1.22.2
k8s-worker3   Ready      <none>                 12s   v1.22.2

At this point, it seems that our Kubernetes cluster is working.

 

GoodData Installation

New DNS entries:

gd                A       172.21.0.10
gdex              A       172.21.0.10

Helm 3 install:

# wget https://get.helm.sh/helm-v3.7.1-linux-amd64.tar.gz
# tar xvf helm-v3.7.1-linux-amd64.tar.gz
# mv linux-amd64/helm /usr/bin/helm
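
A quick check that the binary is in place (it should report v3.7.1):

# helm version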

Helm Charts

# helm repo add apache https://pulsar.apache.org/charts
# helm repo add gooddata https://charts.gooddata.com/
# helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx

K8S namespaces

# vim namespace_pulsar.yaml
apiVersion: v1
kind: Namespace
metadata:
  name: pulsar
  labels:
    kubernetes.io/metadata.name: pulsar

# vim namespace_gooddata-cn.yaml
apiVersion: v1
kind: Namespace
metadata:
  name: gooddata-cn
  labels:
    kubernetes.io/metadata.name: gooddata-cn

# vim namespace_nginx_ingress.yaml
apiVersion: v1
kind: Namespace
metadata:
  name: ingress-nginx
  labels:
    kubernetes.io/metadata.name: ingress-nginx

# vim storage_class.yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: local-storage
provisioner: kubernetes.io/no-provisioner
volumeBindingMode: WaitForFirstConsumer

# kubectl apply -f namespace_pulsar.yaml
# kubectl apply -f namespace_gooddata-cn.yaml
# kubectl apply -f namespace_nginx_ingress.yaml
# kubectl apply -f storage_class.yaml
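
(Note: the kubernetes.io/no-provisioner provisioner does not create volumes automatically, so every PersistentVolumeClaim that uses the local-storage class needs a matching, manually created PersistentVolume on some node. A minimal sketch of one such volume follows; the file name, volume name, path, size and node are only an illustration and have to be adapted for each claim, and the directory must already exist on that node:)

# vim pv_example.yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: local-pv-worker1-journal
spec:
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: local-storage
  local:
    path: /mnt/disks/pulsar-journal
  nodeAffinity:
    required:
      nodeSelectorTerms:
        - matchExpressions:
            - key: kubernetes.io/hostname
              operator: In
              values:
                - k8s-worker1

# kubectl apply -f pv_example.yaml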

TLS Certificates install

# kubectl create secret tls k8s-secret-tls --cert=/etc/pki/tls/certs/tls_cert.crt --key=/etc/pki/tls/private/tls_cert.key --namespace=gooddata-cn
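
A quick check that the secret exists and has the kubernetes.io/tls type:

# kubectl -n gooddata-cn get secret k8s-secret-tls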

Install ingress-nginx

# helm -n ingress-nginx install ingress-nginx ingress-nginx/ingress-nginx --set controller.replicaCount=2

# kubectl --namespace ingress-nginx get services -o wide -w ingress-nginx-controller
NAME                     TYPE         CLUSTER-IP    EXTERNAL-IP PORT(S)                    AGE ...
ingress-nginx-controller LoadBalancer 10.107.60.187 <pending>   80:31541/TCP,443:30215/TCP 36m ...

It seems that my cluster can't get an external IP.

 

Organization creation
(in a Python console)

import crypt
print(crypt.crypt("MyPassword", crypt.mksalt(crypt.METHOD_SHA256)))

(On the usual terminal)

# vim organization.yaml
apiVersion: controllers.gooddata.com/v1
kind: Organization
metadata:
  name: ives-org
spec:
  id: ives
  name: "IVeS"
  hostname: gd.internal.ives.fr
  adminGroup: adminGroup
  adminUser: admin
  adminUserToken: "$5$FKty.qsdlkfjnqsdlmkfnqsdmlfwl:skndfqslkdnfqs.ILS1"
  tls:
    secretName: k8s-secret-tls

# kubectl -n gooddata-cn create -f organization.yaml

error: unable to recognize "organization.yaml": no matches for kind "Organization" in version "controllers.gooddata.com/v1"

Authentication setup

# echo -n 'admin:bootstrap:MyPassword' | base64
YWRtaW46Ym9vdHN0kjsdFwOkB6ZXJ0eTEwMw==

Dex:

# curl -H "Authorization: Bearer YWRtaW46Ym9vdHN0kjsdFwOkB6ZXJ0eTEwMw==" -H "Content-type: application/json" -d '{"email": "my_email", "password": "MyPassword", "displayName": "John DOE"}' --request POST https://gd.internal.ives.fr/api/auth/users
curl: (7) Failed connect to gd.internal.ives.fr:443; Connection refused

Install Apache Pulsar Chart

# vim customized-values-pulsar.yaml
components:
  functions: false
  proxy: false
  pulsar_manager: false
  toolset: false
monitoring:
  alert_manager: false
  grafana: false
  node_exporter: false
  prometheus: false
images:
  autorecovery:
    repository: apachepulsar/pulsar
  bookie:
    repository: apachepulsar/pulsar
  broker:
    repository: apachepulsar/pulsar
  zookeeper:
    repository: apachepulsar/pulsar
zookeeper:
  volumes:
    data:
      name: data
      size: 2Gi
      storageClassName: local-storage
bookkeeper:
  configData:
    PULSAR_MEM: >
            -Xms128m -Xmx256m -XX:MaxDirectMemorySize=128m
  metadata:
    image:
      repository: apachepulsar/pulsar
  replicaCount: 2
  resources:
    requests:
      cpu: 0.2
      memory: 128Mi
  volumes:
    journal:
      name: journal
      size: 5Gi
      storageClassName: local-storage
    ledgers:
      name: ledgers
      size: 5Gi
      storageClassName: local-storage
pulsar_metadata:
  image:
    repository: apachepulsar/pulsar
broker:
  configData:
    PULSAR_MEM: >
            -Xms128m -Xmx256m -XX:MaxDirectMemorySize=128m
    subscriptionExpirationTimeMinutes: "5"
    webSocketServiceEnabled: "true"
  replicaCount: 2
  resources:
    requests:
      cpu: 0.2
      memory: 256Mi

# helm install --namespace pulsar -f customized-values-pulsar.yaml --set initialize=true pulsar apache/pulsar
W0321 14:23:14.079691   12335 warnings.go:70] policy/v1beta1 PodDisruptionBudget is deprecated in v1.21+, unavailable in v1.25+; use policy/v1 PodDisruptionBudget
NAME: pulsar
LAST DEPLOYED: Mon Mar 21 14:23:13 2022
NAMESPACE: pulsar
STATUS: deployed
REVISION: 1
TEST SUITE: None
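
The pods can take several minutes to start; for example, this can be used to watch them come up and to verify that their volume claims get bound (important with the manual local-storage class above):

# kubectl -n pulsar get pods -w
# kubectl -n pulsar get pvc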

Install GD-CN

# vim customized-values-gooddata-cn.yaml
ingress:
  annotations:
    kubernetes.io/ingress.class: nginx

deployRedisHA: true
deployPostgresHA: true

dex:
  ingress:
    authHost: 'gdex.internal.ives.fr'
    tls:
      authSecretName: k8s-secret-tls
    annotations:
      kubernetes.io/ingress.class: nginx

license:
  key: "key/our_key"

# helm install --namespace gooddata-cn --wait -f customized-values-gooddata-cn.yaml gooddata-cn gooddata/gooddata-cn
W0321 14:25:13.723673   12695 warnings.go:70] batch/v1beta1 CronJob is deprecated in v1.21+, unavailable in v1.25+; use batch/v1 CronJob

Error: INSTALLATION FAILED: failed pre-install: timed out waiting for the condition

 

Can someone help me point out our mistakes?

Best answer by Robert Moucha 25 March 2022, 14:00

Hello, thank you for the detailed steps explaining the installation process. It will make troubleshooting much easier.

issue 1) Ingress controller
The default setup for ingress-nginx assumes you have an external load balancer that can be managed by k8s. This works fine
in public cloud environments, but on bare metal or in private clouds it is not always possible. Refer to
https://kubernetes.github.io/ingress-nginx/deploy/baremetal/ for details on how to cope with this situation and how to properly
configure the ingress-nginx controller in your environment.
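
For example, on bare metal many people use MetalLB in Layer 2 mode to hand out external IPs to LoadBalancer services. A rough sketch of its configuration for MetalLB 0.12 and older is below (newer releases use IPAddressPool custom resources instead of this ConfigMap); the address range is only an illustration and must be a free range in your own network:

apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    address-pools:
    - name: default
      protocol: layer2
      addresses:
      - 172.21.0.100-172.21.0.120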

issue 2) unable to recognize "organization.yaml"
Organization is a custom resource whose definition (a so-called CRD) is installed as part of the gooddata-cn helm chart release.
So you must install the gooddata-cn chart first, then create your organization.

The same applies to your attempt to create a user using `/api/auth/users` - this API is provided by gooddata-cn.

 

issue 3) timed out waiting for the condition

This issue may have many causes; the most common is that the default helm timeout for waiting for all pods to come up is 5 minutes, and this value simply may not be enough for more complex helm charts. The remedy is to add --timeout 10m (or more) to the helm install command.

Another cause of this error is that your k8s cluster doesn't have sufficient resources for scheduling and starting all pods. I recommend investigating the failed deployment status using kubectl to see what is going on. Use the kubectl get, kubectl describe and kubectl logs commands to find and explore pods that are in some weird state (Pending, Error, CrashLoopBackOff and so on). For pods in the Pending state, use kubectl describe to see why the pod was not scheduled - there may not be enough CPU or memory available, or it may be waiting for a PersistentVolume to be provisioned.
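
For example (assuming the gooddata-cn namespace; the pod name is a placeholder):

kubectl -n gooddata-cn get pods
kubectl -n gooddata-cn describe pod <pod-name>
kubectl -n gooddata-cn logs <pod-name>
kubectl -n gooddata-cn get events --sort-by=.lastTimestamp
kubectl -n gooddata-cn get pvc   # check that all PersistentVolumeClaims are Bound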


In summary, high-level steps should be done in this order:

  1. install k8s
  2. install infrastructure charts (ingress-nginx, pulsar, optionally cert-manager and external-dns)
  3. verify that all helm charts were deployed correctly and all pods are running
  4. ensure the ingress controller works (by deploying some helloworld-like app with an ingress and checking the URL is accessible)
  5. set up DNS records both for Dex and your organization (refer to the external-dns docs, which could bypass/simplify this step)
  6. install gooddata-cn
  7. create Organization using custom resource
  8. create first user using bootstrap token used in Organization resource
  9. login using UI on your organization's hostname

 

Kind regards,

Robert Moucha

Hello :)

My first issue is now solved by installing MetalLB.

I understood that my organization and the authentication have to be set up once GoodData.CN is installed; I'll look into these points later.

I still have the issue when installing GoodData.CN, and I'm trying to figure out why.

I'm still looking on my side, but just for information:

[root@k8s-master ~]# helm install --namespace gooddata-cn --wait -f customized-values-gooddata-cn.yaml gooddata-cn gooddata/gooddata-cn --timeout 30m --debug
install.go:178: [debug] Original chart version: ""
install.go:199: [debug] CHART PATH: /root/.cache/helm/repository/gooddata-cn-1.7.0.tgz

client.go:128: [debug] creating 1 resource(s)
install.go:151: [debug] CRD kopfpeerings.kopf.dev is already present. Skipping.
client.go:128: [debug] creating 1 resource(s)
install.go:151: [debug] CRD organizations.controllers.gooddata.com is already present. Skipping.
W0329 10:19:37.631097 18516 warnings.go:70] batch/v1beta1 CronJob is deprecated in v1.21+, unavailable in v1.25+; use batch/v1 CronJob
client.go:299: [debug] Starting delete for "gooddata-cn-create-namespace" Job
client.go:128: [debug] creating 1 resource(s)
client.go:528: [debug] Watching for changes to Job gooddata-cn-create-namespace with timeout of 30m0s
client.go:556: [debug] Add/Modify event for gooddata-cn-create-namespace: ADDED
client.go:595: [debug] gooddata-cn-create-namespace: Jobs active: 0, jobs failed: 0, jobs succeeded: 0
client.go:556: [debug] Add/Modify event for gooddata-cn-create-namespace: MODIFIED
client.go:595: [debug] gooddata-cn-create-namespace: Jobs active: 1, jobs failed: 0, jobs succeeded: 0
Error: INSTALLATION FAILED: failed pre-install: timed out waiting for the condition
helm.go:88: [debug] failed pre-install: timed out waiting for the condition
INSTALLATION FAILED
main.newInstallCmd.func2
helm.sh/helm/v3/cmd/helm/install.go:127
github.com/spf13/cobra.(*Command).execute
github.com/spf13/cobra@v1.2.1/command.go:856
github.com/spf13/cobra.(*Command).ExecuteC
github.com/spf13/cobra@v1.2.1/command.go:974
github.com/spf13/cobra.(*Command).Execute
github.com/spf13/cobra@v1.2.1/command.go:902
main.main
helm.sh/helm/v3/cmd/helm/helm.go:87
runtime.main
runtime/proc.go:225
runtime.goexit
runtime/asm_amd64.s:1371

 

To test my cluster:

Verify that deployments work:

[root@k8s-master ~]# kubectl run nginx --image=nginx
pod/nginx created

[root@k8s-master ~]# kubectl get pods -l run=nginx
NAME    READY   STATUS              RESTARTS   AGE
nginx   0/1     ContainerCreating   0          6s

[root@k8s-master ~]# kubectl get pods -l run=nginx
NAME    READY   STATUS    RESTARTS   AGE
nginx   1/1     Running   0          15s

Verify that remote access works via port forwarding.

[root@k8s-master ~]# kubectl port-forward nginx 8081:80
Forwarding from 127.0.0.1:8081 -> 80
Forwarding from [::1]:8081 -> 80
[... From a new terminal, `curl --head http://127.0.0.1:8081` ==> HTTP 200 ...]
Handling connection for 8081

Verify that you can access container logs with kubectl logs.

[root@k8s-master ~]# kubectl logs nginx
/docker-entrypoint.sh: /docker-entrypoint.d/ is not empty, will attempt to perform configuration
/docker-entrypoint.sh: Looking for shell scripts in /docker-entrypoint.d/
/docker-entrypoint.sh: Launching /docker-entrypoint.d/10-listen-on-ipv6-by-default.sh
10-listen-on-ipv6-by-default.sh: info: Getting the checksum of /etc/nginx/conf.d/default.conf
10-listen-on-ipv6-by-default.sh: info: Enabled listen on IPv6 in /etc/nginx/conf.d/default.conf
/docker-entrypoint.sh: Launching /docker-entrypoint.d/20-envsubst-on-templates.sh
/docker-entrypoint.sh: Launching /docker-entrypoint.d/30-tune-worker-processes.sh
/docker-entrypoint.sh: Configuration complete; ready for start up
2022/03/29 09:16:05 [notice] 1#1: using the "epoll" event method
2022/03/29 09:16:05 [notice] 1#1: nginx/1.21.6
2022/03/29 09:16:05 [notice] 1#1: built by gcc 10.2.1 20210110 (Debian 10.2.1-6)
2022/03/29 09:16:05 [notice] 1#1: OS: Linux 3.10.0-1160.59.1.el7.x86_64
2022/03/29 09:16:05 [notice] 1#1: getrlimit(RLIMIT_NOFILE): 1048576:1048576
2022/03/29 09:16:05 [notice] 1#1: start worker processes
2022/03/29 09:16:05 [notice] 1#1: start worker process 30
2022/03/29 09:16:05 [notice] 1#1: start worker process 31
127.0.0.1 - - [29/Mar/2022:09:24:18 +0000] "HEAD / HTTP/1.1" 200 0 "-" "curl/7.29.0" "-"

Verify that you can execute commands inside a container with kubectl exec.

[root@k8s-master ~]# kubectl exec -ti nginx -- nginx -v
nginx version: nginx/1.21.6

Verify that services work.

[root@k8s-master ~]# kubectl expose deployment nginx --type LoadBalancer --port 80
service/nginx exposed

[root@k8s-master ~]# kubectl get service
NAME         TYPE           CLUSTER-IP      EXTERNAL-IP   PORT(S)        AGE
kubernetes   ClusterIP      10.96.0.1       <none>        443/TCP        3d17h
nginx        LoadBalancer   10.106.49.244   172.21.0.12   80:30684/TCP   5s

[root@k8s-master ~]# curl 172.21.0.12:30684
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
html { color-scheme: light dark; }
body { width: 35em; margin: 0 auto;
font-family: Tahoma, Verdana, Arial, sans-serif; }
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and working. Further configuration is required.</p>

<p>For online documentation and support please refer to <a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at <a href="http://nginx.com/">nginx.com</a>.</p>

<p><em>Thank you for using nginx.</em></p>
</body>
</html>

From my computer, reaching http://172.21.0.12/ shows me the NGINX "welcome page". Yay!

 

Conclusion

I think, and hope I’m right, that my Kubernetes cluster is now working well.

 

Source: https://acloudguru.com/hands-on-labs/smoke-testing-a-kubernetes-cluster

Hello, that is good news!

It is great that you were able to get it working.

Moreover, thank you for posting all this here. Future users who might be facing similar issues could find it very helpful during their troubleshooting.

-- Jan

Hi, that's great progress. However, your way of validating nginx may not be sufficient. I recommend deploying some simple service (like https://github.com/stefanprodan/podinfo) and creating a real ingress pointing to this deployment:

# deploy test app (or use another way of deployment, see the GitHub page)

kubectl apply -k github.com/stefanprodan/podinfo//kustomize

# create ingress (update podinfo.example.com to some real hostname on 172.21.0.12)

kubectl create ingress podinfo --rule="podinfo.example.com/*=podinfo:http" --class=nginx

# check the ingress works through LB

curl http://podinfo.example.com

# (it should return a short JSON document)

 

Hello Robert,

Thank you for your help; following your advice shows that my cluster is working well:

[root@k8s-master ~]# curl podinfo.internal.ives.fr
{
"hostname": "podinfo-694f589bf6-w87k7",
"version": "6.1.1",
"revision": "",
"color": "#34577c",
"logo": "https://raw.githubusercontent.com/stefanprodan/podinfo/gh-pages/cuddle_clap.gif",
"message": "greetings from podinfo v6.1.1",
"goos": "linux",
"goarch": "amd64",
"runtime": "go1.17.8",
"num_goroutine": "10",
"num_cpu": "2"
}

[root@k8s-master ~]# curl podinfo.internal.ives.fr
{
"hostname": "podinfo-694f589bf6-8lhkt",
"version": "6.1.1",
"revision": "",
"color": "#34577c",
"logo": "https://raw.githubusercontent.com/stefanprodan/podinfo/gh-pages/cuddle_clap.gif",
"message": "greetings from podinfo v6.1.1",
"goos": "linux",
"goarch": "amd64",
"runtime": "go1.17.8",
"num_goroutine": "11",
"num_cpu": "2"
}

 

Hello all,

My question has been marked as RESOLVED with Robert Moucha's answer, but it is not solved at all.

I’m still waiting for someone to help me.

Could you please reopen this topic?

Thank you!
