Hello,
I'm working at a French company, and an R&D developer gave a demo of GoodData using your Docker version.
The demo was well received, and now the CEO and others would like to test your product, but with some of our data.
Our SysAdmin and I have no knowledge of Docker or Kubernetes.
Before paying for training, they would like to decide whether or not to choose GoodData.
We had to install Kubernetes, and by following guides we seem to have succeeded. Here is the installation setup.
Kubernetes Installation
In our DNS software (named), we added 4 entries:
k8s-master A 172.21.0.10
k8s-worker1 A 172.21.0.11
k8s-worker2 A 172.21.0.12
k8s-worker3 A 172.21.0.13
We created 4 CentOS 7 virtual machines (fully updated as of 03/21/2022):
1 socket, 2 cores, 4 GiB RAM for each node.
Firewall disabled, Kdump disabled, SELinux disabled, swap disabled.
The following actions were taken on each node, except where specified.
Docker Install:
# yum install -y yum-utils device-mapper-persistent-data lvm2 vim wget
# yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
# yum install -y docker
# systemctl enable docker && systemctl start docker
To check that Docker is working: docker run hello-world
Kubernetes Install:
# vim /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
# yum install -y kubelet-1.22.2 kubeadm-1.22.2 kubectl-1.22.2
- Installed version of docker: 2:1.13.1-209.git7d71120.el7.centos
- Installed version of K8S: 1.22.2-0
- Installed version of kernel: 3.10.0-1160.59.1.el7
Host name configuration:
# vim /etc/hosts
172.21.0.10 master k8s-master
172.21.0.11 worker1 k8s-worker1
172.21.0.12 worker2 k8s-worker2
172.21.0.13 worker3 k8s-worker3
# vim /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
# sysctl --system
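As a side note, the two bridge sysctls above only take effect once the br_netfilter kernel module is loaded; the guides usually include a step like this (the module name is standard, but the exact file name is our assumption):

```shell
# Load the bridge netfilter module now and at every boot
modprobe br_netfilter
echo br_netfilter > /etc/modules-load.d/br_netfilter.conf
# Re-run sysctl so the net.bridge.* keys can be applied
sysctl --system
```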
To create the cluster, we have done the following on the master node only:
# systemctl enable kubelet && systemctl start kubelet
# kubeadm init --pod-network-cidr=10.244.0.0/16
# export KUBECONFIG=/etc/kubernetes/admin.conf
# kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
We copied the kubeadm join command printed by the master.
On each node:
# systemctl enable kubelet && systemctl start kubelet
# kubeadm join 172.21.0.10:6443 --token abcdef.ghijklmnopqrestu --discovery-token-ca-cert-hash sha256:ad7ad7ad7ad7ad7ad7ad7ad7ad7ad7ad7ad7ad7ad7ad7ad7ad7ad7ad7ad7ad7a
Coming back to the master to check if the workers were added to the cluster:
# kubectl get nodes
NAME STATUS ROLES AGE VERSION
k8s-master Ready control-plane,master 78s v1.22.2
k8s-worker1 Ready <none> 14s v1.22.2
k8s-worker2 Ready <none> 15s v1.22.2
k8s-worker3 Ready <none> 12s v1.22.2
At this point, it seems that our Kubernetes cluster is working.
GoodData Installation
New DNS entries:
gd A 172.21.0.10
gdex A 172.21.0.10
Helm 3 install:
# wget https://get.helm.sh/helm-v3.7.1-linux-amd64.tar.gz
# tar xvf helm-v3.7.1-linux-amd64.tar.gz
# mv linux-amd64/helm /usr/bin/helm
Helm Charts
# helm repo add apache https://pulsar.apache.org/charts
# helm repo add gooddata https://charts.gooddata.com/
# helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
K8S namespaces
# vim namespace_pulsar.yaml
apiVersion: v1
kind: Namespace
metadata:
  name: pulsar
  labels:
    kubernetes.io/metadata.name: pulsar
# vim namespace_gooddata-cn.yaml
apiVersion: v1
kind: Namespace
metadata:
  name: gooddata-cn
  labels:
    kubernetes.io/metadata.name: gooddata-cn
# vim namespace_nginx_ingress.yaml
apiVersion: v1
kind: Namespace
metadata:
  name: ingress-nginx
  labels:
    kubernetes.io/metadata.name: ingress-nginx
# vim storage_class.yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: local-storage
provisioner: kubernetes.io/no-provisioner
volumeBindingMode: WaitForFirstConsumer
# kubectl apply -f namespace_pulsar.yaml
# kubectl apply -f namespace_gooddata-cn.yaml
# kubectl apply -f namespace_nginx_ingress.yaml
# kubectl apply -f storage_class.yaml
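One thing we were not sure about: local-storage uses the no-provisioner provisioner, so (if we understand the docs correctly) Kubernetes will not create any volumes by itself, and every PersistentVolumeClaim needs a manually created PersistentVolume pinned to a node. Something like this sketch, where the PV name, path, and node are our assumptions:

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: local-pv-worker1-1
spec:
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteOnce
  storageClassName: local-storage
  local:
    # Hypothetical path; the directory must already exist on the node
    path: /mnt/disks/vol1
  nodeAffinity:
    required:
      nodeSelectorTerms:
        - matchExpressions:
            - key: kubernetes.io/hostname
              operator: In
              values:
                - k8s-worker1
```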
TLS Certificates install
# kubectl create secret tls k8s-secret-tls --cert=/etc/pki/tls/certs/tls_cert.crt --key=/etc/pki/tls/private/tls_cert.key --namespace=gooddata-cn
Install nginx
# helm -n ingress-nginx install ingress-nginx ingress-nginx/ingress-nginx --set controller.replicaCount=2
# kubectl --namespace ingress-nginx get services -o wide -w ingress-nginx-controller
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE ...
ingress-nginx-controller LoadBalancer 10.107.60.187 <pending> 80:31541/TCP,443:30215/TCP 36m ...
It seems that my cluster can't get an external IP.
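From what we read, a Service of type LoadBalancer only gets an external IP when something implements it (a cloud provider, or MetalLB on bare metal), so on our plain VMs it stays <pending>. As a sanity check, we should still be able to reach the controller through the NodePorts shown above (31541/30215 come from our output; the host name is just one of our nodes):

```shell
# Hit the ingress controller directly on its HTTPS node port
curl -k https://k8s-master:30215/
# or on its plain HTTP node port
curl http://k8s-master:31541/
```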
Organization creation
(in a Python console)
import crypt
print(crypt.crypt("MyPassword", crypt.mksalt(crypt.METHOD_SHA256)))
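For completeness, a small sanity check of the hash we generate (the crypt module is Unix-only and deprecated since Python 3.11, which should be fine on these CentOS 7 machines):

```python
import crypt

# Generate a salted SHA-256 crypt hash (the "$5$..." format used below)
hashed = crypt.crypt("MyPassword", crypt.mksalt(crypt.METHOD_SHA256))

# SHA-256 crypt hashes always start with the $5$ prefix
assert hashed.startswith("$5$")

# Re-hashing the password with the full hash as the salt must reproduce it
assert crypt.crypt("MyPassword", hashed) == hashed
print("hash OK:", hashed)
```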
(On the usual terminal)
# vim organization.yaml
apiVersion: controllers.gooddata.com/v1
kind: Organization
metadata:
  name: ives-org
spec:
  id: ives
  name: "IVeS"
  hostname: gd.internal.ives.fr
  adminGroup: adminGroup
  adminUser: admin
  adminUserToken: "$5$FKty.qsdlkfjnqsdlmkfnqsdmlfwl:skndfqslkdnfqs.ILS1"
  tls:
    secretName: k8s-secret-tls
# kubectl -n gooddata-cn create -f organization.yaml
error: unable to recognize "organization.yaml": no matches for kind "Organization" in version "controllers.gooddata.com/v1"
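Our guess is that the Organization CRD simply does not exist in the cluster yet at this point (we assume it is shipped by the gooddata-cn chart, which we only install further below). A check like this should show whether the CRD is registered (the API group is taken from the apiVersion above):

```shell
# List CRDs and API resources for the GoodData API group
kubectl get crd | grep gooddata
kubectl api-resources --api-group=controllers.gooddata.com
```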
Authentication setup
# echo -n 'admin:bootstrap:MyPassword' | base64
YWRtaW46Ym9vdHN0kjsdFwOkB6ZXJ0eTEwMw==
Dex:
# curl -H "Authorization: Bearer YWRtaW46Ym9vdHN0kjsdFwOkB6ZXJ0eTEwMw==" -H "Content-type: application/json" -d '{"email": "my_email", "password": "MyPassword", "displayName": "John DOE"}' --request POST https://gd.internal.ives.fr/api/auth/users
curl: (7) Failed connect to gd.internal.ives.fr:443; Connection refused
Install Apache Pulsar Chart
# vim customized-values-pulsar.yaml
components:
  functions: false
  proxy: false
  pulsar_manager: false
  toolset: false
monitoring:
  alert_manager: false
  grafana: false
  node_exporter: false
  prometheus: false
images:
  autorecovery:
    repository: apachepulsar/pulsar
  bookie:
    repository: apachepulsar/pulsar
  broker:
    repository: apachepulsar/pulsar
  zookeeper:
    repository: apachepulsar/pulsar
zookeeper:
  volumes:
    data:
      name: data
      size: 2Gi
      storageClassName: local-storage
bookkeeper:
  configData:
    PULSAR_MEM: >
      -Xms128m -Xmx256m -XX:MaxDirectMemorySize=128m
  metadata:
    image:
      repository: apachepulsar/pulsar
  replicaCount: 2
  resources:
    requests:
      cpu: 0.2
      memory: 128Mi
  volumes:
    journal:
      name: journal
      size: 5Gi
      storageClassName: local-storage
    ledgers:
      name: ledgers
      size: 5Gi
      storageClassName: local-storage
pulsar_metadata:
  image:
    repository: apachepulsar/pulsar
broker:
  configData:
    PULSAR_MEM: >
      -Xms128m -Xmx256m -XX:MaxDirectMemorySize=128m
    subscriptionExpirationTimeMinutes: "5"
    webSocketServiceEnabled: "true"
  replicaCount: 2
  resources:
    requests:
      cpu: 0.2
      memory: 256Mi
# helm install --namespace pulsar -f customized-values-pulsar.yaml --set initialize=true pulsar apache/pulsar
W0321 14:23:14.079691 12335 warnings.go:70] policy/v1beta1 PodDisruptionBudget is deprecated in v1.21+, unavailable in v1.25+; use policy/v1 PodDisruptionBudget
NAME: pulsar
LAST DEPLOYED: Mon Mar 21 14:23:13 2022
NAMESPACE: pulsar
STATUS: deployed
REVISION: 1
TEST SUITE: None
Install GD-CN
# vim customized-values-gooddata-cn.yaml
ingress:
  annotations:
    kubernetes.io/ingress.class: nginx
deployRedisHA: true
deployPostgresHA: true
dex:
  ingress:
    authHost: 'gdex.internal.ives.fr'
    tls:
      authSecretName: k8s-secret-tls
    annotations:
      kubernetes.io/ingress.class: nginx
license:
  key: "key/our_key"
# helm install --namespace gooddata-cn --wait -f customized-values-gooddata-cn.yaml gooddata-cn gooddata/gooddata-cn
W0321 14:25:13.723673 12695 warnings.go:70] batch/v1beta1 CronJob is deprecated in v1.21+, unavailable in v1.25+; use batch/v1 CronJob
Error: INSTALLATION FAILED: failed pre-install: timed out waiting for the condition
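To see what the pre-install hook is actually waiting for, we plan to inspect the namespace; our suspicion is PersistentVolumeClaims stuck Pending, since our local-storage class has no provisioner and we never created PersistentVolumes by hand:

```shell
# What is not coming up, and why?
kubectl -n gooddata-cn get pods,pvc
kubectl -n gooddata-cn describe pvc
kubectl -n gooddata-cn get events --sort-by=.metadata.creationTimestamp
```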
Can someone help us point out our mistakes?
Best answer by Robert Moucha