404 error on gooddata-cn Kubernetes installation

  • 26 November 2021
  • 4 replies
  • 49 views

Hi Team

While trying to install gooddata-cn version 1.3.0 on our cloud provider, all the pods come up and run properly. But when accessing the application or an organization API, we get a 404 Not Found error.

So, while checking the logs of the ingress-nginx pods, we see the following.

 

When we check the endpoints using `kubectl get endpoints`, we see the output below.
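For reference, a hedged sketch of the commands in question (the `gooddata` namespace is an assumption; substitute the namespace where gooddata-cn was installed):

```shell
# Assumption: gooddata-cn is installed in the "gooddata" namespace.
# An empty ENDPOINTS column means no Ready pod matches the Service selector.
kubectl get endpoints -n gooddata
kubectl get pods -n gooddata -o wide
```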

We are using Oracle Cloud for the development environment, as initial testing will be done there. Please let me know whether we are missing any configuration.

 


4 replies


Hi, I assume you’re using a new ingress-nginx Helm chart (>4.0.0) with the older GoodData.CN Helm chart (1.3.0). There’s a documented step about setting the proper ingress-class annotations. Please set this annotation both in `.ingress.annotations` and in `.dex.ingress.annotations`, as shown in the example.
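A minimal sketch of the relevant values-file sections, assuming the documented annotation is the legacy `kubernetes.io/ingress.class`:

```yaml
# Hedged example: set the ingress-class annotation for both the main
# ingress and the Dex ingress when using ingress-nginx chart >= 4.0.0
# with GoodData.CN 1.3.0.
ingress:
  annotations:
    kubernetes.io/ingress.class: nginx
dex:
  ingress:
    annotations:
      kubernetes.io/ingress.class: nginx
```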

 

Note - another (better) solution would be to upgrade to gooddata-cn 1.4.0, where these annotations do not need to be set because the resources use the modern `ingressClassName` attribute supported by the networking.k8s.io/v1 Ingress.
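For illustration, a networking.k8s.io/v1 Ingress using `ingressClassName` looks roughly like this (the host, resource names, and port are placeholders, not the actual chart output):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example-ingress
spec:
  # Replaces the legacy kubernetes.io/ingress.class annotation
  ingressClassName: nginx
  rules:
    - host: gooddata.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: example-service
                port:
                  number: 80
```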

 

Hi Robert

I removed the annotation from the customized .yaml file and installed gooddata-cn version 1.4.0, but the same error still occurs.

Attaching the log file for the ingress-nginx pod, please suggest.

Also, the endpoints are coming correctly after giving the right namespace.

--Anjali

On further analysis, the metadata-api pod is going into CrashLoopBackOff; below is the error from describing the pod.

Can you please suggest a fix?

 

-- Anjali


I see, so the problem is that the metadata-api pods are not Ready because the application doesn’t start. That’s the reason for the missing endpoints: they are created by the Service only when a pod matching the Service selector becomes Ready.

Please check the container logs of these pods using `kubectl logs` (they are named like `gooddata-cn-metadata-api-xxxxxxxx-yyyyyyyy`) to see why they don’t start.
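A hedged sketch of those commands (the namespace and the pod-name suffix are placeholders to fill in from your cluster):

```shell
# Show why the container keeps restarting
kubectl logs -n gooddata gooddata-cn-metadata-api-xxxxxxxx-yyyyyyyy
# Include logs from the previous (crashed) container instance
kubectl logs -n gooddata gooddata-cn-metadata-api-xxxxxxxx-yyyyyyyy --previous
```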

It’s also possible that these pods are subject to resource starvation - could you please share how many nodes you have and how much RAM and CPU they have?
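One way to gather that information is sketched below (`kubectl top` assumes metrics-server is installed in the cluster):

```shell
# Node count and allocatable CPU/memory per node
kubectl get nodes
kubectl describe nodes | grep -A 5 "Allocatable"
# Current usage, if metrics-server is available
kubectl top nodes
```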

Reply