Solved

Standard Helm Post Installation on AWS DEX redirection to localhost

  • 17 August 2021
  • 6 replies
  • 103 views

  • Known Participant
  • 15 replies

Hi there,

We are facing DEX redirection to localhost when we visit the home URL after a fresh standard installation on AWS K8s.

Please kindly find attached the deployment YAMLs used for the deployment below.

 

URL:  http://gd-k8s.factoreal.info


Note: The attached file does not contain a valid license.

Thanks
Ashok
@0fffh


Best answer by Milan Sladky 17 August 2021, 16:11

Hello Ashok,

You need to set up your Dex Ingress host, which is used as the auth endpoint. By default it is localhost, so add something like this to your custom values:

 

dex:
  ingress:
    authHost: 'gd-k8s-dex.factoreal.info'

More details are in the doc here https://www.gooddata.com/developers/cloud-native/doc/1.3/installation/k8s/considerations/oidc/.
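The custom values can then be applied by upgrading the release. A minimal sketch, assuming the GoodData Helm repository is already added and that "gooddata" (namespace), "gooddata-cn" (release), and the values file name match your setup:

```shell
# Sketch: apply the custom values to a GoodData.CN release.
# Namespace, release, and file names are assumptions, not from the thread.
helm upgrade --install gooddata-cn gooddata/gooddata-cn \
  --namespace gooddata \
  --values customized-values-gooddata-cn.yaml
```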

Regards,

Milan 


6 replies


@Milan Sladky Thanks for the pointers. It is still redirecting to https://localhost. I have also inspected the config.yaml of gooddata-cn-dex and the ingress of gooddata-cn-dex. If you could help me with a full YAML snippet, it would be really helpful.

The custom values.yaml can look like this, based on yours:

# file name: customized-values-gooddata-cn.yaml

dex:
  ingress:
    authHost: 'gd-k8s-dex.factoreal.info'

service:
  redis:
    hosts:
      - redis.cache
    port: 6379
    clusterMode: false
  postgres:
    host: postgres.database
    port: 5432
    username: postgres@gooddata-cn-pg
    password: <PG_ADMIN_PASSWORD>

# file name: customized-values-gooddata-cn.yaml

deployRedisHA: true
deployPostgresHA: true

redis-ha:
  persistentVolume:
    storageClass: disk-ebs-gd-vol-puser

postgresql-ha:
  persistence:
    storageClass: disk-ebs-gd-vol-puser

license:
  key: "key/eyJhY2NvdW50Ijp7ImlkIjoiYzVmZTE3ODYtZG1u6UQH4srTfy_AflNYDhDgAt1gw=="

You can also check the actually applied authHost value like this:

$ helm get all -n gooddata gooddata-cn | grep -B 4 authHost
dex:
  ingress:
    annotations:
      cert-manager.io/issuer: ca-issuer
    authHost: msl-tiger2.dev.intgdc.com
--
  name: dex
  ingress:
    annotations:
      cert-manager.io/issuer: ca-issuer
    authHost: msl-tiger2.dev.intgdc.com

Assuming that “gooddata” is the Kubernetes namespace and “gooddata-cn” is the Helm release name.
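Once the value is applied, one quick sanity check from outside the cluster is to look at the Location header the home URL returns; the hostname below is the one from the original post, so adjust it to your own:

```shell
# Fetch only the headers of the home URL and show where it redirects.
# After the fix, the Location header should point at the Dex authHost
# (gd-k8s-dex.factoreal.info here), not at localhost.
curl -sI http://gd-k8s.factoreal.info/ | grep -i '^location'
```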

@Milan Sladky I have cleaned up GDCN and nginx-ingress and installed them freshly on the same cluster. The local redirection issue has been fixed; however, all service endpoints return an empty response.

During the installation of the ingress Helm chart I passed the AWS ACM cert parameters, something like this:
 

helm -n ingress-nginx install ingress-nginx ingress-nginx/ingress-nginx \
  --set controller.replicaCount=2 \
  --set controller.service.annotations."service\.beta\.kubernetes\.io/aws-load-balancer-ssl-cert"=arn:aws:acm:us-east-1:092X09X3X992:certificate/2acb570a-2627-4b66-ac3e-ae2a9cc82a91

Is this causing the empty response? Please guide me on how to properly terminate SSL on the AWS ELB with the ACM certificate.

Thanks in Advance 
Ashok

I assume that you want to terminate SSL on the AWS ELB, as you use the annotation to deliver the certificate to the ELB. You also need to alter the values.yaml for your Ingress, as mentioned in the documentation here: https://www.gooddata.com/developers/cloud-native/doc/1.3/installation/k8s/considerations/ingress-aws/

Hi @Milan Sladky, based on the above inputs SSL is now properly terminated; however, the ingress throws a 404.

helm upgrade --install ingress-nginx stable/ingress-nginx --namespace ingress-nginx \
    --values values-ingress.yaml --wait --timeout 3m \
    --set controller.service.annotations."service\.beta\.kubernetes\.io/aws-load-balancer-ssl-cert"=arn:aws:acm:us-east-1:112233445566:certificate/2acb570a-2627-4b66-ac3e-ae2a9cc82a41

---values-ingress.yaml---

# helm-charts/helmfile-values/values-ingress.yaml
controller:
  service:
    targetPorts:
      http: http
      https: http
    annotations:
      # SSL is terminated on the ELB, so plain HTTP is used downstream to our services
      service.beta.kubernetes.io/aws-load-balancer-backend-protocol: 'http'
      # only 'https' port will use SSL protocol
      service.beta.kubernetes.io/aws-load-balancer-ssl-ports: 'https'
      # keep connections open up to 1 hour
      service.beta.kubernetes.io/aws-load-balancer-connection-idle-timeout: '3600'
      # Disable TLS1.1 and lower protocols on TLS handshake
      service.beta.kubernetes.io/aws-load-balancer-ssl-negotiation-policy: 'ELBSecurityPolicy-TLS-1-2-2017-01'
  publishService:
    enabled: true
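When debugging a 404 behind the ELB, it can help to confirm which external hostname and port mapping the controller Service actually received. A sketch, assuming the release was installed as "ingress-nginx" in the "ingress-nginx" namespace as in the command above (the Service name may differ in your setup):

```shell
# Show the controller Service, including the external ELB hostname
# (EXTERNAL-IP column) and the port mapping applied from values-ingress.yaml.
kubectl -n ingress-nginx get svc ingress-nginx-controller -o wide
```

If the ELB hostname shown here is not what your DNS record for the Organization hostname points at, requests never reach the controller and a 404 (or empty response) is expected.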

---your-org-definition.yaml---

apiVersion: controllers.gooddata.com/v1
kind: Organization
metadata:
  # The namespace-unique name of the custom resource
  name: fctstage-org
spec:
  # The Organization ID
  id: fctstage
  # The UI-friendly Organization name
  name: "FCTDEV, Corp."
  # The DNS name where the Organization will be accessible
  hostname: gdk8sstage.factoreal.info
  # The name of the user group for the Organization administrator
  adminGroup: adminGroup
  # The name of the Organization administrator account
  adminUser: admin
  # The salted hash of the administrator password that you generated earlier at Step 1
  adminUserToken: "$5$6iRG6Yc/Ih51I2MN$/IYHZCzihzyOP3uaHs7FaHBnsLv8.dtsKjiMdAJjxc4"
  # An optional `tls` object that describes how the TLS certificate will be handled
  # For more information, see "TLS Configuration of an Organization" further in this article.
  # tls:
  #   # (Required) The name of the Secret where the certificate and the key are stored
  #   secretName: alpha-org-tls
  #   # (Optional) The name of cert-manager's Issuer or ClusterIssuer, if certificates are
  #   # automatically provisioned by cert-manager
  #   issuerName: letsencrypt-prod
  #   # (Optional) The resource that `issuerName` refers to; can be Issuer (default)
  #   # or ClusterIssuer
  #   issuerType: ClusterIssuer


 

Kindly help me with how to expose the service endpoints publicly via the ingress.

 

Thanks and Regards,
Ashok
