# gooddata-cn
Manuel:
Hi everyone. I'm in the process of setting up Okta as the IdP for a gooddata-cn Helm chart installation using these instructions. During the login flow I'm seeing the redirect to Okta contain a `redirect_uri` parameter that uses http rather than https. Since Okta expects the redirect URI to use https, it fails the request because it is not an exact match. Is there a way to configure the `redirect_uri` to always use https?
For reference, this is the structure of the whole URL I'm seeing:

```
https://$OKTA_DOMAIN/oauth2/v1/authorize?response_type=code&client_id=REDACTED&scope=openid%20profile&state=REDACTED&redirect_uri=http://exampleorg.dummydomain.com/login/oauth2/code/exampleorg.dummydomain.com&nonce=REDACTED
```
Robert:
Hi Manuel, it looks like a minor misconfiguration of the ingress-controller setup. Assuming you're accessing exampleorg.dummydomain.com over HTTPS, it seems the X-Forwarded-* headers are not being passed correctly to the backend microservices. These headers are important for identifying the public URL scheme, which is later used to generate the `redirect_uri`. I don't know the details of your setup, but please check your ingress-nginx configuration to see whether you have the following Helm chart value set:
```
controller.config.use-forwarded-headers='true'
```
This setting will reconfigure ingress-nginx to properly propagate X-Forwarded-* headers to backend microservices.
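For reference, applying this with a one-off `helm upgrade` could look like the sketch below; it assumes the chart was installed as release `ingress-nginx` in the `ingress-nginx` namespace (both names are assumptions from a default install):

```bash
# Re-apply the existing release values and flip only this one ConfigMap option.
# --set-string keeps the value a string, as required by ConfigMap data fields.
helm upgrade ingress-nginx ingress-nginx/ingress-nginx \
  --namespace ingress-nginx \
  --reuse-values \
  --set-string controller.config.use-forwarded-headers=true
```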
Manuel:
Thanks Robert, with that setting I'm still seeing the same behavior. It turns out that the nginx ingress controller is set to `service.beta.kubernetes.io/aws-load-balancer-type: nlb`. Since that's a layer 4 LB, it's probably just passing the request along to nginx without adding the `X-Forwarded-*` headers.
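One quick sanity check is to confirm the value actually landed in the controller's ConfigMap (the ConfigMap name and namespace below are assumptions from a default ingress-nginx installation):

```bash
# Prints "true" if the Helm value was rendered into the nginx ConfigMap.
kubectl get configmap ingress-nginx-controller \
  --namespace ingress-nginx \
  -o jsonpath='{.data.use-forwarded-headers}'
```

Even with the option enabled, a pure L4 NLB adds no headers of its own, so nginx only ever sees whatever scheme the direct connection uses.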
Robert:
And did you access your organization URL via HTTPS? If you access it using http/80, you can make ingress-nginx automatically return a redirect to HTTPS using the setting `controller.config.force-ssl-redirect='true'`.

And before configuring your deployment for the Okta IdP, did authentication using the internal IdP work correctly? The following ingress-nginx values can be used for a standard L4 setup, assuming you have installed aws-load-balancer-controller, ingress-nginx, and also cert-manager to automatically provision certificates for the ingress controller. I'm not sure how closely this matches your setup, but it may serve as a hint.
```yaml
controller:
  service:
    annotations:
      service.beta.kubernetes.io/aws-load-balancer-backend-protocol: "tcp"
      service.beta.kubernetes.io/aws-load-balancer-cross-zone-load-balancing-enabled: "false"
      service.beta.kubernetes.io/aws-load-balancer-type: "external"
      service.beta.kubernetes.io/aws-load-balancer-nlb-target-type: "ip"
      service.beta.kubernetes.io/aws-load-balancer-target-group-attributes: "deregistration_delay.connection_termination.enabled=true,preserve_client_ip.enabled=true"
      service.beta.kubernetes.io/aws-load-balancer-scheme: "internet-facing"
      service.beta.kubernetes.io/aws-load-balancer-healthcheck-port: "10254"
      service.beta.kubernetes.io/aws-load-balancer-healthcheck-path: "/healthz"
      service.beta.kubernetes.io/aws-load-balancer-healthcheck-protocol: "HTTP"
      service.beta.kubernetes.io/aws-load-balancer-alpn-policy: "HTTP2Preferred"
  config:
    force-ssl-redirect: 'true'
    client-body-buffer-size: '1m'
    client-body-timeout: '180'
    proxy-buffer-size: '16k'
    enable-brotli: 'true'
    brotli-types: >-
      application/vnd.gooddata.api+json application/xml+rss application/atom+xml
      application/javascript application/x-javascript application/json application/rss+xml
      application/vnd.ms-fontobject application/x-font-ttf application/x-web-app-manifest+json
      application/xhtml+xml application/xml font/opentype image/svg+xml image/x-icon
      text/css text/javascript text/plain text/x-component
    use-gzip: 'true'
    gzip-types: >-
      application/vnd.gooddata.api+json application/xml+rss application/atom+xml
      application/javascript application/x-javascript application/json application/rss+xml
      application/vnd.ms-fontobject application/x-font-ttf application/x-web-app-manifest+json
      application/xhtml+xml application/xml font/opentype image/svg+xml image/x-icon
      text/css text/javascript text/plain text/x-component
  metrics:
    enabled: true
  replicaCount: 3
serviceAccount:
  automountServiceAccountToken: true
```
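Applying the values could look like the sketch below, assuming they are saved as `values.yaml` (release and namespace names are assumptions; the repo URL is the upstream ingress-nginx chart repository):

```bash
# Add the upstream chart repo once, then install or upgrade with the values above.
helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
helm upgrade --install ingress-nginx ingress-nginx/ingress-nginx \
  --namespace ingress-nginx --create-namespace \
  -f values.yaml
```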
Manuel:
Thank you, I have gone ahead and switched to using a classic LB for the moment, but I will use your example in the near future to get an implementation with an L4 NLB.
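For anyone hitting the same issue: the classic ELB path works here because, with HTTP/HTTPS listeners, the ELB operates at layer 7 and injects `X-Forwarded-Proto`/`X-Forwarded-For` itself. A minimal sketch of the corresponding service annotations, where the ACM certificate ARN is a placeholder and the exact values are assumptions:

```yaml
# Classic (L7) ELB in front of ingress-nginx: TLS terminates at the ELB and
# plain HTTP goes to nginx, with X-Forwarded-* headers added by the ELB.
controller:
  service:
    annotations:
      service.beta.kubernetes.io/aws-load-balancer-backend-protocol: "http"
      service.beta.kubernetes.io/aws-load-balancer-ssl-cert: "arn:aws:acm:REGION:ACCOUNT_ID:certificate/CERT_ID"
      service.beta.kubernetes.io/aws-load-balancer-ssl-ports: "https"
  config:
    # Needed so nginx trusts the headers coming from the ELB.
    use-forwarded-headers: 'true'
```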