Easy and Robust Single Sign-On with OpenID Connect and NGINX Ingress Controller

With the release of NGINX Ingress Controller 1.10.0, we are happy to announce a major enhancement: a technology preview of OpenID Connect (OIDC) authentication. OIDC is the identity layer built on top of the OAuth 2.0 framework, providing an authentication and single sign‑on (SSO) solution for modern apps. Our OIDC policy is a full‑fledged SSO solution that enables users to securely authenticate with multiple applications and Kubernetes services. Significantly, it enables apps to use an external identity provider (IdP) to authenticate users and frees the apps from having to handle usernames or passwords.

This new capability complements other NGINX Ingress Controller authorization and authentication features, such as JSON Web Token (JWT) authentication, to provide a robust SSO option that is easy to configure with NGINX Ingress resources. This means you can secure apps with a battle‑tested solution for authenticating and authorizing users, and that developers don’t need to implement these functions in the app. Enforcing security and traffic control at the Ingress controller blocks unauthorized and unauthenticated users at early stages of the connection, reducing unnecessary strain on resources in the Kubernetes environment.

Defining an OIDC Policy

When you define and apply an OIDC policy, NGINX Ingress Controller operates as the OIDC relying party, initiating and validating authenticated sessions to the Kubernetes services for which it provides ingress. We support the OIDC Authorization Code Flow with a preconfigured IdP.

Note: OIDC policies are exclusive to NGINX Plus.

Here’s a sample configuration of an OIDC Policy object.

apiVersion: k8s.nginx.org/v1
kind: Policy
metadata:
  name: ingress-oidc-policy
spec:
  oidc:
    clientID: nginx-ingress
    clientSecret: oidc-secret
    authEndpoint: https://idp.example.com/openid-connect/auth
    tokenEndpoint: https://idp.example.com/openid-connect/token
    jwksURI: https://idp.example.com/openid-connect/certs
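
Note that the clientSecret field references a Kubernetes Secret rather than embedding the secret issued by the IdP directly in the policy. Here is a minimal sketch of that Secret; it assumes the nginx.org/oidc secret type and client-secret data key described in the NGINX Ingress Controller documentation, and the value shown is a base64‑encoded placeholder:

apiVersion: v1
kind: Secret
metadata:
  name: oidc-secret
type: nginx.org/oidc
data:
  client-secret: bXktb2lkYy1jbGllbnQtc2VjcmV0   # base64-encoded client secret (placeholder)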

Here’s how the fields in the policy are used when establishing an OIDC session:

  1. The client requests a protected resource, and NGINX Ingress Controller redirects it to the IdP designated as authEndpoint to authenticate.
  2. If authentication succeeds, the IdP issues a single‑use code and redirects the client back to a special redirect URI hosted by NGINX Ingress Controller, which exchanges the single‑use code at the IdP’s tokenEndpoint for a JWT that the client can use for the duration of the session.
  3. NGINX Ingress Controller stores the JWT and sends a session cookie to the client containing an opaque reference to the JWT.
  4. When the client makes a subsequent request and presents the session cookie, NGINX Ingress Controller retrieves the JWT and validates it against the keys published at the jwksURI endpoint. If the JWT is valid and unexpired, NGINX Ingress Controller proxies the request to the appropriate backend Kubernetes pod.

The NGINX Ingress Controller OIDC policy supports standard OIDC scopes in addition to the default openid scope. Scopes include user‑identity attributes such as name and email address, which may be used as access‑control criteria. If both the client and the IdP allow these scopes to be shared with NGINX Ingress Controller, their corresponding values are encoded as JWT claims in the response from the IdP.
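
For example, a policy might request the standard profile and email scopes in addition to openid. The fragment below is a sketch; it assumes the optional scope field of the OIDC policy, with multiple scopes joined by + as they appear in the resulting authorization request (check the Policy documentation for the exact syntax in your release):

# Fragment of the Policy spec shown above, requesting additional scopes
spec:
  oidc:
    scope: openid+profile+email   # multiple scopes joined with +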

Once the OIDC policy is successfully applied, it can be reused in other ingress load‑balancing configurations, which makes it much easier to add authentication and authorization to apps and Kubernetes services. For example, you can reference an OIDC policy in VirtualServer objects and modify ingress traffic by passing JWT claims as HTTP headers to the upstream application.

apiVersion: k8s.nginx.org/v1
kind: VirtualServer
metadata:
  name: webapp
spec:
  host: webapp.example.com
  tls:
    secret: tls-secret
    redirect:
      enable: true
  upstreams:
  - name: webapp
    service: webapp-svc
    port: 80
  routes:
  - path: /
    policies:
    - name: ingress-oidc-policy
    action:
      proxy:
        upstream: webapp
        requestHeaders:
          pass: true
          set:
          - name: My-Header
            value: ${jwt_claim_profile}
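
Assuming the Policy and VirtualServer manifests above are saved locally (the file names here are placeholders), you can apply them and confirm the state of the VirtualServer with kubectl:

$ kubectl apply -f oidc-policy.yaml -f virtual-server.yaml
$ kubectl describe virtualserver webapp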

What Else Is New in Release 1.10.0?

In NGINX Ingress Controller 1.10.0 we continue our commitment to providing a production‑grade Ingress controller that is flexible, powerful, and easy to use. NGINX Ingress Controller can be configured with both standard Kubernetes Ingress resources and NGINX Ingress resources.

The information in this section applies to NGINX Ingress Controller in both NGINX Open Source and NGINX Plus.

In addition to OIDC authentication, release 1.10.0 includes the following enhancements and improvements:

  • Updated NGINX Service Mesh integration – NGINX Service Mesh version 0.8.0 can now integrate seamlessly with NGINX Ingress Controller.
  • Changes in behavior – The apiVersion for Policy objects is promoted to v1, and only certain secret types are valid.
  • Enhanced visibility and logging – Additional metrics and log annotations simplify troubleshooting and help you quickly identify pain points.
  • Other new features –
    • Improved validation of annotations and secrets
    • NGINX App Protect user‑defined signatures to block unrecognized threats
    • The IP address access control list policy is upgraded to production‑ready status
    • Rancher RKE support

Important Changes in Behavior

The apiVersion of the Policy resource has been promoted to v1 (the corresponding schema has not changed). If you created policies with release 1.8.0 or 1.9.0 and plan to continue using them, you must recreate them under the updated apiVersion. For details, see UPGRADE in the Release Notes.

NGINX Ingress Controller now accepts only certain types of secrets. See UPDATING SECRETS in the Release Notes.
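
For example, a secret referenced in a TLS configuration is expected to carry the standard kubernetes.io/tls type (see the Release Notes for the full list of accepted types). A minimal sketch, where the data values are placeholders for base64‑encoded PEM content:

apiVersion: v1
kind: Secret
metadata:
  name: tls-secret
type: kubernetes.io/tls
data:
  tls.crt: <base64-encoded certificate>
  tls.key: <base64-encoded private key>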

Enhanced Visibility and Logging

Configuration Queue Metrics

Monitoring is essential for understanding and visualizing how your application behaves in practice. It can simplify troubleshooting and pinpoint flaws and bottlenecks in your application so you can fix them quickly.

In this release, we are adding new workqueue metrics that report how many outstanding configuration changes are waiting in the queue at any given time, and how long a configuration has remained in NGINX Ingress Controller’s queue before being processed.

You can use these metrics to determine whether too many configuration changes are being requested and whether NGINX Ingress Controller has the resources it needs to process configurations quickly.

If the Ingress Controller can’t keep up with the number of configuration changes, you can allocate more computing resources (CPU and memory) to the currently deployed NGINX Ingress Controller pods, or increase the number of deployments so that each deployment has to process a smaller number of configurations.
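
As a sketch, raising the resources available to the existing pods might look like this in the Deployment manifest; the container name and the values shown are illustrative, not recommendations:

# Fragment of the NGINX Ingress Controller Deployment manifest
spec:
  template:
    spec:
      containers:
      - name: nginx-ingress
        resources:
          requests:
            cpu: "1"        # guaranteed CPU for configuration processing
            memory: 512Mi
          limits:
            cpu: "2"
            memory: 1Gi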

Annotation of Logs with Kubernetes Object Names

Release 1.9.0 added annotations to Ingress Controller metrics exported to Prometheus to specify the Kubernetes service name, pod, and Ingress resource name.

This release adds the same annotations to NGINX Ingress Controller logs. Log annotations not only improve visibility, but also simplify troubleshooting. Operators can now associate log entries with specific Kubernetes services and resources such as an Ingress or VirtualServer resource, for quicker identification of Kubernetes objects that need attention. For example, a connection may fail because the VirtualServer resource does not include any upstream available to service the connection.

$ kubectl logs NGINX_Ingress_Controller_pod_name -n nginx-ingress
174.115.106.xxx - - [27/Jan/2021:21:36:50 +0000] "GET /tea HTTP/1.1" 503 154 "-" "curl/7.54.0" "-" "app" "virtualserver" "default" "billing-svc"

Additionally, application and service owners can filter logs by service name, Ingress resource, or both, to access only the log entries belonging to their application. This was not possible in earlier releases where log entries didn’t specify the service or Ingress resource.

To configure this logging feature, include built‑in variables in the log-format field of the NGINX Ingress Controller ConfigMap.
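
Here is a sketch of such a ConfigMap. It assumes the standard nginx-config ConfigMap from the installation manifests and the additional log variables ($resource_name, $resource_type, $resource_namespace, and $service) described in the NGINX Ingress Controller logging documentation; this format produces log lines like the sample above:

kind: ConfigMap
apiVersion: v1
metadata:
  name: nginx-config
  namespace: nginx-ingress
data:
  log-format: '$remote_addr - $remote_user [$time_local] "$request" $status $body_bytes_sent "$http_referer" "$http_user_agent" "$http_x_forwarded_for" "$resource_name" "$resource_type" "$resource_namespace" "$service"'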

Annotations with TCP Metrics

Release 1.10.0 also adds annotations about the Kubernetes service name, pod, and Ingress resource name to TCP/UDP metrics exported to Prometheus. Correlating performance of TCP applications with Kubernetes objects can simplify troubleshooting. Here is an example of an annotated Prometheus metric about active connections of an upstream server:

# HELP nginx_ingress_nginxplus_stream_upstream_server_active Active connections
# TYPE nginx_ingress_nginxplus_stream_upstream_server_active gauge
nginx_ingress_nginxplus_stream_upstream_server_active{class="nginx",pod_name="coredns-6b67b8f5d6-pnrwl",resource_name="dns-tcp",resource_namespace="default",resource_type="transportserver",server="10.0.2.20:5353",service="coredns",upstream="ts_default_dns-tcp_dns-app"} 3

Other Features

Validation of Ingress Annotations

Validating annotations in an Ingress resource increases the reliability of NGINX Ingress Controller by preventing outages during NGINX Ingress Controller reloads that are caused by invalid annotation values. Release 1.10.0 validates more annotations than earlier releases.

NGINX Ingress Controller now also reports validation errors in events associated with Ingress resources as soon as an Ingress manifest is applied. You can see error messages immediately with the kubectl describe command instead of having to look through log files after the fact.

$ kubectl describe ing cafe-ingress

Events:
  Type     Reason    Age  From                      Message
  ----     ------    ---  ----                      -------
  Warning  Rejected  7s   nginx-ingress-controller  default/cafe-ingress was rejected: with error: annotations.nginx.org/lb-method: Invalid value: "least_cons": Invalid load balancing method: "least_cons"

Validation of Secrets

NGINX Ingress Controller 1.10.0 evaluates the content of Kubernetes secrets, which increases reliability by preventing outages or complex troubleshooting scenarios. If a referenced secret is not valid or does not exist, NGINX Ingress Controller reports the problem in an event associated with the resource that references the secret.

$ kubectl describe ing cafe

Events:
  Type     Reason                     Age  From                      Message
  ----     ------                     ---  ----                      -------
  Warning  AddedOrUpdatedWithWarning  8s   nginx-ingress-controller  Configuration for default/cafe-ingress was added or updated; with warning(s): TLS secret cafe-secret is invalid: Failed to validate TLS cert and key: tls: failed to find any PEM data in key input

NGINX App Protect User-Defined Signatures

User‑defined signatures are very useful when organizations need to block threats quickly but prebuilt signatures for those threats do not exist yet. With NGINX Ingress Controller 1.10.0 you can create NGINX App Protect user‑defined signatures to customize your Layer 7 security policies.

In previous releases, you could use only the predefined signatures provided by F5. With the bespoke signatures you can create in release 1.10.0, you can use offset parameters and regular expressions to block specific strings in user input and to specify complex rules about the location and format of those strings. For example, when a new vulnerability is discovered, you can create a bespoke signature to mitigate it immediately instead of waiting for an update to the signature database.
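
As a sketch, a user‑defined signature that blocks requests containing a particular string might look like the following; it assumes the APUserSig custom resource format from the NGINX App Protect documentation, and the names and values are illustrative:

apiVersion: appprotect.f5.com/v1beta1
kind: APUserSig
metadata:
  name: apple-sig
spec:
  tag: Fruits
  signatures:
  - name: Apple_medium_acc
    description: Block requests that contain the string "apple"
    signatureType: request
    rule: content:"apple"; nocase;
    accuracy: medium
    risk: medium
    attackType:
      name: Brute Force Attack
    systems:
    - name: Microsoft Windows
    - name: Unix/Linux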

User‑defined signatures are easily ported over to NGINX Ingress Controller from other F5 WAF products, which simplifies manageability of security across platforms and environments.

IP Address Access Control List Policy is Production‑Ready

As part of release 1.10.0, we are graduating the IP address access control list (accessControl) policy introduced in release 1.8.0 to production‑ready status (the three policies introduced in release 1.9.0 and the OIDC policy introduced in this release remain in preview mode).
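
As a refresher, here is a minimal sketch of an accessControl policy that admits traffic only from a private address range; it follows the Policy documentation, and the CIDR shown is a placeholder:

apiVersion: k8s.nginx.org/v1
kind: Policy
metadata:
  name: webapp-allow
spec:
  accessControl:
    allow:
    - 10.0.0.0/8   # admit only clients from this range (placeholder)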

Rancher Kubernetes Engine Support

Release 1.10.0 supports running NGINX Ingress Controller on Rancher Kubernetes Engine (RKE) by making NGINX Ingress Controller available in Rancher’s catalog. You can install NGINX Ingress Controller with a few clicks in the catalog’s UI, supplying only a small number of input parameters.

Resources

For the complete changelog for release 1.10.0, see the Release Notes.

To try NGINX Ingress Controller with NGINX Open Source, you can obtain the release source code or download a prebuilt container from Docker Hub.