Understanding Gateway API Split Architecture: Control Plane vs. Data Plane

Gateway API is quickly becoming the standard for managing ingress traffic in Kubernetes environments. One of its key architectural decisions is the clear separation between the control plane and the data plane, which are deployed independently. This split is not just a technical detail; it has significant implications for how teams operate, scale, and secure their infrastructure.

Before diving into the architecture itself, let’s be clear about what a control plane and data plane are. Many users of the Ingress API are familiar with installations that combine the control plane and data plane into a single Deployment. Because of this, the term ‘controller’ is often mistakenly thought of as synonymous with the proxy that handles traffic. However, this is not completely accurate, as I describe in the following sections.

Control Plane: Centralized Management

The control plane, also known as a controller, is responsible for watching API resources (Gateways, Routes, policies, and so on), converting the specification defined in those resources into data plane configuration, and then delivering that configuration to the data plane. The control plane can technically also include other components such as public key infrastructure (PKI), though in the context of Gateway API and Ingress API, the term usually describes just the individual controller that builds configuration. The control plane does not handle user traffic; that responsibility is reserved for the data plane.
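The link between a control plane and the Gateways it manages is declared through a GatewayClass resource, whose `controllerName` field identifies the controller. A minimal sketch (the `controllerName` value here is illustrative; each implementation documents its own string):

```yaml
apiVersion: gateway.networking.k8s.io/v1
kind: GatewayClass
metadata:
  name: nginx
spec:
  # Identifies which control plane manages Gateways of this class.
  # This value is a placeholder; consult your implementation's docs.
  controllerName: gateway.example.com/example-controller
```

Any Gateway that references this class by name is then picked up and reconciled by that controller.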

Data Plane: Traffic Management

The data plane is the piece that processes and routes client traffic to the backend Kubernetes applications. It receives its configuration from the control plane, which tells it where and how to manage traffic. The data plane consists of a proxy (such as NGINX, Envoy, or HAProxy) and any other components that may be necessary for traffic management. It includes a Service, generally of type LoadBalancer, that accepts external client traffic and forwards it to the proxy.
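As a rough sketch, the Service sitting in front of the proxy might look like this (the name and labels are hypothetical; in practice the control plane generates this object for you):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-gateway-proxy     # hypothetical name
spec:
  type: LoadBalancer         # exposes the proxy to external clients
  selector:
    app: my-gateway-proxy    # matches the proxy Deployment's pod labels
  ports:
  - name: http
    port: 80
    targetPort: 80           # the port the proxy container listens on
```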

Architectural Differences with Ingress API

As I mentioned in the introduction, the term ‘controller’ has become overloaded: many Ingress API users take it to mean both the control plane and the data plane, since some Ingress controllers bundle both into the same installed Deployment. However, as I’ve described, these are two separate components, and the distinction is important for understanding the architecture of Gateway API.

The Ingress API does not prescribe how its implementations should be architected. While some use a single Deployment, others use different Deployments for their control plane and data plane.

Gateway API, on the other hand, encourages a split architecture. The control plane and data plane are separate Deployments, each with its own lifecycle and scaling characteristics.

Installation of the Control Plane

If you’re familiar with single-Deployment Ingress controllers, you’re used to installing the controller and getting both the control plane and the data plane at the same time. You get your LoadBalancer Service right away, and you are ready to send traffic.

With Gateway API implementations, such as NGINX Gateway Fabric, you are only initially installing the control plane. No proxy or LoadBalancer Service exists yet. The control plane stands on its own.

Data Plane Provisioning

When a Gateway resource is created, it tells the control plane that the user wants a data plane (or gateway). The control plane sees this and provisions the data plane Deployment. For NGINX Gateway Fabric, this means you get an NGINX Deployment and an associated LoadBalancer Service. The control plane then configures that NGINX instance based on any Routes or policies attached to the Gateway. Once the data plane is provisioned and configured, client traffic can be routed through the gateway proxy to backend applications in Kubernetes.
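A minimal sketch of this flow, assuming a GatewayClass named `nginx` and a backend Service named `app-svc` (both names, the hostname, and the route are hypothetical):

```yaml
# Creating this Gateway prompts the control plane to provision a
# proxy Deployment and LoadBalancer Service for it.
apiVersion: gateway.networking.k8s.io/v1
kind: Gateway
metadata:
  name: my-gateway
spec:
  gatewayClassName: nginx
  listeners:
  - name: http
    port: 80
    protocol: HTTP
---
# An HTTPRoute attaches to the Gateway and tells the control plane
# how to configure the proxy's routing rules.
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: app-route
spec:
  parentRefs:
  - name: my-gateway
  hostnames:
  - "app.example.com"
  rules:
  - backendRefs:
    - name: app-svc
      port: 80
```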

Benefits of Split Architecture

  • Security: By isolating the two planes from each other, the attack surface is reduced. If the control plane is compromised, client traffic is not directly affected. If the data plane is compromised, sensitive information stored in the Kubernetes API can’t be accessed. Each component runs with its own set of minimal permissions, and they don’t share process namespaces or filesystems since they live in different Deployments.
  • Scaling: The data plane can be scaled up or down based on traffic demands, independently of the control plane. This allows for resource optimization and ensures that configuration management remains lightweight, even as traffic grows.
  • Reliability: Faults in the control plane do not affect the data plane’s ability to route traffic. Similarly, faults in the data plane do not affect the control plane’s ability to watch and manage resources. Rolling updates and replacements of proxies can be performed with minimal downtime, improving overall system resilience.
  • Multi-Gateway Simplicity: With Gateway API, a single control plane can manage multiple Gateways (data planes) within the same cluster (and potentially across multiple clusters as well). This means you can provision distinct Gateways for different teams, environments, or use cases. Each Gateway has its own configuration and lifecycle, and they are isolated from each other. You can do this without the overhead of installing and maintaining multiple control planes. In the single-Deployment Ingress controller model mentioned earlier, supporting multiple ingress points would require deploying several controller instances, which increases operational complexity and resource usage. Gateway API streamlines this by letting you declaratively create new Gateways as needed, all managed centrally by one control plane, making multi-gateway architectures much easier to implement and maintain.
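Because one control plane serves many Gateways, giving another team its own ingress point is just a matter of creating another resource (team names and namespaces here are hypothetical):

```yaml
apiVersion: gateway.networking.k8s.io/v1
kind: Gateway
metadata:
  name: team-a-gateway
  namespace: team-a
spec:
  gatewayClassName: nginx
  listeners:
  - name: http
    port: 80
    protocol: HTTP
---
apiVersion: gateway.networking.k8s.io/v1
kind: Gateway
metadata:
  name: team-b-gateway
  namespace: team-b
spec:
  gatewayClassName: nginx
  listeners:
  - name: http
    port: 80
    protocol: HTTP
```

Each Gateway gets its own proxy Deployment and LoadBalancer Service, while both are reconciled by the same control plane.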

Conclusion

As Gateway API adoption continues to rise, mastering its split architecture will be a game-changer for scaling modern infrastructures. Whether you’re looking to better secure your network traffic or optimize scaling, Gateway API provides powerful tools to meet the demands of tomorrow.