
Steps to Deploying Kong as a Service Mesh

In a previous post, we explained how the team at Kong thinks of the term “service mesh.” In this post, we’ll start digging into the workings of Kong deployed as a mesh. We’ll talk about a hypothetical example of the smallest possible deployment of a mesh, with two services talking to each other via two Kong instances – one local to each service.

When deployed in a mesh, we refer to local instances of Kong as “paired proxies.” This is because the service mesh, broken down into its smallest atomic parts, is made of individual network transactions between two proxies that are “aware” of each other. Although any two proxies in the mesh can communicate with each other in this way, from the standpoint of a single transaction there is no mesh at all – just a pair of proxies.

Kong’s service mesh deployment – like all service meshes – is made of proxies that form pairs at connection time. Via those paired connections, the proxies provide security, reliability and observability for distributed application architectures. In other words, every service mesh is really a collection of paired proxies.

Because the whole mesh is made of paired proxies, our example will be simple. We start out with service A and service B, which exchange requests and responses over insecure, non-TLS (Transport Layer Security) network connections. We’ll assume we have root level access to the hosts running A and B, and that there are security, reliability and observability issues with those connections. Let’s start solving those problems with a pair of Kong proxies.

Symbols and Terminology

We’ll establish some symbols and terminology that we’ll use through the remainder of this post and in many other documents about Kong’s service mesh deployment architectures:

Service and Kong Instances

  • A, B, etc. represent single instances of services (also known as “applications”)
    • A is a service that makes requests to B
    • B is a service that responds to requests from A. B does not initiate any requests, nor does it get requested by any service other than A.
    • Both services send and receive non-TLS traffic only – they cannot establish or terminate TLS connections. Both services communicate via HTTP.
  • K represents a Kong node that is not “affiliated” with any particular service
  • KA, KB, etc. represent Kong nodes that proxy all traffic coming in to and going out from A, B, etc.
    • Unlike Kong nodes deployed at the “edge” of your computing environment as API gateways, these KA, KB, etc. nodes are deployed local to the services they are proxying as node proxies or sidecars.

Connections Between Services and Proxies

Though the arrows in this section point only one way for simplicity, each arrow represents both the request and the response traffic. This same convention of “arrow on one end only, for clarity” applies throughout this example.

  • -> represents a non-TLS local connection
  • ---> represents a non-TLS network connection
  • >>>> represents a TLS/HTTPS network connection
  • ===> represents a Kong mutual TLS (KmTLS) network connection between Kong nodes

Kong Configurations

  • Kong Routes are used to configure the “incoming” side of a Kong proxy. A Route must be associated with one Service.
  • Kong Services are used to configure the “outgoing,” or upstream, side of a Kong proxy. A Service is associated with one or more Routes (see the sketch below).
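
As a concrete illustration of these two objects, here is a minimal sketch of the configuration used in step 3 of the walkthrough below, issued against the Admin API of a Kong control-plane node. The Admin API address (localhost:8001), the Service name service-b and the hostname b are illustrative assumptions, not values from the original deployment.

    # Create a Kong Service: the "outgoing" side - forward traffic to B over HTTPS.
    # (localhost:8001 stands in for the Admin API of the control-plane node K.)
    curl -X POST http://localhost:8001/services \
      -d name=service-b \
      -d url=https://b:443

    # Create a matching Kong Route: the "incoming" side - accept requests for B
    # over both HTTP and HTTPS. The response includes the Route's id, which is
    # needed again when the Route is tightened in step 6.
    curl -X POST http://localhost:8001/services/service-b/routes \
      -d 'hosts[]=b' \
      -d 'protocols[]=http' \
      -d 'protocols[]=https'

Because the Route allows both protocols, existing plain-HTTP callers keep working while TLS is rolled out; step 6 of the walkthrough tightens this later.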

Deployment and Configuration

Here is an architectural walkthrough of how to deploy Kong as a service mesh, which will highlight some of the advantages of this pattern and where they come from. For a full tutorial with code snippets, please see the Streams and Service Mesh documentation.

  1. Start with Service A making HTTP requests to Service B across the network: A--->B. This connection is unsecured and unobservable, and the traffic is traveling over a network, which is inherently unreliable.
  2. Deploy an instance of Kong K and its required datastore to start your Kong cluster. This Kong node can be configured as a Control Plane node only – we’ll be using it only for configuring Kong, not for proxying traffic.
  3. Connect to the Kong Admin API and configure a Service that sends traffic to B via HTTPS and a matching Route that accepts incoming requests for B via both HTTP and HTTPS, as sketched in the “Kong Configurations” section above. (Note that if we started using this Service+Route immediately, we’d get an error because B cannot terminate TLS connections.)
  4. Deploy a Kong proxy KB local to B and configure origins, transparent and iptables (a configuration sketch appears after this list). You now have a Kong proxy in front of B proxying all inbound requests and outbound response traffic, and you’ve made no changes to B.
    1. Although Service B cannot terminate TLS connections, the origins config KONG_ORIGINS="https://B:443=http://B:80" causes traffic that KB would normally send to B via HTTPS on port 443 to instead be sent via HTTP on port 80.
    2. Kong “blocks by default,” which means that with this configuration, B can’t initiate any requests: KB now intercepts all outbound requests, and there is not yet a Kong Service+Route through which KB could send them.
    3. We aren’t yet benefiting from the Kong proxy – while KB is in the request/response path, it isn’t doing anything helpful.
    4. The current situation looks like this: A--->KB->B.
      1. If we had a new Service X that sent HTTPS requests to B, we could also have X>>>KB->B.
  5. A is sending requests to B via unencrypted HTTP. An HTTPS connection between A and B with mTLS would make communication more secure and is one of the capabilities that Kong can provide when deployed as a mesh. A doesn’t initiate HTTPS connections, and we can’t make changes to A. To get the security improvement we seek, first deploy another Kong proxy KA, local to A, configured with origins, transparent and iptables. Let’s examine in detail what happens now:
    1. A initiates an HTTP connection to B as usual. The configuration of transparent and iptables causes KA to intercept this request.
      1. A->KA
    2. KA uses the Service+Route configured in step #3 of this example to accept the incoming request via HTTP, then send it across the network to B via HTTPS.
      1. A->KA>>>
    3. The configuration of transparent and iptables on KB causes KB to intercept the HTTPS request from KA rather than having the request reach B directly – which is necessary because unlike B, KB is able to terminate TLS connections.
      1. A->KA>>>KB
    4. KA and KB automatically upgrade the TLS connection to mutual TLS (mTLS) using Kong-generated certificates. We call mTLS with Kong certs a `KmTLS` connection. We now have a paired proxy.
      1. A->KA===>KB
    5. KB terminates the TLS connection and forwards the request to B via a local HTTP connection. The configuration of origins on KB causes KB to send traffic to B locally rather than across the network as KA did.
      1. A->KA===>KB->B
    6. The response flows “in reverse”: when B responds via HTTP, KB receives the response and sends it over the KmTLS connection to KA. KA terminates TLS and forwards the response to A via a local HTTP connection.
      1. A<-KA<===KB<-B
  6. In step #3, we configured the Route for B to accept both HTTP and HTTPS traffic. As long as there are applications that might call B over HTTP, we need to leave this configuration in place. However, if we can assert that “starting now, all communications with B must be secured with TLS,” then we can PATCH the Route for B so that it accepts only HTTPS requests (sketched after this list). In our example above, the only service calling B is A, and it now does so via TLS (initiated by KA) – so, to ensure that our cross-network traffic is always encrypted, we can make this change now.
  7. Now that we’ve got a paired proxy between A and B, we can start applying Kong plugins that run only on KA (like authentication), only on KB (like rate limiting with a local counter), or on both (like Zipkin tracing) – see the plugin sketch after this list.
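
To make steps 4 and 5 more concrete, here is a minimal sketch of how a node proxy such as KB might be started on B’s host, assuming a Linux host with the root-level access mentioned earlier. The listen addresses, the TPROXY-style iptables rules and the routing commands are illustrative assumptions – use the rules from Kong’s transparent proxying documentation that fit your own network layout. KA is configured similarly on A’s host.

    # Environment for KB (step 4). The origins value is the one described above;
    # the listen addresses and ports are typical defaults and only illustrative.
    export KONG_ORIGINS="https://B:443=http://B:80"
    export KONG_PROXY_LISTEN="0.0.0.0:8000 transparent, 0.0.0.0:8443 ssl transparent"
    kong start

    # Divert traffic addressed to B into the local Kong listeners so that KB
    # intercepts it transparently. These rules are a simplified sketch; real
    # deployments must match their own ports, interfaces and egress traffic.
    iptables -t mangle -A PREROUTING -p tcp --dport 80 \
      -j TPROXY --on-port 8000 --tproxy-mark 0x1/0x1
    iptables -t mangle -A PREROUTING -p tcp --dport 443 \
      -j TPROXY --on-port 8443 --tproxy-mark 0x1/0x1
    ip rule add fwmark 1 lookup 100
    ip route add local 0.0.0.0/0 dev lo table 100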
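Step 6, the protocol tightening, is a single PATCH against the Admin API. The placeholder $ROUTE_ID stands for the id returned when the Route was created in the “Kong Configurations” sketch above.

    # Once all callers of B reach it over TLS, restrict the Route to HTTPS only.
    curl -X PATCH http://localhost:8001/routes/$ROUTE_ID \
      -d 'protocols[]=https'

After this change, plain-HTTP requests no longer match the Route and are rejected rather than forwarded.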
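Step 7 can also be sketched with the Admin API. The plugin names below (key-auth, rate-limiting, zipkin) are ordinary Kong plugins chosen to illustrate the examples in the step; the run_on field, which mesh-capable Kong releases use to pin a plugin to the first proxy, the second proxy, or both, should be treated as an assumption to verify against your Kong version.

    # Authentication on KA only - the first proxy in the pair to see the request.
    curl -X POST http://localhost:8001/services/service-b/plugins \
      -d name=key-auth \
      -d run_on=first

    # Rate limiting with a local counter on KB only - the second proxy.
    curl -X POST http://localhost:8001/services/service-b/plugins \
      -d name=rate-limiting \
      -d config.minute=100 \
      -d config.policy=local \
      -d run_on=second

    # Zipkin tracing on both proxies.
    curl -X POST http://localhost:8001/services/service-b/plugins \
      -d name=zipkin \
      -d config.http_endpoint=http://zipkin.example:9411/api/v2/spans \
      -d run_on=all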

Stay tuned for the next blog posts in our series, in which we’ll examine how to use paired proxies to solve observability, security and reliability problems in distributed application architectures.


