
Kong and Istio: Setting up Service Mesh on Kubernetes with Kiali for Observability 


Service mesh is redefining the way we think about security, reliability, and observability when it comes to service-to-service communication. In a previous blog post about service mesh, we took a deep dive into our definition of this new pattern for inter-service communication. Today, we’re going to take you through how to use Istio, an open source cloud native service mesh for connecting and securing east-west traffic.

This step-by-step tutorial will walk you through installing the Istio service mesh on Kubernetes, controlling your north-south traffic with Kong, and adding observability with Kiali.

Part 1: How to set up Istio on Kubernetes 

1. Set up Kubernetes Platform

To get started, you need to install and/or configure one of the various Kubernetes platforms. You can find all the necessary documentation for setup here. For local development, Minikube is a popular option if you have enough RAM to allocate to the Minikube virtual machine; Istio recommends 16 GB of memory and 4 CPUs. Because of those hefty hardware requirements, I will be using Google Kubernetes Engine (GKE) instead of Minikube. Here are the steps to follow along:

(If you have Istio and Kubernetes set up and ready to go, jump to Part 2)

2. Set up GCP account and CLI

You will need to create a Google Cloud Platform (GCP) account. If you don’t have one, you can sign up here and receive free credits with a validity of 12 months. After signing up for an account, you will need to install the GCP SDK, which includes the gcloud CLI. We will use this to create the Kubernetes cluster. After installing the Cloud SDK, install the kubectl command-line tool by running the following command:

gcloud components install kubectl

Now that you have all the necessary tools installed, let’s dive into the fun part!

3. Create a new Kubernetes cluster

To create a cluster, you first need an existing project. The following command will create a project with a project ID of “kong-istio-demo-project”. I also threw in a display name to give it more clarity.

gcloud projects create kong-istio-demo-project --name="Kong API Gateway with Istio"

To list all your existing projects and ensure that the “kong-istio-demo-project” project was created successfully, type the following command:

gcloud projects list
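(Optional) If you’d rather not pass --project to every subsequent command, you can make the new project your gcloud default. This is a convenience step, not required for the rest of the tutorial:

```shell
# Optional: set the new project as the default so later gcloud commands
# can omit the --project flag. Note that GCP project IDs must be
# 6-30 characters (lowercase letters, digits, or hyphens).
gcloud config set project kong-istio-demo-project
```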

With a project created, you can now create a cluster of running containers on GKE:

(Optional step) – If you are unsure which zone to use when you create your cluster, run the following command to list out all the compute zones and pick one. 

gcloud compute zones list

The following command will create a Kubernetes cluster consisting of four nodes in the us-east1-b compute zone:

gcloud container clusters create kong-istio-cluster \
--cluster-version latest \
--num-nodes 4 \
--zone us-east1-b \
--project kong-istio-demo-project


You’ll see a bunch of warnings in the output, but the relevant part is at the bottom: a summary confirming that the cluster was created.

Yay! You have a cluster running with 4 nodes. Let’s get your credentials for kubectl. Using your project_id, cluster name, and compute zone, run the following command:

gcloud container clusters get-credentials kong-istio-cluster \
--zone us-east1-b \
--project kong-istio-demo-project

Lastly, you will need to grant cluster administrator (admin) permissions to the current user, since admin permissions are required to create the necessary RBAC rules for Istio:

kubectl create clusterrolebinding cluster-admin-binding \
--clusterrole=cluster-admin \
--user=$(gcloud config get-value core/account)

Now, if you get your nodes via kubectl, you should see all 4 nodes that you created on your cluster:

kubectl get nodes

4. Install Istio

To start, you will need to download Istio. You can either download it via the Istio release page or run the following command with a specific version number:

curl -L https://git.io/getLatestIstio | ISTIO_VERSION=1.2.4 sh -

Move into the Istio directory. The directory name may differ based on which version you downloaded. Since I specified 1.2.4 in the ISTIO_VERSION up above, I will be changing directory using the following command:

cd istio-1.2.4

Next, add the istioctl client to your PATH environment variable. The following command prepends the Istio client’s bin directory to your existing PATH:

export PATH=$PWD/bin:$PATH

With the Istio directory’s bin on your PATH, we can now install Istio onto the cluster that we created earlier on GKE. To do so, use kubectl apply to install all the Istio Custom Resource Definitions (CRDs) defined in the istio-1.2.4/install/kubernetes/helm/istio-init/files directory. This will create a new custom resource for each CRD object using the name and schema specified within the YAML files:

for i in install/kubernetes/helm/istio-init/files/crd*yaml; do kubectl apply -f $i; done
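Once the loop finishes, you can sanity-check that the CRDs registered with the API server. The expected count varies by Istio release (the 1.2 documentation cites 23, or 28 when cert-manager is enabled), so treat the number as a rough check:

```shell
# Count the Istio (and cert-manager) CRDs registered in the cluster.
kubectl get crds | grep 'istio.io\|certmanager.k8s.io' | wc -l
```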

Once all the custom resources are created, we can install a demo profile that enforces strict mutual TLS authentication between all clients and servers. This profile installs an Istio sidecar on all newly deployed workloads. Therefore, it is important to only use this on a fresh Kubernetes cluster where all workloads will be Istio-enabled. While this demo will not cover Istio’s permissive mode, you can read more about it here. The following command will output a ton of lines, so I won’t be including the screenshot. Run this to install the istio-demo-auth demo profile on your existing cluster:

kubectl apply -f http://bit.ly/istiomtls

Check the services within the istio-system namespace to make sure everything ran smoothly. All services should have a CLUSTER-IP except for the jaeger-agent:

kubectl get svc -n istio-system
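If some pods are still starting, you can script the health check rather than eyeballing the list. This hypothetical helper flags any istio-system pod whose status is neither Running nor Completed (one-off setup jobs legitimately finish in the Completed state):

```shell
# Exit non-zero if any istio-system pod is in an unexpected state.
kubectl get pods -n istio-system --no-headers \
  | awk '$3 != "Running" && $3 != "Completed" {print "not ready: " $1; bad=1}
         END {exit bad}'
```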

That’s it for part 1! With all your services up and running, you have successfully installed a service mesh on a Kubernetes cluster. If you decided to install your cluster locally on Minikube or use another cloud provider’s Kubernetes platform, be sure to install Istio with the strict mutual TLS demo profile.

In part 2, we will deploy the Bookinfo application, configure Kong declaratively, and visualize our mesh using Kiali.

Part 2: How to set up your Istio application with Kong and Kiali 

In part 1, we covered how to create a Kubernetes cluster and how to install Istio with a strict mTLS policy. If you’re just joining us at part 2, you do not have to follow the Google Kubernetes Engine (GKE) steps that we used in part 1. However, you do need Istio installed in a similar fashion, enforcing mutual TLS authentication between all clients and servers. If you need to catch up, follow the ‘Installing Istio’ section from part 1 of this blog or the official documentation.

This is Istio’s Bookinfo Application diagram with Kong acting as the Ingress point:

You can find more details about the application in Istio’s Bookinfo documentation. The most important aspect of this diagram is that each service has an Envoy sidecar injected alongside it. These Envoy sidecar proxies handle all of the communication between the services.

For this demo, we will be focusing on the Kong service on the left. Kong, an open source gateway that offers extensibility through plugins, excels as the ingress point for any traffic entering your mesh.

1. Installing the Bookinfo application

To start the installation process, make sure you are in the Istio installation directory. This should match the directory created during our Istio installation procedure.

Once you’re in the right directory, we need to label the namespace that will host our application. To do so, run:

kubectl label namespace default istio-injection=enabled

Labeling the namespace istio-injection=enabled is necessary; without it, the default configuration will not inject a sidecar into the pods of your namespace.
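You can confirm the label took effect with kubectl’s -L flag, which adds a column showing the value of a given label for each namespace:

```shell
# The default namespace should show 'enabled' in the ISTIO-INJECTION column.
kubectl get namespace -L istio-injection
```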

Now deploy your Bookinfo application with the following command:

kubectl apply -f samples/bookinfo/platform/kube/bookinfo.yaml

Let’s double-check our services and pods to make sure that we have it all set up correctly:

kubectl get services

You should see four new services: details, productpage, ratings, and reviews. None of them has an external IP, so we will use the Kong gateway to expose the necessary services. To check the pods, run the following command:

kubectl get pods

This command outputs useful data, so let’s take a second to understand it. If you examine the READY column, each pod has two containers running: the service and an Envoy sidecar injected alongside it. Another thing to highlight is that there are three reviews pods but only one reviews service. The Envoy sidecar will load balance traffic across the three reviews pods, which contain different versions of the service, giving us the ability to A/B test our changes. With that said, you should now be able to access your product page from within the mesh:

kubectl exec -it $(kubectl get pod -l app=ratings -o jsonpath='{.items[0].metadata.name}') -c ratings -- curl productpage:9080/productpage | grep -o "<title>.*</title>"

2. Kong DB-less with declarative configuration

To expose your services to the world, we will deploy Kong as the north-south traffic gateway. Kong 1.1 shipped with declarative configuration and DB-less mode. Declarative configuration allows you to specify the desired system state through a YAML or JSON file instead of a sequence of API calls. Using declarative config provides several key benefits: reduced complexity, increased automation, and enhanced system performance. Combined with Kubernetes’ ConfigMap feature, this means Kong can be deployed for ingress control with a single YAML file.

Here is the gist of the YAML file we will use to deploy and configure Kong:

apiVersion: v1
kind: ConfigMap
metadata:
  name: kongconfig
data:
  kong.yml: |
    _format_version: "1.1"
    services:
    - url: "http://mockbin.org/"
      routes:
      - paths:
        - "/mockbin"
      plugins:
      - name: basic-auth
    - url: "http://productpage.default.svc:9080"
      routes:
      - paths:
        - "/"
      plugins:
      - name: rate-limiting
        config:
          minute: 60
          policy: local
    consumers:
    - username: kevin
      basicauth_credentials:
      - username: kevin
        password: abc123

As shown in the ConfigMap, we will be configuring Kong with two services. The first service is a hosted webpage: mockbin.org. Since we don’t want unauthorized people accessing this site, we will lock it down using an authentication plugin. Kong’s basic-auth plugin is one of many plugins that you can use to extend the functionality of your gateway. You can find prebuilt plugins here or explore the Plugin Development Guide to build your own. The second service that will sit behind Kong is the Bookinfo product page we deployed earlier. We will use the rate-limiting plugin to lightly protect this service. Granular control on plugins gives us simplicity AND modularity. Enough talk though, let’s deploy our gateway using:

kubectl apply -f https://bit.ly/kongyaml

To check if the Kong service and pods are up and running, run:

kubectl get pods,svc --sort-by=.metadata.creationTimestamp

When the gateway is running correctly, you will see an EXTERNAL-IP on the Kong service. Let’s export that to an environment variable so we can easily reference it in the remaining steps:

KONG_IP=$(kubectl get svc kong --template="{{range .status.loadBalancer.ingress}}{{.ip}}{{end}}")
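The load balancer can take a minute to provision, so it’s worth confirming that the variable actually captured an address before moving on; if it’s empty, wait a moment and re-run the export:

```shell
# Make sure the external IP was captured before using it in later commands.
if [ -n "$KONG_IP" ]; then
  echo "Kong is reachable at http://$KONG_IP"
else
  echo "EXTERNAL-IP not assigned yet; wait a moment and re-run the export" >&2
fi
```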

Congratulations, you now have a service mesh up and running with a secure way to access it! To view the product page service’s GUI, go to http://$KONG_IP/productpage. We have a rate limit set for this service. To test the rate-limiting plugin, you can run a simple bash loop like:

while true; do curl http://$KONG_IP/productpage; done

After you hit 60 calls within a minute, as defined in our ConfigMap, you will see a 429 status telling you that you hit your limit.
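If you’d rather not scroll through pages of HTML, a small variation on the loop prints just the status codes and tallies them. Sending 70 requests should yield roughly sixty 200s followed by 429s (exact numbers depend on where the requests fall within the rate-limit window):

```shell
# Fire 70 requests and tally the HTTP status codes Kong returns.
for i in $(seq 1 70); do
  curl -s -o /dev/null -w '%{http_code}\n' "http://$KONG_IP/productpage"
done | sort | uniq -c
```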

We can also test the route to the external mockbin service. It should be inaccessible due to the basic-auth plugin we configured. Try it out by running:

curl -i http://$KONG_IP/mockbin
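The request above should come back with a 401 Unauthorized, since no credentials were supplied. To get through the basic-auth plugin, pass the consumer credentials defined in the ConfigMap (kevin / abc123); curl’s -u flag builds the Base64-encoded Authorization header for you:

```shell
# Authenticate as the 'kevin' consumer from the declarative config.
# curl sends: Authorization: Basic a2V2aW46YWJjMTIz  (base64 of "kevin:abc123")
curl -i -u kevin:abc123 "http://$KONG_IP/mockbin"
```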

To recap, we successfully installed Istio with strict mTLS, deployed an application on the mesh, and secured the mesh using Kong with one YAML file. If you want to learn more about Kong and all its various features, check out the documentation page here. We have one last step for folks who would like a visual representation.

3. Kiali to visualize it all

Kiali is a console that offers observability and service mesh configuration capabilities. During our Istio installation steps, we actually installed Kiali as part of the same YAML file. If you look at the existing services in the istio-system namespace, you should see Kiali up and running:

kubectl get svc -n istio-system

Kiali does not come configured with an external IP, so we will use port-forward to access its GUI. But before that, let’s continuously send traffic to our mesh so there is something to observe in Kiali:

while true; do curl http://$KONG_IP/productpage; done

With that up and running, open up a new terminal and port-forward the Kiali service so we can access it locally:

Please note that port-forwarding is only for demo purposes. In a production deployment, the service would be properly exposed outside the cluster and easily accessible; you should not rely on port-forward for regular operations in a production system.

kubectl -n istio-system port-forward $(kubectl -n istio-system get pod -l app=kiali -o jsonpath='{.items[0].metadata.name}') 20001:20001

Now you can access the GUI through the following URL in your web browser:

http://localhost:20001/kiali/console 

Kiali offers a lot of features; you can learn more about them in the official documentation. My favorite feature is the graph view, which lets me visualize the topology of the service mesh.

That is all I have for this walk-through. If you enjoyed the technologies used in this post, please check out their repositories since they are all open-source and would love to have more contributors! Here are their links for your convenience:

Kong: [Official Documentation] [GitHub] [Twitter]

Kubernetes: [Official Documentation] [GitHub] [Twitter]

Istio: [Official Documentation] [GitHub] [Twitter]

Envoy: [Official Documentation] [GitHub] [Twitter]

Kiali: [Official Documentation] [GitHub] [Twitter]

Thank you for following along!

The post Kong and Istio: Setting up Service Mesh on Kubernetes with Kiali for Observability  appeared first on KongHQ.

