
Canary Deployment in 5 Minutes with Service Mesh


Welcome to our second hands-on Kuma guide! The first one walked you through securing your application with mTLS using Kuma. This guide walks you through Kuma’s new L4 traffic routing rules, which allow you to easily implement blue/green deployments and canary releases. In short, Kuma now alleviates the stress of deploying new versions and/or features into your service mesh. Let’s take a look at how to achieve this in our sample application:

Start Kubernetes and Marketplace Application

To start, you need a Kubernetes cluster with at least 4GB of memory. We’ve tested Kuma on Kubernetes v1.13.0 – v1.16.x, so use anything older than v1.13.0 with caution. In this tutorial, we’ll be using v1.15.4 on minikube, but feel free to run this in a cluster of your choice.

$ minikube start --cpus 2 --memory 4096 --kubernetes-version v1.15.4
😄  minikube v1.4.0 on Darwin 10.14.6
🔥  Creating virtualbox VM (CPUs=2, Memory=4096MB, Disk=20000MB) ...
🐳  Preparing Kubernetes v1.15.4 on Docker 18.09.9 ...
🚜  Pulling images ...
🚀  Launching Kubernetes ...
⌛  Waiting for: apiserver proxy etcd scheduler controller dns
🏄  Done! kubectl is now configured to use "minikube"

When running on Kubernetes, Kuma stores all of its state and configuration in the underlying Kubernetes API server, so no external data store is required.

With your Kubernetes cluster up and running, we can bring up a demo application built for Kuma. Deploy the marketplace application by running:

$ kubectl apply -f http://bit.ly/kuma101
namespace/kuma-demo created
serviceaccount/elasticsearch created
service/elasticsearch created
replicationcontroller/es created
deployment.apps/redis-master created
service/redis created
service/backend created
deployment.apps/kuma-demo-backend-v0 created
deployment.apps/kuma-demo-backend-v1 created
deployment.apps/kuma-demo-backend-v2 created
configmap/demo-app-config created
service/frontend created
deployment.apps/kuma-demo-app created

This will deploy our demo marketplace application split across four pods. The first pod is an Elasticsearch service that stores all the items in our marketplace. The second pod is the Vue front-end application that will give us a visual page to interact with. The third pod is our Node API server, which is in charge of interacting with the two databases. Lastly, we have the Redis service that stores reviews for each item. Verify that the pods are up and running by inspecting the kuma-demo namespace:

$ kubectl get pods -n kuma-demo
NAME                                       READY    STATUS      RESTARTS      AGE
es-87mgm                                   1/1      Running        0          91s
kuma-demo-app-7f799bbfdf-7bk2x             2/2      Running        0          91s
kuma-demo-backend-v0-6548b88bf8-46z6n      1/1      Running        0          91s
redis-master-6d4cf995c5-d4kc6              1/1      Running        0          91s

With the application running, port-forward the sample application to access the front-end UI at http://localhost:8080:

$ kubectl port-forward ${KUMA_DEMO_APP_POD_NAME} -n kuma-demo 8080:80
Forwarding from 127.0.0.1:8080 -> 80
Forwarding from [::1]:8080 -> 80
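
The port-forward command above references a ${KUMA_DEMO_APP_POD_NAME} variable. One way to populate it is sketched below; it assumes the front-end pod carries the app=kuma-demo-frontend label that the benchmark alias later in this guide also relies on, so adjust the selector if your labels differ:

$ export KUMA_DEMO_APP_POD_NAME=$(kubectl get pods -n kuma-demo -l app=kuma-demo-frontend -o jsonpath='{.items[0].metadata.name}')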

Now that you can visualize the application, play around with it! This is what you just created:

The only difference is that this diagram includes the v1 and v2 deployments of our back-end API. If you inspect the pods in the kuma-demo namespace again, you will only find a lonely v0, but don’t worry, the deployments for v1 and v2 are already included for you. Before we scale those deployments, let’s add Kuma.

Download Kuma

To start, we need to download the latest version of Kuma. You can find installation procedures for different platforms on our official documentation. This guide was written on macOS, so it uses the Darwin image:

$ wget https://kong.bintray.com/kuma/kuma-0.3.0-darwin-amd64.tar.gz
--2019-12-09 11:25:49--  https://kong.bintray.com/kuma/kuma-0.3.0-darwin-amd64.tar.gz
Resolving kong.bintray.com (kong.bintray.com)... 54.149.67.138, 34.215.12.119
Connecting to kong.bintray.com (kong.bintray.com)|54.149.67.138|:443... connected.
HTTP request sent, awaiting response... 302
Location: https://akamai.bintray.com/3a/3afc187b8e3daa912648fcbe16f0aa9c2eb90b4b0df4f0a5d47d74ae426371b1?__gda__=exp=1575920269~hmac=0d7c9af597660ab1036b3d50bef98fc68dfa0b832e2005d25e1628ae92c6621e&response-content-disposition=attachment%3Bfilename%3D%22kuma-0.3.0-darwin-amd64.tar.gz%22&response-content-type=application%2Fgzip&requestInfo=U2FsdGVkX1-CBTtYNUxxbm2yT4muZ0ig1ICnD2XOqJI7BobZ4DB_RouzRRsn3NBrSFjF_IqjN9wzbGk28ZcFS_mD79NCyZ0V0XxawLL8UvY5D8h-QQdfKTeRUpLUqOKI&response-X-Checksum-Sha1=6df196169311c66a544eccfdd73931b6f3b83593&response-X-Checksum-Sha2=3afc187b8e3daa912648fcbe16f0aa9c2eb90b4b0df4f0a5d47d74ae426371b1 [following]
--2019-12-09 11:25:49--  https://akamai.bintray.com/3a/3afc187b8e3daa912648fcbe16f0aa9c2eb90b4b0df4f0a5d47d74ae426371b1?__gda__=exp=1575920269~hmac=0d7c9af597660ab1036b3d50bef98fc68dfa0b832e2005d25e1628ae92c6621e&response-content-disposition=attachment%3Bfilename%3D%22kuma-0.3.0-darwin-amd64.tar.gz%22&response-content-type=application%2Fgzip&requestInfo=U2FsdGVkX1-CBTtYNUxxbm2yT4muZ0ig1ICnD2XOqJI7BobZ4DB_RouzRRsn3NBrSFjF_IqjN9wzbGk28ZcFS_mD79NCyZ0V0XxawLL8UvY5D8h-QQdfKTeRUpLUqOKI&response-X-Checksum-Sha1=6df196169311c66a544eccfdd73931b6f3b83593&response-X-Checksum-Sha2=3afc187b8e3daa912648fcbe16f0aa9c2eb90b4b0df4f0a5d47d74ae426371b1
Resolving akamai.bintray.com (akamai.bintray.com)... 184.27.29.177
Connecting to akamai.bintray.com (akamai.bintray.com)|184.27.29.177|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 38017379 (36M) [application/gzip]
Saving to: ‘kuma-0.3.0-darwin-amd64.tar.gz’

kuma-0.3.0-darwin-amd64.tar.gz      100%[================================================================>]  36.26M  4.38MB/s    in 8.8s

2019-12-09 11:25:59 (4.13 MB/s) - ‘kuma-0.3.0-darwin-amd64.tar.gz’ saved [38017379/38017379]

Next, let’s unbundle the files to get the following components:

$ tar xvzf kuma-0.3.0-darwin-amd64.tar.gz
x ./
x ./conf/
x ./conf/kuma-cp.conf
x ./bin/
x ./bin/kuma-tcp-echo
x ./bin/kuma-dp
x ./bin/kumactl
x ./bin/kuma-cp
x ./bin/envoy
x ./NOTICE
x ./README
x ./LICENSE

Lastly, go into the ./bin directory where the Kuma components will be:

$ cd bin && ls
envoy   kuma-cp   kuma-dp   kuma-tcp-echo kumactl

Install Kuma

With Kuma downloaded, let’s utilize kumactl to install Kuma on our cluster. The kumactl executable is a very important component in your journey with Kuma, so be sure to read more about it here. Run the following command to install Kuma onto our Kubernetes cluster:

$ ./kumactl install control-plane | kubectl apply -f -
namespace/kuma-system created
secret/kuma-admission-server-tls-cert created
secret/kuma-injector-tls-cert created
secret/kuma-sds-tls-cert created
configmap/kuma-control-plane-config created
configmap/kuma-injector-config created
serviceaccount/kuma-control-plane created
customresourcedefinition.apiextensions.k8s.io/dataplaneinsights.kuma.io created
customresourcedefinition.apiextensions.k8s.io/dataplanes.kuma.io created
customresourcedefinition.apiextensions.k8s.io/meshes.kuma.io created
customresourcedefinition.apiextensions.k8s.io/proxytemplates.kuma.io created
customresourcedefinition.apiextensions.k8s.io/trafficlogs.kuma.io created
customresourcedefinition.apiextensions.k8s.io/trafficpermissions.kuma.io created
customresourcedefinition.apiextensions.k8s.io/trafficroutes.kuma.io created
clusterrole.rbac.authorization.k8s.io/kuma:control-plane created
clusterrolebinding.rbac.authorization.k8s.io/kuma:control-plane created
role.rbac.authorization.k8s.io/kuma:control-plane created
rolebinding.rbac.authorization.k8s.io/kuma:control-plane created
service/kuma-injector created
service/kuma-control-plane created
deployment.apps/kuma-control-plane created
deployment.apps/kuma-injector created
mutatingwebhookconfiguration.admissionregistration.k8s.io/kuma-admission-mutating-webhook-configuration created
mutatingwebhookconfiguration.admissionregistration.k8s.io/kuma-injector-webhook-configuration created
validatingwebhookconfiguration.admissionregistration.k8s.io/kuma-validating-webhook-configuration created

When deploying on Kubernetes, you change Kuma’s state by leveraging its CRDs, so we will use kubectl for the rest of the demo. To start, let’s check that the pods are up and running within the kuma-system namespace:

$ kubectl get pods -n kuma-system
NAME                                  READY   STATUS    RESTARTS   AGE
kuma-control-plane-7bcc56c869-lzw9t   1/1     Running   0          70s
kuma-injector-9c96cddc8-745r7         1/1     Running   0          70s

When running on Kubernetes, Kuma requires no external dependencies, since it leverages the underlying Kubernetes API server to store its configuration. As you can see above, a kuma-injector service also starts in order to automatically inject sidecar data plane proxies without human intervention. Data plane proxies are injected into namespaces that include the following label:

kuma.io/sidecar-injection: enabled
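
If you are adding Kuma to a namespace of your own, a command along these lines would apply that label (the kuma-demo namespace created by the demo manifest may already carry it, so this step is only needed for namespaces you create yourself):

$ kubectl label namespace kuma-demo kuma.io/sidecar-injection=enabled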

Now that our control plane and injector are running, let’s delete the existing kuma-demo pods so they restart. This gives the injector a chance to add the sidecar proxy to each pod.

$ kubectl delete pods --all -n kuma-demo
pod "es-87mgm" deleted
pod "kuma-demo-app-7f799bbfdf-7bk2x" deleted
pod "kuma-demo-backend-v0-6548b88bf8-46z6n" deleted
pod "redis-master-6d4cf995c5-d4kc6" deleted

Check that the pods are up and running again with an additional container. The additional container is the Envoy sidecar proxy that Kuma is injecting into each pod.

$ kubectl get pods -n kuma-demo
NAME                                    READY    STATUS     RESTARTS    AGE
es-jxzfp                                2/2      Running    0           43s
kuma-demo-app-7f799bbfdf-p5gjq          3/3      Running    0           43s
kuma-demo-backend-v0-6548b88bf8-8sbzn   2/2      Running    0           43s
redis-master-6d4cf995c5-42hlc           2/2      Running    0           42s

Now if we port-forward our marketplace application again, I challenge you to spot the difference.

$ kubectl port-forward ${KUMA_DEMO_APP_POD_NAME} -n kuma-demo 8080:80
Forwarding from 127.0.0.1:8080 -> 80
Forwarding from [::1]:8080 -> 80

A-ha! Couldn’t find a thing, right? Well, that is because Kuma doesn’t require a change to your application’s code in order to be used. The only change is that Envoy now handles all the traffic between the services. Kuma implements a pragmatic approach that is very different from the first-generation control planes:

  • It runs with low operational overhead across the entire organization
  • It supports every platform
  • It’s easy to use while relying on a solid networking foundation delivered by Envoy – and we see it in action right here!

Canary Deployment

With the mesh up and running, let’s start expanding our application with brand new features. Our current marketplace application has no sales. With the holiday season upon us, the engineering team worked hard to develop v1 and v2 versions of the Kuma marketplace to support flash sales. The backend-v1 service will always have one item on sale, and the backend-v2 service will always have two items on sale. To start, scale up the deployments of v1 and v2 like so:

$ kubectl scale deployment kuma-demo-backend-v1 -n kuma-demo --replicas=1
deployment.extensions/kuma-demo-backend-v1 scaled

and

$ kubectl scale deployment kuma-demo-backend-v2 -n kuma-demo --replicas=1
deployment.extensions/kuma-demo-backend-v2 scaled

Now if we check our pods again, you will see three backend services:

$ kubectl get pods -n kuma-demo
NAME                                       READY   STATUS      RESTARTS    AGE
es-jxzfp                                   2/2     Running      0          9m16s
kuma-demo-app-7f799bbfdf-p5gjq             3/3     Running      0          9m16s
kuma-demo-backend-v0-6548b88bf8-8sbzn      2/2     Running      0          9m16s
kuma-demo-backend-v1-894bcd4bc-p7xz8       2/2     Running      0          20s
kuma-demo-backend-v2-dffb4bffd-48z67       2/2     Running      0          11s
redis-master-6d4cf995c5-42hlc              2/2     Running      0          9m15s

With the new versions up and running, use the new TrafficRoute policy to slowly roll out users to our flash-sale capability. This is also known as canary deployment: a pattern for rolling out new releases to a subset of users or servers. By deploying the change to a small subset of users, we can test its stability and make sure we don’t go broke by introducing too many sales at once.

First, define the following alias:

$ alias benchmark='echo "NUM_REQ NUM_SPECIAL_OFFERS"; kubectl -n kuma-demo exec $( kubectl -n kuma-demo get pods -l app=kuma-demo-frontend -o=jsonpath="{.items[0].metadata.name}" ) -c kuma-fe -- sh -c '"'"'for i in `seq 1 100`; do curl -s http://backend:3001/items?q | jq -c ".[] | select(._source.specialOffer == true)" | wc -l ; done | sort | uniq -c | sort -k2n'"'"''

This alias sends 100 requests from the front-end app to the back-end API and counts the number of special offers in each response. It then groups the requests by the number of special offers. Here is an example of the output before we configure any traffic routing:

$ benchmark
NUM_REQ    NUM_SPECIAL_OFFERS
34                     0
33                     1
33                     2

The traffic is distributed roughly equally because we have not set any traffic routing. Let’s change that! Here is what we need to achieve:

We can achieve that with the following policy:

$ cat <<EOF | kubectl apply -f -
apiVersion: kuma.io/v1alpha1
kind: TrafficRoute
metadata:
  name: frontend-to-backend
  namespace: kuma-demo
mesh: default
spec:
  sources:
  - match:
      service: frontend.kuma-demo.svc:80
  destinations:
  - match:
      service: backend.kuma-demo.svc:3001
  conf:
  # it is NOT a percentage. just a positive weight
  - weight: 80
    destination:
      service: backend.kuma-demo.svc:3001
      version: v0
  # we're NOT checking if total of all weights is 100
  - weight: 20
    destination:
      service: backend.kuma-demo.svc:3001
      version: v1
  # 0 means no traffic will be sent there
  - weight: 0
    destination:
      service: backend.kuma-demo.svc:3001
      version: v2
EOF

trafficroute.kuma.io/frontend-to-backend created

That is all that is necessary! With one simple policy and the weight you apply to each matching service, you can slowly roll out the v1 and v2 versions of your application. Let’s run the benchmark alias one more time to see the TrafficRoute policy in action:

$ benchmark
NUM_REQ    NUM_SPECIAL_OFFERS
83                     0
17                     1

We do not see any results for two special offers because v2 is configured with a weight of 0. Once we’re comfortable that rolling out v1 won’t bankrupt us, we can slowly apply weight to v2 (a sketch of that next step appears at the end of this section). You can also see the rollout in action on the webpage. One last time, port-forward the application front end like so:

$ kubectl port-forward ${KUMA_DEMO_APP_POD_NAME} -n kuma-demo 8080:80
Forwarding from 127.0.0.1:8080 -> 80
Forwarding from [::1]:8080 -> 80

Two out of roughly 10 requests to our webpage will have the sale feature enabled.
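
When we are ready to send some traffic to the two-offer version, the same TrafficRoute policy can simply be reapplied with a non-zero weight for v2. The weights below are only illustrative; pick whatever split matches your rollout plan:

$ cat <<EOF | kubectl apply -f -
apiVersion: kuma.io/v1alpha1
kind: TrafficRoute
metadata:
  name: frontend-to-backend
  namespace: kuma-demo
mesh: default
spec:
  sources:
  - match:
      service: frontend.kuma-demo.svc:80
  destinations:
  - match:
      service: backend.kuma-demo.svc:3001
  conf:
  - weight: 60
    destination:
      service: backend.kuma-demo.svc:3001
      version: v0
  - weight: 20
    destination:
      service: backend.kuma-demo.svc:3001
      version: v1
  - weight: 20
    destination:
      service: backend.kuma-demo.svc:3001
      version: v2
EOF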

That’s all! This was a really quick run-through, so make sure you check out Kuma’s official webpage or repository to find out about more features. You can also join our Slack channel to chat with us live and meet the community! Lastly, sign up for the Kuma newsletter below to stay up-to-date as we push out more features that will make this the best service mesh solution for you.

The post Canary Deployment in 5 Minutes with Service Mesh appeared first on KongHQ.


Infographic: What Technology Leaders Need to Know About Digital Innovation in 2020


We surveyed 200 senior technology decision makers in the U.S., including CIOs, CTOs, VPs of IT, IT directors/architects and software engineers/developers from organizations across a range of industries, with respondents evenly divided between publicly traded and privately held companies that had 1,000 or more employees.

We learned that the stakes for increasing innovation velocity are both high and immediate, as 71 percent of technology leaders believe that organizations would be out of business within six years if they fail to keep pace with innovation in their industry. For public companies, the urgency associated with increasing innovation speed is even more pressing, as 39 percent reported that they believe organizations would be out of business in less than three years if they failed to keep pace with innovation.

Check out the following infographic to see how your organization competes on business innovation today:

The post Infographic: What Technology Leaders Need to Know About Digital Innovation in 2020 appeared first on KongHQ.

2019 Year in Review – Thank You to Our Customers, Community and Partners


As I look back at 2019 and all the amazing things we’ve achieved as a team, it was a big year for us at Kong. We’ve grown tremendously in just the past year alone, doubling to more than 160 employees, reaching 170 Kong Enterprise customers, hitting 100 million downloads of our open source Kong Gateway and running more than 1 million instances of Kong per month across the world. Last but not least, we also open sourced Kuma as a universal service mesh, and we added Insomnia, the #1 OSS API testing platform with over 1,000 customers, to our Kong family.

Let’s take a glance back at our greatest milestones of the year:

  • Raised $43 million in Series C funding – This was a culmination of many years of work as we build towards a nervous system for the cloud. This important funding round enables us to continue to grow Kong on all fronts, including our open source community, enterprise platform, as well as our expansion into new, global markets.

 

Global Kong Community

  • Named on Forbes’ Next Billion-Dollar Startups List and named a Visionary in Gartner’s 2019 Magic Quadrant for Full Lifecycle API Management – We’ve come a long way from the days of running the company out of a tiny garage in Milan and living off of rice and beans. This momentous recognition by both Forbes and Gartner is a testament to all the hard work the Kong team has put into building a product that solves real-world problems for developers across the world. Thank you to all of our employees, customers, community, partners and investors for always believing in Kong and our vision.

 

 

  • Hosted our second annual Kong Summit – We kicked off our inaugural Kong Summit last year, which brought together our open source users, enterprise customers and industry leaders to shape the future of software. This year, it was even bigger and better. The event drew 500 attendees from 250 organizations across 28 countries and 200 cities. Next year, we promise an even bigger and better Kong Summit 2020!

 

Kong Summit 2019

 

  • Acquired Insomnia –  As part of our journey to build the service control platform for the future and provide full lifecycle service management, we acquired Insomnia, the leading open source API testing tool. Insomnia is the foundation for our new Kong Studio to help users build and test their APIs and microservices. 

 

Welcome Insomnia to the Kong family

 

  • Released new, exciting products and features – In addition to Kong Studio, we released a number of other new products and features for both our community users and enterprise customers alike, including Kuma, Kong Enterprise 2020, Kong Gateway 2.0 and Kong Brain and Kong Immunity. We listened to your wish lists and brought them to you in our latest product deliveries. 

 

Kuma logo

 

What a year 2019 has been. We have a lot of exciting things brewing for 2020, and we can’t wait to bring those to you in the coming year! We’re just getting started.

Wishing you Happy Holidays and a Happy New Year from all of us at Kong!

The post 2019 Year in Review – Thank You to Our Customers, Community and Partners appeared first on KongHQ.

Kong for Kubernetes 0.7 Released!


Kong for Kubernetes (Kong for K8s) is a Kubernetes Ingress Controller based on the popular Kong Gateway open source project. Kong for K8s is fully Kubernetes native and provides enhanced API management capabilities. From an architectural perspective, Kong for K8s consists of two parts: a Kubernetes controller, which manages the state of Kong for K8s ingress configuration, and the Kong Gateway, which processes and manages incoming API requests.

We are thrilled to announce the availability of this latest release of Kong for K8s! This release’s highlight features include encrypted credentials, mutual authentication using TLS, native gRPC routing, and performance improvements.

With this release, Kong for K8s now has 100% coverage of Kong Gateway’s administrative API functions. This means that all the features of the Kong Gateway can now be used natively on Kong for K8s through Kubernetes resources.

Encrypted Credentials Using Secret Resource

API access credentials can now be stored in encrypted form inside the Kubernetes datastore using the Secret resource. This provides encryption at rest for sensitive credentials. Kong’s controller reads these secrets from the Kubernetes API server and loads them into Kong.

# create the secret containing the credential and credential-type
$ kubectl create secret generic harry-apikey  \
  --from-literal=kongCredType=key-auth  \
  --from-literal=key=my-sooper-secret-key
# associate it with an existing or new KongConsumer using the
# credentials array
$ echo "apiVersion: configuration.konghq.com/v1
kind: KongConsumer
metadata:
  name: harry
username: harry
credentials:
- harry-apikey" | kubectl apply -f -
# use the API key to authenticate against a service
$ curl -i -H 'apikey: my-sooper-secret-key' $PROXY_IP/foo/status/200

 

 

 

We have also added support for validating the above secrets as they are created, using the Admission Controller that ships with Kong for K8s.

KongCredential CRD is now deprecated and will be removed in a future release. Users are encouraged to use Secrets for storing credentials.

Native gRPC Routing

gRPC traffic can now be routed via Kong for K8s natively with support for method-based routing. This can be enabled via the path field in the Ingress spec, which corresponds to the gRPC method name when the Ingress resource is annotated with gRPC as the protocol.
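
As a rough sketch of how that looks (the service name and method below are hypothetical, and the way the gRPC protocol itself is flagged varies between controller versions, so check the documentation for your release):

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: grpc-demo
  annotations:
    kubernetes.io/ingress.class: kong
    # plus whichever annotation or KongIngress override marks this route's protocol as
    # grpc/grpcs; the exact mechanism depends on your controller version
spec:
  rules:
  - http:
      paths:
      - path: /hello.HelloService/SayHello   # the gRPC method name as the path
        backend:
          serviceName: grpc-service          # hypothetical backing service
          servicePort: 9000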

 

All logging and observability plugins can be enabled on gRPC traffic to monitor and gain insights as gRPC requests are routed via Kong. We will be adding gRPC support to the wide array of authentication, traffic throttling and transformation plugins – stay tuned!

Mutual Authentication Using mTLS

The connection between Kong for K8s and Kubernetes services can now be encrypted and authenticated using mTLS. You can use this to further lock down access to your services.

You can enable this feature for all the services in Kubernetes or on a case-by-case basis.

Plugins for Combinations of Consumer and Service

Plugins can now be created for a combination of Ingress and a KongConsumer or a Service and a KongConsumer. This allows for cases where a specific client of an API needs special treatment. A good example here is rate-limiting your users based on different tiers of your services (based on your SLAs/pricing) or giving a specific customer a higher rate-limit on a specific endpoint. Simply apply the same plugins.konghq.com annotation on the resources you’d like to configure the plugin for, and the controller will figure the rest out for you. 
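
Here is a hedged sketch of that pattern: the plugin name, limit and Service name below are placeholders, and harry is the consumer created in the credentials example above.

apiVersion: configuration.konghq.com/v1
kind: KongPlugin
metadata:
  name: premium-rate-limit
plugin: rate-limiting
config:
  minute: 100
  policy: local

# annotate both resources so the plugin applies to that specific combination
$ kubectl annotate service billing-api plugins.konghq.com=premium-rate-limit
$ kubectl annotate kongconsumer harry plugins.konghq.com=premium-rate-limit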

Performance Improvements

By default, Kong for K8s now runs in in-memory mode, without a database. This means that the Kubernetes datastore is the source of truth. It also reduces the operational burden of running Kong and simplifies management and upgrades, as there is no longer a database to worry about.

The controller will also consume less memory, and the number of sync events to Kong should reduce by at least an order of magnitude, further increasing Kong’s performance.

Miscellaneous Additions

Controller Configuration Revamped

Configuration of the Kong for K8s Kubernetes controller itself can now be tweaked via both environment variables and CLI flags. Environment variables and Secrets can be used to pass sensitive information to the controller. Each flag has a corresponding environment variable (simply prefix the flag name with the CONTROLLER_ string).
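
For example, a CLI flag such as --publish-service (flag name shown for illustration; check the controller's --help output for your version) could equivalently be supplied as:

CONTROLLER_PUBLISH_SERVICE=kong/kong-proxy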

Multi-Port Services

Services with multiple ports are now supported and can be flexibly exposed to the outside world via Kong for K8s. This was a long-standing ask from the community and our enterprise users alike. Thank you @rainest for contributing this feature!

Upstream Host

The host header sent to the Kubernetes service can now be tweaked using the KongIngress resource.

For a complete list of changes and new features for this latest release of Kong for K8s, please consult the changelog document.

Compatibility

Kong for K8s works in a variety of deployments and runtimes. For a complete view of Kong for K8s compatibility, please see the compatibility document.

Getting Started!

You can try out Kong for K8s using our lab environment, available for free to all at konglabs.io/kubernetes.

You can install Kong for K8s on your Kubernetes cluster with a single command:

$ kubectl apply -f https://bit.ly/k4k8s
or
$ helm repo update
$ helm install stable/kong

Alternatively, if you want to use your own Kubernetes cluster, follow our getting started guide to get your hands dirty.

Please feel free to ask questions on our Community forum — Kong Nation — and open a GitHub issue if you happen to run into a bug.

Happy Konging!

The post Kong for Kubernetes 0.7 Released! appeared first on KongHQ.

Deploying Service Mesh on Virtual Machines in 5 Minutes


Welcome to another hands-on Kuma guide! In the first guide, I walked you through securing an application using Kuma in a Kubernetes deployment. Since Kuma is platform-agnostic, I wanted to write a follow-up blog post on how to secure your application if you are not running in Kubernetes. This capability to run anywhere differentiates Kuma from many other service mesh solutions in the market.

To learn how Kuma works on universal mode, we’re going to cover three things today:

  1. Deploying the Kuma sample app on virtual machines using Vagrant
  2. Enabling mTLS to secure traffic between our applications’ components
  3. Using granular traffic permission policies to route traffic

For folks who followed along with our first blog post, this structure may look familiar. That is intentional: deploying and configuring Kuma was designed from the start to be easy regardless of where you run it.

So without further ado, let’s accomplish the three tasks listed above in a matter of minutes!

Install Dependencies and Sample Application

We’ll be using Vagrant to deploy our application and demonstrate Kuma’s capabilities in universal mode. Please follow Vagrant’s installation guide to have it set up correctly before proceeding. I’ll be using Vagrant 2.2.6. You can run the Vagrant CLI to verify that the installation was successful.

$ vagrant --version
Vagrant 2.2.6

We need to download the latest version of Kuma next. You can find installation procedures for different platforms on our official documentation. This guide was written on macOS, so it uses the Darwin image:

$ wget https://kong.bintray.com/kuma/kuma-0.3.1-darwin-amd64.tar.gz
--2019-12-23 17:41:17--  https://kong.bintray.com/kuma/kuma-0.3.1-darwin-amd64.tar.gz
Resolving kong.bintray.com (kong.bintray.com)... 34.214.69.171, 52.37.116.40
Connecting to kong.bintray.com (kong.bintray.com)|34.214.69.171|:443... connected.
HTTP request sent, awaiting response... 302
Location: https://akamai.bintray.com/dc/dc68a6fabafa80119b185e5cf607113777037534e2261c6d12130ce89d41f05f?__gda__=exp=1577094798~hmac=3da51d0ab42a474af3f7a0540da84292c8d8847d26ba2fb6d46a9eaa5fa11cef&response-content-disposition=attachment%3Bfilename%3D%22kuma-0.3.1-darwin-amd64.tar.gz%22&response-content-type=application%2Fgzip&requestInfo=U2FsdGVkX19S5RFWMJpgC2Jq8Nb62ngFxAWy887xQ7hZCuvg3K657NQxtRMkyyudrYW4qzaAK9ulloh7NMWapyYCpPw6z12CKzQvQS0CdmhcjkyJ7SVjEk7s5SI-VvOs&response-X-Checksum-Sha1=625e852b137a620980fcddb839ece0856bd06c1f&response-X-Checksum-Sha2=dc68a6fabafa80119b185e5cf607113777037534e2261c6d12130ce89d41f05f [following]
--2019-12-23 17:41:18--  https://akamai.bintray.com/dc/dc68a6fabafa80119b185e5cf607113777037534e2261c6d12130ce89d41f05f?__gda__=exp=1577094798~hmac=3da51d0ab42a474af3f7a0540da84292c8d8847d26ba2fb6d46a9eaa5fa11cef&response-content-disposition=attachment%3Bfilename%3D%22kuma-0.3.1-darwin-amd64.tar.gz%22&response-content-type=application%2Fgzip&requestInfo=U2FsdGVkX19S5RFWMJpgC2Jq8Nb62ngFxAWy887xQ7hZCuvg3K657NQxtRMkyyudrYW4qzaAK9ulloh7NMWapyYCpPw6z12CKzQvQS0CdmhcjkyJ7SVjEk7s5SI-VvOs&response-X-Checksum-Sha1=625e852b137a620980fcddb839ece0856bd06c1f&response-X-Checksum-Sha2=dc68a6fabafa80119b185e5cf607113777037534e2261c6d12130ce89d41f05f
Resolving akamai.bintray.com (akamai.bintray.com)... 173.222.181.233
Connecting to akamai.bintray.com (akamai.bintray.com)|173.222.181.233|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 42443207 (40M) [application/gzip]
Saving to: ‘kuma-0.3.1-darwin-amd64.tar.gz’

kuma-0.3.1-darwin-amd64.t 100%[====================================>]  40.48M   251KB/s    in 2m 10s

2019-12-23 17:43:30 (318 KB/s) - ‘kuma-0.3.1-darwin-amd64.tar.gz’ saved [42443207/42443207]

Next, let’s unbundle the files to get the following components:

$ tar xvzf kuma-0.3.1-darwin-amd64.tar.gz
x ./LICENSE
x ./NOTICE
x ./bin/
x ./bin/kuma-tcp-echo
x ./bin/kumactl
x ./bin/kuma-dp
x ./bin/envoy
x ./bin/kuma-cp
x ./README
x ./conf/
x ./conf/kuma-cp.conf

Lastly, go into the ./bin directory where the Kuma components will be:

$ cd bin && ls
envoy   kuma-cp   kuma-dp kuma-tcp-echo kumactl

The kumactl application is a CLI client for the underlying HTTP API of Kuma. Therefore, you can access the state of Kuma by leveraging the API directly. You can configure kumactl to point to any remote kuma-cp instance. This is the reason we wanted to have Kuma downloaded on our local machine. Add this directory to your path so you can call kumactl from anywhere:

$ export PATH=$PATH:$(pwd)

Now, to verify it is working, check that Kuma’s version correlates with the package you downloaded by using kumactl:

$ kumactl version
0.3.1

Download Kuma Marketplace 

We built a sample application that will help illustrate how Kuma works. The sample application is an online marketplace where you can query fashion items and check the reviews left by users. You can find it on our GitHub repository and clone it onto your local machine:

$ git clone https://github.com/Kong/kuma-demo.git
Cloning into 'kuma-demo'...
remote: Enumerating objects: 5, done.
remote: Counting objects: 100% (5/5), done.
remote: Compressing objects: 100% (5/5), done.
remote: Total 1069 (delta 1), reused 1 (delta 0), pack-reused 1064
Receiving objects: 100% (1069/1069), 879.95 KiB | 163.00 KiB/s, done.
Resolving deltas: 100% (539/539), done.

Navigate into the Vagrant directory in the Kuma demo to find the following files:

$ cd kuma-demo/vagrant/ && ls
README.md	backend		control-plane	 frontend
Vagrantfile	common		elastic		 redis

We’ve built out a Vagrantfile that will automatically deploy the sample application across five virtual machines (VMs):

  1. Kuma-control-plane: Machine that houses the Kuma control plane 
  2. Redis: Redis database that holds the reviews for all our items
  3. Elastic: Elasticsearch database that holds all our items’ metadata
  4. Back End: Node API that exposes an endpoint for front end to query databases
  5. Front End: Vue application that allows users to visualize items and reviews that they search for

If you wish to inspect the scripts used to deploy each component of our application, feel free to dig into their respective directories. 

Deploy Kuma Marketplace 

With the Vagrantfile already built out, all you have to run is the following command to get the application up and running:

$ vagrant up
Bringing machine 'kuma-control-plane' up with 'virtualbox' provider...
Bringing machine 'redis' up with 'virtualbox' provider...
Bringing machine 'elastic' up with 'virtualbox' provider...
Bringing machine 'backend' up with 'virtualbox' provider...
Bringing machine 'frontend' up with 'virtualbox' provider...

This step may take a while depending on your machine and internet speed because we are spinning up five virtual machines asynchronously. This would be a great time to inspect the files and scripts included to get a better understanding of each component. 

Once the vagrant up command is complete, run vagrant status to check that we have the five machines listed above:

$ vagrant status
Current machine states:

kuma-cp           running (virtualbox)
redis             running (virtualbox)
elastic           running (virtualbox)
backend           running (virtualbox)
frontend          running (virtualbox)

This environment represents multiple VMs. The VMs are all listed above with their current state. For more information about a specific VM, run `vagrant status NAME`.

Lastly, before we can start shopping on the Kuma marketplace, port-forward the frontend machine. Run:

$ vagrant ssh frontend -- -L 127.0.0.1:8080:127.0.0.1:8080

Now you can access the application by going to http://localhost:8080. All the traffic between the machines is routed through Kuma’s data planes.

Powerful Policies

With the mesh up and running, let’s start improving our application by fixing a few issues it has. First, we have no encryption between our services, which leaves us vulnerable to attack. Kuma can easily fix this by utilizing the mutual TLS policy. This policy enables automatic encrypted mTLS traffic for all the services in a mesh. Kuma ships with a built-in CA (Certificate Authority), which is initialized with an auto-generated root certificate. The root certificate is unique for every mesh and is used to sign identity certificates for every data plane. You can provide your own CA if you wish, thanks to Jakub’s new feature in the latest 0.3.1 release.

By default, mTLS is not enabled. You can enable mTLS by updating the mesh policy using kumactl. However, since kumactl is on our local machine and the Kuma control-plane is in a virtual machine, we need to configure kumactl to point to a remote kuma-cp instance. Configure your local kumactl to point to our Vagrant machine by running:

$ kumactl config control-planes add --name=vagrant --address=http://192.168.33.10:5681
added Control Plane "vagrant"
switched active Control Plane to "vagrant"

To check if kumactl was properly configured, use the kumactl inspect dataplanes command to list the components in our demo application:

$ ./kumactl inspect dataplanes
MESH      NAME       TAGS                         STATUS     LAST CONNECTED AGO   LAST UPDATED AGO   TOTAL UPDATES   TOTAL ERRORS
default   frontend   service=frontend             Online     3m49s                2m36s              4               0
default   backend    service=backend version=v0   Online     2m36s                25s                5               0
default   elastic    service=elastic              Online     1m29s                1m28s              2               0
default   redis      service=redis                Online     1m01s                1m01s              2               0

With kumactl configured properly, we can finally update the mesh resource and turn on mTLS in our mesh:

$ cat <<EOF | kumactl apply -f -
type: Mesh
name: default
mtls:
  enabled: true
  ca:
    builtin: {}
EOF

With mTLS enabled, traffic is restricted by default. Remember to apply a Traffic Permission policy to permit connections between data planes. If you try to access the application right now at http://localhost:8080, the application will no longer work, since traffic is encrypted and you do not have any permissions. Traffic Permissions allow you to determine security rules for services that consume other services via their Tags. It is a very useful policy to increase security in the mesh and compliance in the organization. You can determine what source services are allowed to consume specific destination services like so:

$ cat <<EOF | kumactl apply -f -
type: TrafficPermission
name: permission-all
mesh: default
sources:
  - match:
      service: '*'
destinations:
  - match:
      service: '*'
EOF

In this case, our rule states that any source service has permission to route traffic to any destination service. So, if we now access our marketplace at http://localhost:8080, the demo application will look normal again. However, now all the traffic between Elasticsearch, Node and Redis is encrypted!

But wait! Hypothetically, some other marketplace is disgruntled by our awesome webpage and starts spamming all our products with fake reviews. What could we do? First, we have to delete the existing permission that allows traffic between all services:

$ kumactl delete traffic-permission permission-all
deleted TrafficPermission "permission-all"

With two granular Traffic Permission policies, we can easily lock down our Redis service:

$ cat <<EOF | kumactl apply -f - 
type: TrafficPermission
name: frontend-to-backend
mesh: default
sources:
  - match:
      service: 'frontend'
destinations:
  - match:
      service: 'backend'
EOF

and

$ cat <<EOF | kumactl apply -f - 
type: TrafficPermission
name: backend-to-elasticsearch
mesh: default
sources:
  - match:
      service: 'backend'
destinations:
  - match:
      service: 'elastic'
EOF

In these two manifests, I’m changing the original Traffic Permission policy to have a very specific source and destination. The first one gives the frontend permission to send traffic to the backend. The second allows the backend to communicate with elastic to query items. Without granting the redis service any permissions, the rest of the application can no longer communicate with it. Traffic Permission helps us place redis in solitary confinement until we find out who is targeting us with falsified reviews.

That’s all! This was a really quick run-through, so make sure you check out Kuma’s official webpage or repository to find out about more features. You can also join our Slack channel to chat with us live and meet the community! Lastly, sign up for the Kuma newsletter below to stay up to date as we push out more features that will make this the best service mesh solution for you.

The post Deploying Service Mesh on Virtual Machines in 5 Minutes appeared first on KongHQ.

Microservices: An Enterprise Software Sea Change


As some of you already know, I have been following the shift towards microservices adoption for a while now. For the longest time, when the industry thought of the transition to microservices, it thought of smaller companies leading the charge. However, I’ve seen large enterprises get value from microservices as well, and I saw this trickle begin in 2016, which is why I am excited to see that it has now achieved mainstream adoption.

Did you know that 61 percent of large enterprises are already in production with microservices? This finding came out of our recently released 2020 Digital Innovation Benchmark. This research, completed in partnership with the research agency Vanson Bourne, surveyed 200 U.S. technology leaders across industries on the state of digital innovation in their organizations. The research findings underscored that microservices are widely adopted across large enterprises and revealed the major reasons technology leaders are adopting microservices. Realizing that staying competitive requires keeping up with the pace of innovation, technology leaders are prioritizing moving to microservices in order to improve security, development speed, speed to integrate new technologies, infrastructure flexibility and improved collaboration across teams. 

While many organizations still have critical applications running based on monolithic architectures, this new research showcased how widespread the adoption of microservices has become across large enterprises. In the survey, 35 percent of technology leaders at large enterprises reported they are using over 100 microservices and 61 percent reported they are using over 50 microservices. Now that at least partial adoption of microservices represents the clear majority in enterprises, the question for technology leaders has increasingly shifted from “why should we move to microservices?” to “how can we ensure success in our microservices journey?”

Another surprising finding from the research was that while improving developer velocity is one of the top reasons technology leaders choose to move to microservices, the number one reason is actually to improve security. This may seem counter-intuitive, since some believe that microservices increase the attack surface. However, respondents felt less secure with a monolithic architecture, where all of an application’s code sits in one place and is subject to attack.

Technology leaders cited the following reasons for transitioning to microservices: improvements to security (56 percent), increased development speed (55 percent), increased speed of integrating new technologies (53 percent), improved infrastructure flexibility (53 percent) and improved collaboration across teams (46 percent). To unpack the reason these benefits rose to the top for adopting microservices, it helps to understand the context of the challenges many organizations are facing when it comes to running on legacy monolithic architectures.

 

Development speed is slow because pushing an update to one part of the application requires testing and pushing to production the entire codebase. The time to integrate with new technologies is lengthy because monolithic applications limit you to a certain technology stack. For example, if you were working with a Java Virtual Machine (JVM), then components of the application written in non-JVM languages would not work in your monolithic architecture, and your application would quickly become obsolete. Finally, a monolithic architecture is challenging for developers to work with efficiently. Because the code base is unwieldy in size, new team members are challenged to understand previous implementations, and as the team grows, there is no way to effectively segment contributions by functional area of the application. 

Also, the survey let respondents self-select their own definition of microservices. While there are many purist definitions of microservices, what I have seen over the years is that when customers move to a decentralized architecture, they call it moving to "microservices." How granular their services are varies widely; for example, some of what they call microservices might be what Gartner calls "miniservices."

We can see how the benefits driving microservices adoption are really all ways to address the pain of working with legacy monolithic architectures. Based on the findings from this research, the takeaway for technology leaders is that getting started on a microservices journey offers critical benefits for improving efficiency and reducing costs of development, as well as staying competitive in the market. Rather than simply focusing on delivering "more" microservices, the key throughout the transition is to keep the focus on pragmatic digital transformation. Microservices adoption will drive value to the business to the extent that new patterns make development teams more efficient, applications more secure, and systems more able to evolve and adapt to today’s rapid pace of innovation.

The post Microservices: An Enterprise Software Sea Change appeared first on KongHQ.

Kuma 0.3.2 Released with Kong Gateway Support, Prometheus Metrics and GUI Improvements!


Happy New Year! To kick off 2020, we’re proud to announce Kuma’s 0.3.2 release, which includes some long-anticipated features. The most prominent one is Kong Gateway support for ingress into your Kuma mesh. Another exciting, widely requested feature is Prometheus support, which will enable you to scrape your applications’ metrics. Lastly, we announced the Kuma GUI in the last release; thanks to a lot of early feedback, we’ve added many exciting improvements to it in this release.

You can take a look at the full changelog here.

Kong Gateway Support

The Dataplane can now operate in Gateway mode, which enables Kuma to integrate with existing API gateways like Kong. Kong would handle all external, client-facing routing, policies, documentation and metrics, while load-balancing and service-to-service policy enforcement are performed through Kuma. This flexible architecture allows management of east-west traffic with Kuma while benefiting from the capabilities of Kong for all north-south traffic.

Here is an example of how to deploy the data plane alongside your Kong gateway:

type: Dataplane
mesh: default
name: kong
networking:
  gateway:
    tags:
      service: kong
  outbound:
  - interface: :33033
    service: frontend

You define outbound services as you normally would, but replace the inbound section with the gateway tag; the data plane will then operate in gateway mode. Without gateway mode, external clients would have to be provided with the certificates that are dynamically generated for communication between services within the mesh.

Prometheus Metrics

Prometheus, which collects and indexes monitoring data, has been a widely requested integration. It will give you more visibility into what is happening within the Kuma mesh.

To enable Prometheus metrics, just update the mesh resource with the new metrics section:

type: Mesh
name: default
mtls:
  enabled: true
  ca:
    builtin: {}
metrics:
  prometheus: {}

The only difference between the universal and Kubernetes deployments is the new kuma-prometheus-sd process included in the Kuma package. When you deploy in universal mode, kuma-prometheus-sd maintains a connection to the Kuma control plane and keeps a list of data planes for Prometheus to scrape metrics from. On Kubernetes, you do not need kuma-prometheus-sd, so it takes slightly less time to get up and running. Full examples for both deployment modes can be found in our demo marketplace repository, or you can read more about it in the documentation.
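
To give a feel for how the pieces fit together in universal mode, here is a hedged sketch: kuma-prometheus-sd writes a file-based discovery list that a standard Prometheus file_sd_configs block can consume. The flags, address and file path below are assumptions, so consult the documentation for the exact invocation on your version:

# run alongside Prometheus to keep the discovery file up to date (flags are illustrative)
$ kuma-prometheus-sd run --cp-address=grpc://localhost:5676 --output-file=/var/run/kuma.file_sd.json

# prometheus.yml
scrape_configs:
- job_name: kuma-dataplanes
  file_sd_configs:
  - files:
    - /var/run/kuma.file_sd.json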

GUI

When launching our latest iteration of the Kuma GUI, you’ll be welcomed with some nice, new features and fixes that make the experience smoother, while also giving you a helpful at-a-glance overview of all of your data planes, meshes and policies. Here are some things you can expect:

Wizard

The Wizard will help you get started with the GUI, while also providing some helpful links to documentation and the next steps. It will detect things like when you’ve switched from Universal to Kubernetes and vice-versa, and it will also provide the status of your data planes.

Improved Mesh Overview

We’ve improved the mesh overview page to provide you with general statistics about your mesh, such as how many dataplanes are associated with it, as well as Health Checks, Traffic Routes, etc. We’ve also added a convenient way to copy entities to your clipboard in YAML format so that you can quickly store them for reference or use in the command line.

Improved Data Tables

We’ve improved the data tables on all overviews to provide features like color-coded statuses, human-readable connection and update times, as well as a simple way to view and copy your entities in YAML format with ease.

Under the Hood

Aside from the aforementioned features and changes, we’ve also made a ton of under-the-hood improvements for things like error handling, a tighter connection between Kuma itself and the GUI, as well as various bug fixes and small changes that were brought to our attention by the amazing open source community.

We hope that you find these changes and features useful! If you would like to read more about this release, you can refer to this pull request for more details.

Announcements

We’ll be hosting our next online Meetup on February 11, and we hope to see you there. Until then, hope you enjoy the new features and let us know what you think! If you have any other feature suggestions, please let us know so we can work together to build it. You can find us on the community Slack channel or through the GitHub repository.

The post Kuma 0.3.2 Released with Kong Gateway Support, Prometheus Metrics and GUI Improvements! appeared first on KongHQ.

Kong Gateway 2.0 GA!


After a full year of development since our last major open source release, we are proud to announce the next chapter of our flagship open-source API gateway — Kong Gateway 2.0 is Generally Available!

With this release, Kong will become more operationally agnostic for large-scale deployment across on-premises and multi-cloud environments, thanks to the new Hybrid Mode. In addition, plugin development also becomes more language agnostic, thanks to the new Golang PDK support.

Some great release highlights include:

Hybrid Mode Deployment

Also known as Control Plane/Data Plane separation (CP/DP), Hybrid mode allows Kong proxies to be deployed efficiently and securely anywhere, and the entire cluster can be then controlled from a single point (the Control Plane). In this mode, the Data Plane nodes do not connect to the database; instead, their configuration is managed and pushed by the Control Plane as necessary. This feature significantly improves the security and performance of large Kong clusters, while reducing operational costs. To get started with Hybrid Mode deployment, refer to the Hybrid Mode documentation.
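
To make this concrete, here is a minimal sketch of the kong.conf settings involved; the hostname and certificate paths are placeholders, and the shared certificate pair can be generated with the kong hybrid gen_cert helper described in the Hybrid Mode documentation:

# control plane node
role = control_plane
cluster_cert = /etc/kong/cluster.crt
cluster_cert_key = /etc/kong/cluster.key

# data plane node: no database, configuration is pushed from the control plane
role = data_plane
database = off
cluster_control_plane = control-plane.example.com:8005
cluster_cert = /etc/kong/cluster.crt
cluster_cert_key = /etc/kong/cluster.key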

Golang PDK Support

Lua is the de-facto language used for writing Kong plugins. While Lua has good performance and is very embeddable, it falls short on developer experience, third-party libraries and general popularity. During Kong Summit 2019, we revealed Go plugin support for Kong, allowing developers to write their plugins entirely in Go.

To help developers get started, we also prepared the Go Plugin Development Guide and Go Plugin Development Kit (PDK) documentation. We can’t wait to see what developers are going to build with them!

ACME (Let’s Encrypt) Support

In the last few years, we have seen a strong industry push toward end-to-end encryption. HTTPS encryption for your services is nowadays considered a commodity instead of a luxury, thanks to services that provide the ability to automatically provision and manage TLS certificates. We are proud to announce that in Kong Gateway 2.0, end-to-end HTTPS is easier than ever, thanks to the new Automatic Certificate Management Environment (ACME) v2 protocol support and Let’s Encrypt integration. Simply enable the plugin, and Kong takes care of the entire certificate management lifecycle. Interested in trying it out? Please check out the ACME plugin documentation.
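
As a small, hedged example of what enabling it could look like through the Admin API (the email and domain are placeholders; see the plugin documentation for the full set of options and exact field names):

$ curl -X POST http://localhost:8001/plugins \
    -H 'Content-Type: application/json' \
    -d '{"name":"acme","config":{"account_email":"you@example.com","tos_accepted":true,"domains":["example.com"]}}'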

Other Improvements

Please note that the list above only touches on a fraction of the features/fixes in this version! For a complete list of changes, we encourage you to also read the 2.0.0 Changelog.

Prometheus Plugin Performance

Thanks to some clever tweaks our engineering team has been doing, the Prometheus plugin can now run almost 2x faster (in terms of requests per second).

Extended Support for NGINX Directive Injections

New injection contexts were added for both http and stream subsystems, reducing the need to write custom NGINX templates and facilitating better upgrade compatibility for the user.

Kubernetes Compatibility

The latest Kong for Kubernetes 0.7 release is compatible with Kong 2.0 out of the box. Get started with Kong for Kubernetes today!

Update Path

With the release of 2.0, we have also released Kong Gateway 1.5, which acts as the bridge between older versions of Kong Gateway and the new 2.x series with the API entity to Service/Route migration tool.

Upgrading directly from 0.x versions of Kong Gateway to 2.x is not supported. Rather, those users should upgrade from 0.x to 1.5 first before upgrading to 2.0.0.

We are officially dropping support for 0.x versions from now on.

What’s Next

It goes without saying how much work has gone into this release from both Kong employees and our awesome community contributors. Kong Gateway 2.0.0 is available today for download, and we encourage everyone to give it a try. As always, please share your feedback using Kong with us on Kong Nation and check out our future community events to connect with us.

Happy Konging!

The post Kong Gateway 2.0 GA! appeared first on KongHQ.


URL Rewriting in Kong


A common requirement for APIs is to rewrite the published URL to a different URL for the upstream service’s endpoint. For example, due to legacy reasons, your upstream endpoint may have a base URI like /api/oilers/. However, you want your publicly accessible API endpoint to now be named /titans/api.

Simple Rewriting

When you configure a route with a path, the part of the URI after the path will automatically be appended to the path value of the upstream service. In this example, we’ll use httpbin.org/anything as our mock service and HTTPie as our command-line client.

Create a service with path /anything/api/oilers:

http POST :8001/services name=runandshoot4ever host=httpbin.org path=/anything/api/oilers

Create a route with path /titans/api:

http POST :8001/services/runandshoot4ever/routes name=tannehill paths:='["/titans/api"]'

Make an API call to /titans/api:

http :8000/titans/api/players/search\?q=henry

Response (see how the URL is translated with a new base path of /anything/api/oilers):

{
    "args": {
        "q": "henry"
    },
    "data": "",
    "files": {},
    "form": {},
    "headers": {
        "Accept": "*/*",
        "Accept-Encoding": "gzip, deflate",
        "Host": "httpbin.org",
        "User-Agent": "HTTPie/2.0.0",
        "X-Forwarded-Host": "localhost"
    },
    "json": null,
    "method": "GET",
    "origin": "172.23.0.1, 74.66.140.21, 172.23.0.1",
    "url": "https://localhost/anything/api/oilers/players/search?q=henry"
}

More Complex Rewriting

Kong can also handle more complex URL rewriting cases by using regex capture groups in our route path and the Request Transformer Advanced plugin (bundled with Kong Enterprise). Instead of simply replacing the base URI from /api/oilers to /titans/api, our requirement is to replace /api/<function>/oilers to /titans/api/<function>.

Setup

Create a new service:

http POST :8001/services name=warrenmoon4ever host=httpbin.org

Capture URI String

Next, we must configure a route that captures the parts of the URI string that need to be preserved. Kong supports specifying regex capture groups in the paths config parameter, which can be referenced by plugins: https://docs.konghq.com/1.4.x/proxy/#capturing-groups

In our example, we need to set the paths parameter of our route to parse out the function name and capture the rest of the path separately.

Create a route with paths with capturing groups:

http POST :8001/services/warrenmoon4ever/routes name=mariotta paths:='["/titans/api/(?<function>\\\S+?)/(?<path>\\\S+)"]'

Configure Transform Using the Request Transformer Plugin

The next step is to configure the Request Transformer Advanced plugin on the route. Using our example, we need to set the config.replace.uri parameter to inject the function name in between /api and /oilers, then append the rest of the URI.

Create a plugin configuration on the route (replace route ID value with the actual ID from the route create above):

http --form POST :8001/routes/02e1ca00-be63-4f21-80bb-b8d5189525e3/plugins name=request-transformer-advanced config.replace.uri="/anything/api/\$(uri_captures['function'])/oilers/\$(uri_captures['path'])"

Now, when we make a request to /titans/api/search/players?q=henry, the request will be translated to /api/search/oilers/players?q=henry.

Make an API call to /titans/api:

http :8000/titans/api/search/players\?q=henry

Response (see how the URL is translated):

{
    "args": {
        "q": "henry"
    },
    "data": "",
    "files": {},
    "form": {},
    "headers": {
        "Accept": "*/*",
        "Accept-Encoding": "gzip, deflate",
        "Host": "httpbin.org",
        "User-Agent": "HTTPie/2.0.0",
        "X-Forwarded-Host": "localhost"
    },
    "json": null,
    "method": "GET",
    "origin": "172.23.0.1, 74.66.140.21, 172.23.0.1",
    "url": "https://localhost/anything/api/search/oilers/players?q=henry"
}

Conclusion

This is just scratching the surface of the powerful capabilities Kong offers for helping enterprises manage APIs. Learn more at konghq.com!

The post URL Rewriting in Kong appeared first on KongHQ.

Infrastructure as Code without Infrastructure


Infrastructure as Code (IaC) is a powerful process – replacing manual, error prone and expensive operations with automated, consistent and quick provisioning of resources. In many cases, IaC is dependent on existing infrastructure, typically including a configuration management system. Chef, Puppet and SaltStack are all commonly referenced players in this market, each requiring resources to be in place and having their own difficulties in setup and maintenance. As we move to microservices and container orchestration, our need for resource-intensive and complex tooling to provision infrastructure and application dependencies diminishes. So how do you solve the chicken-and-egg problem of standing up IaC without relying on other infrastructure?

Our solution in Amazon Web Services (AWS) was Terraform, cloud-init, Minimal Ubuntu and Ansible. Terraform was an easy choice given our existing use and expertise with the product for provisioning in AWS. We were building Amazon Machine Images (AMIs) using Packer with a minimal set of software packages to bootstrap systems for dynamic configuration based on their role by our configuration management system. However, every change, no matter how subtle it was, required building a new AMI. It also didn’t save much on boot time since an agent would configure the system dynamically at first boot-up. We were also spending a lot of time maintaining a configuration management system and scripts, as well as keeping up on Domain Specific Languages (DSLs).

Enter Minimal Ubuntu – images designed for automating deployment at scale with an optimized kernel and boot process. Needing only to install a small set of packages and most of our tooling at the orchestration layer, we are still able to provision a system that is ready for production traffic in under four minutes. The simplicity of these images also provides greater security and ease of administration.

Cloud-init is installed on Minimal Ubuntu, which allows further configuration of the system using user data. Given cloud-init's sparse documentation and its lack of the more sophisticated features found in other configuration management systems, we were still looking for something else. Ansible became an attractive option for several reasons: a simple yet powerful approach to automation, readable configuration and templating using YAML and Jinja2 rather than a DSL, and the community contributions and industry adoption behind it.

Most of the documentation for Ansible, though, focuses on the use of a master server that pushes configuration to clients. This doesn’t solve the problem of IaC without relying on infrastructure. Also, maintaining dynamic inventories of clients and pushing configurations to systems in auto scaling groups that need to be ready for production traffic as soon as possible did not make sense. Ansible has a concept of local playbooks, but there isn’t much light shed on the power and simplicity of it. This blog post will walk you through combining these tools to build a bastion host configured with Duo Multi-Factor Authentication (MFA) for SSH and a framework to easily add additional host roles. For brevity, other configuration of our bastion hosts is left out. You will want to perform further tuning and hardening depending on your environment.

Starting with Terraform (note all examples use version 0.12.x) at the account/IAM level, you will need an EC2 instance profile with access to an S3 bucket where the Ansible playbook tarball will be stored. Terraform for creating the S3 bucket is left to the reader – it is straightforward, and many examples exist for it. It is recommended to enable encryption at rest on the S3 bucket, as sensitive information may be required to bootstrap a host.
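
For reference, a minimal sketch of such an encrypted bucket might look like the following (the bucket name is the placeholder used throughout this post, and the syntax assumes the AWS provider of the Terraform 0.12 era):

resource "aws_s3_bucket" "ansible" {
  # Placeholder bucket name; match it to whatever you reference in user data
  bucket = "s3-bucket-name"
  acl    = "private"

  server_side_encryption_configuration {
    rule {
      apply_server_side_encryption_by_default {
        sse_algorithm = "aws:kms"
      }
    }
  }
}

With the bucket in place, define the IAM policy document, role and instance profile that grant the bastion host read access to it: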

data "aws_iam_policy_document" "ansible" {
  statement {
    actions = [
      "s3:ListBucket",
      "s3:GetObject",
    ]
    resources = ["${aws_s3_bucket.ansible.arn}/*"]
  }
}

resource "aws_iam_policy" "ansible" { 
  name        = "ansible"
  description = "Access to the Ansible S3 bucket"
  policy      = data.aws_iam_policy_document.ansible.json
}

data "aws_iam_policy_document" "bastion" {
  statement {
    actions = ["sts:AssumeRole"]

    principals {
      type        = "Service"
      identifiers = ["ec2.amazonaws.com"]
    }
  }
}

resource "aws_iam_role" "bastion" {
  name               = "bastion"
  assume_role_policy = data.aws_iam_policy_document.bastion.json
}

resource "aws_iam_role_policy_attachment" "bastion" {
  role       = aws_iam_role.bastion.name
  policy_arn = aws_iam_policy.ansible.arn
}

resource "aws_iam_instance_profile" "bastion" {
  name = aws_iam_role.bastion.name
  role = aws_iam_role.bastion.name
}

With a policy to read the S3 bucket and an instance profile the bastion host can assume, define the bastion host EC2 instance:

resource "aws_instance" "main" {
  ami           = var.ami
  instance_type = var.instance_type

  user_data = data.template_cloudinit_config.main.rendered
  key_name  = var.ssh_key

  iam_instance_profile = "bastion"

  subnet_id                   = var.subnet_id
  vpc_security_group_ids      = [var.vpc_security_group_ids]
  associate_public_ip_address = true
}

Most variables are self-explanatory. For this exercise, we will bring attention to the ami and user_data values. The ami value can be found by selecting the version of Ubuntu and the Amazon region for your instance here: https://wiki.ubuntu.com/Minimal.

The user_data value defines the cloud-init configuration:

data "aws_region" "current" {}

data "template_cloudinit_config" "main" {
  gzip          = true
  base64_encode = true

  part {
    filename     = "init.cfg"
    content_type = "text/cloud-config"
    content      = templatefile("${path.module}/cloud-init.cfg", {}) 
  }

  part {
    content_type = "text/x-shellscript"
    content      = templatefile(
      "${path.module}/cloud-init.sh.tmpl",
      {
        ROLE   = var.role
        ENV    = var.environment
        VPC    = var.vpc
        REGION = data.aws_region.current.name
      }
    )
  }
}

The cloud-init.cfg specifies a minimal configuration – installing the AWS CLI tool and Ansible to handle the rest of the process:

# Package configuration
apt:
  primary:
    - arches: [default]

apt_update: true
package_upgrade: true
packages:
  - ansible
  - awscli

write_files:
  - path: /etc/apt/apt.conf.d/00InstallRecommends
    owner: root:root
    permissions: '0644'
    content: |
      APT::Install-Recommends "false";

The shell script part of the cloud-init configuration downloads the Ansible playbook tarball and executes it. Variables for the environment (dev, stage, prod), VPC name and AWS region are passed in to customize the configuration based on those settings. The role variable is passed as a tag to define what role the host will play, somewhat correlating to Ansible roles (explained later):

#!/bin/sh
# HOME is not defined for cloud-init
# Ansible, and likely others, don't like that
HOME=/root 
export HOME

cd /opt
aws s3 cp s3://s3-bucket-name/ansible.tar.gz .
if [ $? != 0 ]; then
 echo "Error: Cannot download from S3, check instance profile."
 exit 1
fi

tar zxf ansible.tar.gz && rm -f ansible.tar.gz
ansible-playbook --connection local --inventory 127.0.0.1, \
  --extra-vars env=${ENV} --extra-vars vpc=${VPC} --extra-vars region=${REGION} \
  --tags ${ROLE} ansible/site.yml

The Ansible tarball is created from another Git repository with the Ansible playbook and uploaded to the secure S3 bucket. The directory layout is as follows:

ansible/
    roles/                      # Ansible roles, see https://docs.ansible.com/ansible/latest/user_guide/playbooks_reuse_roles.html
        common/                 
            tasks/
                main.yml        # Applied to all systems
        bastion/
            tasks/
                main.yml        # Bastion host "role"
        duo/
            files/
                common_auth     # /etc/pam.d/common_auth 
                sshd            # /etc/pam.d/sshd
                sshd_config     # /etc/ssh/sshd_config
            tasks/
                main.yml
    site.yml                    # Master playbook
    vars/                       # Variable configuration
        [environment]/          # i.e. dev, stage, prod
            main.yml            # Variables specific to an environment
            [vpc]/              # VPC name, i.e. dev-ops
                main.yml        # Variables specific to an environment and VPC 
                [region]/       # i.e. us-west-2
                    main.yml    # Variables specific to the environment, VPC and region
        main.yml                # Global variables

Ansible roles provide convention over configuration to simplify units of work. We break out each package into a role so they can be reused. We leverage Ansible tags to associate Ansible roles with our concept of a host “role,” i.e., bastion. This keeps site.yml simple and clear:

- hosts: localhost
  connection: local

  roles:
    - { role: common, tags: ["always"] }
    - { role: bastion, tags: ["bastion"] }

always is a special tag, specifying that a task should always run regardless of the tags specified at execution. It provides the mechanism to run common tasks regardless of the host “role.” For this example, we will only use roles/common/tasks/main.yml to load our variable hierarchy, but it could include tasks for creating admin users, installing default packages, etc.:

---
- name: Include site variables
  include_vars: vars/main.yml

- name: Include environment variables
  include_vars: vars/{{ env }}/main.yml

- name: Include VPC variables
  include_vars: vars/{{ env }}/{{ vpc }}/main.yml

- name: Include region variables
  include_vars: vars/{{ env }}/{{ vpc }}/{{ region }}/main.yml

This provides a powerful and flexible framework for defining variables at different levels. Site-level variables apply to all hosts. Variables that might differ between dev and prod (i.e., logging host) can be defined at the environment level in vars/dev/main.yml and vars/prod/main.yml. main.yml must exist for each environment, VPC and AWS region, even if its content is just “---”. In this example, we will define one site-level variable in vars/main.yml:

---
aws:
  secrets: s3-bucket-name/secrets

This defines the variable aws.secrets, an S3 bucket and path for downloading files that need to be secured outside of the Ansible playbook Git repository. This value can be customized per environment, VPC and/or region by moving it down the variable hierarchy. Moving on to the bastion role, roles/bastion/tasks/main.yml disables selective TCP ACKs and includes the Ansible roles for software packages, which for this example are limited to duo:

---
- name: Disable selective acks (CVE-2019-11477)
  sysctl:
    name: net.ipv4.tcp_sack
    value: '0'
    state: present

- include_role:
    name: "{{ item }}"
  with_items:
    - duo

Lastly, we have duo in roles/duo/tasks/main.yml:

---
- name: Add key
  apt_key:
    data: |
      -----BEGIN PGP PUBLIC KEY BLOCK-----

      mQINBF25pcQBEADBIWPx6DJ+EItyXif/zgDZjsuwi/4pbd5NBHVpdsK2piteY1h4
      QG0CtfmCrwPRz/q5RlCNKLZ8HJiMrURGGwbts9BM57aVmn7C/OsPo3oOiOOpiiUA
      qFNhuTTQQ812uO+2sULt3/UdRiKUquUgNpdp6SNkNjg5lvKCOWIhKp8l3JbvI572
      0DnSuLGP9pSyQulz7B6vsCQHq1Ib7AArxk88+9QeUmhVKbXHf0K3vaQkmm7KaveK
      fgyxJfNh6ilFBTZq8yxY362vP18goEdOl+2pK0If2r4w1gjEXVLyGaYHKqr7vVC7
      tGYDP6ibzXNDhTNbvN+XZOlk85ttu77TRiKglcuOz3rAY6OybxUo12MYGv3vntgl
      OD6XaL9+dYPVW8R5886Nq0W88wRNUa0jpY1tvO1h7j4OFvSk2xDQml8ugvbvBTZC
      XuzCx//m8UyF617nlUxY4gMs/GiWs7PlJ/Bjd8bNTaATMCdD3s3RX9XUEAUMo+LM
      k4hM+EaWoG++Pym/009fgdI0AAZa7igNTPcLdvAZTGVJ1K7V/QlKIz3RwTfozUtR
      a3/1XfS2Zllj/Nzmx7FI1aWyTScyfl44jfjpnPc1BvUfCmuV/28pKCYsJ3yPtN4i
      ccQKERQF9vUEnCZ67DmksFsrKrn9n38jd02or8ZzRRDx7NJOILhhhlzKTwARAQAB
      tDJEdW8gU2VjdXJpdHkgUGFja2FnZSBTaWduaW5nIDxkZXZAZHVvc2VjdXJpdHku
      Y29tPokCVAQTAQgAPhYhBN8aYLVu/i3IyoqaYQHvmOkQRI/bBQJduaXEAhsDBQkJ
      ZgGABQsJCAcCBhUKCQgLAgQWAgMBAh4BAheAAAoJEAHvmOkQRI/bDjkP/0dUsJgx
      qTqPDgKimmLUM6xL3cuWcoIWe7kh0GVqBEY9wPJzL9aaggkURjbwtQcHiNBV2Mk5
      M6IaoIVQiSGHSob7il1sCSDRb7zWcZPqZqB9QtRBOgZcOLGxW62+UIGXPCQVvaCy
      FDxsmwMnPRYz4rS3X5zK1c2Fo9D59rQmTjj71UGVliNNq8GMH64I/goa4pEryk2t
      Jeby82la6gP+BPHFNc5hi/em3xxdgO16WKfe8uN0NmRZvOUnpThHbUzDjfj3uEf9
      /W6XcJfIyqdGMLpzrjkdaWr1CZg4XrF5q0c0hDzxrshNV04iFreg4ds5HsboNnPX
      M5HN60R1zeqBu+tVADSKYLpGOCczZTOUzYhlJfNOVSd83DE9vaeVTZQYgOK/1oVY
      WYPnr0spZ06FWb/1+irSWXhdDU8tAzRO3IFq4M8eBEkCrOt17dWDfgOcXN/I34IU
      I6RiAUVNc3W1aEB6r9WDPCDr9WBxrwMlceNrwFSJl9InrVIfJG54E6iYR/vBxyzS
      hjFd5PJxNQIhTTOfA4YplDoviSHw/3Ci64OOPmh06Z5zfd+HOH7E+I9SRk2GunPU
      odvnpWELquFzA9OwDLYUlUQC7cYHnGCzzHKHcxFuQKmH1hnAuBvq5H6OzlAgjuA7
      UX82Bl6FLsm7gJUmHq7xCM3zRG3ZKus/JeUJuQINBF25pcQBEADp3Z1ovqhzfFM6
      Oe/0zme9ynaGGcpxktncuvcpirsI5CYjqHWi11g1dG0HXANGDn2+kHrrJOwO6fVQ
      c4d1iImKoTR6ZmYd/Ae7TthsmjZXe3P/s15JpEMhsvwkSH6FOkrCkhgaNArZr6yn
      kb0s2zcJ69h7gz1rmnjmCsDjM9C/Pa99th4CBb2yo8Xq9mSjQKVCHcfFdrdGOMJc
      YtZCJz6Uno4CQSRPAq0l5lxM+HXhkUdPNdoxSUV4IIwZnxxHhXA+WSMC0Px04nVi
      XVDlJ+Vb5Nhf8bbJaiQXoHGFJY7u8+6QruoPQNmKkD7QwVdoEkd8Pb/6Q7ih5lIn
      1ksjIG1G+N8AhkOZCm0aBz/uzMBZV8lswjNW1JEcXafe3QOnS3MxqHUXUzLN3tMB
      bG6me8ENbOMlBGCQa22NVf/C1KGL9nZts0Ljz9eNQTT1mxRvuU4twScomFVXh5ZF
      0LWQdlxVueaebeXQBAtdROyd2wKGO+KMuJXD6Brqh2fCx+kK+zh7cFHLeS4rKLGt
      7h9yI+lbmFArzVIEuiTYx5pspzYrclbiHOGYBKhV/b7iJ66zSxy2FryPzfeKBWzX
      C3kpVQ3RrhMyykUvfMfyx9+gbrCvwz7BoYD0EguPfnYYB2V7A5kM/ljwVRKY2mDL
      UKQ8v12pgegp/TeWvkKJ4AGr06lzNwARAQABiQI8BBgBCAAmFiEE3xpgtW7+LcjK
      ipphAe+Y6RBEj9sFAl25pcQCGwwFCQlmAYAACgkQAe+Y6RBEj9so2w//TF2rdbgD
      boKM2odifrEWv11HQzVpu/xU7gN0vvha11P5qj7V9yGgy31kRCtPZ9Xp/UfLAIaw
      vP85MydLY5/eUa7pRf207Hle5jl4L6g/Uuv41v9NRyOdldXzFmk0XvJfJ9ptXPTR
      0E3m5t0IK2XzVhQhgCgyMb27Eh+kPbegnQV8hRNk8PVpFQNjDh6lDv7aFxmjt76x
      kPUTsFriC3NRDMdun5es+74NMfTuNLF8EPcVfByR+tQuKPCXaSzux02arYFEkVdT
      w9EOrNNagWX9wbI5tB80XNd1BcHCV1QOA9XCeQmcvBN5ww7nOTOwDjAMIqyoJX9D
      l9l/AFJa0PH3xENICpHmapS+LJfgKD1MNfQNKl8nTLRINOXnH/7L4q7LFn024nZa
      B4MvOMq7P3Hs2/iZlfIumk2AeeMemR2G72erPa6zx6I2dyp16rdi0mYHS0m4T+ud
      Ye7pnNwU7EqkuUYcd1oj9txfKFYj0nlOhEzSnLnshr3LsBVtzJi42RZc10rIWZbZ
      bXkcoJgoBo0P+QACduNVZ072OqDquv/OpU3UVszwotMV+IANJ9cX3bXKBCjevfTP
      VsXFL+WQaiGz2OcyD2uFLtLeHCDuZ6oL3Rw9pgT4E5ZXKYj4xd1qXhQea0sQu+8I
      5oRM/JeaPuYz7lH+PhzcqVqpKaWDL0Q9ixs=
      =EHJ5
      -----END PGP PUBLIC KEY BLOCK-----

- name: Add repository
  apt_repository:
    repo: deb [arch=amd64] https://pkg.duosecurity.com/Ubuntu bionic main
    state: present
    filename: duo

- name: Install
  apt:
    name: duo-unix
    state: present
    update_cache: yes

- name: Download configuration
  command: "aws s3 cp s3://{{ aws.secrets }}/{{ role_name }}/pam_duo.conf /etc/duo/pam_duo.conf"

- name: Secure configuration
  file:
    path: /etc/duo/pam_duo.conf
    owner: root
    group: root
    mode: 0600

- name: Configure PAM common
  copy:
    src: common_auth
    dest: /etc/pam.d/common_auth
    owner: root
    group: root
    mode: 0644

- name: Configure PAM sshd
  copy:
    src: sshd
    dest: /etc/pam.d/sshd
    owner: root
    group: root
    mode: 0644

- name: Configure sshd
  copy:
    src: sshd_config
    dest: /etc/ssh/sshd_config
    owner: root
    group: root
    mode: 0644

- name: Restart sshd
  service:
    name: sshd
    state: restarted
    daemon_reload: yes

The duo configuration file contains secrets, so it is downloaded from the encrypted S3 bucket in the secrets/bastion path:

; This file is managed by Ansible, do not modify locally
[duo]
ikey = [redacted]
skey = [redacted]
host = [redacted]

failmode = safe

; Send command for Duo Push authentication
pushinfo = yes
autopush = yes

The remaining files (common_auth, sshd and sshd_config from roles/duo/files/, shown below in that order) are kept in version control for auditing:

# This file is managed by Ansible, do not modify locally

# /etc/pam.d/common-auth - authentication settings common to all services
#
# This file is included from other service-specific PAM config files,
# and should contain a list of the authentication modules that define
# the central authentication scheme for use on the system
# (e.g., /etc/shadow, LDAP, Kerberos, etc.).  The default is to use the
# traditional Unix authentication mechanisms.
#
# As of pam 1.0.1-6, this file is managed by pam-auth-update by default.
# To take advantage of this, it is recommended that you configure any
# local modules either before or after the default block, and use
# pam-auth-update to manage selection of other modules.  See
# pam-auth-update(8) for details.

# here are the per-package modules (the "Primary" block)
#auth	[success=1 default=ignore]	pam_unix.so nullok_secure
auth  requisite pam_unix.so nullok_secure
auth  [success=1 default=ignore] /lib64/security/pam_duo.so
# here's the fallback if no module succeeds
auth	requisite			pam_deny.so
# prime the stack with a positive return value if there isn't one already;
# this avoids us returning an error just because nothing sets a success code
# since the modules above will each just jump around
auth	required			pam_permit.so
# and here are more per-package modules (the "Additional" block)
auth	optional			pam_cap.so 
# end of pam-auth-update config

# This file is managed by Ansible, do not modify locally

# PAM configuration for the Secure Shell service

# Standard Un*x authentication.
#@include common-auth

# Disallow non-root logins when /etc/nologin exists.
account    required     pam_nologin.so

# Uncomment and edit /etc/security/access.conf if you need to set complex
# access limits that are hard to express in sshd_config.
# account  required     pam_access.so

# Standard Un*x authorization.
@include common-account

# SELinux needs to be the first session rule.  This ensures that any
# lingering context has been cleared.  Without this it is possible that a
# module could execute code in the wrong domain.
session [success=ok ignore=ignore module_unknown=ignore default=bad]        pam_selinux.so close

# Set the loginuid process attribute.
session    required     pam_loginuid.so

# Create a new session keyring.
session    optional     pam_keyinit.so force revoke

# Standard Un*x session setup and teardown.
@include common-session

# Set up user limits from /etc/security/limits.conf.
session    required     pam_limits.so

# Read environment variables from /etc/environment and
# /etc/security/pam_env.conf.
session    required     pam_env.so # [1]
# In Debian 4.0 (etch), locale-related environment variables were moved to
# /etc/default/locale, so read that as well.
session    required     pam_env.so user_readenv=1 envfile=/etc/default/locale

# SELinux needs to intervene at login time to ensure that the process starts
# in the proper default security context.  Only sessions which are intended
# to run in the user's context should be run after this.
session [success=ok ignore=ignore module_unknown=ignore default=bad]        pam_selinux.so open

# Standard Un*x password updating.
@include common-password

# Duo MFA authentication
auth  [success=1 default=ignore] /lib64/security/pam_duo.so
auth  requisite pam_deny.so
auth  required pam_permit.so

# This file is managed by Ansible, do not modify locally

# This is the sshd server system-wide configuration file.  See
# sshd_config(5) for more information.

# This sshd was compiled with PATH=/usr/bin:/bin:/usr/sbin:/sbin

Protocol 2
StrictModes yes

AuthenticationMethods publickey,keyboard-interactive
PubkeyAuthentication yes
ChallengeResponseAuthentication yes
PasswordAuthentication no

X11Forwarding yes

AcceptEnv LANG LC_*

Subsystem sftp /usr/lib/openssh/sftp-server

UsePAM yes
UseDNS no

Create the Ansible playbook tarball that extracts to ansible/ and upload it to the S3 bucket specified in Terraform. Apply the Terraform for IAM first, and then continue to the EC2 instances. Minutes later, you will be able to log in to your bastion hosts with Duo MFA.
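
For reference, the packaging and upload step described above might look like this (the bucket name is the same placeholder used in the earlier examples):

# Package the playbook so it extracts to ansible/ and upload it to the bucket
tar czf ansible.tar.gz ansible/
aws s3 cp ansible.tar.gz s3://s3-bucket-name/ansible.tar.gz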

You now have a framework that is easy to extend – adding software packages to existing host roles, customizing configuration, and adding new host roles that consume software packages. A special thanks to @_p0pr0ck5_ for his work on the variable hierarchy loading in Ansible.

The post Infrastructure as Code without Infrastructure appeared first on KongHQ.

The Difference Between API Gateways and Service Mesh


Why API Management and Service Mesh are Complementary Patterns for Different Use Cases

Note: The goal of this piece is to provide a cheat sheet that guides the architect in deciding when to use an API gateway and when to use a service mesh. Please skip to the “Cheat sheet” section at the end if you want to jump straight into it.

For many years, API Management (APIM) — and the adoption of API gateways — was the primary technology used to implement modern API use cases both inside and outside the data center. API gateway technology has evolved a lot in the past decade, capturing bigger and more comprehensive use cases in what the industry calls “full lifecycle API management.” It’s not just the runtime that connects, secures and governs our API traffic on the data plane of our requests but also a series of functionalities that enable the creation, testing, documentation, monetization, monitoring and overall exposure of our APIs in a much broader context — and target a wider set of user personas from start to finish. That is, there is a full lifecycle of creating and offering APIs as a product to users and customers, not just the management of the network runtime that allows us to expose and consume the APIs (RESTful or not).

Then around 2017, another pattern emerged from the industry: service mesh. Almost immediately, the industry failed to recognize how this pattern played with the API gateway pattern, and a big cloud of confusion started to emerge. This was in part caused by the complete lack of thought leadership of pre-existing APIM vendors that have failed to respond adequately to how service mesh complemented the existing APIM use cases. It was also in part because service mesh started to be marketed to the broader industry by the major cloud vendors (first by Google, later by Amazon and finally by Microsoft) at such a speed that the developer marketing clout of this new pattern preceded the actual mainstream user adoption, therefore creating a misperception in the industry as to what service mesh really was (developer marketing) and was not (technology implementations). It was almost like a mystical pattern that everybody spoke about but very few mastered.

Over time, the technology implementations caught up with the original vision of service mesh, and more and more users implemented the pattern and told their stories. This allows us to now have a more serious rationalization as to what is service mesh (and what it is not) and what is the role of API gateways (and APIM) in a service mesh view of the world.

Many people have already attempted to describe the differences between API gateways and service meshes, and it’s been commonly communicated that API gateways are for north-south traffic and service meshes are for east-west traffic. This is not accurate, and if anything, it underlines a fundamental misunderstanding of both patterns.

In this piece, I want to illustrate the differences between API gateways and service mesh — and when to use one or the other in a pragmatic and objective way.

API Gateways

The API gateway pattern describes an additional hop in the network that every request will have to go through in order to consume the underlying APIs. In this context, some people call the API gateway a centralized deployment.

Being on the execution path of every API request, the API gateway is a data plane that receives requests from a client and can enforce traffic and user policies before finally reverse proxying those requests to the underlying APIs. It can — and most likely will — also enforce policies on the response received from the underlying API before proxying the request back to the original client.

An API gateway can either have a built-in control plane to manage and configure what the data plane does, or both the data plane and the control plane can all be bundled together into the same process. While having a separate control plane is certainly better, some API gateway implementations were able to thrive with a DP+CP bundle in the same process because the number of API gateway nodes we would be deploying was usually of a manageable size and updates could be propagated with existing CI/CD pipelines.

The API gateway is deployed in its own instance (its own VM, host or pod) separate from the client and separate from the APIs. The deployment is therefore quite simple because it is fully separated from the rest of our system and it fully lives in its own architectural layer.

API gateways usually cover three primary API use cases for both internal and external service connectivity as well as for both north-south (outside the datacenter) and east-west (inside the datacenter) traffic.

1. APIs as a product

The first use case is about packaging the API as a product that other developers, partners or teams will consume.

The client applications that they build can initiate requests from outside of the organization (like in the case of a mobile application) or from inside the same company (like in the case of another product, perhaps built by another team or another line of business). Either way, the client applications will run outside of the scope of the product (that’s exposing the API) that they are consuming.

When offering APIs as a product, an API gateway will encapsulate common requirements that govern and manage requests originating from the client to the API services — for example, AuthN/AuthZ use cases, rate-limiting, developer on-boarding, monetization or client application governance. These are higher level use cases implemented by L7 user policies that go above and beyond the management of the underlying protocol since they govern how the users will use the API product.

The APIs exposed by an API gateway are most likely running over the HTTP protocol (i.e., REST, SOAP, GraphQL or gRPC), and the traffic can be both north-south or east-west depending if the client application runs inside or outside the data center. A mobile application will run mostly north-south traffic to the API gateway, while another product within the organization could be running east-west traffic if it’s being deployed in the same data center as the APIs it’s consuming. The direction of traffic is fundamentally irrelevant.

API gateways are also used as an abstraction layer that allow us to change the underlying APIs over time without having to necessarily update the clients consuming them. This is especially important in those scenarios where the client applications are built by developers outside of the organization that cannot be forced to update to the greatest and latest APIs every time we decide to update them. In this instance, the API gateway can be used to keep the backwards compatibility with those client applications as our underlying APIs change over time.

2. Service Connectivity

The second use case is about enforcing networking policies to connect, secure, encrypt, protect and observe the network traffic between the client and the API gateway, as well as between the API gateway and the APIs. They can be called L7 traffic policies because they operate on the underlying network traffic as opposed to governing the user experience.

Once a request is being processed by the API gateway, the gateway itself will have to then make a request to the underlying API in order to get a response (the gateway is, after all, a reverse proxy). Usually we want to secure the request via mutual TLS, log the requests, and overall protect and observe the networking communication. The gateway also acts as a load balancer and will implement features like HTTP routing, support proxying the request to different versions of our APIs (in this context, it can also enable blue/green and canary deployments use cases), as well as fault injection and so on.

The underlying APIs that we are exposing through the API gateway can be built in any architecture (monolithic or microservices) since the API gateway makes no assumption as to how they are built as long as they expose a consumable interface. Most likely the APIs are exposing an interface consumable over HTTP (i.e., REST, SOAP, GraphQL or gRPC).

3. Full Lifecycle API Management

The third use case of an API gateway is being one piece of a larger puzzle in the broader context of API management.

As we all know, managing the APIs, their users and client applications, and their traffic at runtime are only some of the many steps involved in running a successful API strategy. The APIs will have to be created, documented, and tested and mocked. Once running, the APIs will have to be monitored and observed in order to detect anomalies in their usage. Furthermore, when offering APIs as a product, the APIs will have to provide a portal for end users to register their applications, retrieve the credentials and start consuming the APIs.

This broader experience, which is end-to-end and touches various points of the API lifecycle (and most likely different personas will be responsible for different parts of the lifecycle), is called full lifecycle API management, and effectively most APIM solutions provide a bundled solution to implement all of the above concerns in one or more products that will in turn connect to the API gateway to execute policy enforcement.

Service Mesh

With service mesh, we are identifying a pattern that fundamentally improves how we build service-to-service connectivity among two or more services running in our systems. Every time a service wants to make a network request to another service (for example, a monolith consuming the database or a microservice consuming another microservice), we want to take care of that network request by making it more secure and observable, among other concerns.

Service mesh as a pattern can be applied on any architecture (i.e., monolithic or microservice-oriented) and on any platform (i.e., VMs, containers, Kubernetes).

In this regard, service mesh does not introduce new use cases, but it better implements existing use cases that we already had to manage prior to introducing service mesh. Even before implementing service mesh, the application teams were implementing traffic policies like security, observability and error handling within their applications so they could enhance the connectivity of any outbound — or inbound — network requests that their application would either make or receive. The application teams were implementing these use cases by writing more code in their services. This means that different teams would be re-implementing the same functionality over and over again — and in different programming languages, creating fragmentation and security risks for the organization in managing the networking connectivity.

Prior to service mesh, the teams are writing and maintaining code to manage the network connectivity to third-party services. Different implementations will exist to support different languages/frameworks.

 

With the service mesh pattern, we are outsourcing the network management of any inbound or outbound request made by any service (not just the ones that we build but also third-party ones that we deploy) to an out-of-process application (the proxy) that will manage every inbound and outbound network request for us. Because it lives outside of the service, it is by default portable and agnostic, able to support any service written in any language or framework. The proxy sits on the execution path of every request and is therefore a data plane process. Since one of the use cases is implementing end-to-end mTLS encryption and observability, we run one instance of the proxy alongside every service so that we can seamlessly implement those features without requiring much work from the application teams, abstracting those concerns away from them.

We run one instance of the proxy (in purple) alongside every instance of our services.

 

Because the data plane proxy will run alongside every replica of every service, some will call service mesh a decentralized deployment (as opposed to the API gateway pattern, which is a centralized deployment). Also, since we are adding extra hops in the network, we run the data plane proxy on the same machine (VM, host, pod) as the service in order to keep latency at a minimum. If the benefits of the proxy are valuable enough and the latency low enough, the equation still turns in favor of the proxy as opposed to fragmentation in how the organization manages the network connectivity among our services.

The proxy application acts as both a proxy when the request is outgoing and as a reverse proxy when the request is incoming. Because we are going to be running one instance of the proxy application for each replica of our services, we are going to be having many proxies running in our systems. In order to configure them all, we would need a control plane that acts as the source of truth for the configuration and behavior we want to enforce and that would connect to the proxies to dynamically propagate the configuration. Because the control plane only connects to the proxies, it is not on the execution path of our service-to-service requests.

The service mesh pattern, therefore, is more invasive than the API gateway pattern because it requires us to deploy a data plane proxy next to each instance of every service, requiring us to update our CI/CD jobs in a substantial way when deploying our applications. While there are other deployment patterns for service mesh, the one described above (one proxy per service replica) is considered to be the industry standard since it guarantees the highest availability and allows us to assign a unique identity (via an mTLS certificate) to every replica of every service.

With service mesh, we are fundamentally dealing with one primary use case.

1. Service Connectivity

By outsourcing the network management to a third-party proxy application, the teams can avoid implementing network management in their own services. The proxy can then implement features like mutual TLS encryption, identity, routing, logging, tracing, load-balancing and so on for every service and workload that we deploy, including third-party services like databases that our organization is adopting but not building from scratch.

Since service connectivity within the organization will run over a large number of protocols, a complete service mesh implementation will ideally support not just HTTP but also any other TCP traffic, regardless of whether it is north-south or east-west. In this context, service mesh supports a broader range of services and implements L4/L7 traffic policies, whereas API gateways have historically been more focused on L7 policies only.

From a conceptual standpoint, service mesh has a very simple view of the workloads that are running in our systems: everything is a service, and services can talk to each other. Because an API gateway is also a service that receives requests and makes requests, an API gateway would just be a service among other services in a mesh.

Because every replica of every service requires a data plane proxy next to it, and because those proxies are effectively client-side load balancers that route outgoing requests to other proxies (and therefore other services), the control plane of a service mesh must know the address of each proxy so that L4/L7 routing can be performed. The address can be associated with any metadata, like the service name. By doing so, a service mesh essentially provides built-in service discovery that doesn't necessarily require a third-party solution. A service discovery tool can still be used to communicate outside of the mesh but most likely not for the traffic that goes inside the mesh.

API Gateway vs. Service Mesh

It is clear by looking at the use cases that there is an area of overlap between API gateways and service meshes, and that is the service connectivity use case.

The service connectivity capabilities that service mesh provides are conflicting with the API connectivity features that an API gateway provides. However, because the ones provided by service mesh are more inclusive (L4 + L7, all TCP traffic, not just HTTP and not just limited to APIs but to every service), they are in a way more complete. But as we can see from the diagram above, there are also use cases that service mesh does not provide, and that is the “API as a product” use case as well as the full API management lifecycle, which still belong to the API gateway pattern.

Since service mesh provides all the service connectivity requirements for a broader range of use-cases (L4+L7), it is natural to think that it would take over those concerns away from the API gateway (L7 only). This conclusion is valid only if we can leverage the service mesh deployment model, and as we will explore, this is not always the case.

One major divergent point between the two patterns is indeed the deployment model: in a service mesh pattern, we must deploy a proxy data plane alongside every replica of every service. This is easy to do when a team wants to deploy service mesh within the scope of its own product, or perhaps its own line of business, but it gets harder to implement when we want to deploy the proxy outside of that scope, for several reasons:

  1. Deploying a proxy application alongside every service of every product within the organization can be met with resistance, since different products, teams and lines of business may have fundamentally different ways to build, run and deploy their software.
  2. Every data plane proxy must initiate a connection to the control plane, and in certain cases, we don’t want — or we can’t — grant access to the control plane from services that are deployed outside of the boundaries of a product, a team or a line of business within the organization.
  3. It is not possible to deploy the proxy data plane alongside every service because we do not control all the services in the first place, like in the case of a third-party application built by a developer, customer or partner that is external to the organization.
  4. Services deployed in the same service mesh will have to use the same CA (Certificate Authority) in order to be provided with a valid TLS certificate to consume each other, and sharing a CA may not be possible or desirable among services that belong to different products or teams. In this instance, two separate service meshes (each one with its own CA) can be created, and they can communicate to each other via an intermediate API gateway.

Given that API gateways and service meshes focus on different use cases, I propose the following cheat sheet to determine when to use an API gateway and when to use a service mesh, with the assumption that in most organizations, both will be used since both use cases (the product/user use cases and the service connectivity one) will have to be implemented.

Cheat Sheet

We will use an API gateway to offer APIs “as a product” to internal or external clients/users via a centralized ingress point and to govern and control how they are being exposed and on-boarded via a full lifecycle APIM platform. 

We will use service mesh to build reliable, secure and observable L4/L7 traffic connectivity among all the services that are running in your systems via a decentralized sidecar deployment model that can be adopted and enforced on every service. 

Most likely, the organization will have both of these use cases, and therefore an API gateway and service mesh will be used simultaneously.

Example: A Financial Institution

Given the chart above, we can provide the following example.

It is very common for an organization to have different teams building different products, and these products will have to talk to each other (i.e., a financial institution would have a “banking product” to perform banking activities and a “trading product” that would allow trading on the stock market, but the two products will have to communicate to share information between them).

These teams will also decide at one point in the roadmap to implement service mesh in order to improve the service connectivity among the services that are making up the final product. Because different teams run at different speeds, they will implement two service meshes that are isolated from each other: “Service Mesh A” and “Service Mesh B.”

Let’s assume that in order to be highly available, both products are being deployed on two different data centers, “DC1” and “DC2.”

The banking team wants to offer its service as a product to their internal customer, the trading team. Therefore they want to set up policies in place to on-board the team as if it was an external user via an internal API gateway. The mobile team also will have to consume both products, and they will have to go through an edge API gateway ingress point in order to do that. The architecture would look like this:

Note that the API gateways are also part of the mesh, or otherwise they wouldn’t have an identity (TLS certificate) that would allow them to consume the services within each respective mesh. Like we have explored, an API gateway is just another service among the services that can make and receive network requests.

About the Author

Marco Palladino is an inventor, software developer and entrepreneur. He is the CTO and co-founder of Kong, the most widely adopted open source API platform.

Kong provides API gateway and service mesh products via Kong Gateway and Kuma, both open source and freely downloadable.

Trusted by startups to Fortune 500 enterprises, Kong offers the leading service control platform that gives technology teams the architectural freedom to power connections for modern software architectures and applications across clouds. Kong’s customers span across all industries, including Cargill, WeWork, SoulCycle, Yahoo! Japan, Verifone and Just Eat.

The post The Difference Between API Gateways and Service Mesh appeared first on KongHQ.

Kuma 0.4 Released With L7 Tracing + Grafana Dashboards!


We are happy to announce the release of Kuma 0.4! This is a major release focused on significantly better observability capabilities that also includes many new features and improvements across the board.

This release also marks the 10th release of Kuma since September 2019! We are very proud of the release momentum we have maintained so far, and we are looking forward to accelerating the delivery of more advanced L7 features within the next few months, as well as more advanced networking support for complex service mesh deployments across hybrid environments.

Notable Features

  • A new TrafficTrace policy that allows users to configure tracing on L7 HTTP traffic
  • Three official Grafana dashboards to visualize traffic metrics collected by Prometheus
  • For Kubernetes, a new selective sidecar injection capability
  • For Universal deployments, a new data plane format to better support gateway use cases
  • A new protocol tag to support different L7 protocols
  • And much more!

Traffic Trace Example

Grafana Dashboards

Kuma already allowed the collection of metrics via the TrafficMetric, and now it supports three new official Grafana dashboards that, out of the box, allow you to visualize vital metrics about the service mesh. The dashboards provide:

  • Metrics for a single data plane
  • Metrics for a single mesh
  • Metrics for the service traffic in a mesh

You can find the dashboards in the Grafana marketplace. Below, you see an example:

Grafana Example

Community

As we keep adding more and more features in Kuma and work with the broader community, don’t forget to check out the Community resources, including a real-time Slack chat to get an answer to any question you may have when using Kuma.

Kuma’s goal is to create a simple, portable and feature-rich service mesh that everybody can use in minutes across any system. Contributions are welcome to get one step closer to this vision on every new release.

Upgrading

Be sure to carefully read the Upgrade Guide, as this new version introduces a few important changes.

The post Kuma 0.4 Released With L7 Tracing + Grafana Dashboards! appeared first on KongHQ.

Supporting Legacy Web Services With Kong


Let’s admit it – web services (SOAP) are here to stay for a few more years, and maybe for a long time in some places where there is no business incentive to rebuild them. However, with a decline in new SOAP web services and most applications moving to cloud native architectures, a common query is “how can we support legacy services while moving to microservices?”

The good news is Kong’s versatility of handling multi-protocol traffic and extensibility can help address this question. I recently worked with a customer who wanted to quickly move to microservices but still proxy and integrate existing/legacy SOAP services. After all, existing and new services will likely need to communicate with each other. Its existing solution would not work with microservices architecture (too slow and monolithic), and the customer turned to Kong.  

It was clear to the customer that Kong could handle its journey to microservices, but the key question was: Could Kong handle its existing legacy services?

The key requirement with any digital project is to ensure that there is no impact to the consumers. In this case, it was important to provide the same service interface to the consumer (business partners outside of the enterprise) but perform LDAP authentication against a cloud-based identity store and then proxy the request to the existing application. 

The key design principles were simplicity and modularity so that as other scenarios surface, they can be addressed. Reviewing a number of different options, I came across the Kong Serverless plugin, which provides the ability to execute any code as part of any request in addition to the functionality provided by other plugins. This gave us the flexibility needed with the added benefit of also leveraging Kong plugins to minimize the amount of work we had to do. I’ve done custom logic work in other monolithic API gateways before, but the difference with Kong is that it is a light-weight, multi-protocol API gateway that provides enough extensibility to support a variety of use cases (legacy to microservices and FaaS) while staying clear of becoming a heavyweight ESB.

Let’s go through the details of what we configured. I detail the steps sequentially below (I used Kong Enterprise v1.3 for the testing).

1. Connecting to the Calculator Web Service Directly

Using httpie

http POST http://www.dneonline.com/calculator.asmx?op=Add Content-type:application/soap+xml <<< '<soap12:Envelope xmlns:soap12="http://www.w3.org/2003/05/soap-envelope"><soap12:Body><Add xmlns="http://tempuri.org/"><intA>45</intA><intB>55</intB></Add></soap12:Body></soap12:Envelope>'

HTTP/1.1 200 OK
Cache-Control: private, max-age=0
Content-Length: 325
Content-Type: application/soap+xml; charset=utf-8
Date: Thu, 20 Feb 2020 11:03:23 GMT
Server: Microsoft-IIS/7.5
X-AspNet-Version: 2.0.50727
X-Powered-By: ASP.NET

<?xml version="1.0" encoding="utf-8"?><soap:Envelope xmlns:soap="http://www.w3.org/2003/05/soap-envelope" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:xsd="http://www.w3.org/2001/XMLSchema"><soap:Body><AddResponse xmlns="http://tempuri.org/"><AddResult>100</AddResult></AddResponse></soap:Body></soap:Envelope>

Using cURL

curl -v \
>   --url 'http://www.dneonline.com/calculator.asmx?op=Add' \
>   --header 'content-type: application/soap+xml; charset=utf-8' \
>   --data '<soap12:Envelope xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:xsd="http://www.w3.org/2001/XMLSchema" xmlns:soap12="http://www.w3.org/2003/05/soap-envelope">
>   <soap12:Body>
>     <Add xmlns="http://tempuri.org/">
>       <intA>45</intA>
>       <intB>55</intB>
>     </Add>
>   </soap12:Body>
> </soap12:Envelope>'
*   Trying 45.40.165.23...
* TCP_NODELAY set
* Connected to www.dneonline.com (45.40.165.23) port 80 (#0)
> POST /calculator.asmx?op=Add HTTP/1.1
> Host: www.dneonline.com
> User-Agent: curl/7.64.1
> Accept: */*
> content-type: application/soap+xml; charset=utf-8
> Content-Length: 316
>
* upload completely sent off: 316 out of 316 bytes
< HTTP/1.1 200 OK
...
<?xml version="1.0" encoding="utf-8"?><soap:Envelope xmlns:soap="http://www.w3.org/2003/05/soap-envelope" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:xsd="http://www.w3.org/2001/XMLSchema"><soap:Body><AddResponse xmlns="http://tempuri.org/"><AddResult>100</AddResult></AddResponse></soap:Body></soap:Envelope>* Closing connection 0

Using Kong Studio:

 

2. Pre-Function Script

The Lua script below, which I saved as get-ws-creds.lua, does the following:

  • Extracts the username and password from the SOAP header (WS-Security header)
  • Constructs the Authorization header required for LDAP Authentication

-- Read the raw SOAP body and parse it with xml2lua
local soap_body = kong.request.get_raw_body()
local xml2lua = require("xml2lua")
local tree = require("xmlhandler.tree")
local handler = tree:new()
local parser = xml2lua.parser(handler)
parser:parse(soap_body)
-- Extract the username and password from the WS-Security UsernameToken header
local header = handler.root["SOAP-ENV:Envelope"]["SOAP-ENV:Header"]["SOAP-ENV:Security"]["SOAP-ENV:UsernameToken"]
local username = header["SOAP-ENV:Username"]
local passwd = header["SOAP-ENV:Password"][1]
-- Construct LDAP AuthZ header
local authorization = username .. ':' .. passwd;
local authorizationBase64 = ngx.encode_base64(authorization);
local authorizationHeader = "LDAP " .. authorizationBase64;
kong.log.info("-->>> auth: " .. authorization, " auth_header: ", authorizationHeader)
-- Set AuthZ header
kong.service.request.add_header('Authorization', authorizationHeader)
kong.log.info("-->>> set ws creds end")

3. Configuring Services, Route and Plugins

To test the integration, we use a publicly available calculator web service, configured in Kong as Calculator-Web-Service and proxying to the URL http://www.dneonline.com/calculator.asmx. The service performs an add operation on the numbers passed in the request.

a. Configure Service

Let's configure a test service with the Kong Admin API using the httpie CLI:

http -f localhost:8001/services name=Calculator-Web-Service url=http://www.dneonline.com:80/calculator.asmx

HTTP/1.1 201 Created
{
    "client_certificate": null,
    "connect_timeout": 60000,
    "created_at": 1582112424,
    "host": "www.dneonline.com",
    "id": "f1b677fe-4fba-41d1-8d1a-91743863775d",
    "name": "Calculator-Web-Service",
    "path": "/calculator.asmx",
    "port": 80,
    "protocol": "http",
    "read_timeout": 60000,
    "retries": 5,
    "tags": null,
    "updated_at": 1582112424,
    "write_timeout": 60000
}

Next, we configure the necessary Kong routes and the Pre-function and LDAP plugins to finalize the setup:  

b. Configure a route /secure-soap-ldap to test

http -f PUT http://<Kong_Admin_API_Host>:8001/services/Calculator-Web-Service/routes/secure-soap-ldap   paths[]=/secure-soap-ldap

c. Configure the Pre-function plugin on the route. Notice we pass the lua script get-ws-creds.lua. This script will execute before the LDAP auth plugin runs.

http -f http://<Kong_Admin_API_Host>:8001/routes/secure-soap-ldap/plugins name=pre-function config.functions=@get-ws-creds.lua

d. Configure the LDAP Authentication Kong plugin on the route. Here, I’ve provided a test LDAP connection and query details, which you can modify to suit your LDAP instance.

http -f <Kong_Admin_API_Host>:8001/routes/secure-soap-ldap/plugins name=ldap-auth-advanced config.ldap_host=<LDAP_Host> config.ldap_port=389 config.base_dn=ou=people,dc=api,dc=au config.header_type=ldap config.attribute=cn config.verify_ldap_host=false config.hide_credentials=true

4. Verifying in Kong Manager

Once you’ve used the Kong Admin API to configure the service, route and plugins, you can quickly visualize and verify in Kong Manager what we did programmatically.  

 

5. Validation

Now it’s time to test. I will use Kong Studio to test since it can handle SOAP/WSDL in addition to REST and GraphQL, in a single tool.  

Let's try first with correct LDAP credentials passed through the WS-Security header in the SOAP envelope, and… it works! The Pre-function plugin extracts the credentials and seamlessly passes them to the LDAP Authentication plugin to check. Once authentication succeeds, Kong proxies the request to the upstream Calculator Web Service, which returns a SOAP response as below.

Now, I’ll try with some credentials that don’t exist in the LDAP, and I get back an error response with a 403 code.

What Next?

Now that we’ve successfully and securely proxied an existing SOAP service, we have the opportunity to enforce any of the capabilities that the Kong API platform provides, including but not limited to:

  • Rate limiting
  • Response caching
  • Response transformer (for example, to customize the error response)

You can check out all the plugins that Kong provides at the Kong Hub.
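
For example, adding basic rate limiting to the same route could look like this (using the bundled rate-limiting plugin; the limit shown is arbitrary):

http -f <Kong_Admin_API_Host>:8001/routes/secure-soap-ldap/plugins name=rate-limiting config.minute=5 config.policy=local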

 

I’ve left this last step for you to try as per your requirements and creativity. I welcome your feedback.  

Summary

In a few minutes, we were able to securely proxy an existing legacy web service and add additional Kong security plugins. Flexibility and ease of use are why Kong is so popular with customers across the world and why it’s quickly becoming the de facto solution for their transition to microservices.

The post Supporting Legacy Web Services With Kong appeared first on KongHQ.

Kong Partners with Vertigo to Accelerate Digital Transformation in Latin America


We’re excited to partner with Vertigo Tecnologia, experts in digital transformation, to help companies in Latin America accelerate their transition to microservices and other emerging architectures — which is especially crucial in today’s hybrid and multi-cloud environment. 

Vertigo has been developing solutions for complex business issues for more than 20 years. Through Kong’s Go-To-Market (GTM) Partner Program, Vertigo will make the Kong Enterprise full lifecycle API management platform available to their clients, enabling them to effectively secure, connect and manage their APIs and microservices. This will in turn help them adapt to the changing IT landscape more quickly and continue to be competitive in the market.

The post Kong Partners with Vertigo to Accelerate Digital Transformation in Latin America appeared first on KongHQ.

How to Secure APIs and Services Using OpenID Connect


A modern API gateway like Kong enables organizations to achieve some use cases much more easily than traditional gateways. The reason is that older, traditional gateways try to pack as many features as possible into a heavyweight monolith, while modern solutions use a best-in-breed approach. These traditional solutions not only try to be a gateway, but they also try to be a business intelligence system, a central logging hub, a monitoring tool and so much more. Unfortunately, this leads to a solution that on paper can do many things but does not do any one thing particularly well.

A more tactical approach is to leverage best-in-breed solutions that integrate well with each other and are simple to use. Kong’s platform delivers on this approach and provides a modern gateway that’s fast, scalable, easy to use and can easily integrate with other platforms through its pluggable architecture. In this blog post, we will cover how easily Kong integrates with existing identity providers (IdPs) to help secure and govern APIs.

AuthN and AuthZ

The de facto standard for API security today is OpenID Connect with JWT. A few years ago, many gateways heavily relied on being the OAuth/OpenID Connect provider for the whole flow – but today, most IdPs have implemented OpenID Connect, and therefore customers prefer that the management of keys, tokens and users happen in the IdP rather than the gateway, removing the need to manage a separate silo of identity.

Let's think about a very typical customer scenario we come across: you have a central IdP as the identity manager and central source of truth for authentication as well as for users' groups/permissions. A legacy gateway approach would use the IdP for authentication and then, in the gateway, define authorization per endpoint for the groups you want to grant access to the backend services.

This design has two flaws:

  1. To attach users to groups, the users must exist in the gateway, so you end up having to manage consumers in the gateway.
  2. Administrators have to maintain group memberships in Kong to grant or revoke permissions.

A Better Approach with Kong

With Kong, you can leverage the IdP for both authentication and authorization without having to manage consumers or groups in Kong, giving you the ability to leverage your IdP to drive access without additional operational overhead and risk. To do this, we can configure Kong to use OpenID Connect groups to attach scopes to the users and let Kong provide access based on the scopes in the JWT tokens. This solves both issues at the same time, and the administration of users and their permissions are now located where they should be: in the IdP.

Let’s see this in practice:

Note: The following example will use Kong Enterprise installed locally. Kong Enterprise provides access to the OpenID Connect plugin needed for this scenario. For the IdP, we will be using KeyCloak. Kong supports many other IdPs. For a full list, see the OpenID Connect plug-ins page (https://docs.konghq.com/hub/kong-inc/openid-connect/).

To achieve this, I want to walk you through a small KeyCloak example. Within KeyCloak, the first step is creating a new scope, attaching it to a group and then attaching that group to a user:

Keycloak scope creation

Keycloak scope and role mapping

Role to group mapping

User to group mapping

Kong Enterprise settings

Note: I am using httpie as my command line tool of choice – feel free to use Studio, Insomnia, curl, etc. instead.

Let’s begin by creating a service and route in Kong for validation. Replace localhost with the hostname of your Kong installation.

Service and route

http POST localhost:8001/services name=openidconnect url=http://httpbin.org/anything

http POST localhost:8001/services/openidconnect/routes name=openidconnectRoute paths=/oidc -f

OpenID Connect plugin

OK, now let’s configure the openid-connect plugin to connect to the KeyCloak instance:

http -f localhost:8001/routes/openidconnectRoute/plugins \
     name=openid-connect \
     config.issuer=https://keycloak.apim.eu/auth/realms/kong/.well-known/openid-configuration \
     config.client_id=blog_post \
     config.client_secret=a5186adc-b5e2-4501-85a8-eb19a5e1a2a3 \
     config.ssl_verify=false \
     config.consumer_claim=email \
     config.verify_signature=false \
     config.redirect_uri=http://localhost:8000/oidc \
     config.consumer_optional=true \
     config.scopes_required=kong_api_access

Let’s have a look at the parameters.

config.redirect_uri defines the URI the IdP will redirect the user to after a successful authentication.

config.consumer_optional defines whether a Kong consumer must exist for access to be allowed.

config.scopes_required defines which scopes are authorized for access. Here, we require that the JWT returned by KeyCloak include the scope kong_api_access; only then will Kong authorize the request and route it to the upstream (backend). The KeyCloak screenshots above show the scope attached to the group of which the user is a member.

Let’s try it

For testing purposes, I have two example users in KeyCloak:

  • Blog_with_scope / veryComplexPa55word
  • Blog_without_scope / veryComplexPa55word

Open a new browser window (either in incognito mode or with all caches empty) and navigate to http://localhost:8000/oidc. You will notice that the user Blog_with_scope will get access.

But Blog_without_scope is denied access even though it is also a valid user in KeyCloak.

The user without the scope will produce a log entry like required scopes were not found [ openid, profile, email ] in your Kong logs.

Last but not least, let’s have a look at the JWT for Blog_with_scope, which includes the scope:
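
If you would rather inspect the token outside the browser flow, here is a minimal sketch using KeyCloak’s password grant. The token endpoint is an assumption derived from the issuer configured above, and the client and user credentials are the test values from this post:

# Assumed token endpoint, derived from the config.issuer value above
http -f POST https://keycloak.apim.eu/auth/realms/kong/protocol/openid-connect/token \
     grant_type=password \
     client_id=blog_post \
     client_secret=a5186adc-b5e2-4501-85a8-eb19a5e1a2a3 \
     username=Blog_with_scope \
     password=veryComplexPa55word

# Decode the payload (second dot-separated segment) of the returned access_token
# and confirm that kong_api_access appears in its scope claim; you may need to
# append '=' padding for base64 to decode cleanly
echo "<paste access_token here>" | cut -d '.' -f2 | base64 --decode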

Outlook

In this post, we’ve covered how to secure APIs and services with Kong and an IdP without having to manage local consumers or groups in Kong, allowing the IdP to be the source of truth for identity and entitlements. In a future blog post, we’ll cover how to apply policy (e.g., rate limiting and caching) to authenticated consumers.

We hope you found this blog post useful. Drop me an email or a Twitter mention if you have any questions.

The post How to Secure APIs and Services Using OpenID Connect appeared first on KongHQ.


Exposing Kuma Service Mesh Using Kong API Gateway


In his most recent blog post, Marco Palladino, our CTO and co-founder, went over the difference between API gateways and service mesh. I highly recommend reading his blog post to see how API management and service mesh are complementary patterns for different use cases, but to summarize in his words, “an API gateway and service mesh will be used simultaneously.” We maintain two open source projects that work flawlessly together to cover all the use cases you may encounter. 

So, in this how-to blog post, I’ll cover how to combine Kong for Kubernetes and Kuma Mesh on Kubernetes. Please have a Kubernetes cluster ready in order to follow along with the instructions below. In addition, we will be using the kumactl command line tool, which you can download from the official installation page.

Step 1: Installing Kuma on Kubernetes

Installing Kuma on Kubernetes is fairly straightforward, thanks to the kumactl install [..] command. You can use it to install the control plane with a single command:

$ kumactl install control-plane | kubectl apply -f -
namespace/kuma-system created
secret/kuma-sds-tls-cert created
secret/kuma-admission-server-tls-cert created
…

After everything in the kuma-system namespace is up and running, let’s deploy our demo marketplace application:

$ kubectl apply -f https://bit.ly/demokuma
namespace/kuma-demo created
serviceaccount/elasticsearch created
…

The application is split into four services with all the traffic entering from the frontend app service. If we want to authenticate all traffic entering our mesh using Kong plugins, we will need to deploy the gateway alongside the mesh. Once again, to learn more about why having a gateway and mesh is important, please read Marco’s blog post.

Step 2: Deploying Kong for Kubernetes

Kong for Kubernetes is an ingress controller based on the open source Kong Gateway. You can quickly deploy it using kubectl:

$ kubectl apply -f https://bit.ly/demokumakong
customresourcedefinition.apiextensions.k8s.io/kongconsumers.configuration.konghq.com created
customresourcedefinition.apiextensions.k8s.io/kongcredentials.configuration.konghq.com created
customresourcedefinition.apiextensions.k8s.io/kongingresses.configuration.konghq.com created
customresourcedefinition.apiextensions.k8s.io/kongplugins.configuration.konghq.com created
serviceaccount/kong-serviceaccount created
clusterrole.rbac.authorization.k8s.io/kong-ingress-clusterrole created
clusterrolebinding.rbac.authorization.k8s.io/kong-ingress-clusterrole-nisa-binding created
configmap/kong-server-blocks created
service/kong-proxy created
service/kong-validation-webhook created
deployment.apps/ingress-kong created

On Kubernetes, Kuma Dataplane entities are automatically generated. To inject a gateway Dataplane, the API gateway’s pod needs the kuma.io/gateway: enabled annotation:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: ingress-kong
  ...
spec:
  template:
    metadata:
      annotations:
        kuma.io/gateway: enabled

Our kuma-demo-kong.yaml already includes this annotation, so you don’t need to add it manually.

After Kong is deployed, export the proxy IP:

export PROXY_IP=$(minikube service -p kuma-demo -n kuma-demo kong-proxy --url | head -1)

To check that the proxy IP has been exported, run:

$ echo $PROXY_IP
http://192.168.64.29:30409

Sweet! Now that we have Kong for Kubernetes deployed, go ahead and add an ingress rule to proxy traffic to the marketplace frontend service:

$ cat <<EOF | kubectl apply -f - 
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: marketplace
  namespace: kuma-demo
spec:
  rules:
  - http:
      paths:
      - path: /
        backend:
          serviceName: frontend
          servicePort: 80
EOF

By default, the ingress controller distributes traffic amongst all the pods of a Kubernetes service by forwarding the requests directly to pod IP addresses. One can choose the load-balancing strategy to use by specifying a KongIngress resource.

However, in some use cases, the load-balancing should be left up to kube-proxy or a sidecar component in the case of service mesh deployments. For us, load-balancing should be left to Kuma, so the following annotation has been included in our frontend service resource:

apiVersion: v1
kind: Service
metadata:
  name: frontend
  namespace: kuma-demo
  annotations:
    ingress.kubernetes.io/service-upstream: "true"
spec:
  ...

Remember to add this annotation to the appropriate services when you deploy Kong with Kuma.

Step 3: Adding a Traffic Permission Policy

With both Kong and Kuma running on our cluster, all that is left to do is add a traffic permission policy that allows Kong to reach the frontend service:

$ cat <<EOF | kubectl apply -f - 
apiVersion: kuma.io/v1alpha1
kind: TrafficPermission
mesh: default
metadata:
  namespace: kuma-demo
  name: kong-to-frontend
spec:
  sources:
  - match:
      service: kong-proxy.kuma-demo.svc:80
  destinations:
  - match:
      service: frontend.kuma-demo.svc:80
EOF

That’s it! Now, if you visit the $PROXY_IP, you will land in the marketplace application proxied through Kong. From here, you can enable all those fancy plugins that Kong has to offer to work alongside the Kuma policies.
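
For example, here is a minimal, hypothetical sketch of what one of those plugins could look like in this setup. The plugin name rate-limit-5-min and its configuration are placeholders:

apiVersion: configuration.konghq.com/v1
kind: KongPlugin
metadata:
  name: rate-limit-5-min    # hypothetical name
  namespace: kuma-demo
plugin: rate-limiting
config:
  minute: 5
  policy: local

You would then reference it from the marketplace Ingress metadata via the plugins annotation (konghq.com/plugins: rate-limit-5-min on recent controller versions), and Kong would rate limit requests before they ever reach the mesh.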

Thanks for following along 🙂

The post Exposing Kuma Service Mesh Using Kong API Gateway appeared first on KongHQ.

Protect Your Applications With Cleafy Plugin for Kong


When protecting your online services, the weakest link is represented by the endpoints – that is, by the end-user devices running web or mobile applications or by external systems leveraging open APIs. As a matter of fact, there is a growing number of targeted attacks leveraging sophisticated techniques such as malicious web injections, mobile overlay and API abuse attacks to perform identity hijacking, account takeover, transaction tampering and payment frauds.

Traditional threat detection or anti-malware tools are either unable to detect these advanced attacks, generate too many false positives with a heavy operational impact on security teams, or cannot support the real-time decisions that are required to avoid damage without causing customer friction. This situation is exacerbated as digital transformation (DX), instant/real-time payments and open banking initiatives extend the exposed security perimeter.

Cleafy, a Kong Hub plugin partner, provides an innovative threat detection and protection technology against the most advanced attacks from web, mobile and API channels. Cleafy is clientless and application-transparent, as it does not require any change to managed applications. Cleafy works by passively monitoring the application traffic between endpoints and backend services, continuously checking the application and communication integrity and assessing in real time the risk associated with each session, even before the authentication phase. In order to do so, Cleafy can smoothly integrate into the application delivery infrastructure, typically at the ADC or API gateway level. Once threats are identified in real time, adaptive threat responses can be automatically triggered, such as risk-based authentication or Cleafy dynamic application protection.

As applications become increasingly API-based, the adoption of API gateways is also increasing. In particular, Kong is a fully-fledged and platform-agnostic API gateway solution that is being adopted by leading organizations to enable high volume and low latency. Kong technology has built a solid reputation for being fast, powerful, and stable in supporting core API management requirements, such as routing, rate limiting and authentication. Moreover, Kong provides a plugin-based environment (see https://docs.konghq.com/hub/) which allows third-party vendors to easily develop integrations and extend Kong capabilities.

The “Cleafy plugin for Kong” allows customers to easily integrate Cleafy threat detection and protection in any Kong-powered architecture and thus protect their services and end users, leveraging Kong as an integration point for Cleafy. As described above, in order for Cleafy to verify the integrity of the application end to end, it needs to analyse in real time all application requests and responses between endpoints and the Kong API gateway. It’s worth noting that the ability to extend Kong functionality using the Lua language made developing the Cleafy plugin for Kong quite easy.

Integrating Kong with Cleafy

The following figure shows the high-level architecture of the Cleafy plugin for Kong. 

Fig 1: Cleafy high-level integration architecture with Kong

All interactions between endpoints and the backend application service are intercepted by the Cleafy plugin for Kong, thus allowing Cleafy to analyse them. 

Basically, the Cleafy plugin for Kong is made up of two main components:

  • Response Interceptor: This component is responsible for grabbing each HTTP response served by the application server. Each response is collected and proxied to the endpoint which originated the corresponding request, after being instrumented so that Cleafy can (asynchronously) receive a copy of the DOM/API body once it has been received and executed by the endpoint.

  • Message Dispatcher: Each intercepted response is collected and sent to the Cleafy engine. To accomplish this, the dispatcher builds a message that contains the body of each HTTP response and some additional information, including both the HTTP request and HTTP response headers.

As soon as Cleafy receives the copy of the response from the endpoint, an integrity check is performed with respect to the original response and any difference is automatically extracted. Such differences may represent malicious code injected on the endpoint or in the communication, thus highlighting potential threats.

Once the Cleafy plugin for Kong is installed and properly configured, no additional configuration is required in order to integrate Cleafy with Kong and have Cleafy ingest and analyse traffic passing through the Kong API gateway. 

The following figure shows how sessions are displayed in the Cleafy web console, with a risk score associated with each event corresponding to a web/API request issued by the endpoint. Cleafy also provides a comprehensive set of APIs that enable other solutions to take advantage of the information Cleafy collects and generates, including risk score, threat evidence and classification.

Fig 2: Cleafy web console displaying sessions with associated real-time risk score

Conclusions

The “Cleafy plugin for Kong” allows customers to easily integrate Cleafy threat detection and protection in any Kong-powered architecture and thus protect their services and end users, leveraging Kong as an integration point for Cleafy.

Kong’s plugin-based environment allows third-party vendors to easily develop integrations and extend Kong capabilities with additional functionalities such as authentication, traffic control, logging, analytics and monitoring, transformations, and security-related features (such as Cleafy). 

The Kong plugin development environment and the Plugin Development Kit, introduced in Kong 0.14, are very well documented and easy to work with:

https://docs.konghq.com/0.14.x/plugin-development/

The post Protect Your Applications With Cleafy Plugin for Kong appeared first on KongHQ.

Kong for Kubernetes 0.8 Released!


Kong for Kubernetes is a Kubernetes Ingress Controller based on the Kong Gateway open source project. Kong for K8s is fully Kubernetes-native and provides enhanced API management capabilities. From an architectural perspective, Kong for K8s consists of two parts: a Kubernetes controller, which manages the state of Kong for K8s ingress configuration, and the Kong Gateway, which processes and manages incoming API requests.

We are thrilled to announce the availability of this latest release of Kong for K8s! This release’s highlight features include Knative integration, two new Custom Resource Definitions (CRDs) – KongClusterPlugins and TCPIngress – and a number of new annotations to simplify configuration.

This release works out of the box with the latest version of Kong Gateway as well as Kong Enterprise. All users are advised to upgrade.

Kong as Knative’s Ingress Layer 

Knative is a Kubernetes-based platform that allows you to run serverless workloads on top of Kubernetes. Knative manages auto-scaling, including scale-to-zero, of your workload using Kubernetes primitives. Unlike AWS Lambda or Google Cloud Functions, Knative enables serverless workloads for functions and application logic on any Kubernetes cluster, in any cloud provider or on bare metal.

In addition to the ingress and API management layer for Kubernetes, Kong can now perform ingress for Knative workloads as well. In addition to Ingress resources, Kong can run plugins for Knative serving workloads, taking up the responsibility of authentication, caching, traffic shaping and transformation. This means that as Knative HTTP-based serverless events occur, they can be automatically routed through Kong and appropriately managed. This should keep your Knative services lean and focused on the business logic.

apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: helloworld-go
  namespace: default
spec:
  template:
    metadata:
      annotations:
        konghq.com/plugins: free-tier-rate-limit, prometheus-metrics
    spec:
      containers:
        - image: gcr.io/knative-samples/helloworld-go
          env:
            - name: TARGET
              value: Go Sample v1

Here, whenever inbound traffic for the helloworld-go Knative Service is received, Kong will execute two plugins: free-tier-rate-limit and prometheus-metrics.

New CRD: KongClusterPlugins

Over the past year, we have received feedback from many users regarding plugin configuration. Until now, our object model only allowed for namespaced KongPlugin resources.

In this model, Service owners are expected to own the plugin configuration (KongPlugin) in addition to Ingress, Service, Deployment and other related resources for a given service. This model works well for the majority of use cases but has two limitations:

  • Sometimes it is important for the plugin configuration to be homogeneous across all teams or groups of services that are running in different namespaces.
  • Also, sometimes the plugin configuration should be controlled by one team, and the plugin applied to an Ingress or Service by a different team. This is usually true for authentication and traffic shaping plugins, where the configuration could be controlled by an operations team (which configures the location of the IdP and other properties), and the plugin is then used for certain services on a case-by-case basis.

 

To address these problems, in addition to the KongPlugin resource, we have added a new cluster-level custom resource: KongClusterPlugin.

The new resource is identical in every way to the existing KongPlugin resource, except that it is a cluster-level resource.
You can now create one KongClusterPlugin and share it across namespaces. You can also RBAC this resource differently to allow for cases where only users with specific roles can create plugin configurations and service owners can only use the well-defined KongClusterPlugins.
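
As an illustration, a KongClusterPlugin might look like the following minimal sketch. The plugin choice and its configuration are hypothetical, and the resource deliberately has no namespace because it is cluster-scoped:

apiVersion: configuration.konghq.com/v1
kind: KongClusterPlugin
metadata:
  name: global-rate-limit   # hypothetical name; note there is no namespace field
  annotations:
    kubernetes.io/ingress.class: kong
plugin: rate-limiting
config:
  minute: 10
  policy: local

Service owners in any namespace can then reference it by name from the plugins annotation on their Ingress or Service.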

Beta Release of TCPIngress

In the last release, we introduced native support for gRPC-based services.

With this release, we have now opened up support for all services that are based on a custom protocol.

TCPIngress is a new Custom Resource for exposing all kinds of services outside a Kubernetes cluster.

The definition of the resource is very similar to the Ingress resource in Kubernetes itself.

Here is an example of exposing a database running in Kubernetes:

apiVersion: configuration.konghq.com/v1beta1
kind: TCPIngress
metadata:
  name: sample-tcp
spec:
  rules:
  - port: 9000
    backend:
      serviceName: config-db
      servicePort: 2701

Here, we are asking Kong to forward all traffic it receives on port 9000 to port 2701 of the config-db service in Kubernetes.

SNI-Based Routing

In addition to exposing TCP-based services, Kong also supports secure TLS-encrypted TCP streams. In these cases, Kong can route traffic arriving on the same TCP port to different services inside Kubernetes based on the SNI of the TLS handshake. Kong will terminate the TLS session and proxy the TCP stream to the service in plain text, or it can re-encrypt the stream as well.
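
A hedged sketch of what SNI-based routing could look like with TCPIngress is shown below; the hostname, TLS secret and service names are hypothetical:

apiVersion: configuration.konghq.com/v1beta1
kind: TCPIngress
metadata:
  name: sample-tls              # hypothetical
  annotations:
    kubernetes.io/ingress.class: kong
spec:
  tls:
  - hosts:
    - config-db.example.com     # SNI to match during the TLS handshake
    secretName: config-db-tls   # certificate Kong serves for this SNI
  rules:
  - host: config-db.example.com
    port: 9443
    backend:
      serviceName: config-db
      servicePort: 2701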

Annotations

This release ships with a new set of annotations that should minimize and simplify Ingress configuration:

  - konghq.com/plugins
  - konghq.com/override
  - konghq.com/client-cert
  - konghq.com/protocols
  - konghq.com/protocol
  - konghq.com/preserve-host
  - konghq.com/path
  - konghq.com/strip-path
  - konghq.com/https-redirect-status-code

With these new annotations, the need for using the KongIngress custom resource should go away for the majority of the use cases. 

For a complete list of annotations and how to use them, check out the annotations document.
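
As a quick, hedged example of the new style, an Ingress using a couple of these annotations could look like the following; the Ingress, plugin and service names are hypothetical:

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: demo
  annotations:
    kubernetes.io/ingress.class: kong
    konghq.com/plugins: free-tier-rate-limit   # hypothetical KongPlugin name
    konghq.com/strip-path: "true"              # strip /demo before proxying upstream
spec:
  rules:
  - http:
      paths:
      - path: /demo
        backend:
          serviceName: demo-service
          servicePort: 80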

Upgrading

If you are upgrading from a previous version, please read the changelog carefully.

Breaking Changes

This release ships with a major breaking change that can break Ingress routing for your cluster if you are using path-based routing.

Until the last release, Kong used to strip the request path by default. With this release, we have disabled the feature by default. You are free to use KongIngress resource or the new konghq.com/strip-path annotation on Ingress resource to control this behavior.

Also, if you are upgrading, please make sure to install the two new Custom Resource Definitions (CRDs) into your Kubernetes cluster. Failure to do so will result in the controller throwing errors.

Deprecations

Starting with this release, we have deprecated the following annotations, which will be removed in the future:

  - configuration.konghq.com
  - plugins.konghq.com
  - configuration.konghq.com/protocols
  - configuration.konghq.com/protocol
  - configuration.konghq.com/client-cert

These annotations have been renamed to their corresponding konghq.com annotations.

Please read the annotations document on how to use new annotations.

Compatibility

Kong for K8s supports a variety of deployments and run-times. For a complete view of Kong for K8s compatibility, please see the compatibility document.

Getting Started!

You can try out Kong for K8s using our lab environment, available for free to all at konglabs.io/kubernetes.

You can install Kong for K8s on your Kubernetes cluster with one click:

$ kubectl apply -f https://bit.ly/k4k8s

or

$ helm repo add kong https://charts.konghq.com
$ helm repo update
$ helm install kong/kong

Alternatively, if you want to use your own Kubernetes cluster, follow our getting started guide to get your hands dirty.

Please feel free to ask questions on our community forum — Kong Nation — and open a Github issue if you happen to run into a bug. 

Happy Konging!

The post Kong for Kubernetes 0.8 Released! appeared first on KongHQ.

Kong Recognized as a Great Place to Work®!


We are so excited to share that Kong has been certified as a Great Place to Work®! We recognize our certification as a huge achievement for us, as it reflects our commitment to putting people first by fostering a team-based, collaborative work environment where everyone can be their authentic selves.

Great Place to Work determines the eligibility requirement based on an employee satisfaction survey that looks at a company’s leadership, business practices, culture and workplace environment. 

At Kong, one of our values is “Real,” which we define as being genuine, principled and confident without attitude. We are humbled that our employees see this in action. According to the survey results, 98 percent of Kongers felt welcomed when they joined the company; 98 percent feel cared for at work, and 98 percent believe management facilitates a real entrepreneurial work environment. 

Our Co-Founder and CEO Augusto Marietti recognizes, “This is another validation that our teams truly care for each other by building a long-lasting community of brave souls and whole hearts that we like to call Kongers.” 

This is Kong’s first time being recognized as a Great Place to Work, so we want to thank our employees for their valuable feedback and commitment to making Kong an amazing place to work! The employee experience is a top priority and something that we will continue to work on every day. 

To view Kong’s Great Place to Work profile and results from the company-wide employee survey, please click here. If you’re curious to learn more about working at Kong, visit our careers website at konghq.com/careers and follow us on social media on Twitter and Instagram!

The post Kong Recognized as a Great Place to Work®! appeared first on KongHQ.

Kongers Unite to Support Australian Wildfire Recovery


It would be an understatement to say that 2020 has been one of the most difficult years many of us have ever experienced. With the latest COVID-19 crisis bringing the world to its knees, I am once again reminded of the way that people unite in the face of hardship. Our ability to mobilize and pull together our resources to create solutions is the silver lining to any crisis.

Australia began the year ablaze, and I want to express gratitude, from both myself and the Kong Australia team, for how the Kong community stepped up in both the short and long term to help our country recover from this disaster.

We were heartbroken to witness the devastation caused by the terrible bushfires that impacted much of Australia, and I want to acknowledge and thank the heroes of the fire service across the country as well as the Australian Defence Force (ADF), who put their lives on the line to protect our people, property and wildlife.

I also want to express my deepest thanks to all the Kongers who reached out to our local team, customers and community expressing concern and with offers to help.

Lastly, I want to thank all those who donated to Kong’s fundraiser for the Center for Disaster Philanthropy, which funds medium- to long-term recovery of areas impacted by the bushfires – we matched every dollar donated and raised over $8K in two weeks!

While Australia faces a long road to recovery, the spirit of this magnificent country and its people can never be underestimated. We want all Australians to know that you are not alone, and at Kong, we are determined to leverage our most valuable asset – our people – to help rebuild and restore our communities through each crisis that we face.

The post Kongers Unite to Support Australian Wildfire Recovery appeared first on KongHQ.
