Kuma 0.3.1 Released with Third-Party CA Support, Health Checks, and a GUI!


At KubeCon North America 2019, the community provided us with a ton of feedback and feature requests. We’re proud to release some of the most widely requested features in our latest version of Kuma: third-party CA (Certificate Authority) support, health checks, and a GUI! Kuma’s new health checks will help minimize the number of failed requests between your application’s services. Third-party CA support will provide more flexibility when deciding how to secure your mesh. Lastly, the GUI will help you visualize the mesh and its policies in an intuitive format! Let’s take a look at how each of these works.

You can take a look at the full change log here.

Third-Party CA Support

Kuma has a built-in CA to issue certificates for data planes. Data plane certificates generated by Kuma are X.509 certificates that are SPIFFE compliant. However, sometimes you need the flexibility to use a CA that you’re already familiar with. Starting today, you have that choice when using Kuma, with two quick changes. The first is to use the new kumactl command to add a certificate with a key and cert file that you provide. The full kumactl command would be:

kumactl manage ca provided certificates add --mesh demo --key-file key.pem --cert-file cert.pem
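
If you don’t already have a CA key and certificate pair to provide, a self-signed pair for testing can be generated with openssl. A sketch, with an illustrative subject and validity period that you should adjust for your environment:

$ openssl req -x509 -new -nodes -newkey rsa:2048 -days 365 \
    -subj "/CN=Kuma demo CA" -keyout key.pem -out cert.pem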

Once you add a certificate via kumactl, all you have to do is change the mesh resource to use a provided CA instead of the builtin CA. The new mesh resource would look like this:

type: Mesh
name: default
mtls:
  enabled: true
  ca:
    provided: {}

With the CA set to provided, the control plane will use a CA certificate provided by the user to sign the certificates of individual data planes.

Health Checks

The objective of the health checks functionality is to dynamically mark individual endpoints as healthy or unhealthy. This is desirable since, at any given point, one source service may be able to connect to a destination service successfully while another service is failing to reach it – the first node will consider it healthy, while the second will mark it as unhealthy and start routing traffic to other data planes.

Kuma supports two kinds of health checks, which can be used separately or in conjunction:

  • Active Checks: Where the data plane periodically sends requests to a destination endpoint, and the health of the target is determined based on its response
  • Passive Checks (also known as outlier detection): Where the data planes analyze the ongoing traffic being proxied and determine the health of targets based on their behavior when responding to requests

To configure active health checks, you would add the new HealthCheck policy as shown below:

type: HealthCheck
name: web-to-backend
mesh: default
sources:
- match:
    service: web
destinations:
- match:
    service: backend
conf:
  activeChecks:
    interval: 5s
    timeout: 1s
    unhealthyThreshold: 1
    healthyThreshold: 1

This is how you would easily configure passive health checks:

type: HealthCheck
name: web-to-backend
mesh: default
sources:
- match:
    service: web
destinations:
- match:
    service: backend
conf:
  passiveChecks:
    unhealthyThreshold: 3
    penaltyInterval: 5s
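
Either policy can be applied like any other Kuma resource. A minimal sketch, assuming a Universal (standalone) deployment and the policy saved as health-check.yaml (on Kubernetes you would express the same policy as a CRD and kubectl apply it):

$ kumactl apply -f health-check.yaml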

GUI

Kuma now ships with a basic web-based GUI that serves as a visual overview of your data planes, meshes and various traffic policies. The Global Overview provides a summary of all of the meshes found and allows you to switch between them. You can then view each entity and see how many data planes, traffic permissions, routes, and logs are associated with that particular mesh.

If you want to view information regarding a specific mesh, you can go to Overview and select the desired mesh from the pulldown at the top of the sidebar. You can then click on any of the overviews in the sidebar to view the entities and policies associated with that mesh.

Let us know what else you would like to see in Kuma’s new GUI!

Announcements

We’ll be hosting our next online Meetup on January 14, and we hope to see you there. Until then, we hope you enjoy the new features, and let us know what you think! If you have any other feature suggestions, please let us know so we can work together to build them. You can find us on the community Slack channel or through the GitHub repository.

Happy holidays!

The post Kuma 0.3.1 Released with Third-Party CA Support, Health Checks, and a GUI! appeared first on KongHQ.


Canary Deployment in 5 Minutes with Service Mesh


Welcome to our second hands-on Kuma guide! The first one walked you through securing your application with mTLS using Kuma. Today, this guide will walk you through Kuma’s new L4 traffic routing rules. These rules will allow you to easily implement blue/green deployments and canary releases. In summary, Kuma will now alleviate the stress of deploying new versions and/or features into your service mesh. Let’s take a glimpse at how to achieve it in our sample application:

Start Kubernetes and Marketplace Application

To start, you need a Kubernetes cluster with at least 4GB of memory. We’ve tested Kuma on Kubernetes v1.13.0 – v1.16.x, so use anything older than v1.13.0 with caution. In this tutorial, we’ll be using v1.15.4 on minikube, but feel free to run this in a cluster of your choice.

$ minikube start --cpus 2 --memory 4096 --kubernetes-version v1.15.4
😄  minikube v1.4.0 on Darwin 10.14.6
🔥  Creating virtualbox VM (CPUs=2, Memory=4096MB, Disk=20000MB) ...
🐳  Preparing Kubernetes v1.15.4 on Docker 18.09.9 ...
🚜  Pulling images ...
🚀  Launching Kubernetes ...
⌛  Waiting for: apiserver proxy etcd scheduler controller dns
🏄  Done! kubectl is now configured to use "minikube"

When running on Kubernetes, Kuma will store all of its state and configuration on the underlying Kubernetes API server, therefore requiring no external dependency to store the data.

With your Kubernetes cluster up and running, we can throw up a demo application built for Kuma. Deploy the marketplace application by running:

$ kubectl apply -f http://bit.ly/kuma101
namespace/kuma-demo created
serviceaccount/elasticsearch created
service/elasticsearch created
replicationcontroller/es created
deployment.apps/redis-master created
service/redis created
service/backend created
deployment.apps/kuma-demo-backend-v0 created
deployment.apps/kuma-demo-backend-v1 created
deployment.apps/kuma-demo-backend-v2 created
configmap/demo-app-config created
service/frontend created
deployment.apps/kuma-demo-app created

This will deploy our demo marketplace application split across four pods. The first pod is an Elasticsearch service that stores all the items in our marketplace. The second pod is the Vue front-end application that will give us a visual page to interact with. The third pod is our Node API server, which is in charge of interacting with the two databases. Lastly, we have the Redis service that stores reviews for each item. Let’s check that the pods are up and running by checking the kuma-demo namespace:

$ kubectl get pods -n kuma-demo
NAME                                       READY    STATUS      RESTARTS      AGE
es-87mgm                                   1/1      Running        0          91s
kuma-demo-app-7f799bbfdf-7bk2x             2/2      Running        0          91s
kuma-demo-backend-v0-6548b88bf8-46z6n      1/1      Running        0          91s
redis-master-6d4cf995c5-d4kc6              1/1      Running        0          91s

With the application running, port-forward the sample application to access the front-end UI at http://localhost:8080:

$ kubectl port-forward ${KUMA_DEMO_APP_POD_NAME} -n kuma-demo 8080:80
Forwarding from 127.0.0.1:8080 -> 80
Forwarding from [::1]:8080 -> 80
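
Note that KUMA_DEMO_APP_POD_NAME is not set for you. One way to populate it, assuming the front-end pod carries the app=kuma-demo-frontend label that we also rely on later in this guide, is:

$ export KUMA_DEMO_APP_POD_NAME=$(kubectl -n kuma-demo get pods -l app=kuma-demo-frontend -o=jsonpath="{.items[0].metadata.name}")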

Now that you can visualize the application, play around with it! This is what you just created:

The only difference is that this diagram includes the v1 and v2 deployments of our back-end API. If you inspect the pods in the kuma-demo namespace again, you will only find a lonely v0. But don’t worry, I included the deployments for v1 and v2 for you. Before we scale those deployments, let’s add Kuma.

Download Kuma

To start, we need to download the latest version of Kuma. You can find installation procedures for different platforms on our official documentation. The following guide was created on macOS, so it will be using the Darwin build:

$ wget https://kong.bintray.com/kuma/kuma-0.3.0-darwin-amd64.tar.gz
--2019-12-09 11:25:49--  https://kong.bintray.com/kuma/kuma-0.3.0-darwin-amd64.tar.gz
Resolving kong.bintray.com (kong.bintray.com)... 54.149.67.138, 34.215.12.119
Connecting to kong.bintray.com (kong.bintray.com)|54.149.67.138|:443... connected.
HTTP request sent, awaiting response... 302
Location: https://akamai.bintray.com/3a/3afc187b8e3daa912648fcbe16f0aa9c2eb90b4b0df4f0a5d47d74ae426371b1?__gda__=exp=1575920269~hmac=0d7c9af597660ab1036b3d50bef98fc68dfa0b832e2005d25e1628ae92c6621e&response-content-disposition=attachment%3Bfilename%3D%22kuma-0.3.0-darwin-amd64.tar.gz%22&response-content-type=application%2Fgzip&requestInfo=U2FsdGVkX1-CBTtYNUxxbm2yT4muZ0ig1ICnD2XOqJI7BobZ4DB_RouzRRsn3NBrSFjF_IqjN9wzbGk28ZcFS_mD79NCyZ0V0XxawLL8UvY5D8h-QQdfKTeRUpLUqOKI&response-X-Checksum-Sha1=6df196169311c66a544eccfdd73931b6f3b83593&response-X-Checksum-Sha2=3afc187b8e3daa912648fcbe16f0aa9c2eb90b4b0df4f0a5d47d74ae426371b1 [following]
--2019-12-09 11:25:49--  https://akamai.bintray.com/3a/3afc187b8e3daa912648fcbe16f0aa9c2eb90b4b0df4f0a5d47d74ae426371b1?__gda__=exp=1575920269~hmac=0d7c9af597660ab1036b3d50bef98fc68dfa0b832e2005d25e1628ae92c6621e&response-content-disposition=attachment%3Bfilename%3D%22kuma-0.3.0-darwin-amd64.tar.gz%22&response-content-type=application%2Fgzip&requestInfo=U2FsdGVkX1-CBTtYNUxxbm2yT4muZ0ig1ICnD2XOqJI7BobZ4DB_RouzRRsn3NBrSFjF_IqjN9wzbGk28ZcFS_mD79NCyZ0V0XxawLL8UvY5D8h-QQdfKTeRUpLUqOKI&response-X-Checksum-Sha1=6df196169311c66a544eccfdd73931b6f3b83593&response-X-Checksum-Sha2=3afc187b8e3daa912648fcbe16f0aa9c2eb90b4b0df4f0a5d47d74ae426371b1
Resolving akamai.bintray.com (akamai.bintray.com)... 184.27.29.177
Connecting to akamai.bintray.com (akamai.bintray.com)|184.27.29.177|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 38017379 (36M) [application/gzip]
Saving to: ‘kuma-0.3.0-darwin-amd64.tar.gz’

kuma-0.3.0-darwin-amd64.tar.gz      100%[================================================================>]  36.26M  4.38MB/s    in 8.8s

2019-12-09 11:25:59 (4.13 MB/s) - ‘kuma-0.3.0-darwin-amd64.tar.gz’ saved [38017379/38017379]

Next, let’s unbundle the files to get the following components:

$ tar xvzf kuma-0.3.0-darwin-amd64.tar.gz
x ./
x ./conf/
x ./conf/kuma-cp.conf
x ./bin/
x ./bin/kuma-tcp-echo
x ./bin/kuma-dp
x ./bin/kumactl
x ./bin/kuma-cp
x ./bin/envoy
x ./NOTICE
x ./README
x ./LICENSE

Lastly, go into the ./bin directory where the Kuma components will be:

$ cd bin && ls
envoy   kuma-cp   kuma-dp   kuma-tcp-echo kumactl

Install Kuma

With Kuma downloaded, let’s utilize kumactl to install Kuma on our cluster. The kumactl executable is a very important component in your journey with Kuma, so be sure to read more about it here. Run the following command to install Kuma onto our Kubernetes cluster:

$ ./kumactl install control-plane | kubectl apply -f -
namespace/kuma-system created
secret/kuma-admission-server-tls-cert created
secret/kuma-injector-tls-cert created
secret/kuma-sds-tls-cert created
configmap/kuma-control-plane-config created
configmap/kuma-injector-config created
serviceaccount/kuma-control-plane created
customresourcedefinition.apiextensions.k8s.io/dataplaneinsights.kuma.io created
customresourcedefinition.apiextensions.k8s.io/dataplanes.kuma.io created
customresourcedefinition.apiextensions.k8s.io/meshes.kuma.io created
customresourcedefinition.apiextensions.k8s.io/proxytemplates.kuma.io created
customresourcedefinition.apiextensions.k8s.io/trafficlogs.kuma.io created
customresourcedefinition.apiextensions.k8s.io/trafficpermissions.kuma.io created
customresourcedefinition.apiextensions.k8s.io/trafficroutes.kuma.io created
clusterrole.rbac.authorization.k8s.io/kuma:control-plane created
clusterrolebinding.rbac.authorization.k8s.io/kuma:control-plane created
role.rbac.authorization.k8s.io/kuma:control-plane created
rolebinding.rbac.authorization.k8s.io/kuma:control-plane created
service/kuma-injector created
service/kuma-control-plane created
deployment.apps/kuma-control-plane created
deployment.apps/kuma-injector created
mutatingwebhookconfiguration.admissionregistration.k8s.io/kuma-admission-mutating-webhook-configuration created
mutatingwebhookconfiguration.admissionregistration.k8s.io/kuma-injector-webhook-configuration created
validatingwebhookconfiguration.admissionregistration.k8s.io/kuma-validating-webhook-configuration created

When deploying on Kubernetes, you change the state of Kuma by leveraging Kuma’s CRDs. Therefore, we will now use kubectl for the remainder of the demo. To start, let’s check that the pods are up and running within the kuma-system namespace:

$ kubectl get pods -n kuma-system
NAME                                  READY   STATUS    RESTARTS   AGE
kuma-control-plane-7bcc56c869-lzw9t   1/1     Running   0          70s
kuma-injector-9c96cddc8-745r7         1/1     Running   0          70s

While running on Kubernetes, no external dependencies are required, since Kuma leverages the underlying Kubernetes API server to store its configuration. However, as you can see above, a kuma-injector service will also start in order to automatically inject sidecar data plane proxies without human intervention. Data plane proxies are injected into namespaces that include the following label:

kuma.io/sidecar-injection: enabled
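
The kuma-demo namespace from the demo manifest already carries this label, which is why no extra step is needed here. If you are meshing your own application, one way to opt a namespace in (using a hypothetical my-namespace as an example) is:

$ kubectl label namespace my-namespace kuma.io/sidecar-injection=enabled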

Now that our control plane and injector are running, let’s delete the existing kuma-demo pods so they restart. This will give the injector a chance to inject a sidecar proxy into each pod.

$ kubectl delete pods --all -n kuma-demo
pod "es-87mgm" deleted
pod "kuma-demo-app-7f799bbfdf-7bk2x" deleted
pod "kuma-demo-backend-v0-6548b88bf8-46z6n" deleted
pod "redis-master-6d4cf995c5-d4kc6" deleted

Check that the pods are up and running again with an additional container. The additional container is the Envoy sidecar proxy that Kuma is injecting into each pod.

$ kubectl get pods -n kuma-demo
NAME                                    READY    STATUS     RESTARTS    AGE
es-jxzfp                                2/2      Running    0           43s
kuma-demo-app-7f799bbfdf-p5gjq          3/3      Running    0           43s
kuma-demo-backend-v0-6548b88bf8-8sbzn   2/2      Running    0           43s
redis-master-6d4cf995c5-42hlc           2/2      Running    0           42s

Now if we port-forward our marketplace application again, I challenge you to spot the difference.

$ kubectl port-forward ${KUMA_DEMO_APP_POD_NAME} -n kuma-demo 8080:80
Forwarding from 127.0.0.1:8080 -> 80
Forwarding from [::1]:8080 -> 80

A-ha! Couldn’t find a thing, right? Well, that is because Kuma doesn’t require a change to your application’s code in order to be used. The only change is that Envoy now handles all the traffic between the services. Kuma implements a pragmatic approach that is very different from the first-generation control planes:

  • It runs with low operational overhead across all the organization
  • It supports every platform
  • It’s easy to use while relying on a solid networking foundation delivered by Envoy – and we see it in action right here!

Canary Deployment

With the mesh up and running, let’s start expanding our application with brand new features. Our current marketplace application has no sales. With the holiday season upon us, the engineering team worked hard to develop v1 and v2 versions of the Kuma marketplace to support flash sales. The backend-v1 service will always have one item on sale, and the backend-v2 service will always have two items on sale. So to start, scale up the deployments of v1 and v2 like so:

$ kubectl scale deployment kuma-demo-backend-v1 -n kuma-demo --replicas=1
deployment.extensions/kuma-demo-backend-v1 scaled

and

$ kubectl scale deployment kuma-demo-backend-v2 -n kuma-demo --replicas=1
deployment.extensions/kuma-demo-backend-v2 scaled

Now if we check our pods again, you will see three backend services:

$ kubectl get pods -n kuma-demo
NAME                                       READY   STATUS      RESTARTS    AGE
es-jxzfp                                   2/2     Running      0          9m16s
kuma-demo-app-7f799bbfdf-p5gjq             3/3     Running      0          9m16s
kuma-demo-backend-v0-6548b88bf8-8sbzn      2/2     Running      0          9m16s
kuma-demo-backend-v1-894bcd4bc-p7xz8       2/2     Running      0          20s
kuma-demo-backend-v2-dffb4bffd-48z67       2/2     Running      0          11s
redis-master-6d4cf995c5-42hlc              2/2     Running      0          9m15s

With the new versions up and running, we can use the new TrafficRoute policy to slowly roll the flash-sale capability out to users. This is also known as a canary deployment: a pattern for rolling out new releases to a subset of users or servers. By deploying the change to a small subset of users, we can test its stability and make sure we don’t go broke by introducing too many sales at once.

First, define the following alias:

$ alias benchmark='echo "NUM_REQ NUM_SPECIAL_OFFERS"; kubectl -n kuma-demo exec $( kubectl -n kuma-demo get pods -l app=kuma-demo-frontend -o=jsonpath="{.items[0].metadata.name}" ) -c kuma-fe -- sh -c '"'"'for i in `seq 1 100`; do curl -s http://backend:3001/items?q | jq -c ".[] | select(._source.specialOffer == true)" | wc -l ; done | sort | uniq -c | sort -k2n'"'"''
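
Broken apart for readability, the alias does roughly the following (an illustrative sketch of the same one-liner, not meant to be pasted as-is):

# Find the front-end pod by its app=kuma-demo-frontend label
POD=$(kubectl -n kuma-demo get pods -l app=kuma-demo-frontend \
  -o=jsonpath="{.items[0].metadata.name}")
# From inside the kuma-fe container, send 100 requests to the backend,
# count the special offers in each response, then group by that count
kubectl -n kuma-demo exec $POD -c kuma-fe -- sh -c '
  for i in `seq 1 100`; do
    curl -s "http://backend:3001/items?q" \
      | jq -c ".[] | select(._source.specialOffer == true)" \
      | wc -l
  done | sort | uniq -c | sort -k2n'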

This alias will send 100 requests from frontend-app to backend-api and count the number of special offers in each response. It will then group the requests by the number of special offers. Here is an example of the output before we start configuring our traffic routing:

$ benchmark
NUM_REQ    NUM_SPECIAL_OFFERS
34                     0
33                     1
33                     2

The traffic is equally distributed because we have not set any traffic routing yet. Let’s change that! Here is what we need to achieve:

We can achieve that with the following policy:

cat <<EOF | kubectl apply -f -
apiVersion: kuma.io/v1alpha1
kind: TrafficRoute
metadata:
  name: frontend-to-backend
  namespace: kuma-demo
mesh: default
spec:
  sources:
  - match:
      service: frontend.kuma-demo.svc:80
  destinations:
  - match:
      service: backend.kuma-demo.svc:3001
  conf:
  # it is NOT a percentage. just a positive weight
  - weight: 80
    destination:
      service: backend.kuma-demo.svc:3001
      version: v0
  # we're NOT checking if total of all weights is 100
  - weight: 20
    destination:
      service: backend.kuma-demo.svc:3001
      version: v1
  # 0 means no traffic will be sent there
  - weight: 0
    destination:
      service: backend.kuma-demo.svc:3001
      version: v2
EOF

trafficroute.kuma.io/frontend-to-backend created

That is all that is necessary! With one simple policy and the weight you apply to each matching service, you can slowly roll out the v1 and v2 versions of your application. Let’s run the benchmark alias one more time to see the TrafficRoute policy in action:

$ benchmark
NUM_REQ    NUM_SPECIAL_OFFERS
83                     0
17                     1

We do not see any results for two special offers because v2 is configured with a weight of 0. Once we’re comfortable that our rollout of v1 won’t bankrupt us, we can slowly apply weight to v2.
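
As a sketch of that next step, you could re-apply the same TrafficRoute with the weights shifted toward the newer versions, for example a 60/30/10 split (the numbers here are illustrative, not part of the demo):

cat <<EOF | kubectl apply -f -
apiVersion: kuma.io/v1alpha1
kind: TrafficRoute
metadata:
  name: frontend-to-backend
  namespace: kuma-demo
mesh: default
spec:
  sources:
  - match:
      service: frontend.kuma-demo.svc:80
  destinations:
  - match:
      service: backend.kuma-demo.svc:3001
  conf:
  - weight: 60
    destination:
      service: backend.kuma-demo.svc:3001
      version: v0
  - weight: 30
    destination:
      service: backend.kuma-demo.svc:3001
      version: v1
  - weight: 10
    destination:
      service: backend.kuma-demo.svc:3001
      version: v2
EOF

You can also see the action live on the webpage. One last time, port-forward the application frontend like so: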

$ kubectl port-forward ${KUMA_DEMO_APP_POD_NAME} -n kuma-demo 8080:80
Forwarding from 127.0.0.1:8080 -> 80
Forwarding from [::1]:8080 -> 80

Two out of roughly 10 requests to our webpage will have the sale feature enabled:

That’s all! This was a really quick run-through, so make sure you check out Kuma’s official webpage or repository to find out about more features. You can also join our Slack channel to chat with us live and meet the community! Lastly, sign up for the Kuma newsletter below to stay up-to-date as we push out more features that will make this the best service mesh solution for you.

The post Canary Deployment in 5 Minutes with Service Mesh appeared first on KongHQ.

Kong Studio 1.0 Released!


Today, we’re thrilled to release Kong Studio 1.0, our spec-first design and development tool for APIs leveraging the power of Insomnia! In this release, you’ll find the ability to design specifications, sync with git, convert your spec into requests for debugging purposes and more.

Kong Studio represents a brand new product area for Kong — an integrated design and test environment for Kong Enterprise customers. We are excited to extend our service control platform to include pre-production use cases focused on improving the way that customers build and test their microservices and APIs. With Kong Studio, customers can easily adopt a modern, spec-driven approach to development while also automating many of the tedious aspects of maintaining API documentation in increasingly complex service environments. 

We built Kong Studio on top of the popular open source Insomnia API testing platform (now a part of the Kong family) to solve modern spec design and testing challenges. With its native integration with Kong Enterprise, Kong Studio 1.0 allows users to seamlessly edit, test and publish REST and GraphQL services directly into the Kong Developer Portal.

Some of the benefits Kong Enterprise customers will see by adopting spec-driven development with Kong Studio include: 

  • Increased Developer Efficiency
  • Reduced Deployment Risk 
  • Improved Governance 

Kong Studio is available as a standalone add-on for Kong Enterprise customers. Please reach out to your account executive for more information about adding Kong Studio as part of your Kong Enterprise subscription.

Notable Features

OpenAPI Spec Editor

Kong Studio ships with a built-in editor and includes the features you need for highly productive spec design. Features include navigation and linting of your OpenAPI spec as you design.

Learn More…

Git Sync

Kong Studio is built for the API DevOps lifecycle, where infrastructure and configuration are code. We enable this through tight integration with Git. Regardless of whether you’re using GitHub, Bitbucket or GitLab, you can import, commit, create branches, swap branches and more directly from Kong Studio.

Learn More…

Generate Requests from Specs

Kong Studio also provides tight integration with the Insomnia core. As you design and edit your specification, you can quickly generate and update existing requests directly from the OpenAPI spec editor built into Studio. Upon generating requests, you’ll enter the debugging mode — the Insomnia UI you’re already familiar with — and can quickly begin debugging your spec.

Learn More…

OpenAPI GraphQL Support

We believe API documentation support for GraphQL could be better. With that in mind, Kong Studio comes with auto-detection for GraphQL APIs, even when they are documented through OpenAPI.

Learn More…

Deploy to Kong’s Developer Portal

Lastly, and most importantly, comes one of the key integrations of Kong Studio: integration with the Kong Enterprise platform. Directly from within Kong Studio, you’ll be able to deploy the OpenAPI spec you’ve been designing and debugging straight to the Kong Developer Portal of your choice. No matter what workspace, we’ve got you covered. Made changes and want to update your spec on the Kong Developer Portal? We’ve got you covered there too.

Learn More…

We’re excited to get customers up and running on Kong Studio. To learn more about how Studio can help your developers build better services, check out our webinar or reach out to us directly.

The post Kong Studio 1.0 Released! appeared first on KongHQ.

Securing Kubernetes Applications in 5 Minutes with Service Mesh


We announced the release of Kuma – a modern, universal control plane for service mesh – back in September 2019. Since then, a roaring wave of community feedback and contributions has flooded the project. And that’s a good thing, so thank you to everyone who has given their time to helping Kuma grow. One recurring piece of feedback was that the community was excited to see a platform-agnostic service mesh. Unlike other control planes, Kuma natively runs across any platform, and it’s not limited in scope. With KubeCon NA right around the corner, let’s explore one of the platforms on which you can deploy Kuma.

Start Kubernetes and Marketplace Application

To start, you need a Kubernetes cluster with at least 4GB of memory. We’ve tested Kuma on Kubernetes v1.13.0 – v1.16.x, so use anything older than v1.13.0 with caution. In this tutorial, we’ll be using v1.15.4 on minikube, but feel free to run this in a cluster of your choice.

$ minikube start --cpus 2 --memory 4096 --kubernetes-version v1.15.4
😄  minikube v1.4.0 on Darwin 10.14.6
🔥  Creating virtualbox VM (CPUs=2, Memory=4096MB, Disk=20000MB) ...
🐳  Preparing Kubernetes v1.15.4 on Docker 18.09.9 ...
🚜  Pulling images ...
🚀  Launching Kubernetes ...
⌛  Waiting for: apiserver proxy etcd scheduler controller dns
🏄  Done! kubectl is now configured to use "minikube"

When running on Kubernetes, Kuma will store all of its state and configuration on the underlying Kubernetes API server, therefore requiring no external dependency to store the data.

With your Kubernetes cluster up and running, we can throw up a demo application built for Kuma. Deploy the marketplace application by running:

$ kubectl apply -f http://bit.ly/kong1337
namespace/kuma-demo created
serviceaccount/elasticsearch created
service/elasticsearch created 
replicationcontroller/es created
deployment.apps/redis-master created
service/redis-master created
service/kuma-demo-api created
deployment.apps/kuma-demo-app created

This will deploy our demo marketplace application split across three pods. The first pod is an Elasticsearch service that stores all the items in our marketplace. The second pod is a Redis service that stores reviews for each item. The third pod is our Node/Vue application that allows you to visually query the Elasticsearch and Redis endpoints. Let’s check that the pods are up and running by inspecting the kuma-demo namespace:

$ kubectl get pods -n kuma-demo
NAME                            READY   STATUS    RESTARTS   AGE
es-n8df7                        1/1     Running   0          13m
kuma-demo-app-8fc49ddbf-gfjtb   2/2     Running   0          13m
redis-master-6d4cf995c5-nsghm   1/1     Running   0          13m

With the application running, port-forward the sample application to access the front-end UI at http://localhost:8080:

$ kubectl port-forward ${KUMA_DEMO_APP_POD_NAME} -n kuma-demo 8080 3001
Forwarding from 127.0.0.1:8080 -> 8080
Forwarding from [::1]:8080 -> 8080
Forwarding from 127.0.0.1:3001 -> 3001
Forwarding from [::1]:3001 -> 3001

Now that you can visualize the application, play around with it! You should be able to search for all the items we sell on the marketplace, along with some reviews. While this application works, it is lacking a few important features. First, the traffic between the services is not encrypted. Second, we have no observability into our application if something were to fail. And lastly, if we needed to change how services communicate, that would not be easily achievable. So let’s quickly solve all three problems using Kuma.

Download Kuma

To start, we need to download the latest version of Kuma. You can find installation procedures for different platforms on our official documentation. The following guide was created on macOS, so it will be using the Darwin build:

$ wget https://kong.bintray.com/kuma/kuma-0.2.2-darwin-amd64.tar.gz
--2019-10-13 05:53:46--  https://kong.bintray.com/kuma/kuma-0.2.2-darwin-amd64.tar.gz
Resolving kong.bintray.com (kong.bintray.com)... 52.88.33.18, 54.200.232.13
Connecting to kong.bintray.com (kong.bintray.com)|52.88.33.18|:443... connected.
HTTP request sent, awaiting response... 302
Location: https://akamai.bintray.com/69/694567d6d0d64f5eb5a5841aea3b4c3d60c8f2a6e6c3ff79cd5d580edf22e12b?__gda__=exp=1570917947~hmac=68f26ab23b95f97acebfc4b33a1bc1e88aeca46a44b1bc349af851019c941d0a&response-content-disposition=attachment%3Bfilename%3D%22kuma-0.2.2-darwin-amd64.tar.gz%22&response-content-type=application%2Fgzip&requestInfo=U2FsdGVkX1_SREBFG76q54ykX416x4BKSbGVrX5A-GfV55I-FdyX_0L9WI3EaLJdsXfRQ4V2pY3vP9viaRvtUxQEjLKVz_AEytCDaz5VW3oTvdhio0sq10KPgW3Z3hFN&response-X-Checksum-Sha1=01c56caae58a6d14a1ad24545ee0b25421c6d48e&response-X-Checksum-Sha2=694567d6d0d64f5eb5a5841aea3b4c3d60c8f2a6e6c3ff79cd5d580edf22e12b [following]
--2019-10-13 05:53:47--  https://akamai.bintray.com/69/694567d6d0d64f5eb5a5841aea3b4c3d60c8f2a6e6c3ff79cd5d580edf22e12b?__gda__=exp=1570917947~hmac=68f26ab23b95f97acebfc4b33a1bc1e88aeca46a44b1bc349af851019c941d0a&response-content-disposition=attachment%3Bfilename%3D%22kuma-0.2.2-darwin-amd64.tar.gz%22&response-content-type=application%2Fgzip&requestInfo=U2FsdGVkX1_SREBFG76q54ykX416x4BKSbGVrX5A-GfV55I-FdyX_0L9WI3EaLJdsXfRQ4V2pY3vP9viaRvtUxQEjLKVz_AEytCDaz5VW3oTvdhio0sq10KPgW3Z3hFN&response-X-Checksum-Sha1=01c56caae58a6d14a1ad24545ee0b25421c6d48e&response-X-Checksum-Sha2=694567d6d0d64f5eb5a5841aea3b4c3d60c8f2a6e6c3ff79cd5d580edf22e12b
Resolving akamai.bintray.com (akamai.bintray.com)... 104.93.1.149
Connecting to akamai.bintray.com (akamai.bintray.com)|104.93.1.149|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 42892462 (41M) [application/gzip]
Saving to: ‘kuma-0.2.2-darwin-amd64.tar.gz’

kuma-0.2.2-darwin-amd64.tar.g 100%[===============================================>]  40.91M  2.61MB/s    in 20s

2019-10-13 05:54:08 (2.09 MB/s) - ‘kuma-0.2.2-darwin-amd64.tar.gz’ saved [42892462/42892462]

Next, let’s unbundle the files to get the following components:

$ tar xvzf kuma-0.2.2-darwin-amd64.tar.gz
x ./
x ./conf/
x ./conf/kuma-cp.conf
x ./bin/
x ./bin/kuma-dp
x ./bin/envoy
x ./bin/kuma-tcp-echo
x ./bin/kumactl
x ./bin/kuma-cp
x ./README
x ./LICENSE

Lastly, go into the ./bin directory where the Kuma components will be:

$ cd bin && ls
envoy   kuma-cp   kuma-dp   kuma-tcp-echo kumactl

Install Kuma

With Kuma downloaded, let’s utilize kumactl to install Kuma on our cluster. The kumactl executable is a very important component in your journey with Kuma, so be sure to read more about it here. Run the following command to install Kuma onto our Kubernetes cluster:

$ ./kumactl install control-plane | kubectl apply -f -
namespace/kuma-system created
secret/kuma-injector-tls-cert created
secret/kuma-sds-tls-cert created
secret/kuma-admission-server-tls-cert created
configmap/kuma-injector-config created
serviceaccount/kuma-control-plane created
customresourcedefinition.apiextensions.k8s.io/dataplaneinsights.kuma.io created
customresourcedefinition.apiextensions.k8s.io/dataplanes.kuma.io created
customresourcedefinition.apiextensions.k8s.io/meshes.kuma.io created
customresourcedefinition.apiextensions.k8s.io/proxytemplates.kuma.io created
customresourcedefinition.apiextensions.k8s.io/trafficlogs.kuma.io created
customresourcedefinition.apiextensions.k8s.io/trafficpermissions.kuma.io created
clusterrole.rbac.authorization.k8s.io/kuma:control-plane created
clusterrolebinding.rbac.authorization.k8s.io/kuma:control-plane created
role.rbac.authorization.k8s.io/kuma:control-plane created
rolebinding.rbac.authorization.k8s.io/kuma:control-plane created
service/kuma-injector created
service/kuma-control-plane created
deployment.apps/kuma-control-plane created
deployment.apps/kuma-injector created
mutatingwebhookconfiguration.admissionregistration.k8s.io/kuma-admission-mutating-webhook-configuration created
mutatingwebhookconfiguration.admissionregistration.k8s.io/kuma-injector-webhook-configuration created

When deploying on Kubernetes, you change the state of Kuma by leveraging Kuma’s CRDs. Therefore, we will now use kubectl for the remainder of the demo. To start, let’s check that the pods are up and running within the kuma-system namespace:

$ kubectl get pods -n kuma-system
NAME                                  READY   STATUS    RESTARTS   AGE
kuma-control-plane-7bcc56c869-lzw9t   1/1     Running   0          70s
kuma-injector-9c96cddc8-745r7         1/1     Running   0          70s

While running on Kubernetes, no external dependencies are required, since Kuma leverages the underlying Kubernetes API server to store its configuration. However, as you can see above, a kuma-injector service will also start in order to automatically inject sidecar data plane proxies without human intervention. Data plane proxies are injected into namespaces that include the following label:

kuma.io/sidecar-injection: enabled

Now that our control plane and injector are running, let’s delete the existing kuma-demo pods so they restart. This will give the injector a chance to inject a sidecar proxy into each pod.

$ kubectl delete pods --all -n kuma-demo
pod "es-n8df7" deleted
pod "kuma-demo-app-8fc49ddbf-gfjtb" deleted
pod "redis-master-6d4cf995c5-nsghm" deleted

Check that the pods are up and running again with an additional container. The additional container is the Envoy sidecar proxy that Kuma is injecting into each pod.

$ kubectl get pods -n kuma-demo
NAME                            READY   STATUS    RESTARTS   AGE
es-gsc8w                        2/2     Running   0          2m25s
kuma-demo-app-8fc49ddbf-k5z5q   3/3     Running   0          2m25s
redis-master-6d4cf995c5-jxjjm   2/2     Running   0          2m25s

Now if we port-forward our marketplace application again, I challenge you to spot the difference.

$ kubectl port-forward ${KUMA_DEMO_APP_POD_NAME} -n kuma-demo 8080 3001
Forwarding from 127.0.0.1:8080 -> 8080
Forwarding from [::1]:8080 -> 8080
Forwarding from 127.0.0.1:3001 -> 3001
Forwarding from [::1]:3001 -> 3001

A-ha! Couldn’t find a thing, right? Well, that is because Kuma doesn’t require a change to your application’s code in order to be used. The only change is that Envoy now handles all the traffic between the services. Kuma implements a pragmatic approach that is very different from the first-generation control planes:

  • It runs with low operational overhead across all the organization
  • It supports every platform
  • It’s easy to use while relying on a solid networking foundation delivered by Envoy.

And we see it in action right here!

Powerful Policies

With the mesh up and running, let’s start tackling the three issues I raised about this application. First, we have no encryption between our services, which leaves us vulnerable to attack. Kuma can easily fix this by utilizing the mutual TLS policy. This policy enables automatic encrypted mTLS traffic for all the services in a Mesh. Kuma ships with a builtin CA (Certificate Authority), which is initialized with an auto-generated root certificate. The root certificate is unique for every Mesh, and it is used to sign identity certificates for every data plane. By default, mTLS is not enabled. You can enable mutual TLS by updating the Mesh policy like so:

$ cat <<EOF | kubectl apply -f - 
apiVersion: kuma.io/v1alpha1
kind: Mesh
metadata:
  name: default
  namespace: kuma-system
spec:
  mtls:
    ca:
      builtin: {}
    enabled: true
EOF
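
To confirm the change took effect, you can read the Mesh resource back. A quick sanity check, assuming the Mesh object is namespaced under kuma-system as in the manifest above:

$ kubectl get mesh default -n kuma-system -o yaml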

With mTLS enabled, traffic is restricted by default, so remember to apply a TrafficPermission policy to permit connections between data planes. If you try to access the application now, you will no longer see any items or reviews, because the traffic between Node and Elasticsearch or Redis is blocked. Traffic Permissions allow you to determine security rules for services that consume other services via their tags. It is a very useful policy for increasing security in the Mesh and compliance in the organization. You can determine which source services are allowed to consume specific destination services like so:

$ cat <<EOF | kubectl apply -f - 
apiVersion: kuma.io/v1alpha1
kind: TrafficPermission
mesh: default
metadata:
  namespace: kuma-demo
  name: everything
spec:
  rules:
  - sources:
    - match:
        service: '*'
    destinations:
    - match:
        service: '*'
EOF

In this case, our rule states that any source service has permission to route traffic to any destination service. So if we now access our marketplace at http://localhost:8080, the demo application will look normal again. However, now all the traffic between Elasticsearch, Node, and Redis is encrypted!

But wait! Hypothetically, some other marketplace is disgruntled by our awesome webpage and starts spamming all our products with fake reviews. What could we do? With the same TrafficPermission policy, we can easily lock down our Redis service. Let’s give that a shot:

$ cat <<EOF | kubectl apply -f - 
apiVersion: kuma.io/v1alpha1
kind: TrafficPermission
mesh: default
metadata:
  namespace: kuma-demo
  name: everything
spec:
  rules:
  - sources:
    - match:
        service: 'kuma-demo-api.kuma-demo.svc:3001'
    destinations:
    - match:
        service: 'elasticsearch.kuma-demo.svc:80'
EOF

In this manifest, I’m changing the everything TrafficPermission policy to have a very specific source and destination. Now, only traffic from the kuma-demo-api service will be routed to the Elasticsearch service. Essentially, this cuts the Redis service out of the application, giving us some time to find out who is targeting us and remove the falsified reviews.
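
Later, once the spam subsides, restoring Redis access is just another rule in the same policy. A sketch, assuming the Redis service is exposed to Kuma as redis-master.kuma-demo.svc:6379 (check your service definition for the exact name and port):

$ cat <<EOF | kubectl apply -f -
apiVersion: kuma.io/v1alpha1
kind: TrafficPermission
mesh: default
metadata:
  namespace: kuma-demo
  name: everything
spec:
  rules:
  - sources:
    - match:
        service: 'kuma-demo-api.kuma-demo.svc:3001'
    destinations:
    - match:
        service: 'elasticsearch.kuma-demo.svc:80'
  - sources:
    - match:
        service: 'kuma-demo-api.kuma-demo.svc:3001'
    destinations:
    - match:
        service: 'redis-master.kuma-demo.svc:6379'
EOF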

That’s all! This was a really quick run-through, so make sure you check out Kuma’s official webpage or repository to find out about more features. You can also join our Slack channel to chat with us live and meet the community! Lastly, sign up for the Kuma newsletter below to stay up-to-date as we push out more features that will make this the best service mesh solution for you.

The post Securing Kubernetes Applications in 5 Minutes with Service Mesh appeared first on KongHQ.

Kong and codecentric AG Partner to Bring Kong Enterprise to Germany


Today, we’re announcing an exciting new partnership with codecentric AG, a leading IT consultancy based in Germany. Through Kong’s growing Go-To-Market (GTM) Partner Program, codecentric will help German companies accelerate their transition to microservices by adopting Kong Enterprise.

At codecentric, API management is a very important pillar of IT integration. Digital transformation requires effective information exchange around the clock — both internally and beyond company walls. For business processes, models and IT architectures to be flexible, seamless and future-oriented, all of a company’s applications must be able to act as consumers and producers of services at any time. This requires an efficient API strategy to ensure security, consistency, scalability and maintainability.

Cloud native technologies and microservices play a crucial role in making this happen for organizations that want to expand their IT landscape during digital transformation while also simplifying the complexity of modern architectures. codecentric and Kong will work together to make this an easy journey for German companies.

Interested in joining the Kong GTM Partner Program? Learn more and apply at https://konghq.com/partners/.

The post Kong and codecentric AG Partner to Bring Kong Enterprise to Germany appeared first on KongHQ.


Introducing Kong for Kubernetes: Kubernetes-Native Ingress and API Management


At this year’s KubeCon, we debuted Kong for Kubernetes, the industry’s only fully Kubernetes-native ingress controller that supports end-to-end API management and is backed by an enterprise support subscription. Kong for Kubernetes builds on Kong’s open source Ingress Controller with Kong Enterprise plugins to provide Kubernetes deployments with native integration with Prometheus, Jaeger and other cloud native projects, enterprise-grade authentication, traffic control, transformations and more.

Over the past few years, Kubernetes has become the de facto standard for container orchestration. However, despite its broad adoption, Kubernetes ingress solutions have to date failed to provide a comprehensive solution for end-to-end API management and traffic control for all applications deployed on a cluster. Legacy API management platforms cannot easily integrate with Kubernetes due to their monolithic runtime architecture and fail to provide native management of APIs within Kubernetes via the Kubernetes APIs (kubectl and CRDs). Similarly, dedicated Kubernetes ingress solutions lack the comprehensive security capabilities and support needed within enterprise organizations.

Kong for Kubernetes addresses these concerns by providing a Kubernetes-native ingress and API management solution, complete with out-of-the-box security, traffic control and enterprise support. Kong for Kubernetes differentiates from other solutions by enabling end-to-end workflows for managing APIs and ingress traffic within kubectl to facilitate a GitOps-based operational change model. Below, we detail some of the key capabilities of Kong for Kubernetes that make it the ideal fit for organizations leveraging Kubernetes in production.

Kong Enterprise

With Kong for Kubernetes as an ingress point, you get a number of enhancements to your experience by enabling Kong Enterprise plugins that help you further extend Kong use cases and customization specific to your organizational needs. Some of our popular plugins include:

  • OpenID Connect to integrate Kong with a third-party OpenID Connect 1.0 provider
  • Advanced Rate Limiting to rate limit how many HTTP requests developers can make
  • Advanced Proxy Caching to cache and serve commonly requested responses in Kong
  • Advanced Request Transformer to use powerful regular expressions, variables and templates to transform API requests

Kong for Kubernetes allows Kong Enterprise customers to implement the same authentication and traffic control policies for Kubernetes as their other API gateways to ensure consistent access control.  

Furthermore, the Kong Ingress Controller automatically maps  Kubernetes namespaces to Kong Enterprise Workspaces and Kong RBAC to Kubernetes RBAC, leading to a fluid experience of managing policies and privileges within Kubernetes. 

Service Mesh Integration with Kuma and Istio

The Kong Ingress Controller can now be integrated with service meshes such as Istio and Kuma by acting as an ingress point in a service mesh deployment. This setup makes the Kong Ingress Controller the single port of entry for all external traffic coming into the service mesh.

Kong Ingress handles all external client-facing routing, policies, documentation and metrics, while load-balancing and service-to-service policy enforcement is performed through the underlying service mesh solution.

This flexible architecture allows Kubernetes cluster owners to use their preferred service mesh to manage east-west traffic while benefiting from the capabilities of the Kong Ingress Controller for all north-south traffic.

The following graphic shows a high-level deployment of Kong Ingress Controller using either the Kuma or Istio service mesh. Envoy is injected as a sidecar to Kong Ingress pod and handles the routing for all traffic upstream.

Getting Started

Wondering whether Kong for Kubernetes will meet the needs of your service environment? Kong Ingress Controller supports flexible deployment options, with installation using a Kubernetes Operator, Helm Chart, YAML manifests and Kustomize.
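
For example, a minimal Helm-based install might look like the following sketch, assuming Helm 3 and Kong’s chart repository at https://charts.konghq.com (consult the installation documentation for the current chart values):

$ helm repo add kong https://charts.konghq.com
$ helm repo update
$ helm install kong kong/kong --set ingressController.enabled=true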

We are excited to provide a flexible Kubernetes-native ingress and API management solution, complete with out-of-the-box security, traffic control and enterprise support.

Ready to get your hands dirty with K4K8S? Our live tutorial lets you start playing with K4K8S immediately. 

Visit our installation documentation page to learn how to download K4K8S and get running.  

The post Introducing Kong for Kubernetes: Kubernetes-Native Ingress and API Management appeared first on KongHQ.

5 Sessions to Add to Your AWS re:Invent 2019 Schedule


AWS re:Invent is an annual cloud computing conference hosted by Amazon Web Services that attracts tens of thousands of AWS staff, partners and users from all over the world. This year, Kong is proud to be attending as a Gold Sponsor, and we are gearing up for a full week of all things software infrastructure in Las Vegas on December 2-6.

Meet us at booth #2525 in the Exhibitors Hall for a chance to win a variety of prizes while learning about how Kong Enterprise can connect your development teams, partners and customers with a unified platform.

With access to more than 2,000 technical sessions, keynotes and certification opportunities, planning out your week along the Vegas Strip can feel downright daunting. Below, we’ve highlighted five sessions to help get you started:

1. Optimizing Microservices for Scale: Deploying Kong in ECS 

Thursday, December 5, 2:50 PM – 3:30 PM

Startup Central Stage, Expo Hall

You’ve made the big leap to microservices, but what strategies do you need to scale your services effectively? Marco Palladino, CTO and co-founder of Kong, will explore ways organizations can leverage the Kong API gateway in Amazon ECS to simplify cluster management and enable serverless functions in Lambda. He will discuss the journey to microservices, strategies for operating microservices at scale, deploying Kong using Amazon ECS, Amazon ElastiCache for Redis and Amazon Aurora with PostgreSQL Compatibility, as well as best practices for using Kong to secure, authorize and monitor microservices traffic.

2. Why observability requires the marriage of AI, metrics, and logs

Thursday, December 5, 1:45 PM – 2:45 PM

MGM, Level 1, Grand Ballroom 124

The new digital world presents a great opportunity as workloads move to the cloud and containers and companies benefit from serverless computing and an agile application delivery chain. However, these opportunities come with significant challenges. Site reliability engineers have been tasked with knitting together disparate platforms to build an observable stack, which is imperative for early detection of service degradation issues. We demonstrate a novel alternative that combines metrics, logs, and alerts into a comprehensive AIOps approach. Learn how to deliver an AI-enabled service that provides instant observability of your cloud application stack and how to combine logs and metrics into a single pane of glass. This presentation is brought to you by Moogsoft, an APN Partner.

3. Decoupled microservices: Building scalable applications

Monday, December 2, 1:00 PM – 3:15 PM

Aria, Level 1 East, Joshua 6

Often, when the microservices architecture style is applied, much of the communication between components is done over the network. In order to achieve the promises of microservices, this communication needs to happen in a loosely coupled manner. One frequently used option is to have all services expose an API following the REST architectural style. However, there is another option that provides even looser coupling: asynchronous messaging. In this workshop, you learn how to use AWS messaging services to build decoupled microservices architectures to achieve massive scale.

4. How Ticketmaster runs Kubernetes for 80% less without managing VMs

Wednesday, December 4, 5:30 PM – 6:30 PM

Aria, Level 1 East, Joshua 9

Serverless containers are the future of containers infrastructure. Matching and scaling the right infrastructure resource to ever-changing microservices deployments is a challenge. In this talk, the Ticketmaster engineering team reviews the evolution of containers deployments and the automatic scaling of infrastructure in Kubernetes. They discuss the tradeoffs and introduce a new approach to deploying serverless containers using Spotinst Ocean. Join this session to learn how Ticketmaster was able to run 100 percent of its Amazon EC2 on Spot Instances with programmatic fallback to On-Demand Instances or Reserved Instances across multiple AWS accounts. This presentation is brought to you by Spotinst, an APN Partner.

5. Tale of two cities: Goldman Sachs’s hybrid migration approach

Wednesday, December 4, 1:00 PM – 2:00 PM

Venetian, Level 3, Murano 3203

Goldman Sachs Global Investment Research division provides investment insights and ideas to clients around the world on a 24/7 basis, which requires a highly secure, scalable, and resilient environment. To re-architect its critical research platform to become a cloud-native application, the team developed a hybrid, container-based migration approach using AWS Fargate, Amazon API Gateway, Amazon MSK, and AWS Lambda that is underpinned by a secure sandboxed environment called SkyLab. In this session, learn how Goldman Sachs rapidly scaled its use of containers, changed its culture to embrace experimentation and fast failure, and adopted DevOps capabilities, including infrastructure as code, canary deployment, and zero-production access.

The post 5 Sessions to Add to Your AWS re:Invent 2019 Schedule appeared first on KongHQ.

Kuma 0.3 Released with Traffic Routing!


Today, we’re thrilled to release Kuma 0.3, our open source control plane with brand new traffic routing capabilities. Kuma’s new L4 traffic routing rules allow you to easily implement blue/green deployments and canary releases. In summary, Kuma will now alleviate the stress of deploying new versions and/or features into your service mesh. Let’s take a glimpse at how to achieve it in our sample application:

This sample application has three versions of the backend API. To slowly roll out our change and ensure nothing breaks for the end user, we can utilize Kuma’s new traffic routing policy:

spec:
  sources:
  - match:
      service: frontend
  destinations:
  - match:
      service: backend
  conf:
  - weight: 80
    destination:
      service: backend
      version: '0.0'
  - weight: 20
    destination:
      service: backend
      version: '1.0'
  - weight: 0
    destination:
      service: backend
      version: '2.0'

Like many other Kuma policies, you specify a source and a destination service. However, with traffic routing, the policy includes an additional conf section where users specify how they want traffic to be routed. We will match the source to our frontend service and the destination to our backend API service. Then, we give the corresponding weights to the backend API versions: 80 to version '0.0', 20 to version '1.0', and 0 to version '2.0'.

This allows us to slowly roll out traffic to the new backend API services. A simple yet powerful policy enables you to add canary deployment into your workflow!

We have many new features included in the 0.3 release. Please check out the Changelog to learn more about what the community has accomplished.

Community Highlight


(Pradeep, Community Contributor, and Kevin, Kuma’s Developer Advocate)

Speaking of community accomplishments, every release is made possible by contributors in the open source space. I had the privilege of meeting Pradeep (@pradeepmurugesan) in London. He contributed the kumactl delete command, so you can now delete Kuma resources using kumactl. We would also like to feature the following contributors for their contributions:

  • @alrs (It was great meeting you at KubeCon!)
  • @sterchelen
  • @programmer04
  • @Gabitchov

You all rock! We are finalizing your limited edition contributor shirts, so be on the lookout for those 🙂 And for folks looking to join the Kuma rockstar list, check out Kuma’s open issues on GitHub, and let us help you get started.

Announcements

We’ll be hosting our next online Meetup on December 10 at 10:00 AM PST. Please sign up here! We would love to have each and every one of you join. In the meantime, try out Kuma and let us know your thoughts!

The post Kuma 0.3 Released with Traffic Routing! appeared first on KongHQ.

The Brave New World of Digital Innovation: Open. Decentralized. Developer-Driven.


As we approach the end of the year, I am reflecting on the fascinating evolution of how technology solves business problems. Since 2016, I have seen microservices drive buying decisions for many large enterprises. At the same time, open source adoption has been gaining ground from its emergence as a grassroots movement in the 90s to an industry-defining standard, driven by the rise of developers as strategic influencers. While seeing the change first-hand is exciting, being a data-driven marketer, I also value being able to quantify the extent to which trends are taking hold, and tectonic shifts are occurring in the market. That is why I am very excited about the 2020 Digital Innovation Benchmark from Kong.

Here at Kong, we are committed to ushering in the next era of software. To keep a pulse on the state of digital innovation across industries, we recently commissioned a survey of 200 senior technology decision makers based in the United States, including CIOs, CTOs, VPs of IT, IT directors/architects and software engineers.* We have now released the findings of this research in our 2020 Digital Innovation Benchmark. While we knew that organizations are flocking to microservices, this survey revealed that adoption is already past a critical tipping point. The results are clear: software has already changed, and organizations not keeping up with digital innovation are not likely to survive. 

First, we found that open source is no longer a nice-to-have or a slight competitive edge for companies. Across industries, open source is now becoming the baseline and is forming the basis of a new tech stack. And yes, this is beyond just Linux. Eighty-three percent of respondents report their organization is using open source. The most commonly used open source technologies are databases. The next most commonly used open source technologies are containers (48 percent), API gateways (41 percent), infrastructure automation (40 percent), container orchestration (37 percent) and CI/CD tools (36 percent), which are all critical technologies to develop, deploy and run microservices at scale. They represent the new order for enabling innovative applications and business solutions.

Second, to spur innovation many organizations have embraced microservices architectures. Eighty-four percent of surveyed organizations are using microservices, with surveyed organizations running an average of 184 microservices and 60 percent of respondents running 50 or more. Again, whereas years ago migrating to microservices used to be an aspiration for many organizations, or perceived as realistic only in more innovative industries, today running on microservices has become the new normal. Technology leaders note multiple reasons for transitioning to microservices depending on the priorities of their individual organization, with improvements to security, increased development speed and increased speed of integrating new technologies frequently mentioned as drivers of adoption.

Finally, technology leaders recognize that the new open, decentralized world creates new challenges to address. Ninety percent of technology leaders across industries agree that “one of the biggest technical challenges for organizations in the 21st century will be having a way to connect applications/services and secure data in motion at a massive scale and with optimum performance and reliability.” Underscoring the importance of digital innovation, many leaders (37 percent) also indicated that organizations that fail to keep up with the pace of digital innovation would likely go out of business or be absorbed by a competitor within three years. While cloud native technology used to be a way to create a competitive edge, this survey shows that the rapid evolution of digital innovation across industries now makes it a requirement for survival.

We should celebrate that the next era of software is already here. At the same time, to address the needs of this new era, technology leaders must embrace solutions that prepare them for the unique challenges of an open, decentralized world. If you want to ensure your organization keeps up with the pace of digital innovation, I encourage you to learn from other leaders’ priorities and perspectives. So, find a comfortable nook and pour yourself a mug of something warm–I promise this will be a good read for your cold December evening.  

* Footnote: Respondents were evenly divided between publicly traded and privately held companies that had 1,000 or more employees. The “2020 Digital Innovation Benchmark” survey was fielded in August 2019 and represents a range of industries, including business and professional services; financial services; IT, technology and telecoms; manufacturing and production; and retail, distribution and transport.

The post The Brave New World of Digital Innovation: Open. Decentralized. Developer-Driven. appeared first on KongHQ.
