
Introducing Kong Kubernetes Ingress Controller and Kong 0.13


 

Come learn about the newly released Kong 0.13 features and see the official Kong Kubernetes Ingress Controller in action.  We will go through the most recent Kong features, including Services and Routes, and discover how they play nicely with Kubernetes, the leading container orchestration platform.  

You will also have a chance to meet with the core engineering team in our new office in Union Square. Pizza and beers on us!   Bring yourself, a friend and an appetite to learn!

Date and Time

April 5, 2018  / 5:30 – 7:30 p.m.

Location

Kong HQ

251 Post Street, 2nd Floor

San Francisco, CA  94108

 



Highly Available Microservices with Health Checks and Circuit Breakers


Developers are turning to microservices in greater numbers to help break through the inflexibility of old monolithic applications. Microservices offer many benefits, including separation of concerns, team autonomy, fault tolerance, and scaling across multiple regions for high availability.

However, microservices also present challenges, including more infrastructure complexity to manage. You have more services to monitor for availability and performance. It’s also a challenge to balance load and route around failures to maintain high availability. If your services are stateful, you need to maintain persistent connections between clients and instances. Most API developers would prefer to have a system manage these infrastructure complexities so they can focus on the business logic.

In this article, we’ll describe how algorithms for load balancing help you deliver highly available services. Then, we’ll also show an example of how Kong makes it easier to deliver high availability with built-in health checks and circuit breakers. Kong is the world’s most popular open source API management platform for microservices. With Kong, you get more control and richer health checks than a typical load balancer.

 

Intro to load balancing

Load balancing is the practice of distributing client request load across multiple application instances for improved performance. Load balancing distributes requests among healthy hosts so no single host gets overloaded.

In a typical load balancing architecture, clients make requests to a load balancer, which then passes (or proxies) those requests to the upstream hosts. A client can be a real person or a service calling another service, and it can be external or internal to your company.

The primary advantages of load balancing are higher availability, better-performing application services, and an improved customer experience. Load balancing also lets us scale applications up and down independently and provides the ability to self-heal without application downtime. It also significantly improves speed to market by enabling a rolling or “canary” deployment process, so we can see how deployments are performing on a small set of hosts before rolling out across the entire cluster.

 

Important load balancer types

There are several algorithms or processes by which load can be balanced across servers: DNS, round robin, and ring balancer.

Domain Name System (DNS) load balancing

The DNS load balancing process starts by configuring a domain in the DNS server with multiple host IP addresses, such that client requests to the domain are distributed across multiple hosts.

In most Linux distributions, DNS by default sends the list of host IP addresses in a different order each time it responds to a new application client. As a result, different clients direct their requests to different servers, effectively distributing the load across the server group.

The disadvantage is that clients often cache the IP address for a period of time, known as time to live (TTL). If the TTL is minutes or hours, it can be impractical to remove unhealthy hosts or to rebalance load. If it’s set to seconds, you can recover faster but it also creates extra DNS traffic and latency. It’s better to use this approach with hosts that are highly performant and can recover quickly, or on internal networks where you can closely control DNS.
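
For example, a DNS-balanced name simply resolves to several A records, each carrying a TTL that controls how long clients may cache the answer. A quick way to see this is with dig (a sketch; the domain, addresses, and 30-second TTL are illustrative, and your output will differ):

$ dig +noall +answer api.example.com A
api.example.com.   30   IN   A   203.0.113.10
api.example.com.   30   IN   A   203.0.113.11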

Round robin

In the round robin model, clients send requests to a centralized server which acts as a proxy to the upstream hosts. The simplest algorithm is called “round robin.” It distributes load to hosts evenly and in order. The advantage over DNS is that your team can very quickly add hosts during times of increased load, and remove hosts that are unhealthy or are not needed. The disadvantage is that each client request can get distributed to a different host, so it’s not a good algorithm when you need consistent sessions.

Ring balancer

A ring balancer allows you to maintain consistent or “sticky” sessions between clients and hosts. This can be important for web socket connections or where the server maintains a session state.

It works similarly to the round robin model because the load balancer acts as a proxy to the upstream hosts. However, it uses a consistent hash that maps each client to an upstream host. The hash must be computed from a client key, such as the client’s IP address. When a host is removed, only 1/N of requests are affected, where N is the number of hosts. Your system may be able to recover the session by transferring data to the new host, or the client may restart the session.

In the graphic below, we have 4 nodes that balance load across 32 partitions. Each client key is hashed and is mapped to one of the partitions. When a single node goes down, a quarter of partitions need to be reassigned to healthy nodes. The mapping from client to partition stays consistent even when nodes are added or removed.
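
To make the key-to-partition mapping concrete, here is a tiny shell sketch (illustrative only; real ring balancers use stronger hash functions) showing that the same client key always lands on the same partition:

# Map a client key (here, an IP address) to one of 32 partitions.
# cksum produces a stable checksum, so repeated runs for the same key
# always print the same partition, which is what keeps sessions "sticky".
$ KEY="203.0.113.7"
$ PARTITION=$(( $(printf '%s' "$KEY" | cksum | cut -d' ' -f1) % 32 ))
$ echo "client $KEY maps to partition $PARTITION"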

 

Health checks and circuit breakers improve availability

Health checks help us detect failed hosts so the load balancer can stop sending requests to them. A host can fail for many reasons: it may simply be overloaded, its server process may have stopped running, it may have had a failed deployment, or it may be running broken code, to list a few. Failures can surface as connection timeouts or HTTP error codes. Whatever the reason, we want to route traffic around the host so that customers are not affected.

Active health checks

In active health checks, the load balancer periodically “probes” upstream servers by sending a special health check request. If the load balancer fails to get a response back from the upstream server, or if the response is not as expected, it disables traffic to the server. For example, it’s common to require that the server respond with a 200 OK HTTP status code. If the server times out or responds with a 500 Server Error, then it is not healthy.

The disadvantage is that active health checks only use the specific rule they are configured for, so they may not replicate the full set of user behavior. For example, if your probe checks only the index page, it could be missing errors on a purchase page. These probes also create extra traffic and load on your hosts as well as your load balancer. In order to quickly identify unhealthy hosts, you need to increase the frequency of health checks which creates more load.

Passive health checks

In passive health checks, the load balancer monitors real requests as they pass through. If the number of failed requests exceeds a threshold, it marks the host as unhealthy.

The advantages of passive health checks are that they observe real requests, which better reflects the breadth and variety of user behavior, and that they don’t generate additional traffic on the hosts or the load balancer. The disadvantages are that users are affected before the problem is recognized, and you still need active probes to determine whether hosts in an unknown state are healthy.

We recommend you get the best of both worlds by using both passive and active health checks. This minimizes extra load on your servers while allowing you to quickly respond to unexpected behavior.

Circuit breakers

When you know that a given host is unhealthy, it’s best to “break the circuit” so that traffic flows to healthy hosts instead. This provides a better experience for end users because they will encounter fewer errors and timeouts. It’s also better for your host, because diverting traffic prevents it from being overloaded and gives it a chance to recover. It may have too many requests to handle, the process or container may need to be restarted, or your team may need to investigate.

Circuit breakers are essential for automatic fault tolerance in production systems. They are also critical if you are doing blue-green or canary deployments, which allow you to test a new build in production on a fraction of your hosts. If the service becomes unhealthy, it can be removed automatically, and your team can then investigate the failed deployment.

What is Kong?

Kong is the most popular open source API gateway for microservices. It’s very fast with sub-millisecond latency, runs on any infrastructure, and is built on top of reliable technologies like NGINX. It has a rich plug-in ecosystem that allows it to offer many capabilities including rate limiting, access control and more.

Kong supports load balancing using the DNS method, and its ring balancer offers both round robin and hash-based balancing. It also provides both passive and active health checks.

A unique advantage of Kong is that both active and passive health checks are offered for free in the Community Edition (CE). Nginx offers passive health checks in its community edition, but active health checks are included only in the paid edition, Nginx Plus. Amazon Elastic Load Balancers (ELB) don’t offer passive checks and, depending on your usage, may cost more than running your own instance of Kong. Kubernetes liveness probes offer only active checks.

 

                 Nginx       Amazon ELB   Kubernetes   Kong CE
Active Checks    Plus only   Yes          Yes          Yes
Passive Checks   Yes         No           No           Yes

 

The Kong Enterprise edition also offers dedicated support, monitoring, and easier management. The Admin GUI makes it easy to add and remove services, plugins, and more. Its analytics feature can take the place of more expensive monitoring systems.

 

See it in action

Let’s do a demo to see how easy it is to configure health checks in Kong. Since they are familiar to many developers, we’ll use two Nginx servers as our upstream hosts. Another container running Kong will perform health checks and load balancing. When one of the hosts goes down, Kong will recognize that it is unhealthy and route traffic to the healthy container.

In this example, we’re going to use Docker to set up our test environment. This will allow you to follow along on your own developer desktop. If you are new to Docker, the Katacoda tutorials are a great way to learn. You don’t need to install anything and can learn the basics in about an hour.

 

Step 1: Add two test hosts

Let’s install two test hosts that will respond to our requests. In this example, we will use two Nginx Docker containers, one of which we’ll configure to be healthy and the other unhealthy. They will each listen on a separate port so Kong can route to each.

First let’s create our healthy container. It will respond with “Hello World!” We’ll set this up using a static file and mount it in our container’s html directory.

$ mkdir ~/host1
$ echo "Hello World!" > ~/host1/index.html
$ docker run --name host1 -v ~/host1:/usr/share/nginx/html:ro -p 9090:80 -d nginx
$ curl localhost:9090
Hello World!

Next, let’s create our unhealthy container. We’ll configure Nginx to respond with a 500 Server Error. First, copy the default nginx config.

$ docker cp host1:/etc/nginx/conf.d/default.conf ~

Then edit the location to return a 500 error.

$ vim ~/default.conf
location / {
    return 500 'Error\n';
    root   /usr/share/nginx/html;
    index  index.html index.htm;
}

Now start up the container and test it to make sure it returns the error.

$ docker run --name host2 -v ~/default.conf:/etc/nginx/conf.d/default.conf -p 9091:80 -d nginx
$ curl -i localhost:9091
HTTP/1.1 500 Internal Server Error
...
Error

 

Step 2: Install Kong

Kong can be installed in a wide variety of environments. We will follow the Docker instructions since they are relatively easy to test on a developer desktop.
First, we need a database where Kong can store its settings. We’ll use Postgres since it’s easy to set up a test container in Docker.

$ docker run -d --name kong-database \
              -p 5432:5432 \
              -e "POSTGRES_USER=kong" 
              -e "POSTGRES_DB=kong" \
              postgres:9.4

Next, we need to initialize the database.

$ docker run --rm \
                  --link kong-database:kong-database \
                  -e "KONG_DATABASE=postgres" \
                  -e "KONG_PG_HOST=kong-database" \
                  -e "KONG_CASSANDRA_CONTACT_POINTS=kong-database" \
                  kong:latest kong migrations up

Now let’s start the Kong container. These options use default ports and connect to our Postgres database.

$ docker run -d --name kong \
                  --link kong-database:kong-database \
                  -e "KONG_DATABASE=postgres" \
                  -e "KONG_PG_HOST=kong-database" \
                  -e "KONG_CASSANDRA_CONTACT_POINTS=kong-database" \
                  -e "KONG_PROXY_ACCESS_LOG=/dev/stdout" \
                  -e "KONG_ADMIN_ACCESS_LOG=/dev/stdout" \
                  -e "KONG_PROXY_ERROR_LOG=/dev/stderr" \
                  -e "KONG_ADMIN_ERROR_LOG=/dev/stderr" \
                  -e "KONG_ADMIN_LISTEN=0.0.0.0:8001" \
                  -e "KONG_ADMIN_LISTEN_SSL=0.0.0.0:8444" \
                  -p 8000:8000 \
                  -p 8443:8443 \
                  -p 8001:8001 \
                  -p 8444:8444 \
                  kong:latest

Verify that Kong is running on port 8001 and returns a 200 OK response. That means it’s working.

$ curl -i localhost:8001/apis
HTTP/1.1 200 OK

 

Step 3: Configure Kong to use our test hosts

Now we want to connect Kong to our test hosts. The first step is configuring an API in Kong. “API” is just a historic term, since Kong can load balance any HTTP traffic, including web server requests. I’m going to call our API “mytest” since it’s easy to remember. I’m also setting the connection timeout to 5 seconds because I’m too impatient to wait for the default 60 seconds. If you want to learn more about creating APIs, see Kong’s documentation.

$ curl -i -X POST \
   --url http://localhost:8001/apis/ \
   --data 'name=mytest' \
   --data 'hosts=mytest.com' \
   --data 'upstream_url=http://mytest/' \
   --data 'upstream_connect_timeout=5000'

Next, we have to add an upstream for our API. This allows me to specify an active health check that probes my servers every 5 seconds. Targets will be marked as unhealthy after two consecutive HTTP failures, and healthy again after two successful probes.

$ curl -i -X POST http://localhost:8001/upstreams/ \
            --data 'name=mytest' \
            --data 'healthchecks.active.healthy.interval=5' \
            --data 'healthchecks.active.unhealthy.interval=5' \
            --data 'healthchecks.active.unhealthy.http_failures=2' \
            --data 'healthchecks.active.healthy.successes=2'

Now we can add targets to the upstream we just created. These will point to the Nginx servers we just created in Step 1. Use the actual IP of your machine, not just the loopback address.

$ curl -i -X POST http://localhost:8001/upstreams/mytest/targets --data 'target=192.168.0.8:9090'
$ curl -i -X POST http://localhost:8001/upstreams/mytest/targets --data 'target=192.168.0.8:9091'

Kong should be fully configured now. We can test that it’s working correctly by making a GET request to Kong’s proxy port, which is 8000 by default. We will pass in a header identifying the host which is tied to our API. We should get back a response from our Nginx server saying “Hello”!

$ curl -H "Host: mytest.com" localhost:8000
Hello World!

 

Step 4: Verify health checks

You’ll notice that Kong is not returning a 500 error, no matter how many times you call it. So what happened to host2? You can check the Kong logs to see the status of the health checks.

$ docker logs kong | grep healthcheck
2018/02/21 20:00:05 [warn] 45#0: *17672 [lua] healthcheck.lua:957: log(): [healthcheck] (mytest) unhealthy HTTP increment (1/2) for 172.31.18.188:9091, context: ngx.timer, client: 172.17.0.1, server: 0.0.0.0:8001
2018/02/21 20:00:10 [warn] 45#0: *17692 [lua] healthcheck.lua:957: log(): [healthcheck] (mytest) unhealthy HTTP increment (2/2) for 172.31.18.188:9091, context: ngx.timer, client: 172.17.0.1, server: 0.0.0.0:8001

Kong is automatically detecting the failed host by incrementing its unhealthy counter. When it reaches the threshold of 2, it breaks the circuit and routes requests to the healthy host.
Next, let’s revert the Nginx config so it returns a 200 OK code again. We should see that Kong recognizes it as healthy, and it now returns the default Nginx page. You might need to run the request a few times to see host2, since Kong doesn’t simply alternate every other request.

$ docker cp host1:/etc/nginx/conf.d/default.conf ~
$ docker container restart host2
$ curl -H "Host: mytest.com" localhost:8000
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>

You successfully demonstrated health checks and circuit breakers! To continue this exercise, you may read more about Kong’s health checks and try setting up a passive health check. You could also read about load balancing algorithms and try setting up hash-based load balancing.
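
As a starting point for those follow-up exercises, the upstream from Step 3 could be extended roughly as follows (a sketch; the field names follow Kong’s upstream and health check documentation, and the thresholds are only examples):

# Add passive health checks: observe live traffic and mark a target
# unhealthy after 3 proxied requests fail.
$ curl -i -X PATCH http://localhost:8001/upstreams/mytest \
    --data 'healthchecks.passive.unhealthy.http_failures=3' \
    --data 'healthchecks.passive.healthy.successes=3'

# Switch from round robin to hash-based ("sticky") balancing keyed on
# the client IP, so each client keeps hitting the same healthy target.
$ curl -i -X PATCH http://localhost:8001/upstreams/mytest \
    --data 'hash_on=ip'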

 

Conclusion

Kong is a scalable, fast, and distributed API gateway layer. Kong’s load balancing and health check capabilities can make your services highly available, redundant, and fault-tolerant. These algorithms can help avoid imbalance among servers, improve system utilization, and increase system throughput.
To learn more about health checks in Kong, see our recorded webinar with a live demonstration of health checks in action. It also includes a presentation on these features and live Q&A from the audience.


Kong CE 0.13.0 released


After a release candidate cycle of several weeks, the Kong team is very happy to announce the promotion of the final, stable release of Kong CE 0.13.0!

Download and try now Kong CE 0.13.0.

CE 0.13.0

The highlights of 0.13.0 are:

  • 🎆 The introduction of Services and Routes as new core entities. These entities replace the “API” entity and simplify the setup of non-trivial use-cases. They provide better separation of concerns and allow plugins to be applied to specific endpoints.
  • 🎆 A new syntax for the proxy_listen and admin_listen directives allows you to disable the Proxy or the Admin API entirely, meaning the separation of Kong control-planes and data-planes has never been easier!
  • 🎆 The new endpoints such as /routes and /services are built with much improved support for form-urlencoded payloads, and produce friendlier responses. Expect existing endpoints to move towards this new implementation in the future and greatly improve the Admin API’s usability.
  • Fixes for several issues with our DNS resolver, health checks configuration, and application/multipart MIME type parsing.
  • Docker: the latest and 0.13.0 Docker Hub tags now point to the alpine image.

See the 0.13.0 Changelog for a complete list of changes in this release.

If you are already running Kong, consult the 0.13 Upgrade Path for a complete list of breaking changes, and suggested no-downtime upgrade path.

Services and Routes

With the “API” entity out of the picture, how are Services and Routes going to replace them, and why are they better? Let’s answer that with an example:

Consider a microservice in front of which we want to use Kong to take advantage of its authentication and rate limiting capabilities. Nothing out of the ordinary here!

Prior to Kong 0.13.0, proxying traffic to that API via Kong meant creating an API with matching rules and an upstream URL, like so:

$ curl -X POST http://<kong-ip>:8001/apis \
  -d "name=my-service" \
  -d "hosts=my-service.com" \
  -d "upstream_url=https://service.com"

And applying authentication and rate-limiting to that API was as simple as a couple of HTTP requests:

$ curl -X POST http://<kong-ip>:8001/apis/my-service/plugins \
  -d "name=key-auth"

$ curl -X POST http://<kong-ip>:8001/apis/my-service/plugins \
  -d "name=rate-limiting" \
  -d "config.minute=1000"

Great, now what’s the problem with that? Well, the simplicity of this model starts playing against us as soon as we introduce a (very common) scenario: applying different plugins per endpoint.

Applying plugins per endpoint in 0.12.x and below

What if our microservice exposes /foo and /expensive? Both endpoints require authentication, but the second one should also be rate-limited. Because the API entity defines all matching rules (i.e. endpoints) and points to your upstream microservice at the same time, we end up in a situation where we need to duplicate our API entity if we want to apply different plugins for each endpoint. Pictured below:

A diagram highlighting the shortcoming of the Kong 0.12.x data model.

Applying plugins per endpoint in Kong 0.12.x and prior. In red, the duplicated attributes.

In red, we can see all of the duplication that occurs when we follow this approach: two APIs, each with a different matching rule for the request URL but an identical upstream URL. And a duplicated basic-auth plugin, to ensure that all requests going to each of these “twin APIs” are authenticated.

Applying plugins per endpoint in 0.13.0

Enter Services and Routes. A Service is the equivalent of the API entity, minus its matching rules (or, more accurately, minus all attributes related to client requests). How do you proxy a client request to a Service, then? By associating a Route with said Service, of course! Here is what the previous use-case translates to with Services and Routes:

A diagram representing plugins applied to specific endpoints with Kong 0.13.0

Applying plugins per endpoint is much easier with Kong 0.13.0.
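
In Admin API terms, a sketch of that setup could look like the following (endpoint and field names follow the 0.13 Admin API reference; the placeholder ids come from the responses of the preceding calls):

# One Service pointing at the upstream microservice
$ curl -X POST http://<kong-ip>:8001/services \
  -d "name=my-service" \
  -d "url=https://service.com"

# Two Routes attached to that Service, one per endpoint
$ curl -X POST http://<kong-ip>:8001/routes \
  -d "hosts[]=my-service.com" \
  -d "paths[]=/foo" \
  -d "service.id=<service-id>"

$ curl -X POST http://<kong-ip>:8001/routes \
  -d "hosts[]=my-service.com" \
  -d "paths[]=/expensive" \
  -d "service.id=<service-id>"

# Authentication applied once, on the Service
$ curl -X POST http://<kong-ip>:8001/plugins \
  -d "name=key-auth" \
  -d "service_id=<service-id>"

# Rate limiting applied only to the /expensive Route
$ curl -X POST http://<kong-ip>:8001/plugins \
  -d "name=rate-limiting" \
  -d "config.minute=1000" \
  -d "route_id=<expensive-route-id>"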

Now, the difference between APIs and Services/Routes is becoming evident: Services and Routes provide the separation of concerns between downstream clients and upstream services that APIs could not. This type of (common) use-case becomes easier to reason about, and we benefit from extra granularity when applying plugins. These new entities considerably improve usability and flexibility at the same time.

We also believe that their names are more appropriate than that of the API entity. Nowadays, we see Kong in front of monolithic and legacy APIs, but also in front of more recent service-oriented architectures or fleets of microservices. The term “API”, on top of being easily conflated with the Admin API component, was also too narrow. Instead,

  • A Service better reflects the notion of an “upstream service” or “backend” that you wish Kong to sit in front of. This could be a monolithic API, a billing microservice, or even a serverless function!
  • A Route, in the context of a reverse-proxy, naturally implies that it is client-facing, and that requests matching one will eventually get proxied or “routed” somewhere.

You can read more about Services and Routes by reading the Proxy Guide or the Admin API reference. And finally, if you are already running Kong, fear not, for the API entity is deprecated, but not removed! All of your existing configuration will still work as of 0.13.0.

Conclusion

Services and Routes are the major new feature of Kong CE 0.13.0, but the release also ships with many other improvements, including a new configuration syntax for the proxy_listen and admin_listen directives, which allows for the separation of control and data plane nodes! Stay tuned for an upcoming blog post about control and data plane separation, and in the meantime, happy Konging!


Separating Control-Planes and Data-Planes in Kong


Starting with Kong CE 0.13 and the upcoming EE 0.32 it is possible to separate control- and data-planes in a Kong cluster.

So what are those planes? The control plane is how we instrument the system (pushing configs, fetching logs), whereas the data plane is the traffic that is actually being proxied by the system.

Consider a factory. The factory has a conveyor belt, and on this belt the parts are added, the products assembled and finally packed and shipped. But to run this factory we need a lot more: logistics, work schedules, maintenance, quality reports, and what not. In this example the conveyor belt would be the data plane, where all the auxiliary stuff to enable the belt to deliver the products would be the control plane.

Kong works as a cluster of independent, stateless, nodes. All the Kong nodes in a given cluster are connected to the same database, from which the nodes get their configuration information. Up till now each Kong node would expose a port where it would serve traffic for the proxy (data plane), and another for configuration (the RESTful management API, the control plane).

With the new release we have refactored the way the ports are configured which allows for greater flexibility in infrastructure architecture, and system control. This will enable the following uses:

  • disable the proxy altogether (making a node a control-plane-only node)
  • disable the management API altogether (making a node a data-plane-only node)
  • define multiple ports for either the proxy or the Admin API (not explored in this post beyond the brief sketch below, but worth mentioning)

This now opens up the possibility to proxy API traffic through Kong via one network segment, while administering Kong via a different network segment, which provides better isolation of the components, without risking accidentally opening up the Kong admin API to the whole internet.

To achieve this we removed the following (default) settings:

# Proxy
proxy_listen = 0.0.0.0:8000
proxy_listen_ssl = 0.0.0.0:8443
ssl = on
http2 = off

# Admin API
admin_listen = 127.0.0.1:8001
admin_listen_ssl = 127.0.0.1:8444
admin_ssl = on
admin_http2 = off

The format changed into a comma separated list of addresses with flags:

proxy_listen = [off] | <ip>:<port> [ssl] [http2] [proxy_protocol], ...
admin_listen = [off] | <ip>:<port> [ssl] [http2] [proxy_protocol], ... 

This format allows for multiple address/port combinations and flags to configure each of those. The new defaults, which mimic the exact behavior of the old settings, are:

proxy_listen = 0.0.0.0:8000, 0.0.0.0:8443 ssl
admin_listen = 127.0.0.1:8001, 127.0.0.1:8444 ssl
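
As a brief illustration of those flags (a sketch, not an exhaustive treatment), a single node could expose one plain HTTP proxy address that accepts the PROXY protocol and another that terminates TLS with HTTP/2:

proxy_listen = 0.0.0.0:8000 proxy_protocol, 0.0.0.0:8443 ssl http2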

Given the new configuration properties we can now simply create a data-plane node by starting Kong with the `admin_listen` setting disabled:

$ KONG_ADMIN_LISTEN=off kong start

Similarly for a control-plane node we can disable the `proxy_listen` setting:

$ KONG_PROXY_LISTEN=off kong start

 

Read more about configuration options


Kong in Sweet Home Chicago


Kong + GOTO Conference

Sweet Home Chicago – The Kong team is crashing GOTO Chicago, April 25 – 26, 2018.

Kong is a proud sponsor of GOTO, the enterprise software development conference designed for team leads, architects, and project managers, and organized by developers, for developers. Drop by our booth and learn about Kong, the most popular open-source API Management platform. We’re excited to show you:

  • How Kong manages your APIs and Microservices
  • How to install Kong – learn more here
  • How to customize and optimize Kong specifically to your API needs

If you can’t attend please schedule a personalized Kong demo, or you can join one of our webinars.

Come Say Hello

Kong microservices API gateway experts will be giving demos, answering questions, and distributing the coolest stickers and T-shirts. Plus, in under 10 minutes we’ll show you how to deploy Kong.

Request a Meeting

Kong team members are available to discuss your business and technical requirements—face-to-face. Request a meeting to explore how easy it is to get started with Kong. (Mention “GoToChicago” – limited time slots are available. Sign up early to secure your meeting.)

Kong

Kong Inc. is the microservices API company. We are best known as the creator and primary supporter of the Kong platform, the most widely adopted open-source microservices API gateway.

The Kong platform is designed to sit in front of highly performant APIs and microservices to protect, extend, and orchestrate distributed systems. With over 15,000 stars on Github and 15 million plus downloads, Kong is the most popular open-source API gateway and microservices management layer.

 


Celebrating 100 Kong Contributors with a Special Edition T-shirt


One way to measure the success of an open source project is its popularity. Kong, with more than 15M downloads and 15,000 stars on GitHub, can be considered fairly successful. However, for long-term success, the popularity of an open source project needs to also be balanced with the health of the project. And, a key metric for measuring health is its pool of contributors. Therefore, we are very proud to announce that Kong has attracted its 100th contributor!

And as we celebrate this important milestone, we have produced a special edition Kong T-shirt for all of our contributors.

kong contributor special edition t-shirt

If you have already made a Pull Request to https://github.com/Kong/kong that was merged, please fill out this form to receive your contributor T-shirt.

If you haven’t made a contribution yet, it is not too late! We are launching the Kong Contributor T-shirt program and will provide this t-shirt to all future Kong contributors.

Proudly wear this special edition T-shirt and tag us on Twitter @thekonging #KongStrong.

Kong 100 contributors

Want to get your 2018 Kong contributor T-shirt? Get started today by browsing the issues, reading our contribution guidelines, and proposing your own Pull Request!


API Gateway – a Rapidly Changing Landscape


Note: Below is Chapter 2 from the recently released book “Kong: Becoming a King of API Gateways” written by Alex Kovalevych, Robert Buchanan, Daniel Lee, Chelsy Mooy, Xavier Bruhiere, and Jose Ramon Huerga. Posted with permission from Bleeding Edge Press. The full book can be purchased at Bleedingedgepress.com


 

When the Internet started there was only the concept of an HTTP Server that would serve up static web pages. That quickly grew into having application servers that served up web applications or servlets, using HTTP Servers as reverse proxies to the applications. While these applications were great for the time, they became too large to allow integrations in a Service Oriented Architecture (SOA) with other applications/services, which led to the creation of an Enterprise Service Bus (ESB).

All of the code for the sample project in this book can be found here.

What is an ESB?

An ESB implements a communication system between mutually interacting software applications in SOA. As an architecture, an ESB can be thought of as a central platform for integrating applications in an enterprise. It can also be thought of as an architecture that allows communication via a common communication bus that consists of a variety of point-to-point connections between providers and users of services.

An ESB promotes agility and flexibility with regard to high-level protocol communication (XML to JSON, and vice versa) between applications. The main goal of this high-level protocol communication is Enterprise Application Integration (EAI) of complex service or application landscapes in a maintainable manner. Some of the primary duties of an ESB are to route messages between services, monitor and control the routing of messages between services, control the versioning of services, and provide commodity services like data transformation and security. An ESB is generally used to facilitate internal communication between services, but isn’t limited to this use case.

What is an API Gateway?

API Gateways are essentially glorified reverse proxies that offer more customization and flexibility than plain reverse proxies. An API Gateway acts as an API frontend that orchestrates API requests, enforces traffic policies (e.g. throttling, caching) and security policies (e.g. authorization, authentication), gathers analytics on traffic, and orchestrates transformation engines for modifying requests/responses on the fly. An API Gateway is generally seen as the entry point for communication between external requests and internal services, hence its name. Using an API Gateway internally, however, can yield great rewards in terms of the standardization of policies.

Is an API Gateway a new ESB?

Are API Gateways a re-invention of the ESB? Yes and no. Although an API Gateway may provide a lot of the same functionality (security, data transformation, routing), it does it in an orchestrated way instead of a point-to-point or broadcast way. An API Gateway and an ESB can live side by side: although their primary duties overlap, their intended purposes differ, and they are separated by the concerns of internal and external communication.

The equivalent of an ESB to animals is the nervous system combined with the circulatory system. When an organ needs to communicate with another specific organ, it uses the nervous system to send a point-to-point message. When an organ needs to broadcast a message to other organs that might be interested, it releases a hormone into the bloodstream to send a multicast message. When a message comes into the brain from anywhere in the body, the brain tells the body how to react. For example, if you accidentally touch a hot stove, the nerves in your skin shoot a message of pain to your brain. The brain then sends a message back telling the muscles in your hand to pull away. The API Gateway is equivalent to the brain in the way that it orchestrates interaction.

Although the API Gateway can be thought of as a brain, it doesn’t mean that it doesn’t need or isn’t enhanced by the nervous system. So, when looking at the current landscape, API Gateways are not a new ESB, but an enhancement of the original concept of an ESB.

API Gateway present

Back in time there were Neanderthals who discovered fire, and from fire they discovered cooking, and from cooking they were able to live longer. As they lived longer they started to want more than just a cave and they started building. First we had houses (HTTP Servers), then tribes (Reverse Proxies), then villages (ESB), and then we built towns. A town is small enough that everyone knows what is going on and can help out while being big enough that there is a need for a centralized way to manage the developments happening in the town. Most companies are the size of towns, and they have a lot of domain knowledge and offerings that a centralized API Gateway can fulfill their needs with, and they won’t need to be concerned with what is happening in the other towns.

In the present landscape there are a lot of well established offerings for API Gateways such as Kong, APIGEE, KrakenD, Tyk.io and IBM API Connect (previously Strongloop). Each one of these offerings has their advantages and disadvantages, from price, maintainability, scalability and customization.

API Gateway future

Although a lot of companies enjoy their peaceful town there are companies that are evolving into cities. While evolving from a town to a city the amount of buildings grows exponentially and planning the city becomes too big of a job for a single centralized contact, so the mayor has to employ a team to handle the planning.

With the orchestration of services in the microservice architectures becoming increasingly difficult, new developments into the API Gateway space have emerged to leverage other recent developments in tech.

Serverless

Functions as a Service (FaaS) offerings (e.g. Amazon’s Lambda, OpenWhisk) were created not too long ago but have most recently become stable and a selling point to leverage as an API Gateway. When you break down an API Gateway, it is really a stack of functions that chain off each other to fulfill a request. Thinking about an API Gateway in this fashion led to the use of FaaS platforms to facilitate API Gateway functions. There are several out there, but one in particular that has been used heavily is the Serverless framework. This has proven to be a good fit for simple API gateways, especially if you are already using FaaS technology for other business needs. It does mean that every aspect of your API Gateway needs must be coded, generally by your company. With Serverless the idea is that you do not have to run your own infrastructure. This can be a selling point to some, but it also means you are at the mercy of your provider’s network, policies, and operational capabilities. Using FaaS from a big provider like Google/AWS is ideal, but not every company is comfortable with having other people host their data, especially since any data breach is a PR nightmare that can cause the demise of a company.

Service Meshes

As containerization became popular for the purpose of removing the overhead of running a full operating system just to host an application, it quickly became abused, and now everything is its own container. This problem of every app as its own container led companies to have several thousand containers running at any given time. During a conference, a speaker mentioned that in production they had over 4,000 containers but only 400 engineers, meaning that each engineer was responsible for 10 production systems. This would be considered a problem for an API Gateway, since in the current state most gateways require hand-registering each service they communicate with and how to handle the requests to each service (security and traffic policies).

With this massive growth in the number of containers a company creates, container orchestration evolved to start enabling service meshes. The term service mesh is often used to describe the network of microservices that make up such applications and the interactions between them. With the evolution of service meshes to orchestrate the number of containers connected in a logical service, the need to automatically register them into a gateway became desirable. Istio, alongside Envoy, was developed as an orchestration tool to manage service meshes.

There is definitely some overlap between the API Gateway and service mesh patterns, e.g. request routing, composition, authentication, rate limiting, error handling, monitoring, etc. That being said, the purpose of a service mesh is internal service-to-service communication with policy enforcement, while an API Gateway is primarily meant for external client-to-service communication. Much like an ESB being used as an API Gateway, a service mesh can be used as an API Gateway, but it is better suited to replace the work of an ESB and work alongside the API Gateway to expose services to external consumers.

Sidecar Gateways

A trending pattern across all technology is the component movement. In this movement each and everything developed is analyzed to be either a component or a high-order component that is composed of several components. These components are the building blocks of modern applications. This component-based software engineering (CBSE) has even found its way into, what normally is centralized infrastructure, the API Gateway space. This pattern allows an application to get the primary functions of an API Gateway while living side-by-side, instead of having a historical top-down infrastructure. This approach to things allows a distributed method to enforce policies that are at an organization level.

Some examples of sidecar gateways are Ambassador and Istio, and even Kong has adopted the sidecar approach as a capability.

API First: Weathering the storm

API first design is a strategy in which the first order of business is to develop an API that puts the target developer’s interests first and then build the product on top of it (website, mobile application, or a SaaS software).

This design strategy allows the product to enable full flexibility in how the users (developers) decide to utilize the product. This use may be outside the scope that it was designed for, but because of the flexibility new use cases are continually discovered. This strategy is taken into consideration by many engineers in software, construction, and more. Let’s take primitive shelters, for example. Their intended use was to allow humans to weather the natural elements. Eventually humans started using shelter to house their possessions, which was outside the provided use case for shelter, but didn’t violate the design aspects of a shelter. This use case for shelter led to a redesigning of the house to add more functionality without leaving its original intent behind as a shelter used to weather the natural elements.

To simplify, think about it like this: You built a primitive shelter, and it protects you from the natural elements. The world is a warmer and dryer place.

Now, another person in your tribe starts building an attached room, which interacts with your shelter. Then a third person sees the warmth and dryness the shelter is providing and decides to build a second story to your shelter. Soon, you have multiple people all building attachments with horizontal dependencies that are all on a different build cadence. What can happen if no discipline is applied to this in the realm of APIs is a nightmare of integration failures.

To avoid these integration failures and to recognize your shelter as a first-class artifact of the build process, you’d like others to be able to work against the shelter design contract without interfering with the original space that was developed.

Summary

You now have good examples of the use of API Gateways and why you would want to use Kong. In the next chapter we will look more closely at Kong and compare it with competing API Gateways that are available.

 

 


Announcing the Kubernetes Ingress Controller for Kong


Today we are excited to announce the Kubernetes Ingress Controller for Kong.

Container orchestration is rapidly changing to meet the needs of software infrastructure that demands more reliability, flexibility, and efficiency than ever. At the forefront of these tools is Kubernetes, a container orchestrator that enables operations and applications teams to deploy and scale workloads that meet these needs while still enabling developers with self-service and a great developer experience.

Critical to these workloads, however, is a networking stack that can support highly dynamic deployments across a clustered container orchestrator at scale.

Kong is a high performance, open source API gateway, traffic control, and microservice management layer that supports the demanding networking requirements these workloads have. Kong, however, does not force teams into a one-size-fits-all solution. To serve traffic for a deep ecosystem of software and enterprises, Kong comes supplied with a rich plugin ecosystem that extends Kong with features for authentication, traffic control, and more.

Deploying Kong onto Kubernetes has always been an easy process, but integration of services on Kubernetes with Kong was a manual process. That’s why we are excited to announce the Kong Ingress Controller for Kubernetes.

By integrating with the Kubernetes Ingress Controller spec, Kong ties directly into the Kubernetes lifecycle. As applications are deployed and new services are created, Kong will automatically configure itself, live, to serve traffic to these services.

This follows the Kubernetes philosophy of using declarative resources to define what we want to happen, rather than the historical imperative model of configuring servers how we want with a series of steps. In short, we define the end state. The ingress controller and Kubernetes advance the cluster to that state, rather than the end state being a side effect of actions we perform on the cluster.

This automatic configuration can be costly when using load balancers that require a restart, reload, or significant time to update routes. This is the case with the open source nginx ingress controller, which is based on a configuration file that must be reloaded with every change. In a highly available, dynamic environment, this configuration reload can result in downtime or unavailable routes while nginx is being reconfigured. The open source edition of Kong and the Kong Ingress Controller have a full management layer and API, live configuration of targets and upstreams, and durable, scalable state storage using either Postgres or Cassandra that ensures every Kong instance is synced without delay or downtime.

Setting up the Kong Ingress Controller

Next, we’ll show you how easy it is to set up the Kong ingress controller. We have a GitHub example and will walk you through the steps below. You can also follow Kong CTO and Co-Founder, Marco Palladino, through the setup steps in this demo presentation.

 

Getting started is just a matter of installing all of the required Kubernetes manifests, such as the ingress controller Deployment itself, a fronting service, and all of the RBAC components needed for Kong to access the Kubernetes API paths it needs to successfully work.

These manifests will work on any Kubernetes cluster. If you are just getting started, we recommend using minikube for development. Minikube is an officially-provided single node Kubernetes cluster that runs on a virtual machine on your computer, and is the easiest way to get started working with Kubernetes as an application developer.

Installation is as simple as running the following command:

curl https://raw.githubusercontent.com/Kong/kubernetes-ingress-controller/master/deploy/single/all-in-one-postgres.yaml \
| kubectl create -f -

After installing the Kong Ingress Controller, we can now begin deploying services and ingress resources so that Kong can begin serving traffic to our cluster resources. Let’s deploy a test service into our cluster that will serve headers and basic information about our pod back to us.

curl https://raw.githubusercontent.com/Kong/kubernetes-ingress-controller/master/deploy/manifests/dummy-application.yaml \
| kubectl create -f -

This deploys our application, but we still need an Ingress Resource to serve traffic to it. We can create one for our dummy application with the following manifest:

$ echo "
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: foo-bar
spec:
rules:
- host: foo.bar
http:
paths:
- path: /
backend:
serviceName: http-svc
servicePort: 80
" | kubectl create -f -

With our Kong Ingress Controller and our application deployed, we can now start serving traffic to our application.

$ export PROXY_IP=$(minikube   service -n kong kong-proxy --url --format "{{ .IP }}" | head -1)
$ export HTTP_PORT=$(minikube  service -n kong kong-proxy --url --format "{{ .Port }}" | head -1)

$ curl -vvvv $PROXY_IP:$HTTP_PORT -H "Host: foo.bar"

Adding Plugins

Plugins in the Kong Ingress Controller are exposed as Custom Resource Definitions (CRDs). CRDs are third party API objects on the Kubernetes API server that operators can define, allowing for arbitrary data to be used in custom control loops such as the Kong Ingress Controller.

Let’s add the rate limiting plugin to our Kong example, and tie it to our ingress resource. Plugins map one-to-one with ingress resources, allowing us to have fine grained control over how we apply plugins to our upstreams.

$ echo "
apiVersion: configuration.konghq.com/v1
kind: KongPlugin
metadata:
name: add-ratelimiting-to-route
config:
hour: 100
limit_by: ip
second: 10
" | kubectl create -f -

$ kubectl patch ingress foo-bar \
-p '{"metadata":{"annotations":{"rate-limiting.plugin.konghq.com":"add-ratelimiting-to-route\n"}}}'

Now that this is applied, we can cURL our service endpoint again and get the following response:

$ curl -vvvv $PROXY_IP:$HTTP_PORT -H "Host: foo.bar"
> GET / HTTP/1.1
> Host: foo.bar
> User-Agent: curl/7.54.0
> Accept: */*
>
< HTTP/1.1 200 OK
< Content-Type: text/html; charset=ISO-8859-1
< Transfer-Encoding: chunked
< Connection: keep-alive
< X-RateLimit-Limit-hour: 100
< X-RateLimit-Remaining-hour: 99
< X-RateLimit-Limit-second: 10
< X-RateLimit-Remaining-second: 9

We immediately see that our rate limiting headers are applied and available for this endpoint! If we deploy another service and ingress, our rate limit plugin will not apply unless we create a new KongPlugin resource with its own configuration and add the annotation to our new ingress.

Now we have a fully featured API gateway ingress stack, complete with an application and rate limiting for it. You’re ready to take control of your microservices with Kong!

Keep up with the development of the Kong ingress controller, request features, or report issues on our Github repository. We look forward to seeing how you use it!



The API Economy – The Why, What, and How


 

Kong is a proud member of the Andreessen Horowitz portfolio. As part of the a16z family, we are often invited to participate in exciting speaking opportunities. Kong’s CEO and Co-Founder, Augusto Marietti, had the pleasure of participating in a recent podcast, The API Economy – The Why, What, and How. The podcast’s guest panel included fellow a16z portfolio members Laura Behrens Wu, CEO of Shippo, and Cristina Cordova, Business Development and Partnerships at Stripe.

Stripe is building an infrastructure for the movement of money including payment processing; Shippo powers multi-carrier shipping for all kinds of commerce. Kong is the most popular open-source API microservice management platform.

Per a16z’s review of the conversation “APIs (application programming interfaces)….can be described as everything from Lego building blocks to Tetris to front doors to even veins in the human body. Because the defining property of APIs is that they’re ways to send and receive information between different parts, that is, communicate between software applications (which often map onto different organizational functions/services in a company too). APIs, therefore, give companies access to data and competencies they wouldn’t otherwise have — or better yet, that they no longer need — by letting even non-tech and small companies combine these building blocks to get exactly what they want.”

Ultimately, APIs enable companies to focus on their core competencies. The discussion further outlined ways organizations can optimize APIs for use by non-technical stakeholders, such as finance or operations. Additionally, the panel talked about APIs influencing and changing how companies originate. Perhaps a company is designed around an API first model?

Thank you a16z and Sonal Chokshi for hosting and moderating this discussion. To learn more about Kong’s API management platform register here for a demo or watch one of our on-demand webinars covering topics like Microservices, Serverless, Service Mesh, Kubernetes, and Kong Installation.


Thank you Vietnam!


A special thank you to our APAC Kong Channel Partners! We just shared an amazing week in Saigon where the team outlined Kong’s technical capabilities, business strategies, and roadmap for continued success.

It was great spending several days working with our partners. Not only did we share a knowledge transfer of Kong, but we got to meet our incredible partners in person. We view our channel partners as an extension of the Kong team. The time together was an invaluable experience that further strengthened our relationships.

Here are some pictures from our special event:

 

Mia, Dir. of Global Channel Sales kicking off the Partner Briefing

Anantha, APAC Dir. of Sales, presenting Blowing up the Monolith

Raj, APAC SE, and JP, Customer Success Engineer, presenting customer use cases

The event’s free flow discussions were incredibly helpful for everyone.

Sharing an incredible week with our partners – letting loose on the rooftop cocktail party!

Kongers on the loose in Saigon!

 

We’d like to thank the entire team at Hotel des Arts in Saigon. The hotel and staff became a wonderful second home to everyone attending the event.

Our next Kong Partner Briefing takes place June 6th – 8th in Milton Keynes, UK. We can’t wait to meet our EMEA partners!

Visit our partners page to learn more about our incredible global partners.

 


Kong EE 0.32 – Status Code Analytics, Zipkin Tracing, and Much More!


Kong Inc. is thrilled to announce the release of Kong Enterprise Edition (EE) Version 0.32. This new release includes the Routes and Services model, which provides better separation of concerns and allows plugins to be applied to specific endpoints. The release also includes many updates that further enhance full control of the API lifecycle, and introduces the Azure Functions, Zipkin tracing, and Edge Compute plugins that further increase the utility of the Kong microservices API gateway to enterprises implementing modern architectures and going cloud native.

Vitals – Status Code Tracking, Reporting, and Visualizing

    • Status Code tracking (GUI+API)
      • Status Code groups per Cluster – counts of 1xx, 2xx, 3xx, 4xx, 5xx groups across the cluster over time. Visible in the Admin GUI at ADMIN_URL/vitals/status-codes.
      • Status Codes per Service – count of individual status codes correlated to a particular service. Visible in the Admin GUI at ADMIN_URL/services/{service_id}.
      • Status Codes per Route – count of individual status codes correlated to a particular route. Visible in the Admin GUI at ADMIN_URL/routes/{route_id}.
      • Status Codes per Consumer and Route – count of individual status codes returned to a given consumer on a given route.  Visible in the Admin GUI at ADMIN_URL/consumers/{consumer_id}.

 

Vitals: request volume and latency performance monitoring.

 

Vitals: status code analytics.

 

Vitals: datastore cache performance

Routes & Services Replaces APIs

Kong’s new resource entities “Services” and “Routes” provide the separation of concerns between downstream clients and upstream services that the previous singular “API” entity could not. Common use-cases become easier to reason about, and you benefit from extra granularity when applying plugins. These new entities considerably improve usability and flexibility at the same time, while reducing potential repetitive configuration. Find more information and examples on Services and Routes in the Kong Community Edition 0.13.0 Release Post.

Rate Limiting Plugins

Kong EE 0.32 now ships with both “rate-limiting” and “rate-limiting-advanced” plugins. This gives administrators the flexibility to choose the configuration level that best meets the needs of their specific use case.

Azure Functions Plugin

The new Azure Functions plugin allows Kong to route API requests to Microsoft’s serverless service. Whether you are refactoring monoliths into microservices and serverless functions or doing greenfield serverless projects, Kong streamlines your journey. Stay tuned for our upcoming webinar on Azure Functions and other not-yet-announced integrations with Microsoft Azure. The Azure Functions plugin is available today, bundled with Kong EE 0.32, and it will be bundled with Kong Community Edition (CE) starting with the upcoming CE version 0.14.

Zipkin (and Jaeger!) Tracing  Plugin

The new Zipkin Tracing plugin lets you log timing information to a Zipkin compatible server. If this plugin is enabled, Kong will propagate B3 headers to enable distributed tracing, as well as send spans to a specified Zipkin (or Jaeger) server. Distributed tracing is a popular observability pattern in Kubernetes and Cloud Native infrastructures (and just in case you missed it, check out Kong’s recently announced Kubernetes Ingress Controller). The Zipkin plugin is available today, bundled with Kong EE 0.32.
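
As an illustration, enabling it globally through the Admin API might look like this (a sketch; the config field names are those of Kong’s open source Zipkin plugin, and the collector URL is illustrative):

$ curl -i -X POST http://localhost:8001/plugins \
    --data "name=zipkin" \
    --data "config.http_endpoint=http://zipkin.example.com:9411/api/v2/spans" \
    --data "config.sample_ratio=0.25"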

Edge Compute Plugin

While Kong has long supported custom plugins, writing such plugins and deploying them to your Kong infrastructure hasn’t been easy for everyone. Kong’s new Edge Compute plugin allows Kong Admins to quickly deploy new snippets of Lua code to be run at the start or end of specified request/response cycles. Your Lua code is updated via Kong Admin API calls, and the Edge Compute plugin can be quickly configured to run on any specified Route, Service, or Consumer. This plugin is available in a Preview version, bundled with Kong EE 0.32 – please try it out and share how you use it in Kong Nation.

Many More Improvements to Kong Enterprise Plugins

  • The basic Request Transformer plugin from Kong CE is now included in Kong EE, along with Kong EE’s Request Transformer Advanced plugin.
  • The Proxy Cache plugin now allows customizing the cache key, selecting specific headers or query params to be included.
  • A new EE-only LDAP Advanced plugin includes augmented ability to search by LDAP fields.
  • Kong’s industry-leading OpenID Connect plugin has many improvements and expanded configuration options.

Kong Dev Portal

  • Code Snippets
  • Developer “request access” full life-cycle
  • Default Dev Portal included in Kong distro (with default theme)
  • Authentication on Dev Portal out of box (uncomment in Kong.conf)
  • Docs for Routes/Services for Dev Portal
  • Docs for Admin API

Numerous Improvements to Kong Admin GUI

  • Routes and Services GUI
  • New Plugins thumbnail view
  • Health Checks / Circuit Breakers GUI

If you are a Kong Enterprise subscriber, we encourage you to read the 0.32 Changelog prior to upgrading. Contact your Customer Success Engineer with your specific questions.

Not using Kong Enterprise yet? Request a demo from Kong API experts and explore using Kong EE to secure and scale your microservices APIs.

 

 

The post Kong EE 0.32 – Status Code Analytics, Zipkin Tracing, and Much More! appeared first on KongHQ.

Kong at DockerCon 2018


 

The Kong team is thrilled to be taking part in DockerCon 2018 in San Francisco this coming June 12 – 15. DockerCon is the premier container conference where technology leaders come together to learn and discuss the Docker ecosystem.

We will be hosting a booth at the Welcoming Reception on Tuesday evening and during the Ecosystem Expo taking place Wednesday June 13 and Thursday June 14.

Come say hi! Meet the engineers behind the most widely used open source API gateway.

They will be ready to talk to you about the intersection between Kong and the Docker ecosystem, microservices, containers, and the recently released Kong Community Edition 0.13 and Kong Enterprise Edition 0.32.

We will be raffling a couple of Sonos devices at our booth and will also have t-shirts and stickers for you!

Event Details:
DockerCon
June 12 – 15, 2018
Moscone Center, San Francisco
Meet us at Booth S20

 

Don’t have your tickets yet? Get a 20% discount on us when you use promo code SPONSOR20

 

The post Kong at DockerCon 2018 appeared first on KongHQ.

Kong CE 0.13 Release Presentation by Kong Principal Engineer Thibault Charbonnier


 

 

Kong Community Edition (CE) 0.13 was released in late March 2018. Some of the biggest updates in this release include Services and Routes as new core entities, and a new syntax for the proxy_listen and admin_listen directives that lets you disable the Proxy or the Admin API entirely, enabling the separation of Kong control planes and data planes.

In this presentation, Thibault Charbonnier, Principal Engineer at Kong, presents on these updates and other important changes included in Kong CE 0.13. Topics covered include:

  • Native Clustering
  • DNS Resolution
  • Health Checks and Circuit Breakers
  • Control & Data Planes
  • Services & Routes

The post Kong CE 0.13 Release Presentation by Kong Principal Engineer Thibault Charbonnier appeared first on KongHQ.

Thinking of Moving to Microservices? Five Questions You Need to Answer



This is the first of two blogs examining considerations for transitioning to a microservices-based architecture. For more information, check out our e-book Blowing Up the Monolith: Adopting a Microservices-Based Architecture.

 

Making the decision to transition from a monolithic architecture to microservices cannot be taken lightly. The time and resources needed to undertake a move to microservices are substantial, and it’s essential to carefully weigh the pros and cons before blowing up your monolith. When debating whether to make the transition, it’s important to take a holistic view of how microservices will impact your organization. Though there are clear advantages to microservices, such as improvements in performance, ease of deployment, and scalability, there are also situations in which maintaining a monolithic architecture remains the better choice.

When considering making the shift to microservices, be sure that you can confidently answer the following five questions:

  1. What are Your Goals?
    As a first step, you should make sure that a microservices-oriented architecture aligns with your organizational goals. Make a list of the key objectives and initiatives you hope to accomplish. For example, you may want to free up resources, increase flexibility in deploying or updating applications, or better ensure scalability. These goals will provide the backbone for making the decision whether or not to move to microservices.
  2. What are the Pain Points and Boundaries?
    With your goals outlined, identify the biggest pain points and boundaries within your monolithic codebase. View your monolith as a collection of services and note which aspects pose challenges to achieving your goals. For example, your monolith may limit flexibility in deploying applications due to the length of the development cycle. While doing this, avoid spending too much time “sizing” these services as it pertains to the amount of code behind them. There is always going to be time in the future to decouple services even further as you learn the pain points of building and operating under this new architecture.
  3. How Will Microservices Help Achieve Your Goals?
    Now that you have a good view of the issues caused by your monolithic codebase, you can map the benefits of Microservices directly to your organizational needs. For example, we can draw a line to the greater flexibility offered by microservices in deploying or updating the application thanks to shorter and more focused development cycles. This carries over into a far more efficient overall application development process, which frees up resources. With the anticipated benefits well understood, you can make a compelling case and gain organizational buy-in.
  4. Are You Resourced Appropriately?
    As you consider transitioning to microservices, it’s important to keep in mind that your existing business will still be running and growing on the monolith. To avoid interruptions and unforeseen complications, your organization must simultaneously maintain the old codebase and work on the new one. An effective method of doing this requires creating two teams to split the work. Doing this, however, means that a keen eye must be kept on resource allocation. A common side effect of this split workload is friction across the two teams, since maintaining a large monolithic application is not as exciting as working on new technologies. This problem can be more easily dealt with in a larger team, where team members can rotate between the two projects.
  5. What is the Time Horizon?
    From the considerations above, it’s clear that transitioning from a monolith architecture to microservices cannot be done overnight. It’s essential that expectations for the undertaking are managed appropriately across the organization to avoid frustrations or resource conflicts. Implementing a microservices architecture provides numerous benefits, but it’s not a quick fix. As you build out your transition plan, be conservative in estimating the amount of time it will take. When in doubt, overbudget time in order to avoid missed deadlines and frustration among team members.

 

As mentioned, a common driver for moving to microservices is the argument that maintaining a monolithic codebase is inefficient and hinders the organizational pursuit of business agility. However, this does not mean that transitioning to microservices will be easy. Check back for our next blog where we’ll examine the tactical aspects of transitioning from a monolith to microservices.

 

The post Thinking of Moving to Microservices? Five Questions You Need to Answer appeared first on KongHQ.

Reducing Deployment Risk: Canary Releases and Blue/Green Deployments with Kong


When we build software, it’s critical that we test and roll out the software in a controlled manner. To make this happen, we use the available tools and best practices to verify that the software works as intended. We conduct code reviews, execute all the possible unit, integration, and functional tests, and then do it all again in a staging or QA environment that mimics production as closely as possible. But these are just the basics. Eventually, the proof of the pudding is in the eating… so let’s head to production.

When going to production with new releases there are two methods of reducing deployment risk that, though proven, are often underutilized. The methods are Canary Releases and Blue/Green Deployments. Typically, we’ll only use these methods after all other QA has been passed as they both directly work with production traffic.

Now, let’s take a closer look at how Kong can help us.

Canary Releases

A canary release exposes a limited amount of production traffic to the new version we are deploying. For example, we would route 2% of all traffic to the new service to test it with live production data. By doing this, we can test our release for any unexpected regressions, such as application integration issues or resource usage (CPU/memory/etc.) that didn’t show up in our test environments.

 

With Kong, we can easily create a Canary release by using Kong’s load balancer features or the Canary Release plugin (Kong Enterprise Edition only).

Canary with Kong Community Edition

With Kong Community Edition (CE) we can use Kong’s load balancer (and its Upstream entity) to help us out. Let’s say we have the following configuration:

  • An Upstream my.service, containing 1 Target 1.2.3.4 with weight 100, running the current production instance
  • A Service that directs traffic to http://my.service/

By adding additional Targets with a low weight to the Upstream, we can now divert traffic to another instance. Let’s say we have our new version deployed on a system with IP address 5.6.7.8. If we then add that Target with a weight of 1, it should get approximately 1% of all traffic. If that service works as expected, we can increase the weight (or decrease the weight of the existing one) to get more traffic. Once we’re satisfied, we can set the weight of the old version to 0, and we’ll be completely switched to the new version.

When things don’t go as expected, we can set the new Target’s weight to 0 to roll back and resume serving all traffic from our existing production nodes.
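As a minimal sketch of this flow (the Admin API address and the port on the new Target are assumptions, carried over from the example above), the Target weights are managed through the Upstream’s /targets endpoint:

$ # Divert roughly 1% of traffic to the new instance
$ curl -i -X POST http://localhost:8001/upstreams/my.service/targets \
    --data 'target=5.6.7.8:80' \
    --data 'weight=1'
$ # Roll back by posting the same Target again with weight 0
$ curl -i -X POST http://localhost:8001/upstreams/my.service/targets \
    --data 'target=5.6.7.8:80' \
    --data 'weight=0'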

When using the default balancer settings, it will divert requests randomly. Because of this, consumers of your API may be “flip-flopping” between the new and old versions. This occurs because the balancer scheme is weighted round robin by default. If we configure the balancer to use the consistent hashing methods, however, we can make sure that the same consumers always end up on the same back-end and prevent this flip-flopping.
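For example, assuming the same Upstream, switching the balancer to hash on the client IP could be sketched as follows (hash_on also accepts other inputs such as consumer or header, depending on your Kong version):

$ curl -i -X PATCH http://localhost:8001/upstreams/my.service \
    --data 'hash_on=ip'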

It’s important to note that this example uses a single Target for each version of the service. We can add more Targets, but manual management becomes progressively more difficult. Fortunately, Kong’s Canary Release plugin solves this challenge.

Canary with Kong Enterprise Edition

With the Kong Enterprise Edition-only Canary Release plugin, executing Canary Releases is even easier. In the CE approach above we had to manually manage the Targets for the Upstream entity to route the traffic there. With the Canary plugin, we can set an alternate destination for the traffic, identified by a hostname (or IP address), a port, and a URI. In this case it is easier not to add Targets, but rather to create another load balancer dedicated to the new version of your Service.

Now, let’s say we have the same configuration as above:

  • An Upstream my.v1.service, containing 1 target 1.2.3.4 with weight 100, running the current production instance
  • A Service that directs traffic to http://my.v1.service/

The difference here is that we now have a version in the Upstream name. For the Canary Release we’ll add another upstream like this:

  • An Upstream my.v2.service, containing 1 Target 5.6.7.8 with weight 100, running the new version
  • A Canary Release plugin configured on the Service that directs 1% of traffic to host my.v2.service

Now we can have as many Targets in each Upstream as we want, and only need to control one setting in the Canary Release plugin to determine how much traffic is redirected. This makes it much easier to manage the release, especially in larger deployments.
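A rough sketch of what enabling the plugin could look like is shown below; the endpoint shape and the config field names (config.upstream_host and config.percentage) are assumptions based on the description above, so check the Kong EE plugin documentation for the exact schema:

$ curl -i -X POST http://localhost:8001/services/my-service/plugins \
    --data 'name=canary' \
    --data 'config.upstream_host=my.v2.service' \
    --data 'config.percentage=1'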

Besides explicitly setting a percentage of traffic to route to the new destination, the Canary Release plugin also supports timed (progressing) releases and releases based on groups of consumers. The group feature allows a gradual rollout to specific groups of people. For example, we can use it to first add testers, then employees, and then everyone. The group feature will be available with Kong Enterprise Edition 0.33.

Benefits of the Kong EE Canary Release plugin:

  • No manual tracking of Targets
  • Canary can also be done on URI, instead of only IP/port combo
  • Use versioned Upstreams to make it easier to manage the system
  • Release based on groups

Blue/Green Deployments

Where a Canary Release can still be considered testing, a Blue/Green release is really a release – an all-or-nothing switch. Blue/Green releases work by having two identical environments, one Blue and the other Green. At any given time, one of them is staging and the other is running production. When a release is ready in staging, the roles of the two environments switch. Now our staging becomes production, and our production becomes staging.

This simple setup is very powerful. It allows us to test everything in staging as it is identical to production. Even after the switch, the staging environment (former production) can hang around for a bit in case something does not work out as planned and we need to quickly roll-back.

With Kong, doing a Blue/Green release is simple. Just create two Upstreams and, when you want to switch traffic, execute a PATCH request to update the Service to point to the other Upstream.
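A minimal sketch of that switch, assuming a Service named my-service and two Upstreams named blue.service and green.service (all placeholder names):

$ # Production currently points at the Blue environment
$ curl -i -X PATCH http://localhost:8001/services/my-service \
    --data 'host=blue.service'
$ # Switch all traffic to the Green environment in a single step
$ curl -i -X PATCH http://localhost:8001/services/my-service \
    --data 'host=green.service'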

From Here…

The high-level view always seems easy, but the reality is always more challenging. From an application perspective, there are several caveats we need to consider. How do we handle long-running connections/transactions? How do we deal with updated database schemas while (at least temporarily) running the two in parallel?

Using Kong will not make all issues magically disappear, but it will provide you with powerful tools to reduce risk and simplify the release process.

Happy releasing!

The post Reducing Deployment Risk: Canary Releases and Blue/Green Deployments with Kong appeared first on KongHQ.


Bletchley Park hosts Kong’s EMEA Partner Briefing


A special thank you to our EMEA Kong Channel Partners! We just shared an amazing week at Kong’s new UK office located in Bletchley Park where the team outlined Kong’s technical capabilities, business strategies, and roadmap for continued success.

It was great spending several days working with our partners. Not only did we share a knowledge transfer of Kong, but we got to meet our incredible partners in person. We view our channel partners as an extension of the Kong team. The time together was an invaluable experience that further strengthened our relationships.

In addition to Kong’s workshops and knowledge transfer sessions, we also toured The National Museum of Computing. “The National Museum of Computing, located on Bletchley Park, is an independent charity housing the world’s largest collection of functional historic computers, including the rebuilt Colossus, the world’s first electronic computer, and the WITCH, the world’s oldest working digital computer. The museum enables visitors to follow the development of computing from the ultra-secret pioneering efforts of the 1940s through the large systems and mainframes of the 1950s, 60s and 70s, and the rise of personal computing in the 1980s and beyond.”

Kong’s Bletchley Park Office

It’s an honor to have Kong’s first European office located in such a historical location.

Kong’s new UK Office – Bletchley Park, the birthplace of computing

Kong’s UK Office, Bletchley Park

Here are some pictures of our special event:

Sandeep Singh Kohli, Head of Marketing, presenting on microservices adoption

Marco Palladino, CTO, presenting on service mesh


Great partner dinner at The Navigation Inn – Cosgrove, Milton Keynes

We’d like to thank the entire team at Bletchley Park. The museum and staff became a wonderful second home to everyone attending the event. Kong is looking forward to building our European team in this incredible location for years to come.

Our next Kong Partner Briefing takes place July 10th – 12th in Santiago, Chile. We can’t wait to meet our LATAM partners!

Visit our partners page to learn more about our incredible global partners.

The post Bletchley Park hosts Kong’s EMEA Partner Briefing appeared first on KongHQ.

So You’ve Decided to Transition to Microservices, What Now?


This is the second of two blogs examining considerations for transitioning to a microservices-based architecture. For more information, check out our e-book Blowing Up the Monolith: Adopting a Microservices-Based Architecture. In our previous blog, we outlined the five questions we must consider before making the transition to a microservices architecture.

Now that we have a better understanding of the benefits and challenges of a transition to microservices, we need to understand how to make the transition from a technical perspective. There are three primary strategies we can adopt to transition to microservices – the Ice Cream Scoop, the Lego, and the Nuclear Option. Before we evaluate the pros and cons of each of these, we need to identify our boundaries and put testing in place. This process will be the same for any strategy we choose, but it’s important not to overlook it, as it will fundamentally shape our success as we dive into the transition.

To identify the boundaries of our monolith, we must first figure out what services need to be created or broken out from the monolithic codebase. To do this, we can envision what our architecture will look like in a completed microservice architecture. This means understanding how big or how small we want our services to be and how they will be communicating with each other. A good place to start is by examining the boundaries that are most negatively impacted by the monolith, for example, those that we deploy, change or scale more often than the others.

As for testing, transitioning to microservices is effectively a refactoring, and we need to take all the regular precautions we would before a “regular” refactoring. A best practice here is to put a solid and reliable suite of integration and regression tests in place for the monolith before attempting any change. Some of these tests will likely fail along the way, but having well-tested functionality will help to track down what is not working as expected. With our testing and boundary identification completed, let’s look at our three strategies for transitioning to microservices.

  1. Ice Cream Scoop Strategy
    This strategy implies a gradual transition from a monolithic application to a microservice architecture by “scooping out” different components within the monolith into separate services. Given the gradual nature of this strategy, there will be a period where monolith and microservices will exist simultaneously. The advantages of this are that our gradual migration reduces risk without impacting the uptime and end-user experience. This gradual transition, however, is also a drawback as the process will take longer to fully execute.
  2. Lego Strategy
    This strategy entails only building new features as microservices, and it is ideal for organizations that want to maintain their existing monolith. Using the Lego strategy will not resolve issues with our existing monolithic codebase, but it will fix problems for future expansions of the product. This option calls for stacking the monolith and microservices on top of each other in a hybrid architecture. The primary advantages here are speed and reduced effort due to not needing to do much work on the monolith. The primary disadvantages are that the monolith will continue having its original problems and new APIs will likely need to be created to support the microservice-oriented features. This strategy can help buy time before a larger refactoring, but it ultimately risks adding more tech debt.
  3. Nuclear Option Strategy
    Our final option is rarely used. The Nuclear Option requires rewriting the entire monolithic application into microservices all at once. We may still support the old monolith with hotfixes and patches, but we would build every new feature in the new codebase. The main advantage is that this allows the organization to re-think how things are done and effectively rewrite the app from scratch. The disadvantage is that it requires rewriting the app from scratch, which could create unforeseen issues. The Nuclear Option may also inadvertently cause “second system syndrome” where end users will need to deal with a stalled monolith until the new architecture is ready for deployment.

While transitioning to microservices will always require substantial effort, choosing the correct strategy for your organization can substantially reduce friction during the process. No matter which strategy we choose to make the transition, it’s critical that we effectively communicate expectations and requirements to team members. With a clear strategy outlined and agreed upon, we are well on our way to blowing up our monolith and reaping the benefits of a microservices architecture.

The post So You’ve Decided to Transition to Microservices, What Now? appeared first on KongHQ.

What I Saw at DockerCon 2018


You would think the world is falling apart; or rather it seems that way, and I’m only really talking about the world of software. I’ll leave politics out of this. It’s not that the world of software is falling apart; or rather our applications are. What once were gleaming monoliths, monuments to our own achievement are now being hacked and splintered into microservices. Rather this is the natural cycle that our industry seems to follow every few decades. The boom and bust, the macro and the micro. Migrating from one paradigm to another in search of performance, flexibility, and control. Docker is driving home the sharding of the monolith, pushing the composable unit of computation. I see this in my role at Kong as a Customer Success Engineer; working with people who are migrating to, or asking questions about, microservices.

I was a visitor to DockerCon 2018 in San Francisco. Everywhere I turned I was faced with the prospect of the new order, microservice.

Optimize for the micro

The first session I attended was about how to make my Docker images as small as they could be to enable them to be quick across the wire, to make them easy to build on, and flexible to use among different teams. Smaller images allow for quicker deploy times and less complexity during development. It is doubly important with microservices to be able to quickly spin up new images, upgrade to a newer version of a critical app, and migrate traffic between versions with zero downtime. Small images help enable microservices in that they allow for faster deployments and faster development. The message is clear: optimize for size and you optimize for speed.

The new unit of collaboration

In a talk by Gareth Rushgrove, he touched on Docker and all the spaces where it could still add value, hitting home on the idea of collaboration and sharing. He sees room for Docker to be the unit of composability. Where we used to zip up WordPress and pass it around, we would now share Docker Compose files: configurations for all kinds of applications made up of containers of disparate and smaller apps. One Docker Compose file to spin up an application, a database, a caching server, an auth server, a real-time logging and monitoring server. A series of blueprints to tie together many smaller services, ready to work in unison and deployable with a series of straightforward commands. Docker is enabling the complexity of separation to be distilled into a unit of simplicity.

Docker allows for a buffer layer between small services and complexity. It enables working with separate concerns as a single unit. While still allowing for the single units to change easily when needed. This abstraction, while appearing to add just another layer, allows for separation of concerns letting each service do its job. When they communicate using a common protocol and language it makes it easier to change out parts when needed.

A monolithic transformation

Microservices are not just for greenfield work. There are plenty of old monoliths sitting around, just waiting to be refactored into a shiny new collection of services, each isolated on its own. In the talk 5 Patterns for Success for Application Transformation I was presented with five strategies to use when splitting a monolith into microservices: how to get logs out, how to get configuration in, how to check dependencies, how to share service health around the platform, and how to expose metrics for services.

Each topic is important to a smooth running microservices platform. I won’t go into details, but the message was clear; microservices are coming, and doing so in many forms new and old. There is a way forward, where everyone can play. Just remember to ask yourself the five questions you need to know when moving to microservices.

Tie it all together in a mesh

And then there was the talk on service mesh, the information superhighway of any microservice platform. The internal communication conduits that link the services together, that enable quick service discovery and speed to assemble a response to every request. The presenter, Tony Pujals, said routing rules and traffic policies are at the heart of a microservices platform. They enable zero downtime upgrades, logging, distributed tracing, and extensibility. They enable the smooth running of any microservices platform.

The monuments are toppling, falling into small piles, not of rubble, but of services, not of crumbling rock, but of composable resources; the monoliths are being chipped away into microservices, and Docker is driving the boat.

 


Ukiah Smith at DockerCon 2018

The post What I Saw at DockerCon 2018 appeared first on KongHQ.

Highly Available Microservices with Health Checks and Circuit Breakers


Developers are turning to microservices in greater numbers to help break through the inflexibility of old monolithic applications. Microservices offer many benefits, including separation of concerns, team autonomy, fault tolerance, and scaling across multiple regions for high availability.

However, microservices also present challenges, including more infrastructure complexity to manage. You have more services to monitor for availability and performance. It’s also a challenge to balance load and route around failures to maintain high availability. If your services are stateful, you need to maintain persistent connections between clients and instances. Most API developers would prefer to have a system manage these infrastructure complexities so they can focus on the business logic.

In this article, we’ll describe how algorithms for load balancing help you deliver highly available services. Then, we’ll also show an example of how Kong makes it easier to deliver high availability with built-in health checks and circuit breakers. Kong is the world’s most popular open source API management platform for microservices. With Kong, you get more control and richer health checks than a typical load balancer.

Intro to load balancing

Load balancing is the practice of distributing client request load across multiple application instances for improved performance. Load balancing distributes requests among healthy hosts so no single host gets overloaded.

A typical load balancing architecture showing that clients make requests to a load balancer, which then passes (or proxies) requests to the upstream hosts. Clients can be a real person or a service calling another service, and they can be external or internal to your company.

The primary advantages of load balancing are higher availability, highly performing application services, and improved customer experience. Load-balancing also lets us scale applications up and down independently and provides an ability to self-heal without app down time. It also lets us significantly improve speed to market by enabling a rolling or “canary” deployment process, so we can see how deployments are performing on a small set of hosts before rolling out across the entire cluster.

Important load balancer types

There are several algorithms or processes by which load can be balanced across servers: DNS, round robin, and ring balancer.

Domain Name Server (DNS) load balancing

The DNS load balancing process starts by configuring a domain in the DNS server with multiple-host IP addresses such that clients requests to the domain are distributed across multiple hosts.

In most Linux distributions, DNS by default sends the list of host IP addresses in a different order each time it responds to a new application client. As a result, different clients direct their requests to different servers, effectively distributing the load across the server group.

The disadvantage is that clients often cache the IP address for a period of time, known as time to live (TTL). If the TTL is minutes or hours, it can be impractical to remove unhealthy hosts or to rebalance load. If it’s set to seconds, you can recover faster but it also creates extra DNS traffic and latency. It’s better to use this approach with hosts that are highly performant and can recover quickly, or on internal networks where you can closely control DNS.

Round robin

In the round robin model, clients send requests to a centralized server which acts as a proxy to the upstream hosts. The simplest algorithm is called “round robin.” It distributes load to hosts evenly and in order. The advantage over DNS is that your team can very quickly add hosts during times of increased load, and remove hosts that are unhealthy or are not needed. The disadvantage is that each client request can get distributed to a different host, so it’s not a good algorithm when you need consistent sessions.

Ring balancer

A ring balancer allows you to maintain consistent or “sticky” sessions between clients and hosts. This can be important for web socket connections or where the server maintains a session state.

It works similarly to the round robin model because the load balancer acts as a proxy to the upstream hosts. However, it uses a consistent hash that maps the client to the upstream host. The hash must use a client key, such as the client’s IP address. When a host is removed, it affects only 1/N requests, where N is the number of hosts. Your system may be able to recover the session by transferring data to the new hosts, or the client may restart the session.

In the graphic below, we have 4 nodes that balance load across 32 partitions. Each client key is hashed and is mapped to one of the partitions. When a single node goes down, a quarter of partitions need to be reassigned to healthy nodes. The mapping from client to partition stays consistent even when nodes are added or removed.

 

Health checks and circuit breakers improve availability

Health checks can help us detect failed hosts so the load balancer can stop sending requests to them. A host can fail for many reasons: it may simply be overloaded, the server process may have stopped running, a deployment may have failed, or the code may be broken, to list a few. This can result in connection timeouts or HTTP error codes. Whatever the reason, we want to route traffic around the failed host so that customers are not affected.

Active health checks

In active health checks, the load balancer periodically “probes” upstream servers by sending a special health check request. If the load balancer fails to get a response back from the upstream server, or if the response is not as expected, it disables traffic to the server. For example, it’s common to require that the response from the server include a 200 OK HTTP code. If the server times out or responds with a 500 Server Error, then it is not healthy.

The disadvantage is that active health checks only use the specific rule they are configured for, so they may not replicate the full set of user behavior. For example, if your probe checks only the index page, it could be missing errors on a purchase page. These probes also create extra traffic and load on your hosts as well as your load balancer. In order to quickly identify unhealthy hosts, you need to increase the frequency of health checks which creates more load.

Passive health checks

In passive health checks, the load balancer monitors real requests as they pass through. If the number of failed requests exceeds a threshold, it marks the host as unhealthy.

The advantage of passive health checks is that they observe real requests, which better reflects the breadth and variety of user behavior. They also don’t generate additional traffic on the hosts or load balancer. The disadvantages are that users are affected before the problem is recognized, and you still need active probes to determine whether hosts with unknown states are healthy.

We recommend you get the best of both worlds by using both passive and active health checks. This minimizes extra load on your servers while allowing you to quickly respond to unexpected behavior.

Circuit breakers

When you know that a given host is unhealthy, it’s best to “break the circuit” so that traffic flows to healthy hosts instead. This provides a better experience for end users because they will encounter fewer errors and timeouts. It’s also better for your host because diverting traffic will prevent it from being overloaded and give it a chance to recover. It may have too many requests to handle, the process or container may need to be restarted, or your team may need to investigate.

Circuit breakers are essential to enable automatic fault tolerance in production systems. They are also critical if you are doing blue-green or canary deployments. These allow you to test a new build in production on a fraction of your hosts. If the service becomes unhealthy, it can be removed automatically. Your team can then investigate the failed deployment.

What is Kong?

Kong is the most popular open source API gateway for microservices. It’s very fast with sub-millisecond latency, runs on any infrastructure, and is built on top of reliable technologies like NGINX. It has a rich plug-in ecosystem that allows it to offer many capabilities including rate limiting, access control and more.

Kong allows load balancing using the DNS method, and its ring balancer offers both round robin and hash-based balancing. It also provides both passive and active health checks.

A unique advantage of Kong is that both active and passive health checks are offered for free in the Community Edition (CE). Nginx offers passive health checks in the community edition, but active health checks are included only in the paid edition, Nginx Plus. Amazon Elastic Load Balancers (ELB) don’t offer passive checks. Also, depending on your use, it may cost more than running your own instance of Kong. Kubernetes liveness probes offer only active checks.

 

                 Nginx       Amazon ELB   Kubernetes   Kong CE
Active Checks    Plus only   Yes          Yes          Yes
Passive Checks   Yes         No           No           Yes

 

The Kong Enterprise edition also offers dedicated support, monitoring, and easier management. The Admin GUI makes it easy to add and remove services, plugins, and more. Its analytics feature can take the place of more expensive monitoring systems.

See it in action

Let’s do a demo to see how easy it is to configure health checks in Kong. Since they are familiar to many developers, we’ll use two Nginx servers as our upstream hosts. Another container running Kong will perform health checks and load balancing. When one of the hosts goes down, Kong will recognize that it is unhealthy and route traffic to the healthy container.

In this example, we’re going to use Docker to set up our test environment. This will allow you to follow along on your own developer desktop. If you are new to Docker, a great way to learn is with the Katacoda tutorials. You don’t need to install anything and can learn the basics in about an hour.

Step 1: Add two test hosts

Let’s install two test hosts that will respond to our requests. In this example, we will use two Nginx Docker containers, one of which we’ll configure to be healthy and one unhealthy. They will each be on separate ports so Kong can route to each.

First let’s create our healthy container. It will respond with “Hello World!” We’ll set this up using a static file and mount it in our container’s html directory.

$ mkdir ~/host1
$ echo "Hello World!" > ~/host1/index.html
$ docker run --name host1 -v ~/host1:/usr/share/nginx/html:ro -p 9090:80 -d nginx
$ curl localhost:9090
Hello World!

Next, let’s create our unhealthy container. We’ll configure Nginx to respond with a 500 Server Error. First, copy the default nginx config.

$ docker cp host1:/etc/nginx/conf.d/default.conf ~

Then edit the location to return a 500 error.

$ vim ~/default.conf
location / {
    return 500 'Error\n';
    root   /usr/share/nginx/html;
    index  index.html index.htm;
}

Now start up the container and test it to make sure it returns the error.

$ docker run --name host2 -v ~/default.conf:/etc/nginx/conf.d/default.conf -p 9091:80 -d nginx
$ curl -i localhost:9091
HTTP/1.1 500 Internal Server Error
...
Error

Step 2: Install Kong

Kong can be installed in a wide variety of environments. We will follow the Docker instructions since they are relatively easy to test on a developer desktop.
First, we need a database where Kong can store its settings. We’ll use Postgres since it’s easy to set up a test container in Docker.

$ docker run -d --name kong-database \
              -p 5432:5432 \
              -e "POSTGRES_USER=kong" \
              -e "POSTGRES_DB=kong" \
              postgres:9.4

Next, we need to initialize the database.

$ docker run --rm \
                  --link kong-database:kong-database \
                  -e "KONG_DATABASE=postgres" \
                  -e "KONG_PG_HOST=kong-database" \
                  -e "KONG_CASSANDRA_CONTACT_POINTS=kong-database" \
                  kong:latest kong migrations up

Now let’s start the Kong container. These options use default ports and connect to our Postgres database.

$ docker run -d --name kong \
                  --link kong-database:kong-database \
                  -e "KONG_DATABASE=postgres" \
                  -e "KONG_PG_HOST=kong-database" \
                  -e "KONG_CASSANDRA_CONTACT_POINTS=kong-database" \
                  -e "KONG_PROXY_ACCESS_LOG=/dev/stdout" \
                  -e "KONG_ADMIN_ACCESS_LOG=/dev/stdout" \
                  -e "KONG_PROXY_ERROR_LOG=/dev/stderr" \
                  -e "KONG_ADMIN_ERROR_LOG=/dev/stderr" \
                  -e "KONG_ADMIN_LISTEN=0.0.0.0:8001" \
                  -e "KONG_ADMIN_LISTEN_SSL=0.0.0.0:8444" \
                  -p 8000:8000 \
                  -p 8443:8443 \
                  -p 8001:8001 \
                  -p 8444:8444 \
                  kong:latest

Verify Kong is running on port 8001 and returns a 200 OK response. That means it’s working.

$ curl -i localhost:8001/apis
HTTP/1.1 200 OK

Step 3: Configure Kong to use our test hosts

Now we want to connect Kong to our test hosts. The first step is configuring an API in Kong. “API” is just a historic term since Kong can load balance any HTTP traffic, including web server requests. I’m going to call our API “mytest” since it’s easy to remember. I’m also setting the connection timeout to 5 seconds because I’m too impatient to wait the default 60 seconds. If you want to learn more about creating APIs, see Kong’s documentation.

$ curl -i -X POST \
   --url http://localhost:8001/apis/ \
   --data 'name=mytest' \
   --data 'hosts=mytest.com' \
   --data 'upstream_url=http://mytest/' \
   --data 'upstream_connect_timeout=5000'

Next, we have to add an upstream for our API. This allows me to specify an active health check to probe my servers every 5 seconds. Additionally, they will be marked as unhealthy after two consecutive HTTP failures.

$ curl -i -X POST http://localhost:8001/upstreams/ \
            --data 'name=mytest' \
            --data 'healthchecks.active.healthy.interval=5' \
            --data 'healthchecks.active.unhealthy.interval=5' \
            --data 'healthchecks.active.unhealthy.http_failures=2' \
            --data 'healthchecks.active.healthy.successes=2'

Now we can add targets to the upstream we just created. These will point to the Nginx servers we just created in Step 1. Use the actual IP of your machine, not just the loopback address.

$ curl -i -X POST http://localhost:8001/upstreams/mytest/targets --data 'target=192.168.0.8:9090'
$ curl -i -X POST http://localhost:8001/upstreams/mytest/targets --data 'target=192.168.0.8:9091'

Kong should be fully configured now. We can test that it’s working correctly by making a GET request to Kong’s proxy port, which is 8000 by default. We will pass in a header identifying the host, which is tied to our API. We should get back a response from our Nginx server saying "Hello World!"

$ curl -H "Host: mytest.com" localhost:8000
Hello World!

Step 4: Verify health checks

You’ll notice that Kong is not returning a 500 error, no matter how many times you call it. So what happened to host2? You can check the kong logs to see the status of the health check.

$ docker logs kong | grep healthcheck
2018/02/21 20:00:05 [warn] 45#0: *17672 [lua] healthcheck.lua:957: log(): [healthcheck] (mytest) unhealthy HTTP increment (1/2) for 172.31.18.188:9091, context: ngx.timer, client: 172.17.0.1, server: 0.0.0.0:8001
2018/02/21 20:00:10 [warn] 45#0: *17692 [lua] healthcheck.lua:957: log(): [healthcheck] (mytest) unhealthy HTTP increment (2/2) for 172.31.18.188:9091, context: ngx.timer, client: 172.17.0.1, server: 0.0.0.0:8001

Kong is automatically detecting the failed host by incrementing its unhealthy counter. When it reaches the threshold of 2, it breaks the circuit and routes requests to the healthy host.
Next, let’s revert the Nginx config back so it returns a 200 OK code. We should see that Kong recognized it as healthy and it now returns the default Nginx page. You might need to run it a few times to see host2 since Kong doesn’t switch every other request.

$ docker cp host1:/etc/nginx/conf.d/default.conf ~
$ docker container restart host2
$ curl -H "Host: mytest.com" localhost:8000
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>

You successfully demonstrated health checks and circuit breakers! To continue this exercise, you may read more about Kong’s health checks and try setting up a passive health check. You could also read about load balancing algorithms and try setting up hash-based load balancing.
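For instance, configuring a passive check on the same upstream could be sketched as follows (the thresholds are arbitrary, and the field names follow the same healthchecks.* pattern used for the active check earlier):

$ curl -i -X PATCH http://localhost:8001/upstreams/mytest \
    --data 'healthchecks.passive.unhealthy.http_failures=3' \
    --data 'healthchecks.passive.unhealthy.tcp_failures=3' \
    --data 'healthchecks.passive.healthy.successes=3'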

Conclusion

Kong is a scalable, fast, and distributed API gateway layer. Kong’s load balancing and health check capabilities can make your services highly available, redundant, and fault-tolerant. These algorithms can help avoid imbalance among servers, improve system utilization, and increase system throughput.
To learn more about health checks in Kong, see our recorded webinar with a live demonstration of health checks in action. It also includes a presentation on these features and live Q&A from the audience.

The post Highly Available Microservices with Health Checks and Circuit Breakers appeared first on KongHQ.

Announcing Kong CE 0.14.0 – including Zipkin, Prometheus, and More!


Our teams and contributors have been hard at work over the last couple of months bringing many new features that help Kong integrate better with modern cloud environments.

We are thrilled to announce the coming release of Kong CE 0.14.0! This will be our largest release to date by number of new features, new integrations, and bug fixes, so we want to give you a preview of what is coming. Over the next few weeks, we’ll follow up with subsequent posts detailing the features in depth.

Test the 0.14 release candidate now and join the conversation on Kong Nation!

CE 0.14.0

The highlights of the upcoming 0.14.0 release are:

  • 🎆 The first version of the Plugin Development Kit, a new standardized and forward-compatible way of writing plugins.
  • 🎆 Four new bundled (and open source) plugins, including Zipkin tracing and Prometheus metrics, helping Kong integrate better with Cloud Native environments.
  • 🎆 Dynamically injected Nginx directives, which should reduce the need for custom Nginx templates.
  • Plugins are now executed on Nginx-produced errors (HTTP 4xx or 5xx), which allows logging plugins to report them.
  • Support for PUT requests in the Admin API’s modern endpoints (Services/Routes/Consumers/Certificates) (see the sketch after this list).
  • And a lot of bug fixes!
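As a quick, hedged illustration of the new PUT support (the Service name and host below are placeholders), a single idempotent request can create the entity if it does not exist or update it if it does:

$ curl -i -X PUT http://localhost:8001/services/example-service \
    --data 'host=httpbin.org'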

If you are already running Kong, read the 0.14 Upgrade Path for a complete list of breaking changes, and suggested upgrade path.

Plugin Development Kit

Making plugins easy to write and safe has been a long-term goal of Kong. The Plugin Development Kit (or “PDK”) is a new step towards this goal and simplifies the work of plugin authors. The PDK is a set of Lua functions and variables that provides a standardized and forward-compatible way of writing plugins.


In a nutshell, the PDK will offer a number of benefits:

  • Standardization. The PDK aims at providing all functionality plugins may need under a single umbrella, and all plugins using it are more likely to behave similarly (same parsing rules, same errors, etc…).
  • Usability. The high-level abstractions provided by the PDK should be (we hope!) simpler to use than the bare-bones ngx_lua API.
  • Isolation. Typical plugin operations such as logging or caching can be done in isolation from other plugins.
  • Forward-compatibility. Our goal is to maintain backwards-compatibility of the PDK, and as such, we made it a semver-versioned component, and plugins will in the future be able to lock the PDK version they depend upon.

To provide a concrete example, we also refactored Kong’s key-auth plugin to use the PDK. Moving forward, we are hoping to update all bundled plugins to use the PDK. And yes, we are definitely welcoming contributions helping us reach this goal!

You will be able to browse the complete list of functions and variables online, in the Plugin Development Kit Reference we are coming up with. We will also be publishing another blog post to deep-dive into the PDK in the following weeks, so stay tuned!

Injected Nginx directives

Do you find yourself maintaining an Nginx configuration template to tweak a few directives to your needs? While they do allow for powerful customization, custom Nginx templates can be challenging to write and maintain.

Enter dynamically injected Nginx directives. The simplest way to describe them is probably by example. Say we want to increase the value for our large_client_header_buffers setting. By specifying the following value in our kong.conf file:

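A minimal sketch of such an entry follows; the buffer count and size shown are placeholders chosen only for illustration:

# kong.conf (illustrative values)
nginx_proxy_large_client_header_buffers = 8 24k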

The large_client_header_buffers directive will then be injected into the proxy server block of the Nginx configuration. Specifying such directives via an environment variable is also supported:

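Following Kong’s usual convention of mapping kong.conf keys to KONG_-prefixed, upper-cased environment variables, the equivalent would look roughly like this:

$ export KONG_NGINX_PROXY_LARGE_CLIENT_HEADER_BUFFERS="8 24k"
$ kong start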

Another blog post will go into the details of injected Nginx directives, and our online Configuration Reference will also be updated with a complete user guide.

And more…

A lot more can be said about Kong CE 0.14.0, and more blog posts will be coming soon to deep-dive into some of its shiniest new features. In the meantime, test the 0.14 release candidate and jump into the conversation on Kong Nation – let us know what you think!

Happy Konging!

The post Announcing Kong CE 0.14.0 – including Zipkin, Prometheus, and More! appeared first on KongHQ.
