
“Intro to Kong” and “Mashape Enterprise” Webinars – Coming soon!


Register today for webinars in late August 2017. Each webinar is presented twice, at different times, to make it more likely you’ll find a time that works in your time zone.
Intro to Kong covers the basics – you’ll learn how to download, install, and configure Kong to start proxying API requests.
Mashape Enterprise details some of the advanced features we’ve released recently.
Both webinars will be filled with useful information, and Mashape’s talented Customer Success team will be available to answer questions during the presentations.

Learn the basics with “Intro to Kong,” or go advanced with “Mashape Enterprise”
Register today

The post “Intro to Kong” and “Mashape Enterprise” Webinars – Coming soon! appeared first on Mashape Blog.


Using Instaclustr and Cassandra with Kong


Instaclustr, an expert in delivering hosted and managed Apache Cassandra™ services, was an early partner of Mashape, whose developers needed an easy path to implementing Cassandra clusters to support their Kong deployments. For its scalability, reliability, and performance, the Mashape team selected Instaclustr and Cassandra as the initial database solution.
Apache Cassandra is an open source distributed database management system designed to handle large amounts of data across many commodity servers, providing high availability with no single point of failure. It also offers robust support for clusters spanning multiple data centers, with asynchronous masterless replication allowing low latency operations for…

The post Using Instaclustr and Cassandra with Kong appeared first on Mashape Blog.

Kong CE 0.11.0 released


The latest and greatest version of Kong features improvements across the board for better and easier integration with your infrastructure!
Note: Kong Community Edition and Mashape Enterprise are now distinct products – if you are a user of the Open Source version, this product is now called Kong CE, the Community Edition. If you are an enterprise customer, you should plan on using our new Mashape Enterprise product going forward.
Download Kong 0.11.0 now and give it a try!

Updates

The highlights of this release are:

    Support for regex URIs in routing, one of the oldest requested…

The post Kong CE 0.11.0 released appeared first on Mashape Blog.

Kong goes to Portland – NGINX.conf 2017


Watch out, Portland – the Kong team is attending NGINX.conf 2017, September 6th – 8th!
Kong is a proud silver sponsor of the event. If you want to learn more about Kong and meet the team, drop by our booth and say hello. We’ve got amazing t-shirts!
Booth Activities:
– How Kong manages your APIs and Microservices
– How to deploy Kong
– How to customize and optimize Kong specifically to your API needs
– How to add microservice analytics and an API developer portal
Kong
With nearly 12,000 stars on Github and 4 million plus downloads, Kong is the most…

The post Kong goes to Portland – NGINX.conf 2017 appeared first on Mashape Blog.

Secure and Manage AWS Lambda Endpoints with Kong


Kong is the most popular open-source API management layer. It’s extensible through a curated list of plugins, and you can write your own plugins as well. Kong scales horizontally, runs on all infrastructure types, and its core is built on top of OpenResty, one of the most performant web application servers.

1. Introduction to Kong and AWS Lambda

In this quickstart tutorial, we will walk you through the steps to set up Kong with AWS Lambda and build a simple “Hello World” app as a demonstration. Kong can help you secure and manage your AWS Lambda services. It…

The post Secure and Manage AWS Lambda Endpoints with Kong appeared first on Mashape Blog.

Kong Presents at MesosCon 2017


The Kong team is heading to Los Angeles for MesosCon, September 13 – 15th. Join us at the largest gathering of global Mesos community members.
Kong is a proud silver sponsor of the event. If you want to learn more about Kong and meet the team, drop by our booth and say hello. We’ve got amazing stickers and t-shirts!
Come learn about Kong, the most popular open-source API Management platform.
– How Kong manages your APIs and Microservices
– How to deploy Kong with Mesosphere DC/OS – learn more here
– How to customize and optimize Kong specifically to your…

The post Kong Presents at MesosCon 2017 appeared first on Mashape Blog.

Mesosphere DC/OS Package Now on KONG


Kong now offers a Mesosphere DC/OS package. This integration enables developers to deploy Kong on a Mesosphere DC/OS cluster to simplify operations and achieve higher resource utilization.
Apache Mesos is the open-source distributed systems kernel at the heart of DC/OS. By abstracting the entire datacenter into a single pool of computing resources, it simplifies running distributed systems at scale. Mesosphere can support different types of distributed workloads and container orchestration, like Mesos and Docker containers, as well as stateful big-data technologies such as Cassandra, one of Kong’s optimized DB options.
Kong is designed to sit in front of highly performant APIs and microservices…

The post Mesosphere DC/OS Package Now on KONG appeared first on Mashape Blog.

MesosCon Wrap Up


MesosCon 2017 was an amazing event! Thanks to everyone who dropped by our booth. It was great to share Kong with you.
During the event, Marco Palladino, Kong’s CTO, presented “API Gateway Pattern & Kong in a Microservices World.” In a container world, APIs are becoming increasingly important as a communication medium, both inside and outside the firewall. As more services are created, it gets harder to efficiently secure, manage, and extend them across a variety of environments. This applies to both single- and multi-DC setups.
API gateways can be used to centralize common functionality in one place by…

The post MesosCon Wrap Up appeared first on Mashape Blog.


Kong CE 0.11.1 & 0.10.4 released


Today, we are for the first time announcing not one, but two new releases: Kong CE 0.11.1, and Kong CE 0.10.4.

Important usability fixes were made to our 0.11 mainline, improving support for various areas such as DNS resolution, client request matching and routing, and the Admin API. Additionally, a few new features landed in some of our built-in plugins!

As part of our constant effort to support our users and the community, we decided to back-port some of the fixes included in both 0.11.0 and 0.11.1 into our older 0.10 family. We hope that 0.10.4 will benefit users who haven’t yet upgraded to 0.11.

Download and try Kong CE 0.11.1 or Kong CE 0.10.4 now.

0.11.1

The notable items of this release are:

  • Dropping of the lua_code_cache property (this property has been considered harmful since 0.11.0)
  • Numerous bug fixes to the core, CLI, and Admin API components
  • Plugins
    • aws-lambda: added support to forward the client request’s HTTP method, headers, URI, and body to the Lambda function
    • key-auth: new run_on_preflight configuration option to control authentication on preflight requests
    • jwt: new run_on_preflight configuration option to control authentication on preflight requests
    • Fixes for the bot-detection and ip-restriction plugins

Consult the Kong 0.11.1 Changelog for a complete list of fixes included in this release.

0.10.4

Consult the Kong 0.10.4 Changelog for a complete list of fixes back-ported in this release.

 

As always, we thank all of our users and contributors for the feedback and the hard work that they provide!

 

The post Kong CE 0.11.1 & 0.10.4 released appeared first on KongHQ.

Kong & Alpine on Docker


We are happy to announce the official Alpine-based Docker image, available starting from 0.11.x by pulling the “{version}-alpine” tag, for example:

$ docker pull kong:0.11-alpine

The Alpine-based image is an addition to the existing Kong images and does not replace – for the time being – the default CentOS-based image. It is available for both Kong Community Edition (CE) and Enterprise Edition (EE); if you are an EE customer, please use the appropriate Alpine tag to retrieve the image.

At Kong, performance and portability are first-class citizens. In the past few months we removed older dependencies that are no longer required (Dnsmasq in 0.9.x, and Serf in 0.11.x), and today, with Alpine support, we have reduced the final size of our Docker distribution as well.

Let’s look at the stats: on Docker Store, the CentOS-based image is reported at a compressed size of 122MB, while the new Alpine image comes in at only 30MB – 75% less.

And the uncompressed size reported by docker history also shows a reduction from 313MB to 84MB.

 

$ docker history kong:0.11.0
IMAGE               CREATED             CREATED BY                                      SIZE                COMMENT
604ef970973d        5 weeks ago         /bin/sh -c #(nop)  CMD ["/usr/local/openre...   0B
<missing>           5 weeks ago         /bin/sh -c #(nop)  STOPSIGNAL [SIGTERM]         0B
<missing>           5 weeks ago         /bin/sh -c #(nop)  EXPOSE 8000/tcp 8001/tc...   0B
<missing>           5 weeks ago         /bin/sh -c #(nop)  ENTRYPOINT ["/docker-en...   0B
<missing>           5 weeks ago         /bin/sh -c #(nop) COPY file:0ce55305f95ddc...   307B
<missing>           5 weeks ago         /bin/sh -c yum install -y wget https://bin...   116MB
<missing>           5 weeks ago         /bin/sh -c #(nop)  ENV KONG_VERSION=0.11.0      0B
<missing>           5 weeks ago         /bin/sh -c #(nop)  MAINTAINER Marco Pallad...   0B
<missing>           5 weeks ago         /bin/sh -c #(nop)  CMD ["/bin/bash"]            0B
<missing>           5 weeks ago         /bin/sh -c #(nop)  LABEL name=CentOS Base ...   0B
<missing>           5 weeks ago         /bin/sh -c #(nop) ADD file:1ed4d1a29d09a63...   197MB

$ docker history kong:0.11-alpine
IMAGE               CREATED             CREATED BY                                      SIZE                COMMENT
d53207c4f9f5        6 weeks ago         /bin/sh -c #(nop)  CMD ["/usr/local/openre...   0B
<missing>           6 weeks ago         /bin/sh -c #(nop)  STOPSIGNAL [SIGTERM]         0B
<missing>           6 weeks ago         /bin/sh -c #(nop)  EXPOSE 8000/tcp 8001/tc...   0B
<missing>           6 weeks ago         /bin/sh -c #(nop)  ENTRYPOINT ["/docker-en...   0B
<missing>           6 weeks ago         /bin/sh -c #(nop) COPY file:0ce55305f95ddc...   307B
<missing>           6 weeks ago         /bin/sh -c apk update  && apk add --virtua...   79.3MB
<missing>           6 weeks ago         /bin/sh -c #(nop)  ENV KONG_SHA256=34cfd44...   0B
<missing>           6 weeks ago         /bin/sh -c #(nop)  ENV KONG_VERSION=0.11.0      0B
<missing>           6 weeks ago         /bin/sh -c #(nop)  MAINTAINER Marco Pallad...   0B
<missing>           6 weeks ago         /bin/sh -c #(nop)  CMD ["/bin/sh"]              0B
<missing>           6 weeks ago         /bin/sh -c #(nop) ADD file:4583e12bf5caec4...   3.97MB

 

The Alpine-based image is a new addition to the Kong ecosystem, and we recommend using it carefully in staging environments and reporting back to the Kong team any problems or feedback you may encounter. The long-term plan is to gradually make the Alpine-based image the default Docker image for Kong.

Please also let us know if you would like to use Kong with other base images besides CentOS or Alpine.

The post Kong & Alpine on Docker appeared first on KongHQ.

See Kong at AWS re:Invent 2017


Meet Kong at AWS re:Invent
Some things are already great on their own, but become even better in combination:

Pizza and beer.
Electricity and light bulbs.
Peanut butter and chocolate.
Kong and AWS.

Like you, the Kong team is headed to Las Vegas for AWS re:Invent from November 27th-December 1st. We’ll be in Booth 1232 at the Venetian Expo Hall—please stop by! We’re looking forward to learning about your API use cases and to sharing details about Kong, the world’s most popular open source API gateway. Together we’ll find scalable and resilient solutions that secure, manage and orchestrate your microservice APIs in the cloud.
Kong T-shirts and Stickers

With so much going on at re:Invent, we’ve made it easy (and even fun) for you to learn more about Kong:

Plan to Visit Booth 1232

Kong microservice API gateway experts will be giving demos, answering questions, and dishing out the coolest stickers and T-shirts this side of the Mississippi. What’s more, you’ll learn how to deploy Kong in the AWS Cloud in fewer than 10 minutes.

Request a Meeting at re:Invent

Kong executives and engineers are available to discuss your business and technical requirements—face-to-face. Request a meeting to explore how Kong addresses your enterprise deployment requirements (limited time slots are available—sign up early to secure your meeting).

AWS re:Invent 2017 is a great opportunity to learn how to make your cloud applications more scalable and your organization more agile with Kong and AWS. We look forward to meeting you!

The post See Kong at AWS re:Invent 2017 appeared first on KongHQ.

Kong Enterprise 0.29 is Released With Vitals Monitoring and Analytics Features


In October we introduced Kong Enterprise Edition, the microservice API platform for large organizations. Now, just about a month later, we’re delivering new features specifically for enterprise customers. Release highlights include:

  • New Vitals Analytics Interface
  • New Forward Proxy Plugin
  • Improved Proxy Caching Plugin

Enterprise customers are encouraged to visit the Kong Enterprise Customer Success portal (subscription required) for the full list of features, enhancements, fixes and system requirements.

New: Kong Vitals Monitoring and Analytics

Kong Enterprise 0.29 introduces Vitals, Kong Enterprise’s new monitoring and analytics interface. Vitals lets you keep track of the activity volume of your API application and Kong instances, all from a web browser (and soon via the Kong Admin API too).

Importantly, and like all Kong features, Vitals is designed to have the smallest possible impact on API performance. As you may know, in most scenarios, Kong adds <1ms latency to API requests.

Vitals has a visual interface in the Kong Admin GUI, so let’s give it a look:

Kong Vitals Monitoring and Analytics

This single screen provides a powerful view of API gateway activity over time. You can choose a desired time range and reporting interval, and Vitals displays current and historical results. Let’s drill into the three graphs:

  • L2 Cache Hit/Miss. This graph shows accesses to Kong’s datastore cache. The blue line shows hits; the red line shows cache misses, which result in database fetches. There was an interesting moment at 12:25:03 pm when both hits and misses were relatively high; here, the absolute number of cache hits and misses calls attention to the moment.
  • L2 Cache Hit Percentage. The green line shows that Kong is utilizing the cache most of the time. The period we focused on in the Hit/Miss chart shows a 69.3% hit percentage. About 5 seconds later, the cache hit percentage drops to 0%, presenting a different but equally interesting moment: the total number of requests is low, but nearly all of them require accessing the data store. The Cache Hit Percentage chart surfaces this anomaly for investigation. Shortly thereafter, the cache hit percentage climbs back to near 100%.
  • Proxy Latency Min/Max. The yellow line shows the maximum latency introduced by the gateway; the purple line shows the minimum. Kong is designed to utilize the cache whenever possible, and this chart provides insight into how expensive cache misses are to overall performance. As expected, latency is zero except when new actions require loading information from the data store into the cache.

Vitals data is available for visual analysis in a web browser today, and will be writable to log files and third-party monitoring platforms soon.

New: Forward Proxy Plugin

The Forward Proxy Plugin allows Kong Enterprise to connect to intermediary transparent HTTP proxies (instead of directly to a specific upstream_url in the API definition) when forwarding requests upstream. This is useful in environments where Kong sits in an organization’s internal network, the upstream API is available via the public internet, and the organization proxies all outbound traffic through a forward proxy server.

Improved: Proxy Caching Plugin

The Proxy Caching Plugin for Kong Enterprise makes it fast and easy to configure caching of responses and serving of those cached responses to matching requests. This release adds support for cache-control directives, easier-to-use Admin API endpoints to list and clear proxy cache data and more control over cache purging in multi-node Kong clusters.

And Lots More

Periodic Kong Enterprise releases consolidate new features, improvements, and fixes across all product areas to make API operations more feature-rich, reliable and performant. Kong Enterprise subscribers are strongly encouraged to review all changes and upgrade to this valuable new release.

The post Kong Enterprise 0.29 is Released With Vitals Monitoring and Analytics Features appeared first on KongHQ.

Kong CE 0.11.2 released


Over the course of the last month, our team members and open source contributors have come up with a few improvements and new features, released today as part of Kong CE 0.11.2! Download and try Kong CE 0.11.2 now.

The highlights of this release are:

  • Admin API
    • New endpoints to paginate through the Kong-managed credentials (API keys, HTTP Basic credentials, JWTs, etc.)
    • Additional new endpoints to retrieve the Consumer associated with such credentials
    • Minor bug fixes
  • Core
    • Improved performance and memory footprint when parsing multipart request bodies
    • Minor bug fixes
  • Plugins
    • Performance improvements for both rate limiting plugins
    • Minor bug fixes for the hmac-auth plugin

Browse the Kong CE 0.11.2 Changelog for a complete list of changes included in this release.

We would like to extend our warmest thanks to our contributors for their hard work on this release, and we urge you to stay tuned for more!

The post Kong CE 0.11.2 released appeared first on KongHQ.

Introducing Kong Nation: Our Brand New Home For The Kong Community


Kong Nation is now available as the premier place to ask questions about the Kong microservice API gateway, plugins, use cases, application architectures, and more. Access to Kong Nation is free to everyone.

Join Kong Nation. It's Free

 

Kong Nation was born from community feedback. You’ve been telling us that the community-wide conversation was increasingly fragmented across several discussion platforms. Searching for answers across platforms was impossible, and figuring out which of the several platforms to participate in was confusing. That’s far short of the experience we want to provide. Rather than consolidate on one of the incumbent platforms, we aimed higher. As a result, Kong Nation provides an expansive feature set on day one, including:

  • threaded discussions
  • natural language search
  • a fresh user experience
  • mobile support
  • user-controlled email notifications
  • much more … try it yourself

Everyone is welcome to participate in Kong Nation. All we ask is that you treat the forum and your fellow community members with respect and civility; check out the community guidelines if you have questions.

A Community-Powered Discussion Forum

All of the information in Kong Nation is created by and for the community. Yes, several Kong Inc. employees are posting answers to your questions, sharing useful information, and (rarely) moderating contributions. But make no mistake: this is a community forum, not a Kong Inc. forum. Your voice is essential to its success. There is a lot of fantastic content today, and over time Kong Nation will have a huge and helpful collection of information on architecting, deploying, operating and scaling microservice APIs.

Quickly Research Kong Knowledge

Kong Nation is organized into 6 categories comprising topics and posts. When posting new topics, please give some thought to the appropriate category for your question. Here are a few examples:

  • Kong installation and database configuration questions belong in the Installation/Setup category.
  • Questions regarding the troubleshooting or use of a plugin are well suited to the Questions category.
  • Interest in new features, databases or cloud technologies belongs in the Feature Suggestions category.
  • Bug reports should be posted to Github Issues, not to Kong Nation.
  • And developers who want to share their open source Kong plugins can post details in the Announcements category.

Kong Nation has a very capable internal search engine that returns relevance-ranked results. Additionally, all Kong Nation content is indexed by Google and others, so you can find results in major search engines by typing site:discuss.konghq.com your-topic.

Join Kong Nation: Signing up is Easy

Everything about Kong Nation is better when you sign up. You’ll gain the ability to post questions, participate in discussions, and share your knowledge with the rest of the community. Registered users also have the ability to receive customized notifications, from a daily digest to event-driven alerts around specific topics. Best of all, signing up is super easy:

  1. Click on the “Sign Up” button in the upper right of your browser.
  2. Create and authenticate your account using your Google or Github credentials, or via email. The choice is yours.

Join Kong Nation Using Google or Github Profiles

See you at Kong Nation!

The post Introducing Kong Nation: Our Brand New Home For The Kong Community appeared first on KongHQ.

How to Design a Scalable Rate Limiting Algorithm


Rate limiting protects your APIs from inadvertent or malicious overuse by limiting how often each user can call the API. Without rate limiting, each user may make requests as often as they like, which can lead to “spikes” of requests that starve other consumers. Once rate limiting is enabled, users are limited to a fixed number of requests per second.


In the example chart, you can see how rate limiting blocks requests over time. The API was initially receiving 4 requests per minute, shown in green. When rate limiting was enabled at 12:02, additional requests, shown in red, were denied.

Rate limiting is very important for public APIs where you want to maintain a good quality of service for every consumer, even when some users take more than their fair share. Computationally-intensive endpoints are particularly in need of rate limiting – especially when served by auto-scaling, or by pay-by-the-computation services like AWS Lambda and OpenWhisk. You also may want to rate limit APIs that serve sensitive data, because this could limit the data exposed if an attacker gains access in some unforeseen event.

There are many different ways to implement rate limiting, and we will explore the pros and cons of several rate limiting algorithms. We will also explore the issues that arise when scaling across a cluster. Lastly, we’ll show you an example of how to quickly set up rate limiting using Kong, the most popular open-source API gateway.

 

Rate Limiting Algorithms

There are various algorithms for rate limiting, each with their own benefits and drawbacks. Let’s review each of them so you can pick the best one for your needs.

 

Leaky Bucket

Leaky bucket (closely related to token bucket) is an algorithm that provides a simple, intuitive approach to rate limiting via a queue, which you can think of as a bucket holding the requests. When a request arrives, it is appended to the end of the queue. At a regular interval, the first item in the queue is processed. This is also known as a first-in, first-out (FIFO) queue. If the queue is full, additional requests are discarded (or leaked).


The advantage of this algorithm is that it smooths out bursts of requests and processes them at an approximately average rate. It’s also easy to implement on a single server or load balancer, and is memory efficient for each user given the limited queue size.

However, a burst of traffic can fill up the queue with old requests and starve more recent requests from being processed. It also provides no guarantee that requests get processed in a fixed amount of time. Additionally, if you load balance servers for fault tolerance or increased throughput, you must use a policy to coordinate and enforce the limit between them. We will come back to challenges of distributed environments later.
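To make the mechanics concrete, here is a minimal single-process sketch in Python; the class and parameter names are ours, purely for illustration:

import time
from collections import deque

class LeakyBucket:
    def __init__(self, capacity, leak_rate_per_sec):
        self.capacity = capacity                # maximum queued requests
        self.leak_interval = 1.0 / leak_rate_per_sec
        self.queue = deque()
        self.last_leak = time.monotonic()

    def _leak(self):
        # Process (dequeue) requests at the fixed configured rate.
        now = time.monotonic()
        n = int((now - self.last_leak) / self.leak_interval)
        if n > 0:
            for _ in range(min(n, len(self.queue))):
                self.queue.popleft()            # request gets processed
            self.last_leak += n * self.leak_interval

    def allow(self, request_id):
        self._leak()
        if len(self.queue) >= self.capacity:
            return False                        # bucket full: discard (leak)
        self.queue.append(request_id)           # FIFO: append to the end
        return True

A real deployment would drain the queue on a timer rather than lazily on arrival, but the admission logic is the same.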

 

Fixed Window

In a fixed window algorithm, a window size of n seconds (typically using human-friendly values, such as 60 or 3600 seconds) is used to track the rate. Each incoming request increments the counter for the current window. If the counter exceeds a threshold, the request is discarded. The windows are typically defined by the floor of the current timestamp, so 12:00:03, with a 60-second window length, falls in the 12:00:00 window.


The advantage of this algorithm is that it ensures more recent requests get processed without being starved by old requests. However, a single burst of traffic near the boundary of a window can result in twice the rate of requests being processed, because it allows requests in both the current and next windows within a short time. Additionally, if many consumers wait for a reset window – for example, at the top of the hour – they may stampede your API at the same time.
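A minimal in-memory sketch of a fixed window counter (illustrative names; a production version would also expire old windows):

import time
from collections import defaultdict

WINDOW_SECONDS = 60
LIMIT = 5
counters = defaultdict(int)        # (consumer, window_start) -> request count

def allow(consumer):
    now = int(time.time())
    window_start = now - (now % WINDOW_SECONDS)   # floor: 12:00:03 -> 12:00:00
    key = (consumer, window_start)
    if counters[key] >= LIMIT:
        return False               # counter exceeded the threshold: discard
    counters[key] += 1
    return True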

 

Sliding Log

Sliding log rate limiting involves tracking a time-stamped log of each consumer’s requests. These logs are usually stored in a hash set or table sorted by time. Logs with timestamps beyond a threshold are discarded. When a new request comes in, we calculate the sum of the logs to determine the request rate. If the request would exceed the threshold rate, it is held.


The advantage of this algorithm is that it does not suffer from the boundary conditions of fixed windows: the rate limit is enforced precisely. And because the sliding log is tracked per consumer, you avoid the stampede effect that challenges fixed windows. However, it can be very expensive to store an unlimited number of logs for every request. It’s also expensive to compute, because each request requires calculating a summation over the consumer’s prior requests, potentially across a cluster of servers. As a result, it does not scale well to handle large bursts of traffic or denial-of-service attacks.
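Here is the same idea as a short Python sketch (illustrative only; note that the per-consumer log grows with traffic, which is exactly the cost described above):

import time
from collections import defaultdict, deque

WINDOW_SECONDS = 60
LIMIT = 5
logs = defaultdict(deque)          # consumer -> timestamps of recent requests

def allow(consumer):
    now = time.time()
    log = logs[consumer]
    while log and log[0] <= now - WINDOW_SECONDS:
        log.popleft()              # discard timestamps beyond the threshold
    if len(log) >= LIMIT:
        return False               # request would exceed the threshold rate
    log.append(now)
    return True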

 

Sliding Window

This is a hybrid approach that combines the low processing cost of the fixed window algorithm with the improved boundary conditions of the sliding log. Like the fixed window algorithm, we track a counter for each fixed window. Next, we account for a weighted value of the previous window’s request rate, based on the current timestamp, to smooth out bursts of traffic. For example, if the current window is 25% through, we weight the previous window’s count by 75%. The relatively small number of data points needed per key allows us to scale and distribute across large clusters.
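The weighting is easiest to see in code. A minimal sketch, with the same illustrative caveats as the earlier examples:

import time
from collections import defaultdict

WINDOW_SECONDS = 60
LIMIT = 5
counters = defaultdict(int)        # (consumer, window_start) -> request count

def allow(consumer):
    now = time.time()
    curr_start = int(now) - (int(now) % WINDOW_SECONDS)
    prev_count = counters[(consumer, curr_start - WINDOW_SECONDS)]
    curr_count = counters[(consumer, curr_start)]
    # 25% into the current window -> weight the previous window by 75%.
    elapsed_ratio = (now - curr_start) / WINDOW_SECONDS
    weighted = prev_count * (1.0 - elapsed_ratio) + curr_count
    if weighted >= LIMIT:
        return False
    counters[(consumer, curr_start)] += 1
    return True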


We recommend the sliding window approach because it gives the flexibility to scale rate limiting with good performance. The rate windows are an intuitive way to present rate limit data to API consumers. It also avoids the starvation problem of the leaky bucket and the bursting problems of fixed window implementations.

 

Rate Limiting in Distributed Systems

 

Synchronization Policies

If you want to enforce a global rate limit when you are using a cluster of multiple nodes, you must set up a policy to enforce it. If each node were to track its own rate limit, then a consumer could exceed a global rate limit when requests are sent to different nodes. In fact, the greater the number of nodes, the more likely the user will be able to exceed the global limit.

The simplest way to enforce the limit is to set up sticky sessions in your load balancer so that each consumer gets sent to exactly one node. The disadvantages include a lack of fault tolerance and scaling problems when nodes get overloaded.

A better solution that allows more flexible load-balancing rules is to use a centralized data store such as Redis or Cassandra. This will store the counts for each window and consumer. The two main problems with this approach are increased latency making requests to the data store, and race conditions, which we will discuss next.

 

Race Conditions

One of the largest problems with a centralized data store is the potential for race conditions in high-concurrency request patterns. This happens when you use a naïve “get-then-set” approach, wherein you retrieve the current rate limit counter, increment it, and then push it back to the datastore. The problem with this model is that, in the time it takes to perform a full cycle of read-increment-store, additional requests can come through, each attempting to store the incremented counter with an invalid (lower) value. This allows a consumer sending a very high rate of requests to bypass rate limiting controls.


One way to avoid this problem is to put a “lock” around the key in question, preventing any other processes from accessing or writing to the counter. This would quickly become a major performance bottleneck, and does not scale well, particularly when using remote servers like Redis as the backing datastore.

A better approach is to use a “set-then-get” mindset, relying on atomic operators that implement locks in a very performant fashion, allowing you to quickly increment and check counter values without letting the atomic operations get in the way.
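Redis’s INCR is one such atomic operator. Here is a sketch of a fixed window counter built on it with the redis-py client; the key naming and expiry policy are our assumptions, not a prescribed scheme:

import time
import redis                       # assumes the redis-py client is installed

r = redis.Redis()
WINDOW_SECONDS = 60
LIMIT = 5

def allow(consumer):
    now = int(time.time())
    window_start = now - (now % WINDOW_SECONDS)
    key = "rl:{}:{}".format(consumer, window_start)
    count = r.incr(key)            # atomic: no read-increment-store race
    if count == 1:
        r.expire(key, WINDOW_SECONDS * 2)   # let stale windows age out
    return count <= LIMIT

Because the increment happens server-side in a single operation, concurrent requests can never observe and write back a stale value.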

 

Optimizing for Performance

The other disadvantage of using a centralized data store is increased latency when checking on the rate limit counters. Unfortunately, even checking a fast data store like Redis would result in milliseconds of additional latency for every request.

In order to make these rate limit determinations with minimal latency, it’s necessary to make checks locally in memory. This can be done by relaxing the rate check conditions and using an eventually consistent model. For example, each node can run a data sync cycle that synchronizes with the centralized data store. Each node periodically pushes a counter increment for each consumer and window it has seen to the datastore, which atomically updates the values. The node can then retrieve the updated values to refresh its in-memory copy. This cycle of converge → diverge → reconverge among the nodes in the cluster is eventually consistent.


The periodic rate at which nodes converge should be configurable. Shorter sync intervals will result in less divergence of data points when traffic is spread across multiple nodes in the cluster (e.g., when sitting behind a round robin balancer), whereas longer sync intervals put less read/write pressure on the datastore, and less overhead on each node to fetch new synced values.
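A sketch of such a sync cycle, with the hot path entirely in memory and a periodic push/pull against Redis (all names and the interval are illustrative, not a reference design):

import threading
from collections import defaultdict
import redis                       # assumes the redis-py client is installed

r = redis.Redis()
SYNC_INTERVAL = 1.0                # seconds; trades divergence for datastore load
synced = defaultdict(int)          # last known cluster-wide totals
pending = defaultdict(int)         # local increments not yet pushed
lock = threading.Lock()

def record(key):
    with lock:                     # hot path: no datastore round trip
        pending[key] += 1

def current_count(key):
    with lock:
        return synced[key] + pending[key]

def sync_cycle():
    with lock:
        deltas = dict(pending)
        pending.clear()
    for key, delta in deltas.items():
        r.incrby(key, delta)       # push this node's increments atomically
        synced[key] = int(r.get(key))   # pull back the cluster-wide total
    threading.Timer(SYNC_INTERVAL, sync_cycle).start()

# Call sync_cycle() once at startup to begin the background loop.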

 

Quickly Set Up Rate Limiting with Kong

 

Kong is an open source API gateway that makes it very easy to build scalable services with rate limiting. It’s used by over 300,000 active instances globally. It scales perfectly from single Kong nodes to massive, globe-spanning Kong clusters.

Kong sits in front of your APIs and is the main entry-point to your upstream APIs. While processing the request and the response, Kong will execute any plugin that you have decided to add to the API.


Kong’s rate limiting plugin is highly configurable. It offers the flexibility to define multiple rate limit windows and rates for each API and consumer. It includes support for local memory, Redis, Postgres, and Cassandra backing datastores. It also offers a variety of data synchronization options, including synchronous and eventually consistent models.

You can quickly install Kong on one of your dev machines to test it out. My favorite way to get started is to use the AWS CloudFormation template, since I get a pre-configured dev machine in just a few clicks. Just choose one of the HVM options, and set your instance size to t2.micro, as these are affordable for testing. Then ssh into a command line on your new instance for the next step.

 

Adding an API on Kong

The next step is adding an API on Kong using Kong’s admin API. We will use httpbin as our example, which is a free testing service for APIs. Its get URL will mirror back my request data as JSON. We also assume Kong is running on the local system at the default ports.

curl -i -X POST \
--url http://localhost:8001/apis/ \
--data 'name=test' \
--data 'uris=/test' \
--data 'upstream_url=http://httpbin.org/get'

Now Kong is aware that every request sent to “/test” should be proxied to httpbin. We can make the following request to Kong on its proxy port to test it:

curl http://localhost:8000/test
{
"args": {},
"headers": {
"Accept": "*/*",
"Connection": "close",
"Host": "httpbin.org",
"User-Agent": "curl/7.51.0",
"X-Forwarded-Host": "localhost"
},
"origin": "localhost, 52.89.171.202",
"url": "http://localhost/get"
}

It’s alive! The request has been received by Kong and proxied to httpbin, which has mirrored back the headers for my request and my origin IP address.

Adding Basic Rate-Limiting

Let’s go ahead and protect it from an excessive number of requests by adding the rate-limiting functionality using the community edition Rate-Limiting plugin, with a limit of 5 requests per minute from every consumer:

curl -i -X POST http://localhost:8001/apis/test/plugins/ \
-d "name=rate-limiting" \
-d "config.minute=5"

If we now make more than 5 requests, Kong will respond with the following error message:

curl http://localhost:8000/test

{
"message":"API rate limit exceeded"
}

Looking good! We have added an API on Kong and enabled rate limiting with just two HTTP requests to Kong’s admin API.

It defaults to rate limiting by IP address using fixed windows, and synchronizes across all nodes in your cluster using your default datastore. For other options, including rate limiting per consumer or using another datastore like Redis, please see the documentation.
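As one hedged example, reconfiguring the plugin for per-consumer limits backed by Redis might look like the following; config.limit_by, config.policy, and config.redis_host are options of the CE plugin, but verify them against the documentation for your Kong version:

import requests                    # mirrors the curl calls above

resp = requests.post(
    "http://localhost:8001/apis/test/plugins/",
    data={
        "name": "rate-limiting",
        "config.minute": "5",
        "config.limit_by": "consumer",   # instead of the default IP address
        "config.policy": "redis",        # counters stored in Redis
        "config.redis_host": "127.0.0.1",
    },
)
print(resp.status_code, resp.json())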

 

Better Performance with Kong Enterprise Edition

 

The Enterprise edition of rate limiting adds support for the sliding window algorithm for better control and performance. The sliding window prevents your API from being overloaded near window boundaries, as explained in the sections above. For low latency, it uses an in-memory table of the counters and can synchronize across the cluster using asynchronous or synchronous updates. This gives you the low latency of local checks while remaining scalable across your entire cluster.

The first step is to install the Enterprise edition of Kong. Then, you can configure the rate limit, the window size in seconds, and how often to sync the counter values. It’s really easy to use, and you can get this powerful control with a simple API call:

curl -i -X POST http://localhost:8001/apis/test/plugins \
-d "name=rate-limiting" \
-d "config.limit=5" \
-d "config.window_size=60" \
-d "config.sync_rate=10"

The enterprise edition also adds support for Redis Sentinel, which makes Redis highly available and more fault tolerant. You can read more in the Enterprise rate limiting plugin documentation.

Other features include an admin GUI, more security features like role-based access control, analytics, and professional support. If you’re interested in learning more about the Enterprise edition, just contact Kong’s sales team to request a demo.

 

The post How to Design a Scalable Rate Limiting Algorithm appeared first on KongHQ.


Kong CE 0.12.0rc1 now available for testing


Making the Kong CE 0.12.0rc1 release candidate available is a great way to finish an amazing year full of milestones:

  • 3 major releases: 0.10, 0.11 and now 0.12
  • Kong Nation, our new community forum
  • Several new developers added to the Kong team
  • The largest number of community contributions in a single year

We worked hard to put something under the tree for you in time for the holidays 🎄🎁, and starting now, you can download Kong CE 0.12.0rc1.

It contains a preview of several new exciting features and major improvements we are eager for you to get your hands on:

  • Support for passive health checks: Kong can now short-circuit some of your remote instances from its load balancer when it encounters too many TCP or HTTP errors.
  • Support for active health checks: You can also configure periodic HTTP test requests to actively monitor the state of your remote instances and pre-emptively short-circuit them.
  • Support for hash based load balancing: Kong now offers consistent hashing/sticky sessions load balancing capabilities.
  • Logging plugins now log requests that were short-circuited by Kong (such as HTTP 401 responses from auth plugins, or HTTP 429 responses from rate-limiting plugins).

You can view the full Changelog for a detailed list of changes included in this release.

You can browse the documentation repository to view the (currently in progress) 0.12.0 documentation. We are hard at work bringing it up to date for those new features!

Consult the 0.12 Upgrade Path for a list of breaking changes and suggested migration steps.

As a release candidate, we discourage the use of 0.12.0rc1 in production environments, but we strongly encourage you to give it a test run, preview the new features, investigate their future integration in your environment, and give us any feedback you may have! Kong Nation is a great way to provide general feedback to us, and the GitHub issues are still the de facto place for bug reports.

Happy holidays to the Kong community!

The post Kong CE 0.12.0rc1 now available for testing appeared first on KongHQ.

Kong CE 0.12.0 released


After a few weeks of testing of our release candidates, we are very proud today to announce the latest Community Edition release: Kong CE 0.12.0!

Download Kong 0.12.0 now and give it a try!

The highlights of this new major release are:

  • Support for circuit-breaking. Smarter tracking of unhealthy upstream instances.
  • Support for passive health checks. On-the-fly circuit-breaking for unhealthy remote instances upon TCP or HTTP errors.
  • Support for active health checks. Preemptively short-circuit unhealthy remote instances with periodic HTTP test requests.
  • Support for hash-based load balancing. Kong now offers consistent hashing/sticky sessions load balancing.
  • Logging plugins now log requests that were short-circuited by Kong (such as HTTP 401 responses from auth plugins, or HTTP 429 responses from rate-limiting plugins).

You can view the full Kong CE 0.12.0 Changelog for a detailed list of changes included in this release.

Consider spending some time browsing the Kong 0.12 documentation, and give some particular attention to the new health checks reference.
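As a hedged sketch of what the new health checks look like in practice, the following configures an upstream with an active probe via the Admin API, using Python’s requests library (field names follow the 0.12 health checks reference; the upstream name, path, and thresholds here are illustrative):

import requests                    # talks to the Admin API on its default port

# Create an upstream whose targets are probed on /status every 5 seconds;
# two consecutive HTTP failures mark a target as unhealthy.
requests.post("http://localhost:8001/upstreams", data={
    "name": "example-upstream",
    "healthchecks.active.http_path": "/status",
    "healthchecks.active.healthy.interval": "5",
    "healthchecks.active.healthy.successes": "2",
    "healthchecks.active.unhealthy.interval": "5",
    "healthchecks.active.unhealthy.http_failures": "2",
})

# Register a target for the load balancer and the health checker to watch.
requests.post("http://localhost:8001/upstreams/example-upstream/targets",
              data={"target": "httpbin.org:80", "weight": "100"})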

Consult the 0.12 Upgrade Path for a list of breaking changes and suggested migration steps.

 

The post Kong CE 0.12.0 released appeared first on KongHQ.

Kong EE 0.30 Enhances Microservice Visibility and Resiliency


Kong Inc. is thrilled to announce that Kong Enterprise Edition (EE) Version 0.30 is now shipping. This new microservice API gateway release is aimed at organizations that require sub-millisecond API performance, security, and availability at scale for their multi-node Kong clusters. New features addressing microservice visibility and resiliency include: health checks and circuit breakers, expanded Vitals monitoring, a new canary release plugin, and an improved proxy caching plugin.

Health Checks & Circuit Breakers

Kong now supports two kinds of health checks, which can be used separately or in conjunction:

  • Active Health Checks are periodic requests or “probes” of targets, with analysis of responses for health (typically an HTTP success status code) or unhealthiness (TCP errors, timeouts, or HTTP failure status codes above a configured threshold).
  • Circuit Breakers (aka Passive Health Checks) do not generate additional traffic, and instead observe proxied traffic to take action once a target returns unhealthy responses above a certain threshold.

Health checks and circuit breakers are also included in the recently released Kong Community Edition 0.12.

Expanded Vitals Monitoring

Vitals lets you keep track of the activity volume and performance of your API applications and Kong instances, via Kong’s Admin GUI in a web browser and via Kong’s Admin API. Kong EE 0.30 adds monitoring of a broader set of metrics, including request counts, upstream latency and datastore cache performance.

Microservice Visibility and Resiliency with Kong Vitals Monitoring

Vitals metrics are available as a cluster-wide total, or per Kong node. Administrators can quickly look to Vitals for a snapshot of current Kong and API activity, or use the visual data to assist in troubleshooting. Vitals’ API allows easy integration with all monitoring and alerting platforms.

Canary Release Plugin

Use the new Canary Release plugin for controlled rollouts of new microservice versions to a small subset of users. Administrators can assess the performance and resiliency of the new software in a production environment by sending only a small number of users to the new version. This plugin also enables rolling back to your original upstream service, or shifting all traffic to the new software.

Proxy Caching Plugin Improvements

The Proxy caching plugin now supports Redis (stand-alone and Sentinel) caching of proxy cache entities. This advanced architecture allows cached responses to be shared across Kong nodes. The result is improved resiliency as cached results can fulfill requests received by any Kong node, even when upstream resources become temporarily unavailable.

Additional Features in Kong EE 0.30

This release also adds a number of customer-requested features, enhancements and performance improvements, including:

  • Improved and expanded Enterprise Edition Rate Limiting plugin. Kong now supports “fixed window” usage quotas and hiding of rate limiting-related headers.
  • Hash-based load balancing in Kong EE. Kong now offers consistent hashing/sticky sessions load balancing capabilities based on client IPs, request headers, or consumers.
  • Expanded logging. Kong’s various logging plugins now track unauthorized and rate-limited requests.

With enterprise deployments characterized by a large and variable number of interdependent nodes in a production microservices cluster, it’s a near certainty that some component will eventually fail. Having proper redundancy and automation, and utilizing the features in Kong EE, reduces the risk of service interruptions.

Kong Enterprise subscribers are strongly encouraged to review all changes (subscription required) and upgrade to this valuable new release. Contact your Customer Success Engineer if you have specific questions.

Are you ready to explore using Kong EE to secure and scale your microservice APIs? Request a demo from Kong API experts.

The post Kong EE 0.30 Enhances Microservice Visibility and Resiliency appeared first on KongHQ.

Kong CE 0.12.2 & 0.13.0rc1 released


We have two Kong Community Edition releases to announce today: 0.12.2, and 0.13.0rc1.

The first is the latest stable version of our 0.12 family, and the second is a release candidate bringing support for new core entities that will ease the task of configuring Kong.

Download and try Kong CE 0.12.2 or Kong CE 0.13.0rc1 now.

CE 0.12.2

Here are the notable items of this release, which is now considered the latest stable:

  • New endpoint /upstreams/:upstream_id/health to retrieve the health information of an Upstream as seen by the health checker.
  • New --yes flag for the kong migrations reset command to run in non-interactive mode.
  • Fix a few issues related to the load balancer and health checks initialization when using hostnames in Targets.
  • Fix some more issues in the Admin API and migration components.

See the 0.12.2 changelog for a complete list of changes included in this release.

Users running on the 0.12 family of releases are encouraged to upgrade to this new version. As usual, minor iterations do not introduce any migration or breaking changes.

CE 0.13.0rc1

This release candidate is made available for testing today. Its most notable change is the addition of two new core entities: Routes and Services. This is an important evolutionary change that helps reduce configuration overhead.

While 0.13.0rc1 contains all of the changes listed in 0.12.2, it also comes with the following major additions:

  • 🎆 The introduction of Routes and Services as new core entities. No more need to duplicate your API entities to apply different matching rules and plugins on your endpoints!
  • 🎆 A new syntax for the proxy_listen and admin_listen directives allows you to disable the Proxy or the Admin API at once, meaning the separation of Kong control-planes and data-planes has never been that easy!
  • 🎆 The new endpoints such as /routes and /services are built with much-improved support for form-urlencoded payloads, and produce much friendlier responses. Expect existing endpoints to move towards this new implementation in the future, greatly improving the Admin API usability.
  • Fix several issues with our DNS resolver.
  • Fix several issues related to application/multipart MIME type parsing.

See the 0.13.0rc1 changelog for a complete list of changes included in this release candidate.
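As a quick, hedged illustration of the new entities (endpoint shapes as described in the release notes; the names are ours), a single Service with two Routes might be created like this:

import requests                    # talks to the Admin API on its default port

# One Service holds the upstream definition once...
requests.post("http://localhost:8001/services",
              data={"name": "httpbin", "url": "http://httpbin.org"})

# ...and multiple Routes attach different matching rules to it,
# with no need to duplicate the upstream configuration.
requests.post("http://localhost:8001/services/httpbin/routes",
              data={"paths[]": "/test"})
requests.post("http://localhost:8001/services/httpbin/routes",
              data={"paths[]": "/demo", "methods[]": "GET"})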

As a release candidate, we discourage the use of 0.13.0rc1 in production environments, but we strongly encourage you to give it a try and send us your feedback! Kong Nation is a great way to ask questions or post feedback, and the GitHub issues are still the de facto place for bug reports.

We wish to thank in advance all of the testers of this release candidate. The more feedback we receive from the community, the faster we will release a stable version!

The post Kong CE 0.12.2 & 0.13.0rc1 released appeared first on KongHQ.

Kong Takes on Tokyo


Kong is proudly hosting a Tokyo Meetup on March 14th with our partner XLsoft. Join us to learn about Kong and microservice APIs, meet new developers, and enjoy pizza and beer. Register here!

 

Date and Time

March 14th at 7:00pm (6:30pm check-in)

Venue

XLsoft KK

Moriden Building 6F

3-9-9 Mita Minato-Ku

Tokyo, 108-0073 Japan

 

Meetup Details

We are planning a demonstration of Kong Enterprise Edition, with an explanation of its features, including the following:

– Secure access to API endpoints using enterprise level authentication

– Traffic management with rate limiting

– Role-based access control

– Advanced rate limiting

– Regular expression routing and translation

– Advanced authentication

 

 

We’re also sponsoring the Gartner Enterprise Application Strategy & Application Architecture Summit 2018, March 15th – 16th. Drop by our booth and say hello. Let’s talk about APIs and microservices. The Summit attracts CIOs, application developers and other IT executives from around the world.

Please visit our booth to discuss how Kong can secure, manage and orchestrate your microservice APIs.

 

Dates

March 15 – 16, 2018

Venue

Tokyo Conference Center Shinagawa

Area Shinagawa 3rd, 4th & 5th Floor, 1-9-36 Konan, Minato-ku

Tokyo, Japan – 108-0075

 

About Kong

The Kong API Management platform is designed to sit in front of highly performant APIs and microservices to protect, extend and orchestrate distributed systems. With over 14,500 stars on Github and 10 million plus downloads, Kong is the most popular open-source API gateway and microservices management layer.

The post Kong Takes on Tokyo appeared first on KongHQ.
