
Serverless: Komal Mangtani, Greg Osuri, Guillermo Rauch, Gwen Shapira


The Rise of Serverless

Serverless and event-driven computing are gaining traction, providing cost savings in the cloud and more efficient resource utilization on premises. Watch this talk recording to hear Kong CTO Marco Palladino moderate a panel discussion with industry leaders about the rise of serverless, its interactions with other cloud-native technologies, the challenges of implementing it, and where the field is headed in the future.

 

See More Kong Summit Talks

Sign up to receive updates and watch the presentations on the Kong Summit page. We’d love to see you in 2019!



Kong Cloud Case Study Part 2: Collecting Metrics at 1M+ RPS


In our last blog post in this series, we discussed our journey designing a metrics pipeline for Kong Cloud to ensure the reliability of our SaaS offering. We discussed how we re-architected our production data pipeline using OpenResty to send metrics to Prometheus and saw huge performance gains. We are now able to monitor high traffic volumes in our system using much less compute power, lowering our costs. Decoupling metrics and logging infrastructure has helped us evolve and scale the two systems independently.

In this blog post, we will discuss the production metrics pipeline architecture in detail – covering how metrics are collected, aggregated and processed. We'll also talk about the factors we considered while deciding where to use push and pull philosophies for passing metrics between the components of our pipeline.

Architecture Overview

OpenResty

We use OpenResty at the Kong Cloud edge to collect request metrics amongst other things. OpenResty executes logic in the context of various “phases” that run throughout the lifecycle of an HTTP request. These phases allow Lua logic to run before, during and after the HTTP request is handled by NGINX. In our edge tier, OpenResty worker processes extract metrics during the log phase of each request they handle by executing Lua code. Workers store these metrics in the local memory of the Lua Virtual Machine (VM). Specifics of the various phases of OpenResty execution are detailed in the lua-nginx-module documentation.
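
To make this concrete, here is a minimal sketch of what per-worker collection in the log phase can look like. The metric names and table layout are illustrative only, not Kong Cloud's actual code; the handler would be wired up from a log_by_lua_block directive in nginx.conf, e.g. log_by_lua_block { require("edge_logger").on_log_phase() }.

    -- edge_logger.lua (hypothetical): per-worker metrics kept in the Lua VM
    local _M = { metrics = { requests_by_status = {}, request_time_sum = 0 } }

    function _M.on_log_phase()
      local m = _M.metrics
      -- string key keeps the table a JSON object if it is serialised later
      local status = tostring(ngx.status)
      m.requests_by_status[status] = (m.requests_by_status[status] or 0) + 1

      -- ngx.var.request_time is the time NGINX spent on the request, in seconds
      m.request_time_sum = m.request_time_sum + (tonumber(ngx.var.request_time) or 0)
    end

    return _M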

Metrics Exposition

In NGINX with multiple worker processes, each request is handled by one of the worker processes. This means that requests for metrics from a system like Prometheus would be handled by only one of the worker processes, and it would be hard to predict which one that would be. Because each worker process collects metrics on its own, a worker process will only provide data stored in its own memory space and not from other workers.

If the workers could be kept synchronized, it wouldn’t matter which one was serving metrics requests. However, the event-driven model of NGINX makes it hard to synchronize all the workers without slowing down the current request, since worker level synchronization requires establishing a mutex across all workers in order to update shared memory.

Another solution would be to aggregate and store metrics in a shared memory zone (a shared dictionary in OpenResty), apart from local worker memory. Aggregating the metrics into this zone presented a problem under highly concurrent traffic: a substantial amount of time was wasted on lock contention as workers competed to update the same statistics.

At first, we tried to work around the architectural limitations of shared memory. We examined two possibilities: we could either increment the counters in shared memory for each request or let workers write back to the shared memory on a specified interval. Both solutions led to performance penalties, since each operation on an OpenResty shared dictionary involves a node-level lock and thus slows down the current request. After considering this, we concluded that shared memory using OpenResty shared dictionaries wouldn't be the optimal solution.
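
For illustration, the shared-dictionary approach we moved away from looks roughly like the sketch below; the dictionary name and key scheme are hypothetical. Every call to incr() takes the dictionary's lock, which is shared by all workers, so under high concurrency workers end up waiting on each other in the log phase.

    -- nginx.conf would declare the zone:  lua_shared_dict metrics 10m
    local shm = ngx.shared.metrics

    local function on_log_phase()
      local key = "requests_total:" .. ngx.status
      -- incr(key, value, init) creates the counter if missing, then increments it
      local newval, err = shm:incr(key, 1, 0)
      if not newval then
        ngx.log(ngx.ERR, "failed to increment ", key, ": ", err)
      end
    end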

A smarter design called for a way to collect metrics from within each worker, without spending cycles on lock contention and without requiring workers to sync their statistics with each other on a periodic interval to be polled by Prometheus.

Design Considerations

Metrics systems can be based on either a push or pull model. In our experience, the model to use for monitoring is highly dependent on the type of workload and architecture of the application being monitored. To monitor each of our software components and the infrastructure the component lives on, we needed to tailor our solution to collect metrics effectively using either push, pull or a mixture.

We use Prometheus for monitoring our infrastructure and services. Prometheus is designed with the pull philosophy in mind. It scrapes metrics and needs extra tooling to connect to components that actively push metrics out. Any system that includes it must either rely exclusively on pull, or mix push and pull strategies.

Pull

In a pull model, the metrics server requests metrics data from each service within the infrastructure, extracts the metrics and indexes them. Each monitored service provides an interface exposing its metrics for the system to consume.

This requires that the monitor and the services being monitored agree on a pre-existing endpoint by which to serve metrics and requires the monitored service to spend cycles calculating, aggregating and serving metrics data via an additional application interface. For Prometheus, this interface is an HTTP endpoint that serves metrics in exposition formats that Prometheus scrapes on a configured interval.
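
For reference, the Prometheus text exposition format served from such an endpoint looks like the sample below; the metric and label names are made up for illustration.

    # TYPE edge_requests_total counter
    edge_requests_total{component="proxy",status="200"} 10273
    edge_requests_total{component="proxy",status="500"} 4

    # TYPE edge_request_duration_seconds histogram
    edge_request_duration_seconds_bucket{le="0.05"} 9831
    edge_request_duration_seconds_bucket{le="0.1"} 10210
    edge_request_duration_seconds_bucket{le="+Inf"} 10277
    edge_request_duration_seconds_sum 312.4
    edge_request_duration_seconds_count 10277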

The cost of additional complexity on the monitored component is balanced by greater flexibility for the metrics server; selectively fetching and ingesting metrics allows a monitoring server to more intelligently handle service overloads, retry logic and selective ingestion of data points. Another advantage of a pull model is that the component doesn't need to know about the presence of the monitoring component. Prometheus can use service discovery to find the components it needs to monitor.

Prometheus prefers pull over push generally, but that doesn’t necessarily mean we should prefer pull over push when designing our metrics infrastructure.

Push

In the traditional push model (leveraged by tools like StatsD, Graphite, InfluxDB, etc.), the service being monitored sends out metrics to the metrics system itself. Typically this is done over a lightweight protocol (such as a thin UDP wire format in the case of StatsD) to reduce the complexity and load requirement of the monitored system. This is much more straightforward to set up since the monitored service only needs to know about the address of the metrics system in order to stream metrics to it.
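
As an illustration of how thin that wire format is, a StatsD-style client emits single plain-text lines over UDP, one per event: a counter increment, a timer in milliseconds and a gauge, respectively (the metric names below are made up).

    kong.requests.count:1|c
    kong.request.latency:12|ms
    kong.connections.active:42|g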

Prometheus doesn’t support a “pure” push model because components can’t write to it directly. To monitor push-model components with Prometheus, the component being monitored actively sends metrics to middleware, which Prometheus then actively scrapes. The middleware essentially acts as a “repeater” that accepts push from a monitored component. Depending on the data format the component is pushing, the “repeater” can be a Pushgateway or StatsD Exporter.

We designed the model we use to be a mix of push and pull, as shown below:

Diagram of traffic flow from metrics components, and whether information is being pushed or pulled.

Components

Edge Logger

The edge logger is a piece of Lua code that runs on each worker. It has two responsibilities:

  1. Collect metrics for the current request and store them in local worker memory during the log phase
  2. Push the metrics to an edge exporter on a specified interval

The edge logger does worker-level aggregation.

The edge logger collects metadata for every request being proxied. The data can be classified into two data types: counters and histograms.

Counters

Counters are incremented whenever an event occurs. At the edge, we record a few metrics which are counter-based:  

  • Number of requests by status code, component and customer: This information helps us drill down into which customers are experiencing issues if a new rollout ever causes problems. We also count HTTP status codes returned by Kong, by the customer's upstream and by the edge. If these start to diverge, we know there is a problem with one of the components. For example, if a request is being terminated at Kong with a 500 status code, that means that Kong is throwing an error, but if the 500 comes from the customer's upstream server, then the customer needs to take action to resolve it.
  • Transit egress/ingress: We count the number of bytes sent and received for each customer and type of service (Proxy, Kong Manager, Kong Portal, etc).

Histograms

Histograms are generated by bucketing events and then counting the events in each bucket. We use them to record timing statistics by configuring buckets of durations, classifying each request in one of those buckets and incrementing a counter for each bucket. A few examples of metrics that we record as histograms are:

  • Total request duration: We keep track of how long requests are taking from start to finish
  • Latency added by Kong: This helps us make sure that Kong is performing with acceptable overheads
  • Latency added by Kong Cloud: This is one of the important service level indicators (SLIs) for us. It indicates how much latency we’re adding to requests and allows us to continuously tune and optimize our platform.
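
A minimal sketch of that bucketing logic is shown below. The bucket boundaries are illustrative, not our production configuration; as in Prometheus histograms, the buckets are cumulative, so an observation increments every bucket whose upper bound it fits under.

    -- Illustrative latency histogram with cumulative buckets (seconds)
    local buckets = { 0.005, 0.01, 0.025, 0.05, 0.1, 0.25, 0.5, 1, 2.5 }
    local hist = { counts = {}, sum = 0, count = 0 }

    local function observe(duration)
      hist.sum = hist.sum + duration
      hist.count = hist.count + 1
      for i, upper_bound in ipairs(buckets) do
        if duration <= upper_bound then
          hist.counts[i] = (hist.counts[i] or 0) + 1
        end
      end
    end

    -- in the log phase, e.g.:
    -- observe(tonumber(ngx.var.request_time) or 0)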

Performance Tuning and Metrics Storage

To gain better performance and improve resource utilization, we use LuaJIT's foreign function interface (FFI) to create a C-level struct for metrics like latencies and error counters. When the edge logger loads the JSON config of all the metrics it's configured to collect, it also generates the FFI definition of the C struct.
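
A hypothetical example of the kind of definition the logger could generate is shown below; the struct and field names are illustrative.

    local ffi = require "ffi"

    -- Generated from the JSON metrics config (names are illustrative)
    ffi.cdef[[
      struct edge_metrics {
        uint64_t kong_latency_ms_sum;
        uint64_t upstream_latency_ms_sum;
        uint64_t request_errors;
      };
    ]]

    -- One instance per worker; plain struct field updates avoid Lua table
    -- hashing and reduce garbage collection pressure.
    local m = ffi.new("struct edge_metrics")
    m.request_errors = m.request_errors + 1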

We also use the LuaJIT function table.new to preallocate the table needed to store schemaless metrics like status codes. Since we don't care about rare status codes like 458 or 523, we group all status codes, except for those that are vital to us, into classes. For example, status codes larger than 199 and smaller than 300 are collected as 200, 201, 204 or 2xx. This allows us to much more accurately predict the memory usage and CPU overhead when inserting and updating values in the table.
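
A short sketch of that preallocation and grouping, with illustrative sizes and tracked codes, might look like this:

    local new_tab = require "table.new"

    -- Preallocate the hash part of the table: 0 array slots, ~16 hash keys
    local status_counts = new_tab(0, 16)

    -- Codes we keep individually; everything else collapses into "2xx", "5xx", ...
    local tracked = {
      [200] = true, [201] = true, [204] = true,
      [301] = true, [401] = true, [404] = true, [500] = true,
    }

    local function record_status(status)
      local key = tracked[status] and tostring(status)
                  or string.format("%dxx", math.floor(status / 100))
      status_counts[key] = (status_counts[key] or 0) + 1
    end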

Metrics like request size or response duration are stored in NGINX variables. To access them in the logger, we need to use the ngx.var Lua table, which holds those values. To avoid unnecessary ngx.var lookups, we use consul-template to render the nginx.conf so that values like the monitored component are hardcoded into the Lua code that calls the logger.

As discussed before, due to the NGINX worker model, each worker process holds its own instance of the metrics data. To aggregate that data across all the workers, each logger actively pushes data from its worker to an exporter. When each worker sends its local metrics data, it also attaches its worker_id value so that the exporter can distinguish the source of the metrics.
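
A hedged sketch of what that periodic push could look like is below. The exporter address and port, the JSON payload and the 10-second interval are assumptions for illustration, not Kong Cloud's actual wire format.

    local cjson = require "cjson.safe"

    local function push_metrics(premature, metrics)
      if premature then return end              -- NGINX is shutting down
      local sock = ngx.socket.tcp()
      sock:settimeout(1000)                     -- milliseconds
      local ok, err = sock:connect("127.0.0.1", 9200)
      if not ok then
        ngx.log(ngx.ERR, "failed to connect to edge exporter: ", err)
        return
      end
      -- worker_id lets the exporter tell the workers apart
      local payload = cjson.encode({ worker_id = ngx.worker.id(), metrics = metrics })
      sock:send(payload .. "\n")
      sock:setkeepalive(10000, 4)               -- return the connection to the pool
    end

    -- scheduled once per worker, e.g. from init_worker_by_lua_block:
    -- ngx.timer.every(10, push_metrics, metrics)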

Edge Exporter

The exporter is a small application running alongside the OpenResty worker processes. It works similarly to the Pushgateway. It is configured with a config file defining the metric names and types it is expected to receive from the OpenResty worker processes. The edge exporter supports counters, gauges and histograms at the moment. It translates the metrics it receives from the OpenResty worker processes into Prometheus metrics using the Prometheus Go client library.

The edge exporter listens on a port on its loopback interface for metrics from the OpenResty worker processes. Periodically, each edge logger sends all the metrics it has gathered since its worker process started to the edge exporter. The exporter completely replaces all its metrics for a worker process whenever it receives a new update from it. On the surface, this might seem inefficient, but in practice this design has been fruitful, since it essentially makes the edge exporter stateless and easy to maintain.

We've found that skipping metrics which are essentially empty or zero can help reduce the load on Prometheus. For example, if we are tracking server errors (500 HTTP status codes), we expect them to be zero in production most of the time, so we skip exporting metrics which have zero values. If there are a lot of such zero-value metrics, skipping them reduces CPU consumption on Prometheus, since those metrics don't need to be time-stamped, recorded and indexed.

Counter-metric resets can and do occur when OpenResty worker processes are restarted/reloaded. Prometheus queries and functions handle these appropriately in most cases and we don’t do anything special on the Edge Exporter side to handle counter resets.

Stay tuned for our next blog post, where we will discuss how we benchmark and optimize our logging library.


Sharpening the Axe: Our Journey into Disruption with Kong


Jason Walker shares how Cargill is using Kong to transform legacy architecture with a “Cloud first, but not always” approach. Hear why Cargill chose Kong for their API gateway as part of their internal API platform, Capricorn, allowing Jason’s small team to stay nimble while they administer decentralized deployments. In this talk from Kong Summit 2018, Jason shares how Kong routes traffic in Cargill’s Kubernetes cluster. He also discusses how Kong fits in with Cargill’s architectural principles and strategies to maintain discrete controls over continuous deployment, and more.

See More Kong Summit Talks

Sign up to receive updates and watch the presentations on the Kong Summit page. We’d love to see you in 2019!

Full Transcript

Hi, so I'm Jason Walker. I've been at Cargill for about two years. Prior to that I was at a large retailer that has a dog as a mascot; I was there for a couple years. Over the course of the last couple of decades, one of the things I've found I've been involved in is helping teams level up on their journey of disrupting whatever type of market or industry they're in, whether that be in retail, in banking, and now in what we like to refer to as ag tech. So if you're not familiar with the sharpening-the-axe quote, it comes from, as far as I'm aware (I wasn't there, so I can't actually quote the master, if you will) Abe Lincoln, and I'm going to paraphrase, but essentially: if you give me five hours to chop down a tree, I'm going to spend four hours sharpening the axe.

And so part of what a collection of us have done to bring some disruptive ideas into Cargill is to look at what we need to set up as a set of supportive platform services, etc., for the rest of Cargill to be able to pull in, combine and use as a way to address some of the digital transformation issues that we have.

A little bit about Cargill: has anyone heard of Cargill? Wow, that's actually impressive, because we usually say it's the largest company nobody has ever heard of. Cargill has been around for over 150 years. I think this is probably more anecdotal than anything else, but my understanding is that all of McDonald's eggs worldwide are supplied in some way by Cargill. So if you've ever had an Egg McMuffin, thank you very much, please keep doing that.

One of the big purposes that we have is to nourish the world; that is our purpose, and our mission is to help people thrive. We know we need to do this in safe, responsible and sustainable ways, and technology becomes an enabler of that. Without technology it's going to become more burdensome, it's going to be more difficult, and essentially we're going to end up losing out on things like the farmers of the new world and the capabilities that may come from using technologies like Kong and cloud service providers and being able to truly level up.

We are steeped in legacy. As I mentioned during the keynote, we have thousands of plants; those plants do anything from grinding corn to creating oil, and then there's some amount of, let's say, disassembly of various types of protein, and I'll leave it at that. One of the things, though, is that those locations are in some cases very remote. In different parts of the world we have cocoa pod tree plantations or farms where there's really no connectivity, so we need to be able to do some creative things, like using drones to take pictures of trees to make sure the trees are healthy, and we can only do that by being able to extend some of the technology and equipment that we have.

So the direction we're going is: we realize we need to establish a bedrock that we can build a broader foundation on, and these are the types of things we'll get into, with some of the specifics around the different platforms. Colin Job is also here with me from Cargill and has a talk later on today. We work in an organization within our global IT function called Cargill Digital Labs, and in Cargill Digital Labs we lean on our architectural principles. I made note of that during the keynote; there are eight in total, but the ones that will be interleaved throughout this presentation, this conversation, are simple and standard, loosely coupled, and provider agnostic.

When it comes to simple and standard, all of these are probably self-explanatory, but one of the things we want to be able to do, and we'll get into more specifics later on, is provide to our developers who are building apps and solutions for our customers a way to consume a set of services that are repeatable, so that what they do on their workstation is the same thing as what they're doing in dev, what they're promoting to stage, what they're moving to prod, etc. For loosely coupled, we want to make sure that we can tightly align things like the way we manage security but loosely couple that with the implementation itself.

We far too often make the tool the primary thing that is going to deliver an implementation, as opposed to saying we have a set of capabilities and we need to find tools that can actually map to those capabilities, so that in the event we need to swap something out, by going from one cloud service provider to another or being able to deploy and scale across them, we're able to do that. Then provider agnostic enforces that and makes it very clear that we're providing intellectual-property-type widgets, that we're able to actually move those things around and not couple them to various technology bases.

So in the digital labs space, the stuff that's inside the triangle is essentially our digital foundations; right now there are three of them. The data platform is the oldest; that is where the big data stuff takes place. So there's analytics, there's reporting, there are all the types of capabilities that you would expect from a big data platform. The next oldest one is our cloud platform; there's been tons of great work in that space for us to be able to consume, through a very basic interface, a means to get cloud-oriented infrastructure, whether that be deploying to Kubernetes, getting database as a service, object store, etc. Those two feed into and provide a set of services for us to be able to do things like provide the API platform, and the API platform isn't just the gateway; we provide some other components such as scaling, metrics, and the ability to extend and safely expose, to the right people at the right time, various data points.

For example, as we look to do things like monetize some of the data that we have for various operations, or things that we want to be able to expose, the API platform is there for us to provide that. One thing that we're looking at, as we go through and look to generate more competencies around IoT, is to eventually have an IoT platform sit more on the edge but also be able to consume the other components that are in the digital foundations triangle. It isn't intended to look like the Illuminati, and sometimes people kind of go "let's put the triangle", no. So the API platform is born from us building up those digital foundations. These are four of the main things that we wanted to be able to articulate as part of why we're building an API platform and the things we're able to create with it.

Dave Chucky is our product owner. Internally we call the data platform CDP, Cargill Data Platform, and the Cargill cloud platform is CCP. The API platform, it was like, well, it's CAP, and then it turned into Cappy, and then, I forgot the stickers, we've got a bunch of little Capricorn stickers, he just went with Capricorn, so now we call it Capricorn, or Cappy for short. We obviously have various strategies that come into play at Cargill, whether it be a technology strategy or a business strategy; there are probably a dozen strategies if we were to actually count. It's a 150-year-old company, it's huge, we've got a lot of strategies. But we want to incorporate the key components of our architecture principles into what the API platform is going to provide. We also want to make sure that when it comes down to road mapping that platform, as well as other platforms, the services work well together.

We also want to be able to say that when it comes down to deployment to production and lifecycling all of the different moving parts, we can take a systems-thinking approach to delivering these various services. And of course, since we already have a place within our cloud platform to manage hosting, to manage observability, and to a certain degree metrics, logs, etc., we want to be able to reuse those components; if we already have data points that are in our data platform, we want to reuse them and not recreate them. These are some of the high-level tenets, if you will, of the API platform.

Great, we've got all these things, so how does Kong fit? As I mentioned before, at Cargill, and this is not specific to Cargill, I think anyone that's been in a large enterprise and has dealt with some of the, I'll say, bureaucracy of a large enterprise, it'd be very easy to say we already have a tool, we already have an incumbent in place, let's just use that big hammer (see, I bleeped myself, "big hammer") for all the problems that potentially exist. We can always make use of this one particular tool and always be able to deploy it.

We wanted to take a step back and just say, what are some high-level criteria that we would use, and let's evaluate the market. We already had in place some incumbents when it came down to providing gateway types of experiences, but they were limiting; there were different things that we wanted to be able to do against some of the other aspects of our architecture principles, because as I mentioned in the keynote, we have a "cloud first, but not always" approach.

Well, if we're cloud first but not always, is there an opportunity to introduce a requirement that says let's do cloud first and, based upon those capabilities, find tools that actually enable and empower the delivery? So there are three capabilities that we wanted to take at a super-high level: let's look at things in the open source market, things that can then extend to maybe commercial offerings, but there was really nothing that was removed, including the incumbent. We wanted to look at this and say, could we take this criteria, what would we use, what would it look like, and evaluate against the various offerings that are out there.

The first one, cloud native implementation: we wanted something that was going to be containerized, something that would be able to use our cloud platform space. We're maturing and leveling up around Kubernetes, so this is something that we'd be able to potentially use and reuse. For choice on pipeline, we've been maturing our CI/CD pipeline and want to make sure that the way we're maturing it is not going to be interfered with by a new technology base, but that it's something that's able to integrate into the pipeline that we're building. It's pretty extensive as far as what we do; a lot of the things that we do with it come from, you know, the last 10 or so years of experience. We feel like we are actually pretty good at it. We know we can get better, and there are some things we'll talk about with that.

The last thing is making sure that the developer experience, the "it works on my machine," is something that they're able to actually pull in and incorporate, and then be able to promote those same artifacts and the same configs and the same experience upwards, and not have all of a sudden something be different even though it worked on my Mac or my Windows machine.

So within the containers and Kubernetes space, our cloud platform is where we have the majority of artifacts that are hosted. Kubernetes is one of the moving parts of the cloud platform, but again, there are other things that you would expect from an abstraction layer over a cloud service provider. So there's object storage, there are databases, there are security groups, there are firewalls, there are various points of ingress, there's DNS; all those things we've built into the cloud platform, so in the API platform we're just able to consume them and use some declarative YAML to tell the cloud platform, here's the way that I need some stuff done in order for me to provide that API platform.

So DNS, ingress and hosting are provided by our cloud platform; dropping down to the data platform, that's data, reporting, analytics. We don't do transactional within the data platform; we have the application teams own that, but they're able to make use of the cloud platform to consume their own stuff. And so the API platform, at a super high level, is routing, authentication and gateway-type capabilities.

So, for discrete controls over continuous deployment: again, I feel like we continue to mature and evolve, and when we have conversations with other companies, and this is not even necessarily about Cargill but just the experience of talking with other companies, they're like, we're pretty far along in this journey. It's something where, as we go through and we identify here's a new way to do something, we do get a little bit of, hey, we just figured out a way to level up. I don't know if we're level 7 or whatever; I don't think there's necessarily a scale. But when you find something, and internally you're able to socialize it and get a really good experience and a good fit and feel for the way it works, it passes a smell test with other teams and they go, hey, that's kind of cool, we should do that too.

Here's a picture, at a really high level, of some of the stuff that we've got going on. To go across the deal, we've got a build repository where we do all of our testing and build an artifact. We basically put out the principle, or not a principle but the guidance, of: you can use whatever language you want, but you're going to build a Docker image; that's the hard stop, you're just going to build a Docker image. So what that allows us to do... I think I can do the laser, that doesn't really show up, does it?

I'll go over here; still doesn't work. We have this build repository which lets us do a really basic fork-and-branch type workflow, and that is a workflow that we subscribe to. We don't necessarily do a Git workflow where you pull or clone from one repo and then push to it; we want to do a fork (did I just... oh, I saw some giggles, I thought maybe I screwed something up), no, but branch, submit a pull request that triggers some activity, a merge triggers some more activity, but a tag ultimately creates a semantically versioned artifact that we present for deployment.

That doesn't mean it gets deployed; it's just now in a position to be deployed. What we have is a separate deployment repository that follows some similar things as far as a pull request being submitted to it, but the environments are defined, and we actually make use of the deployments APIs that are available through a lot of the different CI servers as well as SCM servers to be able to say, hey, we want to deploy this to our engineering environment, which is where my team is able to introduce breaking changes, and if things go bad we just crater the environment and rebuild it. Why? Because we have built all of our images and our configs as tagged, semantically versioned images that we're able to deploy on demand.

We also keep secrets separate. The underlying tooling for our CI server is Drone CI; we just use the open-source version of Drone CI. Under the covers it essentially leverages HashiCorp Vault to manage secrets. So what we do is we push the secrets into Drone, and Drone is then able to push them into various environments, whether it be through something like kubectl or AWS Parameter Store or into another Vault environment that's able to be consumed. What that then allows us to do is keep these things discrete. We often will timestamp the name of the secret, so in the event that we need to introduce secret number two, secret number one can live out in the wild, we can track it, make sure secret number two is actually the thing that's being consumed as we move forward through our environments, and then deprecate secret number one, throwing it away.

So we never have the big bang of, I need to rotate a secret and now I've got this issue of do I redeploy and try to forklift it and, oh God, I hope this works, or do we just introduce something new and then rotate through? So down here in this Drone box we have SCM, of course; we do all of our testing, and this is anything from unit tests to functional, integration, security, performance, anything that we're able to come up with. We package up the image and we make sure that the image will actually run and work, so we can push it into an environment that just says, hey, we created this Docker image, but can I actually push it and consume it someplace?

So in some cases we'll just do essentially a smoke test against it, and then we have a bin repository that has some controls in it that do not allow us to do things like overwrite an existing version, because there's nothing worse than having a 1.0 that was just released and then tomorrow 1.0 gets pushed again, and you're like, wait, no, we cannot have different [inaudible 00:16:56] for 1.0, am I right or am I right or am I right? Thank you, yeah.

So we call out delivery and deployment in separate boxes, because in our continuous delivery space, now that we've published that image, we want to do CVE scans. We do common vulnerability scanning, and we want to make sure that any kind of licensing that may be packaged inside of that image doesn't include things that would cause Cargill any kind of issue if we release something. Way back when, there was an FTP client, it was sort of like PuTTY but it was an FTP client, and the licensing was "do good." It was like, what does that mean? Well, you just have to do good. Okay, so, subjective, right? So we have to make sure that the licenses that are actually pushed into those images adhere to stuff like, it's Apache v2 or it's MIT, or in some cases GPL v3 or GPL stuff is okay, but in other areas, where we potentially want to maybe sell something that has intellectual property, we need to keep those licenses out so that we don't run into situations where we have to push upstream.

Then OODA: as part of the delivery cycle we will build in some exploratory stuff, and this is just observe, orient, decide and act, right, an old '70s military term for when you're going through and you've got a ton of data and you need to make decisions: observe, orient, decide and act. Then for deployment, Captain is called out here; that is the framework that our cloud platform team has created, and it's just the tag name, because Kubernetes, and you know, more stickers; I think maybe we have stickers for those. But with Captain, it's essentially a YAML file that runs as a Drone plugin, and we're able to declare what an environment should look like, whether it be dev, stage or prod, through something very similar to a docker-compose file, totally different syntax but a similar type of setup.

And then of course in Drone we're able to post things into chat ops, post to other APIs, and incorporate some additional things like ITSM controls or compliance, things that we're (a) working on and (b) leveling up in the ITSM space. As a 150-year-old company with tons of legacy, we have our ITSM team going, what if we exposed an API and you pushed the changes that you're doing over here into that API? And we're like, hey, we happen to have a place where you can run that API, just saying.

All right, so declared local environments for the batteries included. This is honestly just a really high-level, what if there's an open-source project out there that can do a visualization of docker-compose files; that's all this is, just to give you an idea that we're really just making use of docker-compose in order to say, in one package we can have developers on their workstation consume a gateway using Community Edition, a database and an admin GUI. We're just making use of Konga, I don't know if anyone's heard of Konga, but we're using Konga as the admin for this particular scenario, exposing the necessary ports on their local machine, and then via configurations this file share is essentially able to be consumed, so when the gateway starts up it's able to put some sugar into the deployment and you get things like a login, that kind of thing.

So, the next opportunities; okay, I'm good on time, next opportunities. One of the key things, I mentioned Dave Chucky is our product owner, and one of the things that he continues to iterate on is, let's make sure we're giving people choice. When it comes down to the stick or the carrot in interacting with customers, we want to give a lot of carrots; I think we've all had enough stick. So what we want to do is take a look at what are some things we can do in order to internally level up our game on being able to promote a safe, secure, and really just sane set of platforms, API platform included. So CI/CD, self service and plugin development are three things where we see what we want to be able to do and move forward.

This is where we see Kong, and the approach that's being taken by Kong, really starting to map very well. So in CI/CD, as we've gone from Community Edition and now we're moving into Enterprise Edition, and seeing the stuff with 1.0, there's just a ton of opportunity for more automation. More ways to automate not only the deployment config, but we're also working on some stuff on the plugin side to be able to do things like, let's just evaluate that a gateway actually adheres to what we expect it to look like, and that it maps to our compliance and risk teams' expectations of things like, hey, if there happens to be an HTTP port on an interface, it redirects to HTTPS, just some basic things like that.

Not necessarily a pen test, not necessarily full-fledged vulnerability scans, but just some lightweight things that we can introduce as part of our CI/CD pipeline to give developers really quick feedback, and also a method for us to continue to level up and push safe code into the environment.

I mentioned the carrot and the stick; we are always asking our customers, what do you want, how can we do better? And if there are ideas that those customers have, just ask. The ideas around self-service: we've developed a few tools and utilities that we promote as our API platform services. It includes things like, if you've ever heard of encrypted JSON, there are some libraries out there around salt and sodium, and basically it's an ability to encrypt text such that you can present the encrypted text through a change, like a pull request, and then use a public/private key to encrypt and decrypt.

The reason we do this is we have application teams that are making use of authentication plugins, think of things like OpenID and OpenID Connect; they have a client ID and a client secret. As the platform team, we don't want to know their secret, but we need their plugin to be configured to allow the traffic to authenticate. So we have a platform services app with an API that talks back into the gateway, where people are able to register essentially a token; the gateway owns the private token, they get the public token, they're able to encrypt with that public token, and only the gateway is able to decrypt it.

So they're able to check their secrets into source code management in clear text, because it's already ciphered text, and the only thing that is set up to be able to decrypt it is the gateway. What that does is make it to where we don't have to know what the passwords are, and those teams are able to just use the pull request model to say, here's a change, here's an update, yep, we screwed up our fill-in-the-blank identity provider configuration and we need to update it, or we need to revoke and renew passwords.

For plugins, this is one area where Colin and I continue to riff on ideas. We'd love to be able to do a secrets plugin, be able to do a metadata plugin; the ecosystem that Kong is presenting, especially with the Plugin Development Kit, is going to make it to where we're going to be able to start launching those, and, fingers crossed, as we get better at Cargill around those, be able to open source them and participate in the ecosystem in a better way. I mean, it would be great if anyone who does secrets management at all had a Kong plugin that could just talk to something like Vault. Am I right? Yeah, cool, so yeah, wow, yes.

So it's a bit of these particular aspects of the Kong community and the Kong ecosystem. We saw it early when we were using Community Edition, and now that we're moving forward we see it as a great fit for the different things that we're looking to do. So the next steps are just some things where, and I was really excited to hear about some of the stuff like the service mesh and so forth, because our first phase is centralizing the platform and the deployments. What we're actually kind of doing is, let's create a monolith of a cluster that can scale up and down, but have everything route through that set of gateways in that cluster.

What it allows us to do, with a very small team, is build up new features, build up and enhance against what the features are, stay as nimble as possible, and as new Kong features are released, be able to consume them in sane ways. And honestly, because we are constrained by having a small team, we actually don't over-invest in, I'll say, indulgent ideas that we think will become elegant and everyone's going to love, that never come to fruition because it's just too much. So we keep things simple; we iterate, in this centralized deployment, on what are really the important things that we need to provide and promote, and we execute against that.

So the next phase for us is moving to decentralized deployments, and once we get the automation stuff squared away and teams are able to self-service and not collide with URIs and upstreams, we can be in a position to say we're not going to have a single central gateway cluster; we'll have multiple clusters and push those clusters closer to the applications.

Let the app teams actually do the deployment of their own stuff. We'll end up with, whatever the packaging is, it'll be a Docker image, but whatever the package is, it'll be something where teams are able to consume on demand through semantically versioned images and deployments, and then push it all down to where it just becomes part of the network via the service mesh, once teams agree with the pattern and the technology. I was so happy to hear layer four, four to seven; I've been asking a couple of times, like, you're going to layer four, right? Yes, so that's really great to hear.

Because, as we talked about, you know, cloud first but not always, we're absolutely going to have those things that need to stay on premises. I may fumble words like crown jewels, clown drools, yeah, clown's rules, that even got worse, like I doubled down on the crown jewels. But those big things, like every company says, regardless of the product that we sell, there's an algorithm, there's a formula, there's some secret sauce that needs to stay protected, that probably will never make it into another data center that's owned by someone else, right? There are just those things. But the more that we're able to provide the services at the network layer, and it just sort of acts like DNS, it's just part of what's there, the more we're going to be able to accelerate some of the development and delivery of the digital transformation that Cargill is undergoing.

So those are kind of the key things around how Kong fits within the API platform. It's the aspects of the plugins, the direction we want to go with self-service, and this architecture; there are big architectural things coming out of Kong. Like I said, we had no perspective or information on what was being announced, and just to see the commonality, there was a bit of, you know, talking in the back as we were watching things go up, and it was like, they're checking our boxes, right? It's just all moving along very, very nicely.

We are hiring, sort of the obligatory we're-hiring slide. So between the API platform and the cloud platform, in various areas we have plenty of things going on. And with that, I think I'm early; I don't know, I can't even tell what time it is. Yeah, but nonetheless, I'll open it up to questions. Thank you. And if not, I'm not keeping you from lunch, so I don't feel bad. We don't have the little cubes?

Speaker 2: So Cargill is a 150 year old company, so I'm interested in seeing what your experience was in migrating legacy: did you have monoliths that you have started to break down, and what were, in your opinion, some challenges that you faced in terms of API management or with making APIs externally available?

Jason Walker: Yeah, I'm assuming everyone was able to hear your question; let me know if I miss the answer here. Some of the things that we haven't done yet is externalize all the APIs. The previous implementation of an API management strategy included lots of, I'll say, tightly-knit integrations: there was already an existing file-based integration to get data from point A to point B, and instead of the mindset of, well, how do we decouple that and how do we go data-centric but provide a loosely coupled interface, it was basically let's just lift and land that integration, which meant that any of the gateways or any of those services were really just a rinse and repeat; we just changed out the tool. Which meant we actually weren't doing things like using or consuming or building out the APIs.

That's not to say that that was the exclusive approach, but when it came down to some of the lift and shift, if you will, I think the approach we have taken to how we're managing this is that the API platform itself is an enabler of, and a way to expose, some of the work and effort that was done around rationalizing and exposing the data itself. In the past, the data itself was disparate, it was duplicated, there was no ownership.

That data platform is actually the thing that's triggering the ability to say we can assign data ownership and make it very clear, regardless of the economics, what that data model, the data and domain model, should look like, and then make use of the API platform to safely expose that to the right consumers, in the right places, with the right guards and controls in place. As we mature the API platform, we are building it in such a way that any API could be exposed externally; not that we would, because of reasons. Does that … Okay. And I saw two hands; she has a mic, she wins, she roshambo'd you.

Speaker 3: Thank you, this was very informational. So I have two questions; you can pick whichever one you want to answer. One was, I was really interested in knowing how you have done the hybrid, if you could just give a use case or something. And the second question was, did you go from monoliths to microservices, or were you using a different API gateway before this and then you decided to move to Kong? And if you went directly from monolith to Kong, did you evaluate some other gateways before deciding on Kong, and why did you go with Kong?

Jason Walker: Wow, there was a lot in there, wow. Did we have a previous gateway, or a previous strategy and a previous set of tools? Yes, and they actually still live today. So when it comes down to the hybrid model, we actually keep them discrete today, where Kong is intended for, and is used today for, our cloud-based deployments. We still have that investment that hasn't fully lifecycled yet for the on-premise side; however, our decision to make use of Kong came with the idea that as long as we're able to consume things like a Kubernetes environment, then our deployment to the cloud can remain consistent even if it's on prem.

So there are some reasons to say that Kong and that overall architecture are something we can reuse on prem; there's some more work that would need to be done to make sure that the right hosting environment is available on prem. We probably have 12 to 18 months left on current investments, of which in six months we'll probably start to have a real conversation of, all right, as that lifecycle winds down, when do we actually start to migrate and make more use of what we're doing in the API platform, Kong being a component of that.

Speaker 3: Okay so the on-Prem one is not Kong and the cloud one is Kong is what you are saying?

Jason Walker: Correct.

Speaker 3: And just really quick the second question was why did you decide to go with Kong when there’s the other API gateways out there in the market?

Jason Walker: Oh sure. We went back and forth a little bit around, okay, why Kong, and what that process looked like as far as getting to the decision of let's try Kong. Literally, we just spun up some different environments making use of other open-source gateways, half a dozen of them, and we looked at the various components of other parts of our ecosystem, like our monitoring tool. When we looked at the monitoring and instrumentation, and then at the integrations that are available right out of the box, there was a Kong button.

It was like, okay, the thing we already have in place will monitor the thing that we're already looking at; do the others already have that very quick, easy, click-on-the-button-and-go? No. So we look at the ecosystem as a whole, and that's just a sample of that ecosystem as a whole, whether it's open source, at web scale, IoT scale, what have you. It became really clear that there was traction in the Kong community through things like the number of stars on GitHub; it actually looked like that was a deciding factor, one of: how many people are actually paying attention to this, how many people are contributing, how old is the last pull request?

We went through and treated it like, what if we just do open source? A secondary thing was, is there commercial support if we go down the open source path, or do we have to third-party it? Could we just mature and level into an enterprise software license agreement, what have you; are we dealing with the same relationship, and is that something that we would see ourselves being able to fit into? Not only for ourselves as we start the API platform, but for Cargill Inc. globally, to be able to scale it up. Does that help? Okay.

Speaker 3: Thank you, that was good.

Speaker 4: Testing oh good, so I have a two part question as well, name your favorite color no-

Jason Walker: Name my favorite … yea, like whoa.

Speaker 4: So, Capricorn: I didn't quite hear the first part. Is it the central API table that Cargill services all use to find and talk to each other, or is this the aspiration of Capricorn?

Jason Walker: We're down the path. We haven't made the statement, you must put your stuff into Capricorn. What we have done is we've gone to customers and asked what that "you must" statement feels like to them. Because we know that we're not necessarily at a maturity level to have the global distribution and resiliency in place. We have some warts every time we roll up the sleeves, and we want to make sure we can address those and build an environment where Capricorn is so frictionless and easy to use that people ask why we would do anything but this, and not make the directive top-down. We actually want, in an inner-source way, our internal API customers to gravitate towards this really simple, standard, easy way for us to deploy our APIs, and for it not to feel like it's being done to them. I don't know if I answered-

Speaker 4: That's exactly it, because some of our customers are in the same place; they're trying to build these central API services that the rest of the org would then move their stuff to and then consume. And the answer I'm looking for is, because you can't usually make them do that, how do you attract them to it, or where are you finding success in giving them a good reason to come? What's the carrot? You talked a little bit about carrots; are there other carrots that you are looking for that would make your case more compelling for Capricorn?

Jason Walker: Yeah, so there are other influences that those teams are interacting with. It may be a security team, or asset management, or software compliance, all of these different areas. One of the things in establishing these different platforms is having those customer interviews to say, hey, if we build these things and we can check off all these boxes, then when you go and talk to your security team you can say, I'm just using the API platform, and they go, this meeting's over, have a good day. That's part of the bureaucratic big company thing; we can reduce that friction.

The additional carrot, and how I think we're actually attracting some of the internal customers, is by simply asking, what do you want it to do? And they actually start to say a lot of the things that the security team would want them to say, like, well, we don't want anyone to just be able to deploy code. Okay, cool, let's not get into implementation details, but that's a great "what" statement. So if there's an amount of governance and some amount of control, but the application team owns that, then the platform team is going to have to require there be a named owner.

If the app team wants to own stuff, somebody has to own it; that's cool. If we're able to establish that balance, then, using that as an example, the way that we're able to automate and build in certain controls makes it to where the app team is actually getting what they asked for. The security team is probably actually getting what they're asking for, but now we've made it to where it's more of a carrot. Does that answer? Okay, cool. And I think, yep.

Speaker 5: You talked about the batteries-included local developer deployment. Do you have interdependencies between the services when you do that? So if you're working on service A and it's got five dependencies, does your docker-compose have all of those, and then how are you configuring the local code for that?

Jason Walker: Not yet. So what we have is an ability for people to pull in a config, to get to where they're able to interact and essentially be admin, and then be able to import and push in certain configs and policies. It's intended not so that you're going to be able to reproduce integration-type testing, but so you can get to a point where it's more for unit-test-type development work where you don't have those dependencies; and in fact, if you're building those dependencies into your unit tests, there's an opportunity... I would say you're doing it wrong, but that's kind of a jerk thing to say. I don't know if there was any …

Speaker 6: So you mentioned the monitoring, and how with Kong you get monitoring out of the box. Are you using Prometheus?

Jason Walker: I'm so sorry, with that train I didn't hear; I'm just going to come up to you.

Speaker 6: So you mentioned monitoring of APIs; are you using Prometheus, and if not, what's your experience with monitoring Kong?

Jason Walker: We actually consume the StatsD plugin, and as part of our deployment we prefix an API's metrics with the environment and name of that API, so as it progresses up to the main monitoring tool, teams are able to drill in and get specifics on latency and whatever other types of details; at that step there are like 15 or 20 different metrics that it publishes. Because we're making use of Kubernetes, some of the things we're able to do is make use of the horizontal pod autoscaler, HPA, and we keep our CPU threshold really low so we can scale up if there's a lot of traffic. We're less concerned about memory footprint because we can go as wide as we technically want.

I bring that up because all those plugins and all those metrics are just going to mean more in-memory consumption as things are filtering through, so we just expand out to account for the overhead. For things like logs, we just do basic NGINX parsing on the output.


Microservices and Service Mesh


Microservices and Service Mesh – East/West Traffic Control

The service mesh deployment architecture is quickly gaining popularity in the industry. In this strategy, remote procedure calls (RPCs) from one service to another inside of your infrastructure pass through two proxies, one co-located with the originating service, and one at the destination. The local proxy is able to perform a load-balancing role and make decisions about which remote service instance to communicate with, while the remote proxy is able to vet incoming traffic. This east/west traffic control between microservices is in contrast to a more traditional “bus” architecture. But what else can a service mesh be used for? And how will the definition evolve as the industry gains experience with service mesh deployments? Hear James Callahan, Kong solution architect, discuss the requirements that networking infrastructure should meet to qualify as a service mesh, possible alternative architectures for service meshes, environments where service meshes should operate, and future projects and tools in the service mesh space.

 

See More Kong Summit Talks

Sign up to receive updates and watch the presentations on the Kong Summit page. We’d love to see you in 2019!


Steps to Deploying Kong as a Service Mesh


In a previous post, we explained how the team at Kong thinks of the term “service mesh.” In this post, we’ll start digging into the workings of Kong deployed as a mesh. We’ll talk about a hypothetical example of the smallest possible deployment of a mesh, with two services talking to each other via two Kong instances – one local to each service.

When deployed in a mesh, we refer to local instances of Kong as “paired proxies.” This is because the service mesh, broken down into its smallest atomic parts, is made of individual network transactions between two proxies that are “aware” of each other. Although any two proxies in the mesh can communicate with each other in this way, when you think about it from the standpoint of a single transaction, there is no “mesh,” only a pair of proxies.

Kong’s service mesh deployment – and all service meshes – are made of proxies that form pairs at connection time. Via those paired connections, the proxies provide security, reliability and observability for distributed application architectures. In other words, all service meshes are really collections of paired proxies.

Because the whole mesh is made of paired proxies, our example will be simple. We start out with service A and service B, which exchange requests and responses over insecure, non-TLS (Transport Layer Security) network connections. We’ll assume we have root level access to the hosts running A and B, and that there are security, reliability and observability issues with those connections. Let’s start solving those problems with a pair of Kong proxies.

Symbols and Terminology

We’ll establish some symbols and terminology that we’ll use through the remainder of this and in many other documents about Kong’s service mesh deployment architectures:

Service and Kong Instances

  • A, B, etc. represent single instances of services (also known as “applications”)
    • A is a service that makes requests to B
    • B is a service that responds to requests from A. B does not initiate any requests, nor does it get requested by any service other than A.
    • Both services send and receive non-TLS traffic only – they cannot establish or terminate TLS connections. Both services communicate via HTTP.
  • K represents a Kong node that is not “affiliated” with any particular service
  • KA, KB, etc. represent Kong nodes that proxy all traffic coming in to and going out from A, B, etc.
    • Unlike Kong nodes deployed at the “edge” of your computing environment as API gateways, these KA, KB, etc. nodes are deployed local to the services they are proxying as node proxies or sidecars.

Connections Between Services and Proxies

Though the arrows in this section point only one way for simplicity, they represent both the request and the response traffic. This same convention of “arrow on one end only, for clarity” applies throughout this example.

  • -> represents a non-TLS local connection
  • ---> represents a non-TLS network connection
  • >>>> represents a TLS/HTTPS network connection
  • ===> represents a Kong mutual TLS (KmTLS) network connection between Kong nodes

Kong Configurations

  • Kong Routes are used to configure the “incoming” side of a Kong proxy. A Route must be associated with one Service.
  • Kong Services are used to configure the “outgoing” or upstream side of a Kong proxy. A Service is associated with one or more Routes.

Deployment and Configuration

Here is an architectural walkthrough of how to deploy Kong as a service mesh, which will highlight some of the advantages of this pattern and where they come from. For a full tutorial with code snippets, please see the Streams and Service Mesh documentation.

  1. Start with Service A making HTTP requests to Service B across the network: A--->B. This connection is unsecured and unobservable, and the traffic is traveling over a network, which is inherently unreliable.
  2. Deploy an instance of Kong K and its required datastore to start your Kong cluster. This Kong node can be configured as a Control Plane node only – we’ll be using it only for configuring Kong, not for proxying traffic.
  3. Connect to the Kong Admin API and configure a Service that sends traffic to B via HTTPS and a matching Route that accepts incoming requests for B via both HTTP and HTTPS; a hedged sketch of these calls appears after this list. (Note that if we started using this Service+Route immediately, we’d get an error because B cannot terminate TLS connections.)
  4. Deploy a Kong proxy KB local to B and configure origins, transparent and iptables. You now have a Kong proxy in front of B proxying all inbound requests and outbound response traffic, and you’ve made no changes to B.
    1. Although Service B cannot terminate TLS connections, the origins config KONG_ORIGINS="https://B:443=http://B:80" causes traffic that KB would normally send via HTTPS to port 443 to instead be sent via HTTP to port 80.
    2. Kong “blocks by default,” which means that given this configuration, B can’t initiate any requests because KB is now intercepting all outbound requests, and there is not yet a Kong Service+Route for KB to send such requests.
    3. We aren’t yet benefiting from the Kong proxy – while KB is in the request/response path, it isn’t doing anything helpful.
    4. The current situation looks like this: A--->KB->B.
      1. If we had a new Service X that sent HTTPS requests to B, we could also have X>>>KB->B.
  5. A is sending requests to B via unencrypted HTTP. An HTTPS connection between A and B with mTLS would make communication more secure and is one of the capabilities that Kong can provide when deployed as a mesh. A doesn’t initiate HTTPS connections, and we can’t make changes to A. To get the security improvement we seek, first deploy another Kong proxy KA, local to A, configured with origins, transparent and iptables. Let’s examine in detail what happens now:
    1. A initiates an HTTP connection to B as usual. The configuration of transparent and iptables causes KA to intercept this request.
      1. A->KA
    2. KA uses the Service+Route configured in step #3 of this example to accept the incoming request via HTTP, then send it across the network to B via HTTPS.
      1. A->KA>>>
    3. The configuration of transparent and iptables on KB causes KB to intercept the HTTPS request from KA rather than having the request reach B directly – which is necessary because unlike B, KB is able to terminate TLS connections.
      1. A->KA>>>KB
    4. KA and KB automatically upgrade the TLS connection to mutual TLS (mTLS) using Kong-generated certificates. We call mTLS with Kong certs a `KmTLS` connection. We now have a paired proxy.
      1. A->KA===>KB
    5. KB terminates the TLS connection and forwards the request to B via a local HTTP connection. The configuration of origins on KB causes KB to send traffic to B locally rather than across the network as KA did.
      1. A->KA===>KB->B
    6. The response flows “in reverse:” When B responds via HTTP, KB receives the response and sends it over the KmTLS connection to KA. KA terminates TLS and forwards the response to A via a local HTTP connection.
      1. A<-KA<===KB<-B
  6. In step #3, we configured the Route for B to accept both HTTP and HTTPS traffic. As long as there are applications that might call B over HTTP, we need to leave this configuration. However, if we can assert “starting now, all communications with B must be secured with TLS,” then we can PATCH the configuration of the Route for B to accept only HTTPS requests. In our example above, the only service calling B is A, and it is now doing so via TLS (which is initiated by KA) – thus, to ensure that our cross-network traffic is always encrypted, you can make this PATCH change now.
  7. Now that we’ve got a paired proxy between A and B, we can start applying Kong plugins that run only on KA (like authentication), only on KB (like rate limiting with a local counter), or on both (like Zipkin tracing).
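As promised in step 3, here is a hedged sketch of the configuration from steps 3 and 4 (hostnames, ports and environment values are illustrative; the Streams and Service Mesh documentation has the exact commands):

# Step 3: a Service that forwards to B over HTTPS, plus a Route accepting both HTTP and HTTPS
curl -X POST http://localhost:8001/services \
  -d "name=b" \
  -d "url=https://b.example.com:443"
curl -X POST http://localhost:8001/services/b/routes \
  -d "protocols[]=http" -d "protocols[]=https" \
  -d "hosts[]=b.example.com"

# Step 4: environment for the Kong node KB that is local to B
export KONG_ORIGINS="https://b.example.com:443=http://b.example.com:80"
export KONG_PROXY_LISTEN="0.0.0.0:8000 transparent, 0.0.0.0:8443 ssl transparent"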

Stay tuned for the next blog posts in our series, in which we’ll examine how to use paired proxies to solve observability, security and reliability problems in distributed application architectures.

The post Steps to Deploying Kong as a Service Mesh appeared first on KongHQ.

Kong: Kubernetes Ingress Controller

Is Your Stack Ready to Support Kubernetes at Scale?

Kubernetes is fundamentally changing container orchestration; is your stack ready to support it at scale? Watch the talk recording to learn how Kong’s Kubernetes Ingress Controller can power-drive your APIs and microservices on top of the Kubernetes platform. Hear Kong engineers walk through the process of setting up the Ingress controller and review its various features.

 

Sign Up for Summit Updates

We’ll be releasing blogs with all the summit recordings in the coming weeks. You can sign up for updates about new videos, and news about next year’s event on the summit page. We’d love to see you in 2019!

The post Kong: Kubernetes Ingress Controller appeared first on KongHQ.

A Tour of Kong’s Routing Capabilities

Routing Tricks and Tips

Kong is very easy to get up and running: start an instance, configure a service, configure a route pointing to the service, and off it goes routing requests, applying any plugins you enable along the way. But Kong can do a lot more than connecting clients to services via routes. In this talk from Kong Summit, you’ll learn about Kong’s various routing capabilities, including load balancing via upstreams and targets, different hashing modes, health checks and circuit breakers (and how to combine them), controlling routing via plugins and more.

 

See More Kong Summit Talks

Sign up to receive updates and watch the presentations on the Kong Summit page. We’d love to see you in 2019!

The post A Tour of Kong’s Routing Capabilities appeared first on KongHQ.

Kong Brain and Kong Immunity Released!

Four months ago, we declared that API Management is dead and announced our vision for a service control platform. Today, we’re taking a critical step towards fulfilling that vision with the launch of artificial intelligence and machine learning additions to the Kong Enterprise platform – Kong Brain and Kong Immunity.

As your organization increasingly adopts microservices, you will inevitably face new challenges in maintaining visibility, security and governance at scale. Kong Brain and Kong Immunity leverage artificial intelligence/machine learning (AI/ML) to improve visibility, security and efficiency across your entire development lifecycle.

Below, we’ll dive into the ways Kong Brain and Kong Immunity will help your organization by serving as cornerstones of the service control platform. Please note that we will be rolling out these features over an Early Access period as we continue to refine the products.

Kong Brain: Intelligent Automation

To improve governance, efficiency and visibility, Kong Brain integrates tightly with Kong Manager and the Kong Developer Portal to autonomously execute service management and deployment tasks. To accomplish this, Kong Brain uses a real-time collector to ingest your documentation and data flows, analyze changes and take action. To maintain control, you can allow designated users to create workflows and approve changes directly in Kong Manager.

Auto-Configure Your Kong Enterprise Deployment

With the pace of innovation ever-accelerating, shortening your development cycles becomes critical to your organization’s success. To accelerate your deployment process, Kong Brain uses your existing OpenAPI spec files to automate the configuration of your Kong Enterprise deployment. This removes the potential for human error and allows you to maintain strict standards for your service development teams, ensuring optimal performance.

Auto-Generate Documentation

To strengthen governance and boost efficiency, Kong Brain allows you to immediately standardize documentation across your entire Kong Enterprise deployment. To begin, deploy Kong Brain in front of the desired services to allow the collector to ingest your data. Once ingestion is complete, Kong Brain generates new documentation in OpenAPI 2/3 (fka Swagger) specification and allows you to push it into your Developer Portal.

Autonomous Documentation Updating

To avoid potential outages and issues caused by out-of-date documentation, Kong Brain automatically pushes the newest documentation to your Developer Portal. Once your team pushes a new or updated service to Kong Enterprise, Kong Brain will update the documentation globally and reflect it in the Developer Portal for internal and external developers with appropriate access.

Generate a Visual Map of Your Services

As your number of microservices increases, it becomes increasingly difficult to understand connections and dependencies across services, teams and environments. To improve visibility across your teams and enhance service discovery, Kong Brain creates a real-time visual map of your services across teams, regions, platforms and more. If you notice an issue or change that falls out of line with compliance, you can take immediate action. As Kong Brain continues to learn about your organization, it will increasingly unearth and flag potential redundancies, bottlenecks and issues across your Kong Enterprise deployment.

Kong Immunity: Adaptive Monitoring

Ensuring security and optimal service performance are critical to business success, but the shift to microservices exponentially increases the challenge due to increased east-west traffic. Kong Immunity addresses these challenges through machine-learning-fueled detection and analysis of service behavior anomalies in real-time. As users take action to address anomalies, Kong Immunity learns from those actions to refine detection and unearth more nuanced issues.

Create a Baseline for Healthy Traffic

Traffic patterns provide a window into the behavior and performance of services under different conditions. To understand your existing traffic patterns, Kong Immunity ingests data flowing through the Kong data plane to create a baseline for healthy traffic. As anomalies are detected and addressed through changes to the Kong Enterprise configuration, Kong Immunity continuously adapts this baseline. Over time, it learns to adapt without human direction.

Autonomously Identify Anomalies

To identify potential security issues, inefficiencies or performance bottlenecks, Kong Immunity flags traffic that deviates from the expected or desired patterns without disrupting services. Depending on your needs and goals, you can adjust the settings of Kong Immunity to recognize individual traffic events, patterns and other types of anomalous activity. For ultimate control, your designated users can allow a certain amount of deviation from expected norms before categorizing it as an anomaly.

Automatically Alert

How quickly you respond to a security event can mean the difference between a simple fix and catastrophic damage. As Kong Immunity detects anomalies in real-time, it automatically sends a notification alerting you to the issue. To avoid disruptions to your teams, you can designate specific users to receive alerts based on Role-Based Access Control (RBAC) within Kong Manager. You or your designated users can also adjust the timing and sensitivity of alerts to make them more or less frequent for individual services depending on their importance.

Analyze and Address Anomalies

To help you effectively remedy issues in your services, Kong Immunity allows you to review anomalies to understand the root cause and take action. Using Kong Vitals, you can investigate service behavior and address the issue with just a few clicks. As the usage of Kong Immunity increases, it learns your desired behavior and continuously refines its model to better detect or ignore anomalies.

 

The introduction of these new AI/ML capabilities is a major milestone in our journey to reinvent the way that enterprises broker their information. The Kong Brain and Kong Immunity releases help our customers streamline service development, management and security across their entire organization. Stay tuned for deeper dives into our new capabilities and use cases.

Interested in joining the Early Access program? It’s easy to get started. Join the webinar to learn more, or reach out to us today about Kong Brain and Kong Immunity.

 

The post Kong Brain and Kong Immunity Released! appeared first on KongHQ.


Creating Mock APIs with API Fortress and Kong

API Fortress is a continuous testing platform for APIs. We’ve been a friend to Kong since the beginning. In this guest blog post, we explain how our company uses Kong to facilitate the process of virtualizing APIs.

Why Mock APIs?

As more companies adopt microservice architectures and DevOps best practices, APIs play a bigger and bigger role in software development. The explosion of services and APIs makes control platforms like Kong imperative in the new API economy. Many companies want to innovate faster, and the push towards shorter and more effective sprints has made Continuous Integration and Continuous Delivery (CI/CD) a popular practice. This practice requires automated testing, but even companies that haven’t implemented instantaneous deployment patterns can still accelerate development by automating API tests.

API Fortress builds tools to automate API testing. Our experience working with customers and their API testing challenges has highlighted a prevalent requirement – mocking APIs. Companies we work with need to mock APIs for three common reasons:

  • To build tests early: Teams need to mock APIs so that they can start building tests for those APIs before the APIs are live.
  • To save money: Paid APIs like Google Maps and Salesforce can be expensive to make calls to. Mocking APIs allows teams to save money on these calls during development.
  • To detect bugs: Mocking APIs during bug hunting allows teams to home in on and diagnose which specific microservices might be causing a hard-to-solve bug.

There are many more reasons, but those are three of the most common.

Architecting an API Mocking Solution with Kong

The idea of mocking an API might seem simple, but in practice, the existing options that users had for mocking were limited. Open source libraries take a lot of effort to set up, and existing paid platforms can be overly expensive and sometimes limited. At API Fortress, we believed that a simple, robust option was needed. So we built one.

The experience of creating mocked APIs in our platform is fairly straightforward: users simply copy and paste the payloads into our GUI. But, we wanted to find an even easier route. We wanted our platform to automatically create mocks from live API calls. The goal was to find a method more powerful than recording manual calls that users make from their desktops. We’re a platform, after all, so we wanted to take a platform-level approach.

While we were architecting our solution, we realized that Kong could be a critical component to intercept real API calls and responses, which we could then use to generate mocks. API calls flow from clients through Kong to both API Fortress and their destinations.

With our plugin, Kong can capture snapshots of payloads that pass through it and send those snapshots to API Fortress. Our platform can then automatically turn those calls into mocked APIs with the click of a button. One click, and an entire organization has access to a virtualized API. Cool, right?

How Does API Mocking Work?

Users can set up API mocking in a few steps, which are described in the API Fortress documentation here.

The basic process is broken into three steps:

  1. First, we’re going to turn on the Kong proxy server.
  2. Next, we’re going to create a proxied endpoint.
  3. Finally, we’re going to push that proxied endpoint into API Fortress mocking.

Prerequisites

Our mocking feature is only available for the on-premises version, but we offer a free 30-day trial. If you’d like to follow along, simply sign up for an account, and our team will help you get a container (Docker/Kubernetes/OpenShift) to deploy. We’ll summarize the general steps to use mocking below, which may help you think about your own implementation.

A trial provides you with a few prerequisites for mocking with API Fortress, including:

  • An updated version of the API Fortress core/docker-compose.yml (if using Docker, for example), which includes the Kong section at the end
  • The initialization script, init_kong.sh
  • The start script, start_kong.sh

Starting Kong

The first step is to start Kong and its prerequisites, which include a database. You can use Cassandra or Postgres with Kong; in our example, we use Postgres, a preconfigured instance of which is provided as part of the API Fortress container.

Next, we need to initialize Kong using the previously mentioned script, init_kong.sh. Once Kong has finished initializing, we can start the Kong container itself. We do this using our provided docker-compose file. After the file runs, you want to check and make sure that the container is running.

We can verify that the proxy is up and running using a cURL command or the HTTP client of your choice. A positive response from this route indicates that the proxy server is up and running. Congratulations! You now have a live proxy server!

Our last step in the setup phase is creating an API Fortress API key. The gif below shows the process.

Proxying a Route:

Every time we create a mock route, we need to point it at the dashboard. The simplest way to do so is to add a wildcard entry to our DNS. If that is not possible, you can instead create individual DNS entries for each mock route. If API Fortress is running at apif.example.com, the wildcard entry of *.apif.example.com would point at the same IP address and allow every prepended domain to reach the same server.

In order to proxy an API route, we need to POST a request to the proxy server that contains the following information about the route:

  • name: The name of the API profile
  • upstream_url: The origin URL – the destination that requests are forwarded to after passing through the proxy
  • hosts: A list of hosts that will trigger this API profile, which is formatted as the URL(s) that will trigger the proxied response
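As a sketch, assuming API Fortress (and its bundled Kong Admin API) is reachable at apif.example.com:8001 and the API being proxied lives at origin.example.com (both hypothetical), the request might look like:

curl -X POST http://apif.example.com:8001/apis \
  -d "name=demoapi" \
  -d "upstream_url=https://origin.example.com" \
  -d "hosts=demoapi.apif.example.com"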

The result is a profile of a proxied API.

Finally, we need to test the proxied route itself. To do so, curl the URL that you previously defined as the host appended with port 8000, accompanied by any key-value pairs you need to submit and necessary routing information. Our expected response should match the response of the endpoint that we’re proxying, provided we’re passing the correct headers.
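Continuing with the hypothetical names above, that check might look like the following (substitute whatever headers and path your origin API expects):

curl -H "key: value" http://demoapi.apif.example.com:8000/some/endpoint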

Recording a Mock Endpoint:

As with creating the actual proxied endpoints, creating recorded mocks requires a modification of the DNS. Adding a wildcard entry for the mock server (*.demoapi-mocks.apif.example.com) will allow these requests to be properly routed once the mocks are recorded.

Activate Mock Recording:

The next step is to activate the fortress-http-log plugin for Kong by sending a POST request to apif.example.com:8001/apis/3389fcee-3ada-4ed6-957b-082085601111/plugins where apif.example.com is the URL of your API Fortress instance.

The request should pass a number of URL-encoded key-value pairs in the body, which are commonly known as post parameters. These values are largely static.

  • config.api_key: The API Key value created in step 1
  • config.secret: The API Secret value created in step 1
  • config.mock_domain: The mock domain you wish these routes to be appended to in API Fortress Mocking; it does not need to already exist
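Putting those together, the request might look like the following sketch (the name field is assumed to follow the usual Kong plugin convention; the key, secret and mock domain values are placeholders):

curl -X POST http://apif.example.com:8001/apis/3389fcee-3ada-4ed6-957b-082085601111/plugins \
  -d "name=fortress-http-log" \
  -d "config.api_key=YOUR_API_KEY" \
  -d "config.secret=YOUR_API_SECRET" \
  -d "config.mock_domain=demoapi-mocks"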

Once the request has successfully been sent, the fortress-http-log plugin for Kong will be active, and mock recording will be enabled!

Record an Endpoint:

You can then start recording mocks by calling the proxied API. As always, note that the proxy route in this call must be replaced with the proxy route that you created. The port, in this case, is 8000 (Kong’s proxy port) rather than 8001 (Kong’s admin API port).

Query the Recorded Mock API:

Finally, we can verify the new mock route in two primary ways. First, we should see it in the Mocking interface in API Fortress. Second, we can query the route directly and receive the same expected response to this call that we receive when polling either the actual or the proxied API.

Learn More About API Fortress and Kong

As you can see, Kong plays a central part in our Mock API recording. We’re proud to be part of the Kong community and are excited to see the project growing. 1.0 added some particularly interesting features, and we look forward to doing more together in the future.

For the full documentation on mocking with API Fortress and Kong, or to learn more about what we do, please visit us online. See you around the Kong community!

The post Creating Mock APIs with API Fortress and Kong appeared first on KongHQ.

Meet Kong at O’Reilly Software Architecture New York!

 

The Kong team is headed to New York next week, and we’d love to see you there! Kong is a proud sponsor of O’Reilly Software Architecture New York, an enterprise software architecture conference focused on professional training and networking for software architects.

We’re excited to talk with you about microservice, cloud-native, service mesh and serverless architectures. Visit us at booth #98 to learn more about Kong’s API platform. To schedule a one-on-one meeting with a member of our team to see how Kong works first-hand and discuss how it can work with your API strategy, please fill out this form and mention O’Reilly Software Architecture New York.

Kong Senior Solutions Engineer Aaron Miller will be hosting a Meet the Experts talk from 3:05-3:50 on Wednesday, February 6 in the Sponsor Pavilion. Come by Table B to talk about API management, how increasingly distributed systems create new challenges for managing communications across an organization’s architecture and how your organization can tackle these challenges.

We’ll also be giving out some fun giveaways and raffling off cool prizes. Be sure to stop by for a chance to win.

See you in New York!

The post Meet Kong at O’Reilly Software Architecture New York! appeared first on KongHQ.

TCP stream support in Kong

With Kong 1.0, users are now able to control TCP (Transmission Control Protocol) traffic. Learn about how we added TCP support, and how you can try it out.

TCP traffic powers email, file transfer, ssh, and many other common types of traffic that can’t be handled by a layer 7 proxy. Our expansion to layer 4 will enable you to connect even more of your services using Kong.

Why now?

When we were designing the ability to deploy Kong as a service mesh, we wanted to build a system that could connect all our users’ services, running on any infrastructure, written in any language, and architected in any pattern. Part of fulfilling this promise meant moving down the Open Systems Interconnection (OSI) stack to cover services that communicate using protocols other than HTTP. We wanted Kong to be able to handle all types of TCP traffic when deployed either as an API gateway or in a mesh pattern. With our sponsorship of OpenResty’s stream support, we were able to add user-facing support for TCP traffic to Kong.

How does it work?

Our new stream_listen configuration option allows users to select the IPs and ports where Kong’s stream mode should listen for TCP traffic. Kong automatically terminates Transport Layer Security (TLS) for incoming TLS traffic, and depending on service configuration you can have Kong encrypt outbound connections with TLS or not. Using Kong’s Server Name Indication (SNI) and certificate entities, users can now also configure their own TLS certificates. One of the major use cases for Kong’s TCP support is TLS termination.

TCP support allows Kong to terminate TLS connections

Kong’s extensibility with plugins is a big reason that users choose Kong over other API Gateways or service meshes. TCP traffic handling is still in its early days, and Kong 1.0 didn’t ship with any TCP-supporting plugins. But users are already able to write their own custom plugins that apply to TCP traffic. Writing a TCP plugin is a little different from writing a traditional Kong HTTP plugin. Instead of “rewrite”, “access”, “header” and “body” phases, TCP plugins will have a “preread” phase.

What are the Gotchas?

TCP support requires some customizations that Kong has made for OpenResty. If you’re compiling your own OpenResty from source, apply Kong’s openresty-patches to be able to use this new functionality. Kong’s packages and images already come with these patches applied.

Another thing to watch out for is that you should *not* try to use the Nginx SSL listener directive for stream ports. Kong handles TLS termination through its own mechanisms instead.

How do I try it?

  1. Start Kong with the stream_listen configuration option, selecting the port you want to listen on. You may choose to do this with either the config file or via environment variables.
  2. Configure a service with either a ‘tcp’ or ‘tls’ protocol field. If you select ‘tcp’, then traffic will be sent to your upstream as plain traffic. If you select ‘tls’, then Kong will encrypt the outgoing traffic with TLS. This should be familiar to Kong users, who already have the choice of ‘http’ vs ‘https’ for the protocol field.
  3. Configure a route for your service based on ‘sources’, ‘destinations’ and/or ‘snis’.

The following is a runnable example using Docker to terminate TLS traffic before sending it to tcpbin.

# Start up your normal Postgres database and run kong migrations
# See https://docs.konghq.com/install/docker/ for more information
docker network create kong-net
docker run -d --name kong-database \
--network=kong-net \
-e "POSTGRES_USER=kong" \
-e "POSTGRES_DB=kong" \
postgres:11
docker run --rm \
--network=kong-net \
-e "KONG_PG_HOST=kong-database" \
kong:1.0.2-alpine kong migrations bootstrap

# Start kong with stream_listen
docker run \
--network=kong-net \
-e "KONG_PG_HOST=kong-database" \
-e "KONG_PROXY_ACCESS_LOG=/dev/stdout" \
-e "KONG_ADMIN_ACCESS_LOG=/dev/stdout" \
-e "KONG_PROXY_ERROR_LOG=/dev/stderr" \
-e "KONG_ADMIN_ERROR_LOG=/dev/stderr" \
-e "KONG_ADMIN_LISTEN=0.0.0.0:8001" -p 8001:8001 \
-e "KONG_STREAM_LISTEN=0.0.0.0:5555" -p 5555:5555 \
kong:1.0.2-alpine

# Add a service and route
curl localhost:8001/services -d name=tcpbin-echo -d url=tcp://52.20.16.20:30000
curl localhost:8001/services/tcpbin-echo/routes -d protocols=tls -d snis=tlsbin-example

# Try out the service (it'll do a TLS handshake then echo your input)
openssl s_client -connect localhost:5555 -servername tlsbin-example

What’s next for TCP support?

Here are just a few of the improvements to TCP support that you can look forward to in future versions of Kong.

At the moment Kong will unconditionally try to terminate TLS if the traffic looks like a valid TLS ClientHello. We want to make this configurable on a per-route basis, which will include the ability to terminate or not based on SNI.

The next Kong release will include support for custom Nginx directives for the stream module. You can check out that work in the PR here.

In the future, the Kong Plugin Development Kit (PDK) will include more support for TCP data. Currently, to write many types of TCP plugins you need to delve into sparsely documented internal structures. We will slowly be exposing more fields and making it easier to write your own Kong TCP plugins. We’ll also be updating any appropriate Kong-supported plugins to work with TCP.

If you’re a plugin maintainer and want to add TCP support, or if you have any questions about TCP support in Kong, please get in touch with us through our community forum, Kong Nation.

The post TCP stream support in Kong appeared first on KongHQ.

Observability For Your Microservices Using Kong and Kubernetes

In the modern SaaS world, observability is key to running software reliability, managing risks and deriving business value out of the code that you’re shipping. To measure how your service is performing, you record Service Level Indicators (SLIs) or metrics, and alert whenever performance, correctness or availability is affected.

Very broadly, application monitoring falls into two categories: white box and black box monitoring. These terms mean exactly what they sound like.

Whitebox monitoring provides visibility into the internals of your applications. It can include things like thread stats, GC stats, internal performance counters and timing data. Whitebox monitoring usually requires instrumenting your application, meaning, it requires some modifications to your code. But, it can be extremely helpful in figuring out the root cause of an outage or a bottleneck. The (sharply dropping) cost of instrumenting your applications pays off very quickly with an increased understanding of how your application performs in a variety of scenarios and allows you to make reasonable trade-offs with concrete data.

Black box monitoring means treating the application as a black box, sending it various inputs and observing it from the outside to see how it responds. Because it doesn’t require instrumenting your code and can be implemented from outside your application, black box monitoring can give a simple picture of performance that can be standardized across multiple applications. When implemented in a microservice architecture, black box monitoring can give an operator a similar view of services as the services have of each other.

Both types of monitoring serve different purposes and it’s important to include both of them in your systems. In this blog, we outline how to implement black box monitoring, with the understanding that combining both types of monitoring will give you a complete picture of your application health. Kong allows users to easily implement black box monitoring because it sits between the consumers of a service and the service itself. This allows it to collect the same black box metrics for every service it sits in front of, providing uniformity and preventing repetition.

In this tutorial, we will explain how you can leverage the Prometheus monitoring stack in conjunction with Kong, to get black box metrics and observability for all of your services. We choose Prometheus, since we use it quite a bit, but this guide can be applied to other solutions like StatsD, Datadog, Graphite, InfluxDB etc. We will be deploying all of our components on Kubernetes. Buckle up!

Design

We will be setting up the following architecture as part of this guide.

Here, on the right, we have a few services running, which we would like to monitor. We also have Prometheus, which collects and indexes monitoring data, and Grafana, which graphs the monitoring data.

We’re going to deploy Kong as a Kubernetes Ingress Controller, meaning we’ll be configuring Kong using Kubernetes resources, and Kong will route all traffic inbound for our application from the outside world. (It is also possible to install Kong as an application and configure its routing rules directly via the Admin API.)

Prerequisites

You’ll need a few things before we start setting up our services:

  • Kubernetes cluster: You can use Minikube or a GKE cluster for the purpose of this tutorial. We run a Kubernetes cluster v 1.12.x.
  • Helm: We will be using Helm to install all of our components. Tiller should be installed on your k8s cluster and helm CLI should be available on your workstation. You can follow Helm’s quickstart guide to set up helm.

Once you have Kubernetes and Helm set up, you’re good to proceed.

Caution: Some settings in this guide are tweaked to keep this guide simple. These settings are not meant for Production usage.

Install Prometheus and Grafana

Prometheus

We will install Prometheus with a scrape interval of 10 seconds to have fine grained data points for all metrics. We’ll install both Prometheus and Grafana in a dedicated ‘monitoring’ namespace.

To install Prometheus, execute the following:
helm install --name prometheus stable/prometheus --namespace monitoring --values https://bit.ly/2RgzDtg --version 8.4.1

Grafana

Grafana is installed with the following values for the Helm chart (see comments for explanation):

persistence:
  enabled: true  # enable persistence using Persistent Volumes
datasources:
 datasources.yaml:
   apiVersion: 1
   datasources:  # configure Grafana to read metrics from Prometheus
   - name: Prometheus
     type: prometheus
     url: http://prometheus-server # Since Prometheus is deployed in
     access: proxy    # same namespace, this resolves
                      # to the Prometheus Server we installed previously
     isDefault: true  # The default data source is Prometheus

dashboardProviders:
  dashboardproviders.yaml:
    apiVersion: 1
    providers:
    - name: 'default' # Configure a dashboard provider file to
      orgId: 1        # put Kong dashboard into.
      folder: ''
      type: file
      disableDeletion: false
      editable: true
      options:
        path: /var/lib/grafana/dashboards/default
dashboards:
  default:
    kong-dash:
      gnetId: 7424  # Install the following Grafana dashboard in the
      revision: 1   # instance: https://grafana.com/dashboards/7424 
      datasource: Prometheus

To install Grafana, go ahead and execute the following:
helm install stable/grafana --name grafana --namespace monitoring --values https://bit.ly/2sgxIWK --version 1.22.1

Set Up Kong

Next, we will install Kong, if you don’t already have it installed in your Kubernetes cluster.
We chose to use the Kong Ingress Controller for this purpose since it allows us to configure Kong using Kubernetes itself. You can also choose to install Kong as an application and configure it using Kong’s Admin API.

helm install stable/kong --name kong --namespace kong --values https://bit.ly/2RgSRio --version 0.9.0

The helm chart values we use here are:

admin:
  useTLS: false     # Metrics for Prometheus are available
readinessProbe:     # on the Admin API. By default, Prometheus
  httpGet:          # scrapes are HTTP and not HTTPS.
    scheme: HTTP    # Admin API should be on TLS in production.
livenessProbe:
  httpGet:
    scheme: HTTP
ingressController:  # enable Kong as an Ingress controller
  enabled: true
podAnnotations:
  prometheus.io/scrape: "true" # Ask Prometheus to scrape the
  prometheus.io/port: "8444"   # Kong pods for metrics

It will take a few minutes to get all pods in the running state as images are pulled down and components start up.

Enable Prometheus Plugin in Kong

Next, once Kong is running, we will create a Custom Resource in Kubernetes to enable the Prometheus plugin in Kong. This configures Kong to collect metrics for all requests proxied via Kong and expose them to Prometheus.

Execute the following to enable the Prometheus plugin for all requests:

echo "apiVersion: configuration.konghq.com/v1
kind: KongPlugin
metadata:
  labels:
    global: \"true\"
  name: prometheus
plugin: prometheus
" | kubectl apply -f -

Set Up Port Forwards

Now, we will gain access to the components we just deployed. In a production environment, you would have a Kubernetes Service with external IP or load balancer, which would allow you to access Prometheus, Grafana and Kong. For demo purposes, we will set up port-forwarding using kubectl to get access. Please do not do this in production.

Open a new terminal and execute the following commands:

POD_NAME=$(kubectl get pods --namespace monitoring -l "app=prometheus,component=server" -o jsonpath="{.items[0].metadata.name}")
kubectl --namespace monitoring  port-forward $POD_NAME 9090 &

# You can access Prometheus in your browser at localhost:9090

POD_NAME=$(kubectl get pods --namespace monitoring -l "app=grafana" -o jsonpath="{.items[0].metadata.name}")
kubectl --namespace monitoring port-forward $POD_NAME 3000 &

# You can access Grafana in your browser at localhost:3000
# We will get around to getting admin credentials in just a minute.

POD_NAME=$(kubectl get pods --namespace kong -l "app=kong" -o jsonpath="{.items[0].metadata.name}")
kubectl --namespace kong port-forward $POD_NAME 8000 &

# Kong proxy port is now your localhost 8000 port
# We are using plain-text HTTP proxy for this purpose of
# demo.

Access Grafana Dashboard

To access Grafana, you need to get the password for the admin user.

Execute the following to read the password and take a note of it:

kubectl get secret --namespace monitoring grafana -o jsonpath="{.data.admin-password}" | base64 --decode ; echo 

Now, browse to http://localhost:3000 and fill in username as “admin” and password as what you just noted above. You should be logged in to Grafana and find that Kong’s Grafana Dashboard is sitting there waiting for you.

Set Up Services

Now that we have all the components for monitoring set up, we will spin up some services for demo purposes and set up Ingress routing for them.

Install Services

We will set up three services: billing, invoice, comments.
Execute the following to spin these services up:
kubectl apply -f https://gist.githubusercontent.com/hbagdi/2d8ef66fe22cb99e1514f410f992268d/raw/a03d789b70c46ccd0b99d9f1ed838dc21419fc33/multiple-services.yaml

Install Ingress for the Services

Next, once the services are up and running, we will create Ingress routing rules in Kubernetes. This will configure Kong to proxy traffic destined for these services correctly.

Execute the following:

echo "apiVersion: configuration.konghq.com/v1
kind: KongIngress
metadata:
  name: strip-path
route:
  strip_path: true
" | kubectl apply -f -

echo "apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  annotations:
    configuration.konghq.com: strip-path
  name: sample-ingresses
spec:
  rules:
  - http:
     paths:
     - path: /billing
       backend:
         serviceName: billing
         servicePort: 80
     - path: /comments
       backend:
         serviceName: comments
         servicePort: 80
     - path: /invoice
       backend:
         serviceName: invoice
         servicePort: 80" | kubectl apply -f -

Let’s Create Some Traffic

We’re done configuring our services and proxies. Time to see if our set up works or catches fire.
Execute the following in a new terminal:

while true;
do
  curl http://localhost:8000/billing/status/200
  curl http://localhost:8000/billing/status/501
  curl http://localhost:8000/invoice/status/201
  curl http://localhost:8000/invoice/status/404
  curl http://localhost:8000/comments/status/200
  curl http://localhost:8000/comments/status/200
  sleep 0.01
done

Since we have already enabled the Prometheus plugin in Kong to collect metrics for requests proxied via Kong, we should see metrics coming through in the Grafana dashboard.

You should be able to see metrics related to the traffic flowing through our services.
Try tweaking the above script to send different traffic patterns and see how the metrics change.
The upstream services are httpbin instances, meaning you can use a variety of endpoints to shape your traffic.
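For instance, httpbin's /delay and /status endpoints make it easy to simulate slow responses or specific error rates:

curl http://localhost:8000/billing/delay/2      # responds after a 2-second delay
curl http://localhost:8000/invoice/status/503   # returns an HTTP 503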

Metrics collected

Request Latencies of Various Services

Kong collects latency data of how long your services take to respond to requests.
One can use this data to alert the on-call engineer if the latency goes beyond a certain threshold. For example, let’s say you have an SLA that your APIs will respond with latency of less than 20 milliseconds for 95% of the requests. You could configure Prometheus to alert based on the following query:
histogram_quantile(0.95, sum(rate(kong_latency_bucket{type="request"}[1m])) by (le,service)) > 20

The query calculates the 95th percentile of the total request latency (or duration) for all of your services and alerts you if it is more than 20 milliseconds. The “type” label in this query is “request”, which tracks the latency added by Kong and the service. You can switch this to “upstream” to track latency added by the service only. Prometheus is really flexible and well documented, so we won’t go into details of setting up alerts here, but you’ll be able to find them in the Prometheus documentation.

Kong Proxy Latency

Kong also collects metrics about its performance. The following query is similar to the previous one but gives us insight into latency added by Kong:
histogram_quantile(0.90, sum(rate(kong_latency_bucket{type="kong"}[1m])) by (le,service)) > 2

Error Rates

Error Rates

Another important metric to track is the rate of errors and requests your services are serving. The timeseries kong_http_status collects HTTP status code metrics for each service.

This metric can help you track the rate of errors for each of your service:
sum(rate(kong_http_status{code=~"5[0-9]{2}"}[1m])) by (service)

You can also calculate the percentage of requests in any duration that are errors. Try to come up with a query to derive that result.

Please note that all HTTP status codes are indexed, meaning you could use the data to learn about your typical traffic pattern and identify problems. For example, a sudden rise in 404 response codes could be indicative of clients requesting an endpoint that was removed in a recent deploy.

Request Rate and Bandwidth

Request Rate

One can derive the total request rate for each of your services or across your Kubernetes cluster using the kong_http_status timeseries.
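For example, a query along the following lines (a sketch in the same style as the queries above) gives the per-service request rate over the last minute:
sum(rate(kong_http_status[1m])) by (service)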

Bandwidth

Another metric that Kong keeps track of is the amount of network bandwidth (kong_bandwidth) being consumed. This gives you an estimate of how request/response sizes correlate with other behaviors in your infrastructure.

With these metrics, you should be able to gain quite a bit of insight and implement strategies like the RED method (Requests, Errors and Durations) for monitoring.

And that’s it. You now have metrics for the services running inside your Kubernetes cluster and have much more visibility into your applications, which you gained using only configurations. Since you now have Kong set up in your Kubernetes cluster, you might want to check out its other plugin-enabled functionalities: authentication, logging, transformations, load balancing, circuit breaking and much more, which you can now easily use with very little additional setup.

If you have any questions, please reach out to Kong’s helpful community members via Kong Nation.

Happy Konging!

 

Thanks to Judith Malnick, Robert Paprocki and Marco Palladino for reviewing drafts of this post!

The post Observability For Your Microservices Using Kong and Kubernetes appeared first on KongHQ.

New Kong Settings for Service Mesh

This post is the last in a three-part series about deploying Kong as a service mesh. The first post discussed how we define the term “service mesh” here at Kong, and the second post explained the architectural pattern used to deploy Kong as a mesh. In this post, we will talk about the new features and configuration options we added to give Kong its mesh capabilities.

 

Deploying Kong as a service mesh requires Kong nodes to understand which service is local to them, establish mutual Transport Layer Security (mTLS) with each other, understand the original destination for requests and know on which end of a transaction plugins should run. We added a number of new configurations and features to Kong 1.0 and 0.15 that enable these capabilities. They include:

  • TCP proxying
  • origins
  • transparent and iptables
  • run_on
  • A Kong Certificate Authority (CA)

TCP Proxying

Kong can now proxy Transmission Control Protocol (TCP) traffic in addition to HTTP traffic. This capability was added to support mTLS. The TLS handshake involves creating a TCP streaming session and assigning a key to that session that encrypts traffic for as long as the session lasts. Although Kong uses mTLS for service mesh deployments, Kong users can now take advantage of the added TCP support, no matter where their Kong nodes are deployed.

origins

The new configuration setting origins tells a Kong node which instance of a service it should proxy outbound traffic to. In a mesh, it causes a given Kong node to route the traffic it receives to its local service instance rather than back over the network to a non-local service instance. Specifically, origins overrides outbound routing from Kong to one origin (any instance of a service) with a different origin (the local instance of the service).

iptables and transparent

Together, setting iptables and transparent causes a given service to communicate only with a local Kong proxy (instead of bypassing it) without changing the service itself. Each service could be modified so that it only communicated with its local proxy. However, we intended Kong to function as a service mesh without requiring any changes to the services it proxies. Using iptables and transparent makes that possible.

iptables

The user-space utility, iptables, allows system administrators to send traffic destined for one set of IPs or ports to different ones. When traffic comes into a host destined for a service, iptables will send that traffic through Kong instead.

transparent

A new, optional suffix for proxy_listen and stream_listen, transparent lets Nginx (which Kong is built on) read the original destinations and ports that iptables has changed and answer those requests. This enables Kong to listen to and respond from the IP addresses and ports you configure in iptables.

An example:

  • iptables on a host/vm/pod running Kong is configured with a rule like: “send everything that was addressed to 1.2.3.0/24 on any port to the program listening on 9.8.7.6:80 instead”
  • For Kong to accept the HTTP traffic sent to 9.8.7.6:80, kong.conf includes proxy_listen = 9.8.7.6:80 transparent
  • OR if you want Kong to accept TCP traffic sent to 9.8.7.6:80, kong.conf would need to include stream_listen = 9.8.7.6:80 transparent
  • Kong/nginx will fail to start if both stream_listen and proxy_listen contain the same ip+port
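For illustration only, the rule described in this example could be expressed as a NAT redirect along these lines (Kong's documentation describes the exact rules it expects, which may use a different mechanism such as TPROXY):

iptables -t nat -A PREROUTING -p tcp -d 1.2.3.0/24 -j DNAT --to-destination 9.8.7.6:80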

run_on

The run_on property controls the Kong instance that a plugin executes on when Kong is running as a service mesh. Without run_on, configured plugins always run on both nodes in a Kong-to-Kong connection, which can produce undesirable results. For example, consider a Kong-to-Kong connection with a rate-limiting plugin enabled. The first Kong node would increment a rate-limiting counter, and then the second Kong node would also increment the counter, resulting in one request being counted twice. The run_on setting requires a mutual TLS connection between the Kong nodes, with a certificate issued by Kong’s built-in certificate authority.
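As a sketch, enabling a plugin so that it runs only on the second Kong node in the connection might look like the following (the Service name and limit are hypothetical):

curl -X POST http://localhost:8001/services/b/plugins \
  -d "name=rate-limiting" \
  -d "run_on=second" \
  -d "config.minute=100"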

Kong Certificate Authority (CA)

Kong-generated certificates are required for a mesh deployment to work correctly because of an ALPN (Application Layer Protocol Negotiation) tweak that the Kong CA includes in client certificates. The tweak enables Kong nodes to recognize each other as part of the same mesh and apply mesh-specific settings. Although you can deploy Kong nodes in a mesh pattern and establish mTLS connections between them using certificates from an outside CA, this configuration won’t function fully because mesh settings like run_on won’t work without Kong-generated certificates.

The Kong CA is generated and Kong-to-Kong mTLS connections (KmTLS) are established in the following steps:

  1. The first Kong node in your cluster starts up, checks the datastore, notices there is no Certificate Authority (CA), creates a CA and stores it in the datastore. This CA stays in the datastore and is valid indefinitely.
  2. That “first” Kong node and all future Kong nodes that start in that cluster get the CA from the datastore, use it to generate a certificate, which the nodes store locally. Each Kong node in the cluster now has a certificate signed by the same CA.
  3. When a given Kong node makes a TLS connection, it utilizes its certificate. When that connection is made to a second Kong node in the cluster, the connection is automatically upgraded to KmTLS.
  4. Now that the connection is KmTLS, the Kong nodes can correctly run plugins as directed by the run_on settings at the configured points in the connection (on the first Kong node, on the second or on both).

Note that if a Kong node connects to another Kong node without TLS (e.g. via HTTP or TCP), there is no opportunity for a TLS connection to be automatically upgraded to mTLS. Therefore, unencrypted connections between Kong nodes cannot form fully functional service meshes. When Kong is run as a service mesh, TLS is not merely available – it is required.

Current Limitations

We’re excited to continue developing Kong’s service mesh capabilities. For now, here are a few things to make sure of and look out for when deploying Kong in a service mesh pattern:

  • Kong nodes in the cluster must be version 1.0.0 or above
  • Kong nodes must all belong to a single Kong cluster (Kong does not yet support KmTLS when connecting across multiple Kong clusters)
  • Kong’s Service entities must be configured with TLS or HTTPS  (not TCP or HTTP).
  • Some plugin configurations could cause the first Kong node that requests pass through to strip consumer information from the request before sending it to the second Kong node. Plugins that run on the second Kong node could have trouble with these requests if they need to know about the consumer.

Deploy Kong as a Service Mesh

We’re excited for users to start using Kong as a service mesh. For full details on how to get started, see the documentation about streams and service mesh, or Kubernetes and service mesh.

 

The post New Kong Settings for Service Mesh appeared first on KongHQ.

Join Kong at AWS Summit Santa Clara!

 

The Kong team will be in Santa Clara next week for AWS Summit Santa Clara. Kong is a proud sponsor of this free event that brings together a wide range of attendees across company size, industry and scope, and we hope to see you there!

To schedule a one-on-one meeting with a member of our team to see how Kong works first-hand and discuss how it can work with your API strategy, please fill out this form and mention AWS Summit Santa Clara.

We’re excited to talk with you about microservice, cloud-native, service mesh and serverless architectures. Visit us at Booth #134 to learn about Kong’s API Platform. We’ll also raffle off some cool prizes. See you in Santa Clara!

The post Join Kong at AWS Summit Santa Clara! appeared first on KongHQ.

Kong 1.1 Released!

Kong 1.1 Released with Declarative Configuration and DB-less Mode!

Today, we’re thrilled to announce the release of Kong 1.1! Building on the release of support for service mesh in Kong 1.0 last September, our engineering teams and community have been hard at work on this latest iteration of our open source offering. With new Declarative Config and DB-less deployment capabilities, as well as numerous small improvements and fixes, Kong 1.1 is one of our most exciting releases to date!

Below, we’ll highlight Kong 1.1’s new features, improvements and bug fixes, and what they mean for our users. Be sure to check back over the next few weeks as we follow up with more in-depth posts detailing these features.

Declarative Config

Kong 1.1 enables declarative config for more dynamic traffic management. By enabling declarative config, Kong users will be able to specify the desired system state through a YAML or JSON file instead of a sequence of API calls. Using declarative config provides several key benefits to reduce complexity, increase automation and enhance system performance.

Key Benefits:

  • Create A Single Source of Truth
    • Consolidate your entire Kong configuration in a single YAML or JSON file to reduce the possibility of errors and simplify management.
  • Reduce Complexity
    • Specify the desired state to create fewer moving pieces and eliminate intermediate inconsistent states.
  • Automate Deployment Pipeline
    • Remove manual deployment tasks through integration with tools like Jenkins. Enable Kong to manage traffic with consistent configurations via your deployment pipeline.
  • Expand Deployment Options
    • Increase infrastructure flexibility with declarative config to enable more deployment options for Kong to support K8S, service mesh and DB-less deployments.

Key Functions:

  • New command: kong config init to generate a template kong.yml file to get you started
  • New command: kong config parse kong.yml to verify the syntax of the kong.yml file before using it
  • New Admin API endpoint: /config to replace the entire configuration of Kong entities with the contents of a new declarative config file
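A minimal sketch of that workflow might look like the following (the file name is illustrative, and the exact request format for /config is an assumption – see the Admin API reference):

kong config init                    # generate a template kong.yml
kong config parse kong.yml          # verify the syntax of the file
curl -X POST http://localhost:8001/config -F config=@kong.yml   # assumed form for loading the file via /config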

DB-less Mode

To support the use of declarative config, further reduce complexity and add flexibility for deployment, Kong 1.1 ships with the ability to enable DB-less mode. Users that turn on DB-less mode can manage their entire configuration in-memory to reduce modes of failure and maximize system resiliency.

 

Key Benefits:

  • Increase Resource Efficiency
    • Eliminate DB overhead and dependencies by utilizing a single YAML or JSON file to store your Kong configurations.
  • Simplify Management
    • Remove the need for databases to minimize complexity for use case configurations that can be held in memory.
  • Quick Start
    • Streamline the process of getting Kong configured by leveraging a single file as opposed to DBs.
  • Increase End-to-end Automation
    • Easily manage a fleet by integrating with CI/CD tools such as Ansible to push out configurations to Kong without any need for bespoke database changes.

Key Functions:

  • New option in kong.conf: database=off to start Kong without a database
  • New option in kong.conf: declarative_config=kong.yml to load a YAML file as the configuration.
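
As a minimal sketch, assuming a declarative file at /etc/kong/kong.yml (the path is illustrative), the two properties can either be set in kong.conf or passed as the KONG_-prefixed environment variables that Kong derives from kong.conf keys:

    # In kong.conf:
    #   database = off
    #   declarative_config = /etc/kong/kong.yml

    # Or, equivalently, via environment variables when starting Kong:
    KONG_DATABASE=off \
    KONG_DECLARATIVE_CONFIG=/etc/kong/kong.yml \
    kong start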

Additional New Features in Kong 1.1

Entities Tags

  • Be able to segment and track ownership of Kong entities
    • Filter Services and other entities more precisely on Admin API list endpoints by taking advantage of Kong’s support for tags across all core entities (see the example below).
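
For example, tags can be attached when an entity is created and then used to narrow list requests. The snippet below is a hedged illustration: the service name, upstream URL and tag values are made up, and the exact form-encoding and query parameters should be checked against the 1.1 Admin API reference:

    # Create a Service carrying two tags (all values are illustrative)
    curl -X POST http://localhost:8001/services \
      --data 'name=billing-api' \
      --data 'url=http://billing.internal:8080' \
      --data 'tags[]=team-payments' \
      --data 'tags[]=production'

    # List only the Services carrying a given tag
    curl 'http://localhost:8001/services?tags=team-payments'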

Automatic Bulk DB Import

  • Streamline the configuration process by automatically upserting all entities contained in a YAML file.

Automatic Transparent Proxying

  • Minimize disruptions by defaulting to (optional) transparent proxying for all Routes without an assigned Service.

Improved Orchestration for Kubernetes Sidecar Injection

  • Smooth your K8s deployment experience with the Kubernetes Sidecar Injection plugin (now bundled with Kong). Ease graceful termination when using orchestration tools with the new --wait option in kong quit, as sketched below.
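
As a rough sketch of how the new flag might be used during a rolling update (the 15-second value is arbitrary, and the flag’s exact semantics should be confirmed in the Kong 1.1 CLI reference):

    # Wait before initiating a graceful shutdown, giving load balancers
    # and in-flight requests time to drain
    kong quit --wait 15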

Other Improvements and Bug Fixes

Kong 1.1 also contains improvements to stream handling, support for ACL authenticated groups, and several bug fixes pointed out by our users. For the full details, check out the Change Log here!

As always, the documentation for Kong 1.1 is available online here. Additionally, as mentioned above, we will be discussing the key features in 1.1 in subsequent blog posts and on community calls, so stay tuned!

Thank you to our community of users, contributors, and core maintainers for your continuing support of Kong’s open source platform. Please give Kong 1.1 a try, and be sure to let us know what you think!

Kong Nation

As usual, feel free to ask any questions on Kong Nation, our community forum. Learning from your feedback will allow us to better understand mission-critical use cases and keep improving Kong.

Happy Konging!

The post Kong 1.1 Released! appeared first on KongHQ.


Kong Raises $43 Million to Connect the Next Era of Software

Today, we have some exciting news! We’re announcing our $43 million Series C round, led by Index Ventures and our board member Mike Volpi, with participation from existing investors Andreessen Horowitz (Martin Casado) and CRV (Devdutt Yellurkar), as well as new strategic investors GGV Capital and World Innovation Lab (WiL).

We believe that the way software is created is undergoing a revolution as large organizations transition to brand-new, distributed software architectures that span multi-cloud and hybrid environments. Services and APIs act as the glue that connects these distributed architectures by enabling information to flow in and out across all environments. By 2030, there will be over 500B connected devices, and all of them will rely on services and APIs.

This funding is the culmination of many years of work that resulted in a record-breaking 2018. Last year we accomplished things that we never could have imagined back in 2009 when we were working out of a garage in Milan. We hit more than 54 million open-source Kong downloads and reached over 100 Kong Enterprise customers including Yahoo! Japan, WeWork, SoulCycle, and Ferrari. If you had told Marco and me back then that one of the most iconic Italian companies in history would be our customer, we probably would have called you crazy. Today, they’re one of the many incredible companies worldwide using Kong to usher in the next era of software.

At Kong, our mission is to build the nervous system for the cloud by intelligently brokering information across all types of services. The Kong Service Control Platform is designed to seamlessly connect, govern and run APIs and microservices at scale, across multi-cloud, hybrid and on-prem workloads.

This new cash infusion will allow us to not only enrich the industry’s first service control platform for the most demanding enterprise environments but also to double down on our open-source products to support our growing community of developers and to invest in international expansion, including Asia Pacific and Europe.

We’re very proud of what our incredible team and open-source community have achieved to date, and we look forward to building a long-lasting, iconic company. We are still in the early days of the long journey of the cloud era, and Kong is just leaving the first of many footsteps.

Onwards!

#KongStrong

The post Kong Raises $43 Million to Connect the Next Era of Software appeared first on KongHQ.

Kong at AWS Global Summits: Coming Soon to a City Near You!

Anaheim, Amsterdam, Australia, oh my! The Kong team is packing our bags and hitting the road for AWS Global Summits around the world over the next few months. At each free event, the cloud computing community gathers to discuss AWS core topics and software trends. Visit our booth for riveting conversations on microservices, API strategy, cloud native and other emerging software architectures. We’ll also have some fun giveaways on hand.

To schedule a one-on-one meeting with a member of our team for a walk-through of how Kong works and a discussion of how it can work with your API strategy, please fill out this form and mention your local AWS Summit.

You can find us at the following Summits:

P.S. These events are free to attend. Register today!

The post Kong at AWS Global Summits: Coming Soon to a City Near You! appeared first on KongHQ.

Kong Summit 2019: Call for Speakers Now Open!

Save the date! Kong Summit will be returning for its second year this coming October 2-3 at the Hilton Union Square in San Francisco.

Focused on building the next era of software, Kong Summit 2019 will provide a fun and educational forum for discovering, learning and trying the technologies, practical solutions and techniques needed to create modern service architectures. The event will bring together members of the Kong user community, industry ecosystem contributors and industry thought leaders. Attendees will learn new strategies, explore novel ways of approaching technical and organizational problems, get hands-on experience with Kong’s latest and greatest features, learn how to use popular open source ecosystem projects, and more.

Our call for speakers is now open through June 16. We’re looking for community experts to share Kong use cases, best practices and lessons learned. Learn more and apply to be a speaker at https://konghq.com/kong-summit/call-for-speakers/! Accepted speakers will receive a complimentary VIP ticket to the Summit.

More than 200 people from the Kong community and beyond gathered for last year’s inaugural event. This year, we’re making it even bigger and better! Stay tuned for more on Kong Summit 2019, including early bird registration and agenda details.

 

The post Kong Summit 2019: Call for Speakers Now Open! appeared first on KongHQ.

Kong Gives Back: Volunteering at Project Open Hand

When it came time to decide what to do for our first volunteer event of 2019, we Kongers were looking to get hands-on in serving some of the most at-risk members of our community. Project Open Hand, whose mission is to improve health outcomes and quality of life by providing nutritious meals to the sick and vulnerable, provided Kong just the opportunity to do so.

 

Kong Gives Back - Project Open Hand

 

Upon arrival, we were greeted by our engaged and adept host, Alicia Orozco, who briefed us on the rich history of the non-profit organization. Specifically, Alicia educated us on the communities that the organization primarily serves – those living with HIV/AIDS or chronic illness, seniors and adults with disabilities. More importantly, Project Open Hand works to provide not just meals but nutritious meals with love: in addition to food, the non-profit provides nutrition education and counseling to empower these communities and improve their quality of life.

After our history lesson, it was time to get to work! We had the pleasure of volunteering alongside staff and other volunteers in duties that ranged from chopping onions and celery to preparing burger patties and packaging meals. Project Open Hand provided an impeccably clean, safe and fun environment for us to work, and the time flew by. At the end of the day, Alicia debriefed with us and shared what we had accomplished:

  • 4,000 servings of onions
  • 250 servings of celery
  • 300 servings of cilantro
  • 800 servings of burger patties

This totaled 5,350 servings!

 

Kong Gives Back - Project Open Hand 2

Kong Gives Back - Project Open Hand 3

 

While Project Open Hand exemplifies all of Kong’s core values, I think they best serve as an example of being an Explorer. From the outset, the organization has challenged the status quo of what is possible in terms of providing access to nutrition, truly showing how these underserved communities benefit from the healing power of nutritious food.

We look forward to our next volunteer event and the chance to engage with our community again!

To learn more about Kong’s core values, culture and the impact we make, visit our Careers page at https://konghq.com/careers/.

The post Kong Gives Back: Volunteering at Project Open Hand appeared first on KongHQ.

Kong Community Documentation: Mentoring Technical Writers

Calling all technical writers! Kong is applying to participate in this year’s Season of Docs and would love to work alongside you. The goal is to foster collaboration between open source projects and technical writers.

Season of Docs is a unique program that pairs technical writers with our open source mentors to introduce the technical writer to our open source community and provide guidance while the writer works on a real-world open source project.

Our open source documentation, which can be found in this repository, will be undergoing some major refactoring in the upcoming months. We would like to work with technical writers during this process to improve the existing guides and make Kong more accessible. To that end, the technical writer(s) will work alongside our mentors to find hidden assumptions in the documentation and perform substantive editing to address them. Technical writers will also play a key role in expanding our existing style guide and technical writing guide throughout the program, helping to solidify the structure of all future Kong documentation.

Learn more about Kong’s community here and check back on April 30th to see the finalized list of this year’s Season of Docs participants!

Please contact Kevin Chen at kevin.chen@konghq.com if you have any questions.

The post Kong Community Documentation: Mentoring Technical Writers appeared first on KongHQ.
